Archive for the ‘Benchmarking’ Category

Key issues in developing rapid assessment methods of ecosystem state

Friday, February 25th, 2011

David K. Rowe and his colleagues from the National Institute of Water and Atmospheric Research of New Zealand have developed a rapid method to score stream reaches. In presenting their method, they go through several of the key steps (and difficulties) in developing such “rapid assessment methods”. We summarize these below:

  • Scores are often given in reference to a desirable state (or “best of a kind”). This helps ensure that all assessors share the same upper bound in their assessments. Selecting reference sites is, however, a tricky task, particularly when assessments focus not on an ecosystem as a whole but on separate “functions”. Rowe et al. (2009) raise the issue of artificial streams performing certain functions better than reference sites. They argue that this “over-performance” should be ignored: the artificial stream should be given the maximum score – that of the reference – for the particular function (the first sketch after this list illustrates such capping).
  • The selection of variables is a key step in method development. It requires an understanding of the system being assessed, or of the main drivers of the system’s state (i.e. a conceptual model of the system). Such conceptual models can be tested against field data. As an example, Delaware’s DERAP method was built through multiple regression analysis of independent evaluations of wetland states against a set of stressor variables (on 250 different wetlands!); a regression of this kind is sketched after this list.
  • Developing algorithms for combining several variables into single scores is where many methods fail to convince (see e.g. McCarthy et al. 2004). Algorithms can be tested against results from established methods or against best professional judgement, for example on field sites or on agreed-upon reference sites. Alternatively, statistical models can be used to weight the variables (as in the development of DERAP). The first sketch after this list shows one simple way of weighting and combining scores.
  • Redundancy is unavoidable because of the interdependence of the many processes being assessed. Moreover, redundancy contributes to robustness in the face of user/assessor subjectivity. As an example, Florida’s UMAM method relies on best professional judgement but provides detailed guidelines through a list of highly redundant criteria. The robustness of a method to user bias can be assessed through sensitivity analysis (see the last sketch after this list).
  • Once a method has been proposed, it must be revised and improved through testing and feedback from widespread use.
  • The team who developed California’s rapid assessment method (CRAM) also made recommendations concerning model development (available here). They offer a more formalized step-by-step process that includes several of the points raised by Rowe and his co-authors.
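
The points on reference-based capping and on combining variables can be illustrated with a minimal Python sketch. Everything here is hypothetical: the function names, reference values, weights, and the simple weighted-average aggregation are invented for illustration and are not part of Rowe et al.’s method.

```python
# Hypothetical example: cap each function score at its reference value,
# then combine the capped scores into a single site score.
reference = {"shade": 10, "bank_stability": 8, "invertebrate_habitat": 12}
observed  = {"shade": 11, "bank_stability": 5, "invertebrate_habitat": 9}
weights   = {"shade": 0.3, "bank_stability": 0.3, "invertebrate_habitat": 0.4}

# "Over-performance" is ignored: a site never scores above the reference.
capped = {f: min(observed[f], reference[f]) for f in reference}

# One of many possible aggregation rules: a weighted average of ratios to reference.
site_score = sum(weights[f] * capped[f] / reference[f] for f in reference)
print(capped)                 # shade is capped at the reference value of 10
print(round(site_score, 2))   # ~0.79 on a 0-1 scale
```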
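
The regression-based calibration mentioned for DERAP can be sketched along the following lines; the stressor indicators and the synthetic data are assumptions made purely for the example.

```python
# Hypothetical sketch of a DERAP-style calibration: regress independent
# condition evaluations on stressor indicators to obtain per-stressor weights.
import numpy as np

rng = np.random.default_rng(0)
n_sites = 250                                    # DERAP was calibrated on ~250 wetlands
stressors = rng.random((n_sites, 4))             # e.g. ditching, fill, invasives, buffer loss (invented)
true_weights = np.array([-0.5, -0.3, -0.2, -0.4])
condition = 1.0 + stressors @ true_weights + rng.normal(0, 0.05, n_sites)

# Ordinary least squares: condition ≈ intercept + stressors @ weights
X = np.column_stack([np.ones(n_sites), stressors])
coef, *_ = np.linalg.lstsq(X, condition, rcond=None)
print("intercept:", round(coef[0], 2))
print("fitted stressor weights:", np.round(coef[1:], 2))
```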
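
Finally, the sensitivity analysis suggested for checking robustness to assessor bias could look something like this; the criterion scores, weights and the assumed level of assessor disagreement are again invented.

```python
# Hypothetical sensitivity check: how much does the combined score move when
# the individual criterion scores vary by a plausible assessor-to-assessor error?
import numpy as np

rng = np.random.default_rng(1)
weights = np.array([0.3, 0.3, 0.4])      # invented criterion weights
scores  = np.array([0.8, 0.6, 0.7])      # one assessor's criterion scores (0-1 scale)

n_trials = 10_000
noise = rng.normal(0, 0.1, size=(n_trials, 3))   # assumed +/- 0.1 assessor disagreement
perturbed = np.clip(scores + noise, 0.0, 1.0)
combined = perturbed @ weights

print("baseline combined score:", scores @ weights)
print("spread under perturbation (std):", round(combined.std(), 3))
```

A method whose combined score barely moves under such perturbations is more robust to assessor subjectivity.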

The Tolstoy effect

Sunday, December 12th, 2010

In Anna Karenina, Tolstoy reminds us that “happy families are all alike” while “every unhappy family is unhappy in its own way”.

Emilie Stander and Joan Ehrenfeld concluded that the same was true for wetlands. They studied the functioning of wetlands used as (supposedly “pristine”) reference wetlands for wetland mitigation in New Jersey (USA) and found that, in this heavily urbanized setting, even the reference wetlands were “unhappy”… (pdf here)

This raises issues for the use of wetland typologies in assessing wetland state (as in the context of wetland mitigation in the USA).

Identifying reference wetlands on the basis of standard structural indicators is misleading when wetlands lie in heavily modified landscapes and watersheds. The authors suggest instead using multi-year data on functioning to build appropriate typologies of wetland functioning.

A further step would be to use “theoretical” references for assessing wetland state, but this would most likely make in-the-field assessment more difficult.