Posts Tagged ‘biodiversity offsets’

Biodiversity offsets: the most promising nature-based opportunity for UK businesses?!

Monday, July 9th, 2012

DEFRA (the UK government department responsible for policy and regulations on the environment, food and rural affairs) recently published a report on opportunities for UK business that value and/or protect nature’s services. What does it say?

Well, the authors identified 12 promising opportunities for UK business to help protect and value nature. First among them is the development of biodiversity offsets and habitat banking. The report suggests moving these from their current voluntary status to a mandatory regime.

Rank 1=: BIODIVERSITY OFFSETS, INCLUDING THROUGH CONSERVATION BANKING – an opportunity to stimulate creation of new companies and new business models for existing companies to provide biodiversity offsets in the UK, by moving from the current voluntary approach to a (soft regulation) mandatory regime.

The report mentions “soft regulation”, which it describes (section 2.1, 1 of the final report) as:

regulation or unambiguous policy interpretation by government that clarifies that biodiversity offsets are necessary in defined circumstances, and that establishes a framework for implementation to a particular standard, including through conservation banks.

The report also mentions the need for:

support for a brokering system which can provide national, regional and local choice against desired spatial delivery, and can provide transparency and ease of purchase of credits and management of contracts with those providing offset sites, all of which would reduce risk

To learn more about the business side of the report’s conclusions, dive in and read Attachment 1.

Grasslands: are they all equivalent?

Although the report’s overall outlook is positive, that doesn’t mean creating a market for biodiversity offsets will be straightforward. There are still many technical and institutional difficulties to overcome:

  • how will the avoidance and reduction steps of the mitigation hierarchy be enforced?
  • how are “credits” constructed?
  • how will their prices be set?
  • how are liabilities defined?
  • who is in charge of controls and sanctions?
  • (…)
These questions are not new, but they deserve detailed thinking and transparent debate.

    The metrics debate: habitat for middle-aged great blue herons who don’t like shrimp?

    Sunday, April 22nd, 2012

    Whenever discussions on biodiversity offsets get technical, they either focus on legal and cost issues (if you’re paying) or on their underlying ecological reality (if you’re the regulator). Concerning the latter, the question is how you actually assess equivalence between what is lost on the one hand and what is gained through the offset on the other. So it’s all about what and how you measure to assess those gains and losses – hence the metrics debate.

    In his blog, Morgan Robertson presents this issue as a “paradox”.

    I’ve been thinking about this for a long time — in fact it seems like everything I’ve ever written boils down to “defining environmental commodities is HARD because ecology is complex and commodities need to be abstract”.

    The paradox is that the metrics must strike a difficult balance between their ecological precision and their ability to foster exchanges on a market for offsets.

    Too much precision (e.g. the habitat for middle-aged great blue herons who don’t like shrimp, of Robertson fame since 2004) might better reflect the complexities, or rather the ecological uniqueness, of each location (and time) being assessed. It would however make any market completely useless… At the other extreme, a metric that hardly captures these complexities (try wetland area) would make credits highly fungible, but at the cost of glossing over the ecology.
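    To make the trade-off concrete, here is a minimal sketch in Python (the habitat categories, areas and condition scores are entirely made up, and neither metric corresponds to an actual scheme) contrasting a coarse metric, under which credits from any site are interchangeable, with a finer one that only accepts in-kind, condition-weighted exchanges.

```python
# Hypothetical impact and offset sites (areas in hectares, condition on a 0-1 scale).
impact = {"habitat": "floodplain meadow", "area_ha": 10.0, "condition": 0.9}
offset = {"habitat": "reed bed", "area_ha": 14.0, "condition": 0.7}

def coarse_units(site):
    """Coarse metric: area only -- any habitat is exchangeable with any other."""
    return site["area_ha"]

def finer_units(site):
    """Finer metric: area weighted by condition, and only exchangeable in kind."""
    return site["area_ha"] * site["condition"]

# Under the coarse metric the offset more than covers the loss (14 >= 10)...
print(coarse_units(offset) >= coarse_units(impact))  # True

# ...while under the finer metric the exchange must also be in kind (same habitat
# type), so this particular trade would simply not be allowed.
in_kind = impact["habitat"] == offset["habitat"]
print(in_kind and finer_units(offset) >= finer_units(impact))  # False
```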

    This paradox should be on the mind of anyone developing metrics or methods for assessing ecological equivalence or credit-debit systems, or using them to actually design an offset scheme. The same applies, of course, to any type of ecosystem service market (PES or otherwise).

    It is interesting to note that in their pilot schemes for testing habitat banking, France and the United Kingdom have made very different choices in terms of metrics. More on this later…

    Long-term floodplain meadows cannot realistically be recreated!

    Friday, October 28th, 2011

    Biodiversity offsets are making headlines as a new instrument or tool for biodiversity conservation in the UK. DEFRA defines offsets as actions in favour of biodiversity that are carried out in compensation for planned impacts (e.g. from development) and provide a measurable outcome. Whenever possible, offsets should target the same biodiversity components (species, habitat types etc.) as those that will be impacted. This raises the question of their “restorability”.

    In a recent paper published in the Journal of Applied Ecology, Ben Woodcock, Alison McDonald and Richard Pywell of CEH investigate the restorability of long-term floodplain meadows on agricultural land in South-Eastern England. Using a 22-year-old restoration experiment, they show that, given the current restoration trajectory, it would take over 150 years for the former arable land to reach a species composition close to that of long-term floodplain grasslands. Even with less restrictive restoration goals, i.e. focusing on the “types” of species (described using their morphological and reproductive characteristics or “traits”), it would take over 70 years. This is slightly more realistic but still a very long-term prospect.

    Ecosystems or habitat types for which restoration is a (very) long-term endeavour might fall outside the scope of offset schemes. As the authors say:

    any compensation scheme proclaiming they can replace floodplain meadows lost to development (i.e. gravel extraction) is being wholly unrealistic.

    Besides underlining the importance of avoiding the destruction of ecosystems and habitat types that are hard or impossible to replace, these findings raise two issues:

  • can the destruction of these habitats be offset by restoring or enhancing degraded habitats (of the same type)? This amounts to trading area (for which there will be a net loss) for quality (for which there would be no net loss). Is this acceptable? Another option considered by DEFRA is to use out-of-kind offsets.
  • how can the delays associated with restoring or enhancing habitat types be taken into account in the design and sizing of biodiversity offsets? DEFRA proposes to use “multipliers” for this, but these will probably be very hard to justify, ecologically, as ensuring that offsets lead to no net loss of the particular target habitat type (a rough illustrative sketch of such a multiplier is given below).
    Hopefully, the pilot scheme launched by the UK government will give the opportunity to test these solutions…
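    As an illustration only – this is not DEFRA’s proposed formula, and the discount rate and delays below are invented – one simple way to derive such a multiplier is to discount the delayed habitat gains: the longer the offset takes to reach the target condition, the more habitat has to be secured now.

```python
# Rough sketch of a time-delay multiplier (hypothetical, not DEFRA's actual formula).
# The slower the offset habitat matures, the larger the multiplier on its size.

def delay_multiplier(years_to_maturity, discount_rate=0.03):
    """Ratio between an immediate gain and the same gain delayed by
    `years_to_maturity`, assuming a constant annual discount rate."""
    return (1 + discount_rate) ** years_to_maturity

for delay in (5, 30, 70, 150):
    print(f"{delay:>3} years to maturity -> multiplier ~ {delay_multiplier(delay):.1f}")

# 150 years at 3% per year gives a multiplier of roughly 84 -- one reason why
# multipliers for very slow-to-restore habitats such as floodplain meadows are
# so hard to justify ecologically.
```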

    Principles, criteria and indicators for biodiversity offsets

    Sunday, July 10th, 2011

    The Business and Biodiversity Offsets Programme (BBOP) has launched a consultative process on several documents it drafted:

  • Guidance on the BBOP standards for biodiversity offsets, under a Principles, Criteria and Indicators (PCI) format
  • Guidance on assessing how an offset actually contributes to No-Net-Loss (NNL)
  • Guidance on assessing which components of biodiversity can and cannot be offset.
    These are important documents which may become standard best practice, especially for firms operating in countries with no established procedures or official guidance on designing, sizing and implementing offsets. Voluntary offset initiatives by private firms are particularly targeted by BBOP.

    The PCI document gives a useful overview of the many requirements placed on offsets, and thus reveals the range of specialized knowledge and know-how required to design or assess them.

    Offsetting is hard and BBOP is providing excellent and timely guidance! Don’t hesitate to contribute!

    The ecosystem valuation debate

    Tuesday, June 7th, 2011

    The Lancaster Environment Centre recently organized an on-line debate on ecosystem valuation. You can check out a summary of the debate on this page. Participants plan to produce a policy guidance document for future UK policy concerning market-based instruments for biodiversity conservation and ecosystem services.

    The same debate on ecosystem valuation will take place tomorrow in Paris (France), under the auspices of the IDDRI, a think-tank. In preparation for the symposium, Emma Broughton and Romain Pirard wrote a short piece on market-based instruments for biodiversity (pdf).

    Their article proposes a typology of instruments which distinguishes:

  • Regulations changing relative prices
  • Coasean type agreements
  • Reverse auctions
  • Tradable permits
  • Specific markets for environmental products
  • Premium capture on existing markets
    The authors discuss the pros and cons of each of these instruments.

    Learn more in the paper and participate in the on-going debate!

    Key issues in developing rapid assessment methods of ecosystem state

    Friday, February 25th, 2011

    David K. Rowe and his colleagues from the National Institute of Water and Atmospheric Research of New Zealand have developed a rapid method to score stream reaches. In presenting their method, they go through several of the key steps (and difficulties) in developing such “rapid assessment methods”. We summarize these below:

  • Scores are often given in reference to a desirable state (or “best of a kind”). This is helpful for ensuring that all assessors share the same upper bounds in their assessments. Selecting reference sites is however a tricky task, in particular if assessments do not focus on an ecosystem as a whole but on separate “functions”. Rowe et al. (2009) raise the issue of artificial streams performing certain functions better than reference sites. In this case they argue that this “over-performance” should be ignored: the artificial stream should be given the maximum score – that of the reference – for the particular function.
  • The selection of variables is a key step in method development. It requires an understanding of the system being assessed, or of the main drivers of the system’s state (i.e. a conceptual model of the system). The conceptual models can be tested using field data. As an example, Delaware’s DERAP method was built through multiple regression analysis of independent evaluations of wetland state against a set of stressor variables (on 250 different wetlands!).
  • Developing algorithms for combining several variables into single scores is where many methods fail to convince (see e.g. McCarthy et al. 2004). Algorithms can be tested against results from established methods or best professional judgement, using field sites or consensual reference sites for example. Alternatively, statistical models can be used to weight the variables (as in the development of DERAP). A minimal sketch of such a weighted-score calculation is given below.
  • Redundancy is unavoidable because of the interdependence of the many processes being assessed. Moreover, redundancy contributes to robustness in the face of user/assessor subjectivity. As an example, Florida’s UMAM method relies on best professional judgement but gives detailed guidelines through a list of criteria that are very redundant. The robustness of a method to user bias can be assessed through sensitivity analysis.
  • Once a method has been proposed, it must be revised and improved through testing and feedback from widespread use.
    The team who developed California’s rapid assessment method (CRAM) also made recommendations concerning model development (available here). They offer a more formalized step-by-step process that includes several of the points raised by Rowe and his co-authors.
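    To make the scoring step concrete, here is a minimal sketch of one common approach: a weighted sum of variables, each normalized against a reference site and capped so that over-performance cannot raise the score above that of the reference (echoing the point made by Rowe et al. above). The variables, weights and values are purely hypothetical, and this is not the actual DERAP, UMAM or CRAM algorithm.

```python
# Minimal sketch of a rapid-assessment score: hypothetical variables and weights,
# not the actual DERAP, UMAM or CRAM algorithms.

REFERENCE = {"riparian_shade": 0.9, "substrate_quality": 0.8, "invertebrate_index": 0.7}
WEIGHTS = {"riparian_shade": 0.3, "substrate_quality": 0.3, "invertebrate_index": 0.4}

def site_score(observations):
    """Weighted sum of variables, each normalized against the reference site and
    capped at 1 so that over-performance cannot exceed the reference score."""
    score = 0.0
    for variable, weight in WEIGHTS.items():
        normalized = min(observations[variable] / REFERENCE[variable], 1.0)
        score += weight * normalized
    return score  # 1.0 corresponds to the reference ("best of a kind") condition

degraded_reach = {"riparian_shade": 0.2, "substrate_quality": 0.5, "invertebrate_index": 0.3}
print(f"score = {site_score(degraded_reach):.2f}")  # ~0.43
```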

    Adding a first step to the mitigation hierarchy

    Friday, November 5th, 2010

    The standard mitigation hierarchy is to avoid impacts on the environment, reduce the impacts that cannot be avoided, and offset only the residual impacts.

    Applying this mitigation hierarchy requires precise knowledge of the environment that might be impacted, in order to design appropriate measures to avoid and reduce impacts, as well as to design and size offsets. To underline the importance of this knowledge, why not add “study” as a first step of the hierarchy?

    Any thoughts?