Blog

Why a ‘shrinking’ technology will never dominate the marginal

May 10, 2016 by Jannick Schmidt

– The case of electricity mixes in consequential LCA

Have you ever been in doubt about what to include in your technology mixes? What happens when a technology dominates today's mix, but we know that it will be phased out and its share reduced in the future?
To an extent the answer depends on your study. But the studies we perform are for decision support, where our clients need to understand the environmental impacts of a change in demand for the product under study. For such studies we need a consequential approach – a modelling approach in which activities in a product system are linked so that all activities that are expected to change are included.

At the core of consequential LCA are the marginal suppliers. They are defined as those that are installed as a consequence of a change in demand (in increasing and stable markets, which is the general case for most products). You can read more details about marginal suppliers at www.consequential-lca.org.

Here, let us look at a concrete example. In our Energy Club we developed consequential LCIs on electricity for more than 20 different countries and regions (contact us for availability of these data). In these LCIs we base the identification of the marginal suppliers on the above reasoning. So the marginal electricity suppliers are those that are predicted to increase their installed capacity. The only exception is when they are constrained, for example by a dependence on an input of a waste or by-product, or because the increase is fixed by legislation or other factors not related to changes in demand. Technologies with declining trends are not regarded as part of the marginal.

For instance, we found that the suppliers in Denmark that are predicted to increase capacity are wind and biomass, while coal is a declining (old) technology. This means that wind and biomass are marginal, while coal is not part of the marginal. See the figure below.

[Figure: production volumes of electricity technologies over time]
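The identification rule described above can be sketched as a simple filter. This is only an illustration: the technology list, trends, and constraint flags below are invented, not taken from the Energy Club inventories.

```python
# Sketch of the marginal-supplier rule (illustrative only): a technology
# is part of the marginal if its installed capacity is predicted to
# increase AND it is not constrained (e.g. by a by-product dependence or
# by legislation). Technologies with declining trends are excluded.

technologies = [
    # (name, predicted capacity trend, constrained?)
    ("wind",               "increasing", False),
    ("biomass",            "increasing", False),
    ("coal",               "decreasing", False),  # declining: never marginal
    ("waste incineration", "increasing", True),   # constrained by waste supply
]

marginal = [name for name, trend, constrained in technologies
            if trend == "increasing" and not constrained]

print(marginal)  # ['wind', 'biomass']
```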

Some flexibility in the timing of the phase-out of technologies such as coal can make the old technology a small part of the marginal, but never a significant one. Situations with such flexibility could be:
‐ when there is a significant amount of overcapacity of the old technology,
‐ in very short periods of time, to meet a short-term (quarters of a year to entire years) need for capacity.

But still, the shrinking technology will never dominate the marginal. This is because the timing of the phase-out of old technologies is predominantly determined by factors other than changes in demand. These other factors include the lifetime/replacement rate of the technology as well as its economic performance compared to the new technologies being installed.

Reference:
Muñoz I, Schmidt J H, de Saxcé M, Dalgaard R, Merciai S (2015), Inventory of country specific electricity in LCA – consequential scenarios. Version 3.0. Report of the 2.-0 LCA Energy Club. 2.-0 LCA consultants, Aalborg

It is not about money!

April 7, 2016 by Bo Weidema


The first working draft for ISO 14008 on monetary valuation of environmental impacts has now been sent out for commenting among the standardisation body members. The chair of the ISO Working Group is Bengt Steen, who introduced monetary valuation to Life Cycle Impact Assessment with his EPS method as early as the 1990s.

In the Life Cycle Assessment (LCA) community there is a growing awareness that valuation is needed, and that the “ban” on weighting for comparative assertions that was introduced in the ISO 14044 LCA standard is not a viable position. Valuation serves the purpose of facilitating comparisons across different environmental midpoint impact categories, by applying weights (values) that reflect their relative importance (ISO 14040). Without valuation it becomes impossible to recommend the best decision when the options score best on different impact categories.
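The point that options scoring best on different categories cannot be compared without weights can be made concrete with a small sketch. All category scores and weights below are invented for illustration; they do not come from any real valuation method.

```python
# Two options each score best on a different midpoint category, so
# neither dominates outright; only after applying weights reflecting the
# categories' relative importance can a single recommendation be made.
# All numbers are hypothetical.

scores = {                    # impact per functional unit (lower is better)
    "option A": {"climate change": 10.0, "acidification": 4.0},
    "option B": {"climate change": 8.0,  "acidification": 6.0},
}
weights = {"climate change": 1.0, "acidification": 2.0}  # relative importance

totals = {opt: sum(weights[cat] * val for cat, val in cats.items())
          for opt, cats in scores.items()}

best = min(totals, key=totals.get)
print(totals, best)  # {'option A': 18.0, 'option B': 20.0} option A
```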

But not everyone is convinced that monetary valuation is the best answer. When the European Union Product Environmental Footprint (PEF) pilot tests were recently expanded to include a comparison of results using different forms of weighting, monetary valuation methods were explicitly excluded, with the statement that "monetisation approaches (e.g. EPS2000, STEPWISE), will have to be dealt with separately".

This leads me to highlight some frequent and important misunderstandings about monetary valuation, and here I first have to say: It is not about the money! Most of the criticism of monetary valuation applies to any form of valuation, including panel weighting and distance-to-target methods:
The values discussed in monetary valuation (and in comparative valuation in general) are marginal values referring to trade-offs between alternative resource allocations, not general moral values – such as the value of democracy or the value of human life as such – that cannot be subjected to quantified measurement and trade-offs. Much critique of monetary and marginal valuation stems from confusing these two types of values.

Valuation is often criticised for being anthropocentric. This critique is correct, but in essence it is beside the point. Valuation has to be anthropocentric, since its purpose is to support human decision-making. Any concern for other species (or for that matter for any other group than the one that has the power to take the decision) must necessarily come as a concession from those who perform the valuation. However, the fact that it appears very difficult – or rather impossible – to design a truly non-anthropocentric valuation scheme does not make it unimportant to raise the issue and seriously contemplate its relevance when deciding on the design of a valuation method. It should also be noted that an anthropocentric valuation does not necessarily imply a low valuation of nature; nature does have high value for humans, both use value (today often referred to as ecosystem services) and non-use values (existence value and bequest value).

Monetary valuation is also criticised for giving more weight to people who have more money. While this critique may seem intuitively correct, it is not true if equity-weighting is performed correctly. A simple weighting proportional to the inverse of income will ensure that the same impact is weighted equally across all levels of income. More advanced, empirically determined utility-weights can be applied, which give larger weights to poorer population groups than to richer ones. Such equity-weighting should also be applied if values are expressed in non-monetary units! Because it is about values, not about the money!
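The simple inverse-income weighting mentioned above can be sketched as follows. The reference income, group incomes, and local valuations are all invented for illustration, and the weight function is the plain inverse-income form, not an empirically determined utility-weight.

```python
# Equity-weighting sketch: a local willingness-to-pay value is scaled by
# the inverse of income (relative to a reference income), so that the
# same physical impact counts equally regardless of income level.
# All figures are hypothetical.

REFERENCE_INCOME = 20_000  # e.g. a global average income; illustrative

def equity_weight(income):
    """Simple inverse-income weight (utility-based weights would differ)."""
    return REFERENCE_INCOME / income

# The same physical impact, valued locally in proportion to income:
groups = [
    {"income": 10_000, "local_value": 50.0},   # poorer group
    {"income": 40_000, "local_value": 200.0},  # richer group
]

for g in groups:
    g["weighted_value"] = g["local_value"] * equity_weight(g["income"])

# After weighting, both groups' identical impact carries the same value.
print([g["weighted_value"] for g in groups])  # [100.0, 100.0]
```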

Money is just a unit of exchange. You could equally well use units of Quality Adjusted Life Years, happiness-years, eco-points, or "Environmental Load Units" (ELUs) as Bengt Steen suggested. It is not about the money or the unit; it is about making things comparable. However, research has shown that when people are asked to compare two goods with a monetary equivalent, their answers are more egoistic – less altruistic – than when asked to compare the same goods in a context where money is not mentioned explicitly. So the unit does matter, but only in the sense that answers given in the two different contexts can only be compared after adjustment for the context-dependent bias. This is just one out of many biases that can be introduced when asking people about their values:

It is a widespread critique of valuation methods that they assume that participants exhibit rational, utility-maximising behaviour when making valuations, while empirical evidence shows that people do not exhibit this rational behaviour, neither in normal market transactions nor in experimental settings, but are influenced by the framing of the decision situation. A large body of literature on behavioural economics suggests improvements to the survey techniques to control and adjust for the systematic biases caused by the contextual and informational setting of the valuation.

When comparing items for which trade-offs between alternative resource allocations are in reality being made, as is most often the case in LCA, the problem of choice is unavoidable, and an outright rejection of valuation – monetary or not – is not a viable position.

However, a lot of the criticism of current valuations is valid. Many current valuations suffer from insufficient inclusion of equity-weighting, from many of the unnecessary biases described above, and from unnecessarily high uncertainties. But these are problems that can be solved, that need to be solved, and that we are working on solving, both in the ISO working group on monetary valuation and in our new crowd-funded Monetarisation Club that I urge anyone with interest in this development to join.

Consequential LCA is not scenario modelling

March 13, 2016 by Bo Weidema

In the discussion on the differences between attributional and consequential modelling for Life Cycle Assessment (LCA), it is a common misunderstanding that consequential modelling requires scenario modelling or other forms of forecasting, and is therefore more uncertain than attributional models based on average, allocated data from the recent past.

In their pure form, attributional models do not intend to say anything about the future, but it is still very common to find them applied to support claims about what is the best action to take to reduce future environmental impacts. One argument used to defend this (mis)use is the uncertainty argument.

In reality, both consequential and attributional models can be based on data from the recent past:

  • attributional models are based on data from the recent past on observed average behaviour of activities,
  • consequential models are based on data from the recent past on observed marginal changes in activities (including marginal substitutions).

In this way, consequential models that seek to identify the here-and-now consequences of decisions do not need any more assumptions (or more uncertain data) than do attributional models.

If dealing with issues further into the future, both attributional and consequential models would benefit from scenario modelling or other forms of forecasting, because future average behaviour, as well as future marginal changes, are different from those that can be predicted by extrapolating data from the recent past.

But when choosing forecasting techniques, there is no reason to believe that a precise linear extrapolation of the past average is a better model of the future than a less precise, but more accurate, consequential model. I have tried to illustrate this graphically here:

[Figure: impact over time under attributional and consequential models]

For more details on consequential modelling and the differences to attributional modelling, see consequential-lca.org

Don’t take zero for an answer

February 25, 2016 by Bo Weidema

I often encounter the idea that some numbers are just too uncertain to use and that it would be better not to quantify the issue and describe it qualitatively instead. Earlier this month I was presented with another example: At the first meeting of the ISO working group on monetary valuation, one of the environmental economists present used the effects of electromagnetic fields near high voltage transmission lines as an example of an impact that should not be quantified because the knowledge was just too uncertain.

I claimed that uncertainties cannot and should not be an argument for not quantifying. Even when describing something qualitatively instead – placing a flag as some people say – the most likely consequence is that decision makers will leave the issue out of consideration altogether – or maybe even worse: the missing quantification may be used by advocates for special interests to argue that the issue is important – even though a simple quantification of the worst case would have shown that it cannot be.
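The worst-case argument above can be sketched numerically. All figures below are invented for illustration; they are not from the electromagnetic-field example discussed at the ISO meeting.

```python
# Worst-case bounding sketch: even when knowledge is very uncertain, an
# upper-bound estimate can show whether an impact could matter at all.
# All figures are hypothetical.

exposed_population = 100_000   # upper bound on people near the source
worst_case_risk = 1e-6         # worst-case annual risk per exposed person

# Upper bound on the impact, in expected cases per year:
impact_upper_bound = exposed_population * worst_case_risk

# A comparable figure for the rest of the product system:
other_impacts = 50.0

print(impact_upper_bound)                          # 0.1
# Even the worst case is below 1% of the other impacts, so the issue
# cannot dominate the result - something a qualitative "flag" cannot show.
print(impact_upper_bound < 0.01 * other_impacts)   # True
```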

Another often-stated argument against using uncertain numbers is that decision makers cannot handle uncertainties. But this is simply not true. Uncertainties are a certain part of life – we live with them every day. As Lise Laurin, CEO of EarthShift, put it in a recent discussion on PRé's LCA discussion list:

‘We all want to know if it is going to rain on the day of our picnic. But what we get is a probability. If the forecaster says there is a 30% chance of rain, we may go ahead and schedule the picnic but have contingency plans just in case. The only thing we need to know about the process of forecasting the weather is that it’s really complicated.’

And, yes, quantifying environmental effects includes a lot of complicated issues – and some of these issues will certainly carry high uncertainties. Sometimes our data are just not that accurate, either because the methodology is in its infancy or because no one has yet bothered to do a thorough study. I am a firm believer in presenting data with their inherent uncertainties, and in not excluding any part of the inventory just because it is complicated.

Our role and duty towards policy- and decision-makers is not to decide that some impact has to be excluded because it carries a high uncertainty. Instead, perhaps we need to put more emphasis on educating our customers and stakeholders to ask for completeness and uncertainty in the information they are given – and not to take zero for an answer.