Controlled burns in conifer forests

Eike Luedeling and Cory Whitney, University of Bonn, Germany


Decision analysis with the decisionSupport package

The decisionSupport package

Decision-makers often wish to have a quantitative basis for their decisions. However, there is often no ‘hard data’ for many important variables, which can paralyze decision-making processes or lead decision-makers to conclude that large research efforts are needed before a decision can be made. That is, many variables decision-makers must consider cannot be precisely quantified, at least not without unreasonable effort. The major objective of (prescriptive) decision analysis is to support decision-making processes faced with this problem (Luedeling and Shepherd, 2016). Decision analysis can forecast decision outcomes without precise numbers, as long as probability distributions describing the possible values for all variables can be estimated.

The decisionSupport package (Luedeling and Göhring, 2017) implements this as a Monte Carlo simulation, which generates a large number of plausible system outcomes, based on random numbers for each input variable that are drawn from user-specified probability distributions. It also conducts a sensitivity analysis (based on Projection-to-Latent-Structures regression) to highlight uncertain variables that have large impacts on model outputs (Luedeling and Gassner, 2012; Wold et al., 2001). This approach is useful for determining whether a clearly preferable course of action can be delineated based on the present state of knowledge without the need for further information. If the distribution of predicted system outcomes doesn’t imply a clearly preferable decision option, variables identified as carrying decision-relevant uncertainty can then be targeted by decision-supporting research (Luedeling et al., 2015). This approach is explained in more detail below.

The decisionSupport() function in the R package decisionSupport can be applied to conduct decision analysis. The function requires two inputs:

  1. an R function that predicts decision outcomes based on the variables named in a separate data table. The user customizes this function to address the particular decision problem; it constitutes the decision analysis model.
  2. an input table (in .csv format) specifying the names and probability distributions for all variables used in the decision model. These distributions aim to represent the full range of possible values for each component of the model.

These two inputs are provided as arguments to the decisionSupport() function, which conducts a Monte Carlo analysis with repeated model runs based on probability distributions for all uncertain variables. The data table and model are customized to fit the particulars of a specific decision.

Example – Controlled burning for ecological conservation in forests of the American West

The ecology of the conifer forests of the American West is largely shaped by wildfires. Many ecologists recommend regular, low-intensity burns to manage the build-up of combustible understory materials (shrubs, fallen branches). However, not all municipalities or regions implement these practices. Failure to follow the recommended controlled burning regime can lead to the build-up of fuel and increase the severity of wildfires. Such wildfires have destroyed many conifer forest ecosystems of the Western United States in the recent past.

Here, we provide a simple example (in annotated R code) of how the decisionSupport package can be used to inform a decision process. The example provided simulates the decision of forest managers to use controlled fires in conifer forests vs. running the risk of severe fire.

Input table

To support the model building process we design an input table wildfire_input_table.csv containing some of the basic values for the analysis. This table contains all the variables used in the model. Their distributions are described by a 90% confidence interval, which is specified by lower (5% quantile) and upper (95% quantile) bounds, as well as the shape of the distribution. This example uses four different distributions:

  1. const – a constant value
  2. norm – a normal distribution
  3. tnorm_0_1 – a truncated normal distribution that can only have values between 0 and 1 (useful for probabilities; note that 0 and 1, as well as numbers outside this interval, are not permitted as inputs)
  4. posnorm – a normal distribution truncated at 0 (only positive values allowed)

Here’s the input table for the wildfire example:

##                          variable  lower median    upper distribution
## 1                         n_years   20.0     NA    20.00        const
## 2                       area_size    1.0     NA     1.00        const
## 3                 initial_biomass   20.0     NA   100.00      posnorm
## 4           biomass_after_burning   10.0     NA    20.00      posnorm
## 5       biomass_after_severe_fire    2.0     NA     5.00      posnorm
## 6         biomass_after_mild_fire   10.0     NA    20.00      posnorm
## 7        bio_accu_rate_contr_burn   10.0     NA    40.00      posnorm
## 8     bio_accu_rate_no_contr_burn   40.0     NA   100.00      posnorm
## 9              severity_threshold   50.0     NA   120.00      posnorm
## 10                  discount_rate    2.0     NA     6.00      posnorm
## 11   controlled_burning_frequency    2.0     NA     2.00        const
## 12                      fire_risk    0.1     NA     0.25    tnorm_0_1
## 13     cost_of_controlled_burning  300.0     NA  1000.00      posnorm
## 14            env_imp_severe_fire 1000.0     NA  6000.00      posnorm
## 15              env_imp_mild_fire -500.0     NA   800.00         norm
## 16   fire_fighting_cost_mild_fire  100.0     NA   500.00      posnorm
## 17 fire_fighting_cost_severe_fire 1000.0     NA 10000.00      posnorm

For a full list of possible distributions, type ?random.estimate1d in R. When specifying confidence intervals for truncated distributions, note that approximately 5% of the random values should ‘fit’ within the truncation interval on either side. If there isn’t enough space, the function will generate a warning (usually it will still work, but the inputs may not look like you intended them to).

Default distributions are provided for all variables, but feel free to make adjustments by editing the .csv file in a spreadsheet program.

Developing the model

To set up the analysis we first define a timeline that records whether there is a wildfire or a controlled burn in each year of our planning horizon. The length of this horizon is specified by the variable n_years from the input table (we use 20 years in this example).

We define a probability fire_risk (which can range from 0 to 1) that a wildfire will break out in a given year, and use the chance_event() function to simulate a time series of wildfire occurrence:
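A sketch of that simulation step (fire_risk and n_years come from the input table; the numeric values used here are illustrative):

```r
library(decisionSupport)

n_years <- 20    # planning horizon from the input table
fire_risk <- 0.2 # illustrative draw from the input table

# one Bernoulli draw per year: 1 (wildfire) with probability fire_risk
Fire <- chance_event(fire_risk, value_if = 1, n = n_years)
```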

This call produces a series of n_years (here: 20) numbers (called a vector in R), each of which has a fire_risk chance of being 1 (the value specified by the value_if argument) and is 0 otherwise. The 1s indicate years that feature a wildfire, while 0s mark years without a fire.

We also simulate whether controlled burns occur in a given year. If we assume that fire managers follow a strict schedule, this can be computed as
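A sketch of that computation (illustrative values; names follow the text):

```r
n_years <- 20
controlled_burning_frequency <- 2  # burn every other year (from the input table)

# TRUE in years that are multiples of the burning frequency, then 0/1
Controlled <- as.numeric(1:n_years %% controlled_burning_frequency == 0)
```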

In this computation, controlled_burning_frequency comes from the input table; it is simply the number of years between two controlled burns (and should be an integer). The symbol %% calls the modulo operation, which returns the remainder after dividing each year number (from 1:n_years) by controlled_burning_frequency. If the year number is a multiple of controlled_burning_frequency, the expression evaluates to TRUE; otherwise it evaluates to FALSE. These logical values are then converted into 0s and 1s by the as.numeric() function.

The code below combines these vectors into a data.frame that contains all information on our fire scenario.
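A sketch of this step, using the hypothetical name fire_scenario; the Fire vector here is one illustrative draw (matching the printed table below) and Controlled follows the every-other-year schedule:

```r
n_years <- 20
Fire <- c(1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0)
Controlled <- as.numeric(1:n_years %% 2 == 0)

# combine the year counter and the two event vectors into one data.frame
fire_scenario <- data.frame(Year = 1:n_years,
                            Fire = Fire,
                            Controlled = Controlled)
```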

##    Year Fire Controlled
## 1     1    1          0
## 2     2    0          1
## 3     3    0          0
## 4     4    1          1
## 5     5    1          0
## 6     6    0          1
## 7     7    0          0
## 8     8    0          1
## 9     9    0          0
## 10   10    0          1
## 11   11    0          0
## 12   12    0          1
## 13   13    0          0
## 14   14    0          1
## 15   15    1          0
## 16   16    0          1
## 17   17    0          0
## 18   18    0          1
## 19   19    0          0
## 20   20    0          1

We want to distinguish between mild and severe fires, because these differ strongly in the impact on the ecosystem and in the costs incurred in fighting the fire. To determine whether a fire is mild or severe, we simulate the amount of combustible biomass that has accumulated in a given year. When this biomass exceeds the threshold severity_threshold (from the input table) at the time of a wildfire outbreak, we classify the fire as severe, otherwise as mild.

To do this, we first have to simulate the amount of combustible biomass. We start by setting up two data.frames to store the results of this simulation, one for the no-burn scenario (no.control), and one for the scenario with controlled burns (control):

Each of the two data.frames has three columns for the simulation year (Year), the combustible biomass (Biomass) and the fire type (FireType). For year 1, we set the biomass in both scenarios to an initial level described by initial_biomass from the input table. We then start a loop that cycles through all years (with y used as a counter for the current year) and computes that year’s biomass, according to the fire scenario encountered in each year. We distinguish between four different fire settings (using the switch function):

  1. When neither a wildfire nor a controlled burn occurs, biomass increases by a certain percentage (bio_accu_rate_no_contr_burn).
  2. When a controlled burn occurs, biomass is reduced to biomass_after_burning, but then increases by a certain percentage (bio_accu_rate_contr_burn).
  3. When a mild fire occurs, the biomass is reduced to biomass_after_mild_fire.
  4. When a severe fire occurs, the biomass declines to biomass_after_severe_fire.

All variables in steps 1-4 are from the input table.
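A simplified sketch of this loop for the control scenario. Variable names follow the input table, the numeric values are illustrative draws, and the growth-rate bookkeeping is condensed (we assume here that the control scenario accumulates biomass at bio_accu_rate_contr_burn); the vignette's original code may differ in detail:

```r
n_years <- 20
initial_biomass <- 60
bio_accu_rate_contr_burn <- 25   # percent biomass increase per year
biomass_after_burning <- 15
biomass_after_mild_fire <- 15
biomass_after_severe_fire <- 3
severity_threshold <- 56

Fire       <- c(1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0)
Controlled <- as.numeric(1:n_years %% 2 == 0)

control <- data.frame(Year = 1:n_years, Biomass = NA_real_,
                      FireType = NA_character_)
control$Biomass[1] <- initial_biomass

for (y in 2:n_years) {
  prev <- control$Biomass[y - 1]
  # classify last year's event: controlled burn > wildfire > nothing
  setting <- if (Controlled[y - 1] == 1) "controlled"
             else if (Fire[y - 1] == 0) "none"
             else if (prev > severity_threshold) "severe"
             else "mild"
  control$Biomass[y] <- switch(setting,
    none       = prev * (1 + bio_accu_rate_contr_burn / 100),
    controlled = biomass_after_burning,
    mild       = biomass_after_mild_fire,
    severe     = biomass_after_severe_fire)
}
```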

We can then determine the fire severity by comparing the combustible biomass level with the severity threshold (severity_threshold). For this, we use several nested ifelse calls that first check whether there was a controlled burn, then (if not) whether there was a wildfire, and finally whether the biomass exceeded the threshold for a severe fire. Depending on the outcome of these tests, the FireType is set to “Controlled”, “None”, “Severe” or “Mild”.

This is simulated with the following code for the control scenario:
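A sketch of that code; Fire, Controlled and Biomass stand in for the objects built above, with illustrative values:

```r
severity_threshold <- 56
Fire       <- c(1, 0, 0, 1)
Controlled <- c(0, 1, 0, 1)
Biomass    <- c(78.0, 4.5, 16.9, 32.4)

# controlled burn takes precedence, then no-fire, then the severity test
FireType <- ifelse(Controlled == 1, "Controlled",
            ifelse(Fire == 0, "None",
            ifelse(Biomass > severity_threshold, "Severe", "Mild")))
```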

For the no.control scenario, the code looks similar, except that there is no “Controlled” fire type, because controlled burns don’t occur in this scenario:
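The same classification without the controlled-burn branch (again with illustrative values):

```r
severity_threshold <- 56
Fire    <- c(1, 0, 0, 1)
Biomass <- c(78.0, 4.5, 16.9, 32.4)

# only two checks remain: was there a fire, and was it severe?
FireType <- ifelse(Fire == 0, "None",
            ifelse(Biomass > severity_threshold, "Severe", "Mild"))
```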

At this point, the fire time series is complete, with all information stored in the data.frames control and no.control:

##    Year   Biomass   FireType
## 1     1 77.975512     Severe
## 2     2  4.490215 Controlled
## 3     3 16.928781       None
## 4     4 32.397074 Controlled
## 5     5 16.928781       Mild
## 6     6 10.840328 Controlled
## 7     7 16.928781       None
## 8     8 32.397074 Controlled
## 9     9 16.928781       None
## 10   10 32.397074 Controlled
## 11   11 16.928781       None
## 12   12 32.397074 Controlled
## 13   13 16.928781       None
## 14   14 32.397074 Controlled
## 15   15 16.928781       Mild
## 16   16 10.840328 Controlled
## 17   17 16.928781       None
## 18   18 32.397074 Controlled
## 19   19 16.928781       None
## 20   20 32.397074 Controlled
##    Year     Biomass FireType
## 1     1   77.975512   Severe
## 2     2    4.490215     None
## 3     3    8.593048     None
## 4     4   16.444751     Mild
## 5     5   10.840328     Mild
## 6     6   10.840328     None
## 7     7   20.745435     None
## 8     8   39.701109     None
## 9     9   75.977104     None
## 10   10  145.399473     None
## 11   11  278.254971     None
## 12   12  532.504191     None
## 13   13 1019.067916     None
## 14   14 1950.218299     None
## 15   15 3732.186395   Severe
## 16   16    4.490215     None
## 17   17    8.593048     None
## 18   18   16.444751     None
## 19   19   31.470773     None
## 20   20   60.226484     None

In the above simulation, the controlled_burning_frequency was 2 and the severity_threshold 56.

With this information we can move on to computing the cost implications of each scenario. We only consider three types of costs: the cost of controlled burning (cost_of_controlled_burning), the cost of fighting fires (fire_fighting_cost), and the value of the environmental damage caused by a fire (env_imp). Since we distinguish between severe and mild fires, we get separate variables for the cost of firefighting (fire_fighting_cost_mild_fire and fire_fighting_cost_severe_fire) and for the environmental impacts (env_imp_mild_fire and env_imp_severe_fire). For each year and each scenario, the applicable costs are selected (depending on the fire type), added up, and multiplied by the area under consideration (area_size) to produce the total cost for each option.
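A sketch of this bookkeeping for a few years of one scenario. The cost variables follow the input table, their values are illustrative draws, and we assume costs are stored as negative numbers so that the scenarios can later be compared by subtraction:

```r
area_size <- 1
cost_of_controlled_burning <- 478.36
fire_fighting_cost_mild_fire <- 250
fire_fighting_cost_severe_fire <- 3000
env_imp_mild_fire <- 462.33
env_imp_severe_fire <- 1378.07

FireType <- c("Severe", "Controlled", "None", "Mild")  # from one simulated run

# pick the applicable costs for each year's fire type and add them up
yearly_cost <- sapply(FireType, function(type)
  switch(type,
    None       = 0,
    Controlled = cost_of_controlled_burning,
    Mild       = fire_fighting_cost_mild_fire + env_imp_mild_fire,
    Severe     = fire_fighting_cost_severe_fire + env_imp_severe_fire))

bottomline_contr_burn <- -yearly_cost * area_size  # costs enter negatively
```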

Now we have two vectors that contain the annual costs of the two scenarios (bottomline_contr_burn and bottomline_no_burn). The net benefits of the controlled burning strategy (benefit_of_controlled_burning) are then simply calculated as the difference between control and no control:
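A sketch with illustrative four-year cost vectors (costs are negative numbers):

```r
bottomline_contr_burn <- c(-4378.07, -478.36, 0, -478.36)
bottomline_no_burn    <- c(-4378.07, 0, 0, -712.33)

# net benefit of controlled burning: control minus no-control
benefit_of_controlled_burning <- bottomline_contr_burn - bottomline_no_burn
```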

This produces, for example:

##  [1] -4378.0739     0.0000     0.0000  -712.3284  -712.3284     0.0000
##  [7]     0.0000     0.0000     0.0000     0.0000     0.0000     0.0000
## [13]     0.0000     0.0000 -4378.0739     0.0000     0.0000     0.0000
## [19]     0.0000     0.0000
##  [1] -4378.0739  -478.3568     0.0000  -478.3568  -712.3284  -478.3568
##  [7]     0.0000  -478.3568     0.0000  -478.3568     0.0000  -478.3568
## [13]     0.0000  -478.3568  -712.3284  -478.3568     0.0000  -478.3568
## [19]     0.0000  -478.3568
##  [1]    0.0000 -478.3568    0.0000  233.9716    0.0000 -478.3568    0.0000
##  [8] -478.3568    0.0000 -478.3568    0.0000 -478.3568    0.0000 -478.3568
## [15] 3665.7455 -478.3568    0.0000 -478.3568    0.0000 -478.3568

The last step in this decision model is to compute the total benefits by summing up the yearly values. A slight complication is that very few people are indifferent to the time at which they face costs or benefits. Most people would prefer to have income now rather than the same income in the future, and they feel the opposite way about expenses (most prefer expenses later). This characteristic is known as time preference. We also say that people ‘discount’ future costs and benefits. This is accounted for when calculating the so-called Net Present Value (NPV), which discounts future costs or benefits by an investor-specific discount rate. Usually, the further in the future these costs and benefits occur, the lower their perceived value.

decisionSupport provides a discounting function called discount(), which implements the standard equation used by economists to compute the NPV. For example, a discount rate of 4% applied to a regular income stream of 100 dollars per year over 10 years would produce:
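That example can be reproduced with the package’s discount() function (note that the rate is given in percent):

```r
library(decisionSupport)

# discount a stream of 100 dollars per year over 10 years at 4%
discount(rep(100, 10), discount_rate = 4)
```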

##  [1] 100.00000  96.15385  92.45562  88.89964  85.48042  82.19271  79.03145
##  [8]  75.99178  73.06902  70.25867

If we add the additional argument calculate_NPV=TRUE, to our code, the function automatically sums up these values to produce the Net Present Value:
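The same call with the extra argument:

```r
library(decisionSupport)

# sum the discounted values into a single Net Present Value
discount(rep(100, 10), discount_rate = 4, calculate_NPV = TRUE)
```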

## [1] 843.5332

We see that the perceived net benefit, the Net Present Value, is not 1000 dollars, as we would get by just adding up the yearly values, but only 844 dollars.

Applied to our example, and using the discount_rate variable (from the input table), this produces:
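A sketch of that final computation, with hypothetical object names (NPV_contr_burn etc.) and illustrative values; discount_rate would be drawn from the input table:

```r
library(decisionSupport)

discount_rate <- 4  # illustrative draw from the input table (percent)
bottomline_contr_burn <- c(-4378.07, -478.36, 0, -478.36)
bottomline_no_burn    <- c(-4378.07, 0, 0, -712.33)
benefit_of_controlled_burning <- bottomline_contr_burn - bottomline_no_burn

# Net Present Value of each scenario and of the net benefit
NPV_contr_burn <- discount(bottomline_contr_burn, discount_rate,
                           calculate_NPV = TRUE)
NPV_no_burn    <- discount(bottomline_no_burn, discount_rate,
                           calculate_NPV = TRUE)
NPV_benefit    <- discount(benefit_of_controlled_burning, discount_rate,
                           calculate_NPV = TRUE)
```

Because discounting is a linear operation, NPV_benefit equals the difference between the two scenario NPVs.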

With this metric, we can compare the two forest management options.

Here’s an illustration of the basic steps of the simulation: