`riskyr` is a toolbox for rendering risk literacy more transparent. Its goal is to provide deeper insights into risk-related scenarios with a minimum of hassle and a maximum of fun.

This page assumes that you just installed the package and are now eager to see what you can do with it. (See the User guide for a more general introduction.)

How can you use `riskyr`? Basically, there are two ways to get started:

1. Define your own `riskyr` scenario from risk-related information (typically provided in terms of probabilities).
2. Inspect one of 25 predefined `riskyr` scenarios to get a glimpse of what types of scenarios are possible.

Either way, you will soon explore some specific scenario and uncover relationships between its parameters. Please load the package first (if you have not already done so):

`library("riskyr") # loads the package`

Let us launch your `riskyr` career by creating a fictitious risk scenario that we construct from scratch:^{1}

**Identifying reoffenders**

Imagine you are developing a test to predict whether a jailed criminal offender will reoffend after his or her release. Your research yields the following information:

- 45% of 753 jailed offenders in this prison re-offend after they are released (`prev = .45`).
- Your test correctly detects those who will re-offend in 98% of the cases (`sens = .98`).
- Your test falsely identifies 54% of those who will not re-offend as potential re-offenders. Conversely, this implies that your test correctly identifies 46% of those who will not re-offend (`spec = .46`).

John D. is about to get released and is tested. The test predicts that he will reoffend.

What is the probability that John D. will actually reoffend, given his test result?

To answer this question, you could calculate the corresponding probabilities or frequencies (as explained in the user guide). Alternatively, you can use the `riskyr()` function to create a `riskyr` scenario that you can modify, inspect, and visualize in various ways.

First, we need to translate the numeric information provided in our example into parameter values:

- The probability of reoffending provides the *prevalence* in our population: `prev = .45`.
- The test's conditional probability of correctly detecting a reoffender provides its *sensitivity*: `sens = .98`.
- The test's conditional probability of correctly detecting someone who will not reoffend provides its *specificity*: `spec = .46`. (This corresponds to a *false alarm rate* of `fart = 1 - spec = .54`.)
- In addition, the *population size* of your sample was mentioned to be `N = 753`.^{2}
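Before building the scenario, we can already anticipate the answer by applying Bayes' rule to these three parameters by hand (a quick sketch in plain R, independent of `riskyr`):

```r
# Compute the positive predictive value (PPV) directly via Bayes' rule:
prev <- .45  # p(reoffends)
sens <- .98  # p(positive test | reoffends)
spec <- .46  # p(negative test | does not reoffend)

# p(reoffends | positive test):
PPV <- (prev * sens) / (prev * sens + (1 - prev) * (1 - spec))
PPV  # approximately 0.598
```

Keep this value in mind as a benchmark for the visualizations that follow.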

The following code defines a perfectly valid `riskyr`

scenario:

```
# Create a minimal scenario:
my.scenario <- riskyr(prev = .45,
                      sens = .98,
                      spec = .46)
```

To make various outputs more recognizable, many aspects of a `riskyr` scenario can be described by setting optional arguments:

- `scen.lbl` specifies a label by which you can recognize the scenario (e.g., "Identifying reoffenders").
- `popu.lbl` specifies the population of interest (e.g., "inmates").
- `cond.lbl` specifies the condition of interest (i.e., "reoffending").
- `cond.true.lbl` specifies a label for the condition being true ("offends again").
- `cond.false.lbl` specifies a label for the condition being false ("does not offend again").
- `dec.lbl` specifies the nature of the decision, prediction, or test ("test result").
- `dec.pos.lbl` specifies a positive decision regarding the condition ("predict to reoffend").
- `dec.neg.lbl` specifies a negative decision regarding the condition ("predict to not reoffend").
- `hi.lbl`, `mi.lbl`, `fa.lbl`, and `cr.lbl` specify the four possible combinations of conditions and decisions:
  - hit: The test predicts that the inmate reoffends and s/he does ("reoffender found");
  - miss: The test predicts that the inmate does not reoffend but s/he does ("reoffender missed");
  - false alarm: The test predicts that the inmate reoffends but s/he does not ("false accusation");
  - correct rejection: The test predicts that the inmate does not reoffend and s/he does not ("correct release").

Whereas specifying three essential probabilities is necessary to define a valid `riskyr` scenario, providing `N` and the other arguments is entirely optional. For illustrative purposes, we create a very well-specified `riskyr` scenario:

```
# Create a customized scenario:
my.scenario <- riskyr(scen.lbl = "Identifying reoffenders",
                      popu.lbl = "prison inmates",
                      cond.lbl = "reoffending",
                      cond.true.lbl = "offends again", cond.false.lbl = "does not offend again",
                      dec.lbl = "test result",
                      dec.pos.lbl = "predict to\nreoffend", dec.neg.lbl = "predict to\nnot reoffend",
                      hi.lbl = "reoffender found", mi.lbl = "reoffender missed",
                      fa.lbl = "false accusation", cr.lbl = "correct release",
                      prev = .45,  # prevalence of being a reoffender
                      sens = .98,  # p( predict to reoffend | offends again )
                      spec = .46,  # p( predict to not reoffend | does not offend again )
                      fart = NA,   # p( predict to reoffend | does not offend again )
                      N = 753,     # population size
                      scen.src = "(a fictitious example)")
```

We *could* inspect the details of `my.scenario` with `summary(my.scenario)`. But it's much more colorful to immediately check some visualizations of our scenario:

An icon array displays the entire population of inmates classified by condition (whether they will reoffend or not) and decision (our test's predictions). We can plot this display for our scenario by using the `plot` function and specifying `plot.type = "icons"`:

`plot(my.scenario, plot.type = "icons")`

From the icon array, we can easily see that roughly half of the inmates reoffend (see the icons in dark green and dark red). The majority of the reoffenders are classified correctly (shown in dark green rather than dark red).

But where is John D.? His test result predicted that he would reoffend. Depending on his actual behavior, this means that he will *either* be classified as a "reoffender found" (if he actually reoffends: dark green icons) *or* as a "false accusation" (if he does not reoffend: light red icons). As there are a similar number of both types of icons (with some skew towards "reoffenders found"), it appears that his chances of actually reoffending are only slightly higher than chance.

To dig into the dirty details of `my.scenario`, let us look at its `summary`:

```
summary(my.scenario)
#> Scenario: Identifying reoffenders
#>
#> Condition: reoffending
#> Decision: test result
#> Population: prison inmates
#> N = 753
#> Source: (a fictitious example)
#>
#> Probabilities:
#>
#> Essential probabilities:
#> prev sens mirt spec fart
#> 0.45 0.98 0.02 0.46 0.54
#>
#> Other probabilities:
#> ppod PPV NPV FDR FOR
#> 0.738 0.598 0.966 0.402 0.034
#>
#> Frequencies:
#>
#> by conditions:
#> cond.true cond.false
#> 339 414
#>
#> by decision:
#> dec.pos dec.neg
#> 556 197
#>
#> by correspondence (of decision to condition):
#> dec.cor dec.err
#> 522 231
#>
#> 4 essential (SDT) frequencies:
#> hi mi fa cr
#> 332 7 224 190
#>
#> Accuracy:
#>
#> acc:
#> 0.694
```

The text output (printed in R’s console window) provides a brief description of our scenario (i.e., its name, the condition and decision of interest, as well as the type and size of population), followed by a range of numeric parameters (structured into probabilities, frequencies, and overall accuracy).

In the present case, we were interested in a person's conditional probability of reoffending given a positive test result. This metric is also known as the *positive predictive value* (PPV). Our summary information shows `PPV = 0.598`. Thus, based on the information provided, John D.'s probability of reoffending is 59.8% (quite in line with our visual estimate from the icon array above).

An alternative way to view our scenario is a frequency tree that splits the population into two subgroups (e.g., by the two possible results of our test) and then classifies all members of the population by the possible combinations of decision and actual condition, yielding the same four types of frequencies as identified above (and listed as `hi`, `mi`, `fa`, and `cr` in the `summary` above):

`plot(my.scenario, plot.type = "tree", by = "dc") # plot tree diagram (splitting N by decision)`

The frequency tree also shows us how the PPV (shown on the arrow on the lower left) can be computed from frequencies (shown in the boxes): PPV = (number of reoffenders found)/(number of people predicted to reoffend), or `PPV = hi/dec.pos`. Numerically, we see that `PPV = 332/556`, which amounts to about 60% (or `1 - 0.403`).
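The same four frequencies can be reconstructed from the three probabilities and `N` with basic rounding (a sketch of the arithmetic, not `riskyr`'s internal code):

```r
# Reconstruct the four SDT frequencies from prev, sens, spec, and N:
N    <- 753
prev <- .45; sens <- .98; spec <- .46

cond.true  <- round(N * prev)   # 339 reoffenders
cond.false <- N - cond.true     # 414 non-reoffenders

hi <- round(cond.true * sens)   # 332 hits
mi <- cond.true - hi            #   7 misses
cr <- round(cond.false * spec)  # 190 correct rejections
fa <- cond.false - cr           # 224 false alarms

PPV <- hi / (hi + fa)           # 332/556, about 0.60
NPV <- cr / (cr + mi)           # 190/197, about 0.96
```

The rounding to integer frequencies is also why ratios of tree frequencies can deviate slightly from the exact probabilities.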

The tree also depicts additional information that corresponds to our `summary` from above. For instance, if we had wondered about the negative predictive value (NPV) of a negative test result (i.e., the conditional probability of not reoffending given that the test predicted this), the tree shows this to be `NPV = 190/197`, or about 96.4% (as `NPV = cr/dec.neg`). Again, this closely corresponds to our `summary` information of `NPV = 0.966`.^{3}

A good question to ask is: To what extent do the positive and negative predictive values (PPV and NPV) depend on the likelihood of reoffending in our population (i.e., the condition's prevalence)? To answer this, the following code shows conditional probabilities (here `PPV` and `NPV`) as a function of `prev`:

`plot(my.scenario, plot.type = "curve") # plot default curve [what = c("prev", "PPV", "NPV")]:`

As before, we can read off the current values of `PPV = 59.76%` and `NPV = 96.56%`. Importantly, the curves also show that the prevalence value is absolutely crucial for the values of both `PPV` and `NPV`. For instance, if `prev` dropped further, the `PPV` of our test would also be considerably lower. In fact, both the PPV and NPV vary from 0 to 1 depending on the value of `prev`, which means that specifying them is actually meaningless when the corresponding value of `prev` is not communicated as well. (See the guide on functional perspectives for additional information and options.)

Having illustrated how we can create a scenario from scratch and begin to inspect it in a few ways, we can now turn towards loading scenarios that are contained in the `riskyr` package.

As defining your own scenarios can be cumbersome and the literature is full of existing problems (that study so-called Bayesian reasoning), `riskyr` provides a set of – currently 25 – predefined scenarios (stored in a list `scenarios`). The following table provides a first overview of the scenarios available, including their relevant condition, their population size `N`, and basic probability information:

Nr | Scenario | Condition | N | prev | sens | spec | fart |
---|---|---|---|---|---|---|---|
2 | Mammografie 1 | Brustkrebs | 1000 | 0.010 | 0.900 | 0.910 | 0.090 |
3 | Nackenfaltentest (NFT) | Down Syndrom/Trisomie 21 | 1000 | 0.010 | 0.900 | 0.950 | 0.050 |
4 | HIV 1 (f) | HIV positive | 100000 | 0.000 | 1.000 | 1.000 | 0.000 |
5 | HIV 2 (f) | HIV positive | 250000 | 0.000 | 1.000 | 1.000 | 0.000 |
6 | Mammography 2 | breast cancer | 1000 | 0.010 | 0.800 | 0.900 | 0.100 |
7 | Sepsis | suffering from sepsis | 100 | 0.100 | 0.800 | 0.900 | 0.100 |
8 | Cab problem | Company | 100 | 0.150 | 0.800 | 0.800 | 0.200 |
9 | Sigmoidoskopie 1 | Darmkrebs | 1000000 | 0.000 | 0.700 | 0.900 | 0.100 |
10 | Sigmoidoskopie 1 | Darmkrebs | 1000000 | 0.035 | 0.700 | 0.900 | 0.100 |
11 | Brustkrebs 1 | Brustkrebs | 1000 | 0.010 | 0.900 | 0.910 | 0.090 |
12 | Brustkrebs 2 (BRCA1) | Brustkrebs | 1000 | 0.600 | 0.900 | 0.910 | 0.090 |
13 | Brustkrebs 3 (BRCA1+pos. Mam.) | Brustkrebs | 1000 | 0.940 | 0.900 | 0.910 | 0.090 |
14 | HIV 3 (m) | HIV positiv | 10000000 | 0.000 | 0.997 | 1.000 | 0.000 |
15 | HIV 4 (m) | HIV positiv | 10000000 | 0.000 | 0.997 | 1.000 | 0.000 |
16 | Nackenfaltentest 2 (NFT) | Down Syndrom/Trisomie 21 | 10000 | 0.019 | 0.800 | 0.820 | 0.180 |
17 | Amniozentese (pos. NFT) | Down Syndrom/Trisomie 21 | 10000 | 0.080 | 0.994 | 0.995 | 0.005 |
18 | Musical town | membership in a choir | 1000 | 0.500 | 0.200 | 0.400 | 0.600 |
19 | Mushrooms | color | 100 | 0.200 | 0.200 | 0.950 | 0.050 |
20 | Bowel cancer (FOB screening) | Bowel cancer | 2030 | 0.015 | 0.667 | 0.910 | 0.090 |
21 | PSA test 1 (high prev) | prostate cancer | 1000 | 0.500 | 0.210 | 0.940 | 0.060 |
22 | PSA test 2 (low prev) | prostate cancer | 1000 | 0.063 | 0.210 | 0.940 | 0.060 |
23 | Colorectal cancer | colorectal cancer | 100000 | 0.300 | 0.050 | 0.970 | 0.030 |
24 | Psylicraptis screening | psylicraptis | 100 | 0.010 | 0.900 | 0.990 | 0.010 |
25 | Mammography 6 (prob) | breast cancer | 1000 | 0.010 | 0.800 | 0.904 | 0.096 |
26 | Mammography 6 (freq) | breast cancer | 1000 | 0.010 | 0.800 | 0.904 | 0.096 |

In the following, we show how you can select and explore these scenarios.

Let us assume you want to learn more about the controversy surrounding prostate-cancer screening procedures (known as PSA screening). Scenario 21 in our collection of `scenarios` is from an article on this topic (Arkes & Gaissmaier, 2012). To select a particular scenario, simply assign it to an R object. For instance, we can assign Scenario 21 to `s21`:

`s21 <- scenarios$n21 # assign pre-defined Scenario 21 to s21.`

As each scenario is stored as a list, different aspects of a scenario can be printed by their element names:

```
# Show basic scenario information:
s21$scen.lbl # shows descriptive label:
#> [1] "PSA test 1 (high prev)"
s21$cond.lbl # shows current condition:
#> [1] "prostate cancer"
s21$dec.lbl # shows current decision:
#> [1] "PSA test"
s21$popu.lbl # shows current population:
#> [1] "1000 patients with symptoms diagnostic of prostate cancer taking a PSA test."
s21$scen.apa # shows current source:
#> [1] "Arkes, H. R., & Gaissmaier, W. (2012). Psychological research and the prostate-cancer screening controversy. Psychological Science, 23(6), 547--553."
```
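Since `scenarios` is a list and each scenario shares the same fields, we can also scan all labels at once (a quick sketch; it assumes every element carries a `scen.lbl`, as `s21` does):

```r
# List the descriptive labels of all predefined scenarios:
all.lbls <- sapply(scenarios, function(s) s$scen.lbl)
head(all.lbls)
```

This is a convenient way to find a scenario by topic before assigning it to an object.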

A description of the scenario can be printed by calling `s21$scen.txt`:

`With a cutoff point of 4 ng/ml, the PSA test is reported to have a sensitivity of approximately 21% and a specificity of approximately 94% (Thompson et al., 2005). That means the PSA test will correctly classify 21% of the men with prostate cancer and 94% of the men who do not have prostate cancer. Conversely, the test will miss about 79% of the men who actually have prostate cancer, and raise a false alarm in 6% of the men who actually do not have prostate cancer. Suppose that this test is given to 1,000 patients at a urology clinic who have symptoms diagnostic of prostate cancer. Perhaps 50% of these men truly have prostate cancer. Table 1 depicts this situation. Of the 135 men who test positive, 105 actually have prostate cancer. Thus, the positive predictive value of the PSA test in this situation is approximately 78% (i.e., 105/135 × 100).`

As explained above, an overview of the main parameters of a scenario is provided by `summary`:

```
summary(s21) # summarizes key scenario information:
#> Scenario: PSA test 1 (high prev)
#>
#> Condition: prostate cancer
#> Decision: PSA test
#> Population: 1000 patients with symptoms diagnostic of prostate cancer taking a PSA test.
#> N = 1000
#> Source: Arkes & Gaissmaier (2012), p. 550
#>
#> Probabilities:
#>
#> Essential probabilities:
#> prev sens mirt spec fart
#> 0.50 0.21 0.79 0.94 0.06
#>
#> Other probabilities:
#> ppod PPV NPV FDR FOR
#> 0.135 0.778 0.543 0.222 0.457
#>
#> Frequencies:
#>
#> by conditions:
#> cond.true cond.false
#> 500 500
#>
#> by decision:
#> dec.pos dec.neg
#> 135 865
#>
#> by correspondence (of decision to condition):
#> dec.cor dec.err
#> 575 425
#>
#> 4 essential (SDT) frequencies:
#> hi mi fa cr
#> 105 395 30 470
#>
#> Accuracy:
#>
#> acc:
#> 0.575
```

Note that – in this particular population – the prevalence for the condition (prostate cancer) is assumed to be relatively high (with a value of 50%).
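The reported frequencies and overall accuracy can be re-derived from the scenario's probabilities (plain arithmetic, shown here as a sanity check rather than `riskyr` code):

```r
# Re-derive the key s21 summary values from prev, sens, spec, and N:
N    <- 1000
prev <- .50; sens <- .21; spec <- .94

hi <- round(N * prev * sens)              # 105 hits
mi <- round(N * prev * (1 - sens))        # 395 misses
fa <- round(N * (1 - prev) * (1 - spec))  #  30 false alarms
cr <- round(N * (1 - prev) * spec)        # 470 correct rejections

PPV <- hi / (hi + fa)  # 105/135, about 0.778
NPV <- cr / (cr + mi)  # 470/865, about 0.543
acc <- (hi + cr) / N   # 575/1000 = 0.575
```

Note how the test's low sensitivity drags the overall accuracy down despite its high specificity.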

To eyeball basic scenario information, we could first plot its entire population as an icon array:

`plot(s21, plot.type = "icons", cex.lbl = 0.75) # plot default icon array: `

This initial inspection reveals that the overall accuracy of the decision considered (PSA test 1 (high prev)) is not great: There are almost as many cases of incorrect classifications (shown in red) as correct ones (shown in green). In fact, both the summary and the icon array note that the overall accuracy of the test is only 57.5%. Given that green squares signal correct classifications and red squares signal incorrect classifications, it is immediately obvious that our main accuracy problem here lies in so-called misses: patients with cancer who remain undetected (marked as "cancer and negative" and denoted by icons in dark red).

Next, we look at a network diagram to further illuminate the interplay between probabilities and frequencies in this scenario:

`plot(s21, plot.type = "fnet", area = "sq") # network diagram (with numeric probability labels):`

The plot shows how the probability values of key condition and decision parameters split the assumed population of `N = 1000` patients into subgroups that correspond to 9 frequencies (listed in the summary). Calling the same command with the optional argument `p.lbl = "nam"` prints the probability names, rather than their values (see also the options `p.lbl = "min"` and `"mix"`). The middle row of boxes shows the four essential frequencies (of hits `hi`, misses `mi`, false alarms `fa`, and correct rejections `cr`) in colors corresponding to the icon array above. Our setting of `area = "sq"` in the `plot` command switched the default display (of rectangular boxes) to a version in which the areas at each level of the network add up to the area of `N`, and the areas of boxes linked by arrows on lower levels add up to the area of the box on the higher level. Thus, the relative sizes of boxes illustrate the frequency of corresponding cases, making it easy to spot locations and paths with few or many cases.

A fact that was not obvious in the icon array is that the current scenario yields mostly negative decisions (865 out of 1000, or 86.5%). Of those, 470 (or 54.3%) are correct (see `cr`, shown in light green) and 395 (or 45.7%) are incorrect (see `mi`, shown in dark red). The ratio `cr/dec.neg = 470/865 = 54.3%` indicates the negative predictive value (`NPV`) of the test.

Can you find the current positive predictive value (`PPV`) in the diagram? (Hint: Its value marks an arrow between two boxes, but it is also shown on the margin and was contained in the summary above.)

One way to further understand the relations between basic probabilities – like the prevalence `prev`, which does *not* depend on our decision, but only on the environmental probability of the condition (here: "prostate cancer") – and probabilities conditional on both the condition and on features of the decision (like `PPV` and `NPV`) is to plot the latter as a function of the former. Calling `plot.type = "curve"` does this for us:

`plot(s21, plot.type = "curve", what = "all") # plot "all" available curves:`

The additional argument `what = "all"` instructed `riskyr` to provide us with additional curves, corresponding to the proportion of positive decisions (`ppod`) and overall accuracy (`acc`). Just like `PPV` and `NPV`, the values of these metrics crucially depend on the value of the current prevalence (shown on the x-axis). Interestingly, the curves of `ppod` and `acc` appear quite linear, even though the `riskyr` function plots them in exactly the same way as `PPV` and `NPV`. Would you have predicted this without seeing it?

Explain why the line of `acc` intersects the curve of `PPV` at the same point as the curve of `NPV`.
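The apparent linearity is no accident: both metrics are weighted averages with weights `prev` and `1 - prev`, as a short sketch shows (our own algebra, not `riskyr` output):

```r
# ppod and acc as functions of prev (sens and spec fixed) are linear:
sens <- .21; spec <- .94
prev <- seq(0, 1, by = .25)

ppod <- prev * sens + (1 - prev) * (1 - spec)  # p(positive decision)
acc  <- prev * sens + (1 - prev) * spec        # p(correct decision)

# Both are straight lines in prev, with slopes sens - (1 - spec) and sens - spec:
round(data.frame(prev, ppod, acc), 3)
```

By contrast, `PPV` and `NPV` are ratios of such linear terms, which is what bends their curves.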

While the curves shown for `PPV` and `NPV` so far illustrate their dependence on the prevalence (`prev`), we can also ask: How do these conditional probabilities vary as a function of the decision's sensitivity (`sens`) and specificity (`spec`)? To provide a visual answer to this question, we plot them as 3D planes (i.e., as functions of both `sens` and `spec`, for a given value of `prev`) with the following commands:^{4}

```
op <- par(no.readonly = TRUE) # save plot settings
par(mfrow = c(1, 2))          # 1 row with 2 plots:
## Plot planes of PPV and NPV as functions of sens and spec (for given prev):
plot(s21, plot.type = "plane", what = "PPV", cex.lbl = 0.75) # PPV by sens x spec (fixed prev)
plot(s21, plot.type = "plane", what = "NPV", cex.lbl = 0.75) # NPV by sens x spec (fixed prev)
par(op) # reset plot settings
```

This comparison shows that the curves of `PPV` and `NPV` (created by `plot.type = "curve"` above) were only two out of an infinite number of possible intersections of the two planes (created by `plot.type = "plane"` here). Consequently, the current values of `PPV` and `NPV` (shown as yellow points on the planes) crucially depend on the condition's prevalence (`prev`), the decision's sensitivity (`sens`), and the decision's specificity (`spec`).

In retrospect, these dependencies make it clear why it is so hard to provide an answer to the seemingly simple question: What's the probability of having some condition when testing positive or negative for it? While `riskyr` cannot simplify this issue, we hope that you are convinced that it helps to compute, transform, and see some relationships that are not immediately obvious from the mathematical definitions of the underlying concepts. If you feel that this improves your understanding, we have come a little closer to our goal of rendering risk literacy more transparent.

Scenario 22 in the `riskyr` collection of `scenarios` contains a version of this situation that assumes a different population with a lower prevalence value for the same condition. Inspect and explore the consequences of this change by following the steps shown above.

Here are the first steps – note the changes to the values of `s21` from above:

```
s22 <- scenarios$n22 # assign pre-defined Scenario 22 to s22.
# Show basic scenario information:
s22$scen.lbl # shows descriptive label:
#> [1] "PSA test 2 (low prev)"
s22$popu.lbl # shows current population:
#> [1] "1000 males of the general population taking a PSA test"
```

This should suffice to get you started with using `riskyr`. Have fun exploring the provided scenarios and creating your own examples!

Arkes, H. R., & Gaissmaier, W. (2012). Psychological research and the prostate-cancer screening controversy. *Psychological Science*, *23*, 547–553.

Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. *Science Advances*, *4*, eaao5580.

`riskyr` Vignettes

Nr. | Vignette | Content |
---|---|---|
A. | User guide | Motivation and general instructions |
B. | Data formats | Data formats: Frequencies and probabilities |
C. | Confusion matrix | Confusion matrix and accuracy metrics |
D. | Functional perspectives | Adopting functional perspectives |
E. | Quick start primer | Quick start primer |

1. The data is made up, but the issue is real. See Dressel and Farid (2018) for a recent study on this issue.↩

2. If no population size value `N` is specified, a suitable value is provided.↩

3. The difference between `NPV = 190/197 = 0.964467` (when computing the ratio of frequencies) and `NPV = 0.966` (in the `summary`) is due to rounding tree frequencies to integer values. If absolute precision is required, we can plot the frequency tree without rounding by adding the argument `round = FALSE`.↩

4. The `par()` commands before and after the calls to `plot` in this example are not needed if you re-create the plots for yourself. They only set and reset the R plotting space to allow plotting both planes next to each other.↩