This vignette is about monotonic effects, a special way of handling discrete predictors that are on an ordinal or higher scale (Bürkner & Charpentier, in review). A predictor that we want to model as monotonic (i.e., as having a monotonically increasing or decreasing relationship with the response) must either be integer valued or an ordered factor. As opposed to a continuous predictor, predictor categories (or integers) are not assumed to be equidistant with respect to their effect on the response variable. Instead, the distance between adjacent predictor categories (or integers) is estimated from the data and may vary across categories. This is realized via the following parameterization: One parameter, \(b\), takes care of the direction and size of the effect, similar to an ordinary regression parameter. If the monotonic effect is used in a linear model, \(b\) can be interpreted as the expected average difference between two adjacent categories of the ordinal predictor. An additional parameter vector, \(\zeta\), estimates the normalized distances between consecutive predictor categories and thus defines the shape of the monotonic effect. For a single monotonic predictor, \(x\), the linear predictor term of observation \(n\) looks as follows:

\[\eta_n = b D \sum_{i = 1}^{x_n} \zeta_i\]

The parameter \(b\) can take on any real value, while \(\zeta\) is a simplex, which means that it satisfies \(\zeta_i \in [0,1]\) and \(\sum_{i = 1}^D \zeta_i = 1\), with \(D\) being the number of elements of \(\zeta\). Equivalently, \(D\) is the number of categories (or the highest integer in the data) minus one, since we start counting categories from zero to simplify the notation.
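To make the formula concrete, here is a minimal R sketch of how the term is computed. The helper `mo_term` and the example simplex are purely illustrative and not part of brms:

```
# eta contribution of a monotonic predictor:
# b * D * (sum of the first x simplex entries), with x counted from 0
mo_term <- function(b, zeta, x) {
  D <- length(zeta)
  b * D * sum(zeta[seq_len(x)])  # empty sum = 0 for the lowest category
}
zeta <- c(0.6, 0.3, 0.1)  # hypothetical simplex for 4 categories (D = 3)
sapply(0:3, function(x) mo_term(b = 5, zeta = zeta, x = x))
# 0.0  9.0 13.5 15.0 -- the three increments average to b = 5
```

Note how the total range of the effect is \(b D\), while each \(\zeta_i\) determines the share of that range taken up by step \(i\).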

The main application of monotonic effects is ordinal predictors, which can be modeled this way without falsely treating them either as continuous or as unordered categorical predictors. In psychology, for instance, this kind of data is omnipresent in the form of Likert scale items, which are often treated as continuous for convenience without ever testing this assumption. As an example, suppose we are interested in the relationship between yearly income (in $) and life satisfaction measured on an arbitrary scale from 0 to 100. Usually, people are not asked for their exact income. Instead, they are asked to place themselves in one of several income classes, say: ‘below 20k’, ‘between 20k and 40k’, ‘between 40k and 100k’, and ‘above 100k’. We use some simulated data for illustration purposes.

```
# simulate ordinal income classes and a life-satisfaction score
income_options <- c("below_20", "20_to_40", "40_to_100", "greater_100")
income <- factor(sample(income_options, 100, TRUE),
                 levels = income_options, ordered = TRUE)
mean_ls <- c(30, 60, 70, 75)  # true mean life satisfaction per class
ls <- mean_ls[income] + rnorm(100, sd = 7)
dat <- data.frame(income, ls)
```

We now proceed with analyzing the data, modeling `income` as a monotonic effect.
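The fitting code itself does not appear in this excerpt; given the formula reported in the summary below, the call would look as follows (the model name `fit1` matches the one used in the model comparison further down):

```
fit1 <- brm(ls ~ mo(income), data = dat)
```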

The `summary` method then yields

```
 Family: gaussian 
  Links: mu = identity; sigma = identity 
Formula: ls ~ mo(income) 
   Data: dat (Number of observations: 100) 
Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup samples = 4000

Population-Level Effects: 
          Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept    31.14      1.55    28.07    34.17 1.00     2536     2583
moincome     14.75      0.71    13.36    16.15 1.00     2508     2472

Simplex Parameters: 
             Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
moincome1[1]     0.64      0.04     0.56     0.73 1.00     3106     2493
moincome1[2]     0.24      0.04     0.15     0.33 1.00     3605     2749
moincome1[3]     0.12      0.04     0.04     0.19 1.00     3169     2052

Family Specific Parameters: 
      Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma     7.08      0.52     6.14     8.17 1.00     3020     2289

Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
```

The distributions of the simplex parameters of `income`, as shown in the `plot` method, demonstrate that the largest difference (about 70% of the difference between minimum and maximum category) is between the first two categories.
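The plot is not reproduced here. It can be obtained with the `plot` method; how the simplex parameters are selected by name depends on the brms version, so treat the argument below as an assumption:

```
plot(fit1, pars = "simo")  # select the simplex parameters by (partial) name
```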

Now, let’s compare the monotonic model with two common alternative models: (a) treating `income` as continuous:
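The corresponding call is omitted in this excerpt. Given the formula `ls ~ income_num` in the output below, `income_num` is presumably the numeric coding of the factor; a matching sketch:

```
dat$income_num <- as.numeric(dat$income)
fit2 <- brm(ls ~ income_num, data = dat)
```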

```
 Family: gaussian 
  Links: mu = identity; sigma = identity 
Formula: ls ~ income_num 
   Data: dat (Number of observations: 100) 
Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup samples = 4000

Population-Level Effects: 
           Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept     23.16      2.43    18.41    27.83 1.00     4017     3319
income_num    14.54      0.87    12.89    16.26 1.00     4071     3113

Family Specific Parameters: 
      Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma     9.14      0.66     7.95    10.53 1.00     3641     2923

Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
```

or (b) treating `income` as an unordered factor:
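Again the call is omitted. The coefficient names `income2` to `income4` in the output indicate treatment contrasts rather than the polynomial contrasts R applies to ordered factors by default, so a matching sketch is:

```
contrasts(dat$income) <- contr.treatment(4)  # switch to treatment contrasts
fit3 <- brm(ls ~ income, data = dat)
```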

```
 Family: gaussian 
  Links: mu = identity; sigma = identity 
Formula: ls ~ income 
   Data: dat (Number of observations: 100) 
Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup samples = 4000

Population-Level Effects: 
          Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept    30.84      1.59    27.76    33.89 1.00     2860     2355
income2      28.82      2.25    24.51    33.31 1.00     3213     2670
income3      39.40      2.00    35.46    43.40 1.00     2906     2868
income4      44.69      2.18    40.34    48.93 1.00     3070     2899

Family Specific Parameters: 
      Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma     7.07      0.51     6.15     8.13 1.00     3778     2941

Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
```

We can easily compare the fit of the three models using leave-one-out cross-validation.
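In brms, this is done with the `loo` method; a call matching the output below is:

```
loo(fit1, fit2, fit3)
```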

```
Output of model 'fit1':

Computed from 4000 by 100 log-likelihood matrix

         Estimate   SE
elpd_loo   -339.5  7.0
p_loo         4.8  0.8
looic       678.9 14.0
------
Monte Carlo SE of elpd_loo is 0.0.

All Pareto k estimates are good (k < 0.5).
See help('pareto-k-diagnostic') for details.

Output of model 'fit2':

Computed from 4000 by 100 log-likelihood matrix

         Estimate   SE
elpd_loo   -364.3  7.1
p_loo         3.1  0.7
looic       728.6 14.3
------
Monte Carlo SE of elpd_loo is 0.0.

All Pareto k estimates are good (k < 0.5).
See help('pareto-k-diagnostic') for details.

Output of model 'fit3':

Computed from 4000 by 100 log-likelihood matrix

         Estimate   SE
elpd_loo   -339.5  6.9
p_loo         4.9  0.8
looic       679.0 13.9
------
Monte Carlo SE of elpd_loo is 0.0.

All Pareto k estimates are good (k < 0.5).
See help('pareto-k-diagnostic') for details.

Model comparisons:
     elpd_diff se_diff
fit1   0.0       0.0  
fit3  -0.1       0.3  
fit2 -24.9       5.1  
```

The monotonic model fits better than the continuous model, which is not surprising given that the relationship between `income` and `ls` is non-linear. The monotonic and the unordered-factor model have almost identical fit in this example, but this may not be the case for other data sets.

In the previous monotonic model, we have implicitly assumed that all differences between adjacent categories were a-priori the same or, more precisely, had the same prior distribution. In the following, we want to show how to change this assumption. The canonical prior distribution of a simplex parameter is the Dirichlet distribution, a multivariate generalization of the beta distribution. It is non-zero for all valid simplexes (i.e., \(\zeta_i \in [0,1]\) and \(\sum_{i = 1}^D \zeta_i = 1\)) and zero otherwise. The Dirichlet prior has a single parameter \(\alpha\) of the same length as \(\zeta\). The higher \(\alpha_i\), the higher the a-priori probability of larger values of \(\zeta_i\). Suppose that, before looking at the data, we expected the same amount of additional money to matter more for people who generally have less money. This translates into higher a-priori values of \(\zeta_1\) (the difference between ‘below_20’ and ‘20_to_40’) and hence into higher values of \(\alpha_1\). We choose \(\alpha_1 = 2\) and \(\alpha_2 = \alpha_3 = 1\), the latter being the default value of \(\alpha\). To fit the model we write:

```
prior4 <- prior(dirichlet(c(2, 1, 1)), class = "simo", coef = "moincome1")
fit4 <- brm(ls ~ mo(income), data = dat,
            prior = prior4, sample_prior = TRUE)
```

The `1` at the end of `"moincome1"` may appear strange when first working with monotonic effects. However, it is necessary, as one monotonic term may be associated with multiple simplex parameters when interactions of multiple monotonic variables are included in the model.
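The refitted model can then be summarized as before:

```
summary(fit4)
```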

```
 Family: gaussian 
  Links: mu = identity; sigma = identity 
Formula: ls ~ mo(income) 
   Data: dat (Number of observations: 100) 
Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup samples = 4000

Population-Level Effects: 
          Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept    31.09      1.55    28.03    34.07 1.00     2860     2831
moincome     14.76      0.70    13.40    16.10 1.00     2628     2787

Simplex Parameters: 
             Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
moincome1[1]     0.65      0.04     0.57     0.73 1.00     3370     2523
moincome1[2]     0.24      0.05     0.15     0.32 1.00     4009     2593
moincome1[3]     0.12      0.04     0.04     0.19 1.00     3428     2200

Family Specific Parameters: 
      Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma     7.07      0.51     6.17     8.13 1.00     3674     2724

Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
```

We have used `sample_prior = TRUE` to also obtain samples from the prior distribution of `simo_moincome1` so that we can visualize it.
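The plots themselves are not reproduced here; they can be generated with the `plot` method, restricted to the prior samples of the simplex parameters (the exact selection argument depends on the brms version, so treat this as an assumption):

```
plot(fit4, pars = "prior_simo")  # prior samples of the simplex parameters
```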

As is visible in the plots, `simo_moincome1[1]` was a-priori on average twice as high as `simo_moincome1[2]` and `simo_moincome1[3]` as a result of setting \(\alpha_1\) to 2.

Suppose we have additionally asked participants for their age.
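The simulation of `age` is not shown in this excerpt; any plausible values will do, for instance (mean and standard deviation below are made up for illustration):

```
dat$age <- rnorm(100, mean = 40, sd = 10)  # illustrative values only
```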

We are not only interested in the main effect of age but also in the interaction of income and age. Interactions with monotonic variables can be specified in the usual way using the `*` operator:
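Matching the formula reported in the summary below, the call would be (the model name `fit5` simply continues the naming scheme of this vignette):

```
fit5 <- brm(ls ~ mo(income) * age, data = dat)
summary(fit5)
```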

```
 Family: gaussian 
  Links: mu = identity; sigma = identity 
Formula: ls ~ mo(income) * age 
   Data: dat (Number of observations: 100) 
Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup samples = 4000

Population-Level Effects: 
             Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept       37.26      5.70    26.68    48.99 1.00     1507     1936
age             -0.18      0.16    -0.50     0.11 1.00     1403     1767
moincome        10.59      2.36     5.88    15.11 1.00     1291     1778
moincome:age     0.11      0.06    -0.00     0.24 1.00     1229     1825

Simplex Parameters: 
                 Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
moincome1[1]         0.70      0.12     0.44     0.91 1.00     1522     1478
moincome1[2]         0.20      0.11     0.02     0.42 1.00     1539     1851
moincome1[3]         0.10      0.06     0.00     0.24 1.00     2201     1448
moincome:age1[1]     0.45      0.23     0.03     0.87 1.00     1554     1867
moincome:age1[2]     0.34      0.21     0.02     0.80 1.00     1859     2497
moincome:age1[3]     0.21      0.16     0.01     0.62 1.00     2314     1953

Family Specific Parameters: 
      Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma     6.99      0.51     6.09     8.06 1.00     2871     2192

Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
```

Suppose that the 100 people in our sample data were drawn from 10 different cities, with 10 people per city. Thus, we add an identifier for `city` to the data and add some city-related variation to `ls`.

```
dat$city <- rep(1:10, each = 10)       # city identifier: 10 people per city
var_city <- rnorm(10, sd = 10)         # city-specific shifts in life satisfaction
dat$ls <- dat$ls + var_city[dat$city]
```

With the following code, we fit a multilevel model assuming the intercept and the effect of `income` to vary by city:
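Matching the formula in the output below (the model name `fit6` again just continues the naming scheme):

```
fit6 <- brm(ls ~ mo(income) * age + (mo(income) | city), data = dat)
summary(fit6)
```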

```
 Family: gaussian 
  Links: mu = identity; sigma = identity 
Formula: ls ~ mo(income) * age + (mo(income) | city) 
   Data: dat (Number of observations: 100) 
Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup samples = 4000

Group-Level Effects: 
~city (Number of levels: 10) 
                        Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept)              18.91      5.14    11.36    31.07 1.00     1408     1981
sd(moincome)                1.94      1.10     0.15     4.39 1.01     1090     1371
cor(Intercept,moincome)    -0.50      0.38    -0.98     0.40 1.00     2787     2382

Population-Level Effects: 
             Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept       48.02      8.86    31.95    66.15 1.00     1313     1741
age             -0.32      0.18    -0.69    -0.00 1.00     1632     1755
moincome         9.44      2.61     4.10    14.33 1.00     1649     2063
moincome:age     0.15      0.07     0.02     0.29 1.00     1601     1567

Simplex Parameters: 
                 Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
moincome1[1]         0.71      0.14     0.39     0.93 1.00     2019     2249
moincome1[2]         0.19      0.12     0.01     0.48 1.00     1794     2389
moincome1[3]         0.09      0.06     0.00     0.24 1.00     3429     2675
moincome:age1[1]     0.53      0.23     0.06     0.90 1.00     2297     2788
moincome:age1[2]     0.32      0.20     0.02     0.77 1.00     2407     2566
moincome:age1[3]     0.15      0.12     0.01     0.48 1.00     3199     2690

Family Specific Parameters: 
      Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma     6.25      0.51     5.37     7.39 1.00     2826     2899

Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
```

The group-level standard deviation `sd(moincome)` reveals that the effect of `income` varies only little across cities. For the present data, this is not overly surprising given that, in the data simulation, we assumed `income` to have the same effect across cities.

Bürkner, P. C., & Charpentier, E. (in review). Monotonic Effects: A Principled Approach for Including Ordinal Predictors in Regression Models. *PsyArXiv preprint*.