In the adaptation level experiment, participants assessed the weights of objects placed in their hands using a verbal scale: very very light, very light, light, medium light, medium, medium heavy, heavy, very heavy and very very heavy. The task was to assess the weight of an object placed on the palm of the hand. To standardize the procedure, participants rested their elbow on the desk, extended their palm, and assessed the weight of the object, once it was placed on their palm, with slight up and down movements of the arm. Participants were blindfolded with non-transparent fabric during the experiment. In total there were 15 objects of the same shape and size but different mass (photo film canisters filled with metallic balls). The objects were grouped into three sets:

- light set: 45 g, 55 g, 65 g, 75 g, 85 g (weights 1 to 5),
- medium set: 95 g, 105 g, 115 g, 125 g, 135 g (weights 6 to 10),
- heavy set: 145 g, 155 g, 165 g, 175 g, 185 g (weights 11 to 15).

The experimenter sequentially placed the weights on the participant's palm and recorded the trial index, the weight of the object and the participant's response. Participants were divided into two groups. In group 1, participants first assessed the weights of the light set in ten rounds, weighing all five weights in random order within each round. After the 10 rounds with the light set were completed, the experimenter switched to the medium set without any announcement or break, and the participant then weighed the medium set across another 10 rounds, again weighing the five weights in random order within each round. In group 2 the overall procedure was the same, the only difference being that participants started with 10 rounds of the heavy set before performing the 10 rounds with the medium set. Importantly, the weights within each set were presented in random order and the experimenter switched between sets seamlessly, without any break or other indication to the participant.

We will use the **bayes4psy** package to show that the two groups provide different assessments of the weights in the second part of the experiment, even though both groups are responding to weights from the same (medium) set. The difference is very pronounced at first but fades away with subsequent assessments of the medium weights. This is congruent with the hypothesis that each group formed a different adaptation level during the initial phase of the task; the formed adaptation level then determined the perceptual experience of the same set of weights at the beginning of the second part of the task.

We will conduct the analysis using a hierarchical linear model. First we construct fits for the second part of the experiment for each group independently. The code below loads and prepares the data; just like in the previous example, subject indexes have to be mapped to a [1, n] interval. We will use the **ggplot2** package to fine-tune graph axes and properly annotate the graphs returned by the **bayes4psy** package.

```
# libs
library(bayes4psy)
library(dplyr)
library(ggplot2)
# load data
data <- adaptation_level
# separate groups and parts
group1_part2 <- data %>% filter(group == 1 & part == 2)
group2_part2 <- data %>% filter(group == 2 & part == 2)
```
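If your own data uses arbitrary subject IDs rather than consecutive indexes, the [1, n] remapping mentioned above can be done in base R. A minimal sketch, using a hypothetical `ids` vector as a stand-in for a column such as `group1_part2$subject`:

```
# toy subject IDs standing in for a real subject column (hypothetical example)
ids <- c(4, 4, 7, 12, 7, 12)
# match each ID against the sorted unique IDs to get consecutive 1..n indexes
mapped <- match(ids, sort(unique(ids)))
mapped
# -> 1 1 2 3 2 3
```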

Once the data are prepared we can fit the Bayesian models. The input data come in the form of three vectors: \(x\) stores the indexes of the measurements, \(y\) the subjects' responses and \(s\) the indexes of the subjects. Note that, due to vignette limitations, all fits are built using only one chain; using multiple chains in parallel is usually more efficient. Also, to speed up the building of the vignettes we greatly reduced the number of iterations; use an appropriate number of iterations when running actual analyses!

```
fit1 <- b_linear(x=group1_part2$sequence,
                 y=group1_part2$response,
                 s=group1_part2$subject,
                 iter=500, warmup=100, chains=1)
```

```
##
## SAMPLING FOR MODEL 'linear' NOW (CHAIN 1).
## Chain 1:
## Chain 1: Gradient evaluation took 0 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1:
## Chain 1:
## Chain 1: WARNING: There aren't enough warmup iterations to fit the
## Chain 1: three stages of adaptation as currently configured.
## Chain 1: Reducing each adaptation stage to 15%/75%/10% of
## Chain 1: the given number of warmup iterations:
## Chain 1: init_buffer = 14
## Chain 1: adapt_window = 76
## Chain 1: term_buffer = 10
## Chain 1:
## Chain 1: Iteration: 1 / 500 [ 0%] (Warmup)
## Chain 1: Iteration: 50 / 500 [ 10%] (Warmup)
## Chain 1: Iteration: 100 / 500 [ 20%] (Warmup)
## Chain 1: Iteration: 101 / 500 [ 20%] (Sampling)
## Chain 1: Iteration: 150 / 500 [ 30%] (Sampling)
## Chain 1: Iteration: 200 / 500 [ 40%] (Sampling)
## Chain 1: Iteration: 250 / 500 [ 50%] (Sampling)
## Chain 1: Iteration: 300 / 500 [ 60%] (Sampling)
## Chain 1: Iteration: 350 / 500 [ 70%] (Sampling)
## Chain 1: Iteration: 400 / 500 [ 80%] (Sampling)
## Chain 1: Iteration: 450 / 500 [ 90%] (Sampling)
## Chain 1: Iteration: 500 / 500 [100%] (Sampling)
## Chain 1:
## Chain 1: Elapsed Time: 0.593 seconds (Warm-up)
## Chain 1: 2.818 seconds (Sampling)
## Chain 1: 3.411 seconds (Total)
## Chain 1:
```

```
fit2 <- b_linear(x=group2_part2$sequence,
                 y=group2_part2$response,
                 s=group2_part2$subject,
                 iter=500, warmup=100, chains=1)
```

```
##
## SAMPLING FOR MODEL 'linear' NOW (CHAIN 1).
## Chain 1:
## Chain 1: Gradient evaluation took 0 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1:
## Chain 1:
## Chain 1: WARNING: There aren't enough warmup iterations to fit the
## Chain 1: three stages of adaptation as currently configured.
## Chain 1: Reducing each adaptation stage to 15%/75%/10% of
## Chain 1: the given number of warmup iterations:
## Chain 1: init_buffer = 14
## Chain 1: adapt_window = 76
## Chain 1: term_buffer = 10
## Chain 1:
## Chain 1: Iteration: 1 / 500 [ 0%] (Warmup)
## Chain 1: Iteration: 50 / 500 [ 10%] (Warmup)
## Chain 1: Iteration: 100 / 500 [ 20%] (Warmup)
## Chain 1: Iteration: 101 / 500 [ 20%] (Sampling)
## Chain 1: Iteration: 150 / 500 [ 30%] (Sampling)
## Chain 1: Iteration: 200 / 500 [ 40%] (Sampling)
## Chain 1: Iteration: 250 / 500 [ 50%] (Sampling)
## Chain 1: Iteration: 300 / 500 [ 60%] (Sampling)
## Chain 1: Iteration: 350 / 500 [ 70%] (Sampling)
## Chain 1: Iteration: 400 / 500 [ 80%] (Sampling)
## Chain 1: Iteration: 450 / 500 [ 90%] (Sampling)
## Chain 1: Iteration: 500 / 500 [100%] (Sampling)
## Chain 1:
## Chain 1: Elapsed Time: 0.806 seconds (Warm-up)
## Chain 1: 2.655 seconds (Sampling)
## Chain 1: 3.461 seconds (Total)
## Chain 1:
```

The fitting process should always be followed by an analysis of fit quality.

```
## Inference for Stan model: linear.
## 1 chains, each with iter=500; warmup=100; thin=1;
## post-warmup draws per chain=400, total post-warmup draws=400.
##
## mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat
## alpha[1] 7.65 0.02 0.34 7.06 7.41 7.63 7.87 8.39 258 1.00
## alpha[2] 8.63 0.01 0.23 8.21 8.48 8.62 8.78 9.06 293 1.00
## alpha[3] 7.90 0.01 0.28 7.35 7.71 7.89 8.08 8.49 393 1.00
## alpha[4] 8.84 0.01 0.20 8.49 8.70 8.84 8.97 9.26 395 1.00
## alpha[5] 8.19 0.01 0.20 7.81 8.06 8.19 8.32 8.59 431 1.00
## alpha[6] 7.33 0.02 0.30 6.72 7.14 7.34 7.53 7.86 274 1.00
## alpha[7] 8.02 0.01 0.23 7.59 7.86 8.02 8.18 8.48 340 1.01
## alpha[8] 8.41 0.01 0.20 8.05 8.26 8.40 8.53 8.79 335 1.00
## alpha[9] 7.36 0.02 0.26 6.85 7.21 7.37 7.53 7.84 295 1.00
## alpha[10] 7.90 0.01 0.24 7.46 7.72 7.88 8.05 8.38 494 1.00
## alpha[11] 8.71 0.01 0.18 8.36 8.60 8.70 8.83 9.08 580 1.00
## alpha[12] 8.31 0.02 0.25 7.85 8.15 8.31 8.48 8.81 248 1.00
## alpha[13] 8.02 0.01 0.24 7.55 7.85 8.02 8.18 8.52 377 1.00
## alpha[14] 7.15 0.01 0.24 6.67 7.00 7.13 7.32 7.62 311 1.00
## alpha[15] 8.20 0.01 0.23 7.76 8.04 8.21 8.35 8.65 343 1.00
## beta[1] -0.14 0.00 0.05 -0.24 -0.17 -0.14 -0.11 -0.06 215 1.00
## beta[2] -0.12 0.00 0.03 -0.18 -0.14 -0.11 -0.10 -0.05 309 1.00
## beta[3] -0.11 0.00 0.04 -0.19 -0.14 -0.11 -0.08 -0.03 414 1.00
## beta[4] -0.13 0.00 0.03 -0.20 -0.15 -0.13 -0.11 -0.08 386 1.00
## beta[5] -0.09 0.00 0.03 -0.15 -0.11 -0.09 -0.07 -0.02 351 1.00
## beta[6] -0.11 0.00 0.04 -0.20 -0.14 -0.12 -0.09 -0.04 324 1.00
## beta[7] -0.12 0.00 0.03 -0.19 -0.14 -0.12 -0.10 -0.06 346 1.00
## beta[8] -0.06 0.00 0.03 -0.12 -0.08 -0.05 -0.03 0.00 305 1.00
## beta[9] -0.11 0.00 0.04 -0.18 -0.14 -0.11 -0.09 -0.04 305 1.00
## beta[10] -0.11 0.00 0.04 -0.18 -0.13 -0.11 -0.08 -0.04 366 1.00
## beta[11] -0.11 0.00 0.03 -0.17 -0.13 -0.11 -0.09 -0.06 674 1.00
## beta[12] -0.17 0.00 0.04 -0.26 -0.20 -0.17 -0.14 -0.10 197 1.00
## beta[13] -0.13 0.00 0.04 -0.21 -0.16 -0.13 -0.11 -0.07 409 1.00
## beta[14] -0.08 0.00 0.04 -0.15 -0.10 -0.08 -0.06 -0.01 367 1.00
## beta[15] -0.05 0.00 0.03 -0.12 -0.07 -0.05 -0.03 0.02 244 1.00
## sigma[1] 1.67 0.01 0.17 1.38 1.55 1.65 1.77 2.03 514 1.00
## sigma[2] 0.99 0.00 0.11 0.82 0.91 0.99 1.06 1.21 451 1.01
## sigma[3] 1.43 0.01 0.12 1.23 1.35 1.43 1.52 1.68 560 1.00
## sigma[4] 0.74 0.00 0.08 0.60 0.68 0.73 0.80 0.92 446 1.00
## sigma[5] 0.88 0.01 0.09 0.73 0.82 0.87 0.93 1.08 350 1.00
## sigma[6] 1.56 0.01 0.12 1.34 1.48 1.56 1.65 1.82 439 1.00
## sigma[7] 1.26 0.01 0.12 1.05 1.18 1.26 1.35 1.49 341 1.00
## sigma[8] 0.77 0.00 0.09 0.62 0.71 0.76 0.82 0.96 535 1.00
## sigma[9] 1.27 0.01 0.13 1.05 1.19 1.26 1.35 1.57 417 1.00
## sigma[10] 1.07 0.00 0.11 0.90 0.99 1.06 1.14 1.31 476 1.00
## sigma[11] 0.79 0.00 0.08 0.64 0.73 0.78 0.83 0.97 281 1.00
## sigma[12] 0.99 0.00 0.10 0.82 0.93 0.98 1.05 1.19 478 1.00
## sigma[13] 1.07 0.00 0.11 0.87 0.99 1.06 1.13 1.30 519 1.00
## sigma[14] 1.04 0.00 0.10 0.86 0.97 1.04 1.12 1.26 507 1.00
## sigma[15] 0.97 0.00 0.11 0.79 0.89 0.96 1.04 1.20 604 1.00
## mu_a 8.04 0.01 0.19 7.63 7.91 8.05 8.18 8.39 323 1.00
## mu_b -0.11 0.00 0.02 -0.15 -0.12 -0.11 -0.10 -0.07 339 1.00
## mu_s 1.10 0.00 0.09 0.93 1.04 1.10 1.16 1.29 471 1.00
## sigma_a 0.63 0.01 0.18 0.38 0.51 0.60 0.71 1.03 169 1.00
## sigma_b 0.05 0.00 0.02 0.02 0.04 0.05 0.06 0.09 135 1.00
## sigma_s 0.33 0.00 0.08 0.22 0.28 0.32 0.38 0.50 504 1.00
## lp__ -374.49 0.48 5.49 -384.45 -378.50 -374.44 -370.44 -364.46 133 1.01
##
## Samples were drawn using NUTS(diag_e) at Wed Feb 19 11:04:16 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
```

```
# the command below is commented out for the sake of brevity
#print(fit2)
# visual inspection
plot(fit1)
```

The trace plot shows no MCMC related issues, and the effective sample sizes of the parameters relevant for our analysis (\(\mu_a\), \(\mu_b\) and \(\mu_s\)) are large enough. Since the visual inspection of the fit also looks good, we can continue with our analysis. To get a quick description of the fits we can take a look at the summary statistics of the model's parameters.
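The summary statistics below can be obtained with the `summary` function on the fitted objects (a sketch, assuming the `fit1` and `fit2` objects from above; the first output block corresponds to `fit1`, the second to `fit2`):

```
summary(fit1)
summary(fit2)
```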

```
## intercept (alpha): 8.04 +/- 0.01192, 95% HDI: [7.69, 8.42]
## slope (beta): -0.11 +/- 0.00094, 95% HDI: [-0.15, -0.07]
## sigma: 1.10 +/- 0.00364, 95% HDI: [0.91, 1.27]
```

```
## intercept (alpha): 5.83 +/- 0.02450, 95% HDI: [5.27, 6.39]
## slope (beta): 0.12 +/- 0.00140, 95% HDI: [0.09, 0.16]
## sigma: 1.40 +/- 0.00673, 95% HDI: [1.18, 1.64]
```

The intercept values suggest that our initial hypothesis about adaptation level is correct. Subjects that weighed the lighter objects in the first part of the experiment (**fit1**) find the medium objects at the beginning of the experiment's second part heavier than subjects that weighed the heavier objects in the first part (**fit2**). We can confirm this by using functions that perform a more detailed analysis (e.g. **compare_means** and **plot_means_difference**, see the outputs below).
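The comparison outputs below can be produced with calls along these lines (a sketch, assuming the `fit1` and `fit2` objects from above and the standard **bayes4psy** interface):

```
# compare intercept and slope between the two groups
comparison_results <- compare_means(fit1, fit2=fit2)
# visualize the difference of means between the two fits
plot_means_difference(fit1, fit2=fit2)
```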

```
## ---------- Intercept ----------
## Probabilities:
## - Group 1 < Group 2: 0.00 +/- 0.00000
## - Group 1 > Group 2: 1.00 +/- 0.00000
## 95% HDI:
## - Group 1 - Group 2: [1.57, 2.88]
##
## ---------- Slope ----------
## Probabilities:
## - Group 1 < Group 2: 1.00 +/- 0.00000
## - Group 1 > Group 2: 0.00 +/- 0.00000
## 95% HDI:
## - Group 1 - Group 2: [-0.29, -0.18]
```

```
##
## ---------- Using only the intercept parameter. ----------
```

The fact that the slope for the first group is very likely to be negative (the whole 95% HDI lies below 0) and positive for the second group (the whole 95% HDI lies above 0) suggests that the adaptation level phenomenon fades away with time. We can visualize this by plotting means and distributions underlying both fits. The plotting functions in the **bayes4psy** package return regular **ggplot2** plot objects, so we can use the same techniques to annotate or change the look and feel of graphs as we would with the usual **ggplot2** visualizations.

```
plot_distributions(fit1, fit2) +
  labs(title="Part II", x="measurement number", y="") +
  theme(legend.position="none") +
  scale_x_continuous(limits=c(1, 10), breaks=1:10) +
  ylim(0, 10)
```

Based on the analysis above, the hypothesis that each group formed a different adaptation level during the initial phase of the task appears to hold. The group that switched from heavy to medium weights assessed the weights as lighter than they really were, while for the group that switched from light to medium the weights appeared heavier. With time these adaptation levels faded away and the assessments converged to similar weight estimates.