# Introduction

Designing Monte Carlo simulations can be a fun and rewarding experience. Whether you are interested in evaluating the performance of a new optimizer, re-evaluating previous research claims (e.g., that the ANOVA is 'robust' to violations of normality), determining power rates for an upcoming research proposal, or simply appeasing a stray thought about a new statistical idea you heard about, Monte Carlo simulations are incredibly rewarding and extremely important to those who are statistically oriented. However, organizing simulations can be a challenge, and all too often coders resort to the dreaded "for-loop" strategy, forever resulting in confusing, error-prone, and simulation-specific code. The SimDesign package is one attempt to fix these and other issues that often arise when designing Monte Carlo simulation experiments.

Generally speaking, Monte Carlo simulations can be broken into three major components:

• generate your data from some model/probability density function given various design conditions to be studied (e.g., sample size, distributions, group sizes, etc),
• analyse the generated data using whatever statistical analyses you are interested in (e.g., $$t$$-test, ANOVA, SEMs, IRT, etc), and collect the statistics/CIs/$$p$$-values/parameter estimates you are interested in, and
• summarise the results after repeating the simulations $$R$$ number of times to obtain empirical estimates of the population's behavior.
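The three steps above can be sketched in base R before introducing any SimDesign machinery. The following toy example (hypothetical values, not SimDesign code) runs the cycle for a single condition, estimating the bias and RMSE of the sample mean under normality:

```r
# Toy base-R sketch of the generate-analyse-summarise cycle for one
# condition (hypothetical setup; SimDesign automates this bookkeeping)
set.seed(1)
N <- 30                          # design condition: sample size
R <- 1000                        # number of replications
estimates <- numeric(R)
for (r in seq_len(R)) {
    dat <- rnorm(N, mean = 0)    # generate
    estimates[r] <- mean(dat)    # analyse
}
# summarise: empirical bias and RMSE of the sample mean
c(bias = mean(estimates - 0),
  RMSE = sqrt(mean((estimates - 0)^2)))
```

SimDesign's job is to organize exactly this pattern so that the loop, bookkeeping, and replication across conditions disappear from your code.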

The operations above represent the essential components of the SimDesign package. The design component is represented by a data.frame object containing the simulation conditions to be investigated, while generate, analyse, and summarise are user-defined functions comprising the three steps of the simulation. Each of these components is constructed and passed to the runSimulation() function, where the simulation steps are evaluated, ultimately returning a data.frame object containing the simulation results.

## A general overview

After loading the SimDesign package, we begin by defining the required user-constructed functions. To expedite this process, a call to SimFunctions() will create a template to be filled in, where all the necessary functional arguments have been pre-assigned and only the body of the functions needs to be modified. The documentation for each argument can be found in the respective R help files, though their organization is conceptually very simple.

To begin, the following code should be copied and saved to an external source (i.e., text) file.

library(SimDesign)
SimFunctions()

#-------------------------------------------------------------------

library(SimDesign)

Design <- expand.grid(condition1 = NA,
                      condition2 = NA)

#-------------------------------------------------------------------

Generate <- function(condition, fixed_objects = NULL) {
    dat <- data.frame()
    dat
}

Analyse <- function(condition, dat, fixed_objects = NULL) {
    ret <- c(stat1 = NaN, stat2 = NaN)
    ret
}

Summarise <- function(condition, results, fixed_objects = NULL) {
    ret <- c(bias = NaN, RMSE = NaN)
    ret
}

#-------------------------------------------------------------------

results <- runSimulation(design=Design, replications=1000, generate=Generate,
                         analyse=Analyse, summarise=Summarise)


Alternatively, if you are lazy (read: efficient) or just don't like copy-and-pasting, SimFunctions() can write the output to a file by providing a filename argument. The following creates a file (mysim.R) containing the simulation design/execution and required user-defined functions.

SimFunctions('mysim')


For larger simulations, you may want to use two files, and if you'd prefer to have helpful comments included then these can be achieved with the singlefile and comments arguments, respectively.

SimFunctions('mysim', singlefile = FALSE, comments = TRUE)


Personally, I prefer keeping the design and functions separate when writing larger real-world simulations, though there are other reasons to keep them apart. For example, when debugging code (either through the debug argument or by explicitly using browser() calls), GUIs such as Rstudio are usually better at tracking the debugged functions. As a good amount of your time will initially be spent debugging, it's good to make this as painless as possible. Second, it's easier and more fluid to simply source() (keyboard shortcut ctrl + shift + s) the file containing the functions without worrying that you might accidentally start running your simulation; though this is a matter of preference. Finally, the structure is often more readable, especially when you come back to your code sometime in the future after you've long forgotten what exactly you wrote. In the design file, then, you can describe the simulation study more thoroughly with comments and generally outline the specifics of how the simulation is to be run, while the functions file simply contains the underlying mechanics and cogs required to run the simulation machine. Of course, this is all a matter of preference, and you may prefer to use the defaults in your simulation work.

# Simulation: Determine estimator efficiency

As a toy example, let's consider how the following question can be investigated with SimDesign:

Question: How does trimming affect recovering the mean of a distribution? Investigate this using different sample sizes with Gaussian and $$\chi^2$$ distributions. Also, demonstrate the effect of using the median to recover the mean.

### Define the conditions

First, define the condition combinations that should be investigated. In this case we wish to study 4 different sample sizes, and use a symmetric and skewed distribution. The use of expand.grid() is extremely helpful here to create a completely crossed-design for each combination (there are 8 in total).

Design <- expand.grid(sample_size = c(30, 60, 120, 240),
                      distribution = c('norm', 'chi'))
Design

##   sample_size distribution
## 1          30         norm
## 2          60         norm
## 3         120         norm
## 4         240         norm
## 5          30          chi
## 6          60          chi
## 7         120          chi
## 8         240          chi


Each row in Design represents a unique condition to be studied in the simulation. In this case, the first condition to be studied comes from row 1, where $$N=30$$ and the distribution should be normal.

### Define the functions

We first start by defining the data generation functional component. The main argument accepted by this function is condition, which will always be a single row from the Design object (itself of class data.frame). Conditions are run sequentially from row 1 to the last row in Design. It is also possible to pass a fixed_objects object to the function to include fixed sets of population parameters and other objects; however, for this simple simulation that input is not required.

Generate <- function(condition, fixed_objects = NULL) {
    N <- condition$sample_size
    dist <- condition$distribution
    if(dist == 'norm'){
        dat <- rnorm(N, mean = 3)
    } else if(dist == 'chi'){
        dat <- rchisq(N, df = 3)
    }
    dat
}
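Before moving on, we might sanity-check an assumption baked into Generate(): a $$\chi^2$$ distribution has mean equal to its degrees of freedom, so both branches generate data with a population mean of 3. This quick check is illustrative only and not part of the simulation:

```r
# Empirically verify that the mean of a chi-square distribution
# equals its degrees of freedom (here, df = 3)
set.seed(42)
x <- rchisq(1e6, df = 3)
mean(x)   # approximately 3
```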


As we can see above, Generate() will return a numeric vector of length $$N$$ containing the data to be analysed, drawn in each case from a population with mean 3 (because a $$\chi^2$$ distribution has a mean equal to its df). Next, we define the analyse component to analyse said data:

Analyse <- function(condition, dat, fixed_objects = NULL) {
    M0 <- mean(dat)
    M1 <- mean(dat, trim = .1)
    M2 <- mean(dat, trim = .2)
    med <- median(dat)

    ret <- c(mean_no_trim=M0, mean_trim.1=M1, mean_trim.2=M2, median=med)
    ret
}


This function accepts the data previously returned from Generate() (dat), along with the same condition row mentioned previously.

At this point, we may conceptually think of the first two functions as being evaluated independently $$R$$ times to obtain $$R$$ sets of results. In other words, if we wanted the number of replications to be 100, the first two functions would be independently run (at least) 100 times, the results from Analyse() would be stored, and we would then need to summarise these 100 elements into meaningful meta statistics to describe their empirical properties. This is where computing meta-statistics such as bias, root mean-square error, detection rates, and so on are of primary importance. Unsurprisingly, then, this is the purpose of the summarise component:

Summarise <- function(condition, results, fixed_objects = NULL) {
    obs_bias <- bias(results, parameter = 3)
    obs_RMSE <- RMSE(results, parameter = 3)
    ret <- c(bias=obs_bias, RMSE=obs_RMSE, RE=RE(obs_RMSE))
    ret
}


Again, condition is the same as was defined before, while results is a matrix containing all the results from Analyse(), where each row represents the result returned from each respective replication, and the number of columns is equal to the length of a single vector returned by Analyse().

That sounds much more complicated than it is: all you really need to know for this simulation is that an $$R$$ x 4 matrix called results is available to build a suitable summary from. Because results is a matrix, column-wise operations are the natural fit (e.g., via apply() with MARGIN = 2); here, bias() and RMSE() compute the bias and RMSE of each respective statistic directly, and the overall result is returned as a vector.
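To make these meta-statistics concrete, here is a hand-rolled version of what the summarise step computes, applied to a small hypothetical results matrix (toy data; SimDesign's bias() and RMSE() perform this column-wise work for you):

```r
# Hand-computed bias and RMSE over a hypothetical results matrix
# (toy data; SimDesign's bias()/RMSE() do this column-wise automatically)
set.seed(123)
parameter <- 3
results <- matrix(rnorm(1000 * 2, mean = 3, sd = 0.2), 1000, 2,
                  dimnames = list(NULL, c('stat1', 'stat2')))
bias_hat <- apply(results, 2, function(col) mean(col - parameter))
RMSE_hat <- apply(results, 2, function(col) sqrt(mean((col - parameter)^2)))
c(bias = bias_hat, RMSE = RMSE_hat)
```

Note how concatenating the named vectors produces names like bias.stat1 and RMSE.stat1, mirroring the column names that appear in the final simulation output.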

Stopping for a moment and thinking carefully, we know that each condition will be paired with a unique vector returned from Summarise(). Therefore, you might be thinking that the result returned from the simulation will be in a rectangular form, such as in a matrix or data.frame. Well, you'd be right — good for you.

### Putting it all together

The last stage of the SimDesign work-flow is to pass the four defined elements to the runSimulation() function which, unsurprisingly given its name, runs the simulation. There are numerous options available in this function, and these should be investigated by reading the help(runSimulation) HTML file. Options for performing simulations in parallel, storing/resuming temporary results, debugging functions, and so on are available. Below we simply request that each condition be run 1000 times on a single processor, and finally store the results in an object called results.

results <- runSimulation(Design, replications = 1000, generate=Generate,
                         analyse=Analyse, summarise=Summarise)

## Design row: 1/8;   Started: Thu Jun 27 23:42:41 2019;   Total elapsed time: 0.00s
## Design row: 2/8;   Started: Thu Jun 27 23:42:41 2019;   Total elapsed time: 0.53s
## Design row: 3/8;   Started: Thu Jun 27 23:42:42 2019;   Total elapsed time: 1.06s
## Design row: 4/8;   Started: Thu Jun 27 23:42:42 2019;   Total elapsed time: 1.60s
## Design row: 5/8;   Started: Thu Jun 27 23:42:43 2019;   Total elapsed time: 2.18s
## Design row: 6/8;   Started: Thu Jun 27 23:42:44 2019;   Total elapsed time: 2.73s
## Design row: 7/8;   Started: Thu Jun 27 23:42:44 2019;   Total elapsed time: 3.29s
## Design row: 8/8;   Started: Thu Jun 27 23:42:45 2019;   Total elapsed time: 3.87s

results

##   sample_size distribution bias.mean_no_trim bias.mean_trim.1
## 1          30         norm           0.01016           0.0114
## 2          60         norm          -0.00376          -0.0031
## 3         120         norm          -0.00476          -0.0041
## 4         240         norm          -0.00059          -0.0012
## 5          30          chi           0.00360          -0.3128
## 6          60          chi           0.00040          -0.3366
## 7         120          chi           0.00895          -0.3311
## 8         240          chi           0.00372          -0.3460
##   bias.mean_trim.2 bias.median RMSE.mean_no_trim RMSE.mean_trim.1
## 1          0.01154      0.0115             0.182            0.187
## 2         -0.00171     -0.0021             0.131            0.136
## 3         -0.00263     -0.0023             0.091            0.094
## 4         -0.00084      0.0010             0.066            0.068
## 5         -0.45195     -0.5836             0.452            0.528
## 6         -0.47864     -0.6207             0.316            0.447
## 7         -0.47064     -0.6012             0.219            0.390
## 8         -0.48942     -0.6309             0.154            0.376
##   RMSE.mean_trim.2 RMSE.median RE.mean_no_trim RE.mean_trim.1
## 1            0.193       0.220               1            1.1
## 2            0.140       0.160               1            1.1
## 3            0.097       0.113               1            1.1
## 4            0.070       0.082               1            1.1
## 5            0.620       0.752               1            1.4
## 6            0.563       0.705               1            2.0
## 7            0.515       0.646               1            3.2
## 8            0.511       0.652               1            6.0
##   RE.mean_trim.2 RE.median REPLICATIONS SIM_TIME                COMPLETED
## 1            1.1       1.5         1000    0.53s Thu Jun 27 23:42:41 2019
## 2            1.1       1.5         1000    0.53s Thu Jun 27 23:42:42 2019
## 3            1.1       1.5         1000    0.54s Thu Jun 27 23:42:42 2019
## 4            1.1       1.5         1000    0.58s Thu Jun 27 23:42:43 2019
## 5            1.9       2.8         1000    0.55s Thu Jun 27 23:42:44 2019
## 6            3.2       5.0         1000    0.56s Thu Jun 27 23:42:44 2019
## 7            5.6       8.7         1000    0.58s Thu Jun 27 23:42:45 2019
## 8           11.0      17.9         1000    0.61s Thu Jun 27 23:42:45 2019
##         SEED
## 1  244176217
## 2 1336377796
## 3 1308407526
## 4 1338697157
## 5 1848801708
## 6 1375056554
## 7   20391982
## 8  499398409


As can be seen from the printed results, each result from the Summarise() function has been paired with its respective condition, the meta-statistics have been properly named, and three additional columns have been appended to the results: REPLICATIONS, which indicates how many times each condition was performed; SIM_TIME, indicating the time (in seconds) it took to completely finish each respective condition; and SEED, which records the random seed used by SimDesign for each condition (for reproducibility). A call to View() in the R console may also be a nice way to sift through the results object.

### Interpreting the results

In this case, visually inspecting the simulation table is enough to understand what is occurring, though for other Monte Carlo simulations ANOVAs, marginalized tables, and graphics should be used to capture the essential phenomena in the results. Monte Carlo simulations are just like collecting and analysing data for experiments, so my advice would be to put on your analysis hats and present your data as though it were data collected from the real world.

In this particular simulation, it is readily apparent that the un-adjusted mean adequately recovers the population mean with little bias. Precision also increases with sample size, as indicated by the decreasing RMSE statistics. Generally, trimming reduces the efficiency of the estimates, with greater amounts of trimming resulting in even less efficiency, and using the median as a proxy for the mean is the least efficient method of all. This is witnessed rather clearly in the following table, which prints the relative efficiency of the estimators:
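The relative efficiency values in this table appear to be the ratio of each estimator's MSE (squared RMSE) to that of the first, untrimmed estimator. As an illustrative check (using two RMSE values taken from design row 8 of the output above):

```r
# Relative efficiency as an MSE ratio against the reference (first) entry,
# illustrated with RMSE values from design row 8 of the simulation output
RMSEs <- c(mean_no_trim = 0.154, mean_trim.2 = 0.511)
(RMSEs / RMSEs[1])^2   # approximately c(1, 11), matching the RE column
```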

REs <- results[,grepl('RE\\.', colnames(results))]
data.frame(Design, REs)

##   sample_size distribution RE.mean_no_trim RE.mean_trim.1 RE.mean_trim.2
## 1          30         norm               1            1.1            1.1
## 2          60         norm               1            1.1            1.1
## 3         120         norm               1            1.1            1.1
## 4         240         norm               1            1.1            1.1
## 5          30          chi               1            1.4            1.9
## 6          60          chi               1            2.0            3.2
## 7         120          chi               1            3.2            5.6
## 8         240          chi               1            6.0           11.0
##   RE.median
## 1       1.5
## 2       1.5
## 3       1.5
## 4       1.5
## 5       2.8
## 6       5.0
## 7       8.7
## 8      17.9


Finally, when the $$\chi^2$$ distribution was investigated, only the un-adjusted mean accurately portrayed the population mean. This isn't surprising, because the trimmed mean is, after all, making inferences about the population trimmed mean, and the median is making inferences about, well, the median. Only when the distributions under investigation are symmetric can all of these statistics draw inferences about the same population mean.

# Conceptual walk-through of what runSimulation() is doing

The following is a conceptual breakdown of what runSimulation() is actually doing behind the scenes. Here we demonstrate the results from the first condition (row 1 of Design) to show what each function returns.

A single replication in a Monte Carlo simulation results in the following objects:

(condition <- Design[1, ])

##   sample_size distribution
## 1          30         norm

dat <- Generate(condition)
dat

##  [1] 2.37 3.18 2.16 4.60 3.33 2.18 3.49 3.74 3.58 2.69 4.51 3.39 2.38 0.79
## [15] 4.12 2.96 2.98 3.94 3.82 3.59 3.92 3.78 3.07 1.01 3.62 2.94 2.84 1.53
## [29] 2.52 3.42

res <- Analyse(condition, dat)
res

## mean_no_trim  mean_trim.1  mean_trim.2       median
##          3.1          3.2          3.2          3.3


We can see that Generate() returns a numeric vector which is accepted by Analyse(). The Analyse() function then completes the analysis portion using the generated data, and returns a named vector with the observed parameter estimates. Of course, this is only a single replication, and therefore is not really meaningful in the grand scheme of things; so, it must be repeated a number of times.

# repeat 1000x
results <- matrix(0, 1000, 4)
colnames(results) <- names(res)
for(i in 1:1000){
    dat <- Generate(condition)
    res <- Analyse(condition, dat)
    results[i, ] <- res
}
head(results)

##      mean_no_trim mean_trim.1 mean_trim.2 median
## [1,]          3.1         3.1         3.1    2.9
## [2,]          3.1         3.1         3.1    3.1
## [3,]          3.1         3.1         3.0    2.8
## [4,]          2.7         2.6         2.6    2.7
## [5,]          3.2         3.2         3.2    3.0
## [6,]          3.1         3.1         3.0    3.1


The matrix stored in results contains 1000 parameter estimates returned from each statistic. After this is obtained, we can move on to summarising the output through the Summarise() function to obtain average estimates, their associated sampling error, their efficiency, and so on.

Summarise(condition, results)

## bias.mean_no_trim  bias.mean_trim.1  bias.mean_trim.2       bias.median
##           -0.0011           -0.0031           -0.0035           -0.0037
## RMSE.mean_no_trim  RMSE.mean_trim.1  RMSE.mean_trim.2       RMSE.median
##            0.1739            0.1777            0.1859            0.2146
##   RE.mean_no_trim    RE.mean_trim.1    RE.mean_trim.2         RE.median
##            1.0000            1.0442            1.1425            1.5225


This scheme is then repeated for each row in the Design object until the entire simulation study is complete.
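This repetition amounts to wrapping the per-condition loop shown above in an outer loop over rows of Design. Here is a self-contained sketch of the whole scheme, using simplified stand-ins (gen, ana, summ) for the functions defined earlier, since runSimulation() also layers on seeding, error handling, and more:

```r
# Simplified stand-ins for the Generate/Analyse/Summarise trio, looped
# over every design row (hypothetical sketch; runSimulation() adds seeds,
# error/warning handling, parallelism, result saving, etc.)
Design <- expand.grid(sample_size = c(30, 60), distribution = 'norm')
gen  <- function(condition) rnorm(condition$sample_size, mean = 3)
ana  <- function(dat) c(mean = mean(dat), median = median(dat))
summ <- function(results) colMeans(results) - 3   # bias of each statistic
set.seed(1)
out <- vector('list', nrow(Design))
for (d in seq_len(nrow(Design))) {
    condition <- Design[d, ]
    results <- t(replicate(500, ana(gen(condition))))   # 500 replications
    out[[d]] <- summ(results)
}
cbind(Design, do.call(rbind, out))   # one summary row per condition
```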

Of course, runSimulation() does much more than this conceptual outline, which is why it exists. Namely, errors and warnings are controlled and tracked, data is re-drawn when needed, parallel processing is supported, debugging is easier with the debug input (or by inserting browser() calls directly), temporary and full results can be saved to external files, the simulation state can be saved/restored, built-in safety features are included, and more. The point, however, is that you as the user should not be bogged down with the nitty-gritty details of setting up the simulation work-flow/features; instead, you need only focus your time on the important generate-analyse-summarise steps, organized in the bodies of these functions, required to obtain your interesting simulation results.

To access further examples and instructions, feel free to visit the package wiki on GitHub.