The aim of intubate (logo <||>) is to offer a painless way to add R functions that are not pipe-aware to data science pipelines implemented by magrittr with the %>% operator, without having to rely on workarounds of varying complexity. It also implements three extensions, called intubOrders, intuEnv, and intuBags.
## Stable version, from CRAN:
install.packages("intubate")

## Development version, from GitHub:
# install.packages("devtools")
devtools::install_github("rbertolusso/intubate")
If you like magrittr pipelines (%>%) and you are looking for an alternative to performing a statistical analysis in the following way:
fit <- lm(sr ~ pop15, LifeCycleSavings)
summary(fit)
##
## Call:
## lm(formula = sr ~ pop15, data = LifeCycleSavings)
##
## Residuals:
## Min 1Q Median 3Q Max
## -8.637 -2.374 0.349 2.022 11.155
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 17.49660 2.27972 7.675 6.85e-10 ***
## pop15 -0.22302 0.06291 -3.545 0.000887 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 4.03 on 48 degrees of freedom
## Multiple R-squared: 0.2075, Adjusted R-squared: 0.191
## F-statistic: 12.57 on 1 and 48 DF, p-value: 0.0008866
intubate lets you do it in these other ways:
library(intubate)
library(magrittr)
ntbt_lm is the interface provided to lm, and one of the over 450 interfaces intubate currently implements (for the list of the 88 packages currently containing interfaces, see below).
LifeCycleSavings %>%
ntbt_lm(sr ~ pop15) %>% ## ntbt_lm is the interface to lm provided by intubate
summary()
##
## Call:
## lm(formula = sr ~ pop15)
##
## Residuals:
## Min 1Q Median 3Q Max
## -8.637 -2.374 0.349 2.022 11.155
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 17.49660 2.27972 7.675 6.85e-10 ***
## pop15 -0.22302 0.06291 -3.545 0.000887 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 4.03 on 48 degrees of freedom
## Multiple R-squared: 0.2075, Adjusted R-squared: 0.191
## F-statistic: 12.57 on 1 and 48 DF, p-value: 0.0008866
ntbt
You do not need to use interfaces. You can call non-pipe-aware functions directly using ntbt (even those that currently do not have an interface provided by intubate).
LifeCycleSavings %>%
ntbt(lm, sr ~ pop15) %>% ## ntbt calls lm without needing to use an interface
summary()
##
## Call:
## lm(formula = sr ~ pop15)
##
## Residuals:
## Min 1Q Median 3Q Max
## -8.637 -2.374 0.349 2.022 11.155
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 17.49660 2.27972 7.675 6.85e-10 ***
## pop15 -0.22302 0.06291 -3.545 0.000887 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 4.03 on 48 degrees of freedom
## Multiple R-squared: 0.2075, Adjusted R-squared: 0.191
## F-statistic: 12.57 on 1 and 48 DF, p-value: 0.0008866
The help for each interface contains examples of use.
intubate allows you to create your own interfaces “on demand”, right now, giving you full power of decision regarding which functions to interface.
The ability to amplify the scope of intubate may prove to be particularly welcome if you work in a field that may, in the long run, continue to lack interfaces due to my unforgivable, but unavoidable, ignorance.
As an example of creating an interface “on demand”, suppose the interface to cor.test was lacking in the current version of intubate, and suppose (at least for a moment) that you want to create yours because you are searching for a pipeline-aware alternative to any of the following styles of coding (results not shown):
data(USJudgeRatings)
## 1)
cor.test(USJudgeRatings$CONT, USJudgeRatings$INTG)
## 2)
attach(USJudgeRatings)
cor.test(CONT, INTG)
detach()
## 3)
with(USJudgeRatings, cor.test(CONT, INTG))
## 4)
USJudgeRatings %>%
with(cor.test(CONT, INTG))
To be able to create an interface to cor.test “on demand”, the only thing you need to do is to add the following line of code somewhere before its use in your pipeline:
ntbt_cor.test <- intubate ## intubate is the helper function
Please note the lack of parentheses.
Nothing else is required.
The only thing you need to remember is that the name of an interface must start with ntbt_ followed by the name of the interfaced function (cor.test in this particular case), no matter which function you want to interface.
Now you can use your “just baked” interface in any pipeline. A pipeline alternative to the above code may look like this:
USJudgeRatings %>%
ntbt_cor.test(CONT, INTG) ## Use it right away
##
## Pearson's product-moment correlation
##
## data: CONT and INTG
## t = -0.8605, df = 41, p-value = 0.3945
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.4168591 0.1741182
## sample estimates:
## cor
## -0.1331909
ntbt
Of course, as already stated, you do not have to create an interface if you do not want to. You can call the non-pipe-aware function directly with ntbt, in the following way:
USJudgeRatings %>%
ntbt(cor.test, CONT, INTG)
##
## Pearson's product-moment correlation
##
## data: CONT and INTG
## t = -0.8605, df = 41, p-value = 0.3945
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.4168591 0.1741182
## sample estimates:
## cor
## -0.1331909
You can potentially use ntbt with any function, including those for which intubate does not provide an interface. In principle, the functions you would want to call this way are the ones you cannot use directly in a pipeline (because data is not the first parameter in the function’s definition).
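For instance, here is a minimal sketch of the difference (the commented-out line is there only to illustrate the problem, and subset is used merely as an example of a function that already happens to be pipe-aware):

library(magrittr)
library(intubate)

## subset is pipe-aware: 'data' is its first parameter, so piping just works.
LifeCycleSavings %>%
  subset(pop15 > 40)

## lm is not pipe-aware: its first parameter is the formula, so piping the
## data frame into it would place the data where the formula is expected.
## LifeCycleSavings %>% lm(sr ~ pop15)   # not what you want

## ntbt (or the ntbt_lm interface) places the piped data where lm expects it.
LifeCycleSavings %>%
  ntbt(lm, sr ~ pop15)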
The link below is to Dr. Sheather’s website, from which this code was extracted; it also contains information about the book. This code could be used to produce the plots in Figure 3.1 on page 46. Different strategies are illustrated.
http://www.stat.tamu.edu/~sheather/book/
attach(anscombe)
plot(x1, y1, xlim = c(4, 20), ylim = c(3, 14), main = "Data Set 1")
abline(lsfit(x1, y1))
detach()
You needed to attach so the variables are visible locally; otherwise, you would have had to use anscombe$x1 and anscombe$y1. You could also have used with. Spaces were added for clarity and for easier comparison with the code below.
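For comparison, here is a sketch of the with variant mentioned above (same plot, without attach/detach):

with(anscombe, {
  plot(x1, y1, xlim = c(4, 20), ylim = c(3, 14), main = "Data Set 1")
  abline(lsfit(x1, y1))
})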
The same kind of plot can be produced in a pipeline using magrittr (%>%) and intubate (1: a provided interface and 2: ntbt):

anscombe %>%
ntbt_plot(x2, y2, xlim = c(4, 20), ylim = c(3, 14), main = "Data Set 2") %>%
ntbt(lsfit, x2, y2) %>% # Call non-pipe-aware function directly with `ntbt`
abline() # No need to interface 'abline'.
ntbt_plot is the interface to plot provided by intubate. As plot returns NULL, intubate forwards (invisibly) its input automatically, without having to use %T>%, so lsfit gets the original data (which is what it needs) and everything is done in one pipeline.

ntbt lets you call the non-pipe-aware function lsfit directly. You can always use ntbt (you do not need to use the ntbt_ interfaces if you do not want to), but ntbt is particularly useful for calling directly a non-pipe-aware function for which intubate does not provide an interface (as currently happens with lsfit).

If intubate does not provide an interface to a given function and you prefer to use interfaces instead of ntbt, you can create your own interface “on demand” and use it right away in your pipeline. To create an interface, the following line of code before its use is all it takes:
ntbt_lsfit <- intubate # NOTE: we are *not* including parentheses.
That’s it, you have created your interface. Just remember that intubate interfaces must start with ntbt_ followed by the name of the function to interface (lsfit in this case). You can now use ntbt_lsfit in your pipeline as any other interfaced function:
anscombe %>%
ntbt_plot(x3, y3, xlim = c(4, 20), ylim = c(3, 14), main = "Data Set 3") %>%
ntbt_lsfit(x3, y3) %>% # Using just created "on demand" interface
abline()
Instead of the x, y approach, you can also use the formula variant. In this case, we have to use lm, as lsfit does not accept formulas.
anscombe %>%
ntbt_plot(y4 ~ x4, xlim = c(4, 20), ylim = c(3, 14), main = "Data Set 4") %>%
ntbt_lm(y4 ~ x4) %>% # We use 'ntbt_lm' instead of 'ntbt_lsfit'
abline()
intubate extensions
intubate implements three extensions: intubOrders, intuEnv, and intuBags.
These experimental features are functional and you are welcome to use them. However, they are not (yet) recommended for production code, unless you do not mind potentially having to make some changes to your code while the architecture solidifies.
intubOrders
intubOrders allow you, among other things, to:

- run, in place, functions on the input (data) to the interfaced function, such as head, tail, dim, str, View, …
- run, in place, functions that use the result generated by the interfaced function, such as print, summary, anova, plot, …
- forward the input to the interfaced function without using %T>%
- signal other modifications to the behavior of the interface
intubOrders are implemented by an intuBorder <||> (from where the logo of intubate originates).
The intuBorder contains 5 zones (intuZones?, maybe too much…):
zone 1 < zone 2 | zone 3 | zone 4 > zone 5
- zone 1 and zone 5 will be explained later
- zone 2 is used to indicate the functions that are to be applied to the input to the interfaced function
- zone 3 is used to modify the behavior of the interface
- zone 4 is used to indicate the functions that are to be applied to the result of the interfaced function (see the minimal sketch below)
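Here is a minimal sketch of the notation, using only zones 2 to 4 and functions from the larger example that follows (i, which forces an invisible result, and the # placeholder are described right after that example):

LifeCycleSavings %>%
  ntbt_lm(sr ~ pop15,
          ## zone 2: str     -> applied to the input data frame
          ## zone 3: i       -> the result of lm is returned invisibly
          ## zone 4: summary -> applied to the result of lm
          "< str |i| summary >")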
For example, instead of running the following sequence of function calls (only plot shown):
head(LifeCycleSavings)
tail(LifeCycleSavings, n = 3)
dim(LifeCycleSavings)
str(LifeCycleSavings)
summary(LifeCycleSavings)
result <- lm(sr ~ pop15 + pop75 + dpi + ddpi, LifeCycleSavings)
print(result)
summary(result)
anova(result)
plot(result, which = 1)
you could have run, using an intubOrder:
LifeCycleSavings %>%
ntbt_lm(sr ~ pop15 + pop75 + dpi + ddpi,
"< head; tail(#, n = 3); dim; str; summary
|i|
print; summary; anova; plot(#, which = 1) >")
##
## ntbt_lm(data = ., sr ~ pop15 + pop75 + dpi + ddpi)
##
## * head <||> input *
## sr pop15 pop75 dpi ddpi
## Australia 11.43 29.35 2.87 2329.68 2.87
## Austria 12.07 23.32 4.41 1507.99 3.93
## Belgium 13.17 23.80 4.43 2108.47 3.82
## Bolivia 5.75 41.89 1.67 189.13 0.22
## Brazil 12.88 42.19 0.83 728.47 4.56
## Canada 8.79 31.72 2.85 2982.88 2.43
##
## * tail(#, n = 3) <||> input *
## sr pop15 pop75 dpi ddpi
## Uruguay 9.24 28.13 2.72 766.54 1.88
## Libya 8.89 43.69 2.07 123.58 16.71
## Malaysia 4.71 47.20 0.66 242.69 5.08
##
## * dim <||> input *
## [1] 50 5
##
## * str <||> input *
## 'data.frame': 50 obs. of 5 variables:
## $ sr : num 11.43 12.07 13.17 5.75 12.88 ...
## $ pop15: num 29.4 23.3 23.8 41.9 42.2 ...
## $ pop75: num 2.87 4.41 4.43 1.67 0.83 2.85 1.34 0.67 1.06 1.14 ...
## $ dpi : num 2330 1508 2108 189 728 ...
## $ ddpi : num 2.87 3.93 3.82 0.22 4.56 2.43 2.67 6.51 3.08 2.8 ...
##
## * summary <||> input *
## sr pop15 pop75 dpi
## Min. : 0.600 Min. :21.44 Min. :0.560 Min. : 88.94
## 1st Qu.: 6.970 1st Qu.:26.21 1st Qu.:1.125 1st Qu.: 288.21
## Median :10.510 Median :32.58 Median :2.175 Median : 695.66
## Mean : 9.671 Mean :35.09 Mean :2.293 Mean :1106.76
## 3rd Qu.:12.617 3rd Qu.:44.06 3rd Qu.:3.325 3rd Qu.:1795.62
## Max. :21.100 Max. :47.64 Max. :4.700 Max. :4001.89
## ddpi
## Min. : 0.220
## 1st Qu.: 2.002
## Median : 3.000
## Mean : 3.758
## 3rd Qu.: 4.478
## Max. :16.710
##
## * print <||> result *
##
## Call:
## lm(formula = sr ~ pop15 + pop75 + dpi + ddpi)
##
## Coefficients:
## (Intercept) pop15 pop75 dpi ddpi
## 28.5660865 -0.4611931 -1.6914977 -0.0003369 0.4096949
##
##
## * summary <||> result *
##
## Call:
## lm(formula = sr ~ pop15 + pop75 + dpi + ddpi)
##
## Residuals:
## Min 1Q Median 3Q Max
## -8.2422 -2.6857 -0.2488 2.4280 9.7509
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 28.5660865 7.3545161 3.884 0.000334 ***
## pop15 -0.4611931 0.1446422 -3.189 0.002603 **
## pop75 -1.6914977 1.0835989 -1.561 0.125530
## dpi -0.0003369 0.0009311 -0.362 0.719173
## ddpi 0.4096949 0.1961971 2.088 0.042471 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 3.803 on 45 degrees of freedom
## Multiple R-squared: 0.3385, Adjusted R-squared: 0.2797
## F-statistic: 5.756 on 4 and 45 DF, p-value: 0.0007904
##
##
## * anova <||> result *
## Analysis of Variance Table
##
## Response: sr
## Df Sum Sq Mean Sq F value Pr(>F)
## pop15 1 204.12 204.118 14.1157 0.0004922 ***
## pop75 1 53.34 53.343 3.6889 0.0611255 .
## dpi 1 12.40 12.401 0.8576 0.3593551
## ddpi 1 63.05 63.054 4.3605 0.0424711 *
## Residuals 45 650.71 14.460
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
- i is used to force an invisible result.
- # is used as a placeholder for either the input or the result, in cases where the call requires extra parameters.

intubOrders may prove to be of interest to non-pipeline-oriented people too (results not shown):

ntbt_lm(LifeCycleSavings, sr ~ pop15 + pop75 + dpi + ddpi,
"< head; tail(#, n = 3); dim; str; summary
|i|
print; summary; anova; plot(#, which = 1) >")
intubOrders with collections of inputs
When using pipelines, the receiving function has to deal with the whole object it receives as its input. It then produces a result that, again, needs to be consumed as a whole by the following function.
intubOrders allow you to work with a collection of objects of any kind in one pipeline, selecting at each step which input to use.
As an example, suppose you want to perform the following statistical procedures in one pipeline (results not shown):
CO2 %>%
ntbt_lm(conc ~ uptake)
USJudgeRatings %>%
ntbt_cor.test(CONT, INTG)
sleep %>%
ntbt_t.test(extra ~ group)
We will first create a collection (a list in this case, but it could also be an intuEnv or an intuBag, explained later) containing the three data frames:
coll <- list(CO3 = CO2,
USJudgeRatings1 = USJudgeRatings,
sleep1 = sleep)
names(coll)
## [1] "CO3" "USJudgeRatings1" "sleep1"
(We have changed the names to show we are not cheating…)
We will now use the whole collection as the source.
The intubOrder will need the following info:

- zone 1, in each case, indicates which data.frame (or any other object) we want to use as the input to this particular function
- zone 3 needs to include f to forward the input (if you want the next function to receive the whole collection, and not the result of this step)
- zone 4 (optional) may contain a print (or summary) if you want something to be displayed

coll %>%
ntbt_lm(conc ~ uptake, "CO3 <|f| print >") %>%
ntbt_cor.test(CONT, INTG, "USJudgeRatings1 <|f| print >") %>%
ntbt_t.test(extra ~ group, "sleep1 <|f| print >") %>%
names()
##
## ntbt_lm(data = ., conc ~ uptake)
##
## * print <||> result *
##
## Call:
## lm(formula = conc ~ uptake)
##
## Coefficients:
## (Intercept) uptake
## 73.71 13.28
##
##
## ntbt_cor.test(data = ., CONT, INTG)
##
## * print <||> result *
##
## Pearson's product-moment correlation
##
## data: CONT and INTG
## t = -0.8605, df = 41, p-value = 0.3945
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.4168591 0.1741182
## sample estimates:
## cor
## -0.1331909
##
##
## ntbt_t.test(data = ., extra ~ group)
##
## * print <||> result *
##
## Welch Two Sample t-test
##
## data: extra by group
## t = -1.8608, df = 17.776, p-value = 0.07939
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -3.3654832 0.2054832
## sample estimates:
## mean in group 1 mean in group 2
## 0.75 2.33
## [1] "CO3" "USJudgeRatings1" "sleep1"
names() was added at the end to show that we have forwarded the original collection to the end of the pipeline.

What happens if you would like to save the results of the function calls (or intermediate results of data manipulations)?
intuEnv and intuBags
intuEnv and intuBags allow you to save intermediate results without leaving the pipeline. They can also be used to contain the collections of objects.
Let us first consider intuEnv.
When intubate is loaded, it creates intuEnv, an empty environment that can be populated with results that you want to use later.
You can access intuEnv as follows:
intuEnv() ## intuEnv() returns invisible, so nothing is output
You can verify that, initially, it is empty:
ls(intuEnv())
## character(0)
How can intuEnv be used?
Suppose that, instead of displaying the results of the interfaced functions, we want to save the objects they return. One strategy (the other is using intuBags) is to save the results to intuEnv.
How do we save the results to intuEnv?
The intubOrder will need the following info:

- zone 3 needs to include f to forward the input (if you want the next function to receive the whole collection, and not its result)
- zone 5, in each case, indicates the name that the result will have in intuEnv

coll %>%
ntbt_lm(conc ~ uptake, "CO3 <|f|> lmfit") %>%
ntbt_cor.test(CONT, INTG, "USJudgeRatings1 <|f|> ctres") %>%
ntbt_t.test(extra ~ group, "sleep1 <|f|> ttres") %>%
names()
## [1] "CO3" "USJudgeRatings1" "sleep1"
As you can see, the collection stays unchanged, but look inside intuEnv:
ls(intuEnv())
## [1] "ctres" "lmfit" "ttres"
intuEnv has collected the results, which are ready for use.
Four strategies for using one of the collected results are shown below (output not shown):
## 1)
intuEnv()$lmfit %>%
summary()

## 2)
attach(intuEnv())
lmfit %>%
summary()
detach()

## 3)
intuEnv() %>%
ntbt(summary, "lmfit <||>")

## 4)
intuEnv() %>%
ntbt(I, "lmfit <|i| summary >")
clear_intuEnv can be used to empty the contents of intuEnv.
clear_intuEnv()
ls(intuEnv())
## character(0)
intuEnv with the Global Environment
If you want your results to be saved to the global environment (it could be any environment), you can associate intuEnv with it, so your results are available like any other saved object.
First let’s display the contents of the Global environment:
ls()
## [1] "USJudgeRatings" "coll" "fit" "ntbt_cor.test"
## [5] "ntbt_lsfit" "result"
set_intuEnv lets you associate intuEnv with an environment. It takes an environment as its parameter and returns the current intuEnv, in case you want to save it to reinstate it later. If not, I think it will just be garbage collected (I may be wrong).
Let’s associate intuEnv with the global environment (saving the current intuEnv):
saved_intuEnv <- set_intuEnv(globalenv())
Now, we re-run the pipeline:
coll %>%
ntbt_lm(conc ~ uptake, "CO3 <|f|> lmfit") %>%
ntbt_cor.test(CONT, INTG, "USJudgeRatings1 <|f|> ctres") %>%
ntbt_t.test(extra ~ group, "sleep1 <|f|> ttres") %>%
names()
## [1] "CO3" "USJudgeRatings1" "sleep1"
Before we forget, let’s reinstate the original intuEnv:
set_intuEnv(saved_intuEnv)
## <environment: R_GlobalEnv>
And now, let’s see if the results were saved to the global environment:
ls()
## [1] "USJudgeRatings" "coll" "ctres" "fit"
## [5] "lmfit" "ntbt_cor.test" "ntbt_lsfit" "result"
## [9] "saved_intuEnv" "ttres"
They were.
Now the results are at your disposal to use as any other variable (result not shown):
lmfit %>%
summary()
intuEnv as the source of the pipeline
You can use intuEnv (or any other environment) as the input of your pipeline.
We already cleared the contents of intuEnv, but let’s do it again to get used to how to do it:
clear_intuEnv()
ls(intuEnv())
## character(0)
Let’s populate intuEnv with the same objects as before:
intuEnv(CO3 = CO2,
USJudgeRatings1 = USJudgeRatings,
sleep1 = sleep)
ls(intuEnv())
## [1] "CO3" "USJudgeRatings1" "sleep1"
When using an environment, such as intuEnv, as the source of your pipeline, there is no need to specify f in zone 3, as the environment is always forwarded (the same happens when the source is an intuBag).
Keep in mind that, if you are saving results and your source is an environment other than intuEnv, the results will be saved to intuEnv, and not to the source environment. If the source is an intuBag, the results will be saved to the intuBag, and not to intuEnv.
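As a sketch of the behavior just described (the environment e and the object names are only illustrative; ntbt_t.test is the interface already used above):

e <- new.env()     # an ordinary environment, not intuEnv
e$sleep1 <- sleep

e %>%
  ntbt_t.test(extra ~ group, "sleep1 <||> ttres")

ls(e)              ## still only "sleep1": the source environment is untouched
ls(intuEnv())      ## now also contains "ttres": the result was saved to intuEnv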
We will run the same pipeline as before, but this time we will add subset and summary (called directly with ntbt) to illustrate how we can use a previously generated result (such as from data transformations) in the same pipeline in which it was generated. We will use intuEnv as the source of the pipeline.
intuEnv() %>%
ntbt(subset, Treatment == "nonchilled", "CO3 <||> CO3nc") %>%
ntbt_lm(conc ~ uptake, "CO3nc <||> lmfit") %>%
ntbt_cor.test(CONT, INTG, "USJudgeRatings1 <||> ctres") %>%
ntbt_t.test(extra ~ group, "sleep1 <||> ttres") %>%
ntbt(summary, "lmfit <||> lmsfit") %>%
names()
## [1] "USJudgeRatings1" "ttres" "CO3nc" "ctres"
## [5] "lmsfit" "lmfit" "sleep1" "CO3"
As subset is already pipe-aware (data is its first parameter), you have two ways of proceeding. One is the one illustrated above (the same strategy used on non-pipe-aware functions). The other, which works only with pipe-aware functions, is:

intuEnv() %>%
ntbt(subset, CO3, Treatment == "nonchilled", "<||> CO3nc")
intuBags
intuBags differ from intuEnv in that they are based on lists rather than on environments. Even if (with a little care) you could keep track of several intuEnvs, it seems natural (to me) to deal with only one, while several intuBags (for example, one for each database or collection of objects) seem natural (to me).
Other than that, using an intuEnv or an intuBag is a matter of personal taste.
What you can do with one you can do with the other.
iBag <- intuBag(CO3 = CO2,
USJudgeRatings1 = USJudgeRatings,
sleep1 = sleep)
iBag %>%
ntbt(subset, Treatment == "nonchilled", "CO3 <||> CO3nc") %>%
ntbt_lm(conc ~ uptake, "CO3nc <||> lmfit") %>%
ntbt_cor.test(CONT, INTG, "USJudgeRatings1 <||> ctres") %>%
ntbt_t.test(extra ~ group, "sleep1 <||> ttres") %>%
ntbt(summary, "lmfit <||> lmsfit") %>%
names()
## [1] "CO3" "USJudgeRatings1" "sleep1" "CO3nc"
## [5] "lmfit" "ctres" "ttres" "lmsfit"
When using intuBags, it is possible to use %<>% if you want to save your results to the intuBag. This way, instead of a long pipeline, you could run several short ones.
iBag <- intuBag(CO3 = CO2,
USJudgeRatings1 = USJudgeRatings,
sleep1 = sleep)
iBag %<>%
ntbt(subset, CO3, Treatment == "nonchilled", "<||> CO3nc") %>%
ntbt_lm(conc ~ uptake, "CO3nc <||> lmfit")
iBag %<>%
ntbt_cor.test(CONT, INTG, "USJudgeRatings1 <||> ctres")
iBag %<>%
ntbt_t.test(extra ~ group, "sleep1 <||> ttres") %>%
ntbt(summary, "lmfit <||> lmsfit")
names(iBag)
## [1] "CO3" "USJudgeRatings1" "sleep1" "CO3nc"
## [5] "lmfit" "ctres" "ttres" "lmsfit"
The intuBag will keep all your results, in any way you prefer to use it.
The same happens with intuEnv. Just remember that %<>% should not be used with intuEnv (you should always use %>%).
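To make the distinction concrete, here is a sketch (assuming iBag and intuEnv are populated as in the examples above): an intuBag is a list, so the reassignment performed by %<>% is what makes the results persist in iBag, whereas intuEnv is an environment that is modified in place, so a plain %>% is enough.

## intuBag: a list, so reassign with %<>% to keep the results in iBag.
iBag %<>%
  ntbt_lm(conc ~ uptake, "CO3 <||> lmfit")

## intuEnv: an environment, modified in place, so %>% is all you need.
intuEnv() %>%
  ntbt_lm(conc ~ uptake, "CO3 <||> lmfit")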
Suppose you have a database consisting of the following two tables:
iBag <- intuBag(members = data.frame(name=c("John", "Paul", "George",
"Ringo", "Brian", NA),
band=c("TRUE", "TRUE", "TRUE", "TRUE", "FALSE", NA)),
what_played = data.frame(name=c("John", "Paul", "Ringo",
"George", "Stuart", "Pete"),
instrument=c("guitar", "bass", "drums", "guitar", "bass", "drums")))
print(iBag)
## $members
## name band
## 1 John TRUE
## 2 Paul TRUE
## 3 George TRUE
## 4 Ringo TRUE
## 5 Brian FALSE
## 6 <NA> <NA>
##
## $what_played
## name instrument
## 1 John guitar
## 2