DTRlearn: Learning Algorithms for Dynamic Treatment Regimes

Dynamic treatment regimes (DTRs) are sequential decision rules, tailored at each stage to time-varying, subject-specific features and the intermediate outcomes observed in previous stages. This package implements three methods for estimating optimal DTRs: O-learning (Zhao et al. 2012, 2014), Q-learning (Murphy et al. 2007; Zhao et al. 2009), and P-learning (Liu et al. 2014, 2015).
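
For orientation, a minimal conceptual sketch of single-stage Q-learning is given below (an outcome regression with treatment interactions, followed by treatment selection). It illustrates the idea only; it does not call the DTRlearn functions, and the variable names are hypothetical.

    # Simulated single-stage data (illustrative only)
    set.seed(1)
    n  <- 200
    X1 <- rnorm(n); X2 <- rnorm(n)
    A  <- sample(c(-1, 1), n, replace = TRUE)        # randomized treatment coded +1/-1
    R  <- 1 + X1 + A * (0.5 * X1 - X2) + rnorm(n)    # observed clinical outcome

    # Q-learning step: model the outcome with treatment-by-covariate interactions
    fit <- lm(R ~ (X1 + X2) * A)

    # Recommend, for each subject, the treatment with the larger predicted outcome
    newdat  <- data.frame(X1 = X1, X2 = X2)
    Q_pos   <- predict(fit, cbind(newdat, A =  1))
    Q_neg   <- predict(fit, cbind(newdat, A = -1))
    opt_trt <- ifelse(Q_pos > Q_neg, 1, -1)          # estimated optimal single-stage rule
    table(opt_trt)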

Version: 1.2
Depends: kernlab, MASS, glmnet, ggplot2
Published: 2015-12-28
Author: Ying Liu, Yuanjia Wang, Donglin Zeng
Maintainer: Ying Liu <yl2802 at cumc.columbia.edu>
License: GPL-2
NeedsCompilation: no
CRAN checks: DTRlearn results

Downloads:

Reference manual: DTRlearn.pdf
Package source: DTRlearn_1.2.tar.gz
Windows binaries: r-devel: DTRlearn_1.2.zip, r-release: DTRlearn_1.2.zip, r-oldrel: DTRlearn_1.2.zip
OS X Mavericks binaries: r-release: DTRlearn_1.2.tgz, r-oldrel: DTRlearn_1.2.tgz
Old sources: DTRlearn archive

Linking:

Please use the canonical form https://CRAN.R-project.org/package=DTRlearn to link to this page.