Learn optimal policies via doubly robust empirical welfare maximization over trees. This package implements the multi-action doubly robust approach of Zhou, Athey and Wager (2018) <arXiv:1810.04778> for the case where the candidate policies are depth-k decision trees.
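
The following is a minimal usage sketch, not taken from the package documentation: it assumes grf's multi_arm_causal_forest() together with policytree's double_robust_scores() and policy_tree() interfaces as described in the reference manual, and it runs on simulated toy data.

library(grf)        # provides multi_arm_causal_forest()
library(policytree)

# Toy simulated data with a three-action treatment (hypothetical example).
n <- 2000
p <- 5
X <- matrix(rnorm(n * p), n, p)
W <- as.factor(sample(c("A", "B", "C"), n, replace = TRUE))
Y <- X[, 1] + (W == "B") * X[, 2] + (W == "C") * X[, 3] + rnorm(n)

# Estimate doubly robust reward scores for each action with grf.
forest <- multi_arm_causal_forest(X, Y, W)
Gamma <- double_robust_scores(forest)   # n x 3 matrix of action scores

# Learn a depth-2 policy tree by empirical welfare maximization.
tree <- policy_tree(X, Gamma, depth = 2)
print(tree)

# Assigned action (1, 2, or 3) for each unit.
head(predict(tree, X))

The depth argument controls the policy class: depth-k trees are searched exhaustively, so larger k increases both expressiveness and computation time.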
Version: 1.1.0
Depends: R (≥ 3.5.0)
Imports: Rcpp, grf (≥ 2.0.0)
LinkingTo: Rcpp, BH
Suggests: testthat (≥ 2.1.0), DiagrammeR
Published: 2021-06-24
Author: Erik Sverdrup [aut, cre], Ayush Kanodia [aut], Zhengyuan Zhou [aut], Susan Athey [aut], Stefan Wager [aut]
Maintainer: Erik Sverdrup <erikcs at stanford.edu>
BugReports: https://github.com/grf-labs/policytree/issues
License: GPL-3
URL: https://github.com/grf-labs/policytree
NeedsCompilation: yes
CRAN checks: policytree results
Reference manual: policytree.pdf
Package source: policytree_1.1.0.tar.gz
Windows binaries: r-devel: policytree_1.1.0.zip, r-devel-UCRT: policytree_1.1.0.zip, r-release: policytree_1.1.0.zip, r-oldrel: policytree_1.1.0.zip
macOS binaries: r-release (arm64): policytree_1.1.0.tgz, r-release (x86_64): policytree_1.1.0.tgz, r-oldrel: policytree_1.1.0.tgz
Old sources: policytree archive
Please use the canonical form https://CRAN.R-project.org/package=policytree to link to this page.