An implementation of various gradient-descent-based learning algorithms for regression tasks. The variants of the gradient descent algorithm are:

- Mini-Batch Gradient Descent (MBGD), which uses only a portion of the training data per update to reduce the computational load.
- Stochastic Gradient Descent (SGD), which uses a single randomly chosen example per update to reduce the computational load drastically.
- Stochastic Average Gradient (SAG), an SGD-based algorithm that averages stored past gradients to reduce the variance of the stochastic steps.
- Momentum Gradient Descent (MGD), an optimization that adds a momentum term to speed up gradient descent learning.
- Accelerated Gradient Descent (AGD), an optimization that evaluates the gradient at a look-ahead point to accelerate gradient descent learning.
- Adagrad, a gradient-descent-based algorithm that accumulates past squared gradients to adapt the learning rate.
- Adadelta, a gradient-descent-based algorithm that uses a Hessian approximation, built from decaying averages of past updates and gradients, to adapt the learning rate.
- RMSprop, a gradient-descent-based algorithm that combines the adaptive-learning ideas of Adagrad and Adadelta.
- Adam, a gradient-descent-based algorithm that uses estimates of the mean (first moment) and variance (second moment) of the gradients to adapt the learning rate.

A minimal base-R sketch of two of these variants (SGD and Adam) follows the list.
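The sketch below is illustrative only and does not use the gradDescent package's own functions; the function names sgd and adam and the synthetic data are assumptions made for this example. It applies plain SGD and the Adam update to a one-variable linear regression.

    ## Minimal sketch (not the gradDescent package API): SGD and Adam in base R.
    set.seed(1)

    # Synthetic regression data: y = 2 + 3*x + noise
    n <- 200
    x <- runif(n)
    y <- 2 + 3 * x + rnorm(n, sd = 0.1)
    X <- cbind(1, x)                             # design matrix with intercept column

    # Stochastic Gradient Descent: update on one randomly chosen example per step
    sgd <- function(X, y, alpha = 0.05, max_iter = 5000) {
      theta <- rep(0, ncol(X))
      for (iter in seq_len(max_iter)) {
        i    <- sample(nrow(X), 1)               # pick one random observation
        err  <- sum(X[i, ] * theta) - y[i]       # prediction error for it
        grad <- err * X[i, ]                     # gradient of the squared error
        theta <- theta - alpha * grad
      }
      theta
    }

    # Adam: adapts the step size using first (mean) and second (variance) moments
    adam <- function(X, y, alpha = 0.05, beta1 = 0.9, beta2 = 0.999,
                     eps = 1e-8, max_iter = 5000) {
      theta <- rep(0, ncol(X))
      m <- v <- rep(0, ncol(X))
      for (t in seq_len(max_iter)) {
        i    <- sample(nrow(X), 1)
        err  <- sum(X[i, ] * theta) - y[i]
        grad <- err * X[i, ]
        m <- beta1 * m + (1 - beta1) * grad      # decaying first-moment estimate
        v <- beta2 * v + (1 - beta2) * grad^2    # decaying second-moment estimate
        m_hat <- m / (1 - beta1^t)               # bias correction
        v_hat <- v / (1 - beta2^t)
        theta <- theta - alpha * m_hat / (sqrt(v_hat) + eps)
      }
      theta
    }

    sgd(X, y)    # should be close to c(2, 3)
    adam(X, y)   # should also be close to c(2, 3)

Both functions return the fitted coefficients (intercept and slope); on this synthetic data they should land near the true values 2 and 3.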
Version: 2.0
Published: 2016-12-29
Author: Dendi Handian, Imam Fachmi Nasrulloh, Lala Septem Riza, and Rani Megasari
Maintainer: Dendi Handian <dendi at student.upi.edu>
License: GPL-2 | GPL-3 | file LICENSE [expanded from: GPL (≥ 2) | file LICENSE]
URL: https://github.com/drizzersilverberg/gradDescentR
NeedsCompilation: no
CRAN checks: gradDescent results
Reference manual: gradDescent.pdf
Package source: gradDescent_2.0.tar.gz
Windows binaries: r-devel: gradDescent_2.0.zip, r-release: gradDescent_2.0.zip, r-oldrel: gradDescent_2.0.zip
OS X Mavericks binaries: r-release: gradDescent_2.0.tgz, r-oldrel: gradDescent_2.0.tgz
Please use the canonical form https://CRAN.R-project.org/package=gradDescent to link to this page.