CRAN Package Check Results for Package OptimClassifier

Last updated on 2019-11-26 00:52:00 CET.

Flavor                              Version Tinstall  Tcheck  Ttotal  Status  Flags
r-devel-linux-x86_64-debian-clang   0.1.4       6.03   58.61   64.64  ERROR
r-devel-linux-x86_64-debian-gcc     0.1.4       4.94   44.69   49.63  ERROR
r-devel-linux-x86_64-fedora-clang   0.1.4                      73.49  OK
r-devel-linux-x86_64-fedora-gcc     0.1.4                      71.50  OK
r-devel-windows-ix86+x86_64         0.1.4      22.00  118.00  140.00  OK
r-devel-windows-ix86+x86_64-gcc8    0.1.4      12.00   72.00   84.00  OK
r-patched-linux-x86_64              0.1.4       5.94   55.40   61.34  OK
r-patched-solaris-x86               0.1.4                     110.20  OK
r-release-linux-x86_64              0.1.4       5.47   55.73   61.20  OK
r-release-windows-ix86+x86_64       0.1.4      18.00   88.00  106.00  OK
r-release-osx-x86_64                0.1.4                             OK
r-oldrel-windows-ix86+x86_64        0.1.4       6.00   62.00   68.00  OK
r-oldrel-osx-x86_64                 0.1.4                             OK

Check Details

Version: 0.1.4
Check: tests
Result: ERROR
     Running 'testthat.R' [10s/11s]
    Running the tests in 'tests/testthat.R' failed.
    Complete output:
     > library(testthat)
     > library(OptimClassifier)
     >
     > test_check("OptimClassifier")
     6 successful models have been tested
    
     CP rmse success_rate ti_error tii_error Nnodes
     1 0.002262443 0.3602883 0.8701923 0.04807692 0.08173077 17
     2 0.009049774 0.3466876 0.8798077 0.03846154 0.08173077 15
     3 0.011312217 0.3325311 0.8894231 0.04807692 0.06250000 11
     4 0.013574661 0.3325311 0.8894231 0.05769231 0.05288462 9
     5 0.022624434 0.3535534 0.8750000 0.03365385 0.09134615 3
     6 0.665158371 0.6430097 0.5865385 0.41346154 0.00000000 1
     6 successful models have been tested
    
     CP rmse success_rate ti_error tii_error Nnodes
     1 0.002262443 0.3602883 0.8701923 0.04807692 0.08173077 17
     2 0.009049774 0.3466876 0.8798077 0.03846154 0.08173077 15
     3 0.011312217 0.3325311 0.8894231 0.04807692 0.06250000 11
     4 0.013574661 0.3325311 0.8894231 0.05769231 0.05288462 9
     5 0.022624434 0.3535534 0.8750000 0.03365385 0.09134615 3
     6 0.665158371 0.6430097 0.5865385 0.41346154 0.00000000 1
     Call:
     rpart::rpart(formula = formula, data = training, na.action = rpart::na.rpart,
     model = FALSE, x = FALSE, y = FALSE, cp = 0)
     n= 482
    
     CP nsplit rel error xerror xstd
     1 0.66515837 0 1.0000000 1.0000000 0.04949948
     2 0.02262443 1 0.3348416 0.3348416 0.03581213
     3 0.01357466 4 0.2669683 0.3438914 0.03620379
     4 0.01131222 5 0.2533937 0.3438914 0.03620379
    
     Variable importance
     X8 X10 X9 X7 X5 X14 X13 X6 X12 X3
     36 16 16 13 10 6 2 1 1 1
    
     Node number 1: 482 observations, complexity param=0.6651584
     predicted class=0 expected loss=0.4585062 P(node) =1
     class counts: 261 221
     probabilities: 0.541 0.459
     left son=2 (219 obs) right son=3 (263 obs)
     Primary splits:
     X8 splits as LR, improve=119.25990, (0 missing)
     X10 < 2.5 to the left, improve= 52.61262, (0 missing)
     X9 splits as LR, improve= 47.76803, (0 missing)
     X14 < 396 to the left, improve= 32.46584, (0 missing)
     X7 < 1.0425 to the left, improve= 31.53528, (0 missing)
     Surrogate splits:
     X9 splits as LR, agree=0.701, adj=0.342, (0 split)
     X10 < 0.5 to the left, agree=0.701, adj=0.342, (0 split)
     X7 < 0.435 to the left, agree=0.699, adj=0.338, (0 split)
     X5 splits as LLLLRRRRRRRRRR, agree=0.641, adj=0.210, (0 split)
     X14 < 127 to the left, agree=0.606, adj=0.132, (0 split)
    
     Node number 2: 219 observations
     predicted class=0 expected loss=0.07305936 P(node) =0.4543568
     class counts: 203 16
     probabilities: 0.927 0.073
    
     Node number 3: 263 observations, complexity param=0.02262443
     predicted class=1 expected loss=0.2205323 P(node) =0.5456432
     class counts: 58 205
     probabilities: 0.221 0.779
     left son=6 (99 obs) right son=7 (164 obs)
     Primary splits:
     X9 splits as LR, improve=11.902240, (0 missing)
     X10 < 0.5 to the left, improve=11.902240, (0 missing)
     X14 < 216.5 to the left, improve=10.195680, (0 missing)
     X5 splits as LLLLLLRRRRRRRR, improve= 7.627675, (0 missing)
     X13 < 72.5 to the right, improve= 6.568284, (0 missing)
     Surrogate splits:
     X10 < 0.5 to the left, agree=1.000, adj=1.000, (0 split)
     X14 < 3 to the left, agree=0.722, adj=0.263, (0 split)
     X12 splits as LR-, agree=0.688, adj=0.172, (0 split)
     X5 splits as RLRLRLRRRRRRRR, agree=0.665, adj=0.111, (0 split)
     X7 < 0.27 to the left, agree=0.665, adj=0.111, (0 split)
    
     Node number 6: 99 observations, complexity param=0.02262443
     predicted class=1 expected loss=0.4141414 P(node) =0.2053942
     class counts: 41 58
     probabilities: 0.414 0.586
     left son=12 (61 obs) right son=13 (38 obs)
     Primary splits:
     X13 < 111 to the right, improve=6.520991, (0 missing)
     X5 splits as LLLLLLLLRRRRRR, improve=5.770954, (0 missing)
     X6 splits as LRLRL-RR, improve=4.176207, (0 missing)
     X14 < 388.5 to the left, improve=3.403553, (0 missing)
     X3 < 2.52 to the left, improve=2.599301, (0 missing)
     Surrogate splits:
     X3 < 4.5625 to the left, agree=0.697, adj=0.211, (0 split)
     X2 < 22.835 to the right, agree=0.677, adj=0.158, (0 split)
     X5 splits as RRLLRLLLLRRLLL, agree=0.667, adj=0.132, (0 split)
     X7 < 0.02 to the right, agree=0.667, adj=0.132, (0 split)
     X6 splits as RRRLL-LR, agree=0.657, adj=0.105, (0 split)
    
     Node number 7: 164 observations
     predicted class=1 expected loss=0.1036585 P(node) =0.340249
     class counts: 17 147
     probabilities: 0.104 0.896
    
     Node number 12: 61 observations, complexity param=0.02262443
     predicted class=0 expected loss=0.442623 P(node) =0.126556
     class counts: 34 27
     probabilities: 0.557 0.443
     left son=24 (49 obs) right son=25 (12 obs)
     Primary splits:
     X5 splits as LLLL-LLLRLRRLL, improve=4.5609460, (0 missing)
     X14 < 126 to the left, improve=3.7856330, (0 missing)
     X6 splits as L--LL-R-, improve=3.2211680, (0 missing)
     X3 < 9.625 to the right, improve=1.0257110, (0 missing)
     X2 < 24.5 to the right, improve=0.9861812, (0 missing)
     Surrogate splits:
     X14 < 2202.5 to the left, agree=0.836, adj=0.167, (0 split)
     X3 < 11.3125 to the left, agree=0.820, adj=0.083, (0 split)
    
     Node number 13: 38 observations
     predicted class=1 expected loss=0.1842105 P(node) =0.07883817
     class counts: 7 31
     probabilities: 0.184 0.816
    
     Node number 24: 49 observations, complexity param=0.01357466
     predicted class=0 expected loss=0.3469388 P(node) =0.1016598
     class counts: 32 17
     probabilities: 0.653 0.347
     left son=48 (34 obs) right son=49 (15 obs)
     Primary splits:
     X6 splits as L--LL-R-, improve=2.7687880, (0 missing)
     X13 < 150 to the left, improve=2.3016430, (0 missing)
     X14 < 126 to the left, improve=2.2040820, (0 missing)
     X3 < 4.4575 to the right, improve=1.5322870, (0 missing)
     X5 splits as LLRL-LRR-R--RR, improve=0.8850852, (0 missing)
     Surrogate splits:
     X2 < 50.415 to the left, agree=0.735, adj=0.133, (0 split)
     X5 splits as LLLL-LLL-L--LR, agree=0.735, adj=0.133, (0 split)
     X7 < 2.625 to the left, agree=0.735, adj=0.133, (0 split)
    
     Node number 25: 12 observations
     predicted class=1 expected loss=0.1666667 P(node) =0.02489627
     class counts: 2 10
     probabilities: 0.167 0.833
    
     Node number 48: 34 observations
     predicted class=0 expected loss=0.2352941 P(node) =0.07053942
     class counts: 26 8
     probabilities: 0.765 0.235
    
     Node number 49: 15 observations
     predicted class=1 expected loss=0.4 P(node) =0.03112033
     class counts: 6 9
     probabilities: 0.400 0.600
    
     1 successful models have been tested
    
     Model rmse success_rate ti_error tii_error
     1 lda 0.3509821 0.8768116 0.03623188 0.08695652 0 1
     0.5507246 0.4492754
     7 successful models have been tested and 21 thresholds evaluated
    
     Model rmse Threshold success_rate ti_error tii_error
     1 binomial(logit) 0.3011696 1.00 0.5865385 0.4134615 0
     2 binomial(probit) 0.3016317 1.00 0.5865385 0.4134615 0
     3 binomial(cloglog) 0.3020186 1.00 0.5865385 0.4134615 0
     4 poisson(log) 0.3032150 0.95 0.6634615 0.3365385 0
     5 poisson(sqrt) 0.3063370 0.95 0.6490385 0.3509615 0
     6 gaussian 0.3109044 0.95 0.6442308 0.3557692 0
     7 poisson 0.3111360 1.00 0.6153846 0.3846154 0
     -- 1. Failure: Test GLM with Australian Credit (@test-OptimGLM.R#10) ----------
     class(summary(modelFit)$coef) not equal to "matrix".
     Lengths differ: 2 is not 1
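This failure is consistent with the matrix/array class change in R-devel (R >= 4.0.0), where class() on a matrix returns the length-2 vector c("matrix", "array") rather than the single string "matrix". A minimal sketch of an adjusted expectation, assuming the test only needs to confirm that the coefficient table is a matrix (the same adjustment would apply to the test-OptimLM.R failure further below):

    # Hypothetical rewrite of the expectation in tests/testthat/test-OptimGLM.R;
    # inherits() stays TRUE whether class() has one element or two.
    expect_true(inherits(summary(modelFit)$coef, "matrix"))
    # Equivalent alternative: compare only the first class element.
    expect_identical(class(summary(modelFit)$coef)[1], "matrix")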
    
     3 successful models have been tested
    
     Model rmse threshold success_rate ti_error tii_error
     1 LM 0.3109044 1 0.5625000 0.009615385 0.4278846
     2 SQRT.LM 0.4516999 1 0.5625000 0.009615385 0.4278846
     3 LOG.LM 1.1762341 1 0.5865385 0.413461538 0.0000000
     3 successful models have been tested
    
     Model rmse threshold success_rate ti_error tii_error
     1 LM 0.3109044 1 0.5625000 0.009615385 0.4278846
     2 SQRT.LM 0.4516999 1 0.5625000 0.009615385 0.4278846
     3 LOG.LM 1.1762341 1 0.5865385 0.413461538 0.0000000
     -- 2. Failure: Test LM with Australian Credit (@test-OptimLM.R#12) ------------
     class(summary(modelFit)$coef) not equal to "matrix".
     Lengths differ: 2 is not 1
    
     8 random variables have been tested
    
     Random_Variable aic bic rmse threshold success_rate ti_error
     1 X5 495.8364 600.7961 1.023786 1.70 0.8942308 0.08653846
     2 X1 497.6737 628.8733 1.035826 1.60 0.8942308 0.04807692
     3 X6 514.7091 645.9087 1.019398 1.50 0.8653846 0.04807692
     4 X11 524.0760 677.1422 1.016578 1.55 0.8750000 0.04807692
     5 X4 531.7380 684.8042 1.017809 1.30 0.8653846 0.03846154
     6 X9 534.2266 691.6661 1.016536 1.55 0.8750000 0.04807692
     7 X12 536.3424 689.4086 1.016180 1.55 0.8750000 0.04807692
     8 X8 537.4437 694.8833 1.016513 1.55 0.8750000 0.04807692
     tii_error
     1 0.01923077
     2 0.05769231
     3 0.08653846
     4 0.07692308
     5 0.09615385
     6 0.07692308
     7 0.07692308
     8 0.07692308
     8 random variables have been tested
    
     Random_Variable aic bic rmse threshold success_rate ti_error
     1 X5 495.8364 600.7961 1.023786 1.70 0.8942308 0.08653846
     2 X1 497.6737 628.8733 1.035826 1.60 0.8942308 0.04807692
     3 X6 514.7091 645.9087 1.019398 1.50 0.8653846 0.04807692
     4 X11 524.0760 677.1422 1.016578 1.55 0.8750000 0.04807692
     5 X4 531.7380 684.8042 1.017809 1.30 0.8653846 0.03846154
     6 X9 534.2266 691.6661 1.016536 1.55 0.8750000 0.04807692
     7 X12 536.3424 689.4086 1.016180 1.55 0.8750000 0.04807692
     8 X8 537.4437 694.8833 1.016513 1.55 0.8750000 0.04807692
     tii_error
     1 0.01923077
     2 0.05769231
     3 0.08653846
     4 0.07692308
     5 0.09615385
     6 0.07692308
     7 0.07692308
     8 0.07692308
     Warning: Thresholds' criteria not selected. The success rate is defined as the default.
    
     # weights: 37
     initial value 314.113022
     iter 10 value 305.860086
     iter 20 value 305.236595
     iter 30 value 305.199531
     final value 305.199440
     converged
     ----------- FAILURE REPORT --------------
     --- failure: the condition has length > 1 ---
     --- srcref ---
     :
     --- package (from environment) ---
     OptimClassifier
     --- call from context ---
     MC(y = y, yhat = CutR)
     --- call from argument ---
     if (class(yhat) != class(y)) {
     yhat <- as.numeric(yhat)
     y <- as.numeric(y)
     }
     --- R stacktrace ---
     where 1: MC(y = y, yhat = CutR)
     where 2: FUN(X[[i]], ...)
     where 3: lapply(thresholdsused, threshold, y = testing[, response_variable],
     yhat = predicts[[k]], categories = Names)
     where 4 at testthat/test-OptimNN.R#4: Optim.NN(Y ~ ., AustralianCredit, p = 0.65, seed = 2018)
     where 5: eval(code, test_env)
     where 6: eval(code, test_env)
     where 7: withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error)
     where 8: doTryCatch(return(expr), name, parentenv, handler)
     where 9: tryCatchOne(expr, names, parentenv, handlers[[1L]])
     where 10: tryCatchList(expr, names[-nh], parentenv, handlers[-nh])
     where 11: doTryCatch(return(expr), name, parentenv, handler)
     where 12: tryCatchOne(tryCatchList(expr, names[-nh], parentenv, handlers[-nh]),
     names[nh], parentenv, handlers[[nh]])
     where 13: tryCatchList(expr, classes, parentenv, handlers)
     where 14: tryCatch(withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error), error = handle_fatal,
     skip = function(e) {
     })
     where 15: test_code(desc, code, env = parent.frame())
     where 16 at testthat/test-OptimNN.R#3: test_that("Test example with Australian Credit Dataset for NN",
     {
     modelFit <- Optim.NN(Y ~ ., AustralianCredit, p = 0.65,
     seed = 2018)
     expect_equal(class(modelFit), "Optim")
     print(modelFit)
     print(modelFit, plain = TRUE)
     expect_equal(class(summary(modelFit)$value), "numeric")
     })
     where 17: eval(code, test_env)
     where 18: eval(code, test_env)
     where 19: withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error)
     where 20: doTryCatch(return(expr), name, parentenv, handler)
     where 21: tryCatchOne(expr, names, parentenv, handlers[[1L]])
     where 22: tryCatchList(expr, names[-nh], parentenv, handlers[-nh])
     where 23: doTryCatch(return(expr), name, parentenv, handler)
     where 24: tryCatchOne(tryCatchList(expr, names[-nh], parentenv, handlers[-nh]),
     names[nh], parentenv, handlers[[nh]])
     where 25: tryCatchList(expr, classes, parentenv, handlers)
     where 26: tryCatch(withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error), error = handle_fatal,
     skip = function(e) {
     })
     where 27: test_code(NULL, exprs, env)
     where 28: source_file(path, new.env(parent = env), chdir = TRUE, wrap = wrap)
     where 29: force(code)
     where 30: doWithOneRestart(return(expr), restart)
     where 31: withOneRestart(expr, restarts[[1L]])
     where 32: withRestarts(testthat_abort_reporter = function() NULL, force(code))
     where 33: with_reporter(reporter = reporter, start_end_reporter = start_end_reporter,
     {
     reporter$start_file(basename(path))
     lister$start_file(basename(path))
     source_file(path, new.env(parent = env), chdir = TRUE,
     wrap = wrap)
     reporter$.end_context()
     reporter$end_file()
     })
     where 34: FUN(X[[i]], ...)
     where 35: lapply(paths, test_file, env = env, reporter = current_reporter,
     start_end_reporter = FALSE, load_helpers = FALSE, wrap = wrap)
     where 36: force(code)
     where 37: doWithOneRestart(return(expr), restart)
     where 38: withOneRestart(expr, restarts[[1L]])
     where 39: withRestarts(testthat_abort_reporter = function() NULL, force(code))
     where 40: with_reporter(reporter = current_reporter, results <- lapply(paths,
     test_file, env = env, reporter = current_reporter, start_end_reporter = FALSE,
     load_helpers = FALSE, wrap = wrap))
     where 41: test_files(paths, reporter = reporter, env = env, stop_on_failure = stop_on_failure,
     stop_on_warning = stop_on_warning, wrap = wrap)
     where 42: test_dir(path = test_path, reporter = reporter, env = env, filter = filter,
     ..., stop_on_failure = stop_on_failure, stop_on_warning = stop_on_warning,
     wrap = wrap)
     where 43: test_package_dir(package = package, test_path = test_path, filter = filter,
     reporter = reporter, ..., stop_on_failure = stop_on_failure,
     stop_on_warning = stop_on_warning, wrap = wrap)
     where 44: test_check("OptimClassifier")
    
     --- value of length: 2 type: logical ---
     [1] TRUE TRUE
     --- function from context ---
     function (yhat, y, metrics = FALSE)
     {
     if (class(yhat) != class(y)) {
     yhat <- as.numeric(yhat)
     y <- as.numeric(y)
     }
     Real <- y
     Estimated <- yhat
     MC <- table(Estimated, Real)
     Success_rate <- (sum(diag(MC)))/sum(MC)
     tI_error <- sum(MC[upper.tri(MC, diag = FALSE)])/sum(MC)
     tII_error <- sum(MC[lower.tri(MC, diag = FALSE)])/sum(MC)
     General_metrics <- data.frame(Success_rate = Success_rate,
     tI_error = tI_error, tII_error = tII_error)
     if (metrics == TRUE) {
     Real_cases <- colSums(MC)
     Sensitivity <- diag(MC)/colSums(MC)
     Prevalence <- Real_cases/sum(MC)
     Specificity_F <- function(N, Matrix) {
     sum(diag(Matrix)[-N])/sum(colSums(Matrix)[-N])
     }
     Precision_F <- function(N, Matrix) {
     diag(Matrix)[N]/sum(diag(Matrix))
     }
     Specificity <- unlist(lapply(X = 1:nrow(MC), FUN = Specificity_F,
     Matrix = MC))
     Precision <- unlist(lapply(X = 1:nrow(MC), FUN = Precision_F,
     Matrix = MC))
     Categories <- names(Precision)
     Categorical_Metrics <- data.frame(Categories, Sensitivity,
     Prevalence, Specificity, Precision)
     output <- list(MC, General_metrics, Categorical_Metrics)
     }
     else {
     output <- MC
     }
     return(output)
     }
     <bytecode: 0x2d491c0>
     <environment: namespace:OptimClassifier>
     --- function search by body ---
     Function MC in namespace OptimClassifier has this body.
     ----------- END OF FAILURE REPORT --------------
     Fatal error: the condition has length > 1
Flavor: r-devel-linux-x86_64-debian-clang
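The fatal error itself comes from the class comparison inside OptimClassifier's internal MC() function, shown in the failure report above: the r-devel checks treat an if() condition of length greater than one as an error, and class(yhat) != class(y) evaluates to the length-2 logical c(TRUE, TRUE) once one of the two objects is a matrix (class c("matrix", "array")). A minimal sketch of a length-one-safe rewrite, assuming the intent of the original check is simply to coerce both objects to numeric whenever their classes differ:

    # Hypothetical replacement for the condition in OptimClassifier's MC():
    # identical() always returns a single TRUE/FALSE, so the if() condition
    # keeps length one even when class() returns c("matrix", "array").
    if (!identical(class(yhat), class(y))) {
        yhat <- as.numeric(yhat)
        y <- as.numeric(y)
    }

The matching r-devel-linux-x86_64-debian-gcc failure below would be addressed by the same change.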

Version: 0.1.4
Check: tests
Result: ERROR
     Running ‘testthat.R’ [7s/10s]
    Running the tests in ‘tests/testthat.R’ failed.
    Complete output:
     > library(testthat)
     > library(OptimClassifier)
     >
     > test_check("OptimClassifier")
     6 successful models have been tested
    
     CP rmse success_rate ti_error tii_error Nnodes
     1 0.002262443 0.3602883 0.8701923 0.04807692 0.08173077 17
     2 0.009049774 0.3466876 0.8798077 0.03846154 0.08173077 15
     3 0.011312217 0.3325311 0.8894231 0.04807692 0.06250000 11
     4 0.013574661 0.3325311 0.8894231 0.05769231 0.05288462 9
     5 0.022624434 0.3535534 0.8750000 0.03365385 0.09134615 3
     6 0.665158371 0.6430097 0.5865385 0.41346154 0.00000000 1
     6 successful models have been tested
    
     CP rmse success_rate ti_error tii_error Nnodes
     1 0.002262443 0.3602883 0.8701923 0.04807692 0.08173077 17
     2 0.009049774 0.3466876 0.8798077 0.03846154 0.08173077 15
     3 0.011312217 0.3325311 0.8894231 0.04807692 0.06250000 11
     4 0.013574661 0.3325311 0.8894231 0.05769231 0.05288462 9
     5 0.022624434 0.3535534 0.8750000 0.03365385 0.09134615 3
     6 0.665158371 0.6430097 0.5865385 0.41346154 0.00000000 1
     Call:
     rpart::rpart(formula = formula, data = training, na.action = rpart::na.rpart,
     model = FALSE, x = FALSE, y = FALSE, cp = 0)
     n= 482
    
     CP nsplit rel error xerror xstd
     1 0.66515837 0 1.0000000 1.0000000 0.04949948
     2 0.02262443 1 0.3348416 0.3348416 0.03581213
     3 0.01357466 4 0.2669683 0.3438914 0.03620379
     4 0.01131222 5 0.2533937 0.3438914 0.03620379
    
     Variable importance
     X8 X10 X9 X7 X5 X14 X13 X6 X12 X3
     36 16 16 13 10 6 2 1 1 1
    
     Node number 1: 482 observations, complexity param=0.6651584
     predicted class=0 expected loss=0.4585062 P(node) =1
     class counts: 261 221
     probabilities: 0.541 0.459
     left son=2 (219 obs) right son=3 (263 obs)
     Primary splits:
     X8 splits as LR, improve=119.25990, (0 missing)
     X10 < 2.5 to the left, improve= 52.61262, (0 missing)
     X9 splits as LR, improve= 47.76803, (0 missing)
     X14 < 396 to the left, improve= 32.46584, (0 missing)
     X7 < 1.0425 to the left, improve= 31.53528, (0 missing)
     Surrogate splits:
     X9 splits as LR, agree=0.701, adj=0.342, (0 split)
     X10 < 0.5 to the left, agree=0.701, adj=0.342, (0 split)
     X7 < 0.435 to the left, agree=0.699, adj=0.338, (0 split)
     X5 splits as LLLLRRRRRRRRRR, agree=0.641, adj=0.210, (0 split)
     X14 < 127 to the left, agree=0.606, adj=0.132, (0 split)
    
     Node number 2: 219 observations
     predicted class=0 expected loss=0.07305936 P(node) =0.4543568
     class counts: 203 16
     probabilities: 0.927 0.073
    
     Node number 3: 263 observations, complexity param=0.02262443
     predicted class=1 expected loss=0.2205323 P(node) =0.5456432
     class counts: 58 205
     probabilities: 0.221 0.779
     left son=6 (99 obs) right son=7 (164 obs)
     Primary splits:
     X9 splits as LR, improve=11.902240, (0 missing)
     X10 < 0.5 to the left, improve=11.902240, (0 missing)
     X14 < 216.5 to the left, improve=10.195680, (0 missing)
     X5 splits as LLLLLLRRRRRRRR, improve= 7.627675, (0 missing)
     X13 < 72.5 to the right, improve= 6.568284, (0 missing)
     Surrogate splits:
     X10 < 0.5 to the left, agree=1.000, adj=1.000, (0 split)
     X14 < 3 to the left, agree=0.722, adj=0.263, (0 split)
     X12 splits as LR-, agree=0.688, adj=0.172, (0 split)
     X5 splits as RLRLRLRRRRRRRR, agree=0.665, adj=0.111, (0 split)
     X7 < 0.27 to the left, agree=0.665, adj=0.111, (0 split)
    
     Node number 6: 99 observations, complexity param=0.02262443
     predicted class=1 expected loss=0.4141414 P(node) =0.2053942
     class counts: 41 58
     probabilities: 0.414 0.586
     left son=12 (61 obs) right son=13 (38 obs)
     Primary splits:
     X13 < 111 to the right, improve=6.520991, (0 missing)
     X5 splits as LLLLLLLLRRRRRR, improve=5.770954, (0 missing)
     X6 splits as LRLRL-RR, improve=4.176207, (0 missing)
     X14 < 388.5 to the left, improve=3.403553, (0 missing)
     X3 < 2.52 to the left, improve=2.599301, (0 missing)
     Surrogate splits:
     X3 < 4.5625 to the left, agree=0.697, adj=0.211, (0 split)
     X2 < 22.835 to the right, agree=0.677, adj=0.158, (0 split)
     X5 splits as RRLLRLLLLRRLLL, agree=0.667, adj=0.132, (0 split)
     X7 < 0.02 to the right, agree=0.667, adj=0.132, (0 split)
     X6 splits as RRRLL-LR, agree=0.657, adj=0.105, (0 split)
    
     Node number 7: 164 observations
     predicted class=1 expected loss=0.1036585 P(node) =0.340249
     class counts: 17 147
     probabilities: 0.104 0.896
    
     Node number 12: 61 observations, complexity param=0.02262443
     predicted class=0 expected loss=0.442623 P(node) =0.126556
     class counts: 34 27
     probabilities: 0.557 0.443
     left son=24 (49 obs) right son=25 (12 obs)
     Primary splits:
     X5 splits as LLLL-LLLRLRRLL, improve=4.5609460, (0 missing)
     X14 < 126 to the left, improve=3.7856330, (0 missing)
     X6 splits as L--LL-R-, improve=3.2211680, (0 missing)
     X3 < 9.625 to the right, improve=1.0257110, (0 missing)
     X2 < 24.5 to the right, improve=0.9861812, (0 missing)
     Surrogate splits:
     X14 < 2202.5 to the left, agree=0.836, adj=0.167, (0 split)
     X3 < 11.3125 to the left, agree=0.820, adj=0.083, (0 split)
    
     Node number 13: 38 observations
     predicted class=1 expected loss=0.1842105 P(node) =0.07883817
     class counts: 7 31
     probabilities: 0.184 0.816
    
     Node number 24: 49 observations, complexity param=0.01357466
     predicted class=0 expected loss=0.3469388 P(node) =0.1016598
     class counts: 32 17
     probabilities: 0.653 0.347
     left son=48 (34 obs) right son=49 (15 obs)
     Primary splits:
     X6 splits as L--LL-R-, improve=2.7687880, (0 missing)
     X13 < 150 to the left, improve=2.3016430, (0 missing)
     X14 < 126 to the left, improve=2.2040820, (0 missing)
     X3 < 4.4575 to the right, improve=1.5322870, (0 missing)
     X5 splits as LLRL-LRR-R--RR, improve=0.8850852, (0 missing)
     Surrogate splits:
     X2 < 50.415 to the left, agree=0.735, adj=0.133, (0 split)
     X5 splits as LLLL-LLL-L--LR, agree=0.735, adj=0.133, (0 split)
     X7 < 2.625 to the left, agree=0.735, adj=0.133, (0 split)
    
     Node number 25: 12 observations
     predicted class=1 expected loss=0.1666667 P(node) =0.02489627
     class counts: 2 10
     probabilities: 0.167 0.833
    
     Node number 48: 34 observations
     predicted class=0 expected loss=0.2352941 P(node) =0.07053942
     class counts: 26 8
     probabilities: 0.765 0.235
    
     Node number 49: 15 observations
     predicted class=1 expected loss=0.4 P(node) =0.03112033
     class counts: 6 9
     probabilities: 0.400 0.600
    
     1 successful models have been tested
    
     Model rmse success_rate ti_error tii_error
     1 lda 0.3509821 0.8768116 0.03623188 0.08695652 0 1
     0.5507246 0.4492754
     7 successful models have been tested and 21 thresholds evaluated
    
     Model rmse Threshold success_rate ti_error tii_error
     1 binomial(logit) 0.3011696 1.00 0.5865385 0.4134615 0
     2 binomial(probit) 0.3016317 1.00 0.5865385 0.4134615 0
     3 binomial(cloglog) 0.3020186 1.00 0.5865385 0.4134615 0
     4 poisson(log) 0.3032150 0.95 0.6634615 0.3365385 0
     5 poisson(sqrt) 0.3063370 0.95 0.6490385 0.3509615 0
     6 gaussian 0.3109044 0.95 0.6442308 0.3557692 0
     7 poisson 0.3111360 1.00 0.6153846 0.3846154 0
     ── 1. Failure: Test GLM with Australian Credit (@test-OptimGLM.R#10) ──────────
     class(summary(modelFit)$coef) not equal to "matrix".
     Lengths differ: 2 is not 1
    
     3 successful models have been tested
    
     Model rmse threshold success_rate ti_error tii_error
     1 LM 0.3109044 1 0.5625000 0.009615385 0.4278846
     2 SQRT.LM 0.4516999 1 0.5625000 0.009615385 0.4278846
     3 LOG.LM 1.1762341 1 0.5865385 0.413461538 0.0000000
     3 successful models have been tested
    
     Model rmse threshold success_rate ti_error tii_error
     1 LM 0.3109044 1 0.5625000 0.009615385 0.4278846
     2 SQRT.LM 0.4516999 1 0.5625000 0.009615385 0.4278846
     3 LOG.LM 1.1762341 1 0.5865385 0.413461538 0.0000000
     ── 2. Failure: Test LM with Australian Credit (@test-OptimLM.R#12) ────────────
     class(summary(modelFit)$coef) not equal to "matrix".
     Lengths differ: 2 is not 1
    
     8 random variables have been tested
    
     Random_Variable aic bic rmse threshold success_rate ti_error
     1 X5 495.8364 600.7961 1.023786 1.70 0.8942308 0.08653846
     2 X1 497.6737 628.8733 1.035826 1.60 0.8942308 0.04807692
     3 X6 514.7091 645.9087 1.019398 1.50 0.8653846 0.04807692
     4 X11 524.0760 677.1422 1.016578 1.55 0.8750000 0.04807692
     5 X4 531.7380 684.8042 1.017809 1.30 0.8653846 0.03846154
     6 X9 534.2266 691.6661 1.016536 1.55 0.8750000 0.04807692
     7 X12 536.3424 689.4086 1.016180 1.55 0.8750000 0.04807692
     8 X8 537.4437 694.8833 1.016513 1.55 0.8750000 0.04807692
     tii_error
     1 0.01923077
     2 0.05769231
     3 0.08653846
     4 0.07692308
     5 0.09615385
     6 0.07692308
     7 0.07692308
     8 0.07692308
     8 random variables have been tested
    
     Random_Variable aic bic rmse threshold success_rate ti_error
     1 X5 495.8364 600.7961 1.023786 1.70 0.8942308 0.08653846
     2 X1 497.6737 628.8733 1.035826 1.60 0.8942308 0.04807692
     3 X6 514.7091 645.9087 1.019398 1.50 0.8653846 0.04807692
     4 X11 524.0760 677.1422 1.016578 1.55 0.8750000 0.04807692
     5 X4 531.7380 684.8042 1.017809 1.30 0.8653846 0.03846154
     6 X9 534.2266 691.6661 1.016536 1.55 0.8750000 0.04807692
     7 X12 536.3424 689.4086 1.016180 1.55 0.8750000 0.04807692
     8 X8 537.4437 694.8833 1.016513 1.55 0.8750000 0.04807692
     tii_error
     1 0.01923077
     2 0.05769231
     3 0.08653846
     4 0.07692308
     5 0.09615385
     6 0.07692308
     7 0.07692308
     8 0.07692308
     Warning: Thresholds' criteria not selected. The success rate is defined as the default.
    
     # weights: 37
     initial value 314.113022
     iter 10 value 305.860086
     iter 20 value 305.236595
     iter 30 value 305.199531
     final value 305.199440
     converged
     ----------- FAILURE REPORT --------------
     --- failure: the condition has length > 1 ---
     --- srcref ---
     :
     --- package (from environment) ---
     OptimClassifier
     --- call from context ---
     MC(y = y, yhat = CutR)
     --- call from argument ---
     if (class(yhat) != class(y)) {
     yhat <- as.numeric(yhat)
     y <- as.numeric(y)
     }
     --- R stacktrace ---
     where 1: MC(y = y, yhat = CutR)
     where 2: FUN(X[[i]], ...)
     where 3: lapply(thresholdsused, threshold, y = testing[, response_variable],
     yhat = predicts[[k]], categories = Names)
     where 4 at testthat/test-OptimNN.R#4: Optim.NN(Y ~ ., AustralianCredit, p = 0.65, seed = 2018)
     where 5: eval(code, test_env)
     where 6: eval(code, test_env)
     where 7: withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error)
     where 8: doTryCatch(return(expr), name, parentenv, handler)
     where 9: tryCatchOne(expr, names, parentenv, handlers[[1L]])
     where 10: tryCatchList(expr, names[-nh], parentenv, handlers[-nh])
     where 11: doTryCatch(return(expr), name, parentenv, handler)
     where 12: tryCatchOne(tryCatchList(expr, names[-nh], parentenv, handlers[-nh]),
     names[nh], parentenv, handlers[[nh]])
     where 13: tryCatchList(expr, classes, parentenv, handlers)
     where 14: tryCatch(withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error), error = handle_fatal,
     skip = function(e) {
     })
     where 15: test_code(desc, code, env = parent.frame())
     where 16 at testthat/test-OptimNN.R#3: test_that("Test example with Australian Credit Dataset for NN",
     {
     modelFit <- Optim.NN(Y ~ ., AustralianCredit, p = 0.65,
     seed = 2018)
     expect_equal(class(modelFit), "Optim")
     print(modelFit)
     print(modelFit, plain = TRUE)
     expect_equal(class(summary(modelFit)$value), "numeric")
     })
     where 17: eval(code, test_env)
     where 18: eval(code, test_env)
     where 19: withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error)
     where 20: doTryCatch(return(expr), name, parentenv, handler)
     where 21: tryCatchOne(expr, names, parentenv, handlers[[1L]])
     where 22: tryCatchList(expr, names[-nh], parentenv, handlers[-nh])
     where 23: doTryCatch(return(expr), name, parentenv, handler)
     where 24: tryCatchOne(tryCatchList(expr, names[-nh], parentenv, handlers[-nh]),
     names[nh], parentenv, handlers[[nh]])
     where 25: tryCatchList(expr, classes, parentenv, handlers)
     where 26: tryCatch(withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error), error = handle_fatal,
     skip = function(e) {
     })
     where 27: test_code(NULL, exprs, env)
     where 28: source_file(path, new.env(parent = env), chdir = TRUE, wrap = wrap)
     where 29: force(code)
     where 30: doWithOneRestart(return(expr), restart)
     where 31: withOneRestart(expr, restarts[[1L]])
     where 32: withRestarts(testthat_abort_reporter = function() NULL, force(code))
     where 33: with_reporter(reporter = reporter, start_end_reporter = start_end_reporter,
     {
     reporter$start_file(basename(path))
     lister$start_file(basename(path))
     source_file(path, new.env(parent = env), chdir = TRUE,
     wrap = wrap)
     reporter$.end_context()
     reporter$end_file()
     })
     where 34: FUN(X[[i]], ...)
     where 35: lapply(paths, test_file, env = env, reporter = current_reporter,
     start_end_reporter = FALSE, load_helpers = FALSE, wrap = wrap)
     where 36: force(code)
     where 37: doWithOneRestart(return(expr), restart)
     where 38: withOneRestart(expr, restarts[[1L]])
     where 39: withRestarts(testthat_abort_reporter = function() NULL, force(code))
     where 40: with_reporter(reporter = current_reporter, results <- lapply(paths,
     test_file, env = env, reporter = current_reporter, start_end_reporter = FALSE,
     load_helpers = FALSE, wrap = wrap))
     where 41: test_files(paths, reporter = reporter, env = env, stop_on_failure = stop_on_failure,
     stop_on_warning = stop_on_warning, wrap = wrap)
     where 42: test_dir(path = test_path, reporter = reporter, env = env, filter = filter,
     ..., stop_on_failure = stop_on_failure, stop_on_warning = stop_on_warning,
     wrap = wrap)
     where 43: test_package_dir(package = package, test_path = test_path, filter = filter,
     reporter = reporter, ..., stop_on_failure = stop_on_failure,
     stop_on_warning = stop_on_warning, wrap = wrap)
     where 44: test_check("OptimClassifier")
    
     --- value of length: 2 type: logical ---
     [1] TRUE TRUE
     --- function from context ---
     function (yhat, y, metrics = FALSE)
     {
     if (class(yhat) != class(y)) {
     yhat <- as.numeric(yhat)
     y <- as.numeric(y)
     }
     Real <- y
     Estimated <- yhat
     MC <- table(Estimated, Real)
     Success_rate <- (sum(diag(MC)))/sum(MC)
     tI_error <- sum(MC[upper.tri(MC, diag = FALSE)])/sum(MC)
     tII_error <- sum(MC[lower.tri(MC, diag = FALSE)])/sum(MC)
     General_metrics <- data.frame(Success_rate = Success_rate,
     tI_error = tI_error, tII_error = tII_error)
     if (metrics == TRUE) {
     Real_cases <- colSums(MC)
     Sensitivity <- diag(MC)/colSums(MC)
     Prevalence <- Real_cases/sum(MC)
     Specificity_F <- function(N, Matrix) {
     sum(diag(Matrix)[-N])/sum(colSums(Matrix)[-N])
     }
     Precision_F <- function(N, Matrix) {
     diag(Matrix)[N]/sum(diag(Matrix))
     }
     Specificity <- unlist(lapply(X = 1:nrow(MC), FUN = Specificity_F,
     Matrix = MC))
     Precision <- unlist(lapply(X = 1:nrow(MC), FUN = Precision_F,
     Matrix = MC))
     Categories <- names(Precision)
     Categorical_Metrics <- data.frame(Categories, Sensitivity,
     Prevalence, Specificity, Precision)
     output <- list(MC, General_metrics, Categorical_Metrics)
     }
     else {
     output <- MC
     }
     return(output)
     }
     <bytecode: 0x55eaea167e20>
     <environment: namespace:OptimClassifier>
     --- function search by body ---
     Function MC in namespace OptimClassifier has this body.
     ----------- END OF FAILURE REPORT --------------
     Fatal error: the condition has length > 1
Flavor: r-devel-linux-x86_64-debian-gcc