CRAN Package Check Results for Package stm

Last updated on 2019-11-26 00:52:14 CET.

Flavor                              Version  Tinstall  Tcheck  Ttotal  Status  Flags
r-devel-linux-x86_64-debian-clang   1.3.4    46.55     121.80  168.35  ERROR
r-devel-linux-x86_64-debian-gcc     1.3.4    34.73     91.42   126.15  ERROR
r-devel-linux-x86_64-fedora-clang   1.3.4                      385.92  OK
r-devel-linux-x86_64-fedora-gcc     1.3.4                      381.38  OK
r-devel-windows-ix86+x86_64         1.3.4    115.00    773.00  888.00  OK
r-devel-windows-ix86+x86_64-gcc8    1.3.4    92.00     692.00  784.00  OK
r-patched-linux-x86_64              1.3.4    34.55     280.33  314.88  OK
r-patched-solaris-x86               1.3.4                      511.30  OK
r-release-linux-x86_64              1.3.4    36.29     282.02  318.31  OK
r-release-windows-ix86+x86_64       1.3.4    107.00    779.00  886.00  OK
r-release-osx-x86_64                1.3.4                              NOTE
r-oldrel-windows-ix86+x86_64        1.3.4    57.00     465.00  522.00  ERROR
r-oldrel-osx-x86_64                 1.3.4                              OK

Check Details

Version: 1.3.4
Check: examples
Result: ERROR
    Running examples in 'stm-Ex.R' failed
    The error most likely occurred in:
    
    > base::assign(".ptime", proc.time(), pos = "CheckExEnv")
    > ### Name: alignCorpus
    > ### Title: Align the vocabulary of a new corpus to an old corpus
    > ### Aliases: alignCorpus
    >
    > ### ** Examples
    >
    > #we process an original set that is just the first 100 documents
    > temp<-textProcessor(documents=gadarian$open.ended.response[1:100],metadata=gadarian[1:100,])
    Building corpus...
    Converting to Lower Case...
    Removing punctuation...
    Removing stopwords...
    Removing numbers...
    Stemming...
    Creating Output...
    > out <- prepDocuments(temp$documents, temp$vocab, temp$meta)
    Removing 329 of 512 terms (329 of 1139 tokens) due to frequency
    Removing 2 Documents with No Words
    Your corpus now has 98 documents, 183 terms and 810 tokens.
    > set.seed(02138)
    > #Maximum EM its is set low to make this run fast, run models to convergence!
    > mod.out <- stm(out$documents, out$vocab, 3, prevalence=~treatment + s(pid_rep),
    + data=out$meta, max.em.its=5)
    Beginning Spectral Initialization
     Calculating the gram matrix...
     Finding anchor words...
     ...
     Recovering initialization...
     .
    Initialization complete.
     ----------- FAILURE REPORT --------------
     --- failure: the condition has length > 1 ---
     --- srcref ---
    :
     --- package (from environment) ---
    stm
     --- call from context ---
    estep(documents = documents, beta.index = betaindex, update.mu = (!is.null(mu$gamma)),
     beta$beta, lambda, mu$mu, sigma, verbose)
     --- call from argument ---
    if (class(sigobj) == "try-error") {
     sigmaentropy <- (0.5 * determinant(sigma, logarithm = TRUE)$modulus[1])
     siginv <- solve(sigma)
    } else {
     sigmaentropy <- sum(log(diag(sigobj)))
     siginv <- chol2inv(sigobj)
    }
     --- R stacktrace ---
    where 1: estep(documents = documents, beta.index = betaindex, update.mu = (!is.null(mu$gamma)),
     beta$beta, lambda, mu$mu, sigma, verbose)
    where 2: stm.control(documents, vocab, settings, model)
    where 3: stm(out$documents, out$vocab, 3, prevalence = ~treatment + s(pid_rep),
     data = out$meta, max.em.its = 5)
    
     --- value of length: 2 type: logical ---
    [1] FALSE FALSE
     --- function from context ---
    function (documents, beta.index, update.mu, beta, lambda.old,
     mu, sigma, verbose)
    {
     V <- ncol(beta[[1]])
     K <- nrow(beta[[1]])
     N <- length(documents)
     A <- length(beta)
     ctevery <- ifelse(N > 100, floor(N/100), 1)
     if (!update.mu)
     mu.i <- as.numeric(mu)
     sigma.ss <- diag(0, nrow = (K - 1))
     beta.ss <- vector(mode = "list", length = A)
     for (i in 1:A) {
     beta.ss[[i]] <- matrix(0, nrow = K, ncol = V)
     }
     bound <- vector(length = N)
     lambda <- vector("list", length = N)
     sigobj <- try(chol.default(sigma), silent = TRUE)
     if (class(sigobj) == "try-error") {
     sigmaentropy <- (0.5 * determinant(sigma, logarithm = TRUE)$modulus[1])
     siginv <- solve(sigma)
     }
     else {
     sigmaentropy <- sum(log(diag(sigobj)))
     siginv <- chol2inv(sigobj)
     }
     for (i in 1:N) {
     doc <- documents[[i]]
     words <- doc[1, ]
     aspect <- beta.index[i]
     init <- lambda.old[i, ]
     if (update.mu)
     mu.i <- mu[, i]
     beta.i <- beta[[aspect]][, words, drop = FALSE]
     doc.results <- logisticnormalcpp(eta = init, mu = mu.i,
     siginv = siginv, beta = beta.i, doc = doc, sigmaentropy = sigmaentropy)
     sigma.ss <- sigma.ss + doc.results$eta$nu
     beta.ss[[aspect]][, words] <- doc.results$phis + beta.ss[[aspect]][,
     words]
     bound[i] <- doc.results$bound
     lambda[[i]] <- c(doc.results$eta$lambda)
     if (verbose && i%%ctevery == 0)
     cat(".")
     }
     if (verbose)
     cat("\n")
     lambda <- do.call(rbind, lambda)
     return(list(sigma = sigma.ss, beta = beta.ss, bound = bound,
     lambda = lambda))
    }
    <bytecode: 0x8de1120>
    <environment: namespace:stm>
     --- function search by body ---
    Function estep in namespace stm has this body.
     ----------- END OF FAILURE REPORT --------------
    Fatal error: the condition has length > 1
Flavor: r-devel-linux-x86_64-debian-clang
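
The root cause of the estep failure above is the guard `if (class(sigobj) == "try-error")`. On the success path `chol.default()` returns a matrix, and in R-devel (the upcoming R 4.0.0) `class()` of a matrix is the length-2 vector `c("matrix", "array")`, so the `==` comparison yields the reported length-2 logical `c(FALSE, FALSE)`, which `if()` now treats as a fatal error. A minimal sketch of the robust pattern, using `inherits()` in place of the `class() ==` comparison (`sigma` here is a stand-in covariance matrix, not stm's actual data):

```r
# Reproduce the estep pattern safely: try a Cholesky factorization and
# fall back to a direct inverse if it fails.
sigma  <- diag(2)                               # any positive-definite matrix
sigobj <- try(chol.default(sigma), silent = TRUE)

# Fragile: class(sigobj) is c("matrix", "array") on R >= 4.0.0, so
# `class(sigobj) == "try-error"` has length 2 on the success path.
# Robust: inherits() always returns a single TRUE/FALSE.
if (inherits(sigobj, "try-error")) {
  sigmaentropy <- 0.5 * determinant(sigma, logarithm = TRUE)$modulus[1]
  siginv <- solve(sigma)                        # direct inverse fallback
} else {
  sigmaentropy <- sum(log(diag(sigobj)))        # entropy from Cholesky diagonal
  siginv <- chol2inv(sigobj)                    # reuse the Cholesky factor
}
```

The same `inherits()` substitution applies to every `class(x) == "try-error"` check quoted in these logs.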

Version: 1.3.4
Check: tests
Result: ERROR
     Running 'spelling.R' [0s/1s]
     Running 'testthat.R' [11s/13s]
    Running the tests in 'tests/testthat.R' failed.
    Complete output:
     > library(testthat)
     > library(stm)
     stm v1.3.4 successfully loaded. See ?stm for help.
     Papers, resources, and other materials at structuraltopicmodel.com
     >
     > test_check("stm")
     Building corpus...
     Converting to Lower Case...
     Removing punctuation...
     Removing stopwords...
     Removing numbers...
     Stemming...
     Creating Output...
     Removed 1098 of 1102 terms (3659 of 4143 tokens) due to sparselevel of 0.8
     Removing 2 of 4 terms (223 of 377 tokens) due to frequency
     Removing 117 Documents with No Words
     Your corpus now has 135 documents, 2 terms and 154 tokens.
     Beginning Spectral Initialization
     Calculating the gram matrix...
     Finding anchor words...
     ...
     Recovering initialization...
     ...........
     Initialization complete.
     ----------- FAILURE REPORT --------------
     --- failure: the condition has length > 1 ---
     --- srcref ---
     :
     --- package (from environment) ---
     stm
     --- call from context ---
     estep(documents = documents, beta.index = betaindex, update.mu = (!is.null(mu$gamma)),
     beta$beta, lambda, mu$mu, sigma, verbose)
     --- call from argument ---
     if (class(sigobj) == "try-error") {
     sigmaentropy <- (0.5 * determinant(sigma, logarithm = TRUE)$modulus[1])
     siginv <- solve(sigma)
     } else {
     sigmaentropy <- sum(log(diag(sigobj)))
     siginv <- chol2inv(sigobj)
     }
     --- R stacktrace ---
     where 1: estep(documents = documents, beta.index = betaindex, update.mu = (!is.null(mu$gamma)),
     beta$beta, lambda, mu$mu, sigma, verbose)
     where 2: stm.control(documents, vocab, settings, model)
     where 3 at testthat/test-quanteda-stm.R#13: stm(gadarian_dfm, K = 3, prevalence = ~treatment + s(pid_rep),
     data = docvars(gadarian_corpus), max.em.its = 2)
     where 4: eval(code, test_env)
     where 5: eval(code, test_env)
     where 6: withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error)
     where 7: doTryCatch(return(expr), name, parentenv, handler)
     where 8: tryCatchOne(expr, names, parentenv, handlers[[1L]])
     where 9: tryCatchList(expr, names[-nh], parentenv, handlers[-nh])
     where 10: doTryCatch(return(expr), name, parentenv, handler)
     where 11: tryCatchOne(tryCatchList(expr, names[-nh], parentenv, handlers[-nh]),
     names[nh], parentenv, handlers[[nh]])
     where 12: tryCatchList(expr, classes, parentenv, handlers)
     where 13: tryCatch(withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error), error = handle_fatal,
     skip = function(e) {
     })
     where 14: test_code(desc, code, env = parent.frame())
     where 15 at testthat/test-quanteda-stm.R#5: test_that("Test that stm works on a quanteda dfm", {
     require(quanteda)
     if (utils::compareVersion(as.character(utils::packageVersion("quanteda")),
     "0.9.9-31") >= 0) {
     gadarian_corpus <- corpus(gadarian, text_field = "open.ended.response")
     gadarian_dfm <- dfm(gadarian_corpus, remove = stopwords("english"),
     stem = TRUE)
     set.seed(10012)
     stm_from_dfm <- stm(gadarian_dfm, K = 3, prevalence = ~treatment +
     s(pid_rep), data = docvars(gadarian_corpus), max.em.its = 2)
     expect_identical(class(stm_from_dfm), "STM")
     }
     else {
     expect_identical("STM", "STM")
     }
     })
     where 16: eval(code, test_env)
     where 17: eval(code, test_env)
     where 18: withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error)
     where 19: doTryCatch(return(expr), name, parentenv, handler)
     where 20: tryCatchOne(expr, names, parentenv, handlers[[1L]])
     where 21: tryCatchList(expr, names[-nh], parentenv, handlers[-nh])
     where 22: doTryCatch(return(expr), name, parentenv, handler)
     where 23: tryCatchOne(tryCatchList(expr, names[-nh], parentenv, handlers[-nh]),
     names[nh], parentenv, handlers[[nh]])
     where 24: tryCatchList(expr, classes, parentenv, handlers)
     where 25: tryCatch(withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error), error = handle_fatal,
     skip = function(e) {
     })
     where 26: test_code(NULL, exprs, env)
     where 27: source_file(path, new.env(parent = env), chdir = TRUE, wrap = wrap)
     where 28: force(code)
     where 29: doWithOneRestart(return(expr), restart)
     where 30: withOneRestart(expr, restarts[[1L]])
     where 31: withRestarts(testthat_abort_reporter = function() NULL, force(code))
     where 32: with_reporter(reporter = reporter, start_end_reporter = start_end_reporter,
     {
     reporter$start_file(basename(path))
     lister$start_file(basename(path))
     source_file(path, new.env(parent = env), chdir = TRUE,
     wrap = wrap)
     reporter$.end_context()
     reporter$end_file()
     })
     where 33: FUN(X[[i]], ...)
     where 34: lapply(paths, test_file, env = env, reporter = current_reporter,
     start_end_reporter = FALSE, load_helpers = FALSE, wrap = wrap)
     where 35: force(code)
     where 36: doWithOneRestart(return(expr), restart)
     where 37: withOneRestart(expr, restarts[[1L]])
     where 38: withRestarts(testthat_abort_reporter = function() NULL, force(code))
     where 39: with_reporter(reporter = current_reporter, results <- lapply(paths,
     test_file, env = env, reporter = current_reporter, start_end_reporter = FALSE,
     load_helpers = FALSE, wrap = wrap))
     where 40: test_files(paths, reporter = reporter, env = env, stop_on_failure = stop_on_failure,
     stop_on_warning = stop_on_warning, wrap = wrap)
     where 41: test_dir(path = test_path, reporter = reporter, env = env, filter = filter,
     ..., stop_on_failure = stop_on_failure, stop_on_warning = stop_on_warning,
     wrap = wrap)
     where 42: test_package_dir(package = package, test_path = test_path, filter = filter,
     reporter = reporter, ..., stop_on_failure = stop_on_failure,
     stop_on_warning = stop_on_warning, wrap = wrap)
     where 43: test_check("stm")
    
     --- value of length: 2 type: logical ---
     [1] FALSE FALSE
     --- function from context ---
     function (documents, beta.index, update.mu, beta, lambda.old,
     mu, sigma, verbose)
     {
     V <- ncol(beta[[1]])
     K <- nrow(beta[[1]])
     N <- length(documents)
     A <- length(beta)
     ctevery <- ifelse(N > 100, floor(N/100), 1)
     if (!update.mu)
     mu.i <- as.numeric(mu)
     sigma.ss <- diag(0, nrow = (K - 1))
     beta.ss <- vector(mode = "list", length = A)
     for (i in 1:A) {
     beta.ss[[i]] <- matrix(0, nrow = K, ncol = V)
     }
     bound <- vector(length = N)
     lambda <- vector("list", length = N)
     sigobj <- try(chol.default(sigma), silent = TRUE)
     if (class(sigobj) == "try-error") {
     sigmaentropy <- (0.5 * determinant(sigma, logarithm = TRUE)$modulus[1])
     siginv <- solve(sigma)
     }
     else {
     sigmaentropy <- sum(log(diag(sigobj)))
     siginv <- chol2inv(sigobj)
     }
     for (i in 1:N) {
     doc <- documents[[i]]
     words <- doc[1, ]
     aspect <- beta.index[i]
     init <- lambda.old[i, ]
     if (update.mu)
     mu.i <- mu[, i]
     beta.i <- beta[[aspect]][, words, drop = FALSE]
     doc.results <- logisticnormalcpp(eta = init, mu = mu.i,
     siginv = siginv, beta = beta.i, doc = doc, sigmaentropy = sigmaentropy)
     sigma.ss <- sigma.ss + doc.results$eta$nu
     beta.ss[[aspect]][, words] <- doc.results$phis + beta.ss[[aspect]][,
     words]
     bound[i] <- doc.results$bound
     lambda[[i]] <- c(doc.results$eta$lambda)
     if (verbose && i%%ctevery == 0)
     cat(".")
     }
     if (verbose)
     cat("\n")
     lambda <- do.call(rbind, lambda)
     return(list(sigma = sigma.ss, beta = beta.ss, bound = bound,
     lambda = lambda))
     }
     <bytecode: 0x6244660>
     <environment: namespace:stm>
     --- function search by body ---
     Function estep in namespace stm has this body.
     ----------- END OF FAILURE REPORT --------------
     Fatal error: the condition has length > 1
Flavor: r-devel-linux-x86_64-debian-clang

Version: 1.3.4
Check: re-building of vignette outputs
Result: WARN
    Error(s) in re-building vignettes:
     ...
    --- re-building 'stmVignette.Rnw' using Sweave
    stm v1.3.4 successfully loaded. See ?stm for help.
     Papers, resources, and other materials at structuraltopicmodel.com
     ----------- FAILURE REPORT --------------
     --- failure: the condition has length > 1 ---
     --- srcref ---
    :
     --- package (from environment) ---
    stm
     --- call from context ---
    labelTopics(poliblogPrevFit, c(3, 7, 20))
     --- call from argument ---
    if (class(frexlabels) == "try-error") {
     out$frex[[k]] <- "FREX encountered an error and failed to run"
    } else {
     out$frex[[k]] <- vocab[frexlabels[1:n, k]]
    }
     --- R stacktrace ---
    where 1: labelTopics(poliblogPrevFit, c(3, 7, 20))
    where 2: eval(expr, .GlobalEnv)
    where 3: eval(expr, .GlobalEnv)
    where 4: withVisible(eval(expr, .GlobalEnv))
    where 5: doTryCatch(return(expr), name, parentenv, handler)
    where 6: tryCatchOne(expr, names, parentenv, handlers[[1L]])
    where 7: tryCatchList(expr, classes, parentenv, handlers)
    where 8: tryCatch(expr, error = function(e) {
     call <- conditionCall(e)
     if (!is.null(call)) {
     if (identical(call[[1L]], quote(doTryCatch)))
     call <- sys.call(-4L)
     dcall <- deparse(call)[1L]
     prefix <- paste("Error in", dcall, ": ")
     LONG <- 75L
     sm <- strsplit(conditionMessage(e), "\n")[[1L]]
     w <- 14L + nchar(dcall, type = "w") + nchar(sm[1L], type = "w")
     if (is.na(w))
     w <- 14L + nchar(dcall, type = "b") + nchar(sm[1L],
     type = "b")
     if (w > LONG)
     prefix <- paste0(prefix, "\n ")
     }
     else prefix <- "Error : "
     msg <- paste0(prefix, conditionMessage(e), "\n")
     .Internal(seterrmessage(msg[1L]))
     if (!silent && isTRUE(getOption("show.error.messages"))) {
     cat(msg, file = outFile)
     .Internal(printDeferredWarnings())
     }
     invisible(structure(msg, class = "try-error", condition = e))
    })
    where 9: try(withVisible(eval(expr, .GlobalEnv)), silent = TRUE)
    where 10: evalFunc(ce, options)
    where 11: tryCatchList(expr, classes, parentenv, handlers)
    where 12: tryCatch(evalFunc(ce, options), finally = {
     cat("\n")
     sink()
    })
    where 13: driver$runcode(drobj, chunk, chunkopts)
    where 14: utils::Sweave(...)
    where 15: engine$weave(file, quiet = quiet, encoding = enc)
    where 16: doTryCatch(return(expr), name, parentenv, handler)
    where 17: tryCatchOne(expr, names, parentenv, handlers[[1L]])
    where 18: tryCatchList(expr, classes, parentenv, handlers)
    where 19: tryCatch({
     engine$weave(file, quiet = quiet, encoding = enc)
     setwd(startdir)
     output <- find_vignette_product(name, by = "weave", engine = engine)
     if (!have.makefile && vignette_is_tex(output)) {
     texi2pdf(file = output, clean = FALSE, quiet = quiet)
     output <- find_vignette_product(name, by = "texi2pdf",
     engine = engine)
     }
     outputs <- c(outputs, output)
    }, error = function(e) {
     thisOK <<- FALSE
     fails <<- c(fails, file)
     message(gettextf("Error: processing vignette '%s' failed with diagnostics:\n%s",
     file, conditionMessage(e)))
    })
    where 20: tools:::buildVignettes(dir = "/home/hornik/tmp/R.check/r-devel-clang/Work/PKGS/stm.Rcheck/vign_test/stm",
     ser_elibs = "/tmp/RtmpSKrqqo/file29716c7242b5.rds")
    
     --- value of length: 2 type: logical ---
    [1] FALSE FALSE
     --- function from context ---
    function (model, topics = NULL, n = 7, frexweight = 0.5)
    {
     if (n < 1)
     stop("n must be 1 or greater")
     logbeta <- model$beta$logbeta
     K <- model$settings$dim$K
     vocab <- model$vocab
     if (is.null(topics))
     topics <- 1:nrow(logbeta[[1]])
     aspect <- length(logbeta) > 1
     out <- list()
     if (!aspect) {
     out$prob <- list()
     out$frex <- list()
     out$lift <- list()
     out$score <- list()
     logbeta <- logbeta[[1]]
     wordcounts <- model$settings$dim$wcounts$x
     frexlabels <- try(calcfrex(logbeta, frexweight, wordcounts),
     silent = TRUE)
     liftlabels <- try(calclift(logbeta, wordcounts), silent = TRUE)
     scorelabels <- try(calcscore(logbeta), silent = TRUE)
     problabels <- apply(logbeta, 1, order, decreasing = TRUE)
     for (k in 1:K) {
     out$prob[[k]] <- vocab[problabels[1:n, k]]
     if (class(frexlabels) == "try-error") {
     out$frex[[k]] <- "FREX encountered an error and failed to run"
     }
     else {
     out$frex[[k]] <- vocab[frexlabels[1:n, k]]
     }
     if (class(liftlabels) == "try-error") {
     out$lift[[k]] <- "Lift encountered an error and failed to run"
     }
     else {
     out$lift[[k]] <- vocab[liftlabels[1:n, k]]
     }
     if (class(scorelabels) == "try-error") {
     out$lift[[k]] <- "Score encountered an error and failed to run"
     }
     else {
     out$score[[k]] <- vocab[scorelabels[1:n, k]]
     }
     }
     out <- lapply(out, do.call, what = rbind)
     }
     else {
     labs <- lapply(model$beta$kappa$params, function(x) {
     windex <- order(x, decreasing = TRUE)[1:n]
     ifelse(x[windex] > 0.001, vocab[windex], "")
     })
     labs <- do.call(rbind, labs)
     A <- model$settings$dim$A
     anames <- model$settings$covariates$yvarlevels
     i1 <- K + 1
     i2 <- K + A
     intnums <- (i2 + 1):nrow(labs)
     out$topics <- labs[topics, , drop = FALSE]
     out$covariate <- labs[i1:i2, , drop = FALSE]
     rownames(out$covariate) <- anames
     if (model$settings$kappa$interactions) {
     tindx <- rep(1:K, each = A)
     intnums <- intnums[tindx %in% topics]
     out$interaction <- labs[intnums, , drop = FALSE]
     }
     }
     out$topicnums <- topics
     class(out) <- "labelTopics"
     return(out)
    }
    <bytecode: 0x99f3e50>
    <environment: namespace:stm>
     --- function search by body ---
    Function labelTopics in namespace stm has this body.
     ----------- END OF FAILURE REPORT --------------
    Fatal error: the condition has length > 1
Flavor: r-devel-linux-x86_64-debian-clang
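
The vignette failure has the same root cause inside `labelTopics()`: `calcfrex()` returns a matrix on success, so each `class(frexlabels) == "try-error"` guard produces the length-2 condition shown in the report. A small illustration of the R 4.0.0 class change and the safe check (not stm code, just the pattern):

```r
m <- matrix(1:4, nrow = 2)

# Since R 4.0.0 a matrix carries two implicit classes, so comparing
# class(m) against a single string yields a length-2 logical vector,
# which if() rejects in R-devel.
cls <- class(m)               # c("matrix", "array") on R >= 4.0.0
cmp <- cls == "try-error"     # one FALSE per class entry

# inherits() collapses the check to a single logical, so it is safe in if().
ok_matrix <- inherits(m, "matrix")            # TRUE
err <- try(stop("boom"), silent = TRUE)
is_err <- inherits(err, "try-error")          # TRUE
```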

Version: 1.3.4
Check: examples
Result: ERROR
    Running examples in ‘stm-Ex.R’ failed
    The error most likely occurred in:
    
    > base::assign(".ptime", proc.time(), pos = "CheckExEnv")
    > ### Name: alignCorpus
    > ### Title: Align the vocabulary of a new corpus to an old corpus
    > ### Aliases: alignCorpus
    >
    > ### ** Examples
    >
    > #we process an original set that is just the first 100 documents
    > temp<-textProcessor(documents=gadarian$open.ended.response[1:100],metadata=gadarian[1:100,])
    Building corpus...
    Converting to Lower Case...
    Removing punctuation...
    Removing stopwords...
    Removing numbers...
    Stemming...
    Creating Output...
    > out <- prepDocuments(temp$documents, temp$vocab, temp$meta)
    Removing 329 of 512 terms (329 of 1139 tokens) due to frequency
    Removing 2 Documents with No Words
    Your corpus now has 98 documents, 183 terms and 810 tokens.
    > set.seed(02138)
    > #Maximum EM its is set low to make this run fast, run models to convergence!
    > mod.out <- stm(out$documents, out$vocab, 3, prevalence=~treatment + s(pid_rep),
    + data=out$meta, max.em.its=5)
    Beginning Spectral Initialization
     Calculating the gram matrix...
     Finding anchor words...
     ...
     Recovering initialization...
     .
    Initialization complete.
     ----------- FAILURE REPORT --------------
     --- failure: the condition has length > 1 ---
     --- srcref ---
    :
     --- package (from environment) ---
    stm
     --- call from context ---
    estep(documents = documents, beta.index = betaindex, update.mu = (!is.null(mu$gamma)),
     beta$beta, lambda, mu$mu, sigma, verbose)
     --- call from argument ---
    if (class(sigobj) == "try-error") {
     sigmaentropy <- (0.5 * determinant(sigma, logarithm = TRUE)$modulus[1])
     siginv <- solve(sigma)
    } else {
     sigmaentropy <- sum(log(diag(sigobj)))
     siginv <- chol2inv(sigobj)
    }
     --- R stacktrace ---
    where 1: estep(documents = documents, beta.index = betaindex, update.mu = (!is.null(mu$gamma)),
     beta$beta, lambda, mu$mu, sigma, verbose)
    where 2: stm.control(documents, vocab, settings, model)
    where 3: stm(out$documents, out$vocab, 3, prevalence = ~treatment + s(pid_rep),
     data = out$meta, max.em.its = 5)
    
     --- value of length: 2 type: logical ---
    [1] FALSE FALSE
     --- function from context ---
    function (documents, beta.index, update.mu, beta, lambda.old,
     mu, sigma, verbose)
    {
     V <- ncol(beta[[1]])
     K <- nrow(beta[[1]])
     N <- length(documents)
     A <- length(beta)
     ctevery <- ifelse(N > 100, floor(N/100), 1)
     if (!update.mu)
     mu.i <- as.numeric(mu)
     sigma.ss <- diag(0, nrow = (K - 1))
     beta.ss <- vector(mode = "list", length = A)
     for (i in 1:A) {
     beta.ss[[i]] <- matrix(0, nrow = K, ncol = V)
     }
     bound <- vector(length = N)
     lambda <- vector("list", length = N)
     sigobj <- try(chol.default(sigma), silent = TRUE)
     if (class(sigobj) == "try-error") {
     sigmaentropy <- (0.5 * determinant(sigma, logarithm = TRUE)$modulus[1])
     siginv <- solve(sigma)
     }
     else {
     sigmaentropy <- sum(log(diag(sigobj)))
     siginv <- chol2inv(sigobj)
     }
     for (i in 1:N) {
     doc <- documents[[i]]
     words <- doc[1, ]
     aspect <- beta.index[i]
     init <- lambda.old[i, ]
     if (update.mu)
     mu.i <- mu[, i]
     beta.i <- beta[[aspect]][, words, drop = FALSE]
     doc.results <- logisticnormalcpp(eta = init, mu = mu.i,
     siginv = siginv, beta = beta.i, doc = doc, sigmaentropy = sigmaentropy)
     sigma.ss <- sigma.ss + doc.results$eta$nu
     beta.ss[[aspect]][, words] <- doc.results$phis + beta.ss[[aspect]][,
     words]
     bound[i] <- doc.results$bound
     lambda[[i]] <- c(doc.results$eta$lambda)
     if (verbose && i%%ctevery == 0)
     cat(".")
     }
     if (verbose)
     cat("\n")
     lambda <- do.call(rbind, lambda)
     return(list(sigma = sigma.ss, beta = beta.ss, bound = bound,
     lambda = lambda))
    }
    <bytecode: 0x56330cd61f80>
    <environment: namespace:stm>
     --- function search by body ---
    Function estep in namespace stm has this body.
     ----------- END OF FAILURE REPORT --------------
    Fatal error: the condition has length > 1
Flavor: r-devel-linux-x86_64-debian-gcc

Version: 1.3.4
Check: tests
Result: ERROR
     Running ‘spelling.R’ [0s/1s]
     Running ‘testthat.R’ [8s/13s]
    Running the tests in ‘tests/testthat.R’ failed.
    Complete output:
     > library(testthat)
     > library(stm)
     stm v1.3.4 successfully loaded. See ?stm for help.
     Papers, resources, and other materials at structuraltopicmodel.com
     >
     > test_check("stm")
     Building corpus...
     Converting to Lower Case...
     Removing punctuation...
     Removing stopwords...
     Removing numbers...
     Stemming...
     Creating Output...
     Removed 1098 of 1102 terms (3659 of 4143 tokens) due to sparselevel of 0.8
     Removing 2 of 4 terms (223 of 377 tokens) due to frequency
     Removing 117 Documents with No Words
     Your corpus now has 135 documents, 2 terms and 154 tokens.
     Beginning Spectral Initialization
     Calculating the gram matrix...
     Finding anchor words...
     ...
     Recovering initialization...
     ...........
     Initialization complete.
     ----------- FAILURE REPORT --------------
     --- failure: the condition has length > 1 ---
     --- srcref ---
     :
     --- package (from environment) ---
     stm
     --- call from context ---
     estep(documents = documents, beta.index = betaindex, update.mu = (!is.null(mu$gamma)),
     beta$beta, lambda, mu$mu, sigma, verbose)
     --- call from argument ---
     if (class(sigobj) == "try-error") {
     sigmaentropy <- (0.5 * determinant(sigma, logarithm = TRUE)$modulus[1])
     siginv <- solve(sigma)
     } else {
     sigmaentropy <- sum(log(diag(sigobj)))
     siginv <- chol2inv(sigobj)
     }
     --- R stacktrace ---
     where 1: estep(documents = documents, beta.index = betaindex, update.mu = (!is.null(mu$gamma)),
     beta$beta, lambda, mu$mu, sigma, verbose)
     where 2: stm.control(documents, vocab, settings, model)
     where 3 at testthat/test-quanteda-stm.R#13: stm(gadarian_dfm, K = 3, prevalence = ~treatment + s(pid_rep),
     data = docvars(gadarian_corpus), max.em.its = 2)
     where 4: eval(code, test_env)
     where 5: eval(code, test_env)
     where 6: withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error)
     where 7: doTryCatch(return(expr), name, parentenv, handler)
     where 8: tryCatchOne(expr, names, parentenv, handlers[[1L]])
     where 9: tryCatchList(expr, names[-nh], parentenv, handlers[-nh])
     where 10: doTryCatch(return(expr), name, parentenv, handler)
     where 11: tryCatchOne(tryCatchList(expr, names[-nh], parentenv, handlers[-nh]),
     names[nh], parentenv, handlers[[nh]])
     where 12: tryCatchList(expr, classes, parentenv, handlers)
     where 13: tryCatch(withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error), error = handle_fatal,
     skip = function(e) {
     })
     where 14: test_code(desc, code, env = parent.frame())
     where 15 at testthat/test-quanteda-stm.R#5: test_that("Test that stm works on a quanteda dfm", {
     require(quanteda)
     if (utils::compareVersion(as.character(utils::packageVersion("quanteda")),
     "0.9.9-31") >= 0) {
     gadarian_corpus <- corpus(gadarian, text_field = "open.ended.response")
     gadarian_dfm <- dfm(gadarian_corpus, remove = stopwords("english"),
     stem = TRUE)
     set.seed(10012)
     stm_from_dfm <- stm(gadarian_dfm, K = 3, prevalence = ~treatment +
     s(pid_rep), data = docvars(gadarian_corpus), max.em.its = 2)
     expect_identical(class(stm_from_dfm), "STM")
     }
     else {
     expect_identical("STM", "STM")
     }
     })
     where 16: eval(code, test_env)
     where 17: eval(code, test_env)
     where 18: withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error)
     where 19: doTryCatch(return(expr), name, parentenv, handler)
     where 20: tryCatchOne(expr, names, parentenv, handlers[[1L]])
     where 21: tryCatchList(expr, names[-nh], parentenv, handlers[-nh])
     where 22: doTryCatch(return(expr), name, parentenv, handler)
     where 23: tryCatchOne(tryCatchList(expr, names[-nh], parentenv, handlers[-nh]),
     names[nh], parentenv, handlers[[nh]])
     where 24: tryCatchList(expr, classes, parentenv, handlers)
     where 25: tryCatch(withCallingHandlers({
     eval(code, test_env)
     if (!handled && !is.null(test)) {
     skip_empty()
     }
     }, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
     message = handle_message, error = handle_error), error = handle_fatal,
     skip = function(e) {
     })
     where 26: test_code(NULL, exprs, env)
     where 27: source_file(path, new.env(parent = env), chdir = TRUE, wrap = wrap)
     where 28: force(code)
     where 29: doWithOneRestart(return(expr), restart)
     where 30: withOneRestart(expr, restarts[[1L]])
     where 31: withRestarts(testthat_abort_reporter = function() NULL, force(code))
     where 32: with_reporter(reporter = reporter, start_end_reporter = start_end_reporter,
     {
     reporter$start_file(basename(path))
     lister$start_file(basename(path))
     source_file(path, new.env(parent = env), chdir = TRUE,
     wrap = wrap)
     reporter$.end_context()
     reporter$end_file()
     })
     where 33: FUN(X[[i]], ...)
     where 34: lapply(paths, test_file, env = env, reporter = current_reporter,
     start_end_reporter = FALSE, load_helpers = FALSE, wrap = wrap)
     where 35: force(code)
     where 36: doWithOneRestart(return(expr), restart)
     where 37: withOneRestart(expr, restarts[[1L]])
     where 38: withRestarts(testthat_abort_reporter = function() NULL, force(code))
     where 39: with_reporter(reporter = current_reporter, results <- lapply(paths,
     test_file, env = env, reporter = current_reporter, start_end_reporter = FALSE,
     load_helpers = FALSE, wrap = wrap))
     where 40: test_files(paths, reporter = reporter, env = env, stop_on_failure = stop_on_failure,
     stop_on_warning = stop_on_warning, wrap = wrap)
     where 41: test_dir(path = test_path, reporter = reporter, env = env, filter = filter,
     ..., stop_on_failure = stop_on_failure, stop_on_warning = stop_on_warning,
     wrap = wrap)
     where 42: test_package_dir(package = package, test_path = test_path, filter = filter,
     reporter = reporter, ..., stop_on_failure = stop_on_failure,
     stop_on_warning = stop_on_warning, wrap = wrap)
     where 43: test_check("stm")
    
     --- value of length: 2 type: logical ---
     [1] FALSE FALSE
     --- function from context ---
     function (documents, beta.index, update.mu, beta, lambda.old,
         mu, sigma, verbose)
     {
         V <- ncol(beta[[1]])
         K <- nrow(beta[[1]])
         N <- length(documents)
         A <- length(beta)
         ctevery <- ifelse(N > 100, floor(N/100), 1)
         if (!update.mu)
             mu.i <- as.numeric(mu)
         sigma.ss <- diag(0, nrow = (K - 1))
         beta.ss <- vector(mode = "list", length = A)
         for (i in 1:A) {
             beta.ss[[i]] <- matrix(0, nrow = K, ncol = V)
         }
         bound <- vector(length = N)
         lambda <- vector("list", length = N)
         sigobj <- try(chol.default(sigma), silent = TRUE)
         if (class(sigobj) == "try-error") {
             sigmaentropy <- (0.5 * determinant(sigma, logarithm = TRUE)$modulus[1])
             siginv <- solve(sigma)
         }
         else {
             sigmaentropy <- sum(log(diag(sigobj)))
             siginv <- chol2inv(sigobj)
         }
         for (i in 1:N) {
             doc <- documents[[i]]
             words <- doc[1, ]
             aspect <- beta.index[i]
             init <- lambda.old[i, ]
             if (update.mu)
                 mu.i <- mu[, i]
             beta.i <- beta[[aspect]][, words, drop = FALSE]
             doc.results <- logisticnormalcpp(eta = init, mu = mu.i,
                 siginv = siginv, beta = beta.i, doc = doc, sigmaentropy = sigmaentropy)
             sigma.ss <- sigma.ss + doc.results$eta$nu
             beta.ss[[aspect]][, words] <- doc.results$phis + beta.ss[[aspect]][, words]
             bound[i] <- doc.results$bound
             lambda[[i]] <- c(doc.results$eta$lambda)
             if (verbose && i%%ctevery == 0)
                 cat(".")
         }
         if (verbose)
             cat("\n")
         lambda <- do.call(rbind, lambda)
         return(list(sigma = sigma.ss, beta = beta.ss, bound = bound,
             lambda = lambda))
     }
     <bytecode: 0x55804bd0baa0>
     <environment: namespace:stm>
     --- function search by body ---
     Function estep in namespace stm has this body.
     ----------- END OF FAILURE REPORT --------------
     Fatal error: the condition has length > 1
Flavor: r-devel-linux-x86_64-debian-gcc
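
Editor's note: the fatal error above comes from the line `if (class(sigobj) == "try-error")` in the dumped `estep` body. In r-devel (what became R 4.0.0), an `if()` condition of length greater than one is a fatal error rather than a warning, and `class()` on a matrix now returns the two-element vector `c("matrix", "array")` — hence the reported "value of length: 2 type: logical". A minimal sketch of the conventional fix, using `inherits()` (shown for illustration; not necessarily the exact patch later shipped in stm):

```r
# class(x) may return a vector; inherits() always returns a single logical,
# so it is safe inside if() under R >= 4.0.0.
sigma  <- diag(2)
sigobj <- try(chol.default(sigma), silent = TRUE)

if (inherits(sigobj, "try-error")) {
    siginv <- solve(sigma)          # fall back to a direct inverse
} else {
    siginv <- chol2inv(sigobj)      # invert via the Cholesky factor
}
```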

Version: 1.3.4
Check: re-building of vignette outputs
Result: WARN
    Error(s) in re-building vignettes:
     ...
    --- re-building ‘stmVignette.Rnw’ using Sweave
    stm v1.3.4 successfully loaded. See ?stm for help.
     Papers, resources, and other materials at structuraltopicmodel.com
     ----------- FAILURE REPORT --------------
     --- failure: the condition has length > 1 ---
     --- srcref ---
    :
     --- package (from environment) ---
    stm
     --- call from context ---
    labelTopics(poliblogPrevFit, c(3, 7, 20))
     --- call from argument ---
    if (class(frexlabels) == "try-error") {
     out$frex[[k]] <- "FREX encountered an error and failed to run"
    } else {
     out$frex[[k]] <- vocab[frexlabels[1:n, k]]
    }
     --- R stacktrace ---
    where 1: labelTopics(poliblogPrevFit, c(3, 7, 20))
    where 2: eval(expr, .GlobalEnv)
    where 3: eval(expr, .GlobalEnv)
    where 4: withVisible(eval(expr, .GlobalEnv))
    where 5: doTryCatch(return(expr), name, parentenv, handler)
    where 6: tryCatchOne(expr, names, parentenv, handlers[[1L]])
    where 7: tryCatchList(expr, classes, parentenv, handlers)
    where 8: tryCatch(expr, error = function(e) {
     call <- conditionCall(e)
     if (!is.null(call)) {
     if (identical(call[[1L]], quote(doTryCatch)))
     call <- sys.call(-4L)
     dcall <- deparse(call)[1L]
     prefix <- paste("Error in", dcall, ": ")
     LONG <- 75L
     sm <- strsplit(conditionMessage(e), "\n")[[1L]]
     w <- 14L + nchar(dcall, type = "w") + nchar(sm[1L], type = "w")
     if (is.na(w))
     w <- 14L + nchar(dcall, type = "b") + nchar(sm[1L],
     type = "b")
     if (w > LONG)
     prefix <- paste0(prefix, "\n ")
     }
     else prefix <- "Error : "
     msg <- paste0(prefix, conditionMessage(e), "\n")
     .Internal(seterrmessage(msg[1L]))
     if (!silent && isTRUE(getOption("show.error.messages"))) {
     cat(msg, file = outFile)
     .Internal(printDeferredWarnings())
     }
     invisible(structure(msg, class = "try-error", condition = e))
    })
    where 9: try(withVisible(eval(expr, .GlobalEnv)), silent = TRUE)
    where 10: evalFunc(ce, options)
    where 11: tryCatchList(expr, classes, parentenv, handlers)
    where 12: tryCatch(evalFunc(ce, options), finally = {
     cat("\n")
     sink()
    })
    where 13: driver$runcode(drobj, chunk, chunkopts)
    where 14: utils::Sweave(...)
    where 15: engine$weave(file, quiet = quiet, encoding = enc)
    where 16: doTryCatch(return(expr), name, parentenv, handler)
    where 17: tryCatchOne(expr, names, parentenv, handlers[[1L]])
    where 18: tryCatchList(expr, classes, parentenv, handlers)
    where 19: tryCatch({
     engine$weave(file, quiet = quiet, encoding = enc)
     setwd(startdir)
     output <- find_vignette_product(name, by = "weave", engine = engine)
     if (!have.makefile && vignette_is_tex(output)) {
     texi2pdf(file = output, clean = FALSE, quiet = quiet)
     output <- find_vignette_product(name, by = "texi2pdf",
     engine = engine)
     }
     outputs <- c(outputs, output)
    }, error = function(e) {
     thisOK <<- FALSE
     fails <<- c(fails, file)
     message(gettextf("Error: processing vignette '%s' failed with diagnostics:\n%s",
     file, conditionMessage(e)))
    })
    where 20: tools:::buildVignettes(dir = "/home/hornik/tmp/R.check/r-devel-gcc/Work/PKGS/stm.Rcheck/vign_test/stm",
     ser_elibs = "/home/hornik/tmp/scratch/Rtmp0FYbnW/file42c1669bab06.rds")
    
     --- value of length: 2 type: logical ---
    [1] FALSE FALSE
     --- function from context ---
    function (model, topics = NULL, n = 7, frexweight = 0.5)
    {
        if (n < 1)
            stop("n must be 1 or greater")
        logbeta <- model$beta$logbeta
        K <- model$settings$dim$K
        vocab <- model$vocab
        if (is.null(topics))
            topics <- 1:nrow(logbeta[[1]])
        aspect <- length(logbeta) > 1
        out <- list()
        if (!aspect) {
            out$prob <- list()
            out$frex <- list()
            out$lift <- list()
            out$score <- list()
            logbeta <- logbeta[[1]]
            wordcounts <- model$settings$dim$wcounts$x
            frexlabels <- try(calcfrex(logbeta, frexweight, wordcounts),
                silent = TRUE)
            liftlabels <- try(calclift(logbeta, wordcounts), silent = TRUE)
            scorelabels <- try(calcscore(logbeta), silent = TRUE)
            problabels <- apply(logbeta, 1, order, decreasing = TRUE)
            for (k in 1:K) {
                out$prob[[k]] <- vocab[problabels[1:n, k]]
                if (class(frexlabels) == "try-error") {
                    out$frex[[k]] <- "FREX encountered an error and failed to run"
                }
                else {
                    out$frex[[k]] <- vocab[frexlabels[1:n, k]]
                }
                if (class(liftlabels) == "try-error") {
                    out$lift[[k]] <- "Lift encountered an error and failed to run"
                }
                else {
                    out$lift[[k]] <- vocab[liftlabels[1:n, k]]
                }
                if (class(scorelabels) == "try-error") {
                    out$lift[[k]] <- "Score encountered an error and failed to run"
                }
                else {
                    out$score[[k]] <- vocab[scorelabels[1:n, k]]
                }
            }
            out <- lapply(out, do.call, what = rbind)
        }
        else {
            labs <- lapply(model$beta$kappa$params, function(x) {
                windex <- order(x, decreasing = TRUE)[1:n]
                ifelse(x[windex] > 0.001, vocab[windex], "")
            })
            labs <- do.call(rbind, labs)
            A <- model$settings$dim$A
            anames <- model$settings$covariates$yvarlevels
            i1 <- K + 1
            i2 <- K + A
            intnums <- (i2 + 1):nrow(labs)
            out$topics <- labs[topics, , drop = FALSE]
            out$covariate <- labs[i1:i2, , drop = FALSE]
            rownames(out$covariate) <- anames
            if (model$settings$kappa$interactions) {
                tindx <- rep(1:K, each = A)
                intnums <- intnums[tindx %in% topics]
                out$interaction <- labs[intnums, , drop = FALSE]
            }
        }
        out$topicnums <- topics
        class(out) <- "labelTopics"
        return(out)
    }
    <bytecode: 0x55c4e3b44d68>
    <environment: namespace:stm>
     --- function search by body ---
    Function labelTopics in namespace stm has this body.
     ----------- END OF FAILURE REPORT --------------
    Fatal error: the condition has length > 1
Flavor: r-devel-linux-x86_64-debian-gcc
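
Editor's note: the `labelTopics` failure has the same root cause. `calcfrex()` returns a matrix, and under r-devel a matrix carries the two-element class `c("matrix", "array")`, so `class(frexlabels) == "try-error"` yields the length-2 logical shown in the report. A quick illustration of the behavior change (R >= 4.0.0):

```r
m <- matrix(1:4, nrow = 2)
class(m)                   # c("matrix", "array") under R >= 4.0.0
class(m) == "try-error"    # FALSE FALSE -- length 2, fatal inside if()
inherits(m, "try-error")   # FALSE      -- always length 1, safe in if()
```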

Version: 1.3.4
Check: installed package size
Result: NOTE
     installed size is 5.0Mb
     sub-directories of 1Mb or more:
     data 1.7Mb
     libs 2.2Mb
Flavor: r-release-osx-x86_64

Version: 1.3.4
Check: running tests for arch ‘i386’
Result: ERROR
     Running 'spelling.R' [0s]
     Running 'testthat.R' [107s]
    Running the tests in 'tests/testthat.R' failed.
    Complete output:
     > library(testthat)
     > library(stm)
     stm v1.3.4 successfully loaded. See ?stm for help.
     Papers, resources, and other materials at structuraltopicmodel.com
     >
     > test_check("stm")
     Building corpus...
     Converting to Lower Case...
     Removing punctuation...
     Removing stopwords...
     Removing numbers...
     Stemming...
     Creating Output...
     Removed 1098 of 1102 terms (3659 of 4143 tokens) due to sparselevel of 0.8
     Removing 2 of 4 terms (223 of 377 tokens) due to frequency
     Removing 117 Documents with No Words
     Your corpus now has 135 documents, 2 terms and 154 tokens.
     Beginning Spectral Initialization
     Calculating the gram matrix...
     Finding anchor words...
     ...
     Recovering initialization...
     ...........
     Initialization complete.
     .................................................................................................................
     Completed E-Step (0 seconds).
     Completed M-Step.
     Completing Iteration 1 (approx. per word bound = -5.753)
     .................................................................................................................
     Completed E-Step (0 seconds).
     Completed M-Step.
     Model Terminated Before Convergence Reached
     Building corpus...
     Converting to Lower Case...
     Removing punctuation...
     Removing stopwords...
     Removing numbers...
     Stemming...
     Creating Output...
     Removing 640 of 1102 terms (640 of 3789 tokens) due to frequency
     Your corpus now has 341 documents, 462 terms and 3149 tokens.
     Beginning Spectral Initialization
     Calculating the gram matrix...
     Finding anchor words...
     ...
     Recovering initialization...
     ....
     Initialization complete.
     .................................................................................................................
     Completed E-Step (0 seconds).
     Completed M-Step.
     Completing Iteration 1 (approx. per word bound = -5.622)
     .................................................................................................................
     Completed E-Step (0 seconds).
     Completed M-Step.
     Model Terminated Before Convergence Reached
     Beginning Random Initialization
     ....................................................................................................
     Completed E-Step (3 seconds).
     Completed M-Step.
     Model Terminated Before Convergence Reached
     Beginning Spectral Initialization
     Calculating the gram matrix...
     Finding anchor words...
     ...
     Recovering initialization...
     ..........................
     Initialization complete.
     ....................................................................................................
     Completed E-Step (3 seconds).
     Completed M-Step.
     Model Terminated Before Convergence Reached
     Beginning Spectral Initialization
     Calculating the gram matrix...
     Finding anchor words...
     ...
     Recovering initialization...
     ..........................
     Initialization complete.
     ....................................................................................................
     Completed E-Step (3 seconds).
     Completed M-Step.
     Completing Iteration 1 (approx. per word bound = -7.197)
     ....................................................................................................
     Completed E-Step (2 seconds).
     Completed M-Step.
     Model Terminated Before Convergence Reached
     Beginning Spectral Initialization
     Calculating the gram matrix...
     Finding anchor words...
     ...
     Recovering initialization...
     ..........................
     Initialization complete.
     ....................................................................................................
     Completed E-Step (3 seconds).
     .....................................................................................................
     Completed M-Step (16 seconds).
     Completing Iteration 1 (approx. per word bound = -7.197)
     ....................................................................................................
     Completed E-Step (2 seconds).
     .....................................................................................................
     Completed M-Step (16 seconds).
     Model Terminated Before Convergence Reached
     Beginning Random Initialization
     ....................................................................................................
     Completed Group 1 E-Step (1 seconds).
     ....................................................................................................
     Completed Group 2 E-Step (1 seconds).
     Completed M-Step.
     Completing Iteration 1 (approx. per word bound = -10.359)
     ....................................................................................................
     Completed Group 2 E-Step (1 seconds).
     Completed M-Step.
     ....................................................................................................
     Completed Group 1 E-Step (1 seconds).
     Completed M-Step.
     Model Terminated Before Convergence Reached
     -- 1. Error: plot.STM doesn't throw error (@test-visualize.R#4) --------------
     cannot open the connection
     Backtrace:
     1. base::load(url("http://goo.gl/VPdxlS"))
    
     == testthat results ===========================================================
     [ OK: 9 | SKIPPED: 0 | WARNINGS: 1 | FAILED: 1 ]
     1. Error: plot.STM doesn't throw error (@test-visualize.R#4)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-oldrel-windows-ix86+x86_64

Version: 1.3.4
Check: running tests for arch ‘x64’
Result: ERROR
     Running 'spelling.R' [1s]
     Running 'testthat.R' [107s]
    Running the tests in 'tests/testthat.R' failed.
    Complete output:
     > library(testthat)
     > library(stm)
     stm v1.3.4 successfully loaded. See ?stm for help.
     Papers, resources, and other materials at structuraltopicmodel.com
     >
     > test_check("stm")
     Building corpus...
     Converting to Lower Case...
     Removing punctuation...
     Removing stopwords...
     Removing numbers...
     Stemming...
     Creating Output...
     Removed 1098 of 1102 terms (3659 of 4143 tokens) due to sparselevel of 0.8
     Removing 2 of 4 terms (223 of 377 tokens) due to frequency
     Removing 117 Documents with No Words
     Your corpus now has 135 documents, 2 terms and 154 tokens.
     Beginning Spectral Initialization
     Calculating the gram matrix...
     Finding anchor words...
     ...
     Recovering initialization...
     ...........
     Initialization complete.
     .................................................................................................................
     Completed E-Step (0 seconds).
     Completed M-Step.
     Completing Iteration 1 (approx. per word bound = -5.748)
     .................................................................................................................
     Completed E-Step (0 seconds).
     Completed M-Step.
     Model Terminated Before Convergence Reached
     Building corpus...
     Converting to Lower Case...
     Removing punctuation...
     Removing stopwords...
     Removing numbers...
     Stemming...
     Creating Output...
     Removing 640 of 1102 terms (640 of 3789 tokens) due to frequency
     Your corpus now has 341 documents, 462 terms and 3149 tokens.
     Beginning Spectral Initialization
     Calculating the gram matrix...
     Finding anchor words...
     ...
     Recovering initialization...
     ....
     Initialization complete.
     .................................................................................................................
     Completed E-Step (0 seconds).
     Completed M-Step.
     Completing Iteration 1 (approx. per word bound = -5.623)
     .................................................................................................................
     Completed E-Step (0 seconds).
     Completed M-Step.
     Model Terminated Before Convergence Reached
     Beginning Random Initialization
     ....................................................................................................
     Completed E-Step (3 seconds).
     Completed M-Step.
     Model Terminated Before Convergence Reached
     Beginning Spectral Initialization
     Calculating the gram matrix...
     Finding anchor words...
     ...
     Recovering initialization...
     ..........................
     Initialization complete.
     ....................................................................................................
     Completed E-Step (3 seconds).
     Completed M-Step.
     Model Terminated Before Convergence Reached
     Beginning Spectral Initialization
     Calculating the gram matrix...
     Finding anchor words...
     ...
     Recovering initialization...
     ..........................
     Initialization complete.
     ....................................................................................................
     Completed E-Step (2 seconds).
     Completed M-Step.
     Completing Iteration 1 (approx. per word bound = -7.197)
     ....................................................................................................
     Completed E-Step (2 seconds).
     Completed M-Step.
     Model Terminated Before Convergence Reached
     Beginning Spectral Initialization
     Calculating the gram matrix...
     Finding anchor words...
     ...
     Recovering initialization...
     ..........................
     Initialization complete.
     ....................................................................................................
     Completed E-Step (2 seconds).
     .....................................................................................................
     Completed M-Step (16 seconds).
     Completing Iteration 1 (approx. per word bound = -7.197)
     ....................................................................................................
     Completed E-Step (3 seconds).
     .....................................................................................................
     Completed M-Step (16 seconds).
     Model Terminated Before Convergence Reached
     Beginning Random Initialization
     ....................................................................................................
     Completed Group 1 E-Step (1 seconds).
     ....................................................................................................
     Completed Group 2 E-Step (1 seconds).
     Completed M-Step.
     Completing Iteration 1 (approx. per word bound = -10.359)
     ....................................................................................................
     Completed Group 2 E-Step (1 seconds).
     Completed M-Step.
     ....................................................................................................
     Completed Group 1 E-Step (1 seconds).
     Completed M-Step.
     Model Terminated Before Convergence Reached
     -- 1. Error: plot.STM doesn't throw error (@test-visualize.R#4) --------------
     cannot open the connection
     Backtrace:
     1. base::load(url("http://goo.gl/VPdxlS"))
    
     == testthat results ===========================================================
     [ OK: 9 | SKIPPED: 0 | WARNINGS: 1 | FAILED: 1 ]
     1. Error: plot.STM doesn't throw error (@test-visualize.R#4)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-oldrel-windows-ix86+x86_64
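
Editor's note: the i386 and x64 test failures are not code bugs but a network dependency — `test-visualize.R` loads a fixture from `http://goo.gl/VPdxlS`, which the Windows check machine could not reach. A hedged sketch of how such a test is conventionally guarded with testthat skips (the test name matches the log; the guard itself is illustrative, not stm's actual test code):

```r
library(testthat)

test_that("plot.STM doesn't throw error", {
    skip_on_cran()    # avoid remote downloads on CRAN check machines
    dest <- tempfile(fileext = ".rda")
    ok <- tryCatch({
        download.file("http://goo.gl/VPdxlS", dest, mode = "wb", quiet = TRUE)
        TRUE
    }, error = function(e) FALSE, warning = function(w) FALSE)
    skip_if_not(ok, "could not download the plotting fixture")
    load(dest)
    # ... original plotting expectations would follow here ...
})
```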

Version: 1.3.4
Check: re-building of vignette outputs
Result: WARN
    Error in re-building vignettes:
     ...
    stm v1.3.4 successfully loaded. See ?stm for help.
     Papers, resources, and other materials at structuraltopicmodel.com
    Warning in load(url("http://goo.gl/VPdxlS")) :
     InternetOpenUrl failed: 'A redirect request will change a nonsecure to a secure connection.'
    
    Error: processing vignette 'stmVignette.Rnw' failed with diagnostics:
     chunk 7
    Error in load(url("http://goo.gl/VPdxlS")) : cannot open the connection
    Execution halted
Flavor: r-oldrel-windows-ix86+x86_64