Kellen, D., Singmann, H., & Batchelder, W. H. (2017). Classic-Probability Accounts of Mirrored (Quantum-Like) Order Effects in Human Judgments. Decision.
Singmann, H., Klauer, K. C., & Beller, S. (2016). Probabilistic conditional reasoning: Disentangling form and content with the dual-source model. Cognitive Psychology, 88, 61–87.
Skovgaard-Olsen, N., Singmann, H., & Klauer, K. C. (2016). The relevance effect and conditionals. Cognition, 150, 26–36.
Kellen, D., & Singmann, H. (2016). ROC residuals in signal-detection models of recognition memory. Psychonomic Bulletin & Review, 23, 253–264.
Obtains p-values for linear and generalized linear mixed models and allows convenient specification of any within-/between-subjects ANOVA. Uses "Type 3" sums of squares by default.
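A minimal sketch of how such an ANOVA might be specified with afex, using the `aov_ez()` interface and the `md_12.1` example data shipped with the package (column names assumed from that data set):

```r
library("afex")

data(md_12.1)  # example data: id, angle, noise, rt

# within-subjects ANOVA with two repeated-measures factors;
# Type 3 sums of squares are used by default
aov_ez(id = "id", dv = "rt", data = md_12.1,
       within = c("angle", "noise"))
```

For mixed models, the analogous entry point is `mixed()`, which takes an lme4-style formula such as `rt ~ angle * noise + (1 | id)` and returns the requested p-values.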
Fits multinomial processing tree (MPT) models and obtains the Fisher information approximation (FIA) measure of model complexity. Can also fit signal detection theory (SDT) or other models for categorical data.
Provides the algorithmic complexity of short strings, an approximation of their Kolmogorov complexity obtained via the coding theorem method.
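A minimal usage sketch, assuming the package's `acss()` function, which takes a short string and returns its estimated algorithmic complexity:

```r
library("acss")

# estimated algorithmic complexity (and derived probability)
# for a short binary string
acss("0011")
```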
Provides PDF, CDF, quantile function, and RNG for the diffusion model and the linear ballistic accumulator (LBA). All functions are fully vectorized.
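A minimal sketch of evaluating both model densities with rtdists, using `ddiffusion()` and `dLBA()` with illustrative (not empirically motivated) parameter values:

```r
library("rtdists")

# diffusion model density for an RT of 0.6 s at the upper boundary
# (a = boundary separation, v = drift rate, t0 = non-decision time)
ddiffusion(rt = 0.6, response = "upper", a = 1, v = 2, t0 = 0.3)

# LBA density for the same RT and a response on accumulator 1
# (A = start-point range, b = threshold, mean_v/sd_v = drift parameters)
dLBA(rt = 0.6, response = 1, A = 0.5, b = 1, t0 = 0.3,
     mean_v = c(2, 1), sd_v = c(1, 1))
```

Because the functions are vectorized, `rt` (and the parameters) can be vectors, so densities for a whole data set can be obtained in one call.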
Functions for estimating marginal likelihoods, Bayes factors, posterior model probabilities, and normalizing constants in general, via different versions of bridge sampling (Meng & Wong, 1996).
Multinomial processing tree (MPT) models are cognitive measurement models for categorical data. They describe observed response frequencies from a finite set of response categories (i.e., responses following a multinomial distribution) with a finite number of latent states. Each latent state is reached by a particular combination of cognitive processes, each of which is assumed to operate in an all-or-nothing fashion.
Reasoning is the ability to infer propositions from given propositions. Psychological research on reasoning is concerned with the cognitive processes underlying this ability. An influential way of addressing this question has been to compare observed reasoning performance with normative systems such as bivalent logic or probability theory, with the goal of inferring whether the underlying processes somehow mimic or reproduce these normative systems.
Recognition memory refers to the ability to discriminate between previously encountered information and new information. A central question is how to disentangle response tendencies (e.g., the tendency to respond "old") from memory performance (i.e., the ability to discriminate between old and new information). Several measurement models with markedly different assumptions about the underlying memory processes exist.
Mixed models (aka multilevel or hierarchical models) are statistical models containing both fixed- and random-effects terms. They are useful when multiple measurements exist for each unit of observation (e.g., participant or item), or for hierarchical data more generally. Linear mixed models (LMMs) are used for normally distributed dependent variables, whereas generalized linear mixed models (GLMMs) are applicable to other distributions (e.g., binomial data).