# Recognition Memory Models

Recognition memory is concerned with the ability to discriminate between previously encountered information and new information. A central question is how to disentangle response tendencies (e.g., the tendency to respond “old”) from memory performance (i.e., the ability to discriminate between old and new information). Several measurement models with markedly different assumptions about the underlying memory process exist.

*Figure: Three different SDT-based measurement models of recognition memory and a prototypical ROC plot.*

In recognition memory experiments participants are first presented with a list of stimuli, one after another, which they are asked to encode (the learning phase). In the subsequent test phase participants are again presented with stimuli and have to decide which of them had been presented during the study phase (i.e., are old) and which are new. In the simplest task, the Old/New task, stimuli are presented individually and for each item participants have to decide whether it is old or new. Responding “old” to an old item is called a hit and responding “old” to a new item is called a false alarm. As the proportion of “new” responses to old items, termed misses, sums to one with the hit rate, misses can be ignored (the same holds for “new” responses to new items, called correct rejections).
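As a toy illustration, the hit and false alarm rates can be computed directly from Old/New responses. The item statuses and responses below are made up for demonstration, not real data:

```python
# Hypothetical Old/New task data: for each trial we know the true item
# status ("old"/"new") and the participant's response ("old"/"new").
def rates(statuses, responses):
    """Return (hit_rate, false_alarm_rate) from parallel lists of
    item statuses and responses."""
    old_resps = [r for s, r in zip(statuses, responses) if s == "old"]
    new_resps = [r for s, r in zip(statuses, responses) if s == "new"]
    hit_rate = old_resps.count("old") / len(old_resps)   # "old" to old items
    fa_rate = new_resps.count("old") / len(new_resps)    # "old" to new items
    return hit_rate, fa_rate

statuses = ["old", "old", "old", "old", "new", "new", "new", "new"]
responses = ["old", "old", "old", "new", "old", "new", "new", "new"]
print(rates(statuses, responses))  # (0.75, 0.25)
```

Misses and correct rejections need not be computed separately: they are simply one minus the hit rate and one minus the false alarm rate.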

Unfortunately, hit and false alarm rates are not unbiased measures of memory performance (). Take, for example, a decision maker with no memory of the studied items whatsoever who responds randomly with “old” and “new”. We would expect this individual to show a hit rate and a false alarm rate of .5. Now imagine another decision maker who also has no memory of the studied items but has a tendency to respond “old” in 80% of the trials. We would expect that individual to show a hit rate and a false alarm rate of .8. As can easily be seen, memory performance can only be assessed by combining hit and false alarm rates. Several different measurement models try to accomplish this.
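The point can be sketched with the standard equal-variance SDT indices, sensitivity d′ and criterion c, computed from the z-transformed rates (this assumes the equal-variance Gaussian model; the rates are the hypothetical ones from the two decision makers above):

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse of the standard normal CDF

def dprime_and_criterion(hit_rate, fa_rate):
    """Equal-variance SDT: separate sensitivity from response bias."""
    d = z(hit_rate) - z(fa_rate)            # sensitivity d'
    c = -0.5 * (z(hit_rate) + z(fa_rate))   # criterion c (response bias)
    return d, c

# Both hypothetical decision makers have no memory (hit rate equals
# false alarm rate), so d' = 0 for both; only the criterion differs.
print(dprime_and_criterion(0.5, 0.5))  # unbiased guesser
print(dprime_and_criterion(0.8, 0.8))  # liberal "old" responder, still d' = 0
```

A negative c indicates a liberal tendency to respond “old”; combining the two rates this way is what separates memory performance from response tendencies.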

The figure on the left shows three measurement models for recognition memory based on signal-detection theory (SDT; ). According to SDT, the memory information of a previously encountered stimulus acts as a continuous signal, usually termed familiarity, which needs to be detected by the decision maker. If the familiarity value of a given stimulus exceeds a criterion set by the decision maker, the “old” response is given. New items also elicit familiarity, which can sometimes exceed the criterion, leading to false alarms. The familiarity distributions are usually assumed to be Gaussian. The smaller the overlap of the old- and new-item familiarity distributions, the better the memory of the decision maker. The placement of the criterion reflects the response tendencies. The models displayed here have multiple criteria, reflecting the fact that responses are given on a graded confidence scale (here from 1 to 6).
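A minimal sketch of this setup, assuming an equal-variance Gaussian model with illustrative criterion placements for a 6-point confidence scale (all parameter values are made up, not fitted estimates):

```python
from statistics import NormalDist

norm = NormalDist()  # standard normal, N(0, 1)

def rating_probs(mu, criteria):
    """Predicted response-category probabilities for a familiarity
    distribution N(mu, 1), partitioned by the ordered criteria."""
    cuts = [float("-inf")] + list(criteria) + [float("inf")]
    cdf = lambda x: norm.cdf(x - mu)  # CDF of N(mu, 1) at x
    return [cdf(hi) - cdf(lo) for lo, hi in zip(cuts, cuts[1:])]

criteria = [-1.0, -0.5, 0.0, 0.5, 1.0]   # five criteria -> six categories
new_probs = rating_probs(0.0, criteria)  # new items: N(0, 1)
old_probs = rating_probs(1.5, criteria)  # old items shifted by d' = 1.5
# Each list sums to 1; old items produce more high-confidence "old" ratings.
```

Sweeping the criteria and plotting cumulative hit rates against cumulative false alarm rates is what produces the ROC plot shown in the figure.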

The most popular version of the SDT-based models is the unequal-variance signal detection (UVSD; ) model, which assumes that the old-item familiarity distribution has a larger variance than the new-item distribution. The dual-process signal detection (DPSD; ) model assumes a combination of familiarity-based recognition judgments and threshold-based episodic retrieval judgments, termed recollection. With probability $$R$$, an item is recollected (i.e., the memory episode is retrieved) and the correct response is given with highest confidence. The mixture signal detection (MSD; ) model assumes a mixture of old-item familiarity distributions: the familiarity distribution of items that were attended to during the study phase (with probability $$\lambda$$) is shifted further away from the new-item distribution than the familiarity distribution of the unattended items. Our figure displays a version of the MSD in which the unattended familiarity distribution is equivalent to the new-item familiarity distribution. A fourth model (not graphically displayed here), the two-high threshold (2HT; ) model, assumes no continuous memory process. Instead, it assumes discrete memory states: either an item is detected as old or new, with probabilities $$D_o$$ and $$D_n$$, respectively, in which case the correct response is invariably given, or detection fails and a response is guessed, reflecting response tendencies.

I am interested in a variety of aspects of recognition memory models (see here for a list of relevant publications). There is a long and ongoing debate about whether continuous memory models or discrete-state memory models are empirically more adequate (e.g., ). I have recently published a paper on this debate and am currently working on more in this regard. The question I am generally interested in is how we can adequately measure recognition memory performance. This is also important in one of my more applied studies (). Finally, I am interested in specific aspects of recognition memory models. For example, we could show that there is actually little evidence for criterion variability (or criterion noise) in recognition memory experiments.