
Probabilistic inference is a deep concept that bears on decisions made in everyday life, from the mundane to the momentous. Probabilistic models attempt to explain human thought using the core ideas of statistics.

The most immediate examples that come to mind regarding probabilities and statistics are games of chance, particularly those involving dice or a deck of cards. This idea of formulating actions under uncertainty can be applied directly to human behavior and used to dissect rational action (Griffiths and Yuille 2009). Take an example from a popular card game, Magic: The Gathering. The active player (AP) has three life remaining; when their life total reaches zero, they lose the game. The non-active player (NAP) has three cards in hand and an unknown number of cards in their deck.

The AP knows that the NAP's deck contains a card that can instantly cause the AP to lose three life, and with it the game. From the number of cards in the NAP's hand and the number remaining in their deck, the AP judges that they are safe from dying this turn. The example is mechanical and may feel foreign, but what the AP essentially did was estimate the probability of the NAP holding the relevant card. This is a probabilistic inference: a model of the rational play produced by the AP's reasoning.
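To make the card example concrete, the AP's judgment can be written as a hypergeometric calculation: the chance that at least one copy of the lethal card landed in the NAP's hand. The sketch below is a minimal illustration; the deck size (40) and the number of copies of the card (4) are hypothetical values chosen for the example, not details from the game described above.

```python
from math import comb

def prob_card_in_hand(deck_size, copies, hand_size):
    """Probability that at least one of `copies` identical cards ended up
    in a hand of `hand_size` drawn from `deck_size` cards (hypergeometric).
    P(at least one) = 1 - P(none)."""
    return 1 - comb(deck_size - copies, hand_size) / comb(deck_size, hand_size)

# Hypothetical numbers: 4 copies of the lethal card among the NAP's
# remaining 40 cards, 3 of which are in hand.
print(prob_card_in_hand(deck_size=40, copies=4, hand_size=3))  # ~0.277
```

On these made-up numbers, the NAP holds the card only about 28% of the time, which is the kind of rough calculation the AP's "safe this turn" judgment amounts to.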

Probabilistic models unify many fields in order to take a peek inside human cognition, combining computer science, mathematics, engineering, and neuroscience in pursuit of the holy grail of understanding: human cognition.

There exist two challenges in developing probabilistic models of human cognition. The first lies in model selection; the second is the analysis of a model's results and predictions (Griffiths and Yuille). An example of such a model is Stanford's DeepDive: "DeepDive is a new type of data management system that enables one to tackle extraction, integration and prediction problems in a single system" (DeepDive). Their system uses a model called a "factor graph" to perform probabilistic inference. Many different models exist, and different models can yield different interpretations of the same data (Griffiths and Yuille). The second issue, the analysis of a model's results, stems from the complexity of the problems these models are run on. Griffiths and Yuille specifically state that, "As the structure of the model becomes richer, probabilistic inference becomes harder" (Griffiths and Yuille).
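As a minimal illustration of the factor-graph idea, and of why richer structure makes inference harder, the toy sketch below defines factors over three binary variables and computes a marginal by brute-force enumeration. This is purely illustrative and is in no way DeepDive's actual implementation; exact enumeration costs 2^n for n binary variables, which is exactly the blow-up Griffiths and Yuille describe.

```python
import itertools

# A toy factor graph over three binary variables A, B, C.
# Each factor scores an assignment; inference multiplies the factors
# together and sums over variable settings.
factors = [
    lambda a, b, c: 2.0 if a == 1 else 1.0,   # f0(A): unary factor favoring A = 1
    lambda a, b, c: 2.0 if a == b else 0.5,   # f1(A, B): A and B tend to agree
    lambda a, b, c: 3.0 if b == c else 1.0,   # f2(B, C): B and C tend to agree
]

def marginal(var_index):
    """Brute-force marginal P(variable = 1) by enumerating every assignment.
    The enumeration is 2^n in the number of variables, which is why
    inference gets harder as the model's structure becomes richer."""
    total, hit = 0.0, 0.0
    for assignment in itertools.product([0, 1], repeat=3):
        weight = 1.0
        for f in factors:
            weight *= f(*assignment)
        total += weight
        if assignment[var_index] == 1:
            hit += weight
    return hit / total

print(marginal(0))  # P(A = 1) under the toy factors
```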

These models are powerful tools for tackling the gargantuan problem of human cognition; they let researchers see data in ways that would otherwise not be feasible. One well-explored area for these models is memory. Memory recognition and recall tasks in particular provide a perfect opportunity for these models to perform. In fact, a Google Scholar search for the keywords "memory recognition and recall" returns over 1,200,000 results (Google Scholar).

General research questions posed include topics such as episodic memory (Tulving 1985), word recognition (Shepard 1967), and hippocampal memory (Squire 1992). However, the issue that piqued the most interest here is the relationship between memory and the knowledge that a person accrues on a day-to-day basis. Pernille Hemmer and Mark Steyvers published a paper in January 2009 that provides "A Bayesian Account of Reconstructive Memory," focusing specifically on "the interactions of memory and knowledge"; they "propose a Bayesian model of reconstructive memory in which prior knowledge interacts with episodic memory at multiple levels of abstraction" (Hemmer and Steyvers 2009, p. 1). Hemmer and Steyvers extend the work of Huttenlocher et al. (1991, Psychological Review), proposing that there is a "combination of prior knowledge and noisy memory representations that are dependent on familiarity" and that there is "empirical evidence of the influences of prior knowledge at multiple levels of abstraction" (Hemmer and Steyvers 2009, p. 1).

The team performed a behavioral experiment with "natural objects," namely fruits and vegetables, of which participants could be assumed to have prior knowledge. Hemmer and Steyvers point out that they expected participants to have not only "category level knowledge" but also "object level knowledge," so that if participants were shown a fruit or vegetable that is not commonly known, their prior knowledge could guide "reconstruction." Hemmer and Steyvers produced 36 images across the two categories of fruits and vegetables. The methods section states that "24 of the objects from each of the categories were used" (Hemmer and Steyvers 2009). They also note that "another class of stimuli were also developed: abstract shapes created by drawing outlines of objects and filling them with blue."

After a norming phase, participants moved on to the memory phase and were tested. The results of the norming phase follow a "natural order": for example, mushrooms were judged smaller than bell peppers, and so on. For the memory phase, the team used "reconstruction error," the remembered size versus the studied size, to measure participant performance (Hemmer and Steyvers 2009). The results of the memory portion follow what Hemmer and Steyvers call a "regular pattern" across differing objects. Their model mixes the "prior mean and the variance, a combination of category and object level priors" (Hemmer and Steyvers 2009). Given this analysis, the conclusion is that reconstruction of "size" is in fact "influenced by prior knowledge at multiple levels." Less familiar objects yield inferences that are more "category centered"; familiar objects, in contrast, yield inferences that lean more on the "object prior" (Hemmer and Steyvers 2009).
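The core of this prior/memory mixing can be illustrated with a one-level Gaussian sketch: the reconstructed size is a precision-weighted average of the noisy memory trace and the prior mean. This is a simplified stand-in for Hemmer and Steyvers' hierarchical model, which combines category- and object-level priors; all of the numbers below are made up for illustration.

```python
def reconstruct_size(memory_trace, noise_var, prior_mean, prior_var):
    """Posterior mean for a studied size given a noisy memory trace and a
    Gaussian prior: a precision-weighted average. A simplified, single-level
    sketch of prior/memory mixing, not Hemmer and Steyvers' actual
    hierarchical model."""
    w = prior_var / (prior_var + noise_var)   # weight placed on the memory trace
    return w * memory_trace + (1 - w) * prior_mean

# Unfamiliar object: noisy memory (high noise_var), so the estimate is
# pulled strongly toward the category mean ("category centered").
print(reconstruct_size(memory_trace=9.0, noise_var=4.0, prior_mean=5.0, prior_var=1.0))   # 5.8
# Familiar object: reliable memory, so the estimate stays near the trace
# (leaning on the "object prior" side of the pattern).
print(reconstruct_size(memory_trace=9.0, noise_var=0.25, prior_mean=5.0, prior_var=1.0))  # 8.2
```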

One modification to this study would be to test whether a physical object changes the results of the experiment. Paper drawings, owing to their two-dimensional nature, can present misleading and sometimes outright inaccurate proportions. The modified study could test the same questions as before while attempting to answer a new one: does the physical presence of an object alter or strengthen memory recall? It has been shown that the melodies of songs can have a positive effect on learning and recall, and that when a song is repeated, memory can be triggered by that familiar melody (Wallace 1994). It is possible that the physicality of an object produces a strengthened memory-recall response that a drawing does not trigger. Another angle is that there are aspects of physicality beyond sight: smell and hearing are both factors of physicality that are not often brought to the forefront.

The smell of an apple's skin may trigger stronger memory recollection than the smell of ink on paper. The modification would be informative because it addresses a different angle, offering unique insight into how prior knowledge influences the interpretation of unfamiliar information and memory recall. This alternate take would give researchers another look into human cognition. The possible results may include a strengthening of memory recollection for certain events; very similar results are seen with melody and music, and the same might hold true for the other senses (Wallace 1994). It should be said, however, that the writer of this paper is not an expert in this field and cannot make these statements with certainty. The hypothesis would change to something like the following: "prior knowledge interacts with episodic memory at multiple levels of abstraction, and the strength of this abstraction may be modified by the physicality of the object in question." The Bayesian model would subsequently have to change as well; its factors would now have to include whether or not the object presented had aspects of physicality.

How to map those factors, however, is a whole other task outside of my field of expertise. What the modified study can teach us about the mind concerns the interactions between the varying levels of memory and knowledge within an individual. This interaction is incredibly complex; the layers within it, not to mention the factors influencing it, must be large in number. Hacking away at the possibilities one step at a time is a useful move toward the ultimate goal: understanding human cognition and all of the intricacies that come with it.

Probabilistic inference is closely related to symbolic systems, and the two often fall back on one another to operate. For example, "Symbolic computation systems, are compilers for computer languages which allow indefinite results" (Dodier 2006, p. 2). In summary, this means that symbolic systems can handle both definite results, the idea of 0s and 1s, and indefinite ones, the idea of a "graded percentage."
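As a minimal sketch of this definite/indefinite distinction, the snippet below manipulates an expression symbolically (an indefinite result) and then substitutes a value to obtain a definite one. The sympy library is used purely as an illustration and is not mentioned in the cited source.

```python
import sympy

# An "indefinite" result: the expression is manipulated without ever
# committing to a value for x.
x = sympy.Symbol('x')
expr = x**2 + 2*x + 1
print(sympy.factor(expr))   # (x + 1)**2  -- still symbolic, still indefinite

# A "definite" result: once a value is supplied, the system produces a number.
print(expr.subs(x, 3))      # 16
```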

This capability matters for computation in many ways, but the system has interesting limits and drawbacks. Take, for instance, a random number generator for a video game: most random number generation algorithms are not "truly" random.

In fact, "truly random results can only be generated by using a phenomenon that occurs in nature" (KnowtheRNG). Other forms of random number generation are pseudo-random or quasi-random. Patterns eventually arise in these algorithms, however, and so modern systems compromise by creating a middle ground (KnowtheRNG).
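A brief, generic sketch of why pseudo-random output is not truly random: a linear congruential generator (a classic pseudo-random algorithm, chosen here only as an illustration) is completely determined by its seed, so the same seed replays the same "random" sequence, and the sequence eventually cycles.

```python
def lcg(seed, a=1103515245, c=12345, m=2**31):
    """A linear congruential generator: a common pseudo-random algorithm.
    Fully deterministic: state evolves as state = (a*state + c) mod m."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m  # scale into [0, 1)

gen = lcg(seed=42)
print([round(next(gen), 3) for _ in range(5)])
# Re-seeding reproduces the identical sequence -- nothing truly random here.
gen2 = lcg(seed=42)
print([round(next(gen2), 3) for _ in range(5)])
```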

Probabilistic inference differs from connectionist models in that a connectionist model is based on the recognition of activity patterns that strengthen over time. Connectionism, at its core, is an attempt to explain human cognition using techniques such as neural networks. These neural networks are models inspired by the brain, with units (much like neurons) that work together through their connections to one another.

These connections are measured in strength by "weights" (Garson). A probability can increase, but it does not "strengthen" into knowledge over the course of a training session; and while connectionist models do involve graded, percentage-like quantities, they are not based on inferring knowledge that is not known or present at the time.
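A tiny, generic sketch of this weight-strengthening idea (a textbook-style Hebbian update, not a model from any of the cited sources): repeated exposure to a pattern strengthens the weights on co-active connections, which is the "strengthening over time" that distinguishes connectionism from probabilistic inference.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1, size=3)   # three input connections, small random weights

def activate(inputs, weights):
    """A single connectionist unit: weighted sum squashed by a logistic function."""
    return 1.0 / (1.0 + np.exp(-np.dot(inputs, weights)))

learning_rate = 0.5
pattern = np.array([1.0, 0.0, 1.0])      # a repeatedly presented activity pattern

# Hebbian update: weights between co-active units grow with each exposure.
for step in range(5):
    output = activate(pattern, weights)
    weights += learning_rate * output * pattern
    print(step, round(float(output), 3), weights.round(3))
```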

Probabilistic inference also differs from embodied cognition, in that the probabilistic view of the mind focuses solely on the brain and its computational power, not the body in its entirety. This focus runs against a core tenet of embodiment: that your physical body, in and of itself, is a crucial, integral part of your understanding of everyday life, your cognition. While it can be argued that the body may be making inferences based on collective bodily knowledge, the two systems by definition stand opposite one another in how they model human cognition.