Connectionism

Elizabeth Bates is considered one of the founders of connectionism. Bates started in a clinical PhD program, but later transitioned into cognitive psychology. The similarities between the work of Bates and Thelen (DST) may be traced in some part back to the strong influence of Peter Greene, cited by both women as a powerful influence on their work. Connectionism emerged from Bates's study of language acquisition, and in particular the differences seen between the language use of native speakers and of secondary language learners (those who learned later in life). She first addressed this in her book //Language and Context//, and continues working in this area now. In particular, she was drawn to one of the central problems in the study of language development: the U-shaped pattern of change seen in the acquisition of the English past tense. Perhaps her most influential work is //Rethinking Innateness: A connectionist perspective on development//, written with five colleagues, which is considered a foundational text in the field.
 * **Background**

Two previous theories whose impact can be traced through her work are (1) Piaget's ideas of equilibrium-disequilibrium, with the associated processes of assimilation, accommodation, and organization, and (2) Gibson's then-emerging focus on affordances in the environment. Bates is explicit about some of her starting assumptions.
 * **Assumptions**
 * 1) Grammars (in all their varying forms from one language to another) represent emergent solutions to a complex communicative problem. This was vastly different from the dominant view of her era, when Noam Chomsky's theory of innate language abilities still held sway. Building on this emphasis on language has led researchers to approach other abilities as emerging through efforts to solve complex problems. In other words, there is no pre-determined end goal, no path the individual must go down; rather, development is driven by moment-to-moment effort to solve the problem currently in front of the individual - learning is purely functionalist.
 * 2) Grammatical development (all development?) is best viewed as an intensely bidirectional process.
 * 3) Individual differences are vastly important - this in particular puts her outside much of mainstream linguistics. From this perspective, outliers are key sources of knowledge about developmental processes.
 * 4) Linguistic knowledge (all knowledge) is "probabilistic all the way down", reflecting the statistics available in language use and language learning. This puts her in direct opposition to the competence (underlying knowledge of language) - performance (actual language use) distinction commonly applied in linguistics.


 * The beehive metaphor was popularized by Bates to represent emergent patterns that occur reliably, though she clarifies that she stole the metaphor from D'Arcy Thompson, who borrowed it from Bonanni. In brief, the beehive metaphor represents how specific, reliably repeating properties can be emergent rather than programmed. The form of the beehive, when observed under construction, is the result of the interaction of multiple physical constraints. Each honey cell starts as a spherical object. As these are packed together in the hive, with each bee working to create as much space as possible for itself within an individual cell, the spherical walls undergo deformation, with the surface tension between adjacent honey cells acting to create flat plane surfaces. The resulting hexagonal shape maximizes the packing space and offers the most economical use of the wax resource, through the combination of the bees' labor and the forces of physics - the bees do not need to "know" any special knowledge. The resultant pattern - reliably seen in all beehives - is an emergent property of the system's dynamics. || [[image:beehive.jpg width="382" height="194"]] ||

 * **Human Development as Active or Passive?**

Humans are understood to be active in connectionism (via Piaget and Gibson), although the component of attention is often absent in connectionist simulations, where the computer program is offered only tightly controlled stimuli. In some ways, then, this counts as an untested assumption.


 * **Development as Qualitative or Quantitative?**
 * Connectionism holds that developmental change is both quantitative and qualitative. To clarify, looking at the data on the right, we can see that there is a quantitative change at each trial - in other words, an incremental increase or decrease in the ability to correctly label an image - but that at around trial 35 there is a sharp increase that continues to rise, without ever dipping again to the error rates seen earlier. Thus, at around trial 50, and certainly by trial 70, the program is displaying a consistently new pattern of response, never returning to the earlier pattern - a qualitative change to a new "stage". In short, although qualitative changes do occur, they are always the product of quantitative changes. || [[image:QunatQualChanges.jpg width="488" height="276"]] ||


 * **Nature or Nurture?**
 * This question is really at the heart of connectionism, as is evident in the title //Rethinking Innateness//.

Development is seen as primarily regulatory, meaning that the orchestration of cell differentiation and the final outcome are under broad genetic control, but the precise pathway to adulthood reflects numerous interactions at the cellular level that occur //during development//. These interactions are influenced by numerous factors, such as timing and architectural constraints (see the beehive above). This accounts for the vastly different developmental outcomes between the average human and chimpanzee - whose DNA differs by only 1.6% - less than the DNA difference between two species of gibbons.

The primary advantage of regulatory systems is that they allow for greater flexibility. Damage to a group of cells can often be compensated for by their neighbors. An example of this flexibility can be seen in the images on the right, which show brain activation patterns for tactile stimuli in people with average sight (top row) and in those born blind. As you can see, the visual cortex activates during the processing of tactile stimuli among people who are born blind, but not among people with average vision. This suggests that the visual cortex has a "preference" for visual stimuli, but when these are not received, it becomes attuned to the stimuli that are incoming - in this case tactile.

Regulatory systems are considered responsible for most of our development beyond the womb, with important gene-environment patterns of interaction. Mosaic systems, by contrast, have little flexibility in development - disruption of these systems requires a massive disruption in the individual's environment. These processes are also called canalization, with their developmental outcomes referred to as canalized traits. These processes and their resulting traits are largely all-or-nothing: if the trait fails to develop, the organism does not survive. Examples include development of the neural tube, the fetal circulatory system, etc.

Connectionists focus on regulatory developmental processes, with an emphasis on environmental factors. Connectionists maintain that considerably more information is latent in the environment than was previously thought (Gibson again) - and current studies support this assessment.

 * **What does Develop?**

At the global level, the individual develops an ability to negotiate their multiple environments (physical and social) in a productive manner. At the micro-level, where much of connectionism is studied, change occurs in (a) the development of new connections between nodes, or (b) shifts in the weights ascribed to those connections. This micro change, resulting from each trial (or training epoch, in the image under qualitative or quantitative development), is the quantitative change discussed above. Each of these micro changes is seen as purely the result of effort on the part of the individual to solve an immediate problem confronting them at that time - this is the functional component referenced above under assumptions. Repeated trials and the resulting changes in node connections or weights will eventually produce a new pattern of response that will then be relatively stable - unless it starts encountering new situations where it does not work, and must go through another round of micro-changes in weights! This new pattern of response is the qualitative change noted above (a shift from a mean of roughly 20% likelihood of correctly identifying an image to a mean of approximately 70%). For a more detailed understanding of how this functions, see the example under Mechanisms of Development below.
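The per-trial micro-changes in connection weights described above can be sketched with a single artificial node learning logical OR. This is a minimal illustration under my own assumptions, not any published connectionist model: small quantitative weight shifts on each trial accumulate until the node settles into a stable new pattern of response.

```python
# Minimal sketch (an illustrative assumption, not a published model):
# one node nudges its connection weights a little on each trial, and
# these quantitative micro-changes accumulate into a stable new
# qualitative pattern - correctly computing logical OR.

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]   # connection weights
b = 0.0          # bias (threshold) term
rate = 0.1       # size of each quantitative micro-change

def predict(x):
    # The node fires (outputs 1) if its weighted input exceeds 0.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for epoch in range(10):                  # repeated trials
    for x, target in data:
        error = target - predict(x)      # discrepancy drives learning
        w[0] += rate * error * x[0]      # tiny weight shift per trial
        w[1] += rate * error * x[1]
        b += rate * error

# After enough micro-changes the node settles into a new, stable
# pattern of response: every input is now classified correctly.
print([predict(x) for x, _ in data])  # [0, 1, 1, 1]
```

Note that no rule for OR was ever programmed in; the stable behavior emerges purely from error-driven weight adjustments, mirroring the functionalist point made above.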

Equally important to the question "what develops" is the question of what does NOT develop. Connectionism maintains that self-organization (notice the overlap with Dynamic Systems Theory), operating as a moment-to-moment functional process targeted at solving a particular problem, produces all highly organized responses without explicit guidance or an overarching "central executive" system.


 * **What are the Mechanisms of Development?**
 * The explanation of this is perhaps best started with an example study. We will do this by taking a second look at Baillargeon's (1987b) extension of Piaget's study of object permanence in infants. The image on the right gives a brief overview of the study protocol, as previously explained in class during the Piaget presentation. Although connectionists did not necessarily disagree with the results in terms of length of look, they did disagree with Spelke's (1999) interpretation that this indicated the principles of object permanence are innately specified. The reason given for why infants show a length of look indicating an expectation of object permanence, yet do not act on it in reaching tasks, is that young infants lack the means-end analysis enabling them to coordinate the necessary sequence of actions (Diamond, 1991).

Mareschal et al. (1985) proposed that the idea of progressively strengthening representations provided an accurate, less complex explanation of this outcome. According to Mareschal, through increased processing experiences of the relevant stimuli, knowledge about the continued existence of objects accrues, so that the internal representations of occluded objects become progressively strengthened. A weak internal representation may still exceed the lower threshold sufficient for guiding looking behavior, but not the higher threshold required for reaching. Mareschal also felt that reaching requires the infant to take account of more precise details of objects, which would also have to be coordinated with the infant's representation of the object's position.
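The two-threshold idea can be sketched in a few lines. The numeric threshold values and the function name below are illustrative assumptions of mine, not values from Mareschal's work:

```python
# Hypothetical sketch of the two-threshold account: a weak internal
# representation can clear the lower "looking" threshold while still
# falling short of the higher "reaching" threshold.
# The numeric values below are illustrative assumptions only.
LOOK_THRESHOLD = 0.3
REACH_THRESHOLD = 0.7

def behaviors(strength):
    """strength: how strongly the occluded object is represented."""
    return {"look": strength > LOOK_THRESHOLD,
            "reach": strength > REACH_THRESHOLD}

print(behaviors(0.5))  # weak representation: looks, but does not reach
print(behaviors(0.9))  # strengthened representation: looks and reaches
```

On this view, no new principle needs to mature between looking and reaching; only the strength of the representation changes.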
 * These two pieces of information - (a) object recognition and (b) the object's position - are represented in the image to the right. To test the theory, Mareschal built a recurrent network with a backpropagation learning algorithm. A recurrent network was used because it is particularly well suited to using past internal representations to influence subsequent internal representations, in turn affecting behavior at the output level. The network would see sequences of inputs corresponding to several time steps of an occluder (a block) moving across a trajectory in space which also contains a stationary object (a ball). At each time step, the input specifies the location of one or two objects (the ball and/or the block). When the block passes the ball, the ball becomes temporarily invisible. The goal is to correctly anticipate the future position of objects that disappear and reappear from behind an occluder. Learning in the network is driven by discrepancies between the predictions that the network makes at each time step and the perceptual input it receives at the next time step.
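The prediction task itself can be illustrated with a toy sketch. This is emphatically not Mareschal's actual recurrent network - just a minimal stand-in of my own showing the key idea: an internal representation carried across time steps lets a system anticipate where a hidden object will reappear.

```python
# Toy illustration (not the actual recurrent network): an internal
# representation persists while the object is occluded, so the system
# can still predict where the object will reappear.

def track(observations, velocity=1):
    """observations: x-positions per time step; None = occluded."""
    memory = None        # internal representation of the object
    predictions = []
    for seen in observations:
        if seen is not None:
            memory = seen            # perceptual input refreshes memory
        elif memory is not None:
            memory += velocity       # carry the representation forward
        predictions.append(memory)
    return predictions

# An object moving right disappears behind an occluder at steps 4-5,
# then reappears; the internal representation bridges the gap.
print(track([0, 1, 2, None, None, 5]))  # [0, 1, 2, 3, 4, 5]
```

In the real model, the "memory" is a learned distributed representation rather than a stored coordinate, and the carrying-forward rule is itself acquired through backpropagation of prediction errors.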

Each circle in the modules to the right is a node. A node is like a simple artificial neuron. As is true of neurons, it collects information from various sources. Some nodes receive and send input only to other nodes, while others act like sensory receptors, receiving input from the world. Still others may act as effectors and send activation outward, while some may do all three. A node receives input from other nodes to which it is connected - these connections may be unidirectional or bidirectional. The connections between nodes have weights, and it is in the weights that knowledge is progressively built up in a network. These weights are usually real-valued numbers, such as 1.234 or -2.453, and serve to multiply the output of the sending nodes. Thus, if node a has an output of 0.5 and a connection with node b with a weight of -2.0, then node b will receive a signal of -1.0 (0.5 x -2.0); in this case the signal would be //inhibitory// rather than //excitatory//. These node weights are then represented in algebraic formulas.
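The weight arithmetic just described can be made concrete with a short sketch (the helper name `weighted_input` is my own label, not standard terminology):

```python
# A node's total input is the sum of each sender's output multiplied
# by the connection weight; negative products are inhibitory signals.

def weighted_input(senders):
    """senders: list of (output, weight) pairs feeding one node."""
    return sum(output * weight for output, weight in senders)

# The example from the text: node a outputs 0.5 across a connection
# to node b with weight -2.0, so b receives an inhibitory -1.0.
print(weighted_input([(0.5, -2.0)]))  # -1.0

# With several senders, the weighted signals simply sum.
print(round(weighted_input([(0.5, -2.0), (1.0, 1.234)]), 3))  # 0.234
```

This summation is the "algebraic formula" side of the network: every node's behavior reduces to sums of outputs multiplied by weights.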
 * The graphs to the right summarize the performance of the network when it attempted to "reach" for an object or to track an object with or without an occluder. When there is no occluder, the network quickly learns both to track and to reach for a desired object. In the presence of the occluder, the error rate for predicting the reappearance of the object from behind the occluder reaches an acceptable level much faster than the error rate for reaching. In this model, the basic mechanisms for change are constant throughout development - the shift does not correspond to the sudden maturation of a new motor system. The basic potential has been there all along. The difference is that successful reaching must await more accurate coordination of the representations in the "what" and "where" channels, as shown above. || [[image:DataObjectPermananceStudy.jpg width="461" height="236"]] ||


 * Another key part of connectionism is the presence of parallel processing. This is an important deviation from the use of the computer as a metaphor for cognition. When writing a computer program, you remove as much redundancy as possible, producing a clean set of code. However, research has shown that certain parts of human processing have high rates of redundancy built into them, with parallel lines of processing. The image on the right shows a simple example of this process. Note that there are four pathways for the input to reach the required activation unit and produce output. This means that in order to prevent an output, you would have to disrupt all four pathways, not just one. This redundancy contributes to the stabilization of the system for processes that are essential for the organism's continuation.

Notice how this counter-balances and works with the flexibility provided by the regulatory system of development.
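The four-pathway redundancy can be sketched as follows - a hypothetical illustration of mine, representing each pathway's state as a simple boolean:

```python
# Hypothetical sketch: the activation unit produces output if ANY of
# the four parallel pathways still carries the input, so disrupting a
# single pathway does not block the output.

def produces_output(pathways_intact):
    """pathways_intact: one boolean per parallel pathway."""
    return any(pathways_intact)

print(produces_output([True, True, True, True]))     # all intact -> True
print(produces_output([False, True, False, False]))  # one survives -> True
print(produces_output([False] * 4))                  # all disrupted -> False
```

Contrast this with a single-pathway program, where one point of failure is enough to silence the output entirely.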
 * **What is the Shape of Development?**
 * Intertwined with the issues of what does and does not develop, along with the mechanisms of development, is the question of what development looks like. The image to the right shows three different patterns of developmental change. Function 1 is the traditional linear pattern of development. Function 2 shows a pattern often seen in real-life observations of developmental change - a sharp rise followed by a leveling off. Function 3 shows another pattern often seen in real-life observations - a gradual increase, followed by a rapid increase, tapering to a leveling off. Note on function 3 the part marked with dotted lines - this is where the shape of the curve shifts from concave to convex - an important piece of information from a developmental perspective.

Previous linear formulas of development would have required two different formulas to accurately represent the changes seen in function 2. Function 3 would have required at least four, and perhaps up to six, formulas - clearly violating Occam's Razor. Through nonlinear algebraic formulas, we can now accurately represent all of function 3 with a single formula. Thus, although the formulas themselves may be complex at times, they fit the goal of Occam's Razor better than multiple formulas.
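One such nonlinear formula is the logistic function, which captures the whole S-shape of function 3 - slow start, rapid rise, leveling off - in a single expression. The parameter names below (ceiling, rate, midpoint) are illustrative choices of mine, not notation from the source:

```python
import math

# Logistic growth: one formula for the whole S-shaped curve.
#   ceiling  - the level the curve flattens out at
#   rate     - how steep the rapid middle phase is
#   midpoint - the inflection point, where the curvature changes sign
#              (the dotted-line region on function 3)
def logistic(t, ceiling=1.0, rate=1.0, midpoint=5.0):
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Slow start, rapid middle, leveling off - all from one expression.
for t in [0, 3, 5, 7, 10]:
    print(t, round(logistic(t), 3))
```

A piecewise-linear fit of the same curve would need several segments with separate formulas; the single logistic expression is the more parsimonious description, exactly the Occam's Razor point made above.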

 * **Predominant Methodology**

Most connectionist research is conducted with computer simulations. See the example under Mechanisms of Development above for what this process looks like.

There are three main advantages to this methodology, along with several disadvantages; both are outlined below.
 * 1) Implementing a theory as a computer model requires a level of precision and detail that often reveals logical flaws or inconsistencies easily overlooked in a verbal description.
 * 2) The models possess nonlinear characteristics, which makes their behavior difficult (if not impossible) to predict. The use of distributed representations also means that the models exhibit emergent behaviors that are often not anticipated. Thus, they function as an empirical experiment, often producing unexpected results that must then be accounted for in the revised theory. (examples and discussion to clarify?)
 * 3) The inner workings of the model are accessible for analysis, which is certainly not true for humans. This relates back to the question of nature or nurture, in that through computer simulations we can now track the minute changes resulting from rapid bidirectional feedback loops between "environment" and "the organism" in a way previously unavailable.
 * 1) Turning to the disadvantages: similar to controlled lab experiments, these simulations tell us how a process might work, but with very low ecological validity, they tell us very little about how the process does work in an individual's actual activities. We can determine how a certain stimulus could affect an individual's development if they attended to it, but not whether the individual would actually attend to that stimulus in daily life, or might instead focus on another stimulus altogether.
 * 2) Simulation studies in particular require advanced computer programming skills, limiting both those who can conduct such studies and those who can critique them meaningfully at the methodological level, rather than taking results at face value.

 * **Diversity**

Although connectionism provides an excellent opportunity for incorporating diverse backgrounds - differences in input would produce different weight systems even if all systems started the same - this avenue is currently unexplored. For example, looking at hostile attribution bias, connectionism would suggest that even if two people had the same starting system, if person b experienced above-average rates of violence, this would produce a system with a different weight pattern than person a, who experienced only average rates of violence. This would then explain why person b would interpret an event as hostile, while person a would interpret it as neutral. Connectionism provides a way of understanding how these representations are created and maintained, not just in terms of concepts but in terms of concrete neuron activation patterns, with an ability to track the micro-processes necessary to produce these two different networks.