The readings mention that there are bursts of rapid synaptogenesis in certain parts of the brain at different points in development. Could these provide a biological basis for the phase transitions in the theories of Piaget and other psychologists? The readings also say that localization of function tends to become more specific in the brain over the course of development. Is this mostly the result of innate factors or a consequence of greater experience with the environment? What would be the advantages of moving to a less distributed coding system as one gains greater experience? Would the same pattern of increased localization also hold for adults who begin learning an unfamiliar task? -Chris
One theoretical perspective on development focuses on the improvement of interactions between different brain regions. Obviously, theories based on interactions between multiple brain regions are far more complex than theories based on a modular brain. How can you go about building robust theories with so many variables in play? Taken to the extreme, the majority of the brain is probably involved to some degree in most tasks. How do you decide what constitutes a "network" responsible for particular tasks, especially based on fMRI evidence (which is, by definition, based on relative contributions to a task)?
Pelphrey & Perlman (in press):
p. 6. The authors cite Phillips et al. (2002) as providing evidence for social cognition: an infant observes a person acting positively towards one object and is "surprised" when the person is holding another object later on. Could this simply be the result of the child being habituated to the first object because their attention was focused on it, as opposed to the more deep-reaching idea that infants are "surprised" by the inconsistency?
Why do some pathways take longer to develop than others? To what degree is this because some tasks are harder to learn, so that stable interactivity between the relevant brain regions develops more slowly, as opposed to environmentally-independent maturation?
p. 11 I don't understand why neutral faces should elicit greater activation in the amygdala than fearful faces. Even if the emotion network is less developed, I would only have predicted less extreme differences between stimulus types as opposed to reversals in the predicted trends.
p. 13 One explanation for why neural networks continue to develop after children appear to have behaviourally mastered a task is that some brain regions are responsible for acquiring a skill, whereas others are required for mastering it. Is there a concrete example of this process in some cognitive domain that has been well documented? Expertise in some domains might be a good example, but I'm not sure of the degree to which expert ability precedes the correlation of activity in a particular brain region with that ability.
What does it mean to have consistent behaviour with no change in fMRI activation? Simply that there is insufficient temporal resolution to find it, or is there something else?
Responding to Blair: The amygdala typically shows greater responsiveness to emotionally ambiguous situations or stimuli than to ones showing clear-cut emotions. This is because when you have an ambiguous stimulus, you need to make your emotional circuitry work harder to determine what is probably going on. That I assume is the reason the subjects showed greater amygdala responses to the neutral faces than the ones showing clear-cut emotions.
There are a number of reasons why you could have consistent behavioral performance on a task and no fMRI activation. It could be the task was simply so easy that no part of the brain had to work hard enough to require extra oxygenated blood. It could also be the function you are interested in is divided among a large diffuse network or that the brain area involved in the function played a principally inhibitory role (it does not take any more metabolic energy for a cell to block the spread of a signal than to send the signal through; areas that serve filtering or gating roles in a network are therefore difficult to see under fMRI).
1. It does seem that the story of the brain is told not through assigning regions to certain tasks, but rather through the interactions within a network of regions -- my question is: are there methods that are effective in characterizing how a region behaves as a part of a network, rather than just whether the region activated for a task? A concrete example -- if region A retrieves knowledge, region B focuses on the task at hand, and region C integrates the task and prior knowledge, and we tested a subject on a task and found that A, B, and C were active for the task, how would we characterize what the roles of these regions are with fMRI? To address this question, is there a need to jointly utilize complementary methods? (fMRI and ERP come to mind, or perhaps MEG)
2. Certainly the software does alter the hardware in the brain -- but this is vague to me -- couldn't I use this to justify teaching calculus to 4-year-olds on the basis that the hardware will eventually adapt? I guess my point is that the more interesting question is teasing apart how the software alters the hardware, and how rigid the hardware is with respect to different software -- this certainly has educational implications if we are to set challenging but attainable learning goals for children. How do developmental cog. neuroscientists address these questions?
3. munakata, casey and diamond put forward the idea that cognition is not independent of emotion (p. 125) -- yet most cognitive psychologists, dev. cog. psychologists, and dev. cog. neuroscience researchers steer clear of emotion's influence on cognition -- does anyone agree with munakata et al that we are neglecting a crucial aspect of cognition by not studying its interaction with emotion? -bryan
First, I just wanted to say a little something about the reconfigurable computer chips, called FPGAs, that I mentioned in class. In the realm of genetic simulations, people have mostly used these to help parallelize the process of running genetic algorithms (which can be computationally expensive), but the most interesting thing I have seen done with them is to let the FPGA hardware design be guided by its own experiences in the world as it specializes in a task. This small, budding field is called Evolvable Hardware. Essentially, the FPGA chip has some way of testing the fitness of its current configuration of programs, and it keeps only the most fit third. The remaining two thirds of the programs get trashed; one new third is derived by making random mutations to the top third, and another new third is completely brand new. In this way, the ideas of mutation and incorporating new genetic material are instantiated. The FPGA reconfigures itself to optimize towards a particular task, usually multiplication in the literature.
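The selection scheme above can be sketched in a few lines of Python. This is a toy illustration only -- bitstrings and a count-the-ones fitness function stand in for actual FPGA configurations and on-chip fitness tests, and all the names and parameter values here are made up for the example:

```python
import random

GENOME_LEN = 32        # length of each toy "configuration" (a bitstring)
POP_SIZE = 30          # population size; chosen to be divisible by 3
MUTATION_RATE = 0.05   # per-bit chance of flipping during mutation

def random_genome():
    """A brand-new random configuration."""
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    """Stand-in fitness test: just count the 1-bits."""
    return sum(genome)

def mutate(genome):
    """Copy a genome, randomly flipping some bits."""
    return [1 - bit if random.random() < MUTATION_RATE else bit
            for bit in genome]

def next_generation(population):
    """Keep the fittest third, add a mutated third and a fresh third."""
    ranked = sorted(population, key=fitness, reverse=True)
    third = len(population) // 3
    survivors = ranked[:third]                       # top third kept as-is
    mutants = [mutate(g) for g in survivors]         # mutated copies of the top third
    fresh = [random_genome() for _ in range(third)]  # brand-new genetic material
    return survivors + mutants + fresh

population = [random_genome() for _ in range(POP_SIZE)]
first_best = max(fitness(g) for g in population)
for _ in range(50):
    population = next_generation(population)
best = max(fitness(g) for g in population)
```

Because the top third survives each generation unchanged, the best fitness in the population can never decrease -- which is part of why this scheme converges on stable (if temperature-sensitive!) designs.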
Watching this research group has been really interesting. For example, in one case they were allowing an FPGA to continually redesign itself to become excellent at discriminating audible tones. The stable design it generated for itself in the end was far faster than any human-generated design, and capitalized on facets of the physical nature of the FPGA that no human engineer would have exploited. They were so excited about their result that they took it to show a colleague, but when they tried to demonstrate it, the chip would not work at all. They were disappointed, but they tried the chip again in its original setup and it started working! In investigating why, they discovered that the design of the chip was extraordinarily sensitive to the temperature of the room it was in, which the experimenters had been precisely controlling. They hadn't thought to vary environmental factors, or considered the possibility of their impact, which in the end trickled down all the way to the hardware configuration. (Thompson, Adrian. (1997). Temperature in Natural and Artificial Systems. 4th Eur. Conf. on Artificial Life.)
This reminds me of the graph Pelphrey showed in class of the four levels we might study: environment, behavior, neural activity, and genetic activity. This is a nice example of interactions among these levels, albeit in a computer chip that may never be useful to psychologists for modeling (just considering the number of environmental factors to manipulate is staggering, a la a combinatorial explosion). However, I think this comes around to Bryan and Munakata's point as well. Where does emotion fit into these levels, and how does it exert its influence? I think it is clear that factors like emotion, motivation, and attention have significant roles in determining the final behavioral output, and I do think there is a lack of integration of these systems into full models of behavior.
One example I can think of is the brain's response to error. There is an ERP signal called the Error Related Negativity (ERN) that appears about 50ms after a person makes an error that they are aware of (think of the times when you might say "Oh Crap!" right after you get something wrong). In exploring the ERN, many researchers have found that the amplitude/magnitude of the ERN is significantly increased when people are motivated to do well. Likewise, if they are bored or sleepy or drunk, the ERN is smaller. Autistic and depressed people have smaller ERNs, while people with OCD have bigger ones. Even just being good at the task at hand will cause people to have a bigger ERN. Within a person, the ERN is manipulated by how much reward the person will get (ie - within a single session for a single person, the ERN is bigger on trials where people are getting money for doing well vs. when there is no reward).
So clearly there is an interaction, but the nature of this interaction is still mysterious. The ERN is hypothesized to emanate from the Anterior Cingulate (ACC, a medial structure wrapped around the corpus callosum), which is known from the fMRI literature to be involved in decision making. But how do motivation level and emotional state gate activity in the ACC, or anywhere else in the brain for that matter? How do they do it so rapidly?
As Bryan said, we can get an idea of what parts of the brain are operating, and we can hope to get a glimpse of the temporal ordering of the activation of those regions using electrophysiological measures, and in combination we might be able to start making inferences (by the way, MEG can give you beautiful movies in which you can literally watch the activity move from region to region - I'll show you some time, Bryan). BUT, my question is about other factors, such as neurotransmitters. They are obviously a critical part of brain function, and we know that there are ways their metabolism might be altered, such as reuptake inhibitors, etc. How likely is it that emotional/motivational state might influence cognitive processes via a mechanism that operates on neurotransmitter metabolism? An interesting study might be to look at MRS (spectroscopy) or PET responses of particular transmitters, focusing in on the ACC during one of these ERN studies. You could start by looking at the special populations (say, compare depressed individuals to people with OCD) and then apply it to normal individuals (manipulating their ERN amplitude by rewards).
cpaynter wrote:The amygdala typically shows greater responsiveness to emotionally ambiguous situations or stimuli than to ones showing clear-cut emotions. This is because when you have an ambiguous stimulus, you need to make your emotional circuitry work harder to determine what is probably going on. That I assume is the reason the subjects showed greater amygdala responses to the neutral faces than the ones showing clear-cut emotions.
If that's the case, then why do adults show greater activation for negative as opposed to neutral faces? They too should have to work harder to understand the connotation of a "neutral" face.