Dopamine and reward
Goals
- To discuss dopamine systems in the brain
- To discuss different receptors for dopamine and drug targeting
- To discuss the affective/limbic loop of the basal ganglia
- To discuss predictive reward signals and reinforcement learning
- To introduce decision neuroscience
- To discuss valuation and the vmPFC
Topic slide

Daniel Kahneman (b. 1934) is a psychologist at Princeton who won the Nobel Prize in Economics in 2002 for his fundamental work on behavioral economics. Working in collaboration with Amos Tversky, Kahneman developed prospect theory, which considered decision-making under conditions of risk. Kahneman is also the author of the popular book "Thinking, Fast and Slow," in which he presents a lay treatment of his dual-process theory of decision making (System 1 – fast and heuristic-driven; System 2 – slow and deliberative).
Wolfram Schultz (b. 1944) is a professor at the University of Cambridge who has done fundamental work on the properties of dopaminergic neurons in reward learning – in particular, on reward prediction error.
Reading
- Reading: PN6: Chapter 32 (focus on Box 32 A, pp. 729-730)
Abulia
I connected the work in rodents regarding motivation and approach behaviors towards rewards to the human syndrome of abulia. I presented two case studies in which strokes occurred in the basal ganglia (reported to be restricted to the striatum) and the patients exhibited marked abulia. I don't have an online citation to the original paper, so I will quote from the paper below (my transcription, all emphasis is mine).
In May 1998 two right-handed patients, AS (female, age 58) and PJ (male, age 56) were admitted to our clinic with a spontaneous hematoma in the left BG. … In both subjects an initial neurological examination showed … a minimal paresis of the right face, arm and leg, slight dysarthria and word-finding difficulties. These symptoms all resolved completely within a few days. MRI demonstrated a bleeding located in the left basal ganglia region, but no other brain lesion…
Shortly after disease onset, AS and PJ were both abulic. They showed a profound lack of spontaneous activities, rarely expressed thoughts or asked questions. During conversation they produced sparse, laconic verbal output and had prolonged answering latencies. To keep the conversation going, the dialogue needed continuous support via repetitive requests and proposals from the examiner. Both patients often needed motivational assistance during psychometric testing, exhibited blunt and indifferent affect and never appeared preoccupied regarding their disease.
AS, who was premorbidly known for her energy, drive and initiative, was described … as now being slow, poorly motivated, lacking concern for her household and social activities, emotionally flat, absent-minded, unable to make plans, with severe memory problems and a loss of social interests. During both follow-ups she explicitly denied any form of mental impairment and stated that she felt well and unchanged. PJ had successfully run a small firm with several employees. Now his wife described him as unable to take care of his business due to lack of initiative, drive and endurance. He was perseverating, indifferent, inattentive, uninterested, emotionally unconcerned, and severely forgetful. Similar to AS, he seemed unable to detect a difference between his present and premorbid functional state. Over a period of 22 months PJ gained about 15 kg of weight. Apart from the observed loss of drive, both patients exhibited no other symptoms typical of depression. Other behavioral changes like psychosis, mania, paranoia, altered sexual behavior, compulsivity or disinhibition were not found. No patient showed spontaneous confabulations or motor abnormalities.
Both patients presented with a marked reduction of psychic self-activation, but remained reactive to external stimulation. Abulia (or psychic akinesia) has been mentioned in many CN (gmc: caudate nucleus) cases.
Different forms of learning
Earlier in the semester, I emphasized the role of the cerebellum and basal ganglia as neural machines that modulated motor commands to produce smooth movements (cerebellum) and to facilitate some and inhibit other possible movements (basal ganglia). However, in both cases, I emphasized that the neural circuitry was in place in both structures to enable a larger role in cognitive processes. In particular, the cerebrocerebellum forms a closed loop with extensive regions of prefrontal cortex or other non-motor cortical regions, and the basal ganglia circuitry includes loops that do not involve motor cortical regions. Indeed, we will focus on non-motor involvement of the basal ganglia in today's lecture on decision-making.
Peter Strick and others have advanced a theory that the basal ganglia, the cerebellum, and the cerebral cortex instantiate different forms of learning:
- Basal Ganglia – reinforcement learning
- Cerebellum – supervised learning
- Cortex – unsupervised learning
Reinforcement learning will be included in today's lecture. In reinforcement learning, certain behaviors (food seeking) are 'rewarded' and so tend to be repeated when one is again motivated to eat. However, there can be a long delay before the physiological drive state that motivated one to eat is alleviated, and so there need to be intermediate rewards to sustain behavior. Also, there may be many behavioral steps to obtaining food, and so how can distal behavioral acts that ultimately led to obtaining food be rewarded? And which behaviors should receive credit for obtaining food?
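To make the credit-assignment problem concrete, here is a minimal Python sketch (not from the lecture; the behavioral chain, reward value, and discount factor are invented for illustration) showing how a discounted return lets earlier steps in a chain of behaviors share credit for a reward that is only delivered at the end.

```python
# Minimal sketch: a discounted return assigns credit to earlier steps.
# The behavioral chain and all numbers are invented for illustration.

gamma = 0.9  # discount factor: how much a reward one step in the future is worth now

# A chain of behaviors that ends in obtaining food; only the last step
# delivers a primary reward.
steps = ["leave nest", "cross field", "dig", "find seed", "eat"]
rewards = [0.0, 0.0, 0.0, 0.0, 1.0]

# The discounted return from each step is the sum of all future rewards,
# each discounted by how far in the future it arrives.
returns = []
G = 0.0
for r in reversed(rewards):
    G = r + gamma * G
    returns.append(G)
returns.reverse()

for step, G in zip(steps, returns):
    print(f"{step:12s} discounted return = {G:.3f}")
# Earlier steps receive smaller, but nonzero, credit for the eventual food.
```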
In today's lecture we will consider these issues – focusing on a neuromodulator molecule important in reward (dopamine), an area of the brain essential for reward (the ventral striatum, aka nucleus accumbens), and the manner in which dopamine signals reward.
Dopamine
I reviewed the dopamine system. Dopamine is produced by several neuronal clusters (or nuclei) in the midbrain. There are approximately 400,000 – 600,000 dopamine-producing neurons in the human brain. The principal dopamine-producing nuclei are the substantia nigra (SN) and the ventral tegmental area (VTA, or ventral tegmentum), and from these nuclei dopamine is distributed to targets within the cerebrum. Dopamine is a monoamine like serotonin and norepinephrine, and a catecholamine like norepinephrine (aka noradrenaline). Indeed, dopamine is an intermediate product in the synthesis of norepinephrine, and many drugs that affect dopamine also affect norepinephrine (see below).
Dopamine pathways
The two main pathways discussed in lecture were the nigrostriatal and mesocorticolimbic pathways (the latter is often divided into mesolimbic and mesocortical branches). A third pathway, the tuberoinfundibular pathway, was included for completeness but not discussed in detail.
- The nigrostriatal pathway ascends from the substantia nigra to the dorsal striatum and thalamus.
- If this pathway does not provide dopamine to the striatum, Parkinson's disease may result.
- The mesocorticolimbic pathway ascends from the ventral tegmental area (VTA) to the ventral striatum (n. accumbens; the mesolimbic branch) and to the ventromedial prefrontal cortex (vmPFC) and other cortical targets (the mesocortical branch).
- The tuberoinfundibular pathway projects from the hypothalamus to the pituitary gland.
Self-stimulation and the medial forebrain bundle
James Olds showed that rats will self-stimulate – work to deliver electrical stimulation – in an area he referred to as the 'medial forebrain bundle', which roughly corresponds to the mesocorticolimbic dopamine projection pathway (although it also includes fibers from the lateral hypothalamus and other sources and targets).
Here is a description of the rat's behavior as summarized on Wikipedia:
Rats will perform lever-pressing at rates of several thousand responses per hour for days in order to obtain direct electrical stimulation of the lateral hypothalamus… Multiple studies have demonstrated that rats will perform reinforced behaviors at the exclusion of all other behaviors. Experiments have shown rats to forgo food to the point of starvation in order to work for brain stimulation… Rats will even cross electrified grids to press a lever, and they are willing to withstand higher levels of shock to obtain electrical stimulation than they are willing to accept for food.
Dopamine receptors
Here I emphasized a complexity regarding the neural action of dopamine that applies to most neurotransmitters and neuromodulators – i.e., they can have different effects at different receptors.
Recall that we have previously discussed different receptor types – for example, ionotropic and metabotropic receptors in the lecture on ion channels – and used a key opening a lock as an analogy for a neurotransmitter opening a channel.
Imagine that the neurotransmitter is a 'master key' that opens different locks, each of which opens a different door. The different locks are the different receptors sensitive to that neurotransmitter. Each door, when opened, leads to a different cascade of intraneuronal effects. For example, the 'D1' door leads to excitation, while the 'D2' door leads to inhibition.
Neurotransmitters and neuromodulators (henceforth, neuromodulators) generally act at more than one receptor type. These receptors may initiate different functions in the neuron (one might be inhibitory, one excitatory, or they may act on different second messengers in the neuron). While the neuromodulator – for example, dopamine – will act at each of its different receptors, it may have a higher or lower affinity at one receptor type than another. (A high-affinity receptor responds very efficiently to its ligand, and so is sensitive to very low concentrations of a neuromodulator; a low-affinity receptor may not respond at all to low concentrations but does respond to higher concentrations.)
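To give a rough quantitative sense of what 'affinity' means, here is a small Python sketch using the standard one-site binding relation (fractional occupancy = [ligand] / ([ligand] + Kd)). The Kd values are arbitrary illustrations, not measured values for any dopamine receptor.

```python
# Sketch: fractional receptor occupancy for a high- vs low-affinity receptor,
# using the standard one-site binding relation: occupancy = L / (L + Kd).
# The Kd values are arbitrary illustrations, not measured values.

def occupancy(ligand_nM, kd_nM):
    """Fraction of receptors bound at a given ligand concentration."""
    return ligand_nM / (ligand_nM + kd_nM)

high_affinity_kd = 10.0    # low Kd -> high affinity (responds to low dopamine levels)
low_affinity_kd = 1000.0   # high Kd -> low affinity (needs high dopamine levels)

for dopamine_nM in [1, 10, 100, 1000, 10000]:
    hi = occupancy(dopamine_nM, high_affinity_kd)
    lo = occupancy(dopamine_nM, low_affinity_kd)
    print(f"[DA] = {dopamine_nM:>6} nM   high-affinity: {hi:.2f}   low-affinity: {lo:.2f}")
# At low dopamine concentrations the high-affinity receptor is already substantially
# occupied, while the low-affinity receptor barely responds.
```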
Dopamine is a single neurotransmitter/neuromodulator, but it acts on several different neuronal membrane receptors. There are five types of dopamine receptors in the vertebrate brain (D1, D2, D3, D4, D5). The D2, D3, and D4 receptors are similar, and so it is common to speak of them as belonging to the D2-like family (D1 and D5 form the D1-like family). The different receptors are coupled to different intracellular signaling pathways, and in some cases have opposing effects on those pathways.
- D1-like (D1 and D5)
- D1 most common
- Activates cAMP
- Can increase glutamate receptor trafficking
- Excitatory
- D2-like (D2, D3, D4)
- Reduces activation of cAMP
- Modulates voltage-sensitive Ca++ and K+ channels
- Inhibitory
Dopamine affects excitability in neurons through many mechanisms. For example, D1 receptors can increase glutamate receptor trafficking (the increase in numbers of glutamate receptors) and play an important role in neuronal plasticity. Dopamine D2 receptors, on the other hand, can reduce glutamate release at the synapse. For these and other reasons, D1 receptors are generally thought to increase neuronal excitability, while D2 receptors are thought to decrease neuronal excitability. Dopamine receptors can also affect the opening of calcium, potassium, and sodium channels, and influence NMDA receptors.
Some neurons have only a single type of dopamine receptor, while others have more than one receptor type. As of 2018, there were no known ionotropic receptors for dopamine in the vertebrate brain; however, an ionotropic dopamine receptor that opens chloride channels was discovered in the worm (C. elegans) and may yet be found in the vertebrate brain. You can read about that discovery here.
Drug effects
There are different ways that drugs can change the physiological effects of a neuromodulator like dopamine:
- Antagonist – a drug that decreases the activity of a neuromodulator – for example, an antagonist can occupy a receptor for a neuromodulator like dopamine and thus not let dopamine stimulate that receptor. Although occupying the receptor, the antagonist does not activate it.
- Agonist – a drug that increases the activity of a neuromodulator – for example, by binding to and activating that neuromodulator's receptor.
Pharmaceutical drugs can be devised so that they act primarily at one, but not another, of the receptors for a neuromodulator. This can then restrict the drug’s action to one process subserved by the neuromodulator, but not other processes subserved by the same neuromodulator. This can then help control unwanted side effects of a drug.
For example, many modern ('atypical') antipsychotic drugs selectively block dopamine D2 receptors (but do not stimulate a physiological response at that D2 receptor, thus, the drug is a 'blocker' or D2 antagonist). However, atypical antipsychotics have little effect upon D1 receptors. Older generation 'typical' antipsychotics block both D1 and D2 receptors. By blocking D1 receptors, these older antipsychotics can cause a serious side effect – tardive dyskinesia – which has similar symptoms to Parkinson's disease.
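To illustrate how a drug that merely occupies a receptor (an antagonist) reduces a neuromodulator's effect, here is a Python sketch using the standard competitive-binding (Gaddum) relation, in which the antagonist effectively raises the dopamine concentration needed to reach a given receptor occupancy. All concentrations and constants are invented for illustration and do not describe any particular antipsychotic.

```python
# Sketch: a competitive antagonist occupies the receptor without activating it,
# so dopamine needs a higher concentration to reach the same occupancy.
# Standard competitive-binding (Gaddum) relation; all numbers are arbitrary
# illustrations, not values for any real drug or receptor.

def dopamine_occupancy(da_nM, kd_nM, antagonist_nM=0.0, ki_nM=1.0):
    """Fraction of receptors occupied by dopamine in the presence of a
    competitive antagonist with inhibition constant ki_nM."""
    effective_kd = kd_nM * (1.0 + antagonist_nM / ki_nM)
    return da_nM / (da_nM + effective_kd)

kd = 100.0   # illustrative Kd of dopamine at the receptor
ki = 10.0    # illustrative Ki of the antagonist at the same receptor

for antagonist in [0.0, 10.0, 100.0]:
    occ = dopamine_occupancy(da_nM=100.0, kd_nM=kd, antagonist_nM=antagonist, ki_nM=ki)
    print(f"antagonist = {antagonist:>5.0f} nM -> dopamine occupancy = {occ:.2f}")
# More antagonist -> less of the receptor is available to dopamine,
# even though the dopamine level itself has not changed.
```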
Because there are many steps in the synthesis, release, and reuptake or enzymatic degradation of a neuromodulator, there are many ways that drugs can influence a neuromodulator’s effects:
- Reuptake inhibitor – a reuptake inhibitor extends the activity of a neuromodulator by interfering with its reuptake from the synapse. We have all heard of SSRIs (selective serotonin reuptake inhibitors), but there are other drugs that interfere with the reuptake of other neuromodulators – e.g., Ritalin (methylphenidate), a drug used to treat ADHD, blocks the reuptake of dopamine.
- Enzymes like monoamine oxidase (MAO) and catechol-O-methyl-transferase (COMT) help break down norepinephrine, serotonin, and dopamine. Some antidepressants inhibit MAO and/or COMT, thus increasing the activity of these neuromodulators.
- Some drugs increase the release of a particular neuromodulator – amphetamine, for example, increases the release of dopamine from synaptic vesicles.
- Some drugs can increase the synthesis of a particular neuromodulator. L-dopa, for example, is a precursor to dopamine and thus increases the availability of dopamine. L-dopa has been used in the treatment of Parkinson’s disease.
- Some drugs interfere with the reuptake of a neuromodulator and thus sustain its activity (see reuptake inhibitors above). Cocaine blocks dopamine reuptake, and many antidepressant drugs block the reuptake of serotonin (the SSRIs, or selective serotonin reuptake inhibitors).
- Side effects can occur when a drug alters the activity of a neuromodulator in an unintended way or at an unintended target brain structure. For example, some dopamine agonist drugs that treat Parkinson’s disease have been reported (in some cases) to increase risky behavior – particularly gambling.
- The target of the Parkinson’s disease drugs is the striatum of the basal ganglia (normally served by the nigrostriatal pathway), but presumably the increase in risky behavior is related to the effects of dopamine in the n. accumbens and limbic structures (mesolimbic pathway).
Drugs of abuse
Most drugs of abuse directly (e.g., cocaine, amphetamine, nicotine, marijuana) or indirectly (alcohol, opioids) stimulate dopamine release. It has been stated that drugs of abuse 'hijack' the brain's reward systems.
Predictive reward signals and decision-making
Basal ganglia
In preparation for my discussion of the ventral striatum and the reward system, I reviewed the functional anatomy of the basal ganglia. A good online review of basal ganglia functional anatomy is available here.
You can also watch helpful Khan Academy videos that summarize both the direct and indirect pathways through the basal ganglia. More detail about the indirect pathway can also be found here.
Here is my summary of the 'motor loop' of the basal ganglia to compare to my description below of the 'affective loop'.
The caudate and putamen comprise the dorsal striatum (recall, the caudate and putamen are histologically similar, and have two names because of the white matter pathway, the internal capsule, that divides them). The striatum contains medium spiny neurons which receive input from sensorimotor regions and project output to the globus pallidus.
There are two intermixed populations of medium spiny neurons within the dorsal striatum, which are distinguished by their dopamine receptor type (D1 or D2) and their projection (direct loop: D1 – to GPi; indirect loop: D2 – to GPe). In the dorsal striatum, the substantia nigra is the source of dopamine to these two populations of MSNs.
The globus pallidus (also known as the dorsal pallidum) has two parts, the internal segment (GPi) and the external segment (GPe). The GPi tonically inhibits the VA/VL thalamus, but is inhibited by direct-pathway input from the D1 population of MSNs in the dorsal striatum. (Note, the MSNs are themselves inhibitory, so when the MSNs are excited, they transiently inhibit the GPi, which transiently disinhibits the thalamus.)
The GPe receives input from the D2 population of MSNs and projects to the subthalamic nucleus (STN). The STN, in turn, projects to the GPi and transiently excites it. Because the GPi's projections to the thalamus are inhibitory, this excitatory STN input to the GPi increases inhibition of the thalamus. Thus, the GPi integrates transient inhibition from the direct pathway and transient excitation from the indirect pathway, and thereby modulates the degree of disinhibition of the thalamus.
The direct and indirect pathways in the 'motor' striatum are thought to facilitate (direct) some movements while inhibiting (indirect) other competing movements.
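As a way of keeping the signs straight, here is a toy Python sketch of the inhibition/disinhibition chain just described. The activity values and the computed thalamic 'drive' are arbitrary, not physiological measurements; the only point is that direct-pathway (D1) MSN activity ends up increasing thalamic drive, while indirect-pathway (D2) activity decreases it.

```python
# Toy sign-model of the basal ganglia motor loop described above.
# All values are arbitrary 'activity' numbers; only the signs of the
# connections (inhibitory vs excitatory) matter for the illustration.

def thalamic_drive(d1_msn, d2_msn):
    """Rough estimate of thalamic activity given direct-pathway (D1) and
    indirect-pathway (D2) medium spiny neuron activity."""
    gpe = max(0.0, 1.0 - d2_msn)           # D2 MSNs inhibit the GPe
    stn = max(0.0, 1.0 - gpe)              # the GPe inhibits the STN
    gpi = max(0.0, 0.5 - d1_msn + stn)     # tonic GPi activity, inhibited by D1 MSNs, excited by STN
    return max(0.0, 1.0 - gpi)             # the GPi tonically inhibits the thalamus

print(f"baseline                : {thalamic_drive(0.0, 0.0):.2f}")
print(f"direct pathway active   : {thalamic_drive(0.4, 0.0):.2f}  (movement facilitated)")
print(f"indirect pathway active : {thalamic_drive(0.0, 0.4):.2f}  (movement suppressed)")
```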
Nucleus accumbens (NAc) and 'Limbic/Affective' loop
The nucleus accumbens is also known as the ventral striatum (just as the caudate and putamen are collectively known as the dorsal striatum) and it is the input area for the 'affective' or 'limbic' loop of the basal ganglia. Like the dorsal striatum, it contains two populations of medium spiny neurons (MSNs) which are distinguished by their dopamine receptor type (D1 and D2). The main source of dopamine to the ventral striatum is the VTA, but there are also projections to the 'core' of the NAc from the substantia nigra (SN).
Although not relevant to my lecture (and won't be on any test), the NAc is usually described as having two components: the shell and the core. The shell of the NAc receives its dopamine input from the VTA and projects to ventral medial prefrontal cortex. The shell also projects to mediodorsal thalamus (MD). MD thalamus has strong bidirectional projections to all of prefrontal cortex.
Although part of the ventral striatum, the shell also has some histological characteristics of the amygdala. Some researchers have noted with interest the role of the NAc in positive reward and the nearby amygdala in fear and negative reward.
The core of the NAc receives its dopamine primarily from the SN and projects to the subthalamic nucleus. The NAc core may represent the 'motor output' region of the ventral striatum.
The ventral striatum also has a direct and an indirect loop, although it is a bit more complicated than in the dorsal striatum and won't be considered in anatomical detail here. Suffice it to say, the main output of the ventral striatum is to the ventral pallidum. The ventral pallidum is that portion of the globus pallidus located below (ventral to) the anterior commissure. The ventral pallidum, in turn, projects to the vmPFC and the mediodorsal (MD) thalamus.
There are differences of opinion currently among researchers as to the functions of the direct and indirect pathways for this 'limbic' or 'affective' loop. Here are some current conceptualizations:
- 'go' (direct) and 'no go' (indirect)
- reward (direct) and punishment (indirect)
- prepare (direct) and select (indirect)
I will provide a video using D1 and D2 optogenetic stimulation of the ventral striatum that supports the reward/punishment distinction. When D1-expressing neurons were selectively stimulated with laser light, the stimulation acted as a reward for the rat's nose poke. However, stimulation of D2-expressing neurons caused the animal to avoid the section of the cage where nose pokes were made.
Later in the lecture, I provided other data suggesting that 'Go' cues that require movement by the animal release more dopamine than 'No Go' cues, even when both were equally rewarded.
Dopamine and pathological gambling
We discussed how dopamine precursors (levodopa, sinemet) are used to treat the symptoms of Parkinson’s disease. In some instances, the increase in dopamine affects other behaviors outside of the motor system. I discussed several individuals who were given sinemet to treat Parkinson’s disease, and who developed pathological gambling behavior and (in some instances) greatly heightened sexual behavior.
I also discussed a study of pathological gamblers showing that the striatum is more activated by 'near miss' outcomes; indeed, the activation level was similar to that for a winning outcome. This suggests that in (at least some) pathological gamblers, the reward system is activated by a losing outcome, which may maintain gambling behavior.
You can think of a 'near-miss' outcome in the following way: Imagine that you lost a Powerball lottery by holding a ticket with 11 of the 12 random digits that appeared on the winning ticket. Given a random draw, in actuality, your ticket was no closer to the winning ticket than any string of 11 random digits. Nevertheless, the imaging results suggest that (if you were a pathological gambler) your striatum interpreted your ticket as a near miss winner.
Reward prediction error
The positive consequences of important life behaviors – e.g., eating and mating – take time to manifest themselves. Therefore, it may be necessary to have intermediate rewards – perhaps dopamine release itself (although see below about wanting and liking) is rewarding, and thus can serve as a bridge between the behavior and the consequences of the behavior. This is the basis of reinforcement learning.
In a now famous series of studies, Wolfram Schultz showed that activity in dopamine-producing neurons (in the ventral tegmentum) increased when a reward was given without any warning cue. However, when the reward was signaled by a cue, the increase in dopamine neural activity occurred at the warning cue (or 'conditioned stimulus', CS). This chaining of intermediate rewards presumably sustains complex behaviors consisting of many intermediate steps.
If a predicted reward of an expected magnitude is presented, it no longer results in a burst of dopamine neuron firing (that firing occurred already to the predictive cue). However, if the predicted reward is absent or smaller than expected, then there is an omission of dopamine firing at the time of the reward. If the predicted reward is larger in magnitude than expected, then there is a burst of dopamine firing at the time of reward. Thus, the amount of dopamine firing at the time of reward reflects the difference, or 'prediction error', between what was expected and what was delivered. This reward prediction error is thought to be an important component in learning.
I showed interesting new data that demonstrates that the reliability of the cue also influences firing to the cue and to the reward. A cue that predicts a reward 100% of the time evokes a large dopamine response, with a small dopamine response to the actual reward. A cue that predicts the same reward 10% of the time evokes a small dopamine response, with a large dopamine response to the actual reward.
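A minimal temporal-difference (TD) learning sketch in Python may help make these observations concrete. This is a generic textbook TD(0) model, not Schultz's actual analysis; the trial structure, reward size, learning rate, and reward probabilities are all invented. Before learning, the prediction error (the stand-in for phasic dopamine firing) occurs at the reward; after learning with a reliable cue it shifts to the cue; omitting the reward produces a negative error at the expected reward time; and a cue rewarded only 10% of the time produces a small cue response with a large response at the actual reward.

```python
import random

# Minimal TD(0) sketch of reward prediction error (a generic textbook model,
# not Schultz's analysis). A cue appears at an unpredictable moment and a
# reward follows T steps later. The prediction error
#     delta(t) = r(t) + gamma * V(t+1) - V(t)
# stands in for phasic dopamine firing. All numbers are invented.

gamma, alpha, T = 1.0, 0.1, 6        # discount, learning rate, steps from cue to reward
V = [0.0] * (T + 2)                  # V[0] = pre-cue baseline, V[1..T] = trial states,
                                     # V[T+1] = post-trial state (stays 0)

def run_trial(V, reward_delivered=True, learn=True):
    deltas = []
    for t in range(T + 1):           # t = 0: the cue appears; t = T: reward time
        r = 1.0 if (t == T and reward_delivered) else 0.0
        delta = r + gamma * V[t + 1] - V[t]
        if learn and t > 0:          # baseline value stays 0: cue timing is unpredictable
            V[t] += alpha * delta
        deltas.append(round(delta, 2))
    return deltas

print("before learning      :", run_trial(V, learn=False))
for _ in range(500):                 # train on many reliable cue -> reward pairings
    run_trial(V)
print("after learning       :", run_trial(V, learn=False))
print("reward omitted       :", run_trial(V, reward_delivered=False, learn=False))

# A cue that pays off only 10% of the time acquires a small value, so the
# response at the cue is small and the response at an actual reward stays large.
random.seed(0)
V10 = [0.0] * (T + 2)
for _ in range(5000):
    run_trial(V10, reward_delivered=(random.random() < 0.10))
print("10% cue, rewarded    :", run_trial(V10, reward_delivered=True, learn=False))
```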
Dopamine and instrumental (operant) conditioning tasks
Reward prediction error is a powerful concept that links dopamine firing and reinforcement learning. However, recent data suggests that it is not the only interesting relationship of dopamine levels and behavior.
Voltammetric techniques can measure dopamine levels in different brain regions. These techniques have shown that dopamine levels increase in a 'ramp-like' fashion as the animal approaches a cue for a reward, or the reward itself.
One interesting phenomenon revealed by recent studies using this technique is the relationship between dopamine levels and goal-directed movements. A cue to move towards a goal (reward) evokes ramp-like dopamine signaling. However, a cue to hold in place (no go) to obtain the same reward does not evoke a dopamine signal until the animal moves to obtain the reward. This suggests that movement towards a desired goal plays an important role in dopamine signaling.
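One interpretation that has been offered for such ramps is that dopamine tracks the (discounted) value of the goal as the animal closes the distance to it. Here is a toy Python sketch of that idea only; it is an interpretation, not the measurement technique, and the distances, reward size, and discount factor are invented.

```python
# Toy sketch of one interpretation of dopamine 'ramps': if dopamine tracks the
# discounted value of an approaching goal, value rises as the remaining
# distance shrinks. All numbers are invented for illustration.

gamma = 0.8          # per-step discount
reward = 1.0         # value of the goal (e.g., food)
start_distance = 8   # steps from the goal at the 'go' cue

print("steps-to-goal   discounted value (ramp proxy)")
for steps_remaining in range(start_distance, -1, -1):
    value = reward * (gamma ** steps_remaining)
    print(f"{steps_remaining:>13}   {value:.3f}")
# On this interpretation, the signal ramps upward as the animal approaches the
# goal, whereas an animal holding in place stays near the starting value until
# it begins to move.
```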
Decision-making
Brian Knutson (NIH, Stanford) developed the Monetary Incentive Delay (MID) task to study the reward system of the brain. It is an analogue in humans of Schultz's dopamine studies in monkeys.
Monetary Incentive Delay task:
- A cue indicates an amount of money to be delivered.
- The cue indicating reward activates the ventral striatum (n. accumbens).
- The reward itself activates the ventromedial prefrontal cortex.
- A social version of this task also showed n. accumbens activation for a cue indicating that a smiling-face reward would be delivered. However, the smiling-face reward activated the amygdala when it was finally delivered.
- The n. accumbens is activated more by greater reward amounts.
- It appears that dopamine neurons also encode losses as well as rewards – although it may be different neurons in the n. accumbens that respond to losses and gains.
Social decision making and social reward
A social/monetary incentive decision-making task has also been used, in which the reward can be monetary (money) or simply the ability to view an attractive face (social). The ventral striatum appears equally activated in the anticipation of both monetary and social rewards, suggesting that there is a 'common currency' for reward.
Taxation vs. Charitable Giving
A study showed that monetary payoffs to the participant, observing a charity receive money from the participant's endowment through taxation, and freely choosing to donate to the charity all activated the ventral striatum. Again, these results support the common-currency concept.
Negative rewards
Thus far, I have discussed only rewards that are positive, with the absence of an expected positive reward serving as a negative outcome. But what about truly aversive stimuli? Do they also activate the dopamine predictive reward circuit?
The answer appears to be a qualified 'yes'. In a study performed in monkeys, the firing of single dopaminergic neurons was recorded while an aversive air puff was delivered to the eye; the air puff also elicited increased firing in some dopaminergic neurons. This suggests that at least some dopaminergic neurons are sensitive to negative as well as positive reinforcers.
Wanting (or seeking) and liking
Dopamine appears to motivate behavior – recall that Parkinson’s patients have trouble initiating movements. Recall also our two case studies of patients with caudate strokes who showed abulia (profound lack of behavior – even in keeping up a conversation) and were described as having ‘psychic akinesia' (akinesia meaning lack of movement, but the ‘psychic’ modifier suggesting a more profound loss of behavior than just motor behavior).
Drugs like 6-OHDA (6-hydroxydopamine) cause a selective loss of dopamine. Research by Kent Berridge showed that, in rodents, 6-OHDA can produce a lack of motivated behaviors. The rodent will not move to get food (won't 'want' or 'seek'). However, the rodent will eat or lick if food is brought to its mouth and will show the same behavioral reactions suggestive of 'liking'. This suggests that dopamine may not be rewarding in and of itself, but rather is necessary to engage in motivated behavior – the seeking or wanting of the reward.
Other studies suggest that the opioid system may be crucial for the 'liking' aspect of reward. There are opioid 'hedonic hotspots' in the nucleus accumbens and ventral pallidum. Stimulation of some hotspots greatly intensifies 'liking' behavior, while stimulation of others intensifies 'not liking' behavior. Thus, the pleasurable aspect of consumption may be mediated by endogenous opioid systems in the ventral striatum and pallidum, while dopamine systems may be responsible for motivated behavior to obtain those rewards.
Valuation
Several studies have implicated ventromedial prefrontal cortex in preference. In some theories, the vmPFC compares two rewards and responds proportionally to the difference in their value.
Neural basis of preference
The vmPFC activates more to stimuli that are more preferred relative to others. I first described the anatomy of vmPFC and orbital frontal cortex. These abutting anatomical regions are often combined into the term orbital frontal cortex.
I discussed a monkey study by Padoa-Schioppa and Assad showing that single neurons in the monkey orbital frontal cortex appear to code the relative amounts of more preferred 'A' and less preferred 'B' drink rewards. Prior to recording, the relative values of rewards 'A' and 'B' were assessed with behavioral testing and the 'point of indifference' between the two rewards was calculated (~4 units of 'B' = 1 unit of 'A'). The neurons' responses scaled with the difference in value between the competing 'A' and 'B' offers.
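To make the 'relative value' idea concrete, here is a small Python sketch (not the analysis from the Padoa-Schioppa and Assad study) that puts offers of juices 'A' and 'B' on a common value scale using the behaviorally measured indifference point described above (1 'A' ≈ 4 'B') and simulates choices with a softmax rule. The softmax temperature and the simulated offers are invented for illustration.

```python
import math
import random

# Sketch: comparing offers on a common value scale using the behavioral
# indifference point (1 'A' ~= 4 'B', as in the lecture example), then choosing
# with a softmax rule. The temperature and offers are invented; this is not
# the analysis from the Padoa-Schioppa & Assad study.

VALUE_OF_B = 1.0
VALUE_OF_A = 4.0 * VALUE_OF_B   # from the indifference point: 1 'A' = 4 'B'
TEMPERATURE = 0.5               # choice noise (illustrative)

def choose(n_a, n_b):
    """Return 'A' or 'B' for an offer of n_a units of A versus n_b units of B."""
    value_a = n_a * VALUE_OF_A
    value_b = n_b * VALUE_OF_B
    p_a = 1.0 / (1.0 + math.exp(-(value_a - value_b) / TEMPERATURE))
    return "A" if random.random() < p_a else "B"

random.seed(1)
for offer in [(1, 1), (1, 3), (1, 4), (1, 6), (1, 10)]:
    picks = [choose(*offer) for _ in range(1000)]
    print(f"offer 1A vs {offer[1]}B -> chose A on {picks.count('A') / 10:.0f}% of trials")
# Near the indifference point (1A vs 4B) choices are split; far from it, the
# higher-valued offer dominates - the kind of value-difference signal the
# recorded OFC neurons appeared to encode.
```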
Neuromarketing
Some companies are interested in the results of studies such as those described above to determine consumers' responses to different products. This work was stimulated by a study that contrasted the vmPFC response to Coke and Pepsi in individuals with a strong preference for one or the other drink. The vmPFC response (measured by fMRI) was stronger when a subject with a strong preference for Coke received Coke, and weaker for a Coke reward in subjects with a less strong preference for Coke. This suggests (to some) that the vmPFC response is a veridical measure of 'liking'.
I also discussed a study by Antonio Rangel and colleagues at Caltech who investigated whether 'marketing' (here represented by the dollar value assigned to a wine) influenced both preference (as measured by behavior) and neural activity in the vmPFC and OFC. Subjects preferred cheap wines when they were labeled as expensive, and did not prefer expensive wines when they were labeled as cheap. OFC and vmPFC activity tracked these preferences. The authors suggest that the pleasure associated with the wine (as indexed by vmPFC and OFC) was partly determined by its apparent expense.
Reconciling valuation and somatic markers
I find it unsatisfying when the same* brain regions are implicated in what appears on the surface to be very different behaviors. Such it is for ventromedial PFC, which we discussed in the domain of emotions and 'somatic markers' and in the domain of economic choice based upon valuation. [and it gets worse, we will further link vmPFC to the realm of abnormal social behavior and psychopathy.]
One way to link valuation to somatic markers is through the subjective experience that the potential rewards evoke. The 'as if' feeling associated with a high value reward is presumably stronger than the 'as if' feeling associated with a lesser value reward.
It is difficult to determine what are 'same brain regions' with fMRI and lesion studies. Both of these require spatial separation of differing functions. If two or more functions are spatially interdigitated, then the blood oxygenation response of fMRI will be indistinguishable for all of those functions. We have already seen in the striatum (both dorsal and ventral) that medium spiny neurons that are part of the direct (D1) pathway and indirect (D2) pathway are spatially interdigitated. I showed a slide from a 2017 study indicating that there are functionally dissociable but spatially interdigitated D1 and D2 microcircuits in vmPFC associated with different aspects of decision-making.
Videos
Videos prerecorded for 2020
Video transition to lecture 24