Structurally, therefore, this pathway would appear well situated to monitor and to switch between cortical inputs to the striatum based on changes in well-predicted
external contingencies (Kimura et al., 2004). Indeed, the recent suggestion that striatal CINs may form a recurrent inhibitory network anticipates context- or state-specific plasticity of this kind, with each CIN potentially modulating a distinct region of corticostriatal plasticity under the control of the thalamostriatal pathway (Sullivan et al., 2008). At a formal level, contextual or state cues of this kind have emerged as a critical component of computational models of goal-directed action derived from model-based reinforcement learning (Daw et al., 2005). Such cues are argued to exert conditional control over actions and to produce a state prediction error when changes in that control occur. Model-based reinforcement learning uses experienced state-action-state transitions to build a model of the environment, generating a state prediction error whenever an observed transition departs from the current estimates of the state-action-state transition probabilities (Gläscher et al., 2010).
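To make this computation concrete, the following minimal sketch (written here in Python for illustration; the variable names and update rule are our simplification, not the specific model of Gläscher et al., 2010) shows how a state prediction error can be computed from, and used to update, an estimated transition model:

    import numpy as np

    # Sketch of a state prediction error (SPE) under a learned transition model.
    # T holds current estimates of the state-action-state transition
    # probabilities P(s' | s, a) over a small discrete state/action space.
    n_states, n_actions = 4, 2
    T = np.full((n_states, n_actions, n_states), 1.0 / n_states)  # uniform prior

    def state_prediction_error(T, s, a, s_next):
        # SPE = 1 - estimated probability of the transition actually observed,
        # so well-predicted transitions yield small errors and unexpected ones
        # (e.g., after a reversal) yield large errors.
        return 1.0 - T[s, a, s_next]

    def update_transition_model(T, s, a, s_next, eta=0.1):
        # Nudge the estimate for the observed transition upward in proportion
        # to the SPE, then renormalize the distribution over next states.
        spe = state_prediction_error(T, s, a, s_next)
        T[s, a, s_next] += eta * spe
        T[s, a] /= T[s, a].sum()
        return spe

    # Example: observing the transition (s=0, a=1) -> s'=2 yields an SPE of
    # 0.75 under the uniform prior and sharpens the model toward that outcome.
    spe = update_transition_model(T, s=0, a=1, s_next=2)

On this scheme, a reversal of the instrumental contingency is registered as a change in which state transitions are expected, rather than as a change in the amount of reward earned.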
The notion of a state prediction error contrasts with that of a reward prediction error derived from temporal difference models of learning (Sutton and Barto, 1998), which has been shown to correlate reliably with the phasic activity of midbrain dopamine neurons (Schultz and Dickinson, 2000).
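For comparison, a reward prediction error in a simple temporal difference scheme can be sketched as follows (again an illustration under our own simplifying assumptions, not the specific models cited above):

    import numpy as np

    # Sketch of a temporal-difference reward prediction error (RPE).
    # V holds estimated state values; the error is driven by the reward
    # received relative to expectation, not by which state was entered.
    V = np.zeros(4)

    def reward_prediction_error(V, s, r, s_next, gamma=0.9):
        # Classic TD error: delta = r + gamma * V(s') - V(s).
        return r + gamma * V[s_next] - V[s]

    def td_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
        delta = reward_prediction_error(V, s, r, s_next, gamma)
        V[s] += alpha * delta
        return delta

    # With V still at its zero initialization this first update returns 1.0;
    # once values have been learned, and the alternative outcomes are equally
    # valued so the amount of reward earned is unchanged across a reversal,
    # delta stays near zero even though the state transition is unexpected.
    delta = td_update(V, s=0, r=1.0, s_next=2)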
However, the reward prediction error is negligible, particularly in the reversal experiments of the current series: the animal is expecting one outcome and receives another, which creates an error signal, but one that is unrelated to reward prediction per se (the amount of reward earned is unchanged). This kind of signal is consistent with recent suggestions that CINs may participate in a form of prediction error signal in the DMS during reversal of previously learned contingencies (Apicella et al., 2011). Indeed, similar studies assessing prediction errors in what are, at least nominally, instrumental conditioning tasks have found that TANs (putative CINs) preferentially encode prediction errors to situational events rather than to reward (Apicella et al., 2011; Stalnaker et al., 2012). Taken together, the effect of impaired pDMS CIN function on contingency degradation and on the learning of new, but not initial, action-outcome contingencies is consistent with a deficit in computing the reductions in state prediction error that lead to reductions in contingency knowledge (see Supplemental Information). Whatever the role of CINs in conditional control, the current data suggest that the thalamostriatal pathway and its influence on CINs are critical for encoding changes in the instrumental contingency. Although this pathway does not appear to play any direct role in encoding action-outcome associations (this paper) or in striatal LTP (Bonsi et al.