Traumatic brain injury and reserve

The potential role of brain and cognitive reserve in traumatic brain injury (TBI) is reviewed. Brain reserve capacity (BRC) refers to preinjury quantitative measures, such as brain size, that relate to outcome. Higher BRC implies a difference in the threshold at which clinical deficits become apparent after injury: individuals with higher BRC require more pathology to reach that threshold. Cognitive reserve (CR) refers to how flexibly and efficiently the individual makes use of available brain resources. The CR model suggests that the brain actively attempts to cope with brain damage by using pre-existing cognitive processing approaches or by enlisting compensatory approaches. Standard proxies for CR include education and IQ, although these have expanded to include literacy, occupational attainment, engagement in leisure activities, and the integrity of social networks. Most research on BRC and CR has taken place in aging and degenerative disease, but these concepts likely apply to the effects of TBI, especially with regard to recovery. Since high rates of TBI occur in those under age 35, both CR and BRC factors likely relate to how the individual copes with TBI over the lifespan. These factors may be particularly relevant to the risk of developing dementia in individuals who sustained a TBI earlier in life.


GENERAL CONCEPTS OF RESERVE
Katzman et al. (1988) described 10 cases of cognitively normal elderly women who were discovered to have advanced Alzheimer's disease (AD) pathology in their brains at death. They speculated that these women did not express the clinical features of AD because their brains were larger than average, providing them with "brain reserve." In more recent cohort studies it has been estimated that approximately 25% of individuals who have neuropathologic evidence of AD at postmortem examination were not demented during their lives (Ince, 2001).
Brain reserve (Katzman, 1993) is an example of what might be called passive models of reserve, where reserve derives from brain size or neuronal count. The models are passive because reserve is defined in terms of the amount of brain damage that can be sustained before reaching a threshold for clinical expression. The threshold model (Satz, 1993), one of the best articulated passive models, revolves around the construct of "brain reserve capacity" (BRC). While BRC is a hypothetical construct, concrete examples of BRC might include brain size or synapse count. The model recognizes that there are individual differences in BRC. It also presupposes that there is a critical threshold of BRC. Once BRC is depleted past this threshold, specific clinical or functional deficits emerge.
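The passive threshold logic can be caricatured in a few lines of code: a clinical deficit appears only once accumulated pathology depletes an individual's BRC below a fixed critical threshold, so identical lesions can produce different clinical outcomes. The numeric values below are purely illustrative assumptions, not empirical quantities.

```python
def shows_deficit(brain_reserve_capacity, pathology_burden, critical_threshold=25.0):
    """Sketch of the passive threshold model (Satz, 1993).

    A deficit emerges only when remaining reserve (BRC minus accumulated
    pathology) falls below a fixed critical threshold.  All units are
    arbitrary; the numbers are illustrative, not empirical.
    """
    remaining_reserve = brain_reserve_capacity - pathology_burden
    return remaining_reserve < critical_threshold

# Identical pathology, different preinjury BRC:
print(shows_deficit(brain_reserve_capacity=100.0, pathology_burden=60.0))  # False
print(shows_deficit(brain_reserve_capacity=70.0, pathology_burden=60.0))   # True
```

The point of the sketch is that the model is passive: nothing about *how* the brain processes tasks changes, only how much substrate can be lost before the threshold is crossed.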
In contrast, the cognitive reserve (CR) model suggests that the brain actively attempts to cope with brain damage by using pre-existing cognitive processing approaches or by enlisting compensatory approaches (Stern, 2002, 2009). Individuals with more CR would be more successful at coping with the same amount of brain damage. Thus, the same amount of brain damage or pathology will have different effects on different people, even if BRC (e.g., brain size) is held constant. The concept of CR provides a ready explanation for why many studies have demonstrated that higher levels of intelligence and of educational and occupational attainment are good predictors of which individuals can sustain greater brain damage before demonstrating functional deficit. Rather than positing that these individuals' brains are grossly anatomically different than those with less reserve (e.g., they have more synapses), the cognitive reserve hypothesis posits that they process tasks in a manner that allows them to cope better with the brain damage. Brain reserve and cognitive reserve concepts are not mutually exclusive, and it is likely that both are involved in providing reserve against brain damage.
It has become evident in recent years that the demarcation between brain reserve and cognitive reserve is not clear-cut. First, from a strict point of view, the differences in cognitive processing envisioned by the cognitive reserve model must also have a physiologic basis, in that the brain must ultimately mediate all cognitive function. The difference is in terms of the level of analysis. Presumably, the physiologic variability subsumed by cognitive reserve is at the level of variability in synaptic organization, or in relative utilization of specific brain regions. Thus cognitive reserve implies anatomic variability at the level of brain networks, while brain reserve implies differences in the quantity of available neural substrate. Second, many of the factors associated with increased cognitive reserve, such as cognitively stimulating experiences, have a direct effect on the brain. The child developmental literature suggests that not only do individuals with higher IQ have larger brain volume (Willerman et al., 1991; Kesler et al., 2003), but that cognitively stimulating aspects of life experience may also be associated with increased brain volume. It is also now clear that stimulating environments and exercise promote neurogenesis in the dentate gyrus (Brown et al., 2003; van Praag et al., 2005). Both exercise and cognitive stimulation regulate factors that increase neuronal plasticity (such as BDNF) and resistance to cell death. Indeed, Rostami et al. (2011) demonstrated that BDNF polymorphism predicted recovery of general cognitive ability after penetrating TBI (see also Krueger et al., 2011). Finally, there is evidence to suggest that environmental enrichment might act directly to prevent or slow the accumulation of AD pathology (Lazarov et al., 2005).
Thus, a more complete account of CR would have to integrate these complex interactions between genetics, the environmental influences on brain reserve and pathology, and the ability to actively compensate for the effects of pathology.

MEASURES OF RESERVE
For advocates of the idea of brain reserve, anatomic measures such as brain volume, head circumference, synaptic count, or dendritic branching are effective measures of reserve. Mounting evidence suggests that many of these measures are malleable over the lifetime, and influenced by life experience. Therefore, brain reserve may represent a summation of many aspects of life experience that are also thought to summate into CR.
Variables descriptive of lifetime experience are the most commonly used proxies for CR. These include measures of socioeconomic status, such as income or occupational attainment. Educational attainment has also been a widely used proxy for reserve, probably because it is relatively easy to ascertain. Degree of literacy might be a better marker for CR than number of years of formal education because it is a more direct measure of educational attainment (Manly et al., 2003). Finally, specific measured attributes have been used as indices of reserve, including IQ, and measures of various cognitive functions.
Education might also be a marker for innate intelligence, which may in turn be genetically based or a function of exposures. Some studies suggest that an estimate of IQ, or premorbid IQ, might actually be a more powerful measure of reserve in some cases (Alexander et al., 1997; Albert and Teresi, 1999). Still, education or other life experiences probably impart reserve over and above that obtained from innate intelligence. Studies have demonstrated separate or synergistic effects for higher educational and occupational attainment and leisure activities, suggesting that each of these life experiences contributes independently to reserve (Rocca et al., 1990; Evans et al., 1993; Stern et al., 1994, 1995a; Mortel et al., 1995). A prospective study showed that estimated IQ at age 53 was separately influenced by childhood cognition, educational attainment, and adult occupation (Richards et al., 2003). These observations stress that CR is not fixed; at any point in one's lifetime it results from a combination of exposures.
The simplest explanation for how CR forestalls the clinical effects of AD pathology does not posit that experiences associated with more CR directly affect brain reserve or the development of AD pathology. Rather, CR allows some people to better cope with the pathology and remain clinically more intact for longer periods of time. This has been the working assumption underlying the design and interpretation of many of the studies of Stern and colleagues. However, as mentioned above, several factors associated with CR may also have direct impact on the brain itself. There is a demonstrated positive relationship between IQ and brain volume, with significant correlations generally ranging between 0.25 and 0.40 (Willerman et al., 1991; Reiss et al., 1996; Witelson et al., 2006; Miller and Penke, 2007; Royle et al., 2013). A similar positive relationship has been found between hippocampal volume and IQ (Schumann et al., 2007). Thus, the child development literature suggests that intracranial brain volume and aspects of lifetime exposure are predictive of differential susceptibility to the effects of traumatic brain injury (Kesler et al., 2003). As previously stated, a more complete understanding of how CR influences TBI outcomes will have to integrate these complex interactions between genetics, the environmental influences on brain reserve and pathology, and the ability to actively compensate for the effects of pathology.
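To put the magnitude of such IQ-volume correlations in perspective, a correlation of r implies that only r squared of the variance is shared between the two measures, a quick calculation worth making explicit:

```python
# Shared variance implied by correlations in the 0.25-0.40 range:
# a correlation r corresponds to r**2 of variance shared between measures.
for r in (0.25, 0.40):
    print(f"r = {r:.2f}  ->  r^2 = {r * r:.0%} of variance shared")
```

That is, even at the upper end of the reported range, brain volume accounts for well under a fifth of the variance in IQ, which is why volume alone cannot stand in for reserve.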

EPIDEMIOLOGIC EVIDENCE FOR COGNITIVE RESERVE
The concept of reserve is relevant to any situation where the brain sustains injury. In addition, it will be argued that the concept of reserve should be extended to encompass variation in healthy individuals' performance, particularly when they must perform at their maximum capacity. Nevertheless, many of the studies discussed in this chapter will be framed around aging and AD, with the implicit assumption that the discussion has implications for brain damage in general. Both aging and AD have some unique advantages for examining disease-induced changes in brain function. Both are more likely than conditions such as focal stroke to affect overlapping anatomic sites across subjects, allowing better generalization, although there is considerable heterogeneity in both aging and AD. Both are also slowly progressive but have different trajectories that involve brain volume loss over time (Fjell et al., 2013), where atrophic loss has been used as an indicator of the severity of brain insult required before cognitive networks change. On the other hand, the potential for adaptation and recovery might vary between slowly progressive and acute pathologies, so studies of aging and AD may not always have direct implications for studies of other conditions.

In 1994, Stern et al. reported incident dementia data from a follow-up study of 593 community-based, nondemented individuals aged 60 years or older (Stern et al., 1994). After 1-4 years of follow-up, 106 became demented; all but five of these met research criteria for AD. The risk of dementia was increased in subjects with low education: the risk of developing dementia over the follow-up period was 2.2 times higher (relative risk [RR], 2.2) in individuals with less than 8 years of education than in those with more education (95% confidence interval [CI], 1.33-3.06). Similarly, risk of incident dementia was increased in those with low lifetime occupational attainment (RR, 2.25; 95% CI, 1.32-3.84).
Risk was greatest for subjects with both low education and low lifetime occupational attainment (RR, 2.87; 95% CI, 1.32-3.84).
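As a hedged illustration of where figures like these come from, a relative risk and its 95% confidence interval can be computed from a 2 x 2 table of exposure by incident dementia. The counts below are invented for the sketch; they are not the Stern et al. (1994) data.

```python
import math

def relative_risk(a, b, c, d):
    """RR and 95% CI for a 2x2 cohort table (illustrative counts, not real data).

    a/b: incident cases / non-cases in the exposed (e.g., low-education) group
    c/d: incident cases / non-cases in the unexposed group
    """
    risk_exposed = a / (a + b)
    risk_unexposed = c / (c + d)
    rr = risk_exposed / risk_unexposed
    # Standard error of log(RR) via the usual large-sample approximation
    se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, (lo, hi)

# Hypothetical cohort: 40 of 200 low-education vs 20 of 200 high-education
# participants become demented during follow-up.
rr, (lo, hi) = relative_risk(40, 160, 20, 180)
print(f"RR = {rr:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

A CI whose lower bound stays above 1.0, as in the epidemiologic studies cited above, is what licenses the conclusion that the exposure is associated with increased risk.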
To the extent that aspects of educational and occupational attainment reflect lifetime exposures that would increase CR, it would be logical to expect that environmental exposures later in life would also be beneficial. In a subsequent study, the same group assessed participation in a variety of leisure activities characterized as intellectual (e.g., reading, playing games, going to classes) or social (e.g., visiting with friends or relatives) in a population sample of nondemented elderly in New York (Scarmeas et al., 2001). During follow-up, subjects who engaged in more of these activities had 38% less risk of developing dementia. Interestingly, specific classifications of leisure activity (such as purely intellectual activities) did not provide better prediction than a simple summation of all the considered activities.
Many studies have examined the relation of CR proxy variables to incident dementia. A meta-analysis examined cohort studies of the effects of education, occupation, premorbid IQ, and mental activities on dementia risk (Valenzuela and Sachdev, 2005). A summary analysis was based on an integrated total of 29,279 individuals from 22 studies. The median follow-up was 7.1 years. The summary overall risk of incident dementia for individuals with high brain reserve compared to low was 0.54 (95% CI, 0.49-0.59, p < 0.0001), a decreased risk of 46%. Eight out of 33 datasets showed no significant effect, while 25 out of 33 demonstrated a significant protective effect. The authors found a significant negative association between incident dementia risk (based on differential education) and the overall dementia rate for each cohort (r = -0.57, p = 0.04), indicating that in negative studies there was a lower overall risk of incident dementia in the cohort.
There is also evidence for the role of education in age-related cognitive decline, with several studies of normal aging reporting slower cognitive and functional decline in individuals with higher educational attainment (Snowdon et al., 1989; Colsher and Wallace, 1991; Albert et al., 1995; Farmer et al., 1995; Butler et al., 1996; Christensen et al., 1997; Lyketsos et al., 1999; Chodosh et al., 2002). These studies suggest that the same education-related factors that delay the onset of dementia also allow individuals to cope more effectively with brain changes encountered in normal aging. In an ethnically diverse cohort of nondemented elders in New York City, increased literacy was also associated with slower decline in memory, executive function, and language skills (Manly et al., 2005).
In contrast to the studies above, in which greater reserve was associated with better outcomes, a series of studies of patients with AD have suggested that those with higher reserve have poorer outcomes. In a prospective study of AD patients matched for clinical severity at baseline (Stern et al., 1995a), patients with greater educational or occupational attainment died sooner than those with less attainment. Although at first these findings appear counterintuitive, they are consistent with the CR hypothesis. The hypothesis predicts that at any level of assessed clinical severity, the underlying pathology of AD is more advanced in patients with high CR than in those with low CR. This would result in the clinical disease emerging when pathology is more advanced, as suggested by the incidence studies reviewed above. This disparity in degree of pathology would be present at more advanced clinical stages of the disease as well. At some point the greater degree of pathology in the high-reserve patients would result in more rapid death. Although one study did not replicate this finding (Geerlings et al., 1997), a follow-up study by the same group using patients with more advanced dementia did (Geerlings et al., 1999). Higher measured CR has also been associated with more rapid cognitive decline in patients with AD (Stern et al., 1999; Scarmeas et al., 2006). The explanation for this finding is along similar lines: at some point AD pathology overrides the processes that mediate CR, and this point should arrive at an earlier stage of clinical severity in patients with higher CR because the underlying AD pathology is more severe.

IMAGING STUDIES OF COGNITIVE RESERVE
Several imaging studies of CR in AD used resting cerebral blood flow (CBF) as a surrogate for AD pathology (Friedland et al., 1985; McGeer et al., 1990; DeCarli et al., 1992). Our original functional imaging study found that, in patients matched for overall severity of dementia, the parietotemporal flow deficit was greater in those with more years of education (Stern et al., 1992). This observation was confirmed in a later PET study (Alexander et al., 1997). After controlling for clinical dementia severity, higher education was correlated with reduced cerebral metabolism in prefrontal, premotor, and left superior parietal association areas. The negative correlations are consistent with the CR hypothesis prediction that at any given level of clinical disease severity a subject with a higher level of CR should have greater AD pathology (i.e., lower CBF). These studies support the idea that although pathology was more advanced in patients with higher education, the clinical manifestations of the disease were comparable to those in patients with lower education and less pathology.
Presumably the patients with more education had more cognitive reserve. We made a similar observation for occupational attainment (Stern et al., 1995a, b), and later for leisure activities (Scarmeas et al., 2003). These findings were confirmed in a prospective study with subsequent neuropathologic analysis. Education was found to modify the association between AD pathology and levels of cognitive function: for the same degree of brain pathology there was better cognitive function with each year of education (Bennett et al., 2003).

Stern and colleagues (Stern et al., 2005; Stern, 2006, 2009) have suggested that the neural implementation of CR might take two forms: neural reserve and neural compensation. The idea behind neural reserve is that there is natural interindividual variability in the brain networks or cognitive processes that underlie the performance of any task. This variability could be in the form of differing efficiency or capacity of these networks, or in greater flexibility in the networks that can be invoked to perform a task. While healthy individuals may invoke these networks when coping with increased task demands, the networks could also help an individual cope with brain pathology. An individual whose networks are more efficient, have greater capacity, or are more flexible might be more capable of coping with the challenges imposed by brain pathology. "Neural compensation" refers to the process by which individuals suffering from brain pathology use brain structures or networks (and thus cognitive strategies) not normally used by individuals with intact brains in order to compensate for brain damage. "Neural compensation" is reserved for a situation where it can be demonstrated that the more impaired group is using a different network than the unimpaired group.

TRAUMATIC BRAIN INJURY AND RESERVE
How a brain responds to injury must be dependent on a host of environmental, genetic, and constitutional factors that come into play at the moment that injury occurs (Risdall and Menon, 2011). Recovery depends on how these anatomic, biochemical, and physiologic resources participate in responding to injury, both acutely and chronically (Mazzeo et al., 2009; Bigler and Maxwell, 2011). All biological systems are programmed for some level of repair and restitution following injury, disease and/or aging, including the CNS (Schwartz et al., 1999). Biological reparative and recovery responses relate in part to how regular maintenance occurs along with redundancy of systems, in other words how backup systems may take over function (Molina-Holgado and Molina-Holgado, 2010). The CNS has both a neural maintenance system and some redundancy mechanisms that participate in recovery. Elaborate maintenance and reparative systems activate when injury occurs, even in severe nonfatal TBI (Bach-y-Rita, 2003). Obviously, the more minor the insult, potentially the more readily and quickly reserve factors may respond to the injury and bring about restitution. Since both the severity of injury and the degree of lesion burden are factors associated with outcome (Bigler, 2005), reserve factors would positively contribute to outcome if reserve mechanisms provided some buffer to better withstand injury severity and/or the amount of tissue damage. Prediction of outcome is multifactorial, as more completely discussed in Chapter 29, but CR and BRC likely play a significant role. Vakil and colleagues (Levi et al., 2013; Sela-Kaufman et al., 2013) examined a variety of preinjury variables in 89 individuals with moderate-to-severe TBI assessed approximately 14 years postinjury. Using a factor analytic approach, a three-factor CR structure emerged that included premorbid intellectual ability, leisure activity, and socioeconomic status.
This group also examined preinjury emotional status in a subset of the overall TBI group, observing that higher preinjury levels on a neuroticism index were associated with poorer TBI outcome. Others have shown that a pre-existing psychiatric disorder is a vulnerability factor for TBI outcome (Silver et al., 2009). Salmond et al. (2006) suggested that retained intellectual functioning postinjury could be a marker of a higher preinjury level of intellectual ability, and likewise observed that higher postinjury IQ was a resilience factor against developing depression. Although the extent is unknown, substantial genetic factors likely underlie pathophysiologic responses to injury as well as reparative mechanisms (Jordan, 2007; Raivich and Makwana, 2007; McAllister, 2010; Kumar and Loane, 2012). Given these broad constitutional and genetic generalizations, how healthy and optimally functional a brain is at the time of injury likely relates to outcome, influenced by CR and BRC factors. For example, Sumowski et al. (2013a) showed that higher educational levels attenuated the negative effect of TBI on outcome when controlling for injury severity variables. The discussion that follows will first cover severity of injury as it relates to recovery, followed by parenchymal brain changes related to TBI severity. CR and BRC factors that have been related to TBI outcome will also be reviewed. Some basic aspects of brain injury pathology, as expressed in reduced total brain volume (TBV), need to be discussed in some detail to establish the hypothetical framework for how CR and BRC may influence TBI outcome.
"RECOVERY" AND TRAUMATIC BRAIN INJURY SEVERITY

Whitnall et al. (2006) examined long-term disability following traumatic brain injury in a large cohort of 475 individuals who sustained TBI of all levels of severity based on the Glasgow Coma Scale (GCS) score (Teasdale and Jennett, 1974). The majority of the sample had sustained mild TBI. While disability at 1 year was related to injury severity, 43% of all TBI patients had reached some level of "good recovery," defined as the patient resuming most normal activities but potentially having minor residual problems, based on Glasgow Outcome Scale (GOS) criteria (Wilson et al., 1998), the widely used standard for classifying TBI outcome. However, 5-7 years later, some who had initially achieved "good recovery" had deteriorated and dropped into a disabled category, while some who were initially classified as disabled moved into the "good recovery" category. Overall, only 47% had achieved "good recovery," including some with severe injuries. This same cohort has now been followed out to 12-14 years postinjury, with only 23% having what was described as "improved" status. Why the variability in TBI outcome, and why is there not a more linear relationship between injury severity and outcome? How does one patient with severe TBI recover, yet another TBI patient, with less severe injury, experiences residual disability? Why do some TBI patients with ostensibly the same level of brain injury recover while others do not? Why do some who "recover" later become disabled, when there has been no intervening variable other than age? Lingsma et al. (2010) approached this heterogeneity of injury and recovery by examining the univariate as well as the cumulative contribution of various factors in predicting TBI outcome.
TBI severity accounted for the most variance (about 22%), with lesion burden based on computed tomography (CT) findings next at about 15%, and demographic factors accounting for about 7%. Cumulatively these factors combined to account for only about 35% of the variance in TBI outcome, less than the sum of their individual contributions because the predictors overlap. If one now considers how to explain the remainder of the variance in predicting TBI outcome, as well as other issues related to reserve and resiliency to brain injury, it may be that CR and BRC help explain the heterogeneity of TBI outcome.
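Why do contributions of roughly 22%, 15%, and 7% combine to only about 35%? Because the predictors are correlated and share variance. A minimal simulation makes the point; the data below are synthetic stand-ins for correlated predictors such as injury severity and CT lesion burden, not the Lingsma et al. (2010) data.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept term."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Two predictors that share a common "overall injury" factor, so they
# overlap in the outcome variance they explain (synthetic data).
rng = np.random.default_rng(0)
n = 2000
shared = rng.normal(size=n)
severity = shared + 0.5 * rng.normal(size=n)
lesion_burden = shared + 0.5 * rng.normal(size=n)
outcome = severity + lesion_burden + rng.normal(size=n)

r2_sev = r_squared(severity, outcome)
r2_les = r_squared(lesion_burden, outcome)
r2_both = r_squared(np.column_stack([severity, lesion_burden]), outcome)
print(f"severity alone: {r2_sev:.2f}, lesions alone: {r2_les:.2f}, "
      f"combined: {r2_both:.2f} (less than the sum {r2_sev + r2_les:.2f})")
```

The combined model always explains at least as much as either predictor alone, but less than the sum of the individual contributions whenever the predictors overlap, which is the pattern reported for TBI outcome predictors.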

INJURY SEVERITY AND OVERALL LESION BURDEN

Maxwell et al. (2010) measured brain weight at postmortem examination in TBI patients with moderate-to-vegetative outcome who survived months to years postinjury and compared this to brain weight in an age-matched comparison group who died from non-neurologic causes. Those with moderate-to-severe TBI had an overall volume loss of approximately 113 cc, with the vegetative TBI patients losing on average 167.7 cc compared to controls. Since patients in the mild-to-moderate range of TBI, which combined represent 80-90% of all injuries, are most likely to survive, there are no postmortem studies that have systematically measured brain weight in the mild-to-moderate range of TBI; the impact of injuries at this level of severity therefore has to be inferred from quantitative in vivo neuroimaging studies, in which total brain volume (TBV) of TBI patients is compared to that of age- and sex-matched controls. Blatter et al. (1997) showed that, on average, patients with complicated mild (i.e., GCS in the 13-15 range, with some abnormality on neuroimaging) to severe TBI who recovered sufficiently to undergo inpatient rehabilitation had an estimated volume loss of approximately 50 cc by at least 6 weeks postinjury. Similarly, MacKenzie et al. (2002) reported that a TBI group ranging from mild (the majority of whom had some form of positive imaging findings) to moderate injury severity experienced, on average, a volume loss of 33 cc compared to controls. An example of atrophic change related to injury severity and brain volume loss in TBI is shown in Figure 43.1, where a control coronal section of the brain is compared to those of young adults with mild, moderate, and severe TBI. Inspection of these images clearly demonstrates volume loss as injury severity increases, as reflected in ventricular prominence, increased size of cortical sulci, and decreased hippocampal size. Based on the quantitative studies cited above, substantial parenchymal volume loss occurs in TBI, proportional to the severity of injury. Importantly for brain reserve, Maxwell et al. (2010) showed that the reduced volume in TBI was associated with neuronal loss and greater disability. Therefore, atrophy from TBI is, in part, a reflection of neuronal loss, which has some relationship with TBI outcome (see also Tate et al., 2011). In adults (Wilde et al., 2006) as well as children (Ghosh et al., 2009), studies have shown that with each point drop in GCS score there is a statistically significant increase in the likelihood of whole-brain atrophy. Other metrics of injury severity, such as post-traumatic amnesia (PTA), duration of coma, and the lesion burden of day-of-injury neuroimaging abnormalities, also relate to the development of chronic atrophic changes and loss of brain volume (see also Ch. 17).
All of these findings support the direct role of injury severity in the amount of traumatically induced tissue loss or lesion burden, which would directly influence BRC. Scheibel et al. (2009), using functional magnetic resonance imaging (fMRI) techniques to assess activation patterns for similar levels of cognitive performance in TBI patients with varying levels of injury severity, demonstrated that as injury severity increased, so did the extent of activation: to maintain the same level of cognitive performance as less severely brain-injured subjects or orthopedically injured controls, the severely injured required greater, more diffuse cerebral activation. Reserve issues may come into play depending on how recruitment and activation occur to marshal the cognitive processes that facilitate compensation.
CR and BRC also interact with lesion burden, as shown by Grafman et al. (1988), who examined Vietnam War veterans with penetrating missile wounds. Preinjury measures of intellectual performance were most predictive of postinjury intellectual functioning, followed by lesion size and then lesion location. These same veterans have now been followed for 40 years, with higher levels of preinjury intellectual functioning still remaining predictive of preserved cognitive ability later in life (Raymont et al., 2011). Complex neurodevelopmental and genetic factors participate in the emergence of intellectual abilities (Toga and Thompson, 2005), and this in some way must establish a substrate for network recovery following TBI (Castellanos et al., 2011). The longitudinal outcome studies of Vietnam veterans clearly support a role for both CR and BRC in the long-term outcome following TBI (Raymont et al., 2011).

Fig. 43.1. Coronal T1 MRI at the level of the hippocampus comparing the appearance of the normal control brain (black arrow points to hippocampus) with those of patients with mild TBI (Glasgow Coma Scale (GCS) = 13), moderate TBI (GCS = 9), and severe TBI (GCS = 3). The mild range of TBI is typically defined by a GCS score of 13-15, moderate by 9-12, and severe by 8 and below. Note the reduction in hippocampal size with a concomitant increase in the size of the temporal horn of the lateral ventricle with increasing injury severity. The arrow in the GCS = 3 patient points to the dilated temporal horn, with the atrophic hippocampus in the floor of the temporal horn.

SIZE-FUNCTION RULE AND NEURAL CONNECTIVITY
One facet of BRC has to do with the structural integrity of the brain during aging and in response to injury or disease. Numerous studies have shown that, within certain limits, there is an optimal size for specific brain regions that participate in certain functions. For example, positive size-function relationships have been reported between the hippocampus and memory (Bigler et al., 2002), frontal regions and executive function (Antonova et al., 2004; Premkumar et al., 2008), the olfactory bulb and smell discrimination (Buschhuter et al., 2008; Haehner et al., 2008), and visual cortex and visual discrimination (Hart et al., 2008), to name but a few. All of these relationships are positive, but their magnitude is consistently modest at best, likely due to distinct individual differences as well as a restricted range of any given brain structure wherein size becomes optimized with its function and connectivity with other brain regions. Obviously, there are many factors that contribute to the "health" of a neural region, its size, and its functional relationships. Size is certainly not explanatory of all brain-behavior relationships and should be viewed as only one dimension within the complex of brain-behavior relationships (Roth et al., 2010). If an optimal brain size-connectivity relationship reflects maximal brain-behavior correlates, then preinjury variables that relate to size-function, such as TBV, likely play a role in TBI outcome. This represents one theme of this review.
The connectivity between brain regions represents a major factor in how a given function emerges from a neural network, because no single brain region functions in isolation. Sporns (2011) reviews evolutionary aspects as well as the economy of connectivity in regulating brain function. From a connectivity standpoint, efficiency is likely maximized when the best output is achieved over the minimal distance (and hence with the greatest processing speed) and with the fewest connections. Schiff (2006, 2010) speculated that recovery of consciousness following brain injury relates to how networks adapt and reconnect, and that preinjury cognitive reserve relates to efficient connectivity. Efficiency of functional connectivity may trump size because, no matter what optimal size a given structure may reach, it has to be effectively and efficiently connected with other brain regions to bring about function. In many respects, TBI may be considered a disorder of disrupted connectivity; this can be readily appreciated in Figure 43.2, where normal diffusion tensor imaging (DTI) tractography depicting aggregate white matter pathways in the intact, typically developed brain is compared to that of a patient with moderate-to-severe TBI, in whom a lessening of white matter connectivity is evident (Maas and Menon, 2012). DTI analyses in non-TBI aging and dementia reveal interesting relationships with cognitive reserve (Arenaza-Urquijo et al., 2011), and there is no reason to suspect that similar factors do not apply to the traumatically injured brain.
A number of studies have now shown that the degree of network damage, both acutely and chronically, predicts outcome from TBI (Kraft et al., 2012; Shumskaya et al., 2012; Van Horn et al., 2012; Caeyenberghs et al., 2014). There is still much basic neuroimaging research to be done on how best to define networks (Habeck et al., 2012), but the fact that, to date, operationally defined networks derived from a variety of methods are proving to be potent predictors of TBI outcome supports the fruitfulness of this approach in assessing the TBI patient (Stiles, 2012). Schiff's (2006, 2010) speculations about CR and recovery of consciousness have implications for connectivity of the upper brainstem with higher centers involved in cognition. From the perspective offered by Schiff, CR variables may either bestow and/or be a by-product of brainstem networks that functionally reconnect with the cerebrum. Some support for this comes from Dawson et al. (2007), who speculated that in mild TBI, where post-traumatic amnesia (PTA) can be assessed in a structured fashion, PTA may be influenced by CR variables. Indeed, both IQ and education were related to shorter PTA. Salmond et al. (2006) also observed that CR influenced affective outcome following TBI, irrespective of lesion volume or other morphometric findings of pathologic change. In TBI patients otherwise matched in terms of lesion and demographic factors other than IQ, subjects with higher premorbid IQ estimates exhibited significantly less depression. In contrast, preinjury history of psychiatric disorder represents a vulnerability factor for developing postinjury depression (Stein and McAllister, 2009).
During development, primary pathways are typically established that regulate a particular function. These take ascendancy over parallel pathways, which may retain some function but are redundant (Blumberg et al., 2009). These redundant pathways probably coexist with the primary pathway and may be activated as part of recovery (Bach-y-Rita, 1990; Page et al., 2004). So the connectivity issue raised above concerns not only the primary connectivity pathways of the brain, but also secondary and tertiary pathways that may be tapped when the primary pathway is damaged. To use a highway analogy, there may be numerous routes to a particular destination, but typically only one spans the shortest distance, permitting the quickest travel time.
Developmental and lifespan relationships between brain structures and function likely follow similar neural transmission rules (Van Der Werf et al., 2001; Kochunov et al., 2010): the shortest pathway, with the fewest synaptic connections needed to perform optimally, may be the most efficient neural pathway. In fact, speed of processing is one of the important factors in maximizing the size-function rule, and this certainly relates to functional connectivity. For example, Haier et al. (2004) demonstrated a distributed neural basis of intellectual performance related to volume, connectivity, and speed of processing (Colom et al., 2007; Jung and Haier, 2007). Similarly, Barbey et al. (2012) showed that deficits on traditional psychometric measures of intellectual functioning (i.e., the Wechsler Adult Intelligence Scale (WAIS)) and executive functioning (i.e., the Delis-Kaplan Executive Function System (D-KEFS)) related to lesion locations that disrupted networks, in particular the superior longitudinal and arcuate fasciculi.
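The efficiency logic described above is commonly quantified in connectomics as global efficiency, the mean inverse shortest-path length across all node pairs (Sporns, 2011). The sketch below is purely illustrative: the two toy adjacency structures are hypothetical and are not derived from any imaging data; the point is simply that removing connections lowers efficiency even when every region remains reachable.

```python
from itertools import combinations

def shortest_path_lengths(adj):
    """All-pairs shortest path lengths (breadth-first search) for an
    unweighted graph given as {node: set(neighbors)}."""
    dist = {}
    for src in adj:
        seen, frontier, d = {src: 0}, [src], 0
        while frontier:
            d += 1
            frontier = [n for u in frontier for n in adj[u] if n not in seen]
            for n in frontier:
                seen.setdefault(n, d)
            frontier = [n for n in frontier if seen[n] == d]
        dist[src] = seen
    return dist

def global_efficiency(adj):
    """Mean inverse shortest-path length over all node pairs;
    unreachable pairs contribute zero."""
    dist = shortest_path_lengths(adj)
    pairs = list(combinations(list(adj), 2))
    return sum(1.0 / dist[a][b] for a, b in pairs if b in dist[a]) / len(pairs)

# Hypothetical "intact" network: a 5-node ring plus one shortcut edge (0-2)
intact = {0: {1, 2, 4}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {0, 3}}
# Hypothetical "injured" network: the shortcut removed (simulated disconnection)
injured = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {0, 3}}

print(global_efficiency(intact) > global_efficiency(injured))  # True
```

Even though every node remains reachable after the simulated disconnection, path lengths grow and efficiency falls, mirroring the idea that TBI degrades function by lengthening or severing routes rather than by deleting regions outright.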
From a connectivity perspective, multiple sclerosis (MS), with its selective damage to myelinated axons, is the prototype white matter disease in which brain volume reduction is associated with deficits in speed of processing because of diminished white matter integrity (Lazeron et al., 2006). In fact, the case can be made that understanding axonal pathology, the effects of disconnection, and the adaptation that occurs in MS has relevance to understanding other disorders that predominantly affect white matter, such as TBI (Medana and Esiri, 2003; Filippi and Rocca, 2008). The observations in MS, as well as in other diseases that affect brain volume and neural connectivity, have relevance to the question of reserve and outcome in TBI (Loitfelder et al., 2011). Since gray and white matter volume are positively interrelated and both relate to TBV, TBV also represents a proxy for underlying whole-brain white and gray matter integrity. As already discussed, loss of brain volume in TBI (i.e., reduced TBV) is proportional to the severity of injury, but TBI disproportionately damages white matter, with volume loss associated with slowed psychomotor speed and impaired memory and attention (Vannorsdall et al., 2010). The degree to which white matter connectivity is disrupted in TBI likely determines the extent of cognitive impairment, and white matter volume loss is proportional to TBV reduction (Bigler et al., 2010b).
The term traumatic axonal injury (TAI) is often used to describe the selectivity of white matter damage in TBI as well as the disconnection that trauma brings about (Buki and Povlishock, 2006). While the etiologic mechanisms of white matter damage clearly differ between TBI and MS, studies that have examined CR variables in MS demonstrate that premorbid cognitive factors relate to disease outcome. For example, Sumowski et al. (2009) used a word-reading proxy of premorbid intelligence to estimate CR in MS patients and found that the performance of those with high premorbid ability was closer to that of control subjects than the performance of those with less CR. Furthermore, Loitfelder et al. (2011) have shown that cognitive performance in MS patients relates to how damaged networks compensate, which likely involves features of both BRC and CR. Educational level in MS, viewed from the CR perspective as "intellectual enrichment," lessens the effect of brain atrophy on memory and learning (Sumowski and Leavitt, 2013; Sumowski et al., 2013a, b).
Based on the above discussion, if an optimal brain size-connectivity relationship reflects maximal brain-behavior correlates, then preinjury variables that relate to size-function-connectivity, such as TBV, likely play a role in recovery from TBI. Lastly, from a BRC perspective, optimal size as reflected in brain volume during the aging process probably bestows some resiliency against injury and disease. For example, larger temporal lobe volume, including hippocampal volume in the elderly, has been found to potentially mitigate β-amyloid deposition, a key feature in the development of Alzheimer's disease (Chetelat et al., 2010). In a unique birth cohort from Scotland, the Lothian Birth Cohort (Deary et al., 2007), Royle et al. (2013) used intracranial volume obtained when participants were approximately 73 years of age to estimate maximal brain size in youth, when this cohort was first assessed with cognitive testing. By comparing current brain volume at age 73 to estimated volume when first assessed, the relative relation between cognitive ability and brain volume when young and actual brain volume when old could be ascertained. Royle et al. observed that cognitive ability in youth was "a strong predictor of estimated prior and measured current brain volume in old age" (p. 2726). Furthermore, they demonstrated that the relation of general cognitive ability (basically measures of IQ) to brain volume, approximately 0.26, holds throughout the lifespan. If brain injury becomes a life event that reduces brain volume or the volume of key brain structures such as the temporal lobe or hippocampus, this would constitute a negative effect on reserve.

BRAIN ATROPHY
Since the degree of cortical atrophy relates modestly to outcome (Schonberger et al., 2009; Warner et al., 2010a, b; Irimia et al., 2011; Tate et al., 2011; Tam et al., 2013), the BRC hypothesis would posit that, if size is related in some fashion to function, then somewhat greater size prior to injury may afford some protection against injury (Staff, 2012). Furthermore, since injury severity relates to TBV loss following TBI, there are complex interactions of age, injury severity, and the development of cortical atrophy following TBI. Also, since there are age-related differences in brain volume, and normal aging results in brain volume reductions, additional complex interactions between normal volumes, age, and the injured brain occur (Bigler, 2009). If atrophy reflects neuronal loss, a larger preinjury volume potentially could withstand some degree of neuronal loss yet retain a level of residual intact neurons postinjury similar to that of a noninjured, smaller brain. Hypothetically, the number of viable neurons left after injury may be the basis for recovery; if this is true, possessing a greater number of neurons preinjury may benefit recovery. TBI has been deemed a risk factor for onset of dementia in later life (Plassman et al., 2000; Smith et al., 2013), and one of the factors that likely relates to this risk is the loss of brain volume in TBI. For example, using identical image quantification techniques, Bigler et al. (2000) showed that in an Alzheimer sample the average volume difference between age-matched controls and those with Alzheimer's disease was approximately 125 cc. Note that this is similar to the volume loss noted by Maxwell et al. (2010) in TBI patients who died but had recovered only to a moderate-to-severe level of disability before dying. Blatter et al. (1997), in assessing TBV in TBI patients with injury severity spanning the mild-to-severe range, observed approximately a 50 cc volume loss. Raymont et al. (2008) assessed a cohort of Vietnam veterans with predominantly penetrating head injury some 30 years postinjury, showing that, while not progressing to the point of meeting criteria for dementia, those who had sustained a brain injury earlier in life showed an exacerbated decline in general intelligence compared to non-TBI controls monitored over the same period. Since there is an age-mediated annual TBV loss and the majority of head injuries occur in the young, it has been speculated that as the TBI individual ages, the mixture of age effects with the prior volume loss from the brain injury predisposes the TBI patient to cross the threshold leading to dementia at an increased frequency and earlier age (Bigler, 2009). Mild progressive atrophy has been documented in prospective longitudinal studies of mild TBI (Ross et al., 2012; Zhou et al., 2013); therefore, the potential exists over the lifespan that even a mild TBI could be a significant event that, later in life, influences cognitive decline (Bigler, 2013a).
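The passive threshold logic described here reduces to simple arithmetic: the same absolute volume loss leaves one brain above and another below a fixed clinical threshold. The sketch below is a hypothetical illustration only; the threshold and preinjury volumes are invented round numbers, with the ~125 cc loss borrowed from the Alzheimer comparison cited above.

```python
# Hypothetical passive-reserve (threshold) illustration. THRESHOLD_CC and
# the two preinjury volumes are invented values, not normative data.
THRESHOLD_CC = 1150   # assumed volume below which clinical deficits are expressed
LOSS_CC = 125         # same absolute injury-related loss applied to both brains

results = {}
for label, preinjury_cc in [("larger brain", 1350), ("smaller brain", 1250)]:
    residual = preinjury_cc - LOSS_CC
    impaired = residual < THRESHOLD_CC
    results[label] = (residual, impaired)
    print(f"{label}: {residual} cc residual, below threshold: {impaired}")
```

With identical pathology, only the smaller brain crosses the hypothetical threshold, which is the core claim of the passive BRC model.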
Recently, because of the ease of automated image analysis combined with longitudinally conducted research, aging studies with repeat MRI scans in older individuals have shown that, using various metrics of white matter integrity from DTI to volumetric methods, points of demarcation can be identified in those who transition from age-typical effects to mild cognitive impairment and subsequently convert to Alzheimer's disease (Carlson et al., 2008; Price et al., 2012; Silbert et al., 2012; Zhuang et al., 2012). Since the neuropathology of TBI can be considered within the context of white matter damage and white matter burden, the influence of TBI may be to increase the burden of white matter damage, which accelerates the effects of normal aging. Till et al. (2008) have shown that in some TBI patients deterioration in cognitive functioning occurs after a period of recovery, and that this postrecovery decline may relate to a variety of secondary neuropathologies initiated by the original injury but which take time to evolve (Gavett et al., 2011). Similarly, Farbota and colleagues (2012a, b) have shown that brain volume loss continues to occur out to 4 years postinjury. Furthermore, Johnson et al. (2011, 2013) have related some of the progressive volume loss to neuroinflammatory effects on white matter. This may set the stage for long-term white matter pathology that modifies the potential for recovery (Bigler, 2013b; Butler et al., 1996). Some of this is akin to what was discussed in the introduction from the Nun Study, where cerebral atrophy and underlying neuropathology may be present but their effects not expressed (or not fully expressed) in some individuals because of factors associated with CR and/or BRC. The mere presence of atrophy may not be the major factor in TBI outcome; what matters is the underlying neuropathology associated with reduced brain volume and how adaptive mechanisms reconstitute connections and neural function.
Given these complex associations and interactions among the neuropathologic underpinnings of TBI outcome, how does preinjury brain volume relate to TBI outcome?
Brain reserve, brain size, and traumatic brain injury

Given the above discussion of TBI and resultant brain atrophy, the BRC hypothesis would posit that, for healthy individuals with no neurodevelopmental disorder, larger brain size would provide some advantage at the time of injury, and this size factor might act as a buffer against the effects of trauma-induced atrophy.
As already mentioned, brain size is modestly but positively correlated with IQ. Bigler et al. (1999) examined a group of TBI patients, mostly with moderate to severe TBI, dividing outcome by whether the postinjury full-scale IQ score was above or below 90. As a group, the below-90 TBI patients had a significantly smaller total intracranial volume (TICV). Since TICV becomes invariant after childhood but is highly correlated with TBV (Lange et al., 2010), it is a marker for premorbid brain size. One inference from this study was that larger premorbid brain volume may have afforded some buffer against the effects of brain injury on cognitive performance. Kesler et al. (2003) undertook an innovative study in which preinjury high school standardized achievement testing was obtained in mostly moderate-to-severe TBI patients who had undergone an inpatient rehabilitation treatment program. Again, those with the lowest postinjury IQ scores also had the lowest TICV values and, potentially most importantly from a reserve standpoint, demonstrated the greatest reduction in postinjury IQ compared to estimated preinjury IQ. Acute lymphoblastic leukemia (ALL) has a predilection for damaging white matter as the disease progresses. Although not a TBI per se, its onset occurs in otherwise healthy individuals and therefore afforded Kesler and colleagues (2010) the opportunity to explore CR factors in relation to postonset ALL brain volumes. Maternal education as a CR proxy was related to both brain and neuropsychological outcome.
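Because TICV is invariant after childhood, studies of this kind often express current brain volume relative to TICV to index atrophy against premorbid capacity. The sketch below shows one common ratio-based approach; the patient values and the assumed healthy TBV/TICV ratio of 0.90 are hypothetical, chosen for illustration only.

```python
def tbv_to_ticv_ratio(tbv_cc: float, ticv_cc: float) -> float:
    """Current total brain volume as a fraction of total intracranial
    volume; lower values suggest more atrophy relative to the invariant
    premorbid skull capacity."""
    return tbv_cc / ticv_cc

def estimated_loss_cc(tbv_cc: float, ticv_cc: float,
                      healthy_ratio: float = 0.90) -> float:
    """Rough volume-loss estimate under an ASSUMED healthy TBV/TICV
    ratio of 0.90 (an illustrative value, not a normative one)."""
    return healthy_ratio * ticv_cc - tbv_cc

# Hypothetical patient: TICV of 1500 cc, current TBV of 1260 cc
ratio = tbv_to_ticv_ratio(1260, 1500)   # current ratio, ~0.84
loss = estimated_loss_cc(1260, 1500)    # ~90 cc below the assumed expectation
```

The appeal of this normalization is that the denominator is a premorbid quantity, so the ratio separates "always had a small brain" from "lost volume after injury," which a raw TBV measurement cannot do.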

Hippocampal atrophy
The hippocampus is particularly vulnerable to brain injury (Bigler and Maxwell, 2011); as demonstrated in Figure 43.1, it is one of the brain structures that consistently exhibits atrophic changes. Interestingly, in Alzheimer's disease, larger hippocampal volume seems to bestow some protective influence on when someone develops dementia (Chetelat et al., 2010). If the preinjury size of a given brain structure such as the hippocampus reflects the "health" of that structure, some optimal size may bestow some level of resiliency when injured. Given the obvious fact that hippocampal atrophy increases with severity of injury, it is likely that preinjury size would relate to reserve and outcome. The model would predict that larger hippocampi may afford some protection from injury and result in better outcome. This, however, has not been empirically tested.

PREMORBID INTELLECTUAL AND ACADEMIC ABILITY AND COGNITIVE OUTCOME FROM TRAUMATIC BRAIN INJURY
Neuropsychological studies have suggested that well-learned to overlearned abilities such as reading provide a metric for estimating premorbid ability (Lezak et al., 2012). Indeed, there is a large literature that has examined these types of estimates to establish premorbid ability in reference to outcome (Barnett et al., 2006). For example, Green et al. (2008) observed that higher levels of estimated premorbid IQ were associated with higher performance on speed-of-processing tasks at 2 and 12 months postinjury. How well these factors are actually predictive is debatable, in part because they are based on postinjury inferences about preinjury ability. For example, Fuentes et al. (2010), in a pediatric sample, did not find support for using these variables in predicting outcome. One reason may be that postinjury assessment of academic ability is likely affected by the brain injury and brain injury severity (Mathias et al., 2007). In studies where known preinjury academic performance measures are established, partial support for cognitive reserve is found. For example, Farmer et al. (2002) compared children with documented learning problems prior to injury to an otherwise matched cohort of TBI children as well as a control sample. The group with prior learning problems displayed greater deficits in memory function than the matched TBI group without a history of any learning difficulty. Fay et al. (2010) examined retrospective parent ratings of a child's cognitive ability prior to sustaining a mild TBI, finding that lower estimated levels of premorbid cognitive ability were associated with more cognitive symptoms and problems postinjury. McKinlay et al. (2010) Bartrés-Faz, 2013) did. Clearly, cerebrovascular disease represents a different etiology than TBI, but part of the brain injury pathology from TBI is reflected in damage not only to brain parenchyma but also to the vasculature (Bigler and Maxwell, 2011).
In those with vascular disease, lower education modifies outcome following stroke (Elkins et al., 2006), as do lower premorbid intellectual function and occupational attainment in cardiovascular disease affecting cognition (Ropacki et al., 2007). This may be another instance where BRC and CR interact, because there is high genetic variability in brain vascularization and in the extent of arteriole development, which could aid in recovery from brain injury (Wang et al., 2010). However, the role of vascular injury induced by TBI and the influence of CR and BRC on it have not been explored.

PREMORBID NEUROLOGIC AND NEUROPSYCHIATRIC BURDEN AND COGNITIVE OUTCOME FROM TRAUMATIC BRAIN INJURY
One assumption in brain reserve theory is that the greater the neurologic burden a particular brain sustains prior to the onset of injury or illness, the more vulnerable the brain is to injury. Ropacki and Elias (2003) examined two groups of TBI patients matched in other ways but differing on whether there was a premorbid history of alcoholism, mental illness, prior neurologic disorder, or drug abuse. The group with preexisting neurologic conditions performed significantly worse on timed measures such as Trail Making, Block Design, and Digit Symbol, along with the Stroop Color-Word test. These tests are commonly affected by TBI (Lezak et al., 2012) and are highly dependent on white matter connectivity, which is likewise vulnerable in TBI (Bigler et al., 2010a). Wilde et al. (2004) also found that preinjury substance abuse was associated with greater postinjury brain atrophy. Preinjury neuropathology may either predispose the brain to be less resilient following TBI or mean that "reserve" factors have already been spent in response to these earlier insults and are simply not available to assist in recovery from the TBI. Particularly evident in the sports concussion literature is that having a prior head injury creates an increased likelihood of a second head injury and greater sequelae than would otherwise occur had the first injury not happened (Cantu, 2003). This phenomenon is also well demonstrated in animal models (Weber, 2007; Prins et al., 2010). Thus, second head injury also supports the theory that neurologic burden at the time of injury increases the likelihood of neurocognitive and neurobehavioral deficits after subsequent head injuries. In other words, having a prior brain injury reduces the brain's ability to withstand injury while simultaneously increasing the likelihood of neurobehavioral and neurocognitive sequelae. Additional details on sports concussion and potential long-term effects are covered in Chapter 10.
McAllister and Stein (2010) review how history of prior neuropsychiatric disorder potentially interfaces with TBI, resulting in greater likelihood for residual impairments associated with the TBI. From a reserve standpoint, this likely involves both CR and BRC principles. TBI disproportionately damages frontal and temporal regions of the brain (Bigler, 2007a) and this is readily observed in Figure 43.3. The all-important frontotemporolimbic regulatory pathways for emotional and cognitive control are housed within these networks and are particularly vulnerable to injury following TBI (McAllister and Stein, 2010). Any pre-existing perturbation within this system, combined with the effects of brain injury, likely enhances the adverse effects of the brain injury. Likewise, preinjury resiliencies in such neural systems may have protective influences. Based on a model like this, Capizzano et al. (2010) speculate on how pre-and post-TBI factors relate to psychiatric morbidity following injury.
Nordström et al. (2013; see also Newcombe and Menon, 2013) have shown that preinjury lower cognitive functioning, a history of intoxications, and lower socioeconomic status were all risk factors for mild TBI in men. Such findings imply that CR and BRC variables may also influence who sustains a brain injury in the first place.

AGE
Brain atrophy is a consequence of normal aging, the so-called end product of natural cell death (Blatter et al., 1995). Conservative estimates indicate that, on average, <0.3% per annum volume loss occurs with normal aging until mid-life, at which time a more rapid volume decline begins that accelerates with older age (Sluimer et al., 2009; Beckett et al., 2010). However, aging is also associated with increased rates of a host of other age-related disorders, including cerebrovascular disease (Prins et al., 2005), which place additional burdens on cognitive functioning. Some have assumed that age represents a "reserve" factor, in that the younger brain has higher levels of reserve (more brain volume, healthier white matter connections, etc.). In fact, several studies support the notion that younger age, all other injury variables being equal, is associated with better outcome following TBI (Goldstein and Levin, 2001; Green et al., 2008). Animal models of TBI indicate greater pathologic consequences and less recovery with the same level of injury in older subjects, supporting the idea of diminished reserve with the aging process (Marklund et al., 2009). BRC would predict that less reserve comes with older age, and the few studies that have attempted to examine this appear to support that contention.
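The interaction between an early volume loss and ongoing age-related decline, sketched conceptually above and in Bigler (2009), can be illustrated with a toy year-by-year projection. All numbers here are assumptions for illustration (annual rates loosely consistent with the <0.3%/year figure cited above; the starting volume, injury loss, and "dementia-risk" threshold are invented), not empirical parameters.

```python
def project_tbv(start_cc, injury_age=None, injury_loss_cc=0.0,
                start_age=25, end_age=85,
                rate_young=0.002, rate_old=0.005, midlife=50):
    """Toy projection: slow percentage decline before mid-life, faster
    decline after, with an optional one-time trauma-related volume loss."""
    vol, trajectory = float(start_cc), {}
    for age in range(start_age, end_age + 1):
        if age == injury_age:
            vol -= injury_loss_cc          # one-time TBI-related loss
        vol *= 1 - (rate_young if age < midlife else rate_old)
        trajectory[age] = vol
    return trajectory

THRESHOLD_CC = 1000  # invented "dementia-risk" threshold, for illustration only

def first_age_below(trajectory, threshold=THRESHOLD_CC):
    """Earliest age at which projected volume falls below the threshold."""
    return next((age for age, v in trajectory.items() if v < threshold), None)

no_tbi = project_tbv(1300)
with_tbi = project_tbv(1300, injury_age=30, injury_loss_cc=50)
# Under these assumptions the injured trajectory reaches the threshold
# earlier, or the uninjured one never reaches it within the window.
print(first_age_below(with_tbi), first_age_below(no_tbi))
```

The qualitative point, not the specific numbers, is what matters: a modest early loss shifts the whole trajectory downward, so normal aging carries the injured brain across any fixed threshold sooner.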
Cerebral atrophy that is out of proportion to age therefore sets the stage for potentially earlier onset of dementia and mortality. Olesen et al. (2011), in a 20-year survival study, showed that those subjects with the greatest amount of cerebral atrophy as a group had decreased survival after age 85. Potentially most important to TBI, because of the greater selective white matter damage in TBI, are the findings of Skoog et al. (2012), showing that head size modified the impact of white matter lesions on dementia, with larger size playing some protective role. While neither of these studies specifically examined TBI, both have implications for the TBI patient, suggesting that the least amount of trauma-induced cerebral atrophy will be associated with more resilient aging in the individual who has sustained a TBI, and that optimal brain size may afford some protection even against selective white matter damage.

TRAUMATIC BRAIN INJURY AND DEMENTIA
The relationship of dementia to TBI has been alluded to in several statements above (see also Moretti et al., 2012; Bigler, 2013b) and is discussed more fully in Chapter 44. Undoubtedly, the most well-studied aspects of reserve theories are those that relate to the development of dementia. TBI is a risk factor for development of dementia later in life (Plassman et al., 2000; Bigler, 2009, 2013b; Smith et al., 2013). As such, many of the CR and BRC findings shown to be important in indicating who may develop dementia probably also apply to the TBI patient. Of particular relevance is that the underlying neuropathology of TBI involves β-amyloid deposition (Johnson et al., 2010, 2012), occurring even in mild TBI (McKee et al., 2010). Similarly, much has been written about the detrimental effects of apolipoprotein E4 (ApoE4) on aging, on the onset and progression of Alzheimer's disease, and on its potential role in TBI (Mahley and Huang, 2012; Liu et al., 2013; Shi et al., 2014). Possession of the E4 allele in the individual with TBI may increase the likelihood of greater initial adverse effects from the injury (McAllister, 2012) and set the stage for earlier progression to Alzheimer's disease later in life (Sivanandam and Thakur, 2012; Rao et al., 2014). These specific relations of APOE genotype and other genetic factors to TBI outcome are discussed in Chapter 3. The issue of hippocampal atrophy and brain reserve has already been raised in this review, with the visible effects of increased atrophy associated with TBI shown in Figure 43.1. Figure 43.4 shows another interesting dimension of hippocampal atrophy as it may relate to the development of dementia. In this figure, a TBI patient with hippocampal atrophy is compared to a patient with Alzheimer's disease, a mild cognitive impairment (MCI) patient with heavy amyloid burden, and an MCI patient without much amyloid burden.
What is evident in this illustration is that the Alzheimer's disease patient, the TBI patient, and the MCI patient with heavy amyloid burden all exhibit substantial hippocampal atrophy. It is likely that the size of the hippocampus plays an important role in CR and BRC factors associated with TBI outcome and aging, as well as in the development of dementing illnesses associated with a prior history of TBI. However, the reduced size of the hippocampus in TBI likely reflects disruption of an entire network, because of the extensive whole-brain afferent-efferent connections of the hippocampus. It may be that in TBI hippocampal atrophy is also a marker for whole-brain network damage.
If the intactness of neural networks postinjury is predictive of TBI outcome, the amount of network damage may also be a key factor in brain reserve. Network damage as determined by DTI and resting-state fMRI is predictive of Alzheimer's disease and its progression (Adriaanse et al., 2014; Liu et al., 2014; Sheline and Raichle, 2013). If TBI reduces the network, or its functional resiliency, that may prepare the ground for changes later in life that speed the onset of decline and the transition to dementia. The overall burden of white matter damage, especially if low-grade neuroinflammatory changes occur over time, may set the stage for dementia in later life following a TBI earlier in life (Bigler, 2013b, c).
Given that the most frequent TBI is a mild or concussive TBI (Coronado et al., 2012), particular attention has been directed to sports concussion. Indeed, Lehman et al. (2012) examined cause of death in retired National Football League (NFL) players and observed a threefold increased standardized mortality ratio in retired players. Hart et al. (2013) have related cognitive dysfunction and depression to white matter pathology in retired NFL players as well. De Beaumont et al. (2009), studying retired athletes who sustained their last concussion in early adulthood with an auditory oddball paradigm, found a significantly delayed and attenuated P300 component. All of this suggests that early injury may set the stage for adverse effects of the original injury to influence brain aging. However, Savica et al. (2012) examined a cohort of high school football players from 1946 through 1956 in Rochester, Minnesota, compared with 140 non-football-playing classmates, and found no increased levels of any type of dementia, Parkinson's disease, or amyotrophic lateral sclerosis. Obviously, such studies are only beginning, but, as reviewed herein, numerous CR and BRC hypotheses could be tested with regard to prior TBI and increased risk for dementia in later life.

SUMMARY
Factors associated with both brain and cognitive reserve likely influence outcome from TBI. From the BRC perspective, optimal brain size and the size of key brain structures prior to injury likely bestow some level of reserve in response to injury. The key element for the influence of BRC may be how much structural atrophy results from the injury and how these atrophic pathologic changes alter the brain's underlying neural networks that relate to particular functions. From the CR perspective, higher levels of premorbid cognitive ability are associated with better recovery. Higher premorbid ability is likely associated with efficient neural systems that capitalize on maximal neural connectivity within some optimal brain size or given structure, such as the hippocampus. Several examples of how BRC and CR intertwine in the recovery process following TBI have been offered.