Special Issue: Preventing Hearing Loss and Restoring Hearing: A New Outlook
Review Article | Open Access
Mary Rudner, Thomas Lunner, "Cognitive Spare Capacity and Speech Communication: A Narrative Overview", BioMed Research International, vol. 2014, Article ID 869726, 10 pages, 2014. https://doi.org/10.1155/2014/869726
Cognitive Spare Capacity and Speech Communication: A Narrative Overview
Background noise can make speech communication tiring and cognitively taxing, especially for individuals with hearing impairment. It is now well established that better working memory capacity is associated with better ability to understand speech under adverse conditions as well as better ability to benefit from the advanced signal processing in modern hearing aids. Recent work has shown that although such processing cannot overcome hearing handicap, it can increase cognitive spare capacity, that is, the ability to engage in higher level processing of speech. This paper surveys recent work on cognitive spare capacity and suggests new avenues of investigation.
1. Introduction
Speech is the main mode of communication for most people. If speech understanding is compromised by noise or hearing impairment, communication may become harder, leading to limitations in social participation. Technical compensation is available in the form of hearing aids. However, although the amplification provided by hearing aids can improve speech understanding in quiet, persons with hearing impairment still have disproportionately large difficulties understanding speech in noise. One of the reasons for this may be that when the cognitive resources required for speech comprehension are engaged in the lower level processes of deciphering the signal, fewer resources may be available for higher level language processing. In other words, cognitive spare capacity is reduced.
1.1. Speech Comprehension
Speech comprehension requires the auditory ability to hear the signal and the cognitive ability to relate this information to the existing knowledge stored in semantic long-term memory [1, 2]. The role of cognition in speech comprehension is reflected in the hierarchical nature of its cortical representation [3, 4].
Speech processing engages a clearly defined cortical network involving the classical language areas in the left inferior frontal cortex and superior temporal gyrus [3, 4]. The primary auditory cortex is sensitive to most sounds and is the first cortical region to be activated during speech perception. Listening to words activates the middle and superior temporal gyri bilaterally, and listening to sentences engages regions involved in processing semantics and syntax in the left prefrontal cortex. It has been possible to trace the pathways linking these regions by using animal models [4–6]. These pathways represent different functional streams that take either a ventral route through superior temporal regions to ventrolateral prefrontal cortex or a dorsal route through posterior parietal cortex and dorsolateral prefrontal cortex [6, 7]. The ventral route seems to deal more with conceptual or semantic processing, while the dorsal route is more closely related to phonological or articulatory processing [6, 7]. Ventral and dorsal routes for syntactic processing have also been proposed.
1.2. Hearing Impairment
Around 25% of the population in developed countries has a hearing impairment severe enough to interfere with speech communication. Hearing sensitivity decreases with age such that although only about 2% of individuals in their early twenties have a hearing loss, the prevalence of significant hearing impairment is 40–45% in persons over the age of 65 and exceeds 83% in persons over the age of 70 [10, 11]. Hearing difficulties are associated with long-term absence from work in the working-age population [12, 13] and loneliness in the older population. Further, individuals with better cognitive abilities report more hearing difficulties [15, 16], possibly because they have higher expectations of their communication. Even moderate degrees of hearing impairment lead to a decrease in neural activity during speech processing and may contribute to grey matter loss in primary auditory cortex [17, 18].
Types of hearing loss are traditionally categorized according to site of lesion: impairment of sound transmission in the external or middle ear is referred to as conductive hearing loss, while other types of hearing loss are referred to as sensorineural. Sensorineural hearing loss can be further subdivided into sensory loss, resulting from impairment of cochlear function; retrocochlear loss, resulting from impairments relating to conduction in the auditory nerve or brainstem; and central loss, resulting from impairments in cortical processing of the auditory signal. Sensorineural hearing loss is the major diagnostic category and includes age-related hearing loss, or presbyacusis. These categories are relatively coarse, and it has been suggested that they may be inadequate for pinpointing the contribution of hearing loss to communication difficulties under adverse listening conditions.
The primary diagnostic tool in audiology is the pure tone audiogram. This method of determining frequency-specific hearing thresholds is based on delivering sine waves of different intensities to each ear and asking the patient to respond by pressing a button each time a sound is heard. The resulting resolution is poor, and since the procedure requires the intention and attention that characterize listening, as opposed to simply hearing, it taps into cognitive processes that may also be declining with age, which may confound diagnosis. Other diagnostic tools include measures of auditory brainstem response and otoacoustic emissions, which may be more independent of high-level cognitive contribution, although it has recently been shown that cognitive load influences brainstem responses, and otoacoustic emissions may also be influenced by attention through efferent innervation. Assessment of speech intelligibility in quiet and in noise is also part of hearing evaluation.
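The threshold-seeking logic behind the pure tone audiogram is commonly implemented as a modified Hughson-Westlake staircase: the level drops 10 dB after each response, rises 5 dB after each miss, and the threshold is taken as the lowest level at which the listener responds on repeated ascending runs. The sketch below is a simplification for illustration only (it uses a deterministic listener model and a two-ascending-responses criterion; real responses are probabilistic, and clinical protocols add familiarization and retest rules):

```python
def simulate_listener(true_threshold_db):
    """Hypothetical deterministic listener: responds whenever the tone is audible."""
    return lambda level_db: level_db >= true_threshold_db

def hughson_westlake(respond, start_db=40, floor_db=-10, ceiling_db=120):
    """Down-10/up-5 staircase; threshold = lowest level with 2 ascending responses."""
    level = start_db
    hits = {}          # responses per level, counted only on ascending runs
    ascending = False  # True when the current presentation follows a miss
    for _ in range(200):                        # safety bound on presentations
        heard = respond(level)
        if heard and ascending:
            hits[level] = hits.get(level, 0) + 1
            if hits[level] >= 2:                # criterion met: report threshold
                return level
        if heard:
            level = max(floor_db, level - 10)   # response: descend 10 dB
            ascending = False
        else:
            level = min(ceiling_db, level + 5)  # miss: ascend 5 dB
            ascending = True
    return None
```

With the deterministic listener the staircase converges on the lowest 5 dB step at or above the true threshold, which is why thresholds are reported on a 5 dB grid.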
1.3. Hearing Aids
The most important objective for hearing aid signal processing is to make speech audible. This is not a trivial problem. Over 30 years ago, Plomp proposed a model of hearing aid benefit that classed hearing impairment in terms of attenuation and distortion, showing that while the hearing aids of the day could compensate well for the former by providing amplification, they were poorer at tackling the latter. As distortion is a characteristic of even the mildest hearing losses, it is important that hearing aids address this issue, and the industry has taken on this challenge. Distortion can be simply characterized as a decrease in the ability to distinguish speech from noise. It is due not only to decreased frequency and temporal resolution and an impaired ability to discriminate pitch and localize sound sources, but also to abnormal growth of loudness, such that if all sounds are amplified in the same way, some may become uncomfortably loud. Thus, modern digital hearing aids include technologies that tackle some of these problems. Wide dynamic range compression systems restore audibility by amplifying weaker sounds more than loud sounds to compensate for the abnormal growth of loudness. The regulation of the compression system may be fast (syllabic) or slow (automatic volume control). Fast-acting wide dynamic range compression (fast WDRC) provides different gain-frequency responses for adjacent speech sounds with different short-term spectra on a syllabic level. On the assumption that communication partners look at each other, directional microphones may be used to attenuate sounds not coming from the front. Of course, if the attended signal does not come from the front, directional microphones may make communication harder. Single-channel noise reduction schemes (NR) may reduce background sounds by identifying portions of the signal as nonspeech and attenuating them.
This does not improve speech intelligibility per se, but it may reduce the annoyance from background sounds. Notwithstanding the benefits of signal processing, there is no getting away from the fact that it may also degrade the auditory signal, which may make listening harder. This applies in particular to aggressive signal processing algorithms that may be used experimentally but are not generally prescribed to patients. Aggressive processing is characterized by substantial spectral alteration of the signal within the space of a few milliseconds. For example, some aggressive NR algorithms generate audible artifacts, and WDRC distorts individual speech sounds in ways that influence the phonological or sublexical structure of the incoming speech signal [22, 28–30].
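The level-dependent gain rule at the heart of WDRC can be sketched as a static input/output function: below a compression knee the gain is fixed, and above it each additional input dB adds only 1/ratio dB to the output, so weak sounds receive more gain than loud ones. The knee, gain, and ratio values below are illustrative assumptions rather than a clinical prescription, and a real hearing aid applies such a rule per frequency channel with attack/release smoothing of the level estimate (fast for syllabic compression, slow for automatic volume control):

```python
def wdrc_gain_db(input_db, knee_db=45.0, linear_gain_db=25.0, ratio=2.0):
    """Static WDRC rule: fixed gain below the knee; above it, gain is
    reduced so the output grows by only 1/ratio dB per input dB.
    Parameter values are illustrative, not a fitting prescription."""
    if input_db <= knee_db:
        return linear_gain_db
    # Compressed region: output slope above the knee is 1/ratio dB per dB.
    return linear_gain_db - (input_db - knee_db) * (1.0 - 1.0 / ratio)

def wdrc_output_db(input_db, **kwargs):
    """Output level = input level + level-dependent gain."""
    return input_db + wdrc_gain_db(input_db, **kwargs)
```

With these illustrative settings, soft speech at 40 dB receives 25 dB of gain while loud speech at 75 dB receives only 10 dB, so a 35 dB input range is compressed into a 20 dB output range, restoring audibility of weak sounds without making loud sounds uncomfortable.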
Acoustic noise impacting speech perception can be categorized as signal degradation, energetic masking, and informational masking. Signal degradation reduces the amount of information in the signal. As we have seen, this can be a result of hearing aid signal processing. Other examples relate to processing for data transmission. Energetic masking arises from a competing signal that partially obscures the target signal. Air conditioning fans are a good example. Informational masking also obscures the target signal but in addition has a fluctuating structure that in some circumstances may distract the listener but in others may allow the listener to systematically glimpse parts of the signal. An informational masker may consist of tonal patterns, for example, or one or more competing speakers. As regards the neural networks underpinning speech comprehension in noise, a pattern is starting to emerge involving widespread frontal and parietal activation as well as increased temporal activation. There is also some evidence that the brain tracks target and competing speech streams in a manner that is modulated by attention, with selective attention networks for pitch and location.
Persons with hearing impairment have particular difficulties listening in noise, which may be reflected in recruitment of neural networks supporting compensatory processing [35, 36]. Whereas persons with normal hearing are generally better at coping with informational than with energetic masking, the same may not always be true for persons with hearing impairment [38–40]. An informational masker includes cues in terms of pitch or temporal fine structure that may help segregation, and dips in the masker may reveal portions of the target signal. This may result in the listener perceiving fragments of a target signal that need to be pieced together to achieve understanding. An informational masker may also include semantic information that distracts the listener from the target signal and thus needs to be inhibited. Such processes rely on cognitive functions.
2. Working Memory and Speech Comprehension
2.1. The Role of Cognition in Listening
Cognitive processes are required to focus on the speech signal and match its contents to stored knowledge [1, 2]. When listening takes place in adverse conditions, for example, when there is background noise or the listener has a hearing impairment, high-level cognitive functions such as working memory and executive processes are implicated [41, 42]. Working memory (WM) is the capacity to perform task-relevant processing of information kept in mind [42, 43] and is supported by a frontoparietal network [44, 45] that is sensitive to stimulus quality and memory load [46, 47]. Many different models of WM have been proposed, and one of the most influential of them is the component model originating in the seminal 1974 paper by Baddeley and Hitch. This model was characterized by a central executive controlling two slave buffers for processing verbal and visuospatial information, respectively. It elegantly accounted for a host of empirical data from dual task paradigms, that is, tasks requiring two different kinds of processing at the same time. However, it could not easily account for evidence of multimodal information binding, for example, the use of visual cues during speech understanding. A new generation of WM models including an episodic buffer filling just such a function saw the light of day around the turn of the 21st century. These include an updated version of the original component model and a model specifically describing the role of WM in language understanding: the WM model for ease of language understanding (ELU) [41, 50]. Although early work placed the episodic buffer among executive functions organized in the frontal lobes, later work has shown that multimodal information binding does not necessarily load on executive functions. For example, visual binding has been shown to take place without executive involvement, and multimodal semantic binding has been shown to have its locus in the temporal lobes [53, 54].
The ELU model links in with a parallel line of conceptual development represented by the individual differences approach to WM. This approach focuses on the large variance in individual ability to perform WM tasks rather than characterizing different components of WM [55–57]. According to the ELU model, language understanding proceeds rapidly and smoothly under optimal listening conditions, facilitated by an episodic buffer which matches phonological information in the incoming speech stream with the existing representations stored in long-term memory. Because this buffer deals with the rapid, automatic multimodal binding of phonology, it is known by the acronym RAMBPHO. Adverse listening conditions hinder RAMBPHO processing. This may result in a mismatch between the auditory signal and information in the mental lexicon in long-term memory. Under such circumstances, explicit or conscious processing resources need to be brought into play to unlock the lexicon. The ELU model proposes that this occurs in a slow processing loop. Processing in the slow loop may include executive functions such as shifting, updating, and inhibition. Inhibition may be required to suppress irrelevant interpretations, while updating may bring new information into the buffer at the expense of discarding older information. Shifting may come into play to realign expectations [30, 59]. All these functions are linked to the frontal lobes, and there is evidence that they are supported by anatomically distinct substrates. Their role in speech communication under adverse conditions may be to bring together ambiguous signal fragments with relevant contextual information. There is a constant interplay between predictive priming of what is to come in a dialogue and postdictive reconstruction of what was missed through mismatches with the lexicon in semantic long-term memory.
There is no doubt that such processing is effortful, increasing cognitive load [61, 62] and modulating the neural networks involved in speech processing under adverse conditions. From an individual differences perspective, it makes sense that individuals with high WM capacity would perform better on tasks requiring speech understanding under adverse conditions, and this is indeed the case [64–66].
More than a decade ago, it was established that there is a relation between cognitive ability, in particular WM capacity, and the benefit obtained from hearing aid signal processing [64, 67–69]. In particular, it was shown that any benefit of fast-acting WDRC in terms of the ability to understand speech in noise was contingent on cognitive ability [64, 68]. Since then, it has been shown that this relationship is influenced by the type of background noise [70–72] and the type of target speech material [30, 70, 73]. Cognitive resources are especially important when modulated noise is combined with fast-acting WDRC [30, 61, 71–73], above all when the target speech is unpredictable. These complex relations change over time [30, 73, 74].
The capacity of WM can be increased by training, suggesting an inherent plasticity in the system [75, 76]. Training effects may generalize to similar nontrained tasks, for example, a different WM task. This is known as near transfer. However, generalization to other cognitive abilities, known as far transfer, has been elusive. Recent work, however, has shown that for older adults, cognitive training requiring multitasking can result in sustained reduction in multitasking costs and improvement in WM. As we have noted, WM is about simultaneous storage and processing, in other words a form of multitasking. The results of Anguera et al. suggest that in order to improve WM, it may be more efficient to target multitasking abilities as such. Since WM capacity is related to the ability to understand speech in noise, it is tempting to speculate that increasing WM capacity may also improve the ability to understand speech in noise. However, published evidence for the efficacy of individual computer-based auditory training for adults with hearing loss is not robust. We suggest that cognitive training that targets the multitasking abilities inherent in speech understanding under adverse conditions may improve WM capacity and result in better speech understanding in adverse conditions. This is an important avenue for future research.
3. Cognitive Spare Capacity for Communication
3.1. Cognitive Spare Capacity
When listening takes place in adverse conditions, it is clear that the cognitive resources available for higher level processing of speech will be reduced. In other words, the listener has less cognitive spare capacity (CSC) [59, 69, 81, 82].
CSC is closely related to WM in that it is concerned with short-term maintenance and processing of information. Work to date suggests that the storage functions of CSC and WM are similar but that once executive processing demands are introduced, there no longer seems to be a simple relationship between the two concepts [69, 82, 84]. Thus, in order to understand the role of cognition in speech understanding under adverse conditions, it is important to measure not only WM capacity but also CSC. The concept of CSC is related to, although distinct from, other concepts in the literature. For example, differences in susceptibility to functional impairment as a result of brain damage have been explained in terms of “cognitive reserve,” that is, individual differences in cognitive function, or “brain reserve,” that is, individual differences in brain size. CSC is similar to these concepts in that it is based on individual differences in cognitive function and may explain differences in speech communication and underlying mechanisms that may be related to functional changes at any level of the auditory system [69, 81].
Recent work has shown that noise reduction (NR) in hearing aids can enhance CSC by improving retention of heard speech [83, 87]. This applies to both adults with normal hearing thresholds and adults with sensorineural hearing impairment. In the study by Ng et al., experienced hearing aid users listened to sets of highly intelligible, ecologically valid sentences from the Swedish hearing in noise test (HINT) [88, 89]. The HINT sentences were presented in noise and the participants were asked to memorize the final word of each sentence. The participants repeated all the target words to ensure that they were intelligible. At the end of each set, participants were prompted to recall all the sentence-final words. Although they were capable of repeating the sentence-final words, irrespective of the presence of background noise, noise did disrupt recall performance. Being able to retain heard information is an integral part of speech communication. Thus, the findings of Ng et al. demonstrate that, for individuals with hearing impairment, background noise reduces the cognitive resources available for performing the kind of cognitive processing involved in communication. This is in line with the work showing that extra effort expended simply in order to hear comes at the cost of processing resources that might otherwise be available for encoding the speech content in memory [90, 91]. However, when NR was implemented, the negative effect of noise on recall was reduced, even though the ability to repeat sentence-final words remained the same. This demonstrates that hearing aid signal processing can enhance memory processes underpinning speech communication. Informational masking was more disruptive of memory processing than energetic masking and was also more susceptible to the positive effect of NR.
However, it remains to be determined whether it is the semantic content or phonological structure of the informational masker that interacts with the ability of NR to improve memory for highly intelligible speech.
Speech communication under adverse conditions is likely to draw on cognitive functions other than simply memory retention [30, 59]. In order to investigate the ability to perform executive processing of heard speech at different memory loads, the cognitive spare capacity test (CSCT) [82, 84] was developed. In the CSCT, sets of spoken two-digit numbers are presented and the participant reports back certain numbers according to instructions. Two executive functions are targeted at two different memory loads. The executive functions in question are updating and inhibition, both of which are likely to be engaged during speech understanding in adverse conditions. Updating ability may be required to strategically replace the contents of WM with relevant material while inhibition ability may be brought into play to keep irrelevant information out of WM. Memory load depends on how many numbers need to be reported. In everyday communication, seeing the face of your communication partner can enhance speech perception by several dB. Thus, in order to determine how visual cues influence CSC, the CSCT manipulates availability of visual cues. The CSCT can be administered in quiet or in noise and other manipulations introducing different kinds of signal processing are also possible.
Across three different studies including persons with and without hearing loss, an interesting pattern of results has emerged [69, 82, 84, 93]. Adults with normal hearing who perform the CSCT in quiet conditions have lower scores when they see the talker’s face [82, 84]. This is probably because when target information is highly intelligible, visual cues provide superfluous information that causes distraction during performance of the executive tasks [82, 84]. Although this finding is contrary to the literature on speech perception, which demonstrates better performance in noise when the talker’s face is visible, both for individuals with normal hearing and for individuals with hearing impairment [95–97], it is in line with other lines of evidence showing that visual cues may increase listening effort [98, 99]. In particular, dual task performance is lower for audiovisual compared to auditory stimuli when intelligibility is equated across modalities [98, 99].
Adults with normal hearing who perform the CSCT in noisy conditions do not show this pattern, and neither do older adults with raised hearing thresholds, even in quiet. In these conditions, visual cues probably help segregate the target signal from internal or external noise, resulting in richer cognitive representations [82, 100]. Older adults with hearing loss demonstrate lower CSC than young adults, even with a better SNR, adapted to provide high intelligibility, and individualised amplification, and this effect is most notable in noise and when memory load is high. Although CSC and WM do not seem to be strongly related, there is evidence that age-related differences in WM and executive function do influence CSC [69, 93]. It remains to be seen how different kinds of hearing aid signal processing will interact with executive processing of speech with and without visual cues, and whether training CSC can counteract age-related decline in its capacity or even improve it. Adaptive training based on CSCT processing may provide a means of improving the ability to understand speech under adverse conditions.
3.2. Phonological Representation
The ELU model describes the way in which the mapping of the phonological structure of target speech onto phonological representations in the mental lexicon is mediated by WM during speech understanding under adverse conditions. We have seen that fast-acting WDRC distorts the speech signal in a way that may influence its phonological characteristics [22, 28–30]. In the short term, this may make it harder to match speech to representations, thus requiring more cognitive engagement to achieve speech understanding [41, 70, 73]. However, in the long term, when hearing aid users have had the opportunity to become accustomed to the way in which speech sounds different, phonological representations may alter to match incoming information. Some evidence of this has been found in cochlear implantees and hearing aid users. It is even possible that the new phonological representations based on processed speech may be more mutually distinct than the representations they replace, which were based on less appropriate signal processing. The neural correlates of such changes in phonological representation due to habitual use of WDRC have yet to be investigated.
Lexical access is faster when phonological representations are easier to distinguish from each other [102, 104]. However, long-term severe acquired hearing impairment may lead to less distinct phonological representations. This makes it harder to determine whether printed words rhyme with each other, especially when orthography is misleading. For example, individuals with poor phonological representations due to severe long-term hearing impairment may be more unsure than their peers with normal hearing whether “pint” rhymes with “lint” or whether “blue” rhymes with “through.” However, good WM capacity can compensate for this deficit, albeit at the cost of long-term memory representations. Compensatory processing by individuals with hearing impairment during visual rhyme judgment is associated with larger amplitude of the N2 component, indicating use of a compensatory strategy, possibly involving increased reliance on explicit mechanisms such as articulatory recoding and grapheme-to-phoneme conversion.
In summary, the phonological structure of target speech material is influenced not only by speaker characteristics but also by distortion due to hearing aid signal processing. Phonological representations in the mental lexicon may be influenced by long-term effects of both hearing impairment and signal processing. Further, both of these may have distinct neural signatures. Measures designed to improve the phonological distinctiveness of both target speech and phonological representations are likely to enhance CSC and support speech communication under adverse conditions. This deserves further investigation.
3.3. Semantic Context
Provision of semantic context can facilitate speech understanding under adverse conditions. This process engages language networks in left posterior inferior temporal cortex and inferior frontal gyri bilaterally. Studies investigating the role of WM capacity in the benefit obtained from WDRC have indicated that the semantic content of the materials delivered for speech recognition may influence this relationship. For example, Rudner et al. found that WM capacity was associated with speech understanding for individuals with hearing impairment using WDRC listening to matrix-type (Hagerman) sentences [109, 110], but not Swedish HINT sentences [88, 89]. The Hagerman sentences are semantically coherent, but the five-word syntactic structure is always the same and each word comes from a closed set of ten appropriate items. Thus none of the items can be accurately predicted. The HINT sentences, by contrast, are diverse in length, syntactic structure, and semantic coherence. It is likely that the constrained structure and content of the Hagerman sentences make guessing harder and thus increase reliance on the bottom-up information provided by the speech signal. However, it has been found that the benefit of having access to the temporal fine structure of the speech signal was greater for open-set materials than for closed-set materials, indicating that the regular structure and closed set of matrix-like sentences can facilitate guessing. Future work should systematically investigate the interaction between the semantic coherence of the speech signal, hearing aid signal processing, and individual cognitive characteristics such as WM and CSC.
Text cues can facilitate speech understanding in noise when they match the semantic content of the auditory signal [112–116] and inhibit it when they are misleading. Cue integration is supported by language networks including the inferior frontal gyrus and temporal regions. Matching text cues also enhance the perceived clarity of degraded speech, and recently it was shown that this effect may be modulated by both lexical access speed and WM capacity. WM capacity modulates the activation of networks involved in semantic processing and also predicts the ability to inhibit misleading text cues during speech understanding in steady-state noise as well as the facilitation of speech understanding against a single-talker background. Recently, it has been shown that coherence and cues can have separate facilitatory effects on perceived clarity of degraded speech. Future work should focus on determining the benefit of providing text cues for hearing aid users, for example, using automatic speech recognition, and how this interacts with the semantic coherence of the target speech, the availability of semantic content in the noise background, and individual cognitive skills. Imaging studies are likely to provide important information about the neurocognitive systems supporting these complex interactions.
4. Aging and Communication
Sensory and cognitive functions decline with age [118, 119]. Sensory decline can be traced to physiological change, but the mechanisms behind cognitive change are more elusive, although both genetic and lifestyle factors have been implicated. Several different theories attempt to explain the relation between sensory and cognitive decline. The common cause hypothesis proposes that a general reduction in processing efficiency drives both phenomena. The information degradation hypothesis, on the other hand, claims that when sensory input is degraded, cognitive processing becomes less efficient as a result. Reserve theories suggest that the ability to cope with brain damage is related to premorbid brain size or cognitive ability. The compensation-related utilization of neural circuits hypothesis suggests that older adults compensate for less efficient processing by engaging more neural resources than younger adults while task load is still relatively low, while brain maintenance theory proposes that individual differences in the manifestation of age-related brain changes and pathology allow some people to show little or no age-related cognitive decline. All these theories are more or less sophisticated in their attempts to capture the relationship between physiological, sensory, and cognitive function in an aging perspective. The relations they describe suggest that keeping the brain healthy and providing it with better sensory input will facilitate speech understanding for individuals of advancing age. The theories that focus on a special role for cognition suggest that lowering cognitive load and enhancing CSC during speech communication may have special importance in later adulthood and even allow some older adults to function communicatively just as successfully as their younger counterparts.
Recent work has shown that older adults show less activation in auditory cortex than younger adults while listening to speech in noise, especially at poor signal-to-noise ratios, and compensate by recruiting prefrontal and parietal areas associated with WM. Epidemiological studies show that individuals with hearing loss are at increased risk of cognitive impairment and that the rate of cognitive decline and risk of cognitive impairment are associated with severity of hearing loss. Thus, hearing loss may result in decreasing CSC. No study has yet specifically addressed this issue. However, analysis of data from the Betula study of cognitive aging demonstrated that hearing aid users with poorer hearing also had poorer long-term memory. This applied even when the long-term memory task had no auditory component. However, degree of hearing loss was not associated with decline in WM. Importantly, there was no significant association between loss of vision and cognitive function. These results suggest that although hearing loss and cognitive decline are related, even in hearing aid users, the association may not apply across all cognitive domains. The challenge is to uncover the specific mechanisms behind age-related sensory and cognitive decline so that speech communication can be preserved into old age by optimizing cognitive capacity. This may involve a range of different interventions that target hearing through appropriate hearing aid fitting, enhance the role of other sensory modalities that can be exploited in communication, and capitalize on cognitive abilities by seeking to maintain and extend them.
Speech communication in adverse conditions makes specific demands on cognitive resources. In particular, WM capacity and executive function are engaged in unravelling the speech signal, depleting CSC and leaving fewer resources for higher level processing of speech. CSC is influenced by cognitive load, noise, visual cues, and aging, and can be enhanced by appropriate hearing aid signal processing. The phonological structure and semantic content of speech influence processing mechanisms and the engagement of cognitive resources. Optimizing CSC is thus an important aim for preserving speech communication into old age. We have reviewed evidence suggesting that CSC may be enhanced by a number of means, including cognitive training and providing the optimal balance between visual, phonological, and semantic information. Future research should focus on finding ways to optimize CSC.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
The authors would like to thank Jerker Rönnberg for comments and suggestions on a previous version of the paper.
- J. Kiessling, M. K. Pichora-Fuller, S. Gatehouse et al., “Candidature for and delivery of audiological services: special needs of older people,” International Journal of Audiology, vol. 42, supplement 2, pp. S92–S101, 2003.
- M. K. Pichora-Fuller and G. Singh, “Effects of age on auditory and cognitive processing: implications for hearing aid fitting and audiologic rehabilitation,” Trends in Amplification, vol. 10, no. 1, pp. 29–59, 2006.
- J. E. Peelle, “The hemispheric lateralization of speech processing depends on what ‘speech’ is: a hierarchical perspective,” Frontiers in Human Neuroscience, vol. 6, article 309, 2012.
- S. K. Scott and I. S. Johnsrude, “The neuroanatomical and functional organization of speech perception,” Trends in Neurosciences, vol. 26, no. 2, pp. 100–107, 2003.
- J. H. Kaas and T. A. Hackett, “Subdivisions of auditory cortex and processing streams in primates,” Proceedings of the National Academy of Sciences of the United States of America, vol. 97, no. 22, pp. 11793–11799, 2000.
- J. P. Rauschecker and S. K. Scott, “Maps and streams in the auditory cortex: nonhuman primates illuminate human speech processing,” Nature Neuroscience, vol. 12, no. 6, pp. 718–724, 2009.
- G. Hickok and D. Poeppel, “The cortical organization of speech processing,” Nature Reviews Neuroscience, vol. 8, no. 5, pp. 393–402, 2007.
- A. D. Friederici and S. M. Gierhan, “The language network,” Current Opinion in Neurobiology, vol. 23, no. 2, pp. 250–254, 2013.
- A. Davis and G. Britain, “Acceptability, benefit and costs of early screening for hearing disability: a study of potential screening tests and models,” Tech. Rep., National Coordinating Centre for Health Technology Assessment, University of Southampton, 2007.
- A. Davis and K. A. Davis, “Epidemiology of aging and hearing loss related to other chronic illnesses,” in Proceedings of the 2nd International Adult Conference Hearing Care for Adults—The Challenge of Aging, L. Hickson, Ed., pp. 23–32, Chicago, Ill, USA, 2009.
- S. Gordon-Salant, “Hearing loss and aging: new research findings and clinical implications,” Journal of Rehabilitation Research and Development, vol. 42, no. 4, pp. 9–24, 2005.
- J. Nachtegaal, J. M. Festen, and S. E. Kramer, “Hearing ability in working life and its relationship with sick leave and self-reported work productivity,” Ear and Hearing, vol. 33, no. 1, pp. 94–103, 2012.
- P. V. Pierre, A. Fridberger, A. Wikman, and K. Alexanderson, “Self-reported hearing difficulties, main income sources, and socio-economic status; a cross-sectional population-based study in Sweden,” BMC Public Health, vol. 12, no. 1, article 874, 2012.
- M. Pronk, D. J. Deeg, and S. E. Kramer, “Hearing status in older persons: a significant determinant of depression and loneliness? Results from the Longitudinal Aging Study Amsterdam,” American Journal of Audiology, vol. 22, no. 2, pp. 316–320, 2013.
- E. H. N. Ng, M. Rudner, T. Lunner, and J. Rönnberg, “Relationships between self-report and cognitive measures of hearing aid outcome,” Speech, Language and Hearing, vol. 16, no. 4, pp. 197–207, 2013.
- A. A. Zekveld, E. L. George, T. Houtgast, and S. E. Kramer, “Cognitive abilities relate to self-reported hearing disability,” Journal of Speech, Language, and Hearing Research, vol. 56, no. 5, pp. 1364–1372, 2013.
- M. A. Eckert, S. L. Cute, K. I. Vaden Jr., S. E. Kuchinsky, and J. R. Dubno, “Auditory cortex signs of age-related hearing loss,” Journal of the Association for Research in Otolaryngology, vol. 13, no. 5, pp. 703–713, 2012.
- J. E. Peelle, V. Troiani, M. Grossman, and A. Wingfield, “Hearing loss in older adults affects neural systems supporting speech comprehension,” The Journal of Neuroscience, vol. 31, no. 35, pp. 12638–12643, 2011.
- S. Stenfelt and J. Rönnberg, “The Signal-Cognition interface: interactions between degraded auditory signals and cognitive processes,” Scandinavian Journal of Psychology, vol. 50, no. 5, pp. 385–393, 2009.
- P. Sörqvist, S. Stenfelt, and J. Rönnberg, “Working memory capacity and visual-verbal cognitive load modulate auditory-sensory gating in the brainstem: toward a unified view of attention,” Journal of Cognitive Neuroscience, vol. 24, no. 11, pp. 2147–2154, 2012.
- S. Srinivasan, A. Keil, K. Stratis, K. L. Woodruff Carr, and D. W. Smith, “Effects of cross-modal selective attention on the sensory periphery: cochlear sensitivity is altered by selective attention,” Neuroscience, vol. 223, pp. 325–332, 2012.
- T. Lunner, M. Rudner, and J. Rönnberg, “Cognition and hearing aids,” Scandinavian Journal of Psychology, vol. 50, no. 5, pp. 395–403, 2009.
- R. Plomp, “Auditory handicap of hearing impairment and the limited benefit of hearing aids,” Journal of the Acoustical Society of America, vol. 63, no. 2, pp. 533–549, 1978.
- B. Edwards, “The future of hearing aid technology,” Trends in Amplification, vol. 11, no. 1, pp. 31–45, 2007.
- B. C. J. Moore, “Perceptual consequences of cochlear hearing loss and their implications for the design of hearing aids,” Ear and Hearing, vol. 17, no. 2, pp. 133–161, 1996.
- H. Dillon, Hearing Aids, Boomerang Press, Turramurra, Australia, 2nd edition, 2012.
- D. Wang, U. Kjems, M. S. Pedersen, J. B. Boldt, and T. Lunner, “Speech perception of noise with binary gains,” Journal of the Acoustical Society of America, vol. 124, no. 4, pp. 2303–2307, 2008.
- L. M. Jenstad and P. E. Souza, “Quantifying the effect of compression hearing aid release time on speech acoustics and intelligibility,” Journal of Speech, Language, and Hearing Research, vol. 48, no. 3, pp. 651–667, 2005.
- S. Bor, P. Souza, and R. Wright, “Multichannel compression: effects of reduced spectral contrast on vowel identification,” Journal of Speech, Language, and Hearing Research, vol. 51, no. 5, pp. 1315–1327, 2008.
- M. Rudner, J. Rönnberg, and T. Lunner, “Working memory supports listening in noise for persons with hearing impairment,” Journal of the American Academy of Audiology, vol. 22, no. 3, pp. 156–167, 2011.
- S. L. Mattys, M. H. Davis, A. R. Bradlow, and S. K. Scott, “Speech recognition in adverse conditions: a review,” Language and Cognitive Processes, vol. 27, no. 7-8, pp. 953–978, 2012.
- S. K. Scott and C. McGettigan, “The neural processing of masked speech,” Hearing Research, vol. 303, pp. 58–66, 2013.
- E. M. Zion Golumbic, N. Ding, S. Bickel et al., “Mechanisms underlying selective neuronal tracking of attended speech at a ‘cocktail party’,” Neuron, vol. 77, no. 5, pp. 980–991, 2013.
- A. K. C. Lee, S. Rajaram, J. Xia et al., “Auditory selective attention reveals preparatory activity in different cortical regions for selection based on source location and source pitch,” Frontiers in Neuroscience, vol. 6, article 190, 2013.
- P. A. Reuter-Lorenz and K. A. Cappell, “Neurocognitive aging and the compensation hypothesis,” Current Directions in Psychological Science, vol. 17, no. 3, pp. 177–182, 2008.
- P. C. M. Wong, J. X. Jin, G. M. Gunasekera, R. Abel, E. R. Lee, and S. Dhar, “Aging and cortical mechanisms of speech perception in noise,” Neuropsychologia, vol. 47, no. 3, pp. 693–703, 2009.
- A. J. Duquesnoy, “Effect of a single interfering noise or speech source upon the binaural sentence intelligibility of aged persons,” Journal of the Acoustical Society of America, vol. 74, no. 3, pp. 739–743, 1983.
- J. M. Festen and R. Plomp, “Effects of fluctuating noise and interfering speech on the speech-reception threshold for impaired and normal hearing,” Journal of the Acoustical Society of America, vol. 88, no. 4, pp. 1725–1736, 1990.
- E. L. J. George, A. A. Zekveld, S. E. Kramer, S. T. Goverts, J. M. Festen, and T. Houtgast, “Auditory and nonauditory factors affecting speech reception in noise by older listeners,” Journal of the Acoustical Society of America, vol. 121, no. 4, pp. 2362–2375, 2007.
- C. Lorenzi, G. Gilbert, H. Carn, S. Garnier, and B. C. J. Moore, “Speech perception problems of the hearing impaired reflect inability to use temporal fine structure,” Proceedings of the National Academy of Sciences of the United States of America, vol. 103, no. 49, pp. 18866–18869, 2006.
- J. Rönnberg, T. Lunner, A. Zekveld et al., “The Ease of Language Understanding (ELU) model: theory, data, and clinical implications,” Frontiers in Systems Neuroscience, vol. 7, article 31, 2013.
- P. Sörqvist, “The role of working memory capacity in auditory distraction: a review,” Noise and Health, vol. 12, no. 49, pp. 217–224, 2010.
- A. D. Baddeley and G. Hitch, “Working memory,” Psychology of Learning and Motivation—Advances in Research and Theory, vol. 8, pp. 47–89, 1974.
- R. Cabeza and L. Nyberg, “Imaging cognition II: an empirical review of 275 PET and fMRI studies,” Journal of Cognitive Neuroscience, vol. 12, no. 1, pp. 1–47, 2000.
- E. E. Smith and J. Jonides, “Working memory: a view from neuroimaging,” Cognitive Psychology, vol. 33, no. 1, pp. 5–42, 1997.
- D. M. Barch, T. S. Braver, L. E. Nystrom, S. D. Forman, D. C. Noll, and J. D. Cohen, “Dissociating working memory from task difficulty in human prefrontal cortex,” Neuropsychologia, vol. 35, no. 10, pp. 1373–1380, 1997.
- T. S. Braver, J. D. Cohen, L. E. Nystrom, J. Jonides, E. E. Smith, and D. C. Noll, “A parametric study of prefrontal cortex involvement in human working memory,” NeuroImage, vol. 5, no. 1, pp. 49–62, 1997.
- A. Miyake and P. Shah, Models of Working Memory: Mechanisms of Active Maintenance and Executive Control, Cambridge University Press, 1999.
- A. Baddeley, “The episodic buffer: a new component of working memory?” Trends in Cognitive Sciences, vol. 4, no. 11, pp. 417–423, 2000.
- J. Rönnberg, “Cognition in the hearing impaired and deaf as a bridge between signal and dialogue: a framework and a model,” International Journal of Audiology, vol. 42, supplement 1, pp. S68–S76, 2003.
- V. Prabhakaran, K. Narayanan, Z. Zhao, and J. D. E. Gabrieli, “Integration of diverse information in working memory within the frontal lobe,” Nature Neuroscience, vol. 3, no. 1, pp. 85–90, 2000.
- R. J. Allen, A. D. Baddeley, and G. J. Hitch, “Is the binding of visual features in working memory resource-demanding?” Journal of Experimental Psychology: General, vol. 135, no. 2, pp. 298–313, 2006.
- M. Rudner, P. Fransson, M. Ingvar, L. Nyberg, and J. Rönnberg, “Neural representation of binding lexical signs and words in the episodic buffer of working memory,” Neuropsychologia, vol. 45, no. 10, pp. 2258–2276, 2007.
- M. Rudner and J. Rönnberg, “The role of the episodic buffer in working memory for language processing,” Cognitive Processing, vol. 9, no. 1, pp. 19–28, 2008.
- M. A. Just and P. A. Carpenter, “A capacity theory of comprehension: individual differences in working memory,” Psychological Review, vol. 99, no. 1, pp. 122–149, 1992.
- M. Daneman and P. A. Carpenter, “Individual differences in integrating information between and within sentences,” Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 9, no. 4, pp. 561–584, 1983.
- M. Daneman and P. A. Carpenter, “Individual differences in working memory and reading,” Journal of Verbal Learning and Verbal Behavior, vol. 19, no. 4, pp. 450–466, 1980.
- A. Miyake, N. P. Friedman, M. J. Emerson, A. H. Witzki, A. Howerter, and T. D. Wager, “The unity and diversity of executive functions and their contributions to complex ‘frontal lobe’ tasks: a latent variable analysis,” Cognitive Psychology, vol. 41, no. 1, pp. 49–100, 2000.
- M. Rudner, E. H. N. Ng, N. Rönnberg et al., “Cognitive spare capacity as a measure of listening effort,” Journal of Hearing Science, vol. 1, no. 2, pp. 47–49, 2011.
- P. C. Fletcher and R. N. A. Henson, “Frontal lobes and human memory insights from functional neuroimaging,” Brain, vol. 124, no. 5, pp. 849–881, 2001.
- M. Rudner, T. Lunner, T. Behrens, E. S. Thorén, and J. Rönnberg, “Working memory capacity may influence perceived effort during aided speech recognition in noise,” Journal of the American Academy of Audiology, vol. 23, no. 8, pp. 577–589, 2012.
- A. A. Zekveld, S. E. Kramer, and J. M. Festen, “Cognitive load during speech perception in noise: the influence of age, hearing loss, and cognition on the pupil response,” Ear and Hearing, vol. 32, no. 4, pp. 498–510, 2011.
- C. J. Wild, A. Yusuf, D. E. Wilson, J. E. Peelle, M. H. Davis, and I. S. Johnsrude, “Effortful listening: the processing of degraded speech depends critically on attention,” The Journal of Neuroscience, vol. 32, no. 40, pp. 14010–14021, 2012.
- T. Lunner, “Cognitive function in relation to hearing aid use,” International Journal of Audiology, vol. 42, supplement 1, pp. S49–S58, 2003.
- M. A. Akeroyd, “Are individual differences in speech reception related to individual differences in cognitive ability? A survey of twenty experimental studies with normal and hearing-impaired adults,” International Journal of Audiology, vol. 47, supplement 2, pp. S53–S71, 2008.
- J. Besser, T. Koelewijn, A. A. Zekveld, S. E. Kramer, and J. M. Festen, “How linguistic closure and verbal working memory relate to speech recognition in noise—a review,” Trends in Amplification, vol. 17, no. 2, pp. 75–93, 2013.
- M. K. Pichora-Fuller, “How cognition might influence hearing aid design, fitting, and outcomes,” Hearing Journal, vol. 62, no. 11, pp. 32–35, 2009.
- S. Gatehouse, G. Naylor, and C. Elberling, “Benefits from hearing aids in relation to the interaction between the user and the environment,” International Journal of Audiology, vol. 42, supplement 1, pp. S77–S85, 2003.
- M. Rudner and T. Lunner, “Cognitive spare capacity as a window on hearing aid benefit,” Seminars in Hearing, vol. 34, no. 4, pp. 298–307, 2013.
- C. Foo, M. Rudner, J. Rönnberg, and T. Lunner, “Recognition of speech in noise with new hearing instrument compression release settings requires explicit cognitive storage and processing capacity,” Journal of the American Academy of Audiology, vol. 18, no. 7, pp. 618–631, 2007.
- T. Lunner and E. Sundewall-Thorén, “Interactions between cognition, compression, and listening conditions: effects on speech-in-noise performance in a two-channel hearing aid,” Journal of the American Academy of Audiology, vol. 18, no. 7, pp. 604–617, 2007.
- M. Rudner, C. Foo, E. Sundewall-Thorén, T. Lunner, and J. Rönnberg, “Phonological mismatch and explicit cognitive processing in a sample of 102 hearing-aid users,” International Journal of Audiology, vol. 47, supplement 2, pp. S91–S98, 2008.
- M. Rudner, C. Foo, J. Rönnberg, and T. Lunner, “Cognition and aided speech recognition in noise: specific role for cognitive factors following nine-week experience with adjusted compression settings in hearing aids,” Scandinavian Journal of Psychology, vol. 50, no. 5, pp. 405–418, 2009.
- R. M. Cox and J. Xu, “Short and long compression release times: speech understanding, real-world preferences, and association with cognitive ability,” Journal of the American Academy of Audiology, vol. 21, no. 2, pp. 121–138, 2010.
- E. Dahlin, A. S. Neely, A. Larsson, L. Bäckman, and L. Nyberg, “Transfer of learning after updating training mediated by the striatum,” Science, vol. 320, no. 5882, pp. 1510–1512, 2008.
- T. Klingberg, E. Fernell, P. J. Olesen et al., “Computerized training of working memory in children with ADHD—a randomized, controlled trial,” Journal of the American Academy of Child and Adolescent Psychiatry, vol. 44, no. 2, pp. 177–186, 2005.
- A. M. Owen, A. Hampshire, J. A. Grahn et al., “Putting brain training to the test,” Nature, vol. 465, no. 7299, pp. 775–778, 2010.
- J. Anguera, J. Boccanfuso, J. Rintoul et al., “Video game training enhances cognitive control in older adults,” Nature, vol. 501, no. 7465, pp. 97–101, 2013.
- H. Henshaw and M. A. Ferguson, “Efficacy of individual computer-based auditory training for people with hearing loss: a systematic review of the evidence,” PLoS ONE, vol. 8, no. 5, Article ID e62836, 2013.
- M. K. Pichora-Fuller, “Audition and cognition: what audiologists need to know about listening,” in Hearing Care for Adults, C. Palmer and R. Seewald, Eds., pp. 71–85, Stäfa, Switzerland, 2007.
- S. Mishra, M. Rudner, T. Lunner, and J. Rönnberg, “Speech understanding and cognitive spare capacity,” in Proceedings of the International Symposium on Auditory and Audiological Research (ISAAR '10), pp. 305–313, Elsinore, Denmark, 2010.
- S. Mishra, T. Lunner, S. Stenfelt, J. Rönnberg, and M. Rudner, “Seeing the talker’s face supports executive processing of speech in steady state noise,” Frontiers in Systems Neuroscience, vol. 7, article 96, 2013.
- E. H. N. Ng, M. Rudner, T. Lunner, M. S. Pedersen, and J. Rönnberg, “Effects of noise and working memory capacity on memory processing of speech for hearing-aid users,” International Journal of Audiology, vol. 52, no. 7, pp. 433–441, 2013.
- S. Mishra, T. Lunner, S. Stenfelt, J. Rönnberg, and M. Rudner, “Visual information can hinder working memory processing of speech,” Journal of Speech, Language, and Hearing Research, vol. 56, no. 4, pp. 1120–1132, 2013.
- D. Barulli and Y. Stern, “Efficiency, capacity, compensation, maintenance, plasticity: emerging concepts in cognitive reserve,” Trends in Cognitive Sciences, vol. 17, no. 10, pp. 502–509, 2013.
- P. Satz, M. A. Cole, D. J. Hardy, and Y. Rassovsky, “Brain and cognitive reserve: mediator(s) and construct validity, a critique,” Journal of Clinical and Experimental Neuropsychology, vol. 33, no. 1, pp. 121–130, 2011.
- A. Sarampalis, S. Kalluri, B. Edwards, and E. Hafter, “Objective measures of listening effort: effects of background noise and noise reduction,” Journal of Speech, Language, and Hearing Research, vol. 52, no. 5, pp. 1230–1240, 2009.
- M. Hällgren, B. Larsby, and S. Arlinger, “A Swedish version of the Hearing In Noise Test (HINT) for measurement of speech recognition,” International Journal of Audiology, vol. 45, no. 4, pp. 227–237, 2006.
- M. Nilsson, S. D. Soli, and J. A. Sullivan, “Development of the hearing in noise test for the measurement of speech reception thresholds in quiet and in noise,” Journal of the Acoustical Society of America, vol. 95, no. 2, pp. 1085–1099, 1994.
- S. L. McCoy, P. A. Tun, L. C. Cox, M. Colangelo, R. A. Stewart, and A. Wingfield, “Hearing loss and perceptual effort: downstream effects on older adults' memory for speech,” Quarterly Journal of Experimental Psychology Section A: Human Experimental Psychology, vol. 58, no. 1, pp. 22–33, 2005.
- D. R. Murphy, F. I. M. Craik, K. Z. H. Li, and B. A. Schneider, “Comparing the effects of aging and background noise on short-term memory performance,” Psychology and Aging, vol. 15, no. 2, pp. 323–334, 2000.
- W. H. Sumby and I. Pollack, “Visual contribution to speech intelligibility in noise,” Journal of the Acoustical Society of America, vol. 26, no. 2, pp. 212–215, 1954.
- S. Mishra, S. Stenfelt, T. Lunner, J. Rönnberg, and M. Rudner, “Cognitive spare capacity in older adults with hearing loss,” Frontiers in Aging Neuroscience, vol. 6, article 96, 2014.
- N. P. Erber, “Interaction of audition and vision in the recognition of oral speech stimuli,” Journal of Speech and Hearing Research, vol. 12, no. 2, pp. 423–425, 1969.
- J. G. W. Bernstein and K. W. Grant, “Auditory and auditory-visual intelligibility of speech in fluctuating maskers for normal-hearing and hearing-impaired listeners,” Journal of the Acoustical Society of America, vol. 125, no. 5, pp. 3358–3372, 2009.
- K. W. Grant, B. E. Walden, and P. F. Seitz, “Auditory-visual speech recognition by hearing-impaired subjects: consonant recognition, sentence recognition, and auditory-visual integration,” Journal of the Acoustical Society of America, vol. 103, no. 5, pp. 2677–2690, 1998.
- K. W. Grant and P.-F. Seitz, “The use of visible speech cues for improving auditory detection of spoken sentences,” Journal of the Acoustical Society of America, vol. 108, no. 3 I, pp. 1197–1208, 2000.
- S. Fraser, J.-P. Gagné, M. Alepins, and P. Dubois, “Evaluating the effort expended to understand speech in noise using a dual-task paradigm: the effects of providing visual speech cues,” Journal of Speech, Language, and Hearing Research, vol. 53, no. 1, pp. 18–33, 2010.
- P. A. Gosselin and J.-P. Gagné, “Older adults expend more listening effort than young adults recognizing audiovisual speech in noise,” International Journal of Audiology, vol. 50, no. 11, pp. 786–792, 2011.
- J. B. Frtusova, A. H. Winneke, and N. A. Phillips, “ERP Evidence that auditory-visual speech facilitates working memory in younger and older adults,” Psychology and Aging, vol. 28, no. 2, pp. 481–494, 2013.
- M. K. Pichora-Fuller, B. A. Schneider, and M. Daneman, “How young and old adults listen to and remember speech in noise,” Journal of the Acoustical Society of America, vol. 97, no. 1, pp. 593–608, 1995.
- P. A. Luce and D. B. Pisoni, “Recognizing spoken words: the neighborhood activation model,” Ear and Hearing, vol. 19, no. 1, pp. 1–36, 1998.
- D. S. Lazard, H. J. Lee, M. Gaebler, C. A. Kell, E. Truy, and A. L. Giraud, “Phonological processing in post-lingual deafness and cochlear implant outcome,” NeuroImage, vol. 49, no. 4, pp. 3443–3451, 2010.
- S. Moradi, B. Lidestam, and J. Rönnberg, “Gated audiovisual speech identification in silence vs. noise: effects on time and accuracy,” Frontiers in Psychology, vol. 4, article 359, 2013.
- U. Andersson, “Deterioration of the phonological processing skills in adults with an acquired severe hearing loss,” European Journal of Cognitive Psychology, vol. 14, no. 3, pp. 335–352, 2002.
- E. Classon, M. Rudner, and J. Rönnberg, “Working memory compensates for hearing related phonological processing deficit,” Journal of Communication Disorders, vol. 46, no. 1, pp. 17–29, 2013.
- E. Classon, M. Rudner, M. Johansson, and J. Rönnberg, “Early ERP signature of hearing impairment in visual rhyme judgment,” Frontiers in Psychology, vol. 4, article 241, 2013.
- J. M. Rodd, M. H. Davis, and I. S. Johnsrude, “The neural mechanisms of speech comprehension: fMRI studies of semantic ambiguity,” Cerebral Cortex, vol. 15, no. 8, pp. 1261–1269, 2005.
- B. Hagerman and C. Kinnefors, “Efficient adaptive methods for measuring speech reception threshold in quiet and in noise,” Scandinavian Audiology, vol. 24, no. 1, pp. 71–77, 1995.
- B. Hagerman, “Sentences for testing speech intelligibility in noise,” Scandinavian Audiology, vol. 11, no. 2, pp. 79–87, 1982.
- T. Lunner, R. K. Hietkamp, M. R. Andersen, K. Hopkins, and B. C. Moore, “Effect of speech material on the benefit of temporal fine structure information in speech for young normal-hearing and older hearing-impaired participants,” Ear and Hearing, vol. 33, no. 3, pp. 377–388, 2012.
- A. A. Zekveld, M. Rudner, I. S. Johnsrude, and J. Rönnberg, “The effects of working memory capacity and semantic cues on the intelligibility of speech in noise,” Journal of the Acoustical Society of America, vol. 134, no. 3, pp. 2225–2234, 2013.
- E. Sohoglu, J. E. Peelle, R. P. Carlyon, and M. H. Davis, “Predictive top-down integration of prior knowledge during speech perception,” The Journal of Neuroscience, vol. 32, no. 25, pp. 8443–8453, 2012.
- A. A. Zekveld, S. E. Kramer, M. S. M. G. Vlaming, and T. Houtgast, “Audiovisual perception of speech in noise and masked written text,” Ear and Hearing, vol. 29, no. 1, pp. 99–111, 2008.
- A. A. Zekveld, M. Rudner, I. S. Johnsrude, D. J. Heslenfeld, and J. Rönnberg, “Behavioral and fMRI evidence that cognitive ability modulates the effect of semantic context on speech intelligibility,” Brain and Language, vol. 122, no. 2, pp. 103–113, 2012.
- A. A. Zekveld, M. Rudner, I. S. Johnsrude, J. M. Festen, J. H. M. Van Beek, and J. Rönnberg, “The influence of semantically related and unrelated text cues on the intelligibility of sentences in noise,” Ear and Hearing, vol. 32, no. 6, pp. e16–e25, 2011.
- C. Signoret, I. Johnsrude, E. Classon, and M. Rudner, “Lexical access speed determines the role of working memory in pop-out,” in Proceedings of the Cognitive Hearing Science for Communication, Linköping, Sweden, June 2013.
- L. Nyberg, M. Lövdén, K. Riklund, U. Lindenberger, and L. Bäckman, “Memory aging and brain maintenance,” Trends in Cognitive Sciences, vol. 16, no. 5, pp. 292–305, 2012.
- P. B. Baltes and U. Lindenberger, “Emergence of a powerful connection between sensory and cognitive functions across the adult life span: a new window to the study of cognitive aging?” Psychology and Aging, vol. 12, no. 1, pp. 12–21, 1997.
- B. A. Schneider, M. Daneman, and M. K. Pichora-Fuller, “Listening in aging adults: from discourse comprehension to psychoacoustics,” Canadian Journal of Experimental Psychology, vol. 56, no. 3, pp. 139–152, 2002.
- F. R. Lin, K. Yaffe, J. Xia et al., “Hearing loss and cognitive decline in older adults,” JAMA Internal Medicine, vol. 173, no. 4, pp. 293–299, 2013.
- L.-G. Nilsson, L. Bäckman, K. Erngrund et al., “The betula prospective cohort study: memory, health, and aging,” Aging, Neuropsychology, and Cognition, vol. 4, no. 1, pp. 1–32, 1997.
- J. Rönnberg, H. Danielsson, M. Rudner et al., “Hearing loss is negatively related to episodic and semantic long-term memory but not to short-term memory,” Journal of Speech, Language, and Hearing Research, vol. 54, no. 2, pp. 705–726, 2011.
Copyright © 2014 Mary Rudner and Thomas Lunner. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.