Indiana University

The Speech Research Laboratory is a part of the Psychological and Brain Sciences department at Indiana University in Bloomington.
1101 E. 10th Street
Bloomington, IN 47405
812-855-1768

Speech Research Lab Meeting – Friday April 11 – Catherine Tamis-LeMonda

April 8th, 2014

For this week’s SRL meeting we are teaming up with the Developmental Seminar again and will be welcoming Dr. Catherine Tamis-LeMonda, professor of applied psychology at NYU Steinhardt’s Center for Research on Culture, Development, and Education. Her research focuses on infant and toddler learning and development in the areas of language and communication, object play, cognition, motor skills, gender identity, emotion regulation, and social understanding, and on the long-term implications of early emerging skills for children’s developmental trajectories.

Her title and abstract are below; also included is a link to download a brand-new publication in Current Directions in Psychological Science: “Why Is Infant Language Learning Facilitated by Parental Responsiveness?”
LeMonda-2014

All are welcome and invited to attend.
Where: Psychology Conference Room #128 (behind front office)
When: Friday April 11, 2014 – 1:30pm

Temporal Structure of Language Input: Implications for Infant Word Learning

Catherine S. Tamis-LeMonda

Parent-infant interaction is the primary context in which infants learn culturally valued skills, most notably how to use the tool of language to share intentions with others. In this talk, I consider several temporal features of language input that function to support infants’ word learning. This analysis considers language input within a nested time structure: unfolding from second-to-second and minute-to-minute across the weeks, months, and years of children’s lives. Based on a second-to-second microanalysis of infant-mother language interactions, we show that language input is contiguous and contingent on infant actions: mothers increase their talk following infant communicative or exploratory behaviors, but suppress language when infants are off task. Moreover, responsive language is lexically rich and multi-modal, thereby providing infants with physical cues (e.g., gestures) to the words that are spoken. When language input is examined across the minutes and hours of infants’ home routines, it is characterized by massive fluctuations that range over 120 words, in which spurts of lexically rich input are followed by minutes of silence. In the context of these fluctuations, infants hear a relatively low, constant number of novel words each minute against a backdrop of repeated words. This novelty in the context of familiarity creates a foreground-background effect that does not overload the young learner. Moreover, the forms and functions of language shift from minute to minute as infants transition to new activity contexts. For example, mothers are more likely to use referential language, and in turn more nouns, during literacy routines than during feeding and grooming routines, but conversely more regulatory language and verbs during routines such as feeding and play. These temporal fluctuations in language content provide infants with opportunities to learn the various pragmatic functions of language.
Over the months and years of early child development, parents tailor their language input in line with the changing skills of their infants. These modifications function to “up-the-ante” in ways that scaffold child learning.


Speech Research Lab Meeting – Friday March 28 – Marc Bornstein

March 24th, 2014

This week’s SRL meeting is presented jointly with the Developmental Seminar and we are pleased to welcome Dr. Marc Bornstein from the NIH. The title and abstract for the talk are provided below. All are welcome and invited to attend.

Where: Psychology Conference Room 128
When: Friday, March 28, 1:30pm

Title: A Behavioral Neuroscience of Parenting
Marc H. Bornstein
Editor, Parenting: Science and Practice
Eunice Kennedy Shriver National Institute of Child Health and Human Development

Human caregiving has evolutionary bases and is constituted of many highly conserved actions. Infant cries capture our attention, and we cannot resist reacting to them. When we engage infants, we unconsciously, automatically, and thoroughgoingly change our speech – in prosody, lexicon, syntax, pragmatics, and content – and do so knowing full well babies cannot understand what we say. Behavioral and cultural studies reveal some universal forms of parenting that guide formulating testable hypotheses about autonomic and central nervous system substrates of parenting. In this talk, I first discuss parenting and a general orientation toward this evolutionarily significant and individually important activity in terms of its nature, structure, and goals. Next, I review behavioral and cross-cultural research designed to uncover commonly expressed – perhaps universal – approaches to parenting infants and young children. I then turn to describe an experimental neuroscience of parenting in studies of autonomic nervous system reactivity (in vagal tone and thermoregulation) and central nervous system function (using TMS, EEG/ERP, and fMRI). Because the intersection of parenting and neuroscience is still a rather new discipline, I forecast some frontiers of this budding field before reaching some general conclusions. I hope that my talk will have value and meaning for experimentalists, to understand process; for developmentalists, to understand process through time; and for clinicians, to understand process through time to improve life and well-being in children, parents, and families.

Speech Research Lab Meeting – March 7 – Irina Castellanos

March 3rd, 2014

For this week’s SRL meeting, postdoctoral research fellow Dr. Irina Castellanos will present a WIPI-style talk to discuss some preliminary data from her current research project.

The title and brief abstract are below; all are invited and welcome to attend.

Where: Psychology room 128

When: Friday, March 7, 1:30pm

Title: Spoken word learning in infants with hearing loss: The role of parent interactions.

Abstract: Language serves as a foundation for social and cognitive development. As compared to normal-hearing (NH) peers, many deaf infants with hearing aids (HAs) and cochlear implants (CIs) display speech and language delays throughout childhood. Predictors of speech and language performance include, for example, age at implantation (Houston & Miyamoto, 2010), amount of residual hearing (El-Hakim et al., 2001), and communication mode (Kirk, Miyamoto, Ying, Perdew, & Zuganelis, 2000). After accounting for these factors, considerable individual differences remain. In the present study, we investigated the role of parents’ scaffolding of attention on hearing-impaired infants’ novel word learning. Previous research indicates that NH infants’ early word learning is facilitated when parents’ labeling of novel objects is synchronized with infants’ self-directed looking at and touching of the novel objects (Yu & Smith, 2012). For hearing-impaired children, research indicates that parents use more directive parenting styles than parents of NH children (Meadow-Orlans, 1997). We predict that parent-infant coordination of looking and touching behavior may be influenced by the infants’ hearing status and that differences may be predictive of infants’ word learning.

Speech Research Lab Meeting – February 28 – Andrej Kral

February 19th, 2014

The SRL is happy to announce a visit by our esteemed colleague Dr. Andrej Kral, who heads the Laboratory of Auditory Neuroscience and Neuroprostheses. Dr. Kral is currently the Chair and Professor of Auditory Neuroscience at the Medical University of Hannover, Germany, and Adjunct Professor of Cognition and Neuroscience at the University of Texas at Dallas. His lab investigates how congenital auditory deprivation (deafness) affects the microcircuitry of the auditory system. For a very recent review (and to provide context for this talk) see: Kral, A. (2013). Auditory critical periods: A review from a system’s perspective. Neuroscience, 247, 117-133. Download here: Kral_2013_Neuroscience

In addition to Dr. Kral’s planned talk, he will be available for meetings with faculty, postdocs, and graduate students at the IU School of Medicine campus on Thursday (2/27) and here in Bloomington on Friday (2/28). Please email Terren Green (tgreen@indiana.edu) or me if you are interested in a one-on-one or group meeting with Dr. Kral.

The title and abstract for his talk are below; all are invited and welcome to attend.
Time/location for his talk: Psychology Conference Room 128; Friday, February 28th, 1:30pm

Title: Congenital Deafness Disrupts Top-Down Influences in the Auditory Cortex

Abstract: The available evidence shows that many basic cerebral functions are inborn. Learned, on the other hand, are representations of sensory objects that are highly individual and depend on the subject’s experiences. Relatedly, cortico-cortical interactions and the function of the cortical column depend on experience and are shaped by sensory inputs. Periods of high susceptibility to environmental manipulations are given by higher synaptic plasticity and a naive state of neuronal networks that may easily be patterned by sensory input. Adult learning, on the other hand, is characterized by weaker synaptic plasticity but the ability to control and modulate plasticity according to the needs of the organism through top-down interactions and modulatory systems. Congenital deafness affects development not only by delaying it, but also by desynchronizing different developmental steps. In its ultimate consequence, congenital deafness results in an auditory system that lacks the ability to supervise early sensory processing and plasticity, but also lacks the high synaptic plasticity of the juvenile brain. Critical developmental periods result. It remains an open question whether restoring juvenile plasticity by eliminating molecular brakes on plasticity will reinstall functional connectivity in the auditory cortex and bring a new therapy for complete sensory deprivation in the future. A focus on integrative aspects of critical periods will be required to counteract, through training procedures, the reorganization that has taken place in the deprived sensory system and the other affected cerebral functions.

Dr. Kral’s website with links to studies and publications: http://www.neuroprostheses.com/AK/Main.html

Speech Research Lab Meeting – 2-21-14 – Elena Safronova

February 17th, 2014

For this week’s SRL meeting we welcome Elena Safronova from the University of Barcelona. Elena is a visiting doctoral student in Isabelle Darcy’s lab; she is completing her PhD under the mentorship of Joan Carles Mora on attention control and acoustic/phonological memory and their role in L2 phonology. Please join us in welcoming Elena. The title and abstract of her talk are below; all are invited and welcome to attend this talk.

Psychology Room 128, Friday 2/21/14, 1:30pm

Don’t be so categorical!

Role of Cognitive Skills in L2 Vowel Perception

Elena Safronova

Universitat de Barcelona

esafrosa7@alumnes.ub.edu

It seems that we begin life being able to perceive very fine acoustic-phonetic distinctions existing in the world’s languages (Kuhl & Rivera-Gaxiola, 2008). Then this fascinating perceptual sensitivity undergoes a rapid reorganization due to the development of cognitive skills and the establishment of first language (L1) categories, which eventually makes us committed L1 speech perceivers (Conboy et al., 2008; Kuhl et al., 1992; Werker & Tees, 1984). When it comes to learning a second/foreign language (L2) later in life, the attunement to the acoustic-phonetic properties of L1 sounds may hinder the formation of accurate representations of L2 speech sounds, leading to the presence of foreign accent in speech production. Although the ability to establish new phonological categories for an L2 is thought to remain intact across the life span, it is closely related to an individual’s ability to detect acoustic-phonetic differences between L1 and L2 sounds (Flege, 1995), which in turn may be a source of the widely observed inter-subject variation among late L2 learners. These findings call for research on the cognitive mechanisms that may contribute to L2 speech discrimination ability.

The study I will present explores the role of acoustic memory, phonological memory and attention control in EFL learners’ discrimination of tense-lax /iː/-/ɪ/. The participants of the study were Spanish/Catalan (N=50, mean age = 19.96) EFL learners with an average age of onset of English learning of 6.7 years. The results were consistent with previous research, demonstrating Spanish/Catalan EFL learners’ over-reliance on duration when perceiving the target vowel contrast (Cebrian, 2006; Cerviño-Povedano & Mora, 2011). The participants’ acoustic memory and attention control scores significantly correlated with the percentage of correctly discriminated natural and duration-neutralized stimuli, indicating that participants’ storage capacity for the acoustic information in the speech input, as well as the ability to foreground/background relevant/irrelevant details, were related to their vowel discrimination. Participants’ phonological memory capacity did not have any significant effect on vowel discrimination ability. The results show that individuals with higher memory capacity for acoustic details in the speech input and higher attentional control over relevant/irrelevant acoustic information are significantly better at discriminating English tense-lax /iː/-/ɪ/ vowels under both natural and duration-neutralized conditions than the lower-ability group. Regression analyses indicated that acoustic memory and attention control accounted for 10.3% and 17.4%, respectively, of the unique variance in English vowel discrimination accuracy, thus highlighting the important role of cognitive mechanisms in the re-weighting of phonetic cues and more target-like L2 speech perception.

References

1. Cebrian, J. (2006). Experience and the use of duration in the categorization of L2 vowels. Journal of Phonetics, 34, 372-387.

2. Cerviño-Povedano, E., & Mora, J. C. (2011). Investigating Catalan learners of English over-reliance on duration: Vowel cue weighting and phonological short-term memory. In K. Dziubalska-Kołaczyk, M. Wrembel, & M. Kul (Eds.), Achievements and perspectives in SLA of speech: New Sounds 2010, Volume I (pp. 56-64). Frankfurt am Main: Peter Lang.

3. Conboy, B. T., Sommerville, J. A., & Kuhl, P. K. (2008). Cognitive control factors in speech perception at 11 months. Developmental Psychology, 44(5), 1505-1512.

4. Flege, J. E. (1995). Second language speech learning: Theory, findings, and problems. In W. Strange (Ed.), Speech Perception and Linguistic Experience: Issues in Cross-linguistic Research (pp. 229-273). Timonium, MD: York Press.

5. Kuhl, P. K., Williams, K. A., Lacerda, F., Stevens, K. N., & Lindblom, B. (1992). Linguistic experience alters phonetic perception in infants by 6 months of age. Science, 255, 606-608.

6. Werker, J., & Tees, R. (1984). Cross-language speech perception: Evidence for perceptual reorganization during the first year of life. Infant Behaviour and Development, 7, 49-63.

Speech Research Lab Meeting – 2-14-14 – Journal Club

February 13th, 2014

This week’s SRL meeting will be a journal club led by postdocs Angela AuBuchon and Jessica Montag. The paper we will discuss is a new Ear and Hearing e-publication ahead of print, “The Association Between Visual, Nonverbal Cognitive Abilities and Speech, Phonological Processing, Vocabulary and Reading Outcomes in Children With Cochlear Implants.” Click here to download. All are invited and welcome to attend.

Time/place: Psychology room 128, 1:30pm, Friday 2/14/14

Title: The Association Between Visual, Nonverbal Cognitive Abilities and Speech, Phonological Processing, Vocabulary and Reading Outcomes in Children With Cochlear Implants

Authors: Lindsey Edwards and Sara Anderson

Abstract: Objective: The aim of this study was to explore the possibility that specific nonverbal, visual cognitive abilities may be associated with outcomes after pediatric cochlear implantation. The study therefore examined the relationship between visual sequential memory span and visual sequential reasoning ability, and a range of speech, phonological processing, vocabulary knowledge, and reading outcomes in children with cochlear implants. Design: A cross-sectional, correlational design was used. Sixty-six children aged 5 to 12 years completed tests of visual memory span and visual sequential reasoning, along with tests of speech intelligibility, phonological processing, vocabulary knowledge, and word reading ability (the outcome variables). Auditory memory span was also assessed, and its relationship with the other variables examined. Results: Significant, positive correlations were found between the visual memory and reasoning tests, and each of the outcome variables. A series of regression analyses then revealed that for all the outcome variables, after variance attributable to the age at implantation was accounted for, visual memory span and visual sequential reasoning ability together accounted for significantly more variance (up to 25%) in each outcome measure. Conclusions: These findings have both clinical and theoretical implications. Clinically, the findings may help improve the identification of children at risk of poor progress after implantation earlier than has been possible to date, as the nonverbal tests can be administered to children as young as 2 years of age. The results may also contribute to the identification of children with specific learning or language difficulties as well as improve our ability to develop intervention strategies for individual children based on their specific cognitive processing strengths or difficulties. Theoretically, these results contribute to the growing body of knowledge about learning and development in deaf children with cochlear implants.

Speech Research Lab Meeting – January 31 – Jon Willits

January 28th, 2014

For this week’s SRL meeting, Dr. Jon Willits, a postdoc in Mike Jones’ lab, will present a WIPI-style talk. His title and abstract are provided below; all are welcome and invited to attend.

When: Friday, January 31, 2014, 1:30pm
Where: Psychology room #128

Title: Semantic Models and their Applications to Vocabulary Development in Children with Cochlear Implants and Normal Hearing Children

Abstract: I will give a talk that has two parts. In the first half I will discuss semantic memory models and some of their applications to vocabulary development in typically developing children. I will show that these models can help predict interesting facts about language development, such as which words and concepts are easy and difficult to learn. The models also make suggestions about some underlying facts about the language learning process, such as the importance of different kinds of information. In the second half I will discuss some preliminary work on applying these models to language development with children with cochlear implants, such as modeling behavior on verbal fluency tasks.

Speech Research Lab Meeting – January 24 – Suyog Chandramouli

January 21st, 2014

For this week’s SRL meeting a graduate student in our lab, Suyog Chandramouli, will be presenting a WIPI (Work-in-Progress) style presentation to discuss some new ideas about his project. Below is a brief synopsis and two readings to provide background for the discussion.

Download the two articles by clicking the following links:

WixtedRohrer1993JEPLMC- Proactive Interference and Dynamics of Free Recall

Wickens – Some Characteristics of Word Encoding

All are welcome to join.
Where: Psychology conference room #128
When: Friday, January 24, 1:30pm

“In this WIPI, I will talk about a release-from-proactive-interference experiment that is being designed in the lab to study storage and retrieval of speech information in memory.

The aim of the study is to obtain rankings of the relative importance of parameters of speech stimuli, such as talker gender and accent, in aiding episodic memory performance, and to study how these rankings vary under conditions such as clear/processed speech, embedding in noise, etc.

Use of the release-from-proactive-interference paradigm was pioneered by D. D. Wickens in the 1960s and 1970s to study coding processes in human memory. The design closely follows modifications of the PI design by Gardiner (1972) and by John Wixted and Doug Rohrer (1993).

Studies may be conducted in the future with cochlear implant patients at different levels of expertise to glean the parameters salient to them in different listening conditions. With this knowledge, it might be possible to develop programs that would focus on training novice cochlear implant users to automatically pay attention to features important for encoding context so that they can better remember and recall information.

Attached is a short overview by Wickens of his research, and the paper by Wixted and Rohrer that will be discussed in the presentation.

Speech Research Lab Meeting – January 17 – Dr. Mead Killion

January 13th, 2014

Please join us for this week’s SRL meeting, where we welcome Dr. Mead Killion, the founder and president of Etymotic Research and adjunct professor of Speech and Hearing at Northwestern University. The title and abstract for his talk and a little background about our speaker are provided below. Dr. Killion has included a paper from the Hearing Review (Killion 2004 Myths Noise and Dmics) to provide some background; click the link to download the PDF. All are welcome and invited to attend.

Where: Psychology Conference Room 128

When: Friday, January 17th, 1:30pm

Remarks on hearing loss from music and noise exposure, SNR loss, SNR-loss tests, sniper-localization loss, and diplacusis.

Abstract: Review of theories about causes and effects of various hearing losses, including dead patches on the cochlea causing false pitch (Yehudi Menuhin could no longer play the violin because of his diplacusis). A Magic Formula for predicting localization ability and intelligibility in noise under virtually any condition, given the results of a couple of QuickSIN tests, and Brain Rewiring — Slow (10,000 hours for a professional musician) and Fast (reconnecting with once-learned tasks: “I once could play Back Home in Indiana”) — will be discussed with regard to SNR-loss retraining.

About the speaker: Mead C. Killion, Ph.D., Sc.D. (hon)


Mead Killion is the founder and Chief Technology Officer of Etymotic Research, an R&D organization whose mission includes: 1) helping people hear, 2) helping people preserve their hearing, 3) helping people enjoy hearing, and 4) improving hearing tests.

Mead has been Adjunct Professor of Audiology at Northwestern University for 30 years, and directed PhD research at City University of New York for several years.

He holds two degrees in mathematics and a Ph.D. in audiology, plus an honorary doctor of science from Wabash College.  He has published 86 papers and 19 book chapters in the fields of acoustics, psychoacoustics, transducers, and hearing aids, and has lectured in 19 foreign countries. Dr. Killion helped design several generations of hearing aid microphones, earphones and integrated circuit amplifiers.  His research has resulted in dramatic increases in the sound quality of hearing aids, earplugs, and earphones.

He is a past president of the American Auditory Society, which gave him a lifetime achievement award in 2010, and is a member of the Vandercook College of Music Board of Trustees. He has 82 U.S. patents issued with 30 patents pending. Aside from his work, Dr. Killion has been a dedicated choir director for 30 years, a violinist, an amateur jazz pianist, has run 32 marathons, enjoys sailing, and has recently taken up flying.


Speech Research Lab Meeting – December 6 – Dr. Benjamin Hornsby – cancelled

December 2nd, 2013

For this week’s SRL meeting we welcome Ben Hornsby, an Assistant Professor in the Department of Hearing and Speech Science at Vanderbilt University. Dr. Hornsby’s research focuses on 1) identifying and understanding the underlying mechanisms responsible for deficits in speech processing in adults and children with hearing loss, 2) understanding the factors responsible for the large variability in the psychosocial impact of hearing loss and benefit from rehabilitation, and 3) developing and assessing methods to minimize the perceptual and psychosocial consequences of hearing loss. His current work examines relationships between speech processing deficits, cognitive processing demands, and listening-related fatigue in adults and children with hearing loss. For some background reading, download a copy of his recent paper, Subjective Fatigue in Children with Hearing Loss: Some Preliminary Findings: Hornsby_2013. The title and abstract for his talk are below; all are invited and welcome to attend.

When: Friday, December 6th, 1:30pm

Where: Psychology Room 128 (conference room behind front office)

Too Tired to Listen? Fatigue in Adults and Children with Hearing Loss

Subjective reports from the literature have suggested for many years that fatigue was an important, but overlooked, consequence of hearing loss. Consider this anecdotal report from a person with hearing loss: “I crashed. This letdown wasn’t the usual worn-out feeling after a long day. It was pure exhaustion, the deepest kind of fatigue. I took a nap hoping it would refresh me, but when I woke up three hours later I was still so tired I gave up on the day…. The only cause of my fatigue I could identify was the stress of struggling to understand what those around [me] were saying…” (Copithorne, 2006). Despite the serious consequences of fatigue, its relationship to hearing loss and speech processing remains largely unexplored. This presentation describes ongoing work in our laboratory using subjective and objective measures to explore the relationship between speech processing and mental fatigue in adults and children with hearing loss.