Indiana University

The Speech Research Laboratory is a part of the Psychological and Brain Sciences department at Indiana University in Bloomington.
1101 E. 10th Street
Bloomington, IN 47405
812-855-1768

Archive for the 'talks' Category

08th Apr 2014

Speech Research Lab Meeting – Friday April 11 – Catherine Tamis-LeMonda

For this week’s SRL meeting we are teaming up with the Developmental Seminar again and will be welcoming Dr. Catherine Tamis-LeMonda, professor of applied psychology at NYU Steinhardt’s Center for Research on Culture, Development, and Education. Her research focuses on infant and toddler learning and development in the areas of language and communication, object play, cognition, motor skills, gender identity, emotion regulation, and social understanding, and on the long-term implications of early emerging skills for children’s developmental trajectories.

Her title and abstract are below; also included is a link to download a brand-new publication in Current Directions in Psychological Science: “Why Is Infant Language Learning Facilitated by Parental Responsiveness?”
LeMonda-2014

All are welcome and invited to attend.
Where: Psychology Conference Room #128 (behind front office)
When: Friday April 11, 2014 – 1:30pm

Temporal Structure of Language Input: Implications for Infant Word Learning

Catherine S. Tamis-LeMonda

Parent-infant interaction is the primary context in which infants learn culturally valued skills, most notably how to use the tool of language to share intentions with others. In this talk, I consider several temporal features of language input that function to support infants’ word learning. This analysis considers language input within a nested time structure: unfolding from second to second and minute to minute across the weeks, months, and years of children’s lives. Based on a second-to-second microanalysis of infant-mother language interactions, we show that language input is contiguous and contingent on infant actions: mothers increase their talk following infant communicative or exploratory behaviors, but suppress language when infants are off task. Moreover, responsive language is lexically rich and multi-modal, thereby providing infants with physical cues (e.g., gestures) to the words that are spoken. When language input is examined across the minutes and hours of infants’ home routines, it is characterized by massive fluctuations that range over 120 words, in which spurts of lexically rich input are followed by minutes of silence. In the context of these fluctuations, infants hear a relatively low, constant number of novel words each minute against a backdrop of repeated words. This novelty in the context of familiarity creates a foreground-background effect that does not overload the young learner. Moreover, the forms and functions of language shift from minute to minute as infants transition to new activity contexts. For example, mothers are more likely to use referential language, and in turn more nouns, during literacy routines than during feeding and grooming routines, but conversely more regulatory language and verbs during routines such as feeding and play. These temporal fluctuations in language content provide infants with opportunities to learn the various pragmatic functions of language. Over the months and years of early child development, parents tailor their language input in line with the changing skills of their infants. These modifications function to “up-the-ante” in ways that scaffold child learning.


24th Mar 2014

Speech Research Lab Meeting – Friday March 28 – Marc Bornstein

This week’s SRL meeting is presented jointly with the Developmental Seminar and we are pleased to welcome Dr. Marc Bornstein from the NIH. The title and abstract for the talk are provided below. All are welcome and invited to attend.

Where: Psychology Conference Room 128
When: Friday, March 28, 1:30pm

Title: A Behavioral Neuroscience of Parenting
Marc H. Bornstein
Editor, Parenting: Science and Practice
Eunice Kennedy Shriver National Institute of Child Health and Human Development

Human caregiving has evolutionary bases and consists of many highly conserved actions. Infant cries capture our attention, and we cannot resist reacting to them. When we engage infants, we unconsciously, automatically, and thoroughly change our speech – in prosody, lexicon, syntax, pragmatics, and content – and do so knowing full well that babies cannot understand what we say. Behavioral and cultural studies reveal some universal forms of parenting that guide the formulation of testable hypotheses about autonomic and central nervous system substrates of parenting. In this talk, I first discuss parenting and a general orientation toward this evolutionarily significant and individually important activity in terms of its nature, structure, and goals. Next, I review behavioral and cross-cultural research designed to uncover commonly expressed – perhaps universal – approaches to parenting infants and young children. I then turn to describe an experimental neuroscience of parenting in studies of autonomic nervous system reactivity (vagal tone and thermoregulation) and central nervous system function (using TMS, EEG/ERP, and fMRI). Because the intersection of parenting and neuroscience is still a rather new discipline, I forecast some frontiers of this budding field before reaching some general conclusions. I hope that my talk will have value and meaning for experimentalists seeking to understand process; for developmentalists seeking to understand process through time; and for clinicians seeking to understand process through time to improve life and well-being in children, parents, and families.


19th Feb 2014

Speech Research Lab Meeting – February 28 – Andrej Kral

The SRL is happy to announce a visit by our esteemed colleague Dr. Andrej Kral, who heads the Laboratory of Auditory Neuroscience and Neuroprostheses. Dr. Kral is currently Chair and Professor of Auditory Neuroscience at the Medical University of Hannover, Germany, and Adjunct Professor of Cognition and Neuroscience at the University of Texas at Dallas. His lab investigates how congenital auditory deprivation (deafness) affects the microcircuitry of the auditory system. For a very recent review (and to provide context for this talk) see: Kral, A. (2013). Auditory critical periods: A review from a system’s perspective. Neuroscience, 247, 117–133. Download here: Kral_2013_Neuroscience

In addition to his planned talk, Dr. Kral will be available for meetings with faculty, postdocs, and graduate students at the IU School of Medicine campus on Thursday (2/27) and here in Bloomington on Friday (2/28). Please email Terren Green (tgreen@indiana.edu) or me if you are interested in a one-on-one or group meeting with Dr. Kral.

The title and abstract for his talk are below; all are invited and welcome to attend.
Time/location for his talk: Psychology Conference Room 128; Friday, February 28th, 1:30pm

Title: Congenital Deafness Disrupts Top-Down Influences in the Auditory Cortex

Abstract: The available evidence shows that many basic cerebral functions are inborn. Learned, on the other hand, are representations of sensory objects, which are highly individual and depend on the subject’s experiences. Relatedly, cortico-cortical interactions and the function of the cortical column depend on experience and are shaped by sensory inputs. Periods of high susceptibility to environmental manipulation are given by higher synaptic plasticity and a naive state of neuronal networks that may easily be patterned by sensory input. Adult learning, on the other hand, is characterized by weaker synaptic plasticity but by the ability to control and modulate plasticity according to the needs of the organism through top-down interactions and modulatory systems. Congenital deafness affects development not only by delaying it, but also by desynchronizing different developmental steps. In its ultimate consequence, congenital deafness results in an auditory system that lacks the ability to supervise early sensory processing and plasticity, but also lacks the high synaptic plasticity of the juvenile brain. Critical developmental periods result. It remains an open question whether restoring juvenile plasticity by eliminating molecular brakes on plasticity will reinstall functional connectivity in the auditory cortex and bring a new therapy for complete sensory deprivation in the future. A focus on integrative aspects of critical periods will be required if training procedures are to counteract the reorganization that has taken place in the deprived sensory system and in the other affected cerebral functions.

Dr. Kral’s website with links to studies and publications: http://www.neuroprostheses.com/AK/Main.html


17th Feb 2014

Speech Research Lab Meeting – 2-21-14 – Elena Safronova

For this week’s SRL meeting we welcome Elena Safronova from the University of Barcelona. Elena is a visiting doctoral student in Isabelle Darcy’s lab; she is completing her PhD under the mentorship of Joan Carles Mora on attention control and acoustic/phonological memory and their role in L2 phonology. Please join us in welcoming Elena. The title and abstract of her talk are below; all are invited and welcome to attend this talk.

Psychology Room 128, Friday 2/21/14, 1:30pm

Don’t be so categorical!

Role of Cognitive Skills in L2 Vowel Perception

Elena Safronova

Universitat de Barcelona

esafrosa7@alumnes.ub.edu

It seems that we begin life being able to perceive the very fine acoustic-phonetic distinctions existing in the world’s languages (Kuhl & Rivera-Gaxiola, 2008). This fascinating perceptual sensitivity then undergoes a rapid reorganization due to the development of cognitive skills and the establishment of first language (L1) categories, which eventually makes us committed L1 speech perceivers (Conboy et al., 2008; Kuhl et al., 1992; Werker & Tees, 1984). When it comes to learning a second/foreign language (L2) later in life, this attunement to the acoustic-phonetic properties of L1 sounds may hinder the formation of accurate representations of L2 speech sounds, leading to foreign-accented speech production. Although the ability to establish new phonological categories for an L2 is thought to remain intact across the life span, it is closely related to the individual’s ability to detect acoustic-phonetic differences between L1 and L2 sounds (Flege, 1995), which in turn may be a source of the widely observed inter-subject variation among late L2 learners. These findings call for research on the cognitive mechanisms that may contribute to L2 speech discrimination ability.

The study I will present explores the role of acoustic memory, phonological memory, and attention control in EFL learners’ discrimination of the tense-lax contrast /i:/-/ɪ/. The participants were Spanish/Catalan EFL learners (N = 50, mean age = 19.96) with an average age of onset of English learning of 6.7 years. The results were consistent with previous research, demonstrating Spanish/Catalan EFL learners’ over-reliance on duration when perceiving the target vowel contrast (Cebrian, 2006; Cerviño-Povedano & Mora, 2011). The participants’ acoustic memory and attention control scores correlated significantly with the percentage of correctly discriminated natural and duration-neutralized stimuli, indicating that participants’ storage capacity for the acoustic information in the speech input, as well as their ability to foreground relevant and background irrelevant details, were related to their vowel discrimination. Participants’ phonological memory capacity did not have any significant effect on vowel discrimination ability. The results also showed that individuals with higher memory capacity for acoustic details in the speech input and higher attentional control over relevant/irrelevant acoustic information were significantly better at discriminating the English tense-lax /i:/-/ɪ/ vowels under both natural and duration-neutralized conditions than the lower-ability group. Regression analyses indicated that acoustic memory and attention control accounted for 10.3% and 17.4%, respectively, of the unique variance in English vowel discrimination accuracy, highlighting the important role of cognitive mechanisms in the re-weighting of phonetic cues and in more target-like L2 speech perception.
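For readers who want a concrete picture of the kind of analysis described above, here is a minimal sketch of a unique-variance regression, written in Python with simulated data. The variable names, effect sizes, and use of statsmodels are illustrative assumptions only, not the study’s actual pipeline.

```python
# Minimal sketch (not the study's actual analysis): estimating the unique
# variance in vowel-discrimination accuracy explained by each predictor.
# All data below are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 50  # roughly the sample size reported in the abstract
df = pd.DataFrame({
    "acoustic_memory": rng.normal(size=n),
    "attention_control": rng.normal(size=n),
    "phonological_memory": rng.normal(size=n),
})
# Hypothetical outcome: % correct discrimination of the /i:/-/ɪ/ contrast
df["discrimination"] = (70 + 5 * df["acoustic_memory"]
                        + 6 * df["attention_control"]
                        + rng.normal(scale=8, size=n))

predictors = ["acoustic_memory", "attention_control", "phonological_memory"]
full = smf.ols("discrimination ~ " + " + ".join(predictors), df).fit()

def unique_r2(predictor):
    """R^2 lost when one predictor is dropped = its unique variance share."""
    others = [p for p in predictors if p != predictor]
    reduced = smf.ols("discrimination ~ " + " + ".join(others), df).fit()
    return full.rsquared - reduced.rsquared

for p in predictors:
    print(f"unique R^2 for {p}: {unique_r2(p):.3f}")
```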

References

1 Cebrian, J. (2006). Experience and the use of duration in the categorization of L2 vowels. Journal of Phonetics, 34, 372–387.

2 Cerviño-Povedano, E., & Mora, J. C. (2011). Investigating Catalan learners of English over-reliance on duration: Vowel cue weighting and phonological short-term memory. In Dziubalska-Kołaczyk, K., Wrembel, M., & Kul, M. (Eds.), Achievements and Perspectives in SLA of Speech: New Sounds 2010, Volume I. Frankfurt am Main: Peter Lang, 56–64.

3 Conboy, B. T., Sommerville, J. A., & Kuhl, P. K. (2008). Cognitive control factors in speech perception at 11 months. Developmental Psychology, 44(5), 1505–1512.

4 Flege, J. E. (1995). Second language speech learning: Theory, findings, and problems. In W. Strange (Ed.), Speech Perception and Linguistic Experience: Issues in Cross-Language Research. Timonium, MD: York Press, pp. 229–273.

5 Kuhl, P. K., Williams, K. A., Lacerda, F., Stevens, K. N., & Lindblom, B. (1992). Linguistic experience alters phonetic perception in infants by 6 months of age. Science, 255, 606–608.

6 Werker, J. F., & Tees, R. C. (1984). Cross-language speech perception: Evidence for perceptual reorganization during the first year of life. Infant Behavior and Development, 7, 49–63.


13th Jan 2014

Speech Research Lab Meeting – January 17 – Dr. Mead Killion

Please join us for this week’s SRL meeting, where we welcome Dr. Mead Killion, the founder and president of Etymotic Research and adjunct professor of Speech and Hearing at Northwestern University. The title and abstract for his talk and a little background about our speaker are provided below. Dr. Killion has included a paper from The Hearing Review, Killion 2004 Myths Noise and Dmics, to provide some background; click the link to download the PDF. All are welcome and invited to attend.

Where: Psychology Conference Room 128

When: Friday, January 17th, 1:30pm

Remarks on hearing loss from music and noise exposure, SNR loss, SNR-loss tests, sniper-localization loss, and diplacusis.

Abstract: A review of theories about the causes and effects of various hearing losses, including dead patches on the cochlea causing false pitch (Yehudi Menuhin could no longer play the violin because of his diplacusis). A Magic Formula for predicting localization ability and intelligibility in noise under virtually any condition, given the results of a couple of QuickSIN tests, and Brain Rewiring – Slow (10,000 hours for a professional musician) and Fast (reconnecting with once-learned tasks: “I once could play Back Home in Indiana”) – will be discussed with regard to SNR-loss retraining.
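For readers unfamiliar with how QuickSIN results are usually turned into an SNR-loss figure, here is a small sketch of the commonly published scoring convention (one list of six sentences with five key words each, presented from +25 down to 0 dB SNR in 5 dB steps). It is only an illustration of that scoring rule, not Dr. Killion’s “Magic Formula,” which is not spelled out in the abstract.

```python
# Sketch of the standard QuickSIN scoring convention (Killion et al., 2004):
# one list = 6 sentences x 5 key words, presented at 25, 20, 15, 10, 5, 0 dB SNR.
# SNR-50 is estimated as 27.5 minus total key words correct, and SNR loss is
# that value minus the ~2 dB SNR-50 typical of normal-hearing listeners.
def quicksin_snr_loss(words_correct_per_sentence):
    if len(words_correct_per_sentence) != 6:
        raise ValueError("A QuickSIN list has 6 sentences.")
    if any(w < 0 or w > 5 for w in words_correct_per_sentence):
        raise ValueError("Each sentence has 5 key words.")
    total_correct = sum(words_correct_per_sentence)
    snr_50 = 27.5 - total_correct   # dB SNR needed for 50% correct
    return snr_50 - 2.0             # SNR loss re: normal hearing (25.5 - total)

# Example: a listener who needs a noticeably better SNR than normal
print(quicksin_snr_loss([5, 5, 4, 3, 1, 0]))  # -> 7.5 dB SNR loss
```

In practice two or more lists are scored and averaged, which is presumably what the abstract means by “a couple of QuickSIN tests.”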

About the speaker: Mead C. Killion, Ph.D., Sc.D.(hon)  


Mead Killion is the founder and Chief Technology Officer of Etymotic Research, an R&D organization whose mission includes: 1) helping people hear, 2) helping people preserve their hearing, 3) helping people enjoy hearing, and 4) improving hearing tests.

Mead has been Adjunct Professor of Audiology at Northwestern University for 30 years, and directed PhD research at City University of New York for several years.

He holds two degrees in mathematics and a Ph.D. in audiology, plus an honorary doctor of science from Wabash College.  He has published 86 papers and 19 book chapters in the fields of acoustics, psychoacoustics, transducers, and hearing aids, and has lectured in 19 foreign countries. Dr. Killion helped design several generations of hearing aid microphones, earphones and integrated circuit amplifiers.  His research has resulted in dramatic increases in the sound quality of hearing aids, earplugs, and earphones.

He is a past president of the American Auditory Society, which gave him a lifetime achievement award in 2010, and is a member of the VanderCook College of Music Board of Trustees. He has 82 U.S. patents issued, with 30 patents pending. Aside from his work, Dr. Killion has been a dedicated choir director for 30 years, is a violinist and an amateur jazz pianist, has run 32 marathons, enjoys sailing, and has recently taken up flying.



02nd Dec 2013

Speech Research Lab Meeting – December 6 – Dr. Benjamin Hornsby – cancelled

For this week’s SRL meeting we welcome Ben Hornsby, an Assistant Professor in the Department of Hearing and Speech Sciences at Vanderbilt University. Dr. Hornsby’s research focuses on 1) identifying and understanding the underlying mechanisms responsible for deficits in speech processing in adults and children with hearing loss, 2) understanding the factors responsible for the large variability in the psychosocial impact of hearing loss and benefit from rehabilitation, and 3) developing and assessing methods to minimize the perceptual and psychosocial consequences of hearing loss. His current work examines relationships between speech processing deficits, cognitive processing demands, and listening-related fatigue in adults and children with hearing loss. For some background reading, download a copy of his recent paper, Subjective Fatigue in Children with Hearing Loss: Some Preliminary Findings: Hornsby_2013. The title and abstract for his talk are below; all are invited and welcome to attend.

When: Friday, December 6th, 1:30pm

Where: Psychology Room 128 (conference room behind front office)

 

Too Tired to Listen? Fatigue in Adults and Children with Hearing Loss

Subjective reports from the literature have suggested for many years that fatigue was an important, but overlooked, consequence of hearing loss. Consider this anecdotal report from a person with hearing loss: “I crashed. This letdown wasn’t the usual worn-out feeling after a long day. It was pure exhaustion, the deepest kind of fatigue. I took a nap hoping it would refresh me, but when I woke up three hours later I was still so tired I gave up on the day…. The only cause of my fatigue I could identify was the stress of struggling to understand what those around [me] were saying…” (Copithorne, 2006). Despite the serious consequences of fatigue, its relationship to hearing loss and speech processing remains largely unexplored. This presentation describes ongoing work in our laboratory using subjective and objective measures to explore the relationship between speech processing and mental fatigue in adults and children with hearing loss.


30th Sep 2013

Speech Research Lab Meeting – Friday October 4 – Journal Club

For this week’s SRL meeting we are planning to have a journal-club-style discussion of a brand-new paper entitled “Swinging at a Cocktail Party: Voice Familiarity Aids Speech Perception in the Presence of a Competing Voice.” Click here to download; the abstract and citation are below. All are invited and welcome to attend.

Where: Psychology Conference Room 128

When: Friday, October 4, 2013, 1:30pm

Johnsrude, I. S., Mackey, A., Hakyemez, H., Alexander, E., Trang, H. P., & Carlyon, R. P. (2013). Swinging at a cocktail party: Voice familiarity aids speech perception in the presence of a competing voice. Psychological Science. Published online ahead of print.

Abstract: People often have to listen to someone speak in the presence of competing voices. Much is known about the acoustic cues used to overcome this challenge, but almost nothing is known about the utility of cues derived from experience with particular voices—cues that may be particularly important for older people and others with impaired hearing. Here, we use a version of the coordinate-response-measure procedure to show that people can exploit knowledge of a highly familiar voice (their spouse’s) not only to track it better in the presence of an interfering stranger’s voice, but also, crucially, to ignore it so as to comprehend a stranger’s voice more effectively. Although performance declines with increasing age when the target voice is novel, there is no decline when the target voice belongs to the listener’s spouse. This finding indicates that older listeners can exploit their familiarity with a speaker’s voice to mitigate the effects of sensory and cognitive decline.


23rd Sep 2013

Speech Research Lab Meeting – Friday September 27 – Justin Aronoff

For this week’s SRL meeting we are happy to welcome Dr. Justin Aronoff, a new faculty member at the University of Illinois in the Department of Speech and Hearing Sciences. His research interests are focused on understanding how information from the left and right ears is combined and on developing new techniques to improve bilateral cochlear implant users’ performance. The title and abstract are provided below, and a brief paper on a new test of spectral resolution will provide some background for his talk; download here.

Where: Psychology Room 128

When: Friday, September 27, 2013, 1:30pm

Title: Two ears are better than one: The benefits, limits, and possibilities of bilateral cochlear implants. 

Abstract: Having two ears greatly helps a listener localize the origin of a sound as well as understand speech in noisy environments such as a restaurant. For patients who have lost their hearing, getting a single cochlear implant can greatly help them understand speech in quiet environments, but they still have considerable difficulty localizing sounds and understanding speech in noisy environments. It is becoming more common for patients to be implanted with a cochlear implant in both ears (bilateral cochlear implants) in an effort to help them perform better in challenging listening tasks. This talk will discuss the benefits and limits of current bilateral cochlear implants and how the two implants can be coordinated to yield even better performance.

Visit his lab website for additional information about his research interests and publications: binauralhearinglab.shs.illinois.edu


23rd Sep 2013

PRESTO Workshop – Wednesday October 9th – 2:00pm

Our PRESTO workshop is coming up soon, and we are looking forward to hearing how everyone has used the PRESTO sentence test in their research. Come and learn how PRESTO has been incorporated into research studies involving normal-hearing participants, aging adults, hearing-impaired listeners, non-native English speakers, and pre- and post-lingually deafened cochlear implant users. Faculty, postdocs, and graduate students are encouraged to attend, particularly those from IU Speech and Hearing, Psychology, IUPUI, and Purdue.

The PRESTO workshop will be held on Wednesday afternoon, October 9th, following the conclusion of the Aging and Speech Communication meeting. The meeting will be held in the Psychology Building, just a few steps away from the Indiana Memorial Union where the ASC meeting is held.

Where: Psychology Building, Room 137C

1101 E. 10th Street,
Bloomington, IN 47405

When: Wednesday, October 9th, 2:00-5:30pm. 

For participants staying at the Indiana Memorial Union, parking is provided in the hotel parking lot. For other attendees, parking will be provided in the parking garage directly behind the Psychology Building, adjacent to the Kelley School of Business; it is located at the corner of North Fee Lane and E. 11th Street. Park, bring your ticket, and it will be validated. Please email us if you need additional parking instructions. IUPUI attendees can use their “A” parking permits in all IU garages, and “B” permits can be used to park in the “C” lots.

For those presenting (15-20 minute presentation, 10-15 minutes for discussion), please consider the following questions:

  1. How did you incorporate PRESTO into your research?
  2. What are your results with PRESTO in your chosen population(s)?
  3. Have these materials been helpful/useful, have you uncovered anything new?
  4. Have you experienced any challenges or problems using these materials?
  5. How do you plan to use the PRESTO in your future work?

Snacks and coffee will be provided, and we look forward to a lively discussion.

We’re looking forward to seeing everyone at the PRESTO workshop!

UPDATE: Schedule

When:       Wednesday, 2:00pm-5:30pm
Following the conclusion of the ASC meeting.

Where:      Psychology Building, Room 137-C

Steps away from the IMU.

Coffee and snacks will be provided.

 

Schedule:

2:00            Introduction and Welcome – David Pisoni

2:15-2:45   Pavel Zahorik – University of Louisville

2:45-3:15   Dan Fogerty – University of South Carolina

3:15-3:45   Nirmal Srinivasan – NCRAR (Portland) & UT-Dallas

3:45-4:15   Katie Faulkner – Indiana University

4:15-4:45   Terrin Tamati – Indiana University

4:45-5:15   Summary of other work by Team PRESTO (Bay Pines Veterans Affairs Healthcare System–University of South Florida & University of Washington)

5:15-5:30   Wrap up – Future Directions

 

7:30pm     Dinner at Samira Restaurant

                  100 W 6th St, Bloomington, IN 47404


19th Sep 2013

Speech Research Lab Meeting – Friday September 20 – Eriko Atagi

For this week’s SRL meeting, Eriko Atagi, doctoral student in Speech and Hearing, will be giving a practice talk for an upcoming meeting. Her title and abstract are below; all are invited and welcome to attend, and there will be plenty of time to give feedback on the content and suggestions for improvement.
Where: Psychology 128
When: Friday, September 20, 2013, 1:30pm
Title: Auditory free classification of nonnative speech
Abstract: Recent research on speech variability has found that listeners encode and integrate indexical features of speech (e.g., talker’s gender, age, dialect) with the linguistic information in speech. Furthermore, through repeated encoding, listeners build categories of indexical features (e.g., male/female, child/adult), similar to the categories of linguistic variables (e.g., phonemes and semantic classes). For English, a language that now has more nonnative speakers than native speakers across the world, foreign accents are indexical features that introduce a significant amount of between-talker variability. The auditory free classification task—a task in which listeners freely group talkers based on audio samples—has been a useful tool for examining listeners’ perceptual representations of regional dialects, and is employed in the current studies to examine native and nonnative listeners’ representations of nonnative speech. In this talk, I present two studies that address the following questions. (1) What are the salient features of nonnative speech for native listeners, and how stable is their perception across different stimulus sets? (2) Does listeners’ perception of nonnative speech change depending on whether they are asked about the general similarity of talkers or about the talkers’ native languages? (3) How does nonnative listeners’ perception of nonnative speech compare to that of native listeners? Results indicate that listeners—both native and nonnative—find nonnative talkers’ degrees of foreign accent to be a central organizational principle. Other listener and stimulus factors, however, also play important roles in further shaping listeners’ perception of nonnative speech. Specifically, I will discuss listeners’ attention to the talkers’ native language, listeners’ prior linguistic experience, and variability in the stimulus set as relevant factors when perceiving nonnative speech.
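As a rough illustration of how free classification responses are often summarized, the sketch below builds a talker-by-talker co-occurrence matrix from listeners’ groupings and applies hierarchical clustering. The talker labels and groupings are invented, and the choice of SciPy’s average-linkage clustering is an assumption made for illustration, not necessarily the analysis used in these studies.

```python
# Hypothetical sketch: summarizing auditory free classification data.
# Each listener's response is a partition of talkers into self-made groups;
# talkers that are grouped together often are treated as perceptually similar.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

talkers = ["KOR1", "KOR2", "SPA1", "SPA2", "MAN1", "MAN2"]  # invented labels
# Invented groupings: each inner list is one listener's set of groups
listener_groups = [
    [["KOR1", "KOR2"], ["SPA1", "SPA2", "MAN1"], ["MAN2"]],
    [["KOR1", "KOR2", "MAN1"], ["SPA1", "SPA2"], ["MAN2"]],
    [["KOR1", "KOR2"], ["SPA1", "SPA2"], ["MAN1", "MAN2"]],
]

idx = {t: i for i, t in enumerate(talkers)}
co = np.zeros((len(talkers), len(talkers)))
for groups in listener_groups:
    for group in groups:
        for a in group:
            for b in group:
                if a != b:
                    co[idx[a], idx[b]] += 1

# Convert co-occurrence counts to distances and cluster
dist = 1.0 - co / len(listener_groups)
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")
print(dict(zip(talkers, fcluster(Z, t=2, criterion="maxclust"))))
```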
