<!DOCTYPE html>
<html lang="en">
<head>
<!-- Google tag (gtag.js) -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-93DR37RSFK"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'G-93DR37RSFK');
</script>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<title>Research</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="Description" lang="en" content="open source html and css template">
<meta name="author" content="mlp design">
<meta name="robots" content="index, follow">
<link rel="stylesheet" href="styles.css">
<link rel="stylesheet" href="slidy.css">
</head>
<body>
<div id="menu">
<nav>
<input type="checkbox" id="show-menu" role="button">
<label for="show-menu" class="open"><span class="fa fa-bars"></span></label>
<label for="show-menu" class="close"><span class="fa fa-times"></span></label>
<ul id="topnav">
<li><a href="index.html">Home</a></li>
<li><a href="research.html">Research</a></li>
<li><a href="teaching.html">Teaching</a></li>
<li><a href="cv.html">CV</a></li>
<li><a href="contact.html">Contact</a></li>
<li>
<ul>
<li><a href="https://github.com/mkachlicka"><i class="fa-brands fa-github"></i></a></li>
<li><a href="https://scholar.google.com/citations?user=RaK9uLgAAAAJ&hl=en"><i class="fa-brands fa-google"></i></a></li>
<li><a href="https://www.linkedin.com/in/mkachlicka/"><i class="fa-brands fa-linkedin"></i></a></li>
<li><a href="https://twitter.com/mkachlicka"><i class="fa-brands fa-twitter"></i></a></li>
<li><a href="https://bsky.app/profile/mkachlicka.bsky.social"><i class="fa-brands fa-bluesky"></i></a></li>
</ul>
</li>
</ul>
</nav>
</div>
<div id="container">
<div id="pageheader">
<h1>Research</h1>
</div>
<div class="section">
<div class="pageitem">
<div id="slider">
<figure>
<img src="images/1.png" alt>
<img src="images/9.png" alt>
<img src="images/11.png" alt>
<img src="images/1.png" alt> <!-- note 1st image is also the last -->
</figure>
</div>
</div>
</div>
<div class="section">
<h1>Current projects</h1>
<div class="pageitem">
<h2>Speech cue weighting and salience (SpeechCues)</h2>
<p>Language structure is conveyed in speech by a complex set of acoustic cues, including changes in duration, amplitude, and frequency. Individuals who are better able to detect these cues may absorb the structure of a new language more rapidly. However, detecting the acoustic cues of a second language can be difficult because languages differ in how sound patterns convey linguistic structure. In particular, second language speakers may have difficulty directing attention towards the most relevant cues. Individuals who are better able to direct their attention to individual acoustic dimensions may be better able to focus on the cues that provide the most reliable information in an L2. Using electroencephalography (EEG), we are testing the role of attention in second language learning: whether second language learners can be trained to attend to the most informative cues and, if so, whether this training changes how they attend to sound cues. <a href="https://sites.google.com/view/audioneurolab/current-projects/speech-cues?authuser=0">This project</a> will lead to a better understanding of why some people struggle more than others to learn a second language.</p>
</div>
<div class="pageitem">
<h2>Representations of natural sound categories (EnviSounds)</h2>
<p>By necessity, the similarity between sounds is defined with respect to the dimensions used in making similarity judgments. Sounds might be similar along one dimension (e.g., pitch) but dissimilar along another (e.g., amplitude). However, it is difficult to disentangle their contributions, as various properties often co-occur; for example, sounds with similar pitch might also have similar amplitude. Acoustic differences can also be overridden entirely by the sounds’ meaning or common context. Thus, it is vital to assess the degree to which different object properties determine perceived similarity. This project explores whether, and if so to what degree, acoustic versus semantic information about environmental sounds contributes to similarity judgments. Building on our previous work measuring similarity between sounds, we want to extend this framework by investigating the representations of natural sound categories and the dimensions underlying those representations. We will test whether, and to what degree, similarity judgments depend on the dimension being judged. In the real world, many dimensions jointly, and to varying degrees, drive our perception of our surroundings. Comparing the resulting representational dissimilarity matrices (RDMs) would allow us to discover to what degree different sound properties, both uniquely and in common, determine perceived similarity.</p>
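<p>As a minimal illustrative sketch (not code or data from the project itself), an RDM comparison of this kind could be set up as follows, assuming hypothetical acoustic and semantic feature matrices for a set of environmental sounds:</p>
<pre><code>
# Illustrative sketch only: hypothetical features, not project data.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_sounds = 20

# Hypothetical feature matrices: rows are sounds, columns are features.
acoustic_features = rng.normal(size=(n_sounds, 8))   # e.g., pitch, amplitude, spectral measures
semantic_features = rng.normal(size=(n_sounds, 5))   # e.g., ratings of meaning or context

# Representational dissimilarity matrices: pairwise distances between sounds.
acoustic_rdm = squareform(pdist(acoustic_features, metric="euclidean"))
semantic_rdm = squareform(pdist(semantic_features, metric="euclidean"))

# Correlate the upper triangles (excluding the diagonal) of the two RDMs.
triu = np.triu_indices(n_sounds, k=1)
rho, p = spearmanr(acoustic_rdm[triu], semantic_rdm[triu])
print(f"Acoustic vs semantic RDM correlation: rho = {rho:.2f}, p = {p:.3f}")
</code></pre>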
</div>
</div>
<div class="section">
<h1>Publications and presentations</h1>
<div class="pageitem">
<h2>Publications</h2>
<p> <b>Kachlicka, M.</b>, Symons, A. E., Saito, K., Dick, F., & Tierney, A. (2024).
<a href="https://doi.org/10.1162/imag_a_00297" target="_blank"> 'Tone language experience enhances dimension-selective attention and subcortical encoding but not cortical entrainment to pitch'</a>,
<em>Imaging Neuroscience</em>, 2, 1–19. </p>
<p> <b>Kachlicka, M.</b>, & Tierney, A. (2024).
<a href="https://doi.org/10.1016/j.cortex.2024.06.016" target="_blank"> 'Voice actors show enhanced neural tracking of pitch, prosody perception, and music perception'</a>,
<em>Cortex</em>, 178, 213–222. </p>
<p> <b>Kachlicka, M.</b>, Patel, A. D., Liu, F., & Tierney, A. (2024).
<a href="https://doi.org/10.1016/j.cognition.2024.105757" target="_blank"> 'Weighting of cues to categorization of song versus speech in tone language and non-tone-language speakers'</a>,
<em>Cognition</em>, 246, 105757. </p>
<p> Saito, K., <b>Kachlicka, M.</b>, Suzukida, Y., Mora-Plaza, I., Ruan, Y., & Tierney, A. (2024).
<a href="https://doi.org/10.1037/xhp0001166" target="_blank"> 'Auditory processing as perceptual, cognitive, and motoric abilities underlying successful second language acquisition: Interaction model'</a>, <em>Journal of Experimental Psychology: Human Perception and Performance</em>, 50(1), 119–138. </p>
<p> Saito, K., Hanzawa, K., Petrova, K., <b>Kachlicka, M.</b>, Suzukida, Y., & Tierney, A. (2022).
<a href="https://doi.org/10.1111/lang.12503" target="_blank"> 'Incidental and multimodal high variability phonetic training: Potential, limits, and future directions'</a>,
<em>Language Learning</em>, 72(4), 1049–1091. </p>
<p> Saito, K., Petrova, K., Suzukida, Y., <b>Kachlicka, M.</b>, & Tierney, A. (2022).
<a href="https://psycnet.apa.org/doi/10.1037/xhp0001042" target="_blank"> 'Training auditory processing promotes second language speech acquisition'</a>,
<em>Journal of Experimental Psychology: Human Perception and Performance</em>, 48(12), 1410–1426. </p>
<p> Saito, K., <b>Kachlicka, M.</b>, Suzukida, Y., Petrova, K., Lee, B. J., & Tierney, A. (2022).
<a href="https://doi.org/10.1016/j.cognition.2022.105236" target="_blank"> 'Auditory precision hypothesis-L2: Dimension-specific relationships between auditory processing and second language segmental learning'</a>,
<em>Cognition</em>, 229, 105236. </p>
<p> <b>Kachlicka, M.</b>, Laffere, A., Dick, F., & Tierney, A. (2022).
<a href="https://doi.org/10.1016/j.neuroimage.2022.119024" target="_blank"> 'Slow phase-locked modulations support selective attention to sound'</a>,
<em>NeuroImage</em>, 252, 119024. </p>
<p> Saito, K., Macmillan, K., <b>Kachlicka, M.</b>, Kunihara, T., & Minematsu, N. (2022).
<a href="https://doi.org/10.1017/S0272263122000080" target="_blank"> 'Automated assessment of second language comprehensibility: Review, training, validation, and generalization studies'</a>,
<em>Studies in Second Language Acquisition</em>, 45, 234–263. </p>
<p> Saito, K., Macmillan, K., Kroeger, S., Magne, V., Takizawa, K., <b>Kachlicka, M.</b>, & Tierney, A. (2022).
<a href="https://doi.org/10.1017/S0142716422000029" target="_blank"> 'Roles of domain-general auditory processing in spoken second language vocabulary attainment in adulthood'</a>,
<em>Applied Psycholinguistics</em>, 43, 581–606. </p>
<p> Saito, K., Sun, H., <b>Kachlicka, M.</b>, Alayo, J., Nakata, T., & Tierney, A. (2022).
<a href="https://doi.org/10.1017/S0272263120000467" target="_blank"> 'Domain-general auditory processing explains multiple dimensions of L2 acquisition in adulthood'</a>,
<em>Studies in Second Language Acquisition</em>, 44(1), 57–86.
<a href="https://www.cambridge.org/core/journals/studies-in-second-language-acquisition/albert-valdman-award" target="_blank"> '(Received the Albert Valdman award for outstanding publication in SSLA)'</a> </p>
<p> Mitchell, A., Oberman, T., Aletta, F., <b>Kachlicka, M.</b>, Lionello, M., Erfanian, M., & Kang, J. (2021).
<a href="https://doi.org/10.1121/10.0008928" target="_blank"> 'Investigating urban soundscapes of the COVID-19 lockdown: A predictive soundscape modelling approach'</a>,
<em>The Journal of the Acoustical Society of America</em>, 150, 4474–4488. </p>
<p> Saito, K., <b>Kachlicka, M.</b>, Sun, H., & Tierney, A. (2020).
<a href="https://doi.org/10.1016/j.jml.2020.104168" target="_blank"> 'Domain-general auditory processing as an anchor of post-pubertal second language pronunciation learning: Behavioural and neurophysiological investigations of perceptual acuity, age, experience, development, and attainment'</a>,
<em>Journal of Memory and Language</em>, 115, 104168. </p>
<p> Mitchell, A., Oberman, T., Aletta, F., Erfanian, M., <b>Kachlicka, M.</b>, Lionello, M., & Kang, J. (2020).
<a href="https://doi.org/10.3390/app10072397" target="_blank"> 'The Soundscape Indices (SSID) Protocol: A Method for Urban Soundscape Surveys—Questionnaires with Acoustical and Contextual Information'</a>,
<em>Applied Sciences</em>, 10(7), 2397. </p>
<p> <b>Kachlicka, M.</b>, Saito, K., & Tierney, A. (2019).
<a href="https://doi.org/10.1016/j.bandl.2019.02.004" target="_blank"> 'Successful second language learning is tied to robust domain-general auditory processing and stable neural representation of sound'</a>,
<em>Brain and Language</em>, 192, 15–24.</p> <br>
</div>
<div class="pageitem">
<h2>Preprints</h2>
<p> <b>Kachlicka, M.</b>, Symons, A. E., Ruan, Y., Saito, K., Dick, F., & Tierney, A. (2024).
<a href="https://doi.org/10.31234/osf.io/y4uph" target="_blank"> 'Effects of targeted perceptual training on L2 prosodic cue weighting strategies' </a>,
<em>PsyArXiv</em>. </p>
<p> <b>Kachlicka, M.</b>, Symons, A. E., Saito, K., Dick, F., & Tierney, A. (2024).
<a href="https://doi.org/10.31234/osf.io/ec47p" target="_blank"> 'Tone language experience enhances dimension-selective attention and subcortical encoding but not cortical entrainment to pitch'</a>,
<em>PsyArXiv</em>. *<em>Now published in Imaging Neuroscience!</em></p>
<p> Correia, S., dos Santos Rato, A. A., Fernandes, J. D., Ge, Y., <b>Kachlicka, M.</b>, Saito, K., & Rebuschat, P. (2024).
<a href="https://doi.org/10.31234/osf.io/9mtfd" target="_blank"> 'Effects of implicit perceptual training and cognitive aptitude on the perception and production of non-native contrasts'</a>,
<em>PsyArXiv</em>. </p>
<p> Symons, A. E., <b>Kachlicka, M.</b>, Wright, E., Razin, R., Dick, F., & Tierney, A. (2023).
<a href="https://doi.org/10.31234/osf.io/d4u93" target="_blank"> 'Dimensional salience varies across verbal and nonverbal domains'</a>,
<em>PsyArXiv</em>. </p>
<p> <b>Kachlicka, M.</b>, Patel, A. D., Liu, F., & Tierney, A. (2023).
<a href="https://doi.org/10.31234/osf.io/dwfsz" target="_blank"> 'Weighting of cues to categorization of song versus speech in tone language and non-tone-language speakers'</a>,
<em>PsyArXiv</em>. *<em>Now published in Cognition!</em></p> <br>
</div>
<div class="pageitem">
<h2>Manuscripts in preparation</h2>
<p> Ghooch Kanloo, A. H., <b>Kachlicka, M.</b>, Saito, K., & Tierney, A. T. (under review). 'Individual differences in perception of prosody in Mandarin-accented speech are linked to pitch perception, melody memory, musical training, and neural encoding of sound'.</p>
<p> Ge, Y., Monaghan, P., <b>Kachlicka, M.</b>, Saito, K., & Rebuschat, P. (under review). 'Auditory ability predicts implicit learning of segmental and suprasegmental features in L2 word acquisition'.</p>
<p> <b>Kachlicka, M.</b>, van den Bosch, J., Kang, J., & Dick, F. (in preparation). 'A selection of environmental sounds for behavioural and neuroimaging research: Introducing EnviSounds dataset'.</p>
<p> <b>Kachlicka, M.</b>, van den Bosch, J., Kang, J., & Dick, F. (in preparation). 'Representations of natural sound categories'.</p>
<p> Saito, K., Argyri, F., <b>Kachlicka, M.</b>, Suzukida, Y., & Tierney, A. T. (in preparation). 'The bilingual advantage hypothesis revisited: Exploring the auditory processing abilities and executive function of bilingual and monolingual children with diverse biographical backgrounds'.</p> <br>
</div>
<div class="pageitem">
<h2>Conference papers</h2>
<p> Mitchell, A.*, Oberman, T., Aletta, F., Erfanian, M., <b>Kachlicka, M.</b>, Lionello, M., & Kang, J. (2020).
<a href="https://doi.org/10.1121/1.5136970" target="_blank"> 'Making cities smarter with new soundscape indices'</a>,
<em>The Journal of the Acoustical Society of America</em>, 146, 2873. </p>
<p> Aletta, F.*, Oberman, T., Mitchell, A., Erfanian, M., Lionello, M., <b>Kachlicka, M.</b>, & Kang, J. (2019).
<a href="https://doi.org/10.1016/S0140-6736(19)32814-4" target="_blank"> 'Associations between soundscape experience and self-reported wellbeing in open public urban spaces: a field study'</a>,
<em>The Lancet</em>, 394(S17). </p> <br>
</div>
<div class="pageitem">
<h2>Conference Posters</h2>
<p> <b>Kachlicka, M.</b>*, Symons, A. E., Ruan, Y., Saito, K., Dick, F., & Tierney, A. T. (2024). <a href="http://dx.doi.org/10.13140/RG.2.2.26092.58248" target="_blank">'Effects of targeted perceptual training on L2 suprasegmental cue weighting strategies'</a> at the 33rd Conference of the European Second Language Association (3-6 July 2024, Montpellier, France) </p>
<p> <b>Kachlicka, M.</b>*, Symons, A. E., Saito, K., Dick, F., & Tierney, A. T. (2023). <a href="http://dx.doi.org/10.13140/RG.2.2.28291.48165" target="_blank">'Effects of first language background and musical experience on cue weighting, attention and dimensional salience in speech and music'</a> at the Society for the Neurobiology of Language 15th Annual Meeting (23-26 October 2023, Marseille, France) </p>
<p> <b>Kachlicka, M.</b>*, van den Bosch, J., Kang, J., & Dick, F. (2022). <a href="http://dx.doi.org/10.13140/RG.2.2.22589.84963" target="_blank">'Representations of natural sound categories'</a> at FENS Summer School 'Artificial and natural computations for sensory perception' (22-28 May 2022, Bertinoro, Italy) </p> <br>
</div>
<div class="pageitem">
<h2>Talks</h2>
<p> 'Dimensionality in auditory perception' at University of Oxford, UK (October 2024) </p>
<p> 'How does prior experience change the way we learn new languages' at University College London Institute of Education, UK (September 2024) </p>
<p> 'How does prior experience change the way we listen to sounds and learn new languages' at University of Miami, US (June 2024) </p>
<p> 'Auditory processing in second language learning' at Brain & Language Seminar at University of Helsinki, Finland (February 2019)</p> <br> <br>
</div>
</div>
<!-- footer items go here
<div class="section">
<div id="footer">
</div>
</div>
-->
<div class="section">
<div id="credits">
<div class="col"><p>Last modified 2024/12/03</p></div>
<!-- This part has to be kept intact under the CC-NC Licence -->
<div class="col"><p><a href="http://mlpdesign.net">HTML & CSS</a> by MLPdesign</p></div>
<!-- CC-NC Licence credit ends here -->
</div>
</div>
</div>
</body>
</html>