Time Course Of Auditory Processing, Visual Processing, Language And Speech Processing
Muluk, Nuray Bayar
Each stimulus is processed in the brain at a certain speed and within a certain time, and hearing, vision, and language are all part of this process. For example, the onset of language-specific phonetic-phonological analysis has been estimated at 100-200 ms. The better a listener's temporal resolution, the smaller the gap that can be detected, as with rapidly changing sounds such as /r/ and /l/. Both short (20 ms, for phoneme-duration signals) and long (200 ms, for syllable-duration signals) segments of speech are needed. In hearing, language, and speech processing, the brain works with all of its fields (auditory processing, memory, language, the image- and speech-recording areas, etc.) in synchrony, like an orchestra, within seconds. If neurons cannot participate in this processing synchronously, synchronization is disrupted. The processing time of information and synchronized working should therefore be the basis of hearing, language, and speech training. The phonemes of speech reach our ears within a few seconds through sound waves. If these sounds are not received within a few seconds, they are lost; if they are received, they are processed in the auditory pathway and the brain within a few seconds. The purpose of this review is to draw attention to the fact that, since sounds are received and processed within a few seconds, the method used in a speech-training model should be aimed at the transmission and processing of sounds within a few seconds. In addition, all of the relevant functions (auditory and visual processing, memory, and language) should be included in training through a bottom-up approach. Auditory processing is the ability to listen to, comprehend, and respond to information that we hear through our auditory channels, and it requires decoding of the neural message. Auditory processing involves attention to, detection of, and identification of the signal, followed by decoding of the neural message. If we do not give full attention to what we hear, listening difficulty occurs. Under poor attention and listening conditions, the rapid acoustic changes in speech cannot be discriminated.