HÖRST
glossary
H
H2O impedance measurement is a variant of tympanometry in which the pressure-volume response of the middle ear is examined with the ear canal filled with water. Controlled pressure changes are used to assess the mobility of the eardrum and ossicular chain. Deviations in the impedance curve indicate tube dysfunction, effusions, or stiffening (e.g., otosclerosis). Since water has a different acoustic impedance than air, this method is more sensitive to small leaks and membrane damage. Clinically, it is used mainly in pediatric audiology and veterinary diagnostics.
Habituation refers to the diminishing response to repeatedly presented, unchanging stimuli. In the auditory system, it causes constant background noise to be filtered out over time. This mechanism protects against information overload and allows the brain to focus on new, relevant signals. In tinnitus therapy, habituation is used to reduce awareness of ear noises. Without habituation, hypersensitivity and increased cognitive stress result from constant noise perception.
The shark fin pattern in the audiogram describes alternating peaks and dips along the curve, reminiscent of the jagged outline of a shark fin. It indicates measurement artifacts, lapses in concentration, or simulated (non-organic) hearing loss. Clinically, it is important to recognize this pattern in order to ensure valid findings and avoid misdiagnosis. If non-organic hearing loss is suspected, objective tests such as OAE or AEP follow. A quiet, well-controlled test environment and clear instructions to patients reduce such artifacts.
The reverberation effect describes the phenomenon whereby a sound is perceived for longer in a reverberant room than in an anechoic chamber. In psychoacoustics, reverberation increases perceived loudness and smears the temporal structure of speech signals. In hearing aid fitting, reverberation must be taken into account (e.g., through reverberation suppression) so that speech comprehension is maintained in real rooms. Measurements of reverberation time (RT60) provide parameters for room acoustics optimization. Training programs teach listeners to distinguish between direct and reflected sound components.
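As a quick illustration of how RT60 relates to room properties, the classical Sabine approximation can be sketched in a few lines; the formula is standard room acoustics, and the room values in the example are made up.

```python
def rt60_sabine(volume_m3: float, absorption_area_m2: float) -> float:
    """Estimate reverberation time RT60 in seconds using Sabine's formula.

    volume_m3: room volume in cubic metres
    absorption_area_m2: equivalent absorption area in square metres (sabins)
    """
    return 0.161 * volume_m3 / absorption_area_m2

# Illustrative example: a 200 m^3 classroom with 40 m^2 equivalent absorption
print(rt60_sabine(200, 40))  # approximately 0.8 s
```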
The malleus is the first of the three ossicles in the middle ear and is directly connected to the eardrum. It mechanically transmits vibrations from the eardrum to the incus and thus passes sound on toward the inner ear. Its leverage effect amplifies the sound pressure and enables efficient impedance matching between air and the fluid-filled inner ear. The reflexive contraction of the tensor tympani, which attaches to the malleus and is triggered by loud sounds, helps protect the inner ear from excessive sound exposure. In surgery, care is taken to preserve the malleus so as not to impair sound conduction.
The hammer-anvil reflex is a contraction of the tensor tympani and stapedius muscles in response to loud noises, which stiffens the ossicular chain. This dampens vibrations and protects the inner ear from noise damage. Reflex latency and amplitude are measured during acoustic reflex testing (impedance audiometry) to assess middle ear and brainstem function. Unilateral deficits indicate nerve lesions or ossicular pathologies. The reflex contributes to acoustic adaptation and shields against impulse noise.
A handheld microphone is an external microphone held by speakers in FM or DECT systems to transmit speech directly to hearing aid receivers. It improves speech comprehension in noisy or large rooms, as ambient noise is not picked up. Direct radiation minimizes signal loss and improves the signal-to-noise ratio. Receivers in the hearing aid decode the radio signal and transmit it to the earpiece. Handheld microphones are essential in classrooms, conferences, and religious events.
A home device is a hearing system that offers programs specially optimized for use at home, e.g., for watching TV or talking on the phone. This category often includes tabletop or near-field communication devices with direct hearing aid pairing. They offer higher amplification and special filters to clearly transmit distant or digital sound sources. Home devices complement mobile hearing aid care and increase comfort in the home environment. Integration with smart home systems enables automatic scene selection.
Skin conduction (also known as structure-borne sound conduction) transmits vibrations directly to the inner ear via soft tissue and bone, bypassing the outer and middle ear. It plays a role in hearing one's own voice (autophony) and in bone conduction hearing systems. Measurements along this conduction pathway help to distinguish between conductive and sensorineural hearing loss. Bone conduction devices use vibration transducers or implants to stimulate this pathway directly. Skin conduction levels are less frequency-dependent than air conduction levels.
Behind-the-ear (BTE) hearing aids sit behind the ear and connect to an ear mold in the ear canal via a tube. They offer space for larger amplifiers, batteries, and multi-channel signal processors. BTE systems are powerful and suitable for moderate to severe hearing loss. Modern models feature wireless connectivity, directional microphones, and rechargeable batteries. The design allows for easy handling and robust electronics.
HRTF describes the frequency-dependent filtering effect of the head, torso, and outer ears on incoming sound waves. It forms the basis for spatial hearing and virtual audio rendering, as it encodes interaural time and level differences. Measurements are taken using microphones in artificial heads or individual calibration methods. In hearing aid development, HRTF models are used to maintain natural localization even with behind-the-ear devices. VR and 3D audio technologies are based on HRTF synthesis for immersive sound experiences.
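The interaural time differences that an HRTF encodes can be roughly approximated with Woodworth's spherical-head model; the sketch below assumes a generic 8.75 cm head radius and is a geometric idealization, not a measured HRTF.

```python
import math

def itd_woodworth(azimuth_deg: float, head_radius_m: float = 0.0875,
                  speed_of_sound: float = 343.0) -> float:
    """Approximate interaural time difference (in seconds) for a spherical head.

    Woodworth's model: ITD = (r / c) * (theta + sin(theta)),
    where theta is the source azimuth in radians (0 = straight ahead).
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

print(itd_woodworth(90))  # roughly 0.65 ms for a source directly to one side
```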
The healing phase following tympanic membrane perforation or middle ear surgery involves an initial inflammatory response, tissue regeneration, and scar formation. In the first few days, the focus is on pain and infection control, followed by tissue remodeling over several weeks. Tympanometry and otoscopy are used to monitor the reclosure and function of the tympanic membrane. Hearing improves gradually, and complete recovery can take months. Physical rest and avoidance of pressure changes support healing.
The helix is the upper, curved edge of the outer ear and serves to focus sound into the cavum conchae. Its shape influences the spectral filtering of external sound and supports vertical localization. Anatomical variations in the helix can influence individual HRTF profiles. In hearing aid fitting, attention is paid to helix compatibility in order to avoid pressure points. Surgically, the helix plays a role in otoplasty and reconstruction.
A Helmholtz resonator is an acoustic resonator consisting of a cavity and a narrow opening that greatly amplifies sound at its natural frequency. In the ear, the cavum conchae has a similar effect and emphasizes frequencies around 2–5 kHz, which promotes speech comprehension. Acoustic filters in hearing aids use the Helmholtz principle for compact bass reduction or notch filters against tinnitus frequencies. Room acoustic elements such as bass traps work on the same physical principle.
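The natural frequency of a Helmholtz resonator follows directly from neck area, cavity volume, and effective neck length; the function below sketches the textbook formula, and the example dimensions are purely illustrative.

```python
import math

def helmholtz_frequency(neck_area_m2: float, cavity_volume_m3: float,
                        effective_neck_length_m: float,
                        speed_of_sound: float = 343.0) -> float:
    """Resonance frequency f0 = (c / 2*pi) * sqrt(A / (V * L_eff)) in hertz."""
    return (speed_of_sound / (2 * math.pi)) * math.sqrt(
        neck_area_m2 / (cavity_volume_m3 * effective_neck_length_m))

# Rough, made-up dimensions on the order of the concha and ear canal
print(helmholtz_frequency(neck_area_m2=4e-5, cavity_volume_m3=1e-6,
                          effective_neck_length_m=0.01))  # ~3.5 kHz
```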
The comfort threshold (uncomfortable loudness level, UCL) is the level at which sound is first perceived as uncomfortably loud. In conductive hearing loss this threshold shifts upward along with the hearing threshold, whereas in sensorineural loss with recruitment it often stays nearly constant while the hearing threshold rises, so the usable dynamic range narrows. Hearing aid compression must take the comfort threshold into account to avoid over-amplification. Measurements using Békésy audiometry or loudness scaling determine individual comfort ranges. Fine-tuning protects against discomfort and distortion.
Heterophonic masking occurs when an interfering sound in one frequency band impairs the perception of a useful sound in another band. This effect explains why external noises interfere with speech even though they are at different frequencies. Masking models in hearing aids simulate heterophonic effects in order to optimally adjust compression and filters. Psychoacoustic tests quantify masking level differences. Understanding in noise improves when masking is specifically reduced.
Hidden hearing loss refers to synaptic damage between inner hair cells and the auditory nerve, which remains undetectable in standard audiograms. Those affected complain of difficulty understanding speech in noisy environments, even though their hearing thresholds are normal. The pathology manifests itself in reduced evoked potentials and altered OAE. Research focuses on synapse-protective therapies and early diagnosis. Hidden hearing loss underscores the importance of central auditory processing tests.
High-definition audiology combines high-resolution measurement methods, adaptive signal processing, and AI-supported analyses to revolutionize hearing diagnostics and hearing aid fitting. It uses detailed cochlea and cortex profiles to develop personalized amplification and compression strategies. Real-time data from mobile apps and biosensors flows into cloud-based fitting platforms. The goal is maximum speech intelligibility and comfort in all listening situations. Initial studies show significant improvements over standard methods.
A behind-the-ear (BTE) device places the electronics and battery behind the ear, while a tube directs sound to the earpiece in the ear canal. This design allows for powerful amplification and complex signal processors with low weight in the ear canal. BTE devices are robust, easy to use, and suitable for moderate to severe hearing loss. Modern models integrate Bluetooth, telecoil, and inductive charging functions. Open or closed earmolds allow for individual control of feedback and sound quality.
High-frequency hearing loss primarily affects the perception of frequencies above approximately 2000 Hz. It often manifests as difficulty understanding consonants such as "s," "f," or "t," especially in noisy environments. The causes are usually noise damage, aging processes, or ototoxic medications that damage the hair cells in the basal cochlear region. Audiometrically, the loss appears as an elevated hearing threshold at high frequencies. Hearing aids can selectively amplify the high-frequency range to restore speech intelligibility.
The auditory pathway transmits acoustic information from the inner ear via several core stations in the brainstem to the auditory cortex. It begins at the hair cells, runs via the vestibulocochlear nerve to the cochlear nucleus, and continues via the superior olivary complex, lateral lemniscus, and inferior colliculus to the thalamus. Each station extracts specific features such as time and level differences. Damage at any point leads to central auditory processing disorders. Objective evoked potentials (ABR, MLR, CAEP) test the integrity of the auditory pathway.
The auditory impression refers to the subjective perception of sound quality, volume, and spatial position. It depends not only on acoustic parameters but also on psychological factors such as attention and expectation. In audiology, the auditory impression is assessed using questionnaires and psychoacoustic tests. Hearing aid optimization aims to create a natural and pleasant auditory impression. Differences in auditory impression explain why people are satisfied with hearing systems to different degrees even when the measurements are identical.
Hearing adaptation describes the process of getting used to a new hearing aid or implant, as the brain has to process new sound patterns. Initially, many users find the amplified sounds too loud or unfamiliar. Through systematic use and targeted hearing training, the auditory cortex adapts and filters out unwanted sounds. The adjustment phase typically lasts several weeks to months. Accompanying audiological readjustment improves the success of the adjustment and wearing comfort.
Auditory thread depth is a measure of the temporal resolution of the auditory system, i.e., how closely successive sound events can still be perceived as separate impulses. It is tested using short clicks or pulses and is expressed as the minimum interstimulus interval duration. Low auditory thread depth makes it difficult to understand speech in impulsive noise. Measurements help to identify central temporal processing disorders. Auditory training can improve auditory thread depth through neural plasticity.
Audible feedback refers to the whistling or echo-like sound that hearing aid wearers sometimes perceive when amplified sound from the receiver leaks back into the microphone and is re-amplified. This is caused by leaks in the earmold or excessive amplification settings. Modern hearing systems detect feedback in real time and reduce it using adaptive filter algorithms. Mechanical measures such as tight earmolds and careful microphone positioning minimize the risk of feedback. An optimized feedback manager improves sound quality and user satisfaction.
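One common way to realize such adaptive filtering is a normalized LMS (NLMS) filter that estimates the feedback path from the receiver back to the microphone and subtracts its output; the sketch below is a strongly simplified illustration with made-up parameter values, not a production hearing aid algorithm.

```python
import numpy as np

def nlms_feedback_canceller(mic, loudspeaker, taps=32, mu=0.1, eps=1e-8):
    """Estimate the feedback path from loudspeaker to microphone with NLMS
    and return the feedback-compensated microphone signal."""
    w = np.zeros(taps)          # adaptive estimate of the feedback path
    buf = np.zeros(taps)        # most recent loudspeaker samples
    cleaned = np.zeros_like(mic, dtype=float)
    for n in range(len(mic)):
        buf = np.roll(buf, 1)
        buf[0] = loudspeaker[n]
        feedback_estimate = w @ buf
        e = mic[n] - feedback_estimate          # microphone minus estimated feedback
        w += mu * e * buf / (buf @ buf + eps)   # normalized LMS coefficient update
        cleaned[n] = e
    return cleaned
```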
Hearing field measurement determines hearing thresholds and loudness perception across a wide range of frequencies and levels in order to map the individual dynamic range and comfort zone. It combines tone and loudness measurements and displays the results as curves spanning the audiogram. The analysis helps determine optimal compression and amplification parameters for hearing aids. Deviations from the normal hearing field indicate restrictions in loudness perception and masking effects. Regular repetition documents progress in treatment.
A hearing filter selects specific frequency ranges to emphasize speech and suppress background noise. Hearing aids use digital multiband filters that adaptively respond to changes in the environment. Filter parameters such as center frequency, bandwidth, and slope are customized individually. Incorrectly adjusted filters can weaken speech components or distort sound. Psychoacoustic tests check filter effectiveness in real-life scenarios.
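A single band of such a multiband filter can be sketched with a standard band-pass design; the example below uses SciPy's Butterworth design purely to illustrate center frequency and bandwidth, with arbitrarily chosen values rather than any manufacturer's settings.

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 16000                    # sampling rate in Hz
low, high = 1000, 2000        # one illustrative band: 1-2 kHz
sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")

x = np.random.randn(fs)       # one second of white noise as a stand-in input
band_signal = sosfilt(sos, x) # contribution of this band; a hearing aid would
                              # apply a band-specific gain and sum all bands
```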
Hearing research encompasses interdisciplinary studies on hearing mechanisms, diagnostic procedures, and hearing aid technologies. It ranges from molecular investigations of regenerative therapies to psychoacoustic experiments and clinical studies of new hearing aid algorithms. Current focal points include hidden hearing loss, AI-based signal processing, and cochlear regeneration. Research findings are incorporated into guidelines and product developments. International collaborations and publications ensure transfer into practice.
A hearing aid acoustician is a specialist who performs hearing tests, fits hearing aids, and fine-tunes them. They advise on device types, earmolds, and programs, and train users in their use and care. The training combines audiological, technical, and communication skills. Quality assurance is ensured through validation tests and follow-up care. Good hearing aid specialists work closely with audiologists and ENT doctors.
Hearing aid batteries provide electrical power for analog and digital hearing systems. Common types are zinc-air cells (sizes 10, 13, 312, 675) with a service life of 3–14 days. Rechargeable batteries are becoming increasingly popular as they offer greater convenience and sustainability. Battery/charging cycles must be documented to prevent performance drops. Training on how to change batteries is part of hearing aid instruction.
The hearing aid channel is the device-specific frequency band in which a hearing system amplifies or filters. Modern hearing aids have 4–16 channels to finely adjust the sound spectrum. More channels allow for more precise adjustment to the audiogram, but can increase computing power and latency. Channel parameters are visualized and optimized in the fitting software interface. However, the number of channels alone does not guarantee better speech intelligibility without correct fine-tuning.
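How channel settings relate to the audiogram can be illustrated with the classic half-gain rule of thumb (insertion gain roughly half the hearing loss at each frequency); the thresholds below are invented, and real fitting formulas such as NAL or DSL are considerably more elaborate.

```python
# Illustrative per-channel gain derivation using the simple "half-gain" rule.
audiogram_db_hl = {500: 30, 1000: 40, 2000: 55, 4000: 65}   # made-up thresholds

channel_gains_db = {freq: round(loss / 2) for freq, loss in audiogram_db_hl.items()}
print(channel_gains_db)  # {500: 15, 1000: 20, 2000: 28, 4000: 32}
```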
A hearing aid program is a stored combination of settings for specific listening situations (e.g., quiet, restaurant, telephone). Programs automatically adjust amplification, compression, and microphone characteristics to the ambient sound. Users switch manually or automatically via scene recognition. Multiple programs increase flexibility but require training to use. Audiologists set programs individually and calibrate transition parameters.
Hearing aid provision includes selection, fitting, instruction, and follow-up care for hearing aid users. It begins with audiological diagnostics, continues with earmold (otoplastic) fabrication, and ends with fine-tuning under real-life conditions. Regular check-ups ensure long-term functionality and satisfaction. Interdisciplinary collaboration with doctors and therapists optimizes rehabilitation. Documentation of all steps is part of care quality and of cost coverage by insurers.
An audiogram overview is a graphical compilation of the pure-tone audiogram and other measurement results such as OAE or reflexes in a single summary. It visualizes hearing thresholds, dynamic range, and comfort zones. Such overviews serve as a reference for fitting and follow-up checks. Software-generated graphs enable comparison of different measurement times. Clear visualization supports discussions between patients and specialists.
Hearing implants are electronic prostheses that convert acoustic information into electrical impulses and transmit them directly to the auditory nerve or brainstem. Types include cochlear implants, auditory brainstem implants, and bone conduction implants. Indications range from severe hearing loss to complete deafness of the inner ear. Implantation is performed surgically, followed by speech rehabilitation and mapping. Long-term outcomes show significant improvements in speech comprehension and quality of life.
Auditory criticality describes the range around the hearing threshold in which small changes in volume are perceived particularly strongly. It is relevant for adjusting compression so that signals remain natural and sound fluctuations remain audible. Measurements of critical bandwidth provide information about filter design and masking effects. Narrower critical bands lead to better frequency selectivity. Fitting strategies in hearing aids take criticality into account to avoid sound coloration.
The auditory conduction pathway is the anatomical route from the outer ear to the inner ear, consisting of the ear canal, eardrum, and ossicular chain. It transmits sound mechanically and optimizes impedance matching between air and the fluid-filled inner ear. Diseases affecting this pathway (e.g., otosclerosis) lead to conductive hearing loss. Surgical procedures such as stapedotomy modify parts of this pathway to restore mobility. Tympanometry and audiograms analyze its functional status.
Auditory localization is the ability to determine the direction of a sound source based on interaural time differences (ITD) and level differences (ILD). The superior olivary complex in the brainstem compares signals from both ears. Precise localization improves speech comprehension and safety in everyday life. Hearing aids with binaural networking maintain localization by processing signals synchronously. Tests in a free sound field evaluate localization accuracy.
The auditory nerve (vestibulocochlear nerve, cranial nerve VIII) transmits electrical impulses from the cochlea and vestibular organ to the brainstem. It branches into cochlear and vestibular portions and is essential for hearing and balance. Lesions lead to hearing loss, tinnitus, or vertigo. Diagnostics include ABR measurements and imaging techniques. Early surgical treatment may be indicated for tumors such as vestibular schwannoma (acoustic neuroma).
In perception psychology, the horopter is the imaginary spatial curve on which visual and auditory stimuli are perceived as spatially congruent. When visual and acoustic stimulation are combined, the horopter helps to minimize conflicts between information received by the eyes and ears. Experiments are being conducted to investigate how deviations from this line affect localization accuracy. For hearing aid users, the interaction of visual and auditory cues is relevant for precisely locating speech sources. Adjustments in hearing technology can aim to filter auditory signals so that they match the visual horopter.
Listening breaks are deliberately inserted periods of silence between speech or music signals that give the auditory system time to process information. They improve speech comprehension by providing segmentation cues and enabling cognitive relief. In audiotherapy, listening breaks are used to give tinnitus patients periods of rest from the noise in their ears. Psychoacoustic studies show that regular breaks reduce auditory fatigue. Hearing aid programs can implement digital silence insertions to avoid excessive stimulation.
The hearing level refers to the sound pressure level at a specific point in the ear canal, measured in dB SPL. It forms the basis for calibrating audiometers and adjusting hearing aids. The difference between the input signal level and the hearing level at the earpiece determines the effective amplification. In room acoustics, the hearing level is used to optimize volume distribution and sound quality. Audiologists ensure that hearing levels stay below the comfort threshold and above the hearing threshold.
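The relationship between input level and ear canal level is a simple difference; a minimal sketch with made-up example values:

```python
input_level_db_spl = 65.0       # typical speech level at the hearing aid microphone
ear_canal_level_db_spl = 90.0   # illustrative level measured near the eardrum

effective_gain_db = ear_canal_level_db_spl - input_level_db_spl
print(effective_gain_db)        # 25.0 dB of effective amplification
```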
Hearing physiology describes the biological and biophysical processes from sound reception to neural processing in the brain. It encompasses mechanical processes in the outer ear, electrochemical transduction in hair cells, and neural signal transmission. Changes in any of these steps lead to specific hearing disorders that can be analyzed physiologically. Research in auditory physiology provides the basis for therapies for hearing loss and tinnitus. Textbooks combine anatomy, biomechanics, and neurophysiology to provide an integrative understanding.
Hearing preference refers to individual preferences for sound characteristics, such as warm bass or clear treble. It arises from personal listening habits and differences in neural processing. During hearing aid fitting, preferences are taken into account by fine-tuning filters and compression parameters. They are assessed by comparing different sound profiles and collecting subjective ratings. Taking hearing preferences into account increases wearing comfort and acceptance.
A hearing sample is a short sound or speech sequence used to test hearing aid programs or room acoustics. It helps the wearer assess sound character and intelligibility under realistic conditions. In research, standardized hearing samples are used to compare the effects of signal processing algorithms. Hearing samples can include music, speech, or artificial test signals. Their systematic analysis guides optimization.
Auditory noise is a steady, broadband noise used as a test signal in audiometry to check masking and filtering effects. In tinnitus therapy, auditory noise is used as a masker to cover up ear noises. The spectral composition can be white, pink, or brown, depending on the desired masking effect. Auditory noise helps analyze cochlear function and central noise processing. Customizable noise profiles support individual therapy goals.
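White, pink, and brown noise differ in how power falls off with frequency (flat, about 3 dB per octave, about 6 dB per octave); the spectral-shaping sketch below is one common way to generate such noise and is only an illustration.

```python
import numpy as np

def colored_noise(n_samples: int, exponent: float) -> np.ndarray:
    """Generate noise whose power spectrum falls off as 1/f**exponent.

    exponent = 0 -> white, 1 -> pink, 2 -> brown (red) noise.
    """
    spectrum = np.fft.rfft(np.random.randn(n_samples))
    freqs = np.fft.rfftfreq(n_samples)
    freqs[0] = freqs[1]                      # avoid division by zero at DC
    spectrum *= freqs ** (-exponent / 2.0)   # shape the amplitude spectrum
    noise = np.fft.irfft(spectrum, n=n_samples)
    return noise / np.max(np.abs(noise))     # normalize to +/- 1

white = colored_noise(16000, 0.0)
pink = colored_noise(16000, 1.0)
brown = colored_noise(16000, 2.0)
```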
Ear cleaning refers to the professional removal of cerumen and deposits in the external auditory canal in order to restore sound conduction. It is performed manually under a microscope or by gentle rinsing. Regular ear cleaning prevents cerumen obturans and acute otitis externa. Subsequent tympanometry checks the restoration of middle ear function. Patients are trained in self-cleaning techniques to prevent recurrence.
The quiet listening state is the state of minimal acoustic stimulation, usually measured in a soundproof room. It defines the baseline for hearing threshold tests and evoked potentials. A stable quiet listening state ensures reproducible measurement results and avoids masking by ambient noise. Changes in the quiet listening state can indicate adaptive processes or neural plasticity. Standardized norms specify maximum background levels for test environments.
The hearing threshold is the lowest sound pressure level that can just be heard and varies with frequency. It is documented individually for each frequency in the audiogram and forms the basis for diagnosis and hearing aid fitting. Deviations from normal values define degrees of hearing loss from mild to severe. Threshold determination is performed using tone audiometry under controlled conditions. Clinically, it is the first step in differentiating between conductive and sensorineural hearing loss.
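Threshold determination in tone audiometry commonly follows a bracketing rule of the Hughson-Westlake type (10 dB down after each response, 5 dB up after each miss); the simulation below is a deliberately simplified sketch against an idealized, perfectly consistent listener.

```python
def find_threshold(true_threshold_db: int, start_db: int = 40) -> int:
    """Simplified down-10/up-5 bracketing; the simulated listener always
    responds at or above the true threshold and never below it."""
    level = start_db
    hits = {}
    while True:
        if level >= true_threshold_db:       # listener responds
            hits[level] = hits.get(level, 0) + 1
            if hits[level] >= 2:             # lowest level heard twice = threshold
                return level
            level -= 10
        else:                                # no response
            level += 5

print(find_threshold(35))  # 35 dB HL
```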
Auditory segmentation is the ability to break down continuous sound signals into meaningful units such as words or syllables. It is based on acoustic markers such as pauses, formant transitions, and volume fluctuations. Disruptions in segmentation lead to difficulties in understanding speech, especially in noisy environments. Segmentation tests use sentences with variable pause patterns. Auditory training can improve segmentation performance in the auditory cortex.
The hearing range refers to the span between the quietest audible and the loudest tolerable sound intensity, measured in decibels. It represents the dynamic range of hearing and varies from person to person depending on age and hearing health. With normal hearing, it extends from about 0 dB at the hearing threshold up to roughly 120 dB at the discomfort threshold. A limited range requires compression in hearing aids to make soft sounds audible and loud sounds comfortable. Changes in the hearing range can indicate conditions such as presbycusis or noise-induced hearing loss.
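The effect of compression on a limited hearing range can be illustrated with a static input/output function: linear gain below a knee point and a compression ratio above it; all parameter values below are illustrative.

```python
def compressed_output_level(input_db: float, knee_db: float = 50.0,
                            ratio: float = 3.0, linear_gain_db: float = 20.0) -> float:
    """Static compressor curve: linear gain below the knee point,
    gain reduced by the compression ratio above it."""
    if input_db <= knee_db:
        return input_db + linear_gain_db
    return knee_db + linear_gain_db + (input_db - knee_db) / ratio

for level_db in (40, 60, 80):
    print(level_db, "->", compressed_output_level(level_db))
# 40 -> 60.0, 60 -> 73.3..., 80 -> 80.0
```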
The hearing spectrum represents the distribution of the hearing threshold across the frequency spectrum and shows how well different frequencies are perceived. It is recorded in the audiogram as a curve from low to high frequencies. Deviations in certain areas indicate high-frequency or low-frequency losses. Hearing aids adjust amplification profiles across the spectrum to compensate for deficits. Researchers compare the hearing spectra of different populations to determine normal values and risk factors.
The audio track is the accompanying soundtrack to video or multimedia content and contains speech, music, and effects. For accessible offerings, it is often supplemented with subtitles or sign language. Technically, the audio track is mixed in multichannel audio (stereo, 5.1) to create spatial effects. In hearing training and rehabilitation, listening to individual tracks can help train speech comprehension. With hearing aids that support direct streaming, the audio track is transmitted digitally and without interference to the device.
Sudden hearing loss is a sudden onset of sensorineural hearing loss, usually in one ear, often accompanied by tinnitus and a feeling of pressure. The exact causes are unclear, but possible factors include circulatory disorders, viruses, or stress. Immediate treatment with corticosteroids and circulation-promoting agents improves the chances of recovery. Audiometry documents the extent of hearing loss, and follow-up checks show regeneration. Early rehabilitation can compensate for residual hearing loss and alleviate tinnitus.
A hearing system is a combination of a hearing aid, earmold, and optional accessories such as FM receivers or streamers. It comprises microphones, amplifiers, signal processors, and receivers in a coordinated ensemble. Modern systems offer multi-channel compression, directional microphones, feedback management, and wireless connectivity. The hearing care professional customizes the hearing system based on the audiogram and personal hearing preferences. Regular software updates maintain performance and compatibility with new devices.
Hearing technology encompasses all technical aids and procedures for improving hearing, from hearing aids and cochlear implants to room and sound reinforcement technology. It combines acoustics, electronics, and signal processing to optimize speech intelligibility and sound quality. Subdisciplines include microphone design, amplifier architecture, filter algorithms, and user interfaces. Hearing technology research is driving developments such as AI-supported scene recognition and brain-computer interfaces. Users benefit from customizable, networked systems for all areas of life.
Hearing loss refers to a reduction in hearing ability, divided into conductive, sensorineural, and central hearing disorders. It is quantified based on the shift in the hearing threshold in the audiogram. Causes include age, noise, illness, or genetic factors. Treatment options range from hearing aids and implants to medication and surgery. Early detection and interdisciplinary rehabilitation improve communication skills and quality of life.
Hearing ability encompasses the entire capacity to detect and locate sound sources and process acoustic information. It includes parameters such as hearing threshold, dynamic range, frequency resolution, and speech comprehension. Measurement methods such as audiograms, OAE, and AEP provide objective data on hearing ability. Psychometric tests record subjective aspects such as hearing comfort and hearing stress. Maintaining and improving hearing ability are central goals of audiology and hearing acoustics.
The auditory center in the temporal lobe of the cerebral cortex (primary auditory cortex) processes the frequency, volume, and spatial characteristics of sound. It receives input via the auditory pathway and interacts with speech and memory centers. Cortical plasticity enables adaptation to hearing aids and rehabilitation after hearing loss. Lesions in the auditory center lead to central auditory processing disorders despite an intact peripheral hearing organ. Imaging techniques (fMRI, PET) show activation patterns during acoustic tasks.
Hospitalism describes psychological and cognitive impairments that arise as a result of sensorineural hearing loss due to social isolation and loss of communication. Those affected often develop anxiety, depression, and withdrawal, which further exacerbates hearing loss. Early psychosocial interventions and hearing rehabilitation prevent hospitalism. Interdisciplinary care by audiologists, psychologists, and social workers is important. Studies show that social support and hearing aid provision significantly reduce hospitalism.
Hyperacusis is hypersensitivity to normal everyday sounds, which are perceived as painful or unpleasant. It is caused by changes in peripheral or central auditory pathways, often in combination with tinnitus. Comfort and discomfort thresholds are determined for diagnostic purposes. Treatment includes desensitization training, cognitive behavioral therapy, and, if necessary, medication. Hyperacusis can severely impair quality of life and requires multidisciplinary care.