HÖRST
glossary
V
Valid hearing threshold determination reliably records the minimum audible sound pressure levels of a hearing test subject at defined frequencies. It requires standardized test conditions (quiet booth, calibrated audiometer) and clear instructions to the patient. Validity is increased by checking test-retest consistency and clinical plausibility, for example through cross-checks with otoacoustic emissions. Psychometric methods such as catch trials can reveal psychogenic response patterns. Only valid thresholds form a reliable basis for diagnosis and hearing aid fitting.
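The test-retest consistency check mentioned above can be sketched as a simple comparison of two threshold runs per frequency. This is an illustrative sketch, not a clinical tool; the 5 dB tolerance is an assumption based on the usual audiometric step size.

```python
# Hypothetical sketch: flag test-retest inconsistencies in audiometric thresholds.
# Keys are frequencies in Hz, values are thresholds in dB HL.

def check_retest(run1: dict, run2: dict, tolerance_db: float = 5.0) -> list:
    """Return frequencies whose two threshold measurements differ by more than the tolerance."""
    flagged = []
    for freq in sorted(run1.keys() & run2.keys()):
        if abs(run1[freq] - run2[freq]) > tolerance_db:
            flagged.append(freq)
    return flagged

run_a = {500: 20, 1000: 25, 2000: 40, 4000: 55}
run_b = {500: 25, 1000: 40, 2000: 45, 4000: 55}
print(check_retest(run_a, run_b))  # [1000]
```

A flagged frequency would prompt re-instruction and re-measurement rather than acceptance of the threshold.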
Validation audiometry comprises objective and subjective test procedures that check the consistency between measured audiograms and everyday experiences. It combines standard audiometry with speech audiometry, OAE screening, and self-assessment questionnaires (e.g., APHAB). The aim is to verify the success of the fitting and the quality of the adjustment, as well as to identify any discrepancies. Adaptive test sets simulate realistic hearing situations to ensure that the results are practical. The results are used to readjust the hearing system parameters and to document the quality of the fitting.
The vanish effect describes the temporary disappearance or attenuation of tinnitus when a specific sound signal is played, often immediately after the stimulus ends. This phenomenon indicates cortical reorganization and central inhibitory pathways that modulate the tinnitus generator network. It is used in studies to identify effective masking profiles and investigate neural plasticity. Clinically, the vanishing effect can provide information about suitable sound therapy parameters. Long-term use of the identified stimuli can contribute to permanent habituation.
A variable filter dynamically adjusts its center frequency, bandwidth, and slope to changing acoustic environments. In hearing aids, it allows speech to be emphasized in noisy situations and reduces background noise. Algorithms continuously analyze the input signal and adjust filters in real time to optimize the balance between speech intelligibility and natural sound. Adaptive filters can also detect feedback peaks and initiate countermeasures. Using machine learning approaches, modern systems learn user preferences in order to individualize filter strategies.
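One common way to realize such a variable filter is as a peaking biquad whose coefficients are recomputed whenever the scene analyzer selects a new center frequency, bandwidth (Q), or gain. The sketch below uses the widely published audio EQ cookbook formulas; the parameter values are illustrative.

```python
import math

def peaking_biquad(fs, f0, q, gain_db):
    """Peaking-EQ biquad coefficients (audio EQ cookbook form).
    Recomputing these on the fly is one way a variable filter tracks the scene."""
    a = 10 ** (gain_db / 40)          # amplitude factor
    w0 = 2 * math.pi * f0 / fs        # normalized center frequency
    alpha = math.sin(w0) / (2 * q)    # bandwidth parameter
    b0 = 1 + alpha * a
    b1 = -2 * math.cos(w0)
    b2 = 1 - alpha * a
    a0 = 1 + alpha / a
    a1 = -2 * math.cos(w0)
    a2 = 1 - alpha / a
    # Normalize so the recursive coefficient a0 becomes 1
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]

b, a = peaking_biquad(fs=16000, f0=1000, q=1.0, gain_db=6.0)
```

With `gain_db=0` the numerator and denominator coincide and the filter passes the signal unchanged, which makes a convenient sanity check.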
Processing in the brain refers to the central analysis, integration, and interpretation of auditory signals after peripheral transduction. It involves pathways in the brainstem, thalamus, and primary and secondary auditory cortex areas. Here, time and level differences, speech patterns, and music-specific information are extracted and linked to memory content. Plasticity enables adaptation to hearing loss or hearing aids through the reorganization of neural networks. Disorders at this level lead to central auditory processing disorders and require targeted therapies.
The masking effect describes the suppression of soft sounds by loud noise or tones present at the same time. It is a fundamental psychoacoustic phenomenon and determines which components remain audible in complex sound mixtures. In audiometry, targeted masking prevents cross-hearing and isolates the ear being tested. In hearing aids, controlled maskers are used to mask tinnitus or attenuate disturbing frequencies. Masking patterns are determined individually to achieve an optimal balance between signal preservation and noise suppression.
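The frequency-dependent spread of masking can be illustrated with a toy triangular spreading function on the Bark scale: the masked threshold falls off linearly with distance from the masker. The slopes used below (27 dB/Bark toward lower frequencies, 10 dB/Bark toward higher ones) are illustrative round numbers, not calibrated psychoacoustic data.

```python
def masked_threshold(masker_bark, masker_db, probe_bark,
                     slope_low=27.0, slope_high=10.0):
    """Toy triangular spreading function: masked threshold (dB) at the probe
    frequency, given a masker position and level on the Bark scale."""
    dz = probe_bark - masker_bark
    if dz < 0:
        # Probe below the masker: steep low-frequency slope
        return masker_db - slope_low * (-dz)
    # Probe above the masker: shallower high-frequency slope
    return masker_db - slope_high * dz

print(masked_threshold(10, 80, 12))  # 60.0 (2 Bark above an 80 dB masker)
```

The asymmetry (upward spread of masking) is why a loud low-frequency noise can hide speech cues well above its own frequency.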
Ossification describes pathological bone remodeling in the middle ear, usually characteristic of otosclerosis, which leads to fixation of the ossicular chain. The stapes footplate is particularly frequently affected, which greatly reduces sound conduction. The audiogram shows a characteristic air-bone gap. Therapeutically, ossification is corrected by stapedotomy, in which the fixed stapes is bypassed by a prosthesis inserted through a small opening in the footplate. Long-term follow-ups confirm the stability of the reconstruction and the hearing gain.
An amplifier circuit in hearing aids consists of a preamplifier, signal processor, and output stage, which amplifies weak microphone signals to an audible level. Digital amplifier circuits enable multiband compression, feedback management, and adaptive filtering. Linearity and output power determine sound fidelity and maximum volume. Signal-to-noise ratio and total harmonic distortion are critical parameters for amplifier quality. Modern ASICs integrate amplifiers and DSPs in small form factors with low power consumption.
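The two quality metrics named above can be computed directly from amplitude measurements. A minimal sketch, assuming the harmonic amplitudes and RMS levels have already been measured:

```python
import math

def thd_percent(harmonic_amps):
    """Total harmonic distortion in percent.
    harmonic_amps[0] is the fundamental amplitude, the rest are harmonics."""
    fundamental = harmonic_amps[0]
    distortion = math.sqrt(sum(a * a for a in harmonic_amps[1:]))
    return 100.0 * distortion / fundamental

def snr_db(signal_rms, noise_rms):
    """Signal-to-noise ratio in dB from RMS amplitudes."""
    return 20.0 * math.log10(signal_rms / noise_rms)

print(thd_percent([1.0, 0.03, 0.04]))  # 5.0
print(snr_db(1.0, 0.01))               # 40.0
```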
Amplification describes the increase in the sound pressure level of an input signal to make it audible to the residual hearing. In hearing systems, this is done on a frequency-dependent basis in line with the hearing loss profile in the audiogram. Compression algorithms ensure that loud signals do not become uncomfortably loud while quiet signals receive appropriate gain. Amplification can be linear (same factor at all levels) or nonlinear (level-dependent adjustment). The aim is maximum speech intelligibility with a subjectively natural sound.
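Linear versus nonlinear amplification can be illustrated with a minimal wide-dynamic-range-compression gain rule: below a kneepoint the gain is constant (linear), above it the output grows only 1/ratio dB per input dB. All parameter values below are illustrative assumptions, not fitting-rule prescriptions.

```python
def compression_gain_db(input_db, kneepoint_db=50.0, ratio=2.0,
                        linear_gain_db=20.0):
    """Gain (dB) applied to an input level (dB SPL) by a simple
    single-band compressor with one kneepoint."""
    if input_db <= kneepoint_db:
        return linear_gain_db                       # linear region
    # Above the knee, gain is reduced so output slope is 1/ratio
    return linear_gain_db - (input_db - kneepoint_db) * (1.0 - 1.0 / ratio)

print(compression_gain_db(40))  # 20.0 (quiet input, full gain)
print(compression_gain_db(70))  # 10.0 (loud input, reduced gain)
```

Real hearing aids apply such a rule per frequency band, with attack and release time constants smoothing the gain changes.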
The vestibular system comprises the saccule, utricle, and three semicircular canals in the inner ear and registers acceleration and head movements. It sends signals via the vestibular portion of the eighth cranial nerve (vestibulocochlear nerve) to the brain stem and cerebellum to control balance and eye reflexes. Dysfunctions lead to dizziness, nystagmus, and balance disorders. Diagnostic procedures include caloric testing, VEMP, and video nystagmography. Vestibular rehabilitation trains central compensation and stabilizes gait and balance control.
Vestibular vertigo is a spinning or tilting sensation caused by disorders of the vestibular system in the inner ear or its central connections. Causes can include vestibular neuritis, Ménière's disease, or vestibular migraine. Accompanying symptoms include nausea, nystagmus, and balance disorders. Diagnostics include caloric testing, VEMP, and video nystagmography to distinguish peripheral from central causes. Treatment involves corticosteroids, vestibular rehabilitation, and, in recurrent cases, intratympanic gentamicin therapy.
The vestibular system consists of the otolith organs (sacculus, utriculus) and the three semicircular canals, which register linear and rotational accelerations. It sends information about head movements and position to the brain stem, cerebellum, and somatosensory cortex to control balance and spatial orientation. Reflexes such as the vestibulo-ocular reflex ensure stable gaze during head movement. Disorders lead to dizziness, unsteadiness, and nausea. Rehabilitation promotes central compensation through exercise programs and neurofeedback.
The vestibulo-ocular reflex (VOR) stabilizes the image on the retina by driving eye movements opposite to head movements. It has a very short latency (<10 ms) and is realized via direct connections between the vestibular nuclei and oculomotor neurons. An intact VOR is essential for clear vision while walking or running. Pathological VOR parameters (gain, phase) are measured with the video head impulse test (vHIT). Therapy for VOR weakness includes targeted gaze-stabilization training.
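VOR gain, as measured in vHIT, is in essence the ratio of eye velocity to head velocity (the two move in opposite directions, so magnitudes are compared). A toy calculation using peak velocities; real devices fit the whole velocity trace, so this is only a sketch.

```python
def vor_gain(head_velocities, eye_velocities):
    """VOR gain as the ratio of peak eye speed to peak head speed (deg/s).
    An ideal VOR yields a gain close to 1.0; peripheral deficits reduce it."""
    peak_head = max(abs(v) for v in head_velocities)
    peak_eye = max(abs(v) for v in eye_velocities)
    return peak_eye / peak_head

# Eye velocities are opposite in sign to head velocities, as the reflex demands
print(vor_gain([0, 100, 200, 150], [0, -95, -180, -140]))  # 0.9
```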
Vibration sensitivity is the perception of mechanical vibrations transmitted via Pacinian and Meissner corpuscles in the skin and deeper tissues. In the ear, vibration sensitivity is used in bone conduction audiometry, where a bone vibrator placed on the mastoid generates vibrations. The threshold is typically 0.2–0.5 g at 250–500 Hz. Changes in vibration perception can indicate neuropathic or vestibular disorders. Vibration measurements support the assessment of bone conduction pathways and of tactile feedback in hearing systems.
Vibration conduction (bone conduction) transmits sound by stimulating the cochlea directly with vibrations of the skull, without involving the eardrum. It is tested audiometrically to distinguish between sound conduction and sound perception disorders. Implantable bone conduction devices (BAHS, Bonebridge) use vibration conduction to treat middle ear pathologies. Efficiency depends on the location and frequency of vibration; mastoid implants offer better low-frequency transmission. Vibration conduction also plays a role in the somatosensory interaction of the vestibular system.
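The audiometric comparison of air and bone conduction reduces to the air-bone gap at each test frequency. A sketch, assuming thresholds in dB HL keyed by frequency; gaps of roughly 10–15 dB or more point to a conductive component.

```python
def air_bone_gap(air_db, bone_db):
    """Air-conduction minus bone-conduction threshold per frequency (dB).
    A large positive gap suggests a sound conduction disorder."""
    return {f: air_db[f] - bone_db[f]
            for f in sorted(air_db.keys() & bone_db.keys())}

air = {500: 45, 1000: 50, 2000: 50}
bone = {500: 10, 1000: 15, 2000: 20}
print(air_bone_gap(air, bone))  # {500: 35, 1000: 35, 2000: 30}
```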
A vibration plate generates low-frequency whole-body vibrations for the rehabilitation of vestibular and musculoskeletal functions. In hearing rehabilitation, it is used experimentally to combine vestibular stimulation with auditory training. Vibration parameters (frequency, amplitude) are selected so that they activate the balance system without causing nausea. Studies show improved VOR gain and gait stability after combined vibration-vestibular training. Its use is still being clinically tested, but promises multisensory therapeutic effects.
A virtual acoustic environment (VAE) simulates realistic 3D sound fields via headphones or speaker systems using HRTF-based rendering. It is used in hearing research and training to safely represent complex everyday situations (restaurants, streets). VAEs allow controlled manipulation of background noise, sound source movement, and reverberation. In hearing aid development, adaptive algorithms are tested under realistic conditions. Users benefit from individualized simulations for targeted rehabilitation.
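One small ingredient of such rendering, the interaural time difference (ITD) of a source at a given azimuth, can be approximated with Woodworth's classic spherical-head formula; the head radius and speed of sound below are typical textbook values, not measured data.

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Woodworth's spherical-head ITD approximation in seconds:
    ITD = (r / c) * (theta + sin(theta)), theta = azimuth in radians."""
    theta = math.radians(azimuth_deg)
    return head_radius_m / speed_of_sound * (theta + math.sin(theta))

# A source directly ahead has no ITD; at 90° it reaches roughly 650 µs
print(woodworth_itd(0))
print(woodworth_itd(90))
```

Full HRTF rendering adds frequency-dependent level and spectral cues on top of this purely geometric delay.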
Visual reinforcement describes the support of hearing through visual information, such as lip reading, gestures, or text subtitles. Multisensory integration in the superior temporal sulcus improves speech comprehension in noisy situations. Augmented reality systems project real-time transcriptions into the field of vision to optimize visual reinforcement. Neuroplasticity promotes neural connections between visual and auditory areas in cases of hearing loss. Training combines auditory and visual stimuli to strengthen cross-modal compensation.
The voicing feature distinguishes voiced consonants (e.g., /b/, /d/) from unvoiced consonants (e.g., /p/, /t/) based on vocal fold vibration. Voiced sounds exhibit a fundamental frequency in the spectrum, while unvoiced sounds consist mainly of turbulent noise. In speech audiometry, voicing recognition is tested to diagnose high-frequency losses and temporal resolution problems. Hearing aid programs emphasize voicing-relevant frequency bands to compensate for articulation deficits. Misperception of voicing leads to speech comprehension errors, especially in noisy environments.
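A crude voiced/unvoiced decision can be made from the zero-crossing rate: periodic voiced frames cross zero rarely, while turbulent fricative noise crosses often. The threshold of 0.25 is an illustrative assumption; real systems combine several features (energy, autocorrelation, spectral tilt).

```python
import math
import random

def zero_crossing_rate(frame):
    """Fraction of sample-to-sample transitions that change sign."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0))
    return crossings / (len(frame) - 1)

def is_voiced(frame, zcr_threshold=0.25):
    """Toy classifier: low zero-crossing rate suggests a voiced frame."""
    return zero_crossing_rate(frame) < zcr_threshold

fs = 8000
# Voiced stand-in: 120 Hz tone (roughly a male fundamental frequency)
voiced = [math.sin(2 * math.pi * 120 * n / fs) for n in range(400)]
# Unvoiced stand-in: white noise
random.seed(0)
unvoiced = [random.uniform(-1.0, 1.0) for _ in range(400)]
print(is_voiced(voiced), is_voiced(unvoiced))  # True False
```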
The vocal tract comprises the throat, oral cavity, and nasal cavity, which act as variable resonators to form speech sounds. Changes in the shape and length of the vocal tract produce different formants that characterize vowels. Acoustic models of the vocal tract are used in hearing research and speech synthesis. Resonance shifts caused by hearing aid earmolds can minimally alter vowel formants. Speech therapy training takes vocal tract mechanics into account in order to specifically promote articulation in cases of hearing loss.
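The resonances of the simplest vocal-tract model, a uniform tube closed at the glottis and open at the lips, follow the quarter-wavelength formula F_k = (2k−1)·c/(4L). For a typical 17 cm adult vocal tract this predicts formants near 500, 1500, and 2500 Hz, close to the formants of a neutral schwa vowel.

```python
def tube_formants(length_m=0.17, speed_of_sound=343.0, count=3):
    """Quarter-wave resonances (Hz) of a uniform tube closed at one end:
    F_k = (2k - 1) * c / (4L) for k = 1..count."""
    return [(2 * k - 1) * speed_of_sound / (4 * length_m)
            for k in range(1, count + 1)]

print([round(f) for f in tube_formants()])  # [504, 1513, 2522]
```

Shortening the tube (e.g., a child's vocal tract) shifts all formants upward by the same factor, which is why the model is a useful first approximation in speech research.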