HÖRST
Glossary
V
Valid hearing threshold determination reliably records the minimum sound pressure level a test subject can perceive at defined frequencies. It requires standardized test conditions (quiet booth, calibrated audiometer) and clear instructions to the patient. Validity is increased by checking test-retest consistency and clinical plausibility, for example through cross-checks with otoacoustic emissions. Psychometric methods such as catch trials can reveal psychogenic response patterns. Only valid threshold values form a reliable basis for diagnostics and hearing aid fitting.
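The role of catch trials in such a procedure can be sketched in a few lines of code. The following Python example is a hypothetical illustration, not a standardized audiometric protocol: a simple up-down level search that occasionally inserts silent catch trials and counts responses to silence as a plausibility check.

```python
import random

def staircase_with_catch_trials(respond, start_db=40, step_db=5,
                                n_trials=30, catch_rate=0.2):
    """Minimal up-down threshold search with silent catch trials.

    `respond(level_db)` is assumed to be a callable that returns True
    if the subject reports hearing the presented level; a level of None
    marks a silent catch trial. Illustrative sketch only.
    """
    level = start_db
    false_positives = 0
    reversals = []
    last_heard = None

    for _ in range(n_trials):
        if random.random() < catch_rate:
            if respond(None):              # a response despite silence
                false_positives += 1       # hints at a psychogenic pattern
            continue
        heard = respond(level)
        if last_heard is not None and heard != last_heard:
            reversals.append(level)        # direction change = reversal
        last_heard = heard
        level += -step_db if heard else step_db  # down if heard, up if not

    last = reversals[-4:]
    threshold = sum(last) / len(last) if last else level
    return threshold, false_positives

# Demo: simulated subject with a true threshold of 25 dB HL who also
# responds to 10 % of silent catch trials.
def simulated_subject(level_db):
    if level_db is None:
        return random.random() < 0.10
    return level_db >= 25

print(staircase_with_catch_trials(simulated_subject))
```

A high false-positive count would flag the measured threshold as potentially invalid and call for re-instruction or re-testing.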
Validation audiometry comprises objective and subjective test procedures that check the correspondence between measured audiograms and everyday experiences. It combines standard audiometry with speech audiometry, OAE screening and self-assessment questionnaires (e.g. APHAB). The aim is to verify fitting success and quality and to identify discrepancies. Adaptive test sets simulate realistic hearing situations in order to ensure that the results are suitable for practical use. Results are incorporated into the readjustment of hearing system parameters and the documentation of fitting quality measures.
The vanish effect describes the temporary disappearance or attenuation of tinnitus after a specific sound signal is played, often immediately after the stimulus ends. This phenomenon points to cortical reorganization and central inhibitory pathways that modulate the tinnitus generator network. It is used in studies to identify effective masker profiles and to investigate neuronal plasticity. Clinically, the vanish effect can provide an indication of suitable sound therapy parameters. Long-term application of the identified stimuli can contribute to lasting habituation.
A variable filter dynamically adapts its center frequency, bandwidth and slope to changing acoustic environments. In hearing aids it allows speech components to be emphasized in noisy situations and background noise to be reduced. Algorithms continuously analyze the input signal and adjust the filters in real time to balance speech intelligibility against sound naturalness. Adaptive filters can also detect feedback peaks and initiate countermeasures. Using machine learning approaches, modern systems learn user preferences in order to individualize filter strategies.
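How such a filter can be re-tuned at run time is illustrated by the following minimal Python sketch. It uses the widely published "Audio EQ Cookbook" peaking-filter formulas; the sampling rate, frequencies and gains are arbitrary example values, not parameters of any specific hearing aid.

```python
import numpy as np
from scipy.signal import lfilter

def peaking_coeffs(f0, q, gain_db, fs):
    """Biquad coefficients for a peaking filter (RBJ Audio EQ Cookbook).

    f0: center frequency in Hz, q: quality factor (sets the bandwidth),
    gain_db: boost (+) or cut (-) in dB, fs: sampling rate in Hz.
    """
    a_lin = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

# Example: boost the speech-relevant band around 2 kHz by 6 dB, then
# re-tune the same filter type to a narrow -12 dB notch at 3 kHz, as an
# adaptive algorithm might do after detecting a feedback peak.
fs = 16000
x = np.random.randn(fs)                       # 1 s of noise as test input
b, a = peaking_coeffs(2000, 1.0, 6.0, fs)
y = lfilter(b, a, x)                          # broad speech boost
b, a = peaking_coeffs(3000, 8.0, -12.0, fs)   # narrow feedback notch
y = lfilter(b, a, y)
```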
Processing in the brain refers to the central analysis, integration and interpretation of auditory signals after peripheral transduction. It includes pathways in the brainstem, thalamus and primary and secondary auditory cortex areas. Here, time and level differences, speech patterns and music-specific information are extracted and linked to memory content. Plasticity enables adaptation to hearing loss or hearing systems by reorganizing neuronal networks. Disorders at this level lead to central auditory processing disorders and require targeted therapies.
The masking effect describes the suppression of quiet tones by louder noise or tones that are present at the same time. It is a central psychoacoustic phenomenon and determines which sounds remain audible in complex sound mixtures. In audiometry, targeted masking of the non-test ear prevents cross-hearing and isolates the ear to be tested. In hearing aids, controlled maskers are used to cover tinnitus or attenuate disturbing frequencies. Masking patterns are determined individually to achieve the optimum balance between signal preservation and noise suppression.
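The clinical decision of when masking is needed follows a simple rule of thumb, sketched below in Python. The 40 dB interaural attenuation value is the common textbook figure for supra-aural earphones; exact criteria depend on the transducer and the local protocol.

```python
def needs_masking_ac(ac_test_ear_db, bc_nontest_ear_db,
                     interaural_attenuation_db=40):
    """Rule-of-thumb masking check for air-conduction audiometry.

    Cross-hearing becomes possible once the test tone, attenuated by
    roughly 40 dB on its way across the head (supra-aural earphones;
    insert earphones attenuate more), reaches the cochlea of the
    non-test ear via bone conduction. Illustrative simplification.
    """
    return ac_test_ear_db - bc_nontest_ear_db >= interaural_attenuation_db

# 70 dB HL air conduction on the test ear vs 10 dB HL bone conduction on
# the other ear: the 60 dB difference exceeds 40 dB, so masking is needed.
print(needs_masking_ac(70, 10))   # True
```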
Ossification describes pathological bone remodeling in the middle ear, typically characteristic of otosclerosis, which leads to fixation of the ossicular chain. The stapes footplate is particularly frequently affected, which greatly reduces sound conduction. The audiogram shows a prototypical air-bone gap. Therapeutically, the ossification is addressed by stapedotomy, in which the fixed stapes is bypassed with a prosthesis. Long-term follow-up confirms the stability of the reconstruction and the hearing gain.
An amplifier circuit in hearing aids consists of a preamplifier, signal processor and output stage, which together amplify weak microphone signals to audible levels. Digital amplifier circuits allow multi-band compression, feedback management and adaptive filtering. Linearity and output power determine sound fidelity and maximum volume. Signal-to-noise ratio and distortion factor (THD) are critical parameters for amplifier quality. Modern ASICs integrate amplifier and DSP in compact designs with low energy consumption.
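The distortion factor mentioned above can be estimated from the spectrum of a sine response, as in the following simplified Python sketch; the tanh soft clipper merely stands in for a real output stage.

```python
import numpy as np

def thd_percent(signal, fs, f0, n_harmonics=5):
    """Estimate the total harmonic distortion (THD) of a sine response.

    Reads the FFT magnitude at the fundamental f0 and its first few
    harmonics and returns sqrt(sum of harmonic powers) / fundamental in
    percent. Simplified: assumes f0 lies exactly on an FFT bin and
    ignores spectral leakage and the noise floor.
    """
    spectrum = np.abs(np.fft.rfft(signal))
    bin_width = fs / len(signal)
    fundamental = spectrum[int(round(f0 / bin_width))]
    harmonics = [spectrum[int(round(k * f0 / bin_width))]
                 for k in range(2, n_harmonics + 2)]
    return 100.0 * np.sqrt(np.sum(np.square(harmonics))) / fundamental

# A 1 kHz tone through a softly clipping "output stage" (tanh nonlinearity).
fs, f0 = 48000, 1000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * f0 * t)
amplified = np.tanh(2.0 * clean)
print(f"THD = {thd_percent(amplified, fs, f0):.2f} %")
```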
Amplification describes the increase in the sound pressure level of an input signal in order to make it audible to the residual hearing. In hearing systems it is frequency-dependent and matched to the hearing loss profile in the audiogram. Compression algorithms ensure that loud signals are not over-amplified and quiet signals receive sufficient gain. Amplification can be linear (constant factor) or non-linear (dynamic adaptation). The aim is maximum speech intelligibility with a subjectively natural sound.
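The difference between linear and non-linear amplification can be made concrete with a static compression rule, sketched below in Python. The gain, knee point and ratio are arbitrary example values, not a fitting prescription.

```python
def compression_gain_db(input_level_db, linear_gain_db=30,
                        knee_db=50, ratio=3.0):
    """Static input/output rule of a simple wide-dynamic-range compressor.

    Below the knee point the gain is constant (linear amplification);
    above it, each additional dB of input raises the output by only
    1/ratio dB, so loud sounds receive progressively less gain.
    Single-band sketch; real hearing aids apply this per frequency band
    with attack and release time constants.
    """
    if input_level_db <= knee_db:
        return linear_gain_db
    excess = input_level_db - knee_db
    return linear_gain_db - excess * (1.0 - 1.0 / ratio)

# Soft speech (45 dB) gets the full 30 dB of gain, loud speech (80 dB)
# only 10 dB, which keeps the output within a comfortable range.
for level_db in (45, 65, 80):
    out_db = level_db + compression_gain_db(level_db)
    print(f"{level_db} dB in -> {out_db:.0f} dB out")
```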
The vestibular apparatus comprises the sacculus, utriculus and three semicircular canals in the inner ear and registers accelerations and head movements. It sends signals via the vestibular portion of the eighth cranial nerve to the brainstem and cerebellum to control balance and eye reflexes. Dysfunctions lead to dizziness, nystagmus and balance disorders. Diagnostic procedures include caloric testing, VEMP and videonystagmography. Vestibular rehabilitation trains central compensation and stabilizes gait and postural control.
Vestibular vertigo is a spinning or tilting sensation caused by disorders of the vestibular system in the inner ear or its central connections. Causes include vestibular neuritis, Meniere's disease and vestibular migraine. Accompanying symptoms are nausea, nystagmus and balance disorders. Diagnostics include caloric testing, VEMP and videonystagmography to differentiate peripheral from central causes. Corticosteroids, vestibular rehabilitation and, in recurrent cases, intratympanic gentamicin therapy are used therapeutically.
The vestibular system consists of the otolith organs (sacculus, utriculus) and the three semicircular canals, which register linear and rotational accelerations. It sends information about head movements and position to the brainstem, cerebellum and somatosensory cortices to control balance and spatial orientation. Reflexes such as the vestibulo-ocular reflex ensure stable gaze during head movements. Disorders lead to dizziness, unsteady gait and nausea. Rehabilitation promotes central compensation through exercise programs and neurofeedback.
The vestibulo-ocular reflex (VOR) stabilizes the image on the retina by driving eye movements opposite to head movements. It has a very short latency (<10 ms) and is mediated by direct connections between the vestibular nuclei and oculomotor neurons. An intact VOR is essential for clear vision while walking or running. Pathological VOR parameters (gain, phase) are measured with the video head impulse test (vHIT). Therapy for VOR weakness includes targeted gaze stabilization training.
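The gain parameter measured in the vHIT can be approximated as the ratio of eye velocity to head velocity during an impulse. The following Python sketch shows the idea on synthetic data; commercial systems use more elaborate windowing and artefact rejection.

```python
import numpy as np

def vor_gain(head_velocity, eye_velocity, dt):
    """Simplified VOR gain estimate for a single head impulse.

    Gain is taken as the ratio of the integrated (sign-inverted) eye
    velocity to the integrated head velocity; the inversion reflects
    that the reflex drives the eyes opposite to the head.
    """
    head_area = np.sum(head_velocity) * dt            # rectangular integration
    eye_area = np.sum(-np.asarray(eye_velocity)) * dt
    return eye_area / head_area

# Synthetic impulse: the eyes counter-rotate at 95 % of head velocity.
dt = 0.004                                        # 250 Hz sampling
t = np.arange(0.0, 0.15, dt)
head = 200 * np.exp(-((t - 0.05) / 0.02) ** 2)    # deg/s, bell-shaped impulse
eye = -0.95 * head                                # compensatory eye movement
print(round(vor_gain(head, eye, dt), 2))          # ~0.95
```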
Vibration perception is the perception of mechanical vibrations, mediated by Pacinian and Meissner corpuscles in the skin and deeper tissues. In audiology, bone conduction audiometry makes use of vibration perception by placing a transducer on the mastoid to generate vibrations. The threshold is typically 0.2-0.5 g at 250-500 Hz. Changes in vibration sensation can indicate neuropathic or vestibular disorders. Vibration measurements support diagnostics of bone conduction pathways and tactile feedback in hearing systems.
Vibration conduction (bone conduction) transmits sound by vibrations of the skull that stimulate the cochlea directly, without involving the eardrum. It is tested audiometrically in order to differentiate conductive from sensorineural hearing disorders. Implantable bone conduction devices (BAHS, Bonebridge) use vibration conduction to treat middle ear pathologies. Efficiency depends on vibration location and frequency; mastoid implants provide better low-frequency transmission. Vibration conduction also plays a role in somatosensory interaction with the vestibular system.
A vibration plate generates low-frequency whole-body vibrations for the rehabilitation of vestibular and musculoskeletal functions. It is used experimentally in hearing rehabilitation to couple vestibular stimulation with auditory training. Vibration parameters (frequency, amplitude) are chosen so as to activate the vestibular system without causing nausea. Studies show improved VOR gain and gait stability after combined vibration-vestibular training. Its use is still under clinical investigation but promises multisensory therapeutic effects.
A virtual acoustic environment (VAE) simulates realistic 3D sound fields via headphones or loudspeaker systems using HRTF-based rendering. It is used in hearing research and training to safely present complex everyday situations (restaurant, street). VAEs allow controlled manipulation of background noise, sound source movement and reverberation. In hearing aid development, adaptive algorithms are tested under realistic conditions. Users benefit from individualized simulations for targeted rehabilitation.
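At its core, HRTF-based rendering convolves a mono source with a pair of head-related impulse responses. The Python sketch below uses synthetic placeholder HRIRs; in practice they would be loaded from a measured HRTF set, and reverberation or moving sources would require time-varying filtering.

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Render a mono source at one fixed direction by convolving it with
    the head-related impulse responses (HRIRs) for that direction."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=1)        # (samples, 2) stereo signal

# Toy HRIRs: the right-ear response is delayed (~0.7 ms) and 6 dB softer,
# which already creates a crude "source on the left" impression.
fs = 44100
mono = np.random.randn(fs)                        # 1 s noise burst
hrir_l = np.zeros(64); hrir_l[0] = 1.0
hrir_r = np.zeros(64); hrir_r[30] = 0.5
stereo = render_binaural(mono, hrir_l, hrir_r)
```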
Visual amplification describes the support of hearing through visual information such as lip-reading, gestures or subtitles. Multisensory integration in the superior temporal sulcus improves speech comprehension in noisy situations. Augmented reality systems project real-time transcriptions into the field of vision to enhance this visual support. In hearing loss, neuroplasticity strengthens connectivity between visual and auditory areas. Training combines auditory and visual stimuli to strengthen cross-modal compensation.
The voicing feature distinguishes voiced consonants (e.g. /b/, /d/) from unvoiced consonants (e.g. /p/, /t/) on the basis of vocal fold vibration. Voiced sounds show a fundamental frequency and harmonics in the spectrum, while unvoiced sounds consist mainly of turbulent noise. In speech audiometry, voicing detection is tested to diagnose high-frequency loss and temporal resolution problems. Hearing aid programs emphasize voicing-relevant frequency bands to compensate for articulation loss. Misperception of voicing leads to speech intelligibility errors, especially in noisy environments.
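A crude voiced/unvoiced decision can be based on the periodicity of a short analysis frame, as in the following Python sketch; the thresholds and frequency ranges are illustrative, not tuned values.

```python
import numpy as np

def is_voiced(frame, fs, f0_min=75, f0_max=400, threshold=0.3):
    """Crude voiced/unvoiced decision for one analysis frame.

    Looks for a strong peak in the normalized autocorrelation within a
    plausible fundamental-frequency range; voiced frames (vocal fold
    vibration) show such periodicity, unvoiced frames mostly do not.
    """
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    if ac[0] <= 0:
        return False
    ac = ac / ac[0]                                # normalize to lag 0
    lag_min = int(fs / f0_max)
    lag_max = min(int(fs / f0_min), len(ac) - 1)
    return float(np.max(ac[lag_min:lag_max])) > threshold

fs = 16000
t = np.arange(0, 0.03, 1 / fs)                     # 30 ms frame
voiced_like = np.sin(2 * np.pi * 120 * t)          # /b/-like: 120 Hz periodicity
unvoiced_like = np.random.randn(len(t))            # /p/-like: turbulent noise
print(is_voiced(voiced_like, fs), is_voiced(unvoiced_like, fs))  # True False
```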
The vocal tract comprises the pharynx, oral cavity and nasal cavity, which act as variable resonators that shape speech sounds. Changes in the shape and length of the vocal tract produce different formants that characterize vowels. Acoustic models of the vocal tract are used in auditory research and speech synthesis. Resonance shifts caused by hearing aid earmolds can slightly alter perceived vowel formants. Speech therapy takes vocal tract mechanics into account in order to specifically promote articulation in hearing loss.
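The source-filter view of the vocal tract can be illustrated by passing a pulse train through one resonator per formant. The formant frequencies and bandwidths below are rough textbook figures used purely for illustration.

```python
import numpy as np
from scipy.signal import lfilter

def resonator(signal, f_formant, bandwidth, fs):
    """Second-order all-pole resonator modelling one vocal tract formant."""
    r = np.exp(-np.pi * bandwidth / fs)            # pole radius from bandwidth
    a = [1.0, -2.0 * r * np.cos(2 * np.pi * f_formant / fs), r * r]
    return lfilter([1.0 - r], a, signal)

def synthesize_vowel(formants, fs=16000, f0=120, duration=0.5):
    """Very simplified source-filter vowel synthesis.

    An impulse train at the fundamental f0 (the 'glottal source') is
    passed through a cascade of formant resonators.
    """
    n = int(fs * duration)
    source = np.zeros(n)
    source[::fs // f0] = 1.0                       # impulse train at ~f0
    out = source
    for f_formant, bw in formants:
        out = resonator(out, f_formant, bw, fs)
    return out / np.max(np.abs(out))

vowel_a = synthesize_vowel([(700, 110), (1200, 120)])   # roughly /a/-like
vowel_i = synthesize_vowel([(300, 80), (2300, 150)])    # roughly /i/-like
```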