HÖRST
Glossary
A
The A-weighting level is a level scale in which low and very high frequencies are weighted less, in line with the sensitivity of the human ear. It reflects the perception of loudness at medium frequencies particularly accurately and is expressed in decibels, dB(A). It is used in noise measurement to assess real noise exposure in everyday life and to plan protective measures. Employers and authorities use the A-weighting level to set limit values for workplace noise. The weighting provides a better correlation between measured sound pressure and perceived loudness.
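The dB(A) correction can be reproduced from the analytic weighting curve standardized in IEC 61672-1; the sketch below (in Python, purely as an illustration) evaluates it at a given frequency. The +2.00 dB offset normalizes the curve to 0 dB at 1 kHz.

```python
import math

def a_weighting_db(f: float) -> float:
    """Approximate A-weighting correction in dB for frequency f (Hz),
    following the analytic form given in IEC 61672-1."""
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    # +2.00 dB normalizes the curve to 0 dB at 1 kHz
    return 20.0 * math.log10(ra) + 2.00

print(round(a_weighting_db(100), 1))   # low frequencies strongly attenuated (~ -19 dB)
print(round(a_weighting_db(1000), 1))  # ~ 0.0 at the 1 kHz reference
```

This illustrates why a low-frequency hum reads much lower in dB(A) than in unweighted dB SPL.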
Abduction of the eardrum is the outward movement of the eardrum when the pressure in the middle ear increases. This mechanism is part of the natural pressure equalization via the Eustachian tube and protects the inner ear from excessive strain. Failure to equalize pressure can lead to pain, a feeling of pressure and reduced hearing. Abduction can be objectively measured and assessed by tympanometry. Clinically, it is examined in middle ear diseases such as otitis media or obstruction of the Eustachian tube.
Absolute pitch is the rare ability to correctly identify pitches without a reference tone. Less than one percent of the population has this ability, which is usually innate; it enables precise recognition of notes and frequencies. Musicians with absolute pitch can clearly identify sounds regardless of instrument and volume. At the same time, this ability can be a nuisance in everyday life, as unwanted sounds are perceived more strongly. Training can improve relative pitch abilities, but absolute pitch remains predominantly innate.
The axis shift refers to a lateral shift of the frequency response in the audiogram or impedance curve diagram. It is caused by changes in the mechanical transmission chain of the middle ear or by measurement artifacts. In diagnostics, the axis shift helps to differentiate between conductive and sensorineural hearing loss. A significant shift can indicate otosclerosis, eardrum perforation or tubal dysfunction. Audiometers automatically record such shifts to support the findings.
The afferent auditory pathway transmits acoustic information from the inner ear via the auditory nerve to various brainstem nuclei up to the auditory cortex. It includes the vestibulocochlear nerve (VIII cranial nerve), the cochlear nucleus and higher central structures. Disorders in this pathway lead to sensorineural hearing loss and central auditory processing disorders. Objective measurement methods such as the brainstem response (ABR) test the integrity of the afferent auditory pathway. An intact afferent auditory pathway is a prerequisite for understanding speech and localizing sound sources.
Ageusia refers to the complete loss of the sense of taste and occasionally occurs in combination with hearing and balance disorders. It can be caused by damage to the chorda tympani nerve, which transmits taste signals from the tongue to the brain. Patients also complain of reduced saliva production and loss of appetite. In ENT diagnostics, ageusia is often examined together with olfactory tests. Treatment depends on the underlying cause, such as infection or trauma.
Air conduction describes the transmission of sound waves via the air through the outer ear and middle ear to the inner ear. It is the primary auditory pathway for normal everyday sounds and is shown on audiograms as an air conduction curve. Deviations between air and bone conduction indicate conductive hearing loss. Measurements of air conduction make it possible to differentiate between middle ear and inner ear disorders. Clinically, air conduction is measured using headphone audiometry.
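The air-bone comparison described above is often expressed as an air-bone gap per frequency. A minimal sketch with hypothetical thresholds in dB HL; the 10 dB gap criterion is a common rule of thumb, not a fixed diagnostic standard:

```python
# Hypothetical air- and bone-conduction thresholds (dB HL) per frequency
air_db  = {500: 45, 1000: 40, 2000: 35, 4000: 30}
bone_db = {500: 10, 1000: 10, 2000: 15, 4000: 20}

# Air-bone gap per frequency; >= 10 dB is often read as a conductive component
gaps = {f: air_db[f] - bone_db[f] for f in air_db}

for freq, gap in sorted(gaps.items()):
    kind = "conductive component" if gap >= 10 else "no significant gap"
    print(f"{freq} Hz: air-bone gap {gap} dB -> {kind}")
```

With normal bone conduction and elevated air conduction, as in this made-up example, the pattern points to a middle ear (conductive) problem.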
In an auditory context, accommodation refers to the adaptation of the auditory system to changing sound pressure levels through muscular tension of the auditory ossicles. This mechanism protects the inner ear from loud stimuli and optimizes sensitivity to quiet signals. Accommodation takes place within milliseconds and is controlled by the stapedius and tensor tympani muscles. If the muscles or nerves are damaged, the protective reflex can fail, which increases the risk of noise damage. Audiometrically, impaired accommodation is reflected in altered reflex thresholds.
Active listening training includes targeted exercises to improve auditory perception and speech intelligibility, especially in difficult listening situations. Various sound patterns and speech signals are presented in order to strengthen central processing skills. Studies show that regular training promotes neuronal plasticity in the auditory cortex. Areas of application include tinnitus therapy, rehabilitation after sudden hearing loss and support for central hearing disorders. Modern programs use computer-assisted tasks and biofeedback.
Acoustics is the study of the generation, propagation and perception of sound. It forms the basis for all audiological measurement methods and the development of hearing aids. Within acoustics, a distinction is made between airborne, bone and structure-borne sound. Applied acoustics deals with room acoustics, noise protection and soundproofing measures. In hearing aid technology, acoustic principles are incorporated into filter design and amplifier technology.
Auditory hallucinations are the perception of voices or sounds without an external sound source. They can have psychological causes (e.g. schizophrenia) or neurological lesions. In audiology, they are distinguished from tinnitus, as hallucinations can carry linguistic content. Neuropsychological tests and imaging procedures are used for diagnosis. Psychotherapy and medication are used therapeutically.
The acoustic reflex test measures the stapedius reflex, which reacts to loud sounds by contracting the stapedius muscle. This reflex protects the inner ear from overloading and can indicate middle ear or brain stem lesions. Reflex failures on one or both sides provide a differentiated diagnosis of conductive and sensorineural hearing loss. The test is carried out using tympanometry devices that record reflex thresholds and latencies. It is clinically important for neural hearing disorders and otosclerosis.
In hearing aids, acoustic signal processing refers to the conversion of microphone signals into optimized sound signals for the wearer. Digital chips filter out background noise, amplify speech and dynamically adapt to the environment. Techniques such as feedback suppression and adaptive directional microphones improve hearing quality in noisy environments. Advanced systems use AI to learn hearing preferences and automatically recognize scenes. Signal processing is crucial for natural hearing with hearing systems.
The stapedius reflex is an involuntary contraction of the stapedius muscle in response to an intense sound stimulus. Lifting the stapes footplate reduces the transmission of sound to the inner ear and protects it. Reflex measurements provide information about the function of the middle ear, facial nerve and brain stem. A missing or asymmetrical reflex response can indicate otosclerosis or cranial nerve damage. The reflex is part of the standard tympanometry in audiological diagnostics.
Acoustic trauma is caused by sudden, extremely loud sound events such as explosions or blasts. It leads to hair cell damage in the inner ear, which is often accompanied by tinnitus and permanent hearing loss. Immediate measures include corticosteroids to reduce inflammation and hyperbaric oxygen therapy. Long-term consequences can include impaired speech intelligibility and hyperacusis. Prevention through hearing protection is crucial to avoid acoustic trauma.
Presbyacusis is the gradual, physiological loss of hearing in old age. Hair cells in the inner ear and neuronal connections are mainly affected, which leads to reduced speech comprehension. Symptoms are particularly noticeable in the high frequency range and with background noise. Hearing aids and hearing training can significantly improve quality of life and communication. Preventive measures such as noise protection and nutrition play a supporting role.
The alveolar membrane in the inner ear is a fine layer that carries hair cells in the organ of Corti and converts vibrations into neuronal signals. It ensures precise frequency separation along the cochlea. Changes or damage to the membrane impair pitch recognition and volume perception. Histological studies show that age and exposure to noise reduce the elasticity of the membrane. Biological research aims at regenerative therapies to restore this membrane.
The anvil (incus) is the middle of the three auditory ossicles in the middle ear and transmits vibrations from the malleus to the stapes. It acts as a lever that increases the sound pressure before the vibrations are transmitted to the inner ear. Through this amplification, the anvil ensures efficient transmission of airborne sound into the fluid-filled inner ear. Functional disorders such as ossification (otosclerosis) can cause conductive hearing loss. For detailed information on the sound conduction chain and test procedures, see the Rinne and Weber tests.
The amplitude describes the deflection of a sound wave and determines the perceived volume. It is measured as the sound pressure level in decibels and correlates directly with the perception of hearing. High amplitudes can lead to hair cell damage, while low amplitudes are close to the hearing threshold. In audiometry, the amplitude indicates the dynamic bandwidth of hearing. Technical applications regulate amplitudes to minimize distortion in hearing aids.
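The relationship between amplitude (sound pressure) and level is logarithmic; a short sketch using the standard 20 µPa reference pressure:

```python
import math

P0 = 20e-6  # reference sound pressure in pascals (20 µPa)

def spl_db(pressure_pa: float) -> float:
    """Sound pressure level in dB SPL for an RMS pressure in pascals."""
    return 20.0 * math.log10(pressure_pa / P0)

print(round(spl_db(20e-6)))  # 0 dB: roughly the hearing threshold at 1 kHz
print(round(spl_db(1.0)))    # ~94 dB: a common calibrator level
print(round(spl_db(2.0)))    # doubling the pressure adds ~6 dB
```

The last two lines show why the decibel scale is convenient: doubling the physical amplitude adds a fixed 6 dB rather than doubling the number.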
Amplitude modulation (AM) refers to the change in sound amplitude following a modulation signal, such as speech or music signals. In hearing tests, AM is used to test the ear's sensitivity to modulation. A reduced perception of AM can indicate central auditory processing disorders. In hearing aids, AM detection helps to separate speech from background noise. Psychoacoustic experiments with AM provide insights into neuronal coding mechanisms in the auditory system.
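A sinusoidally amplitude-modulated tone of the kind used in such modulation-detection tests can be sketched as follows; the carrier, modulation rate and depth below are illustrative values, not test standards:

```python
import math

def am_tone(carrier_hz=1000.0, mod_hz=4.0, depth=0.8,
            rate=16000, duration_s=0.5):
    """Sinusoidally amplitude-modulated tone:
    s(t) = (1 + m*sin(2*pi*fm*t)) * sin(2*pi*fc*t), scaled to [-1, 1]."""
    n = int(rate * duration_s)
    return [
        (1.0 + depth * math.sin(2 * math.pi * mod_hz * t / rate))
        * math.sin(2 * math.pi * carrier_hz * t / rate)
        / (1.0 + depth)  # normalize so peaks stay within [-1, 1]
        for t in range(n)
    ]

samples = am_tone()
print(len(samples))  # 8000 samples for 0.5 s at 16 kHz
```

In a listening test, the modulation depth `depth` would be varied to find the smallest modulation the listener can still detect.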
Anacusis refers to the complete loss of hearing, in which no acoustic stimuli can be perceived via either air or bone conduction. It can be congenital or caused by severe damage to the inner ear, auditory nerve or central auditory pathways. Those affected are completely dependent on visual and tactile aids such as sign language or vibro-alerts to communicate. Medically, anacusis is assessed using pure-tone and speech audiometry as well as otoacoustic emissions and evoked potentials to determine the extent and origin of the damage.
Analog hearing aids amplify acoustic signals continuously without digital signal processing. They work with simple amplifier stages and filters, are inexpensive but less flexible than digital models. Adjustments are made mechanically or via potentiometers, which makes fine-tuning difficult. Today, analog devices are rarely used, mainly in simple applications or as a backup. Their sound quality is considered less natural than digital systems.
Anisoacusis describes a different hearing threshold in both ears, often caused by unilateral middle ear or inner ear damage. Audiometrically, there is an asymmetry between the air and bone conduction curves. Clinically, anisoacusis may indicate otosclerosis, Meniere's disease or neural lesions. Treatment depends on the cause, such as surgical intervention or hearing aid fitting. Monitoring of anisoacusis helps to assess the course of the disease and the success of therapy.
Antiemetics relieve nausea and vomiting that occur in vestibular disorders such as inner ear inflammation. They usually act on histamine or dopamine receptors in the vomiting center. By reducing accompanying symptoms, they improve the therapeutic tolerance of vestibular training. Long-term use requires monitoring, as side effects such as fatigue can occur. In ENT practice, antiemetics are combined with vestibular rehabilitation for optimal results.
Vestibular dysfunctions affect areas of the brain that control appetite and nausea. Disorders of vestibular perception often lead to eating disorders and weight loss. Therapies include vestibular training and pharmacological support to normalize eating behavior. Dietary recommendations with easily digestible foods reduce accompanying symptoms. Interdisciplinary care by ENT, neurology and nutritional therapists improves quality of life.
Arbitrary sound sources are unpredictable, random sounds in the environment that do not belong to speech patterns. They make it difficult to understand speech and increase the cognitive load when listening. Hearing aid algorithms must recognize and filter out such noise. Laboratory tests with arbitrary signals test the robustness of hearing systems. Psychoacoustic studies investigate how the brain separates arbitrary sounds from relevant signals.
The arterial pressure in the inner ear ensures sufficient blood supply to the hair cells and neuronal structures. If the pressure drops, this can lead to ischemia and hearing loss. Vascular examinations measure blood flow parameters in order to detect vascular bottlenecks. Treatment options range from drug treatment to microsurgery. Stable perfusion is crucial for hearing health and the regeneration of sensory cells.
The articulation index (AI) indicates the proportion of speech information that is audible and usable to a listener, for example a hearing aid user. It is determined in speech audiometry and presented as a value between 0 and 1. A high AI (> 0.7) means good speech intelligibility; low values indicate a need for refitting. AI measurements help to optimize hearing aid programs and document rehabilitation progress. The index correlates closely with subjective hearing comfort in everyday life.
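The idea behind the AI can be illustrated with a simplified band-audibility calculation. The band weights and signal-to-noise ratios below are hypothetical; the standardized procedures (e.g. ANSI S3.5) use defined band-importance functions:

```python
# Simplified articulation-index-style calculation with made-up values.
bands = [
    # (band importance weight, signal-to-noise ratio in dB)
    (0.20, 15), (0.30, 9), (0.30, 3), (0.20, -6),
]

ai = 0.0
for weight, snr_db in bands:
    # Each band contributes audibility proportional to its SNR,
    # clipped to the 0..30 dB range used in classic AI schemes
    audibility = min(max(snr_db, 0.0), 30.0) / 30.0
    ai += weight * audibility

print(round(ai, 2))  # 0.22: poor predicted intelligibility
```

Bands with negative SNR contribute nothing, which is why low-frequency noise that masks important speech bands drags the index down sharply.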
In ear canal atresia, the external auditory canal is congenitally absent, which blocks sound conduction completely. Those affected suffer unilateral or bilateral conductive hearing loss. Surgical opening (atresiaplasty) can partially restore hearing. Audiological care includes bone conduction hearing systems until the operation. Long-term follow-up monitors scarring and hearing gain.
An audiogram is a graph that shows hearing thresholds across different frequencies. Air and bone conduction are measured separately to distinguish conductive from sensorineural hearing loss. Normal values lie between 0 and 20 dB HL; larger deviations indicate the degree of hearing loss. Audiograms are the basis of every audiological diagnosis and treatment plan. Modern digital audiometers automatically save and compare curves.
An audiologist is a medical specialist or scientist who specializes in the diagnosis and treatment of hearing and balance disorders. They carry out complex tests such as AEP, OAE and speech audiometry. Audiologists work on an interdisciplinary basis with ENT doctors, neurologists and hearing aid acousticians. They develop individual rehabilitation plans and provide long-term support for patients. Their training encompasses medicine, neuroscience and technology.
Audiology is the interdisciplinary field that deals with hearing, balance and auditory processing. It combines aspects of medicine, physics, psychology and technology. Audiologists research hearing mechanisms, develop diagnostic procedures and optimize hearing aids. Clinical audiology includes screening, differential diagnostics and therapy. The aim is to maintain and improve the ability to hear and communicate.
Audiometry refers to all measurement methods for determining hearing thresholds and speech intelligibility. This includes sound, speech and objective measurements such as OAE and AEP. The results are used in hearing aid fitting and therapy monitoring. Modern audiometry devices use computer-aided procedures and automated protocols. Regular audiometry is used to monitor the progress of noise work or ototoxic medication.
Auditory evoked potentials are electrical signals in the brain that are measured in response to sound stimuli. They allow objective assessment of the auditory pathway from ear to cortex. AEPs are used for newborn screening, suspected brain stem lesions and neurological diseases. Different wave components provide information about individual stations of the auditory pathway. The examination is carried out using scalp electrodes, without the active cooperation of the patient.
Auditory feedback occurs when hearing aid microphones pick up the amplified sound from the receiver again and enter a feedback loop. This manifests itself as whistling or humming and can severely impair the listening experience. Modern hearing systems use adaptive algorithms to detect and suppress feedback in real time. Acoustic adjustments such as tight earmolds further reduce the risk. An optimized distance between microphone and receiver minimizes feedback mechanically in the first place.
Auditory processing comprises the central processes for analyzing and interpreting sound signals in the brain. It includes feature extraction, speech comprehension and sound localization. Auditory processing disorders manifest themselves in difficulties understanding speech in background noise. Neuropsychological tests and central audiometry procedures help with the diagnosis. Rehabilitation through auditory training aims at plasticity of the auditory cortex.
The auditory cortex in the upper temporal lobe is the central processing station for sound information. This is where the frequency, volume, rhythm and direction of sound are evaluated. Plastic changes in the cortex enable learning processes such as auditory training and tinnitus management. Lesions lead to central hearing disorders and speech comprehension deficits. Imaging techniques (fMRI, PET) examine activity patterns during acoustic stimuli.
The ABR (auditory brainstem response) measures waves of electrical activity along the auditory pathway in the brainstem following click stimuli. It is used for the objective determination of hearing thresholds and neural conduction disorders. ABR is standard in newborn screening and in cases of suspected acoustic neuroma. Analysis of the wave latencies allows conclusions to be drawn about lesion sites from the ear to the brain stem. The examination is painless and uses scalp electrodes.
Auricular stimulation uses electrical or mechanical stimuli on the auricle to influence neuronal networks. It is used in pain therapy, tinnitus treatment and vestibular rehabilitation programs. Stimulation can promote blood circulation and stimulate neuronal plasticity. Clinical studies are investigating effects on chronic tinnitus and dizziness. Safety profile is considered good, side effects are rare.
Auriculotherapy is a form of ear acupuncture in which certain points on the auricle are treated in order to achieve systemic effects. It is also used to treat tinnitus, dizziness and stress. Its effectiveness is scientifically controversial, but patients report subjective improvement. Points that correspond to certain organs and nerve reflex zones are treated. Auriculotherapy is part of integrative ENT and pain therapy concepts.
The outer ear comprises the pinna and external auditory canal and conducts sound waves to the eardrum. The shape of the outer ear amplifies certain frequencies and supports directional perception. Diseases such as exostoses or otitis externa impair sound reception. Audiological examinations check the patency and resonance of the outer ear. Surgical interventions can restore form and function in the case of malformations.
Autophony refers to the perception of one's own voice via bone conduction, which gives it a deeper, fuller sound. This effect occurs because vibrations reach the inner ear directly via the bones of the skull. When we speak, we therefore perceive our own voice as louder and fuller than others do. Autophony can occur more strongly in cases of Eustachian tube dysfunction or after middle ear surgery. Audiometric tests separate air conduction from bone conduction in order to assess autophony.
B
The balance organ in the inner ear, consisting of the three semicircular canals together with the saccule and utricle, controls balance and spatial orientation. Movements of the head cause the endolymph in the semicircular canals to flow, mechanically stimulating hair cells. These stimuli are transmitted to the brain via the vestibular nerve, where they are combined with visual and proprioceptive information. Disturbances can cause dizziness, nausea and unsteadiness. Caloric testing and VEMP tests are used for diagnostic purposes.
The basilar membrane runs spirally through the cochlea and carries the organ of Corti with its hair cells. Sound waves in the inner ear induce traveling waves on the membrane, whose point of maximum deflection determines the perceived pitch. Depending on the frequency, different sections of the membrane vibrate, which enables the tonotopic organization in the auditory system. Damage to the basilar membrane impairs frequency resolution and speech intelligibility. Research into regenerative therapies is aimed at restoring its function after noise damage.
Bilateral hearing loss occurs when both ears have a measurable hearing loss. It can occur symmetrically or asymmetrically and can have various causes, such as exposure to noise, genetic factors or ageing processes. Those affected often suffer from reduced speech comprehension and social isolation. They are usually fitted with hearing aids or cochlear implants on both sides. Regular audiological checks ensure that the hearing systems are optimally adjusted.
Békésy audiometry is a method of measuring the hearing threshold in which the patient holds a button down for as long as they hear a continuous tone and releases it as soon as the tone becomes inaudible. The sound level falls while the button is pressed and rises while it is released, so the recorded trace oscillates around the threshold and also reveals adaptation behavior. The method provides differentiated information about hearing thresholds in unilateral and bilateral examinations. It is particularly suitable for diagnosing sensorineural hearing loss. Today it is supplemented by automated, computer-aided tests.
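The tracking principle can be illustrated with a toy simulation; the deterministic listener model, the step size and the threshold value are purely illustrative assumptions:

```python
# Toy simulation of Bekesy-style threshold tracking: the level falls while
# the simulated listener hears the tone and rises while they do not; the
# threshold is estimated from the reversal points of the trace.
TRUE_THRESHOLD = 37.0  # dB HL, hypothetical
STEP = 2.0             # dB per presentation, illustrative

level, heard = 60.0, True
reversals = []
while len(reversals) < 8:
    now_heard = level > TRUE_THRESHOLD
    if now_heard != heard:            # response changed -> reversal point
        reversals.append(level)
        heard = now_heard
    level += -STEP if now_heard else STEP

estimate = sum(reversals) / len(reversals)
print(round(estimate, 1))  # oscillates around and averages near 37 dB HL
```

Real listeners respond probabilistically near threshold, so clinical traces are noisier, but the averaging logic is the same.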
A coating on the eardrum is often caused by inflammatory processes such as otitis media or chronic moisture in the ear canal. It can inhibit the eardrum's ability to vibrate and lead to conductive hearing loss. Otoscopically, the plaque appears as a whitish or yellowish layer. Treatment involves microscopic cleaning and, if necessary, topical antibiotics. A follow-up check by tympanometry ensures that the eardrum function is restored.
The annoyance level is a psychoacoustic measure of how annoying a noise is perceived to be, regardless of its sound pressure level. It is determined in studies by interviewing test subjects and is incorporated into noise protection guidelines. Factors such as pitch, duration and context influence the subjective annoyance. Measures to reduce noise include noise barriers, room acoustics optimization and hearing protection. Annoyance levels are important parameters for the planning of living and working areas.
A ventilation disorder of the tympanic cavity occurs when the Eustachian tube does not open and close properly. This prevents pressure equalization between the middle ear and the nasopharynx. Symptoms include a feeling of pressure, hearing loss and recurrent otitis. Tympanometry is used for diagnosis, tube catheters, nasal steroids or balloon dilatation help therapeutically. Chronic cases may require tympanostomy tube implantation.
A tympanostomy tube is a small tube that is surgically inserted into the eardrum to ensure permanent ventilation of the middle ear. It prevents fluid build-up and recurring middle ear infections. The tubes usually fall out by themselves after a few months as soon as the eardrum has healed. Follow-up checks by otoscopy and tympanometry ensure the success of the treatment. They are used less frequently in adults than in children.
In benign paroxysmal positional vertigo (BPPV), otoliths detach and drift into the posterior semicircular canal, where they irritate the cupula. Even small head movements then trigger severe, short-lasting attacks of vertigo. The diagnosis is made clinically using the Dix-Hallpike test. The Epley maneuver repositions the otoliths and usually relieves the symptoms immediately. Recurrences are common, so patients can learn simple positioning exercises.
In rare cases, benzodiazepines can have an ototoxic effect and lead to dizziness, tinnitus or hearing loss. The active substances influence GABAergic neurotransmission in the auditory system. Symptoms are usually reversible after discontinuation but can persist in severe cases. Audiometric monitoring is recommended during long-term therapy. Alternatives such as SSRIs are considered to avoid ototoxicity.
Occupational hearing loss is caused by chronic exposure to noise in the workplace, for example in industry or construction. It usually manifests itself as sensorineural hearing loss in the high-frequency range. Prevention through hearing protection, noise reduction and regular audiometry is required by law. Early detection allows protective measures to be adapted in good time. Rehabilitation includes hearing aid fitting and noise insensitivity training.
Some analgesics and antibiotics (e.g. aminoglycosides) have an ototoxic effect and can damage hair cells in the inner ear. Symptoms range from tinnitus to permanent hearing loss. Reducing the dose or changing the substance can often reverse early damage. Regular otoacoustic emission tests monitor cochlear function during therapy. Interdisciplinary coordination between ENT and oncology prevents hearing damage.
Binaural interaction refers to the processing of different signals from both ears in the brain to localize and distinguish sound sources. It enables spatial perception and speech comprehension in noise. Disruptions lead to reduced directional hearing and communication problems. Audiological tests such as Binaural Masking Level Difference quantify the interaction. Hearing systems promote it through synchronized signal processing.
Binaural localization uses time and level differences between the ears to determine the direction of sound. Small time differences (ITD) and volume differences (ILD) are evaluated in the superior olive nucleus. Precise directional hearing is essential for speech comprehension and safety in road traffic. Hearing aids with binaural networking maintain this ability through coordinated microphone processing. Tests in the free sound field check localization accuracy.
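The geometry behind the interaural time difference can be sketched with Woodworth's classic spherical-head approximation; the head radius and speed of sound below are typical assumed values:

```python
import math

HEAD_RADIUS_M = 0.0875   # typical adult head radius (assumption)
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 °C

def itd_seconds(azimuth_deg: float) -> float:
    """Woodworth's spherical-head approximation of the interaural time
    difference (ITD) for a distant source at the given azimuth angle."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

print(round(itd_seconds(0) * 1e6))    # 0 µs for a source straight ahead
print(round(itd_seconds(90) * 1e6))   # ~656 µs for a source at the side
```

Sub-millisecond differences of this order are what the superior olive evaluates, which is why precise binaural synchronization matters in networked hearing aids.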
Binaural redundancy refers to the advantage of both ears receiving the same signal, which increases recognizability. In noise, speech intelligibility is improved because the brain uses multiple copies of the signal. Redundancy effects can be measured in speech audiometry. Hearing aids should not reduce redundant information in order to maximize intelligibility.
Binaural summation describes the improved perception of loudness and recognition threshold when both ears are involved. The combined information leads to a gain in loudness of around 3 dB compared to monaural hearing. This effect supports hearing in noisy environments. Clinically, it is taken into account when hearing aids are fitted on both ears.
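The roughly 3 dB binaural gain is a perceptual effect, but it mirrors the physics of adding two equal, incoherent signals; a minimal sketch of that energetic summation:

```python
import math

def combined_level_db(l1_db: float, l2_db: float) -> float:
    """Energetic (incoherent) sum of two sound levels in dB:
    intensities add, so two equal levels combine to +3 dB."""
    return 10.0 * math.log10(10.0 ** (l1_db / 10.0) + 10.0 ** (l2_db / 10.0))

print(round(combined_level_db(60.0, 60.0), 1))  # 63.0 -> ~3 dB gain
```

Note that levels never add arithmetically: 60 dB plus 60 dB yields about 63 dB, not 120 dB.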
Binaural suppression describes how the brain suppresses background noise when the useful signal and masker are fed to both ears in a phase-differentiated manner. The Masking Level Difference (MLD) quantifies the hearing gain through phase-optimized stimuli. Tests on this help to diagnose central auditory processing disorders. Modern hearing aids use these findings to improve signal-to-noise ratios.
Binaural fitting means the simultaneous fitting of hearing systems in both ears. It preserves localization, speech understanding and sound quality. Clinical studies show better hearing performance and less listening effort compared to monaural fitting. Synchronized programs and microphones optimize binaural effects.
Binaural hearing is the interaction of both ears for spatial sound perception. It enables directional hearing, noise suppression and speech comprehension in complex acoustic situations. The superior olivary complex in the brain stem is the central processing station. Loss of one ear significantly reduces these abilities. Rehabilitation aims to maximize remaining binaural effects.
A biphasic tinnitus masker alternately generates two different frequencies to modulate the tone and perception of the tinnitus. Phase shifts break neural adaptation, resulting in greater relief. Maskers can be integrated into hearing aids or stand-alone devices. Clinical studies have shown a short-term reduction in tinnitus volume.
The bit depth indicates how many bits are used to represent an audio sample and determines the dynamic resolution. Higher bit depth enables finer gradations and lower quantization noise. In hearing aids, it influences sound fidelity and noise reduction. Usual values are 16-24 bits, professional systems use up to 32 bits.
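The link between bit depth and dynamic resolution follows the standard quantization-noise formula for a full-scale sine wave, roughly 6.02·N + 1.76 dB for an ideal N-bit converter:

```python
def dynamic_range_db(bits: int) -> float:
    """Theoretical SNR of an ideal N-bit quantizer driven by a
    full-scale sine wave: approximately 6.02*N + 1.76 dB."""
    return 6.02 * bits + 1.76

for bits in (16, 24, 32):
    print(f"{bits} bit: ~{dynamic_range_db(bits):.0f} dB")
```

Each extra bit buys about 6 dB of dynamic range, which is why 24-bit processing (~146 dB theoretical) comfortably exceeds the dynamic range of human hearing.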
The blue dot effect describes a temporary increase in the hearing threshold after exposure to noise (a temporary threshold shift). Those affected perceive sounds more quietly until the hair cells have recovered. This phenomenon demonstrates the protective function of acoustic adaptation. Long-term or repeated exposure can lead to permanent hearing loss. Audiometric checks document recovery times.
Bluetooth hearing aids use wireless radio technology to receive audio signals directly from telephones, televisions or computers. They improve speech intelligibility and comfort by blocking out ambient noise. Low latency and binaural synchronization are important quality features. Rechargeable models avoid battery changes. Compatibility with standard codecs and profiles (aptX, Bluetooth LE Audio) ensures a wide range of applications.
The three semicircular canals in the vestibular apparatus (horizontal, superior, posterior) register rotational movements of the head. They are filled with endolymph and contain hair cell sensors in the cupula. Each movement generates a specific flow that is transmitted to the brain. Diseases such as BPPV primarily affect the posterior semicircular canal. Functional tests include caloric testing and video nystagmography.
Bone conduction transmits sound by vibration of the skull directly to the inner ear, bypassing the outer ear and middle ear. It is used in audiometry to differentiate between conductive and sensorineural hearing loss. Bone conduction hearing systems are used for patients with middle ear problems. Modern implants such as BAHS offer permanent bone conduction solutions.
Bonebridge is an active transcutaneous bone conduction implant that conducts sound vibrations directly into the temporal bone. It is suitable for patients with conductive hearing loss and unilateral deafness. The external sound processor unit transmits signals magnetically to the implanted vibration module. Clinical studies show high patient satisfaction and speech understanding.
During ear surgery, stimulation of the vagus nerve can lead to bradycardia, as parasympathetic fibers are activated. Anesthetists closely monitor heart rate and blood pressure. Vagolytic medication is given as a preventive measure. Surgeons work gently to minimize pressure on the oval and round windows. Severe events require immediate cardiological intervention.
Humming in the ear describes low-frequency, often pulsatile noises that those affected find disturbing. Causes include vascular turbulence, muscle tremors or hearing aid feedback. Diagnostically, auscultation and Doppler sonography help to rule out vascular causes. Maskers, biofeedback or vascular therapy with medication are used therapeutically. Chronic humming can severely impair quality of life.
C
The caloric test checks the function of the horizontal semicircular canals by stimulating the external ear canal with warm or cold water or air. The temperature difference creates a convection current in the endolymph, which causes nystagmus (involuntary eye movements) and thus makes the vestibular function visible. The strength and direction of the nystagmus indicate the integrity of the vestibular system and its central circuitry. The procedure is particularly important for diagnosing unilateral vestibular deficits and for clarifying symptoms of vertigo. Side effects are rare, but short-term nausea or dizziness can occur.
The semicircular canal is a bony canal filled with endolymph in the inner ear that registers rotational movements of the head. Each of the three orthogonally arranged canals (horizontal, superior, posterior) contains a sensor capsule (ampulla) with hair cells that are mechanically stimulated by the flow of fluid. These stimuli are transmitted to the brain via the vestibular part of the VIIIth cranial nerve and are essential for balance and spatial orientation. Disturbances or blockages in the semicircular canals, as occur in benign paroxysmal positional vertigo, lead to severe dizzy spells. Caloric testing and video nystagmography are standard methods for testing their function.
The capitis transversus muscle, also part of the deep neck muscle, attaches to the mastoid process and stabilizes head movements. Its tension can indirectly influence the pressure in the middle ear, as the skull bone transmits slight deformations. Tension in this muscle in the postauricular region is occasionally associated with earache and tinnitus. Manual therapy and physiotherapeutic stretching exercises release muscular imbalances and alleviate accompanying symptoms. During the clinical examination, the therapist looks for pain radiating towards the ear.
A cartilaginous earmold is an individually fitted earmold made of flexible material that is inserted into the ear canal and tightly seals the hearing aid components. It transmits sound optimally to the receiver and prevents feedback. Thanks to its soft texture, it adapts to the shape of the ear and remains comfortable over long wearing times. Hygienic cleaning and regular replacement are important to prevent earwax build-up and skin irritation. Custom-fit molds significantly improve sound quality and speech intelligibility.
Cerebral hearing loss results from damage to the central auditory pathways or auditory cortex, but not from problems in the ear itself. Causes can be strokes, tumors or traumatic brain injuries. Those affected often have normal peripheral hearing, but suffer from poor speech intelligibility and central processing disorders. Evoked potentials (AEP) and imaging procedures such as MRI help with diagnosis. Rehabilitation includes special hearing training that promotes neuronal plasticity.
Cerumen, also known as earwax, is a protective mixture of secretions from the ceruminous glands and dead skin cells in the external auditory canal. It traps dust and germs and prevents infections by containing antimicrobial substances. Normal self-cleaning occurs through jaw movements when speaking and chewing. However, excessive cerumen formation can block the ear canal and lead to hearing loss, itching or inflammation. If a plug forms, the ENT specialist gently removes the cerumen under vision.
Cerumen obturans describes a compact earwax plug that almost completely blocks the ear canal. It is caused by excessive production or incorrect cleaning, e.g. with cotton buds. Symptoms include hearing loss, a feeling of pressure and occasionally tinnitus. It is removed microscopically or by rinsing with lukewarm water. Regular check-ups and prophylactic drops prevent recurrences.
Cerumen management includes techniques for the safe removal of earwax, for example manual micro suction, irrigation or cerumen-dissolving drops. The aim is to restore the openness of the ear canal without damaging the eardrum. Professional management reduces complications such as cerumen obturans or foreign bodies in the ear. Audiological checks before and after the procedure ensure the success of the treatment. Patients receive instructions on gentle self-care.
The cochlear duct is the membranous duct filled with endolymph in the cochlea in which the organ of Corti is located. It separates the scala vestibuli and scala tympani and enables frequency analysis through the basilar membrane. Vibrations of the endolymph set the membrane in motion and stimulate hair cells. Damage to the cochlear duct leads to sensorineural hearing loss and impaired tonotopy. Histological studies are investigating the regeneration potential of this structure.
The chorda tympani is a branch of the facial nerve that conveys the sense of taste from the front two-thirds of the tongue and passes through the tympanic cavity. The nerve can be irritated during otitis media or middle ear surgery, resulting in taste disturbances (dysgeusia). Symptoms usually subside after healing or removal of inflammatory stimuli. Chronic lesions require neurological clarification. The function of the chorda tympani is often tested for taste-related complaints.
Chorda myositis is an inflammation of the muscles around the chorda tympani or adjacent structures in the middle ear. It can cause pain, tinnitus and temporary hearing loss. The causes are usually viral infections or autoimmune reactions. It is treated with anti-inflammatory medication and physiotherapy. Otitis media and neuralgia should be ruled out in the differential diagnosis.
Chronic otitis media is a long-lasting inflammation of the middle ear, often with perforation of the eardrum and recurrent effusions. Symptoms include chronic discharge (otorrhea), hearing loss and occasional episodes of pain. Treatment includes surgical repair, tympanoplasty and antibiotic therapy. Long-term monitoring prevents complications such as cholesteatoma. Audiometry documents the development of hearing function.
A CIC hearing aid (Completely-in-Canal) sits completely in the ear canal and is almost invisible. It uses the natural sound funnel function of the outer ear and is comfortable to wear. Due to the small design, the range and battery size are limited, but it is ideal for mild to moderate hearing loss. Fitting requires exact ear impression and fine tuning by the acoustician. Regular cleaning is important to avoid earwax deposits.
The cochlea is the spiral-shaped inner ear organ in which sound is converted into neuronal signals. Hair cells are located on its basilar membrane, which encode different frequencies depending on where they are deflected. Sensory transduction takes place through mechano-electrical conversion in the hair cells. Damage to the cochlea is the main cause of sensorineural hearing loss. Research on cochlear regeneration aims to restore lost hair cells.
A cochlear implant is an electronic inner ear prosthesis that converts sound signals into electrical impulses and transmits them directly to the auditory nerve. It consists of an external speech processor and an implanted electrode array. The CI enables deaf or profoundly hearing-impaired patients to understand speech, often after a short rehabilitation phase. The indication is made by a multidisciplinary team after audiometry and MRI. Speech training and adaptation of the processor are crucial for success.
Cochleoplasty refers to surgical procedures on the cochlea, such as the removal of cholesteatomas or implant placement. Access is usually via the round window or a cochleotomy. The aim is to maintain or restore function in middle ear and inner ear disorders. Postoperative audiometry monitors hearing gain and freedom from complications.
Cochlear dead zones are areas on the basilar membrane without functional hair cells, caused by noise, age or ototoxins. They manifest as severely elevated thresholds in the affected frequency regions of the audiogram and impair speech comprehension. Dead zones are irreversible; therapy aims at compensation through hearing aids or a CI. Mapping strategies for CIs take dead zones into account for optimal stimulation.
The cochlear nucleus in the brain stem is the first central station of the auditory pathway where auditory nerve fibers end. It is divided into ventral and dorsal parts with different tasks in time and frequency analysis. From here, signal pathways run to higher auditory centers and to the cerebellum. Lesions lead to central auditory processing deficits. Electrical stimulation of the nucleus is used for auditory brainstem implants.
The biological cochlear amplifier is created by the activity of the outer hair cells, which generate mechanical feedback and thus increase the sensitivity and frequency selectivity of the cochlea. This active process amplifies soft sounds by up to 50 dB and sharpens the sound resolution. Damage to outer hair cells leads to broadband hearing loss and reduced speech audiometry performance. Otoacoustic emissions indirectly measure the function of this amplifier.
A cochleotomy is the surgical opening of the cochlea, usually to fix CI electrodes in the inner cavity. Access is made carefully at the round window in order to preserve residual hearing. Precise surgery minimizes trauma and preserves structures for possible residual function. Postoperatively, the electrode is checked by X-ray and audiometry. Complications such as perilymph leakage require immediate revision.
The inferior commissure is a nerve pathway that connects the left and right inferior colliculi in the midbrain and thus supports binaural processing of sound information. It enables the integration of time and level differences of both ears for directional hearing. Lesions lead to impaired localization and reduced speech comprehension in complex acoustic situations. Animal studies are investigating its role in auditory plasticity.
Compliance of the middle ear describes the mobility of the eardrum and ossicular chain in response to changes in pressure. It is measured using tympanometry and expressed in ml or mmho. Low compliance indicates stiffening (e.g. otosclerosis), high compliance indicates perforation of the eardrum. The compliance curve helps to differentiate between middle ear diseases. Treatment decisions for tympanoplasty or stapes surgery are based on compliance data.
The connective tissue layer of the eardrum lies between the skin and mucous membrane layer and gives it stability and elasticity. It consists of collagen fibres that optimize vibration properties. Injuries to this layer, such as perforations, impair sound conduction and require surgical reconstruction. In tympanoplasty, this layer is replaced by grafts. Histological examinations show the ability to regenerate under certain conditions.
The organ of Corti is located on the basilar membrane and contains inner and outer hair cells that convert sound into electrical signals. Inner hair cells are primary sensory cells, while outer hair cells act as cochlear amplifiers. The mechanical movement of the tectorial membrane stimulates the hair cells, whose stereocilia generate electrochemical stimuli. Damage leads to sensorineural hearing loss and reduced frequency resolution. Research is aimed at cell regeneration using gene therapy.
The development of the organ of Corti begins embryonically and is largely completed by birth. Critical phases include differentiation of hair cells and neuronal connection to the auditory nerve. Disruptions in this phase lead to congenital hearing loss. Animal models show that growth factors could stimulate regeneration. Understanding developmental biology is key to future therapies.
The membrane of Corti separates the scala media and scala tympani within the cochlea and supports the organ of Corti. Its stiffness varies along the cochlea and enables tonotopic frequency analysis. Changes due to age or noise influence membrane mechanics and hearing thresholds. Histological staining reveals microstructures and pathologies. Repair approaches test biomaterials for membrane regeneration.
Cortical Auditory Evoked Potentials (CAEP) are slow brain responses to sound stimuli, measured in the auditory cortex. They provide information about the cortical processing of sounds and speech. CAEPs are used in pediatric audiological assessments and central hearing disorders. The latency and amplitude of the waves allow conclusions to be drawn about the speed of stimulus processing. Clinical applications include monitoring CI users.
Cortical plasticity describes the ability of the auditory cortex to adapt structurally and functionally to changing stimuli. After hearing loss or CI implantation, neuronal networks reorganize themselves in order to make optimal use of residual hearing. Training and rehabilitation promote plastic processes and improve speech comprehension. Imaging studies (fMRI) show cortical reorganization after hearing therapy. Plasticity decreases with age, but remains present throughout life.
The VIIIth cranial nerve (vestibulocochlear nerve) carries acoustic and vestibular information from the inner ear to the brain stem. It branches into cochlear and vestibular parts and is essential for hearing and balance. Lesions lead to unilateral hearing loss, tinnitus or dizziness. Diagnosis is made by ABR and caloric testing. In the case of tumors such as acoustic neuroma, early surgical removal is indicated.
CMD (craniomandibular dysfunction) refers to functional disorders of the temporomandibular joint, which can lead to ear pain, tinnitus and hearing loss through muscle tension. Misalignments change cranial mechanics and transfer tension to the region of the ear canal. Treatment includes physiotherapy, splint therapy and myoelectric stimulation. Interdisciplinary cooperation between dentistry, ENT and physiotherapy is essential. Improvement is often seen within a few weeks.
Cross-hearing occurs when a sound stimulus is perceived by the non-tested ear during an audiometric test. This distorts measurement results and makes it difficult to attribute hearing losses. Masking with white noise in the opposite ear prevents cross-hearing. Correct masking is standard in the differential diagnosis of conductive and sensorineural hearing loss. Modern audiometers support automatic masking.
The cupula is a gelatinous cap in the ampulla of each semicircular canal in which hair cells are embedded. Movements of the endolymph bend the cupula and thus mechanically stimulate the hair cells. This principle enables the detection of rotational accelerations. Dysfunctions of the cupula due to otolith detachment lead to positional vertigo. Therapy is carried out with repositioning maneuvers such as Epley.
D
Attenuation describes the weakening of sound energy as it passes through a medium or component. In the ear, the middle ear with its ossicles acts as an attenuator that reduces extremely loud impulses and thus protects the inner ear. In ear canal and room acoustics, attenuation levels are measured in order to control reflections and reverberation. Hearing aids use specific attenuation filters to reduce disturbing frequencies and increase sound comfort.
The damping factor is the ratio of coupled to emitted energy in a vibrating system. In the middle ear, it indicates how elastically the ossicular chain vibrates and how strongly it absorbs vibration energy. Low damping factors indicate excessive reflections, high ones indicate strong energy losses. Audiometrically, a change in damping can indicate otosclerosis or loosening of implants.
The attenuation coefficient quantifies how quickly sound waves lose amplitude in a material or medium. In the cochlea, it influences how vibrations decay along the basilar membrane, thus shaping the frequency resolution. In building and room acoustics, it defines how strongly walls or ceilings absorb sound. Hearing aid manufacturers take material damping into account in earmoulds in order to minimize resonances.
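The exponential decay described by the attenuation coefficient can be sketched numerically. The following Python snippet is an illustrative example (function names and the choice of nepers per metre as the unit are assumptions for demonstration, not taken from this glossary):

```python
import math

def attenuated_amplitude(a0, alpha, x):
    # Amplitude after travelling distance x (m) through a medium
    # with attenuation coefficient alpha (nepers per metre).
    return a0 * math.exp(-alpha * x)

def attenuation_db(alpha, x):
    # The same loss expressed in decibels (1 Np is about 8.686 dB).
    return 20 * math.log10(math.e) * alpha * x

# A wave loses half its amplitude when alpha * x = ln(2):
half = attenuated_amplitude(1.0, math.log(2), 1.0)  # -> 0.5
```

The conversion factor 20·log10(e) ≈ 8.686 links the natural (neper) and logarithmic (decibel) descriptions of the same loss.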
Dehiscence of the semicircular canal is characterized by a bony gap in the roof of a semicircular canal, usually in the superior canal. This opening leads to abnormal irritation of the cupula and causes symptoms such as autophonic noise, dizziness with pressure changes and hearing loss. Diagnosis is made by CT scan and vestibular function tests. Surgical closure of the dehiscence can significantly alleviate symptoms.
Decompensation occurs when a hearing loss becomes so severe that hearing aids or central compensation mechanisms are no longer sufficient. Those affected suddenly find that familiar hearing aid programs no longer suffice and report considerable difficulties in understanding. This condition requires re-evaluation of the fitting, often with stronger amplification or a cochlear implant. Rapid adjustment reduces stress and social isolation.
Auditory deprivation occurs when the brain receives no or only greatly reduced acoustic stimuli over a long period of time. This leads to regression of central auditory networks and impaired speech comprehension, even if peripheral hearing is later restored. Early hearing care for children is essential to prevent deprivation and ensure normal speech development. Rehabilitation includes intensive auditory training to promote neural plasticity.
Desensitization aims to reduce hypersensitivity to tinnitus sounds by confronting those affected with noise or music stimuli in a controlled manner. Through regular, controlled exposure, the brain becomes accustomed to the noise and increasingly blocks it out. Psychological methods such as cognitive behavioral therapy complement auditory training. Long-term studies show a lasting reduction in tinnitus stress and improved quality of life.
Detection describes the lowest sound pressure level at which the ear can just barely perceive a sound. The detection threshold is determined in a quiet room using pure-tone audiometry and forms the hearing curve in the audiogram. It serves as the basis for defining normal hearing and degrees of hearing loss. Variations in detection performance provide information about peripheral and central hearing disorders.
The decibel (dB) is a logarithmic unit for specifying level ratios, often sound pressure or sound intensity. An increase of 10 dB corresponds to approximately a doubling of the perceived volume. In audiology, hearing thresholds are specified relative to a standard (0 dB HL). Decibel values help to define noise exposure limits and calibrate hearing aid amplification.
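The logarithmic nature of the decibel can be made concrete with a short calculation. This Python sketch (function names chosen here for illustration) converts between sound pressure and dB SPL relative to the standard reference of 20 µPa:

```python
import math

P_REF = 20e-6  # reference sound pressure in pascals (0 dB SPL)

def spl_db(pressure_pa):
    # Sound pressure level in dB SPL for a given RMS pressure in Pa.
    return 20 * math.log10(pressure_pa / P_REF)

def pressure_ratio(delta_db):
    # Pressure ratio corresponding to a level difference in dB.
    return 10 ** (delta_db / 20)

# Doubling the sound pressure adds about 6 dB:
delta = spl_db(40e-6) - spl_db(20e-6)  # -> ~6.02 dB
# A 10 dB increase corresponds to about 3.16x the pressure
# (10x the intensity), perceived as roughly twice as loud:
ratio = pressure_ratio(10)             # -> ~3.162
```

Note the factor 20 for pressure (a field quantity) versus 10 for intensity (a power quantity); both describe the same level difference.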
Diagnostic audiometry includes all tests that determine the type and extent of hearing loss, including sound, speech and impedance measurements. It differentiates between conductive and sensorineural hearing loss as well as central disorders. The results serve as the basis for treatment decisions such as hearing aid fitting or surgical interventions. Modern computer-aided audiometers provide precise, reproducible findings.
In dichotic listening, different acoustic signals are presented to each ear simultaneously to test central processing and lateralization. Typical tests present competing speech or tone sequences to assess attention and filtering ability. Disturbances are seen in central auditory processing disorders or after strokes. Dichotic paradigms are used in pediatric audiology diagnostics and neurorehabilitation.
Differential tone audiometry measures the ability to recognize very small frequency differences between two tones. Test subjects indicate which tone sounds higher or lower; this allows the frequency resolution of the ear to be quantified. Reduced differentiation ability indicates central or cochlear disorders. The method provides insights into neuronal sharpening and plasticity of the auditory system.
Digital hearing systems convert acoustic signals into digital data, process them using algorithms and convert them back into sound. They offer adaptive noise reduction, feedback management and multi-channel compression. Software-supported fine adjustment allows individual sound profiles for different listening situations. Compared to analog devices, they provide better speech intelligibility and greater flexibility.
Discrimination refers to the ability to perceive two similar acoustic stimuli as different, such as differences in pitch or volume. It is tested in speech and tone audiometry and is crucial for speech comprehension. Limited discrimination is found in cochlear dead zones and central processing disorders. Training programs aim to improve discrimination thresholds.
Distance hearing describes the detection of sound sources that are far away from the listener. Sound pressure levels fall with increasing distance, which is why the ear and hearing systems must be sensitive to quiet signals. In room acoustics and sound reinforcement technology, loudspeaker positions and reverberation times are optimized to facilitate distance hearing. With hearing loss, distance hearing deteriorates more than near hearing, which requires special amplification strategies.
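The drop in level with distance follows the inverse distance law for a point source in a free field. The following Python sketch is an idealized illustration (it ignores reflections and air absorption; the function name is an assumption):

```python
import math

def level_at_distance(l1_db, r1, r2):
    # Free-field sound pressure level at distance r2, given the
    # level l1_db measured at distance r1 (idealized point source).
    return l1_db - 20 * math.log10(r2 / r1)

# Each doubling of distance costs about 6 dB:
at_2m = level_at_distance(60.0, 1.0, 2.0)  # -> ~54 dB
at_4m = level_at_distance(60.0, 1.0, 4.0)  # -> ~48 dB
```

In real rooms, reverberation partially offsets this decay, which is one reason room acoustics matter so much for distance hearing.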
A distortion product OAE is a back emission generated by the cochlea when two sounds are present at the same time and the non-linear properties of the hair cells generate distortion products. These emissions are measured in the ear canal and provide information about the function of the outer hair cells. The presence of DPOAEs indicates an intact cochlear amplifier, their absence indicates damage. DPOAE tests are fast, objective and are also used in newborns.
Distortion products arise in non-linear systems when two or more frequencies are mixed and new frequencies (sum and difference tones) are generated. In the ear, they are produced by the active amplification of the outer hair cells. They can be used diagnostically as otoacoustic emissions and indicate cochlear health. In electroacoustics, they serve as an indicator of system linearity and filter quality.
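The combination frequencies produced by a non-linearity can be enumerated directly. This Python sketch (an illustration, with the function name chosen here) lists the sum and difference tones up to a given order for two primary tones:

```python
def distortion_products(f1, f2, max_order=3):
    # Combination tones generated by a non-linearity driven with
    # two primaries f1 < f2 (frequencies in Hz), up to max_order.
    products = set()
    for m in range(max_order + 1):
        for n in range(max_order + 1):
            # Skip the primaries themselves and the trivial (0, 0) term.
            if 0 < m + n <= max_order and (m, n) not in {(1, 0), (0, 1)}:
                for f in (m * f1 + n * f2, abs(m * f1 - n * f2)):
                    if f > 0:
                        products.add(f)
    return sorted(products)

# With f1 = 1000 Hz and f2 = 1200 Hz, the cubic difference tone
# 2*f1 - f2 = 800 Hz is the component measured clinically as the DPOAE.
dps = distortion_products(1000, 1200)
```

The 2f1−f2 component is the one exploited in DPOAE measurements because the cochlea generates it particularly strongly.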
DPOAE refers to the measurement of specific distortion products generated by the cochlea in response to two test tones. It allows non-invasive assessment of outer hair cell function without the active cooperation of the patient. DPOAEs are considered standard in newborn hearing screening and early ototoxicity diagnostics. Absence of DPOAEs with a normal tympanogram indicates sensorineural hearing loss.
Pressure equalization between the middle ear and the environment takes place via the Eustachian tube and ensures that the eardrum can vibrate freely. Malfunctions lead to negative or positive pressure, which causes pain and hearing loss. Techniques such as Valsalva maneuvers or tube catheters treat tubal dysfunction. Tympanometry documents the pressure curve and helps in the decision making process for tympanostomy tubes.
A feeling of pressure occurs when the middle ear pressure differs from the external and internal pressure, usually when traveling by plane or having a cold. The eardrum stretches and mechanical sound conduction deteriorates. Repeated ventilation exercises activate the auditory tube and equalize the pressure. A persistent feeling of pressure may indicate tubal dysfunction or middle ear effusion.
Pressure pain in the ear indicates inflammatory processes such as otitis media or exostoses. Palpation of the tragus and percussion of the mastoid region provoke pain in the case of pathological changes. Pain intensity often correlates with the degree of inflammation and the amount of effusion. Pain therapy combines analgesics with targeted treatment of the underlying disease.
The dynamic range describes the difference between the hearing threshold and the pain threshold of the ear. It typically ranges from 0 dB HL to around 120 dB SPL. Hearing aids must cover this range without causing distortion. Reduced dynamic range in hearing loss requires compression to attenuate loud sounds and make soft sounds audible.
Dynamic compression in hearing aids reduces the difference in level between soft and loud signals by attenuating loud sounds to a greater extent. This means that ambient noise remains tolerable and quiet speech becomes audible. Compression parameters such as ratio and attack/release time are set individually. However, excessive compression can impair sound quality and speech intelligibility.
Dysacusis describes an impaired sound quality despite being able to hear, such as distortion or blurring of the speech signal. Those affected hear sounds but cannot distinguish them clearly. This is usually caused by cochlear non-linearities or central processing deficits. Therapy includes targeted hearing training and adaptation of the signal processing in the hearing aid.
Tubal dysfunction occurs when the Eustachian tube does not open and close properly, leading to pressure build-up and effusion in the middle ear. Symptoms include a feeling of pressure, hearing loss and recurring infections. Diagnosis is by tympanometry and tube function test. Treatment ranges from nasal drops and balloon dilatation to tympanostomy tube implantation.
E
Echolocation is the active localization of objects by emitting sound pulses and evaluating the returning echoes. Bats and some marine mammals use this method to navigate in the dark or murky water and find prey. In humans, echolocation can be trained, for example by blind people who use it to derive spatial information acoustically. Neurobiological studies show that auditory areas in the brain are plastically reorganized in the process. Technical applications are adapting the principle for sonar and ultrasound devices in medicine and industry.
Intrinsic sensitivity refers to the smallest signal that a measuring device or hearing system can still reliably distinguish from its own noise. In hearing aids, it corresponds to the internal microphone and amplifier noise, which sets the lower limit for usable amplification. A low value is important so that quiet environmental sounds are not masked by inherent noise. Manufacturers optimize electronic components and filter algorithms to reduce the intrinsic sensitivity. In measurement technology, the noise floor is reported as a key figure.
Intrinsic noise is the continuous background noise of electronic systems in the absence of an input signal. In hearing systems, it can impair the perception of very quiet sounds and reduce wearing comfort. The level of self-noise depends on circuit topology, component quality and filter design. Modern digital hearing aids use noise reduction algorithms to actively minimize self-noise. Regular maintenance and cleaning of the microphones also prevent extraneous noise.
Acoustic sleep aids such as white noise, the sound of the sea or gentle piano music promote falling asleep and staying asleep by masking disturbing ambient noise. People with tinnitus in particular benefit from continuous sound patterns that draw the focus away from the ear noise. Studies show that such sounds shorten the time it takes to fall asleep and improve sleep quality. Apps and hearing aid programs offer customizable sound profiles and timer functions. It is important to keep the volume below 40 dB so as not to put additional strain on the hearing.
The transient response describes the initial reaction of a vibrating system to a sudden sound stimulus before a steady state is reached. In the ear, this applies to the eardrum and ossicular chain, which briefly overshoot before settling at stable amplitudes. Audiometric impedance measurements use the transient process to detect middle ear pathologies such as otosclerosis or tube occlusion. Abnormal transient response times indicate altered stiffness or mass of the structures. In hearing aid technology, the transient response of filters is optimized in order to minimize distortion during rapid level changes.
The setting range of a hearing aid defines the level range that the device can process and amplify without distortion. It ranges from the minimum input volume, at which amplification still takes place, to the maximum volume, at which compression sets in. A wide adjustment range allows very quiet and loud signals to be heard without clipping or discomfort. Audiologists select a device with an appropriate range based on the individual hearing loss profile. Technical datasheets provide the range of adjustment along with compression ratio and gain factors.
Single frequency analysis breaks down complex sound signals into their individual frequency components using Fourier transformation. It provides amplitude and phase-specific information on each frequency component and is the basis for spectral analysis in audiology. Applications can be found in the analysis of otoacoustic emissions, room acoustics measurements and hearing aid fine-tuning. Diagrams show level curves across the frequency spectrum and allow conclusions to be drawn about filter behavior and cochlear function. In research, single frequency analysis is used to investigate neuronal response patterns in the auditory system.
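The decomposition into individual frequency components can be demonstrated with a small discrete Fourier transform. This Python sketch uses a naive O(N²) DFT purely for illustration (real analysis software uses the FFT; all names here are chosen for the example):

```python
import cmath
import math

def dft_magnitudes(samples):
    # Magnitude spectrum of a real signal via the discrete Fourier
    # transform, normalized so a unit sine yields a 0.5 peak.
    n = len(samples)
    return [
        abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))) / n
        for k in range(n // 2)
    ]

# A signal containing 50 Hz and 120 Hz components, sampled at 400 Hz
# for one second, so each bin is exactly 1 Hz wide:
fs, n = 400, 400
signal = [math.sin(2 * math.pi * 50 * t / fs)
          + 0.5 * math.sin(2 * math.pi * 120 * t / fs)
          for t in range(n)]
spectrum = dft_magnitudes(signal)
# Bins 50 and 120 carry the two components; other bins stay near zero.
```

The peak heights recover the component amplitudes (halved by the normalization), which is exactly the amplitude-per-frequency information described above.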
In single tone audiometry, sounds of individual frequencies and levels are presented one after the other to determine the hearing threshold per frequency. The results are visualized in the audiogram as air and bone conduction curves. This procedure is standard in the diagnosis of conductive and sensorineural hearing loss. Modern audiometers offer automated test protocols and adaptive procedures for faster, more reliable measurements. The validity depends on the subject's cooperation and reaction time.
Electrocochleography (ECochG) measures electrical potentials of the inner ear and auditory nerve in response to acoustic stimuli. A needle electrode on the eardrum or an electrode in the ear canal records the summating potential (SP) and the compound action potential (AP). ECochG is used to diagnose Menière's disease, endolymphatic hydrops and acoustic trauma. An elevated SP/AP ratio correlates with the severity of the hydrops. The examination is minimally invasive and provides important data on inner ear mechanics.
The sensitivity range describes the level range in which the human ear or a hearing system can process acoustic stimuli without distortion. For the human ear, this range lies between the hearing threshold (0 dB HL) and the pain threshold (~120 dB SPL). Hearing aids use compression to adapt this range to the residual hearing in order to attenuate loud sounds and make soft sounds audible. Measurement systems calibrate the sensitivity range to ensure linear response within this window.
The threshold of sensitivity is the lowest sound pressure level that can just be perceived by the ear. In audiometry, it is determined separately for each test frequency and documented in the audiogram. Deviations from normal values define the degree of hearing loss. Together with the pain threshold, the sensitivity threshold forms the dynamic range of hearing. Clinically, it helps to differentiate between conductive and sensorineural hearing loss.
Endolymph is the potassium-rich fluid in the cochlear duct and the membranous semicircular canals. It transmits mechanical vibrations to hair cells and generates electrochemical signals. A pressure disturbance of the endolymph, as in endolymphatic hydrops, leads to dizziness and hearing loss. Laboratory measurements and clinical tests of endolymphatic function support the diagnosis of Menière's disease. Research focuses on the regulation of endolymph volume for the treatment of vestibular disorders.
Energy measurement integrates sound levels over time and frequency to assess cumulative noise exposure. It forms the basis for occupational noise protection guidelines that define maximum daily doses. Devices continuously record level values and calculate daily exposure values (LEX,8h). Epidemiological studies correlate energy exposure with the risk of hearing loss. Preventive measures are based on energy measurements to reduce noise damage.
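The daily exposure value LEX,8h is an energy average over a reference shift of 8 hours. The following Python sketch illustrates the standard calculation (the function name and example levels are chosen here for demonstration):

```python
import math

def lex_8h(exposures):
    # Daily noise exposure level LEX,8h from (level_dB, hours) pairs,
    # energy-averaged over a reference working day of 8 hours.
    t0 = 8.0
    energy = sum(hours * 10 ** (level / 10) for level, hours in exposures)
    return 10 * math.log10(energy / t0)

# A full 8 h shift at 85 dB(A) gives LEX,8h = 85 dB(A) by definition:
baseline = lex_8h([(85, 8)])           # -> 85.0
# 4 h at 85 dB(A) plus 4 h at 94 dB(A) averages to about 91.5 dB(A),
# because the louder half dominates the energy sum:
mixed = lex_8h([(85, 4), (94, 4)])     # -> ~91.5
```

The logarithmic averaging explains why short loud episodes dominate the daily dose: each 3 dB increase doubles the sound energy.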
Relaxation sounds such as white noise, the sound of the sea or gentle melodies mask disturbing noises in the ears and promote sleep and stress reduction. In tinnitus patients, they reduce the focus on the ringing in the ears and improve quality of life. Clinical studies show that controlled exposure to sound reduces anxiety and sleep onset latency. Apps and hearing aid programs offer personalized sound libraries. It is important to keep levels below 40 dB to avoid additional hearing stress.
Diseases of the Eustachian tube include tubal catarrh, tubal stenosis and tubal obstruction. Symptoms include a feeling of pressure, hearing loss and recurrent middle ear effusions. Tympanometry and tube function tests are used for diagnosis. Balloon dilatation, nasal steroids and tympanostomy tubes are used therapeutically. Chronic cases require close monitoring and interdisciplinary treatment.
The excitation threshold is the minimum stimulus level that triggers a response in hair cells or auditory neurons. In the cochlea, it varies along the basilar membrane and defines the tonotopy. Measurements by microelectrodes or evoked potentials provide insight into neuronal sensitivity. Elevated thresholds indicate hair cell damage or central adaptation.
A replacement hearing aid serves as a short-term stand-in when the main hearing aid fails and is preconfigured with standard programs for everyday sounds. It prevents underuse and social isolation until the main device is repaired. Audiologists pre-program replacement devices individually to ensure seamless hearing comfort. Regular maintenance minimizes unexpected failures.
A substitute signal is an artificially generated sound pattern that compensates for missing acoustic information. It is used in hearing systems to mask tinnitus or to synthesize missing frequencies. Substitute signal algorithms are based on psychoacoustic models of hearing perception. The aim is to optimize speech intelligibility and sound quality.
The extended high-frequency range covers frequencies above 8 kHz to around 16 kHz and contributes to sound color and music perception. Early detection of high-frequency loss serves as an early indicator of noise damage. High-frequency audiometry tests this range to detect subtle deficits. Hearing aids with high-frequency amplification improve music and speech intelligibility in complex sound environments.
The Eustachian tube connects the middle ear and nasopharynx, regulates pressure equalization and protects against nasal secretions. It opens when swallowing or yawning and closes passively to ensure ventilation of the middle ear. Dysfunctions lead to a feeling of pressure, hearing loss and effusions. Balloon dilatation and nasal corticoids are established therapies. Functional tests measure opening pressure and duration.
Evoked potentials are electrical responses of the auditory system to sound stimuli, measured by scalp electrodes. They are divided into ABR (brainstem), MLR (midlatency) and CAEP (cortical). These objective tests check the integrity of the auditory pathway without active cooperation. Used for newborn screening, neurological diagnostics and CI fitting. Analysis of latency and amplitude provides information about lesion locations.
Exostoses are benign bony growths in the external auditory canal, often caused by repeated cold and moist stimuli ("surfer's ear"). They narrow the canal, promote cerumen retention and increase the risk of otitis externa. Surgical removal clears the ear canal again. Prevention through ear protection against cold and water is recommended.
Exposure limits define permissible noise levels at the workplace over specified periods of time, e.g. 85 dB(A) over 8 hours. They are based on epidemiological studies on noise damage and are enshrined in law. Exceedances require technical noise reduction and personal hearing protection. Measurements provide LEX,8h values for compliance with the limit values.
External otitis is an inflammation of the external auditory canal, usually caused by bacteria or mycosis. Symptoms include itching, pain and discharge. Treatment includes cleansing, topical antibiotics or antifungals and keeping the ear dry. Chronic forms require long-term care and pH-neutral cleansing preparations.
An extra-cochlear implant stimulates the auditory pathway outside the cochlea, such as brainstem implants for retrocochlear deafness. Electrodes are placed in the area of the cochlear nucleus. It is indicated when the cochlea or auditory nerve is non-functional. Rehabilitation includes intensive speech training and mapping sessions.
F
Acoustic feedback occurs when a microphone picks up sound from the speaker and amplifies it again, creating a feedback loop. This typically manifests as whistling or humming and can severely degrade sound quality. Adaptive algorithms are used in hearing aids and sound reinforcement systems to detect and suppress feedback in real time. Mechanical measures such as tight earmoulds or directional microphones also minimize feedback risks. An optimally tuned system prevents audible artifacts for the user.
In field tone audiometry, continuous tones at defined frequencies and levels are presented via headphones or loudspeakers to determine hearing sensitivity. Unlike pulsed-tone methods, the patient holds down a response button as soon as they hear the tone and releases it when it disappears. This produces a precise threshold curve that documents adaptation behavior and hearing range. The procedure is particularly suitable for research and differential diagnosis of rare hearing disorders. Modern devices automate the process and store the results digitally.
The petrous bone (os petrosum) is part of the temporal bone and surrounds the inner ear as well as the auditory and vestibular nerves. Its dense bone structure protects the sensitive sensory organs from mechanical influences. Inflammation or tumors in the temporal bone can lead to hearing loss, tinnitus and dizziness. Imaging (CT, MRI) shows the petrous temporal bone in detail in order to detect pathological changes. Surgical interventions in this area require the utmost precision in order to protect nerve structures.
An acoustic filter selects certain frequency ranges and suppresses others in order to shape sound spectra in a targeted manner. Multi-band compression filters are used in hearing aids to emphasize speech and attenuate background noise. Filter types such as high-pass, low-pass, band-pass and notch filters allow specific interventions in the frequency spectrum. The characteristics of a filter are described by parameters such as slope and quality (Q factor). Precise filtering improves speech intelligibility and sound quality.
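To make the filtering principle concrete, here is a minimal first-order low-pass in Python — an illustrative sketch of how a filter passes one frequency range and attenuates another, not any hearing aid's actual multi-band algorithm:

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate_hz):
    """Minimal first-order (one-pole) low-pass filter: passes content
    below the cutoff and attenuates above it at about 6 dB per octave.
    The smoothing coefficient alpha derives from the RC time constant."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate_hz
    alpha = dt / (rc + dt)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)  # exponential smoothing of the input
        out.append(y)
    return out
```

Feeding a 100 Hz and a 4 kHz sine through this filter with a 500 Hz cutoff leaves the low tone nearly untouched while strongly attenuating the high one — the same selective behavior that hearing aid filter banks exploit per band.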
The filter characteristic defines how strongly and in which frequency range a filter attenuates or amplifies. It is represented graphically in the frequency response, whereby the transition bandwidth and slope are decisive. In hearing aid technology, the filter characteristic determines which speech frequencies are emphasized and which noise frequencies are suppressed. Adaptive filters dynamically adjust their characteristics to changing listening situations. A precise design prevents sound distortion and reduces listening effort.
The filter quality (Q factor) describes the sharpness of a resonance peak in a bandpass or notch filter. A high Q value means a narrow bandwidth with steep edges, while a low Q value enables wider transitions. In hearing aids, the Q value is selected so that speech bands are clearly separated and noise is minimized. However, too high a Q factor can cause resonance effects and sound coloration. Fine-tuning the Q-factors is part of the hearing aid fitting by the acoustician.
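The relationship between Q value and bandwidth is a one-line formula; the sketch below uses hypothetical example frequencies to show how a narrow band yields a high Q:

```python
def q_factor(center_hz: float, lower_3db_hz: float, upper_3db_hz: float) -> float:
    """Q = center frequency divided by the -3 dB bandwidth."""
    return center_hz / (upper_3db_hz - lower_3db_hz)

# A band from 800 Hz to 1250 Hz around a 1 kHz center gives Q ~ 2.2;
# narrowing the band to 950-1050 Hz raises Q to 10.
print(round(q_factor(1000, 800, 1250), 2))  # 2.22
print(round(q_factor(1000, 950, 1050), 1))  # 10.0
```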
The slope describes how quickly a filter attenuates outside its passband, measured in dB/octave. Steep slopes (e.g. 24 dB/octave) suppress unwanted frequencies more strongly, but can lead to phase distortion. In hearing systems, a compromise is chosen between attenuation effect and natural sound. Edge steepness also influences the crosstalk of neighboring filter bands. Adaptive systems vary the slope according to the situation in order to achieve optimum speech intelligibility.
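For an idealized filter, the attenuation beyond the cutoff is simply the slope multiplied by the distance in octaves — a simplification that ignores the gradual transition region of real filters:

```python
import math

def stopband_attenuation(f_hz: float, cutoff_hz: float,
                         slope_db_per_octave: float) -> float:
    """Idealized attenuation of a low-pass beyond its cutoff:
    slope (dB/octave) times the number of octaves above the cutoff."""
    octaves = math.log2(f_hz / cutoff_hz)
    return max(0.0, slope_db_per_octave * octaves)

# Two octaves above a 1 kHz cutoff, a 24 dB/octave filter attenuates 48 dB,
# while a gentle 6 dB/octave filter only reaches 12 dB.
print(stopband_attenuation(4000, 1000, 24))  # 48.0
```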
An FM system transmits speech signals wirelessly via FM radio from a transmitter unit (teacher microphone) directly to the receiver in the hearing aid. This improves speech intelligibility in noisy environments or large rooms, as ambient noise is blocked out. FM receivers are often integrated in hearing aids or available as accessories. Range and sound quality depend on transmitter power and antenna concept. Regular calibration ensures reliable transmission without interference.
Formants are resonant frequency bands in speech that are created by the shaping of the vocal tract and characterize vowels. The first two formants (F1, F2) are crucial for distinguishing vowels. In speech audiometry, formants are analyzed to diagnose articulation disorders. Hearing aids and speech processors emphasize formants to improve intelligibility. Spectral analysis visualizes formant position and width.
Formant transitions describe the dynamic shift of formants when changing between speech sounds, for example from consonant to vowel. They are important acoustic cues for speech perception and phoneme recognition. Distorted or attenuated transitions lead to comprehension problems, especially in background noise. Audiological tests evaluate formant transitions in real time. Speech training can improve perception and production of these transitions.
A free field is an acoustically unbounded space without reflective surfaces in which sound propagates spherically. In audiometry, free-field conditions are simulated in order to test hearing aids and loudspeakers objectively. Measuring microphones record the sound pressure at various distances from the sound source. Free-field measurements provide data for sound reinforcement planning and room acoustics optimization. In practice, low-reflection chambers or open-field setups are used.
In free-field measurements, the sound pressure is determined in an open, anechoic environment in order to obtain precise level and frequency response data. The loudspeaker and microphone are positioned at standardized distances, usually 1 m. The results are used to calibrate audiometer headphones and loudspeaker systems. Sources of error such as ground reflections are minimized by shadowing. Free-field measurements are the reference for room and product acoustics.
Frequency refers to the number of oscillation cycles per second and is measured in Hertz (Hz). In the hearing range, it typically ranges from 20 Hz to 20 kHz, with speech being predominantly between 250 Hz and 4 kHz. Frequency analysis is central to audiometry, otoacoustic emissions and hearing aid filter design. The cochlea and auditory cortex are organized tonotopically, meaning that different frequencies are processed at different locations. Changes in frequency perception can indicate cochlear or central disorders.
Frequency resolution describes the ability to perceive two closely spaced frequencies as separate sounds. It depends on the auditory filter bandwidth and the tuning properties of the basilar membrane in the cochlea. High resolution is essential for music and speech recognition, as many overtones lie close together. Narrow filter bands are used in hearing aids to maximize frequency resolution. Psychoacoustic tests determine individual resolution limits.
A frequency band is a defined range of frequencies that is processed by a filter or amplifier. In multi-band hearing aids, the audio spectrum is often divided into 4-16 bands in order to process specific speech and interference frequencies. Each band can be compressed, amplified or attenuated separately. The band limits and bandwidths are adapted to the hearing loss profile. Fine tuning of the bands optimizes speech intelligibility and sound fidelity.
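Band edges in multi-band systems are typically spaced logarithmically rather than linearly, matching the ear's roughly logarithmic frequency perception. The sketch below shows the spacing principle only; the actual band layout is manufacturer- and fitting-specific:

```python
def band_edges(f_low: float, f_high: float, n_bands: int):
    """Logarithmically spaced band edges for splitting a spectrum
    into n_bands processing bands (illustrative scheme)."""
    ratio = (f_high / f_low) ** (1.0 / n_bands)
    return [f_low * ratio**i for i in range(n_bands + 1)]

# 4 bands covering the core speech range 250 Hz - 4 kHz:
print([round(f) for f in band_edges(250, 4000, 4)])  # [250, 500, 1000, 2000, 4000]
```

Each resulting band then gets its own gain and compression settings according to the audiogram.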
The frequency range indicates the entire spectrum in which a system (ear, microphone, loudspeaker) operates. For the human ear, this range is approximately between 20 Hz and 20 kHz, with individual variability and age dependency. Hearing aids typically cover 100 Hz to 8 kHz in order to optimally amplify speech. Frequency ranges are indicated in audiograms and technical specifications. Limitations in the frequency range have a direct impact on sound quality and intelligibility.
The frequency response shows the amplification or attenuation of a system over the frequency spectrum. In hearing aid technology, it documents how different frequencies in the output signal are adjusted. A linear frequency response reproduces the input signal without distortion, compressed response curves improve speech components. Measurements in the free field or with an artificial ear provide exact curves. Clinical fitting software visualizes frequency response and allows fine tuning.
Frequency modulation (FM) changes the carrier frequency of a signal depending on a modulation signal, such as speech. FM systems in hearing acoustics transmit audio signals wirelessly with high interference immunity. FM receivers in hearing systems decode the modulated signal and improve speech intelligibility in noisy environments. Technical parameters such as modulation index and bandwidth determine transmission quality. FM is standard in schools and conference systems for the hearing impaired.
Frequency selectivity describes the ability of the ear or filter to process individual frequencies separately. In the cochlea, it is created by tonotopic tuning of the basilar membrane. Hearing aids attempt to reproduce selectivity by using narrow filter bands. Loss of selectivity leads to wider masking and poorer speech intelligibility. Psychoacoustic tests measure selectivity via masking paradigms.
Frequency distortion occurs when a system amplifies or attenuates certain frequencies unevenly, which changes the sound spectrum. This can be caused by non-linear filters, overdrive or diaphragm damage. In hearing aids, distortion is minimized by linear amplification stages and feedback suppression. Measurements with sine sweeps and spectral analysis quantify distortion. High distortion impairs naturalness and speech intelligibility.
A crossover splits an audio signal into several bands in order to amplify or filter them separately. In multi-channel hearing aids, it enables differentiated compression and noise suppression per band. Passive crossovers work with coils and capacitors, active ones with electronic filters. The edge steepness and filter quality determine the selectivity between bands. Precise crossovers prevent crosstalk and phase errors.
Functional hearing impairment occurs when there is no evidence of organic damage, but hearing behavior is impaired. The causes are often psychological, such as stress or attention disorders. Symptoms include fluctuations in the hearing threshold and discrepancies between test and everyday performance. Diagnostics combines objective measurements (OAE, AEP) with behavioral audiometry. Therapy includes psychological support and habituation-based hearing training.
Functional tests examine specific aspects of hearing and balance function, such as the stapedius reflex, tube function or vestibular stimuli. They supplement audiograms with information on mechanical and central processing. Standard tests are tympanometry, VEMP and caloric testing. Results are incorporated into differentiated diagnoses and treatment plans. Modern test systems automate processes and provide reproducible data.
G
The spiral ganglion is a nerve cell ganglion inside the cochlea in which the cell bodies of the auditory nerve fibers (bipolar neurons) are located. It receives electrochemical signals from the hair cells and transmits action potentials to the brain stem via the vestibulocochlear nerve. Damage or degeneration in the spiral ganglion, for example due to age or ototoxins, leads to sensorineural hearing loss. Researchers are investigating how electrical stimulation of the ganglion can improve the performance of cochlear implants. Histological studies show that residual cell populations are crucial for the success of an implantation.
In psychoacoustics, gating describes the modulation of a sound signal through an on-off window in order to examine the onset and decay of the signal. It is used to analyze how quickly the ear reacts to the onset or cessation of a sound and how precisely the listener recognizes the signal boundaries. These measurements provide insights into temporal resolution and neuronal processing speeds in the auditory system. Clinically, gating helps to diagnose central auditory processing disorders. Experiments show that gating abilities decrease with age and hearing loss.
Sign language is a fully-fledged visual-gestural language used by deaf and hard of hearing people to communicate. It has its own grammar rules, syntax and vocabulary, independent of spoken languages. It plays an important role in hearing rehabilitation as an alternative form of communication. Interpreters and subtitles complement media services for sign language users. Research into the neurolinguistics of sign language shows that the same areas of the brain are activated as with spoken languages.
Ear training comprises systematic exercises to develop musical and linguistic hearing, for example recognizing intervals, chords or speech sounds. It promotes neuronal plasticity in the auditory cortex and improves differentiation and discrimination skills. In audiotherapy, auditory training is used to treat central auditory processing disorders. Software-supported programs offer adaptive exercises and feedback. Long-term training increases speech intelligibility, especially in noisy environments.
The external auditory canal conducts sound to the eardrum and, due to its shape, creates resonances in the frequency range of 2-4 kHz, which favors speech comprehension. It is lined with skin and cerumen glands, which produce earwax and prevent infections. Exostoses or cerumen obstructions impair sound conduction and lead to hearing loss. Otoscopic examination and cleaning are standard in ENT practice. Surgical procedures on the ear canal require preservation of skin integrity to avoid scarring and stenosis.
The hammer, anvil and stirrup form the chain of ossicles in the middle ear, which mechanically amplify sound pressure from the eardrum to the oval window. Their lever action increases the sound pressure by a factor of about 1.3 before vibrations are transmitted to the inner ear. Joints and muscles (stapedius, tensor tympani) regulate stiffness and protect against loud stimuli. Diseases such as otosclerosis ossify these structures and cause conductive hearing loss. In surgery, prostheses or stapedoplasty are used to restore mobility to the chain.
Hearing protection prevents noise damage by attenuating harmful sound pressure levels and is available in various forms: earplugs, earmuffs or customized earmoulds. Protection factors (SNR, HML) provide information about attenuation performance in different frequency ranges. Correct fit and wearing comfort are crucial for effectiveness and acceptance. Legal requirements for noise protection apply in industrial and leisure environments. Modern electronic hearing protectors allow speech to be understood while at the same time protecting against impulse and continuous sound.
The hearing threshold is the lowest sound pressure level that can just be perceived and varies with frequency. It is plotted on the audiogram as a function of frequency and defines normal hearing (0-20 dB HL). Shifts in the threshold indicate hearing loss and determine the need for care. Thresholds are determined by tone audiometry under standardized conditions. Long-term courses show progression of noise damage or therapy effects.
Hearing training includes exercises to improve sound and speech perception, e.g. pitch, rhythm or speech comprehension tasks. It uses neural plasticity to strengthen central auditory processing after hearing loss or CI implantation. Computer-based programs adapt difficulty levels and provide immediate feedback. Studies show significant improvements in dB thresholds and discrimination skills. Integration in rehabilitation increases suitability for everyday use and communication comfort.
Hearing loss describes a reduction in hearing ability, subdivided into conductive, sensorineural and combined forms. Causes range from cerumen obstruction to noise trauma and neuronal lesions. The degree and frequency range of the loss are documented in the audiogram. Treatment options include hearing aids, implants or surgery. Early detection and intervention significantly improve prognosis and quality of life.
Hearing amplification is usually provided by hearing aids or implants that increase sound pressure or provide electrical stimulation. Digital hearing systems offer frequency-selective amplification and compression to make soft sounds audible and loud sounds tolerable. Amplification profiles are individually adapted to the audiogram. Excessive amplification can cause feedback or discomfort. Fine adjustment by the acoustician optimizes speech intelligibility and sound quality.
Hearing delay refers to a delay in sound perception, for example due to central processing difficulties or hearing system latencies. Latencies of over 10 ms can impair speech comprehension and audio-video synchronization. In digital hearing aids, latency is minimized by fast signal processors. Diagnostically, evoked potentials and reaction times are measured in dichotic or latency tests. Rehabilitation aims to reduce central delays through training.
Auditory fatigue describes mental exhaustion caused by ordinary background noise, typically in cases of hypersensitivity or central processing disorders. Those affected complain of concentration problems, headaches and stress. Therapy includes hearing training, cognitive behavioral therapy and the targeted use of hearing protection. Adapting the working environment and taking breaks reduce symptoms. Research investigates the neuronal correlates of auditory fatigue.
The gelatine membrane (lamina propria) is the middle, connective tissue layer of the eardrum and gives it tensile strength and elasticity. It consists of collagen fibers in a radial and circular arrangement. Injuries or perforations of this layer impair the ability to vibrate and lead to conductive hearing loss. In tympanoplasty, this layer is replaced with grafts to restore integrity and function. Histological studies investigate healing processes and scarring.
General masking adds broadband noise to the test signal to prevent cross-hearing and unwanted co-reactions. In audiometry, it ensures valid threshold determinations on both sides. The masking level is based on interaural attenuation values. Incorrect masking can falsify test results; correct protocols are defined in standards.
Noise is a sound event with an irregular or complex frequency spectrum that is not perceived as a musical tone. It can be disturbing (noise) or pleasant (natural sounds), depending on the context and volume. In psychoacoustics, parameters such as loudness, masking and emotional response are investigated. Noise management in working and living areas serves health and comfort goals.
Sound sensitivity refers to the individual reaction to acoustic stimuli, ranging from normal hearing to hyperacusis. Patients with hyperacusis perceive moderate volumes as painful or stress-inducing. Comfort and discomfort thresholds are determined diagnostically. Therapy includes desensitization training and cognitive procedures. Targeted hearing protection prevents additional sensory overload.
A sound level meter measures sound pressure levels in dB(A) or dB(C) and is used in industry, environmental and health studies. Modern Class 1 meters offer high accuracy and frequency weighting according to standards. Mobile apps use smartphone microphones, but are less accurate. Calibration and correct placement are a prerequisite for reliable data.
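The underlying decibel conversion is the same in every meter class: sound pressure relative to the hearing reference of 20 µPa. A minimal sketch (unweighted level, i.e. without the A- or C-weighting filter):

```python
import math

P_REF = 20e-6  # reference sound pressure in air: 20 micropascals

def spl_db(pressure_pa: float) -> float:
    """Sound pressure level: L = 20 * log10(p / p0)."""
    return 20 * math.log10(pressure_pa / P_REF)

# 1 Pa corresponds to ~94 dB SPL, the usual acoustic calibrator level.
print(round(spl_db(1.0)))  # 94
```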
The organ of balance in the inner ear consists of the semicircular canals and the otolith organs (sacculus, utriculus). It registers rotational and linear accelerations and sends information about head position and movement to the brain. Disturbances lead to dizziness, nausea and unsteadiness when walking. Diagnostic tests include caloric testing, VEMP and videonystagmography. Rehabilitation includes vestibular training to compensate.
A bell filter (peak filter) emphasizes or attenuates a narrow frequency band around a mid-frequency and is used in hearing aids for fine-tuning. The filter has two transition slopes whose steepness is defined by a Q factor. Bell filters can be used to correct specific resonances or interfering frequencies. They are part of multi-band equalizers and compression systems.
Glutamate is the primary neurotransmitter released by inner hair cells at the synaptic cleft to transmit auditory signals to afferent neurons. The amount and speed of glutamate release influences the temporal precision of signal transmission. Dysregulation can lead to synaptic deterioration and hidden hearing loss. Research is investigating glutamatergic modulators to protect synapses in noise trauma.
Goodness of fit evaluates how well a hearing aid signal corresponds to the target frequency response specified by the audiogram. It is measured as a curve deviation in dB over frequencies. A high goodness of fit correlates with better speech understanding and user satisfaction. Fitting software shows fit diagrams in real time and allows fine-tuning. Regular checks ensure long-term accuracy of fit.
In group audiometry, several test subjects are tested at the same time, usually during preventive hearing checks in companies. Standardized signals are presented via loudspeakers in an open field and individual reactions are recorded by hand signals. This method is efficient, but less precise than single-tone audiometry. Deviating results are checked in individual tests.
A rubber membrane in the earmold ensures a tight fit and optimizes sound transmission in hearing aids. It prevents feedback and filters out ambient noise. The choice of material influences comfort and durability; medical silicone is standard. Regular replacement prevents cracking and leaks.
H
The H2O impedance measurement is a variant of tympanometry in which the middle ear pressure-volume behavior is examined with a water-filled ear canal. Controlled pressure changes are used to assess the mobility of the eardrum and ossicular chain. Deviations in the impedance curve indicate tube dysfunction, effusions or stiffening (e.g. otosclerosis). As water has a different acoustic resistance to air, this method provides greater sensitivity for small leaks and membrane damage. Clinically, it is mainly used in pediatric audiology and veterinary diagnostics.
Habituation refers to the diminishing reaction to repeatedly presented, unchanged stimuli. In the auditory system, it leads to constant background noise being faded out over time. This mechanism protects against information overload and makes it possible to focus on new, relevant signals. Habituation is used in tinnitus therapy to reduce the awareness of ear noises. If habituation is lacking, hypersensitivity and increased cognitive stress arise due to constant noise perception.
The sharkbone pattern in the audiogram describes alternating peaks and dips along the curve, resembling a row of shark's teeth. It indicates measurement artifacts, lack of concentration or simulated hearing loss. Clinically, it is important to recognize this pattern in order to ensure valid findings and avoid misdiagnosis. If non-organic hearing loss is suspected, objective tests such as OAE or AEP follow. A controlled test environment and clear instructions to the patient reduce such artifacts.
The reverberation effect describes the phenomenon that a sound is perceived longer in a room with reverberation than in an anechoic chamber. Psychoacoustically, reverberation leads to an increase in level and distortion of the temporal structure of speech signals. Reverberation must be taken into account when fitting hearing aids in order to maintain speech intelligibility in real rooms. Reverberation time measurements (RT60) provide parameters for room acoustic optimization. Training programs teach listeners to distinguish between direct and reflected sound components.
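The RT60 value mentioned above can be estimated from room data with Sabine's classic formula; the room figures below are hypothetical examples:

```python
def rt60_sabine(volume_m3: float, absorption_m2: float) -> float:
    """Sabine's formula: RT60 = 0.161 * V / A, where V is the room
    volume (m^3) and A the equivalent absorption area (m^2 sabins)."""
    return 0.161 * volume_m3 / absorption_m2

# A 200 m^3 classroom with 40 m^2 of absorption: RT60 of roughly 0.8 s.
# Adding absorbers (raising A) shortens the reverberation time.
print(round(rt60_sabine(200, 40), 2))
```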
The malleus (hammer) is the first of the three auditory ossicles in the middle ear and is directly connected to the eardrum. It mechanically transmits vibrations from the eardrum to the anvil and thus controls the transport of sound into the inner ear. Its lever action amplifies the sound pressure and enables efficient impedance matching between the air and fluid media. The middle ear muscle reflex, triggered by loud sounds, stiffens the chain and protects against excessive sound exposure. In surgery, care is taken to preserve the malleus structures so as not to impair sound conduction.
The hammer-anvil reflex is a muscle contraction of the tensor tympani and stapedius in response to loud noises, which stiffens the ossicular chain. This dampens vibrations and protects the inner ear from noise damage. Reflex latency and amplitude are measured in tympanometry to assess middle ear and brainstem function. Unilateral deficits indicate nerve lesions or ossicular pathologies. The reflex contributes to acoustic adaptation and shields against impulse sound.
A handheld microphone is an external microphone held by speakers in FM or DECT systems to transmit speech directly to hearing aid receivers. It improves speech intelligibility in noisy or large rooms, as ambient noise is not picked up. Direct transmission minimizes signal loss and improves the signal-to-noise ratio. Receivers in the hearing aid decode the radio signal and feed it to the device's output. Handheld microphones are essential in classrooms, conferences and religious events.
A home device is a hearing system that offers programs specially optimized for use at home, e.g. for television or telephony. This category often includes desktop or near-field communication devices with direct hearing aid coupling. They offer higher amplification and special filters to clearly transmit distant or digital sound sources. Home devices complement mobile hearing aid care and increase comfort in the home environment. Integration with smart home systems enables automatic scene selection.
Skin conduction (also known as structure-borne sound conduction) transmits vibrations via soft tissue and bone directly to the inner ear, bypassing the outer ear and middle ear. It plays a role in hearing one's own voice (autophony) and in bone conduction hearing systems. Skin conduction measurements help to distinguish conductive from sensorineural hearing loss. Bone conduction devices use transducers or implants to specifically stimulate this pathway. Skin conduction levels are less frequency dependent than air conduction.
BTE (behind-the-ear) hearing aid sits behind the pinna and connects to an earmold in the ear canal via a tube. It offers space for larger amplifiers, batteries and multi-channel signal processors. BTE systems are powerful and suitable for moderate to severe hearing loss. Modern models feature wireless networking, directional microphones and rechargeable batteries. The design enables easy handling and robust electronics.
The HRTF (head-related transfer function) describes the frequency-dependent filter effect of the head, torso and auricles on incoming sound waves. It forms the basis for spatial hearing and virtual audio rendering, as it encodes interaural time and level differences. Measurements are made using microphones in artificial heads or individual calibration methods. In hearing aid development, HRTF models are used to obtain natural localization even with devices behind the ear. VR and 3D audio technologies are based on HRTF synthesis for immersive sound experiences.
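One of the interaural cues encoded by the HRTF, the interaural time difference (ITD), can be approximated with Woodworth's spherical-head model — a common textbook approximation, not an individual measurement; the 8.75 cm head radius is a population average:

```python
import math

def itd_woodworth(azimuth_deg: float, head_radius_m: float = 0.0875,
                  c: float = 343.0) -> float:
    """Woodworth's approximation of the interaural time difference:
    ITD = (r / c) * (theta + sin(theta)), theta = azimuth in radians.
    Assumes a rigid spherical head; the default radius is an average."""
    theta = math.radians(azimuth_deg)
    return head_radius_m / c * (theta + math.sin(theta))

# A source directly to the side (90 degrees) yields an ITD of ~0.66 ms,
# while a frontal source (0 degrees) yields no time difference.
print(round(itd_woodworth(90) * 1000, 2))  # 0.66
```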
The healing phase after eardrum perforation or middle ear surgery includes the initial inflammatory reaction, new tissue formation and scarring. In the first few days, the focus is on pain and infection control, followed by tissue remodeling over a period of weeks. Tympanometry and otoscopy monitor the reclosure and function of the eardrum. Hearing improvement is gradual, full recovery can take months. Physical rest and avoidance of pressure changes support healing.
The helix is the upper, curved edge of the pinna and helps direct sound into the cavum conchae. Its shape influences the spectral filtering of external sound and supports vertical localization. Anatomical variations of the helix can shape individual HRTF profiles. In hearing aid fitting, attention is paid to helix compatibility in order to avoid pressure points. Surgically, the helix plays a role in otoplasty and auricular reconstruction.
A Helmholtz resonator is an acoustic resonator consisting of a cavity and a narrow opening that strongly amplifies sound at its natural frequency. In the ear, the cavum conchae has a similar effect and emphasizes frequencies around 2-5 kHz, which promotes speech comprehension. Acoustic filters in hearing aids use the Helmholtz principle for compact bass attenuation or notch filters against tinnitus frequencies. Room acoustic elements such as bass traps work according to the same physical principle.
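The natural frequency of such a resonator follows directly from the geometry of neck and cavity; the bottle-like dimensions below are hypothetical examples:

```python
import math

def helmholtz_frequency(neck_area_m2: float, cavity_volume_m3: float,
                        neck_length_m: float, c: float = 343.0) -> float:
    """Resonance frequency of a Helmholtz resonator:
    f = c / (2*pi) * sqrt(A / (V * L))."""
    return c / (2 * math.pi) * math.sqrt(
        neck_area_m2 / (cavity_volume_m3 * neck_length_m))

# A 0.75-liter cavity with a 3 cm^2, 5 cm long neck resonates near 154 Hz;
# a larger cavity or longer neck lowers the frequency.
print(round(helmholtz_frequency(3e-4, 7.5e-4, 0.05)))  # 154
```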
The comfort threshold is the level at which sound begins to be perceived as unpleasantly loud. With hearing loss, this threshold often shifts upwards, so that those affected perceive loud stimuli as annoying only at higher levels. Hearing aid compression must take the comfort threshold into account in order to avoid overamplification. Measurements using Békésy audiometry or loudness scaling determine individual comfort ranges. Fine adjustment protects against discomfort and distortion.
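The relationship between input level, compression and the comfort threshold can be sketched with a simple input/output function. The following is a minimal illustration of wide-dynamic-range compression; all parameter values (linear gain, knee point, compression ratio) are illustrative, not taken from any real fitting formula.

```python
# Sketch of wide-dynamic-range compression as used in hearing aids.
# All parameter values are illustrative, not a real fitting prescription.

def wdrc_output(input_db, linear_gain=25.0, knee_db=50.0, ratio=3.0):
    """Map an input level (dB SPL) to an output level (dB SPL).

    Below the knee point the signal receives full linear gain; above it,
    level increases are divided by the compression ratio so that loud
    sounds stay below the wearer's comfort threshold.
    """
    if input_db <= knee_db:
        return input_db + linear_gain
    return knee_db + linear_gain + (input_db - knee_db) / ratio

# Soft speech (40 dB) is amplified fully, loud input (90 dB) far less:
print(wdrc_output(40))  # 65.0
print(wdrc_output(90))  # ~88.3
```

Above the knee point, every 10 dB of extra input raises the output by only 10/ratio dB, which is exactly the "amplify quiet, attenuate loud" behavior the entry describes.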
Heterophonic masking occurs when an interfering sound in one frequency band impairs the perception of a useful signal in another band. This effect explains why external sounds interfere with speech even though they occupy different frequency ranges. Masking models in hearing aids simulate such cross-band effects in order to adjust compression and filters optimally. Psychoacoustic tests quantify masking level differences. Understanding in noise improves when masking is specifically reduced.
Hidden hearing loss refers to synaptic damage between the inner hair cells and the auditory nerve that remains undetectable in standard audiograms. Affected persons complain of difficulties understanding speech in noise, although their hearing thresholds are normal. The pathology manifests itself in reduced evoked potentials and altered OAEs. Research focuses on synaptoprotective therapies and early diagnosis. Hidden hearing loss underlines the importance of central auditory processing tests.
High-definition audiology combines high-resolution measurement techniques, adaptive signal processing and AI-supported analysis to revolutionize hearing diagnostics and hearing aid fitting. It uses detailed cochlear and cortex profiles to develop personalized amplification and compression strategies. Real-time data from mobile apps and biosensors flows into cloud-based fitting platforms. The aim is to maximize speech intelligibility and comfort in all hearing situations. Initial studies show significant improvements compared to standard methods.
A behind-the-ear (BTE) device places the electronics and battery behind the pinna, while a tube conducts sound to the earmold in the ear canal. This design allows powerful amplification and complex signal processors with low weight in the ear canal. BTE devices are robust, easy to use and suitable for moderate to severe hearing loss. Modern models integrate Bluetooth, telecoil and inductive charging functions. Feedback and sound quality can be individually controlled using open or closed earmolds.
High-frequency hearing loss primarily affects the perception of frequencies above around 2000 Hz. It often manifests itself in difficulties understanding consonants such as "s", "f" or "t", especially in noisy environments. The causes are usually noise damage, ageing processes or ototoxic drugs that damage hair cells in the basal cochlear region. Audiometrically, the loss appears as an elevated hearing threshold in the high frequencies. Hearing aids can selectively amplify the high-frequency range in order to restore speech intelligibility.
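How audiogram thresholds translate into frequency-specific amplification can be illustrated with the classic half-gain rule of thumb (prescribed insertion gain is roughly half the hearing loss at each frequency). This is only a rough historical heuristic, not a modern fitting formula such as NAL-NL2, and the audiogram values below are invented to illustrate a typical high-frequency loss.

```python
# Half-gain rule of thumb: insertion gain ~= hearing loss / 2 per frequency.
# Audiogram values are illustrative of a typical high-frequency loss.

audiogram_db_hl = {250: 10, 500: 15, 1000: 20, 2000: 40, 4000: 60, 8000: 70}

def half_gain(loss_db_hl):
    """Rule-of-thumb insertion gain in dB for a given loss in dB HL."""
    return loss_db_hl / 2.0

for freq, loss in audiogram_db_hl.items():
    print(f"{freq} Hz: loss {loss} dB HL -> gain {half_gain(loss):.0f} dB")
```

The sloping gain profile (little gain at 250 Hz, substantial gain at 4-8 kHz) mirrors the entry's point that amplification is concentrated where the loss sits.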
The auditory pathway conducts acoustic information from the inner ear via several nuclei in the brain stem to the auditory cortex. It begins at the hair cells, runs via the vestibulocochlear nerve to the cochlear nucleus and onward via the superior olivary complex, lateral lemniscus and inferior colliculus to the thalamus. Each station extracts specific features such as time and level differences. Damage at any point leads to central auditory processing disorders. Objective evoked potentials (ABR, MLR, CAEP) test the integrity of the auditory pathway.
Auditory impression refers to the subjective perception of sound quality, volume and spatial position. It depends not only on acoustic parameters, but also on psychological factors such as attention and expectation. In audiology, hearing impression is measured using questionnaires and psychoacoustic tests. Hearing aid optimization aims to create a natural and pleasant auditory impression. Differences in hearing impression explain why people have different levels of satisfaction with hearing systems with identical measured values.
Hearing habituation describes the process of getting used to a new hearing aid or implant, as the brain has to process new sound patterns. Initially, many wearers find the amplified sounds too loud or unfamiliar. Through systematic wearing and targeted hearing training, the auditory cortex adapts and filters out irrelevant components. The acclimatization phase typically lasts several weeks to months. Accompanying audiological readjustment improves the success of adaptation and wearing comfort.
The auditory pathway depth is a measure of the temporal resolution of the auditory system, i.e. how closely successive sound events are still perceived as separate impulses. It is tested with short clicks or pulses and specified as the minimum interstimulus interval. Low auditory pathway depth makes it difficult to understand speech in impulsive noise. Measurements help to identify central temporal processing disorders. Auditory training can improve auditory pathway depth through neural plasticity.
Hearing feedback refers to the whistling or echo that hearing aid wearers sometimes perceive when amplified sound from the receiver leaks back to the microphone. It is caused by leaks in the earmold or excessive amplification settings. Modern hearing systems detect feedback in real time and reduce it using adaptive filter algorithms. Mechanical measures such as tight earmolds and careful microphone positioning minimize feedback risk. An optimized feedback manager increases sound quality and wearer satisfaction.
Hearing field analysis measures the hearing thresholds over a broad frequency and level spectrum to determine the individual dynamic range and comfort zone. It combines tone and loudness measurements and displays the results in hearing field curves. The analysis helps to determine optimum compression and amplification parameters for hearing aids. Deviations from the normal hearing field indicate bottlenecks in the perception of loudness and masking effects. Regular repetition documents progress in fitting.
A hearing filter selects certain frequency ranges to emphasize speech and suppress background noise. Hearing aids use digital multi-band filters that react adaptively to changes in the environment. Filter parameters such as center frequency, bandwidth and slope are individually adjusted. Incorrectly set filters can attenuate speech components or distort the sound. Psychoacoustic tests check filter effectiveness in real scenarios.
Hearing research comprises interdisciplinary studies on the mechanisms of hearing, diagnostic procedures and hearing aid technologies. It ranges from molecular studies of regenerative therapies to psychoacoustic experiments and clinical studies of new hearing aid algorithms. Current areas of focus are hidden hearing loss, AI-based signal processing and cochlear regeneration. Research results are incorporated into guidelines and product developments. International collaborations and publications ensure transfer into practice.
A hearing aid acoustician is a specialist who carries out hearing tests, fits and fine-tunes hearing aids. He advises on device types, earmolds and programs and trains wearers in their use and care. The training combines audiological, technical and communication skills. Quality assurance is achieved through validation tests and follow-up support. Good acousticians work closely with audiologists and ENT doctors.
Hearing aid batteries supply the electrical energy for analog and digital hearing systems. Common types are zinc-air cells (sizes 10, 13, 312, 675) with runtimes of 3-14 days. Rechargeable batteries are gaining in importance as they increase comfort and sustainability. Battery/charging cycles must be documented in order to avoid a drop in performance. Battery replacement training is part of hearing aid training.
The hearing aid channel is the device-specific frequency band in which a hearing aid amplifies or filters. Modern hearing aids have 4-16 channels to fine-tune the sound spectrum. More channels enable more precise fitting to the audiogram, but can increase computing power and latency. Channel parameters are visualized and optimized in the fitting software interface. However, the number of channels alone does not guarantee better speech intelligibility without correct fine-tuning.
A hearing aid program is a stored combination of settings for specific listening situations (e.g. quiet, restaurant, telephone). Programs automatically adapt amplification, compression and microphone characteristics to the ambient sound. Users switch manually or automatically via scene recognition. Diverse programs increase flexibility, but require training to use. The acoustician defines programs individually and calibrates transition parameters.
The provision of hearing aids includes selection, fitting, instruction and aftercare for hearing aid wearers. It begins with audiological diagnostics and continues with earmold production and fine-tuning in the real-life test. Regular checks ensure long-term function and satisfaction. Interdisciplinary cooperation with doctors and therapists optimizes rehabilitation. Documentation of all steps is part of the quality of care and cost coverage by insurance companies.
A hearing graph is the graphical representation of the audiogram and other measurement results such as OAE or reflexes in an overview. It visualizes hearing thresholds, dynamic range and comfort zones. Hearing graphs serve as a reference for fitting and progress checks. Software-generated graphs allow comparison of different measurement times. Clear visualization supports patients and professionals in discussions.
Hearing implants are electronic prostheses that convert acoustic information into electrical impulses and transmit them directly to the auditory nerve or brain stem. Types include cochlear implants, brainstem implants and bone conduction implants. Indications range from profound hearing loss to complete deafness of the inner ear. Implantation is performed surgically, followed by speech rehabilitation and mapping. Long-term outcomes show significant improvements in speech understanding and quality of life.
Auditory criticality describes the range around the hearing threshold in which small level changes are perceived particularly strongly. It is relevant for adjusting the compression so that signals remain natural and sound fluctuations remain audible. Measurements of the critical bandwidth provide information about filter design and masking effects. Narrower critical bands lead to better frequency selectivity. Fitting strategies in hearing aids take criticality into account to avoid sound coloration.
The auditory conduction pathway is the anatomical connection between the outer ear and the inner ear, consisting of the ear canal, eardrum and ossicular chain. It transmits sound mechanically and optimizes impedance matching between the air and fluid media. Diseases along this pathway (e.g. otosclerosis) lead to conductive hearing loss. Surgical procedures such as stapedotomy modify the conduction pathway to restore mobility. Tympanometry and the audiogram analyze its functional status.
Auditory localization is the ability to determine the direction of a sound source based on interaural time differences (ITD) and level differences (ILD). The superior olivary complex in the brain stem compares signals from both ears. Precise localization improves speech comprehension and safety in everyday life. Hearing aids with binaural networking preserve localization cues by processing signals synchronously. Tests in the free sound field evaluate localization accuracy.
The auditory nerve (vestibulocochlear nerve, VIII cranial nerve) conducts electrical impulses from the cochlea and the vestibular organ to the brain stem. It branches into cochlear and vestibular parts and is essential for hearing and balance. Lesions lead to hearing loss, tinnitus or dizziness. Diagnostics include ABR measurements and imaging procedures. Early surgery is indicated for tumors such as acoustic neuroma.
In perceptual psychology, the horopter is the imaginary spatial curve on which visual and auditory stimuli are perceived as spatially congruent. With combined visual and acoustic stimulation, the horopter helps to minimize conflicts between eye and ear information. Experiments are being conducted to investigate how deviations from this line affect localization accuracy. For hearing aid wearers, the interaction of visual and auditory cues is relevant in order to precisely locate speech sources. Adaptations in hearing technology can aim to filter auditory signals so that they match the visual horopter.
Auditory pauses are deliberately inserted periods of silence between speech or music signals that give the auditory system time to process them. They improve speech comprehension by providing segmentation markers and enabling cognitive relief. In audiotherapy, listening breaks are used to provide tinnitus patients with periods of rest from the noise in their ears. Psychoacoustic studies show that regular breaks reduce auditory fatigue. Hearing aid programs can implement digital silence insertions to avoid excessive stimulation.
The hearing level refers to the sound pressure level at a specific point in the ear canal, measured in dB SPL. It is the basis for the calibration of audiometers and the adjustment of hearing aids. Differences between the input signal level and the hearing level in the earmold determine the effective amplification. In room acoustics, the hearing level is used to optimize volume distribution and sound quality. Audiologists make sure that hearing levels are below the comfort threshold and above the hearing threshold.
Hearing physiology describes the biological and biophysical processes from sound reception to neuronal processing in the brain. It includes mechanical processes in the outer ear, electrochemical transduction in hair cells and neuronal signal transmission. Changes in any of these steps lead to specific hearing disorders that can be analyzed physiologically. Research in hearing physiology provides the basis for therapies for hearing loss and tinnitus. Textbooks combine anatomy, biomechanics and neurophysiology into an integrative understanding.
Hearing preference refers to individual preferences for sound characteristics, such as warm bass or clear treble. It results from personal hearing adjustments and neurological processing differences. When fitting hearing aids, the preference is taken into account by fine-tuning the filters and compression parameters. Measurements are made by comparing different sound profiles and subjective rating. Good consideration of hearing preference increases wearing comfort and acceptance.
A listening sample is a short sound or speech sequence that is used to test hearing aid programs or room acoustics. The wearer uses it to assess sound character and intelligibility under real conditions. In research, standardized listening samples are used to compare the effects of signal-processing algorithms. Samples can include music, speech or artificial test signals. Their systematic analysis helps guide optimization.
Auditory noise is a uniform, broadband noise that is used as a test signal in audiometry to test masking and filtering effects. In tinnitus therapy, auditory noise is used as a masker to cover up ringing in the ears. The spectral composition can be white, pink or brown, depending on the desired masking effect. Auditory noise helps to analyze cochlear function and central noise processing. Customizable noise profiles support individual therapy goals.
Professional ear cleaning refers to the removal of cerumen and deposits from the external auditory canal in order to restore sound conduction. It is carried out manually under a microscope or by means of gentle irrigation. Regular cleaning prevents cerumen obturans and acute otitis externa. Subsequent tympanometry checks the restoration of middle ear function. Patients are trained in self-cleaning techniques to prevent recurrences.
The auditory quiet state is the state of minimal acoustic stimulation, usually measured in a soundproof room. It defines the baseline for hearing threshold tests and evoked potentials. A stable auditory quiet state ensures reproducible measurement results and avoids masking by ambient noise. Changes in the auditory quiet state can indicate adaptive processes or neuronal plasticity. Standardized norms define maximum background levels for test environments.
The hearing threshold is the lowest sound pressure level that can just be perceived and varies with frequency. It is documented individually for each frequency in the audiogram and forms the basis for diagnostics and hearing aid fitting. Deviations from normal values define degrees of hearing loss from mild to profound. Thresholds are determined using tone audiometry under controlled conditions. Clinically, it is the first step in differentiating between conductive and sensorineural hearing loss.
Auditory segmentation is the ability to break down continuous sound signals into meaningful units such as words or syllables. It is based on acoustic markers such as pauses, formant transitions and volume fluctuations. Disruptions in segmentation lead to speech comprehension difficulties, especially in noisy situations. Segmentation tests use sentences with variable pause patterns. Auditory training can improve segmentation performance in the auditory cortex.
The hearing range refers to the range between the quietest perceptible and the loudest tolerable sound intensity, measured in decibels. It represents the dynamic range of hearing and varies individually depending on age and hearing health. In normal hearing, the hearing range is typically between 0 dB HL and about 120 dB SPL. A restricted range requires compression in hearing aids to make soft sounds audible and loud sounds comfortable. Changes in hearing range can indicate conditions such as presbycusis or noise damage.
The hearing spectrum represents the distribution of the hearing threshold across the frequency spectrum and shows which frequencies are perceived how well. It is recorded in the audiogram as a curve from low to high frequencies. Deviations in certain areas indicate a loss of high or low frequencies. Hearing aids adjust amplification profiles along the spectrum to compensate for deficits. In research, hearing spectra from different populations are compared to determine normal values and risk factors.
The audio track is the acoustic accompaniment to video or multimedia content and contains speech, music and effects. For accessible content, it is often supplemented with subtitles or sign language. Technically, the audio track is mixed in multi-channel audio (stereo, 5.1) to create spatial effects. In hearing training and rehabilitation, listening to individual tracks can train speech comprehension. For hearing aids with direct streaming, the audio track is transmitted to the device digitally and without interference.
A sudden hearing loss is a sudden-onset, usually unilateral sensorineural hearing loss, often accompanied by tinnitus and a feeling of pressure. The exact causes are unclear; possible factors are circulatory disorders, viral infections or stress. Immediate therapy with corticosteroids and circulation-promoting agents improves the chances of recovery. Audiometry documents the extent of hearing loss; follow-up checks show regeneration. Early rehabilitation can compensate for residual hearing loss and alleviate tinnitus.
A hearing system is a combination of hearing aid, earmold and optional accessory components such as an FM receiver or streamer. It comprises microphones, amplifiers, signal processors and receivers in a coordinated ensemble. Modern systems offer multi-channel compression, directional microphones, feedback management and wireless connectivity. Fitting is carried out individually by the acoustician based on the audiogram and personal hearing preferences. Regular software updates maintain performance and compatibility with new devices.
Hearing technology encompasses all technical aids and procedures for improving hearing, from hearing aids and cochlear implants to room and sound technology. It combines acoustics, electronics and signal processing to optimize speech intelligibility and sound quality. The sub-disciplines include microphone design, amplifier architecture, filter algorithms and user interfaces. Hearing technology research is driving developments such as AI-supported scene recognition and brain-computer interfaces. Users benefit from customizable, networked systems for all areas of life.
Hearing loss refers to a reduction in hearing ability, subdivided into conductive, sensorineural and central hearing disorders. It is quantified on the basis of the shift in the hearing threshold in the audiogram. Causes include age, noise, illness or genetic factors. Treatment options range from hearing aids and implants to medication and surgery. Early detection and interdisciplinary rehabilitation improve communication skills and quality of life.
Hearing ability encompasses the entire ability to detect and localize sound sources and process acoustic information. It includes parameters such as hearing threshold, dynamic range, frequency resolution and speech comprehension. Measurement methods such as audiogram, OAE and AEP provide objective data on hearing ability. Psychometric tests record subjective aspects such as hearing comfort and hearing strain. Maintaining and improving hearing ability are central goals of audiology and hearing acoustics.
The auditory center in the temporal lobe of the cerebral cortex (primary auditory cortex) processes the frequency, volume, and spatial characteristics of sound. It receives input via the auditory pathway and interacts with speech and memory centers. Cortical plasticity enables adaptation to hearing aids and rehabilitation after hearing loss. Lesions in the auditory center lead to central auditory processing disorders despite intact peripheral equipment. Imaging techniques (fMRI, PET) show activation patterns during acoustic tasks.
Hospitalism describes psychological and cognitive impairments that arise as a result of sensorineural hearing loss due to social isolation and loss of communication. Those affected often develop anxiety, depression, and withdrawal, which further exacerbates hearing loss. Early psychosocial interventions and hearing rehabilitation prevent hospitalism. Interdisciplinary care by audiologists, psychologists, and social workers is important. Studies show that social support and hearing aid provision significantly reduce hospitalism.
Hyperacusis is hypersensitivity to normal everyday sounds, which are perceived as painful or unpleasant. It is caused by changes in peripheral or central auditory pathways, often in combination with tinnitus. Comfort and discomfort thresholds are determined for diagnostic purposes. Treatment includes desensitization training, cognitive behavioral therapy, and, if necessary, medication. Hyperacusis can severely impair quality of life and requires multidisciplinary care.
I
Iatrogenic hearing loss occurs as an undesirable side effect of medical procedures or therapies, such as ototoxic drugs (aminoglycosides, cisplatin) or damage during ear surgery. Hair cells in the inner ear or synaptic connections are often affected, which can lead to permanent sensorineural hearing loss. As a preventive measure, medication doses are monitored and ototoxicity-protective substances are considered. After iatrogenic damage has occurred, early hearing rehabilitation with hearing aids or implants can help. Interdisciplinary coordination between ENT, oncology, and audiology minimizes risks.
Idiopathic hearing loss refers to hearing loss of unknown cause, with no organic findings or known risk factors. It can occur suddenly (idiopathic sudden hearing loss) or gradually and usually affects high frequencies. Diagnostics include extensive imaging procedures, laboratory analyses, and otoacoustic emissions, but often remain inconclusive. Similar to sudden hearing loss, it is treated with corticosteroids and vasodilators. Long-term management includes monitoring and, if necessary, hearing aid fitting.
An ITE (in-the-ear) hearing aid sits in the outer ear and the entrance of the ear canal and is relatively inconspicuous. It uses the natural sound-funneling effect of the outer ear and offers good sound quality, but is less powerful than BTE devices. Due to its compact design, battery capacity and amplification reserves are limited, making ITE devices particularly suitable for mild to moderate hearing loss. Fitting requires precise earmolds and regular maintenance to prevent cerumen blockages. Users appreciate its discretion and comfort.
The IIC (Invisible-in-Canal) hearing aid is a subtype of the ITE and sits deep in the ear canal just in front of the eardrum. It is virtually invisible and offers optimized speech intelligibility thanks to minimal feedback. Despite its compact design, tiny microphones and amplifier technology enable multi-channel signal processing. There are limitations in cases of severe hearing loss and usability (e.g., battery replacement). Hygienic cleaning and regular checks are essential to prevent performance losses.
Impedance describes the resistance and reactance of an acoustic or mechanical system to sound transmission, measured in acoustic ohms (its reciprocal, admittance, in mmho). In the ear, it refers to the eardrum and middle ear chain, whose mobility is examined during pressure changes (tympanometry). Changes in the impedance curve indicate fluid accumulation, stiffening, or perforations. In hearing aid technology, impedance measurement is used to check the fit of ear molds. Optimal impedance matching maximizes sound conduction efficiency.
An impulse noise is a short, sudden increase in sound pressure, such as a bang or a blow, with a broad frequency spectrum. Such stimuli can cause acoustic trauma if peak levels exceed 140 dB SPL. In audiometry, impulse noises are used to test the stapedius reflex and masking effects. Hearing protection for impulse noise differs from continuous noise protection because rapid attenuation responses are required. Research is investigating material dynamics and reflexive mechanisms to protect against impulse damage.
In-situ measurements are performed directly in the installed state, e.g., OAE or HRTF measurements in the ear canal with the hearing aid inserted. They allow realistic recording of amplification and filter effects under fitting conditions. Unlike free-field measurements, in-situ methods take into account individual ear anatomy and otoplasty effects. Modern fitting software integrates in-situ data for precise fine calibration. Regular in-situ checks ensure long-term quality of care.
Infrasound refers to sound with frequencies below 20 Hz, which are below the human hearing threshold but can produce physically perceptible vibrations. Sources include natural phenomena (earthquakes, wind) and technical installations (wind power, industry). Long-term exposure can cause discomfort, a feeling of pressure in the ear, and sleep disturbances. Standardized measurement methods and filter techniques help to detect and attenuate infrasound. Research is investigating the effect of infrasound on vestibular functions.
An incomplete stapedius reflex is present when the stapedius muscle only partially contracts in response to loud stimuli. Audiologically, this leads to reduced attenuation of the ossicular chain and an increased risk of noise-induced hearing loss. Incomplete reflexes indicate muscle dysfunction, nerve lesions, or middle ear disorders. Reflex testing with tympanometry quantifies amplitude and latency. Therapeutically, hearing aid compression and muscle training can support reflex enhancement.
The inner ear consists of the cochlea and vestibular organ and converts mechanical sound and movement stimuli into electrical nerve impulses. The cochlea contains hair cells on the basilar membrane, which are stimulated differently depending on the frequency. The vestibular organ registers head movements and position. Fluid-filled scalae and membranes enable electrochemical transduction. Injuries or degeneration here lead to sensorineural hearing loss and vertigo.
Sensorineural hearing loss is caused by damage to hair cells, the auditory nerve, or central auditory pathways. It manifests itself in increased hearing thresholds and reduced speech comprehension, especially in noisy environments. Causes include age, noise trauma, genetic factors, and ototoxins. Treatment options include hearing aids, cochlear implants, and auditory training. Research into hair cell regeneration and synaptic protection aims to find a cure.
The inner hair cells are the primary sensory cells of the cochlea, converting sound-induced membrane movements into electrical signals. They are individually connected to afferent nerve fibers and are crucial for sound and speech intelligibility. Loss or dysfunction of the IHC leads to severe sensorineural hearing loss. Unlike outer hair cells, they cannot regenerate in humans. Gene therapy and stem cell approaches are being researched as methods of repair.
Insufficiency of the Eustachian tube causes the ventilation mechanism to fail, preventing pressure equalization between the middle ear and the throat. This leads to chronic negative pressure, fluid buildup, and hearing loss. Symptoms include a feeling of pressure, crackling, and recurrent otitis. Diagnosis is made using a tube function test and tympanometry; treatment includes balloon dilation, catheters, and ear tubes. Long-term insufficiency requires interdisciplinary care.
An integrated tinnitus noiser is a feature in modern hearing aids that emits a soft noise signal directly from the device to mask or desensitize tinnitus. The noise profile can be individually adjusted in terms of frequency spectrum and volume. Continuous noiser playback promotes habituation and reduces tinnitus perception in everyday life. Users can activate masking programs depending on the situation. Studies show that integrated noise generators improve sleep and quality of life.
Intensity describes the power per unit area of a sound wave and is usually expressed in watts per square meter (W/m²) or in decibels (dB SPL). It correlates with the perceived loudness, with a tenfold increase in sound intensity corresponding to an increase of 10 dB. In the ear, high intensities cause greater deflection of the eardrum and basilar membrane, which can lead to hair cell damage if the pain threshold is exceeded. Audiologists determine the intensity-loudness function to determine the dynamic range and comfort threshold. Hearing aids use this knowledge for compression algorithms that attenuate loud signals and amplify quiet ones.
The interaural level difference (ILD) is the difference in sound level between the right and left ears caused by the head-shadow effect. The ILD serves as an important cue for the horizontal localization of high frequencies (>1.5 kHz). In the superior olivary complex, ILD information is combined with time differences to enable spatial hearing. Hearing aids with binaural networking preserve ILD cues by synchronously exchanging level information. ILD tests in anechoic chambers quantify localization accuracy.
The interaural time difference (ITD) is the difference in the arrival time of a sound signal at the two ears and serves primarily to localize low frequencies (<1.5 kHz). Differences of just a few microseconds are enough for the brain to pinpoint sound sources precisely. ITD processing takes place in the medial superior olive, where phase-locked neurons compare different delays. Disorders of ITD processing lead to impaired localization and poorer speech understanding in noise. Hearing systems must minimize latencies so as not to distort natural ITD cues.
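The microsecond scale of interaural time differences can be estimated with Woodworth's classic spherical-head approximation, ITD = (r/c)·(sin θ + θ), where r is the head radius, c the speed of sound and θ the azimuth of the source. The head radius of 8.75 cm used below is a commonly cited average, not an individual measurement.

```python
import math

# Woodworth's spherical-head approximation of the interaural time
# difference: ITD = (r / c) * (sin(theta) + theta).
# r = 0.0875 m is an average head radius (an assumption, not measured data).

def itd_seconds(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """ITD in seconds for a source at the given azimuth (0 = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (math.sin(theta) + theta)

print(f"{itd_seconds(0) * 1e6:.0f} us")   # 0 us for a frontal source
print(f"{itd_seconds(90) * 1e6:.0f} us")  # ~656 us for a source at the side
```

Even the maximum ITD is well under a millisecond, which is why the entry stresses that hearing systems must keep processing latencies minimal.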
An intracochlear electrode is part of a cochlear implant and is inserted into the cochlea through a cochleostomy. It electrically stimulates specific regions of the cochlea, thereby replacing defective hair cells. The number and distribution of the electrodes determine the spectral resolution of the implant. Surgical precision during insertion minimizes trauma and preserves residual hearing. Postoperative mapping adjusts stimulation levels per electrode for optimal speech comprehension.
Intralabyrinthine pressure refers to the hydrostatic pressure of the endolymphatic and perilymphatic spaces in the inner ear. Changes, such as those seen in Meniere's disease, lead to hydrops and cause vertigo, tinnitus, and hearing loss. Pressure measurements in animal models help to understand pathomechanisms and develop pressure regulation procedures. Clinically, intralabyrinthine pressure is indirectly inferred via tympanometry and ECochG. Therapeutic approaches aim to relieve pressure through diuretics or surgical decompression.
During intraoperative monitoring, auditory evoked potentials of the brainstem (ABR) are continuously recorded during ear or skull base surgery. This protects against damage to the auditory nerve and brainstem structures by detecting functional loss at an early stage. Neurophysiologists adjust stimulation and recording parameters in real time. Failures or latency changes trigger immediate surgical pauses or technical adjustments. The procedure increases safety during acoustic neuroma resections and cochlear implantations.
Intratympanic gentamicin therapy is used to treat refractory Meniere's disease by injecting the antibiotic directly into the middle ear. Gentamicin diffuses through the eardrum into the cochlea and selectively destroys vestibular hair cells to reduce vertigo attacks. The dose is carefully titrated to minimize hearing loss. Follow-up includes audiometric checks and vestibular function tests. The therapy offers effective vertigo control with low systemic toxicity.
Ion toxicity refers to damage to hair cells and nerve cells in the ear caused by substances that disrupt ion homeostasis, such as aminoglycosides or cisplatin. These ototoxins increase calcium permeability and generate reactive oxygen species, leading to cell death. Early detection is achieved through DPOAE monitoring during therapy. Protective strategies include antioxidants and calcium channel blockers. Long-term effects range from tinnitus to permanent hearing loss.
Ipsilateral hearing describes perception in the same ear as the sound source, contralateral hearing in the opposite ear. This dichotomy is central to localization and binaural processing. In diagnostics, ipsilateral and contralateral reflexes (stapedius) are tested to detect lateralized pathologies. Differences in thresholds or reflex responses indicate nerve lesions or middle ear disorders. Rehabilitation aims to compensate for lateral deficits through binaural treatment.
An equal-loudness scale (isophone scale) ranks sounds of equal perceived loudness across different frequencies. It is based on psychoacoustic data and shows that the human ear is most sensitive at mid-range frequencies. Equal-loudness contours (e.g., the Fletcher-Munson curves) are used to calibrate audiometers and for weighting (A, C filters) in sound level meters. In hearing aid fitting, they help to ensure comfort and naturalness of hearing.
Isochronic tinnitus is a rhythmic sound in the ear that is perceived in sync with the heartbeat ("pulsatile tinnitus"). It is caused by vascular turbulence or pressure fluctuations in the inner ear. Diagnosis includes Doppler sonography and MRI angiography to rule out vascular causes. Treatment depends on the cause, e.g., embolization or pressure therapy. Since it is linked to the cardiovascular system, it requires interdisciplinary clarification.
J
The Jakobson effect describes the improved perception of speech sounds through brief focusing on their acoustic characteristics, similar to phonemic "listening." It occurs when listeners actively pay attention to certain frequency ranges and thus recognize nuances in consonants and vowels more clearly. This effect is used in speech therapy to treat articulation weaknesses. Audiological training programs enhance the effect through targeted practice of individual phonemes. Neurophysiological measurements show increased cortical activity in auditory areas during the Jakobson effect.
The Jarisch-Herxheimer reaction is an acute inflammatory reaction following the death of bacteria, which can rarely occur in the inner ear when ototoxic antibiotics kill bacteria in the cochlea. This releases toxins that temporarily exacerbate dizziness, tinnitus, and hearing loss. The reaction usually begins a few hours after the start of therapy and subsides within 24–48 hours. Steroids and antioxidants are administered symptomatically to reduce inflammation and oxidative damage. It is important to be aware of this effect so as not to confuse iatrogenic damage with treatment failure.
The Jensen test is a speech intelligibility test in which sentences or words are presented at different signal-to-noise ratios. It measures the minimum ratios at which speech can still be understood and quantifies hearing performance in realistic noise situations. The results help to tailor hearing aid programs to everyday conditions. Test variants use stationary noise or multi-speaker scenarios. The Jensen test is well established in pediatric audiology and adult rehabilitation.
The jet noise effect describes the broad frequency spectrum and high sound pressure levels generated by jet engines. Particularly low and mid-range frequencies travel long distances and can cause sleep disturbances and hearing stress in the vicinity of busy airports. Sound measurements determine emission levels in order to optimize noise barriers and flight paths. Low-noise engine technologies and operating time regulations are used as preventive measures. Long-term studies document the effects on the hearing and quality of life of residents.
Itching in the ear canal is a common symptom of dry skin, eczema, or allergic reactions to ear molds. It can lead to scratches and secondary infections if patients use cotton swabs or their fingers to scratch the area. Treatment includes moisturizing ear drops, corticosteroid ointments, and adjustment of the otoplastic materials. In cases of chronic itching, a dermatological examination is recommended. Audiologists provide advice on skin care and hygienic cleaning of hearing aids.
A young ear refers to the ear of a child or adolescent that is still developing anatomically and functionally. The ear canal, eardrum thickness, and bone structure differ from those of adults and influence acoustic measurements. Audiological tests and hearing aid fittings must be calibrated according to age. Pediatric audiology programs take into account language development stages and compliance. Long-term monitoring ensures that hearing loss diagnoses are detected and treated early.
Jugular sinus pressure in the mastoid region can be used to indirectly assess intracranial and labyrinthine pressure via the Valsalva maneuver or pressure probes. It influences venous drainage of the inner ear and can lead to tinnitus or vertigo when pressure is elevated. Clinically, it is tested when venous malformations or hydrocephalus are suspected. Imaging and Doppler ultrasound complement the pressure measurement. Therapeutically, relief measures such as diuretics or surgical shunts may be indicated.
The just-noticeable difference is the smallest perceptible change in an acoustic stimulus, such as frequency or volume, that a listener can detect. It is determined psychoacoustically by presenting stimuli with minimal differences. JND values vary with frequency, base level, and individual hearing condition. They are important for filter design and compression parameters in hearing aids. Small JNDs enable fine sound gradations, while large JNDs can limit speech comprehension.
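The psychoacoustic determination mentioned above is commonly done with adaptive staircase procedures. The following is a sketch, with a simulated rather than real listener and illustrative noise parameters of our choosing, of a 2-down/1-up staircase:

```python
import random

def two_down_one_up(true_jnd, trials=300, seed=0):
    """Sketch of a 2-down/1-up adaptive staircase, which converges near
    the 70.7% point of the psychometric function. The listener is
    simulated: a difference is 'detected' when the presented increment
    exceeds a noisy internal threshold around true_jnd (assumed model)."""
    rng = random.Random(seed)
    delta = 8.0 * true_jnd           # start well above threshold
    step = 0.5 * true_jnd
    streak, reversals, going_down = 0, [], True
    for _ in range(trials):
        detected = delta > true_jnd + rng.gauss(0.0, 0.2 * true_jnd)
        if detected:
            streak += 1
            if streak == 2:          # two correct in a row -> make harder
                streak = 0
                if not going_down:
                    reversals.append(delta)
                    going_down = True
                delta = max(delta - step, 0.0)
        else:                        # one miss -> make easier
            streak = 0
            if going_down:
                reversals.append(delta)
                going_down = False
            delta += step
    tail = reversals[-8:]            # average the last reversal points
    return sum(tail) / len(tail)

estimate = two_down_one_up(true_jnd=2.0)  # e.g. a hypothetical 2 Hz frequency JND
print(round(estimate, 1))                 # typically close to true_jnd
```

Real procedures additionally randomize intervals and enforce a minimum number of reversals; this sketch only shows the core up/down logic.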
K
The caloric test examines the function of the horizontal semicircular canal by introducing warm or cold water or air into the ear canal. Temperature differences generate endolymphatic currents that trigger typical nystagmus (rapid eye movements). The intensity and direction of the nystagmus provide information about vestibular functional asymmetries and central vestibular integrity. It is standard in the diagnosis of vertigo and helps to localize vestibular deficits on one side. Since the stimulation can be unpleasant, the examination is performed under continuous monitoring of eye movements.
Channel audiometry measures the sound conduction properties of individual frequency bands ("channels") in the ear canal or hearing aid. It uses narrow filter bands to determine thresholds and amplification requirements separately for each channel. The results help to precisely adjust multiband compression parameters and ensure clear speech comprehension. In research, channel audiometry is used to investigate frequency selectivity and masking effects. Modern hearing aid fitting software visualizes channel audiograms in real time for fine calibration.
A channel compressor is a dynamic processor that controls level compression separately in each frequency channel of a hearing aid. It reduces loud signals above the comfort threshold more than quiet signals in order to adapt the dynamic range to the residual hearing. Parameters such as ratio, attack and release times are optimized individually for each channel. Multi-channel compression makes it possible to emphasize speech components in critical bands while attenuating impulse-like noises. However, incorrectly adjusted compressors can cause sound artifacts and discomfort.
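The parameters named above (ratio, knee point, attack and release times) can be illustrated with a minimal sketch of one channel's static gain curve plus a one-pole attack/release level detector; the 50 dB knee and 3:1 ratio are example values, not prescribed settings:

```python
import math

def compressor_gain_db(level_db, knee_db=50.0, ratio=3.0):
    """Static gain of one compression channel: unity below the knee;
    above it, 'ratio' dB of input growth yields only 1 dB of output growth."""
    if level_db <= knee_db:
        return 0.0
    return -(level_db - knee_db) * (1.0 - 1.0 / ratio)

def smoothed_level(prev_db, current_db, attack_ms, release_ms, fs_hz):
    """One-pole level detector: a fast time constant on rising levels
    (attack) and a slow one on falling levels (release)."""
    tau_ms = attack_ms if current_db > prev_db else release_ms
    a = math.exp(-1000.0 / (fs_hz * tau_ms))
    return a * prev_db + (1.0 - a) * current_db

# 80 dB input, 50 dB knee, 3:1 ratio -> 30 dB over the knee is cut by 20 dB:
print(round(compressor_gain_db(80.0), 1))  # -20.0
```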
Channel separation refers to the division of the audio spectrum into separate frequency bands for independent processing. It forms the basis for multiband compression, filtering, and noise reduction in hearing aids. Good channel separation minimizes crosstalk between adjacent bands and prevents phase problems. The number and bandwidth of the channels are adjusted to the hearing loss profile and the processing power of the processor. Adaptive systems change channel boundaries depending on the situation to ensure optimal sound quality in changing environments.
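One simple way to obtain perfectly complementary bands, so that recombination introduces no crosstalk artifacts, is to subtract a low-pass output from the input. This two-band sketch uses a hypothetical 1 kHz crossover at a 16 kHz sample rate; real hearing aids use far more sophisticated filter banks:

```python
import math

def one_pole_lowpass(samples, fc_hz, fs_hz):
    """First-order low-pass whose coefficient is set by the crossover
    frequency fc_hz at sample rate fs_hz."""
    alpha = math.exp(-2.0 * math.pi * fc_hz / fs_hz)
    out, state = [], 0.0
    for s in samples:
        state = (1.0 - alpha) * s + alpha * state
        out.append(state)
    return out

signal = [0.0, 1.0, 0.5, -0.5, -1.0, 0.0]        # toy input
low = one_pole_lowpass(signal, 1000.0, 16000.0)  # low band
high = [s - l for s, l in zip(signal, low)]      # complementary high band

# The two bands recombine to the original signal, sample for sample:
print(all(abs(s - (l + h)) < 1e-12 for s, l, h in zip(signal, low, high)))  # True
```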
The number of channels indicates how many frequency bands a hearing aid divides the audio signal into. Typical values range from 4 to 16 channels; more channels allow for finer adjustment but require higher processing power. A higher channel count supports precise masking management and individual amplification profiles. However, too many channels can lead to over-adjustment and increased noise. The ideal channel count depends on the hearing loss pattern and the processing capabilities of the wearer.
Capsulitis is an inflammation of the bony capsule of the inner ear, usually resulting from otitis media or a skull base injury. It causes severe ear pain, dizziness, and often sensorineural hearing loss. CT/MRI scans and laboratory tests are used to determine the extent of the inflammation and identify the pathogen. Treatment includes systemic antibiotics, pain management, and surgical drainage if necessary. Early treatment is essential to prevent permanent damage to the inner ear.
Cascade amplification refers to a multi-stage amplification architecture in which several amplifier stages are connected in series. Each stage increases the level slightly, achieving overall amplification without significant distortion. This technique improves noise performance and linearization compared to single stages with high amplification. In digital hearing aids, cascade amplification is found in both analog-to-digital converters and output amplifiers. It contributes to low inherent noise and high fidelity.
Sound compression reduces the dynamics of audio signals by attenuating loud sections more than quiet ones. In hearing aids, it is essential for protecting residual hearing from overloading while making weak signals audible. Compression parameters such as ratio, knee point, and release time determine the response behavior. Adaptive compression automatically adjusts to speech and ambient noise. However, incorrectly set compression can make the sound seem "flat" or unnatural.
The cerebellopontine angle is the anatomical space between the cerebellum and the pons through which the eighth cranial nerve (vestibulocochlear nerve) passes. Acoustic neuromas, benign tumors that cause hearing loss, tinnitus, and vertigo, often develop here. Microsurgical resection requires access through this angle, whereby the brainstem and vessels must be spared. Intraoperative monitoring of auditory brainstem responses protects nerve function. Postoperative imaging checks the completeness of the resection and for complications.
The distortion factor (Klirrfaktor) expresses the ratio of the sum of all harmonic overtones to the fundamental and quantifies distortion in a system. In hearing aids, it describes how strongly the output signal deviates from the input signal. Low distortion values (<1%) are desirable for an unaltered sound. Measurements are made with sine sweeps and spectral analysis. High harmonic distortion can significantly degrade speech intelligibility and sound quality.
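The ratio can be computed directly from measured RMS amplitudes of the fundamental and its harmonics, as obtained from a spectral analysis; the voltages below are illustrative values:

```python
import math

def thd_percent(fundamental_rms, harmonic_rms):
    """Total harmonic distortion in percent: RMS sum of the harmonics
    (2nd, 3rd, ...) relative to the fundamental."""
    return 100.0 * math.sqrt(sum(h * h for h in harmonic_rms)) / fundamental_rms

# Fundamental 1.0 V with harmonics of 3 mV and 4 mV -> 0.5 % THD:
print(round(thd_percent(1.0, [0.003, 0.004]), 3))  # 0.5
```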
Acoustic trauma is caused by extremely short, high-intensity sound impulses, such as blasts or gunshots, which can immediately destroy hair cells and synaptic connections in the inner ear. Symptoms include sudden hearing loss, tinnitus, and dizziness. Emergency treatment with high-dose corticosteroids and hyperbaric oxygenation can reduce damage, but must be administered immediately. Long-term consequences include permanent hearing loss and psychological stress. Prevention through hearing protection during gunfire or explosions is essential.
Bone conduction transmits sound directly to the cochlea via vibrations in the skull, bypassing the outer and middle ear. It is used in audiometry to distinguish between conductive and sensorineural hearing loss. Bone conduction hearing aids are used to treat patients with middle ear problems. Implantable bone conduction devices (BAHS, Bonebridge) deliver higher sound quality than traditional bone conduction headphones. Bone conduction also plays a role in autophony.
The cochlea is the snail-shaped organ in the inner ear where sound waves are converted into electrical nerve impulses. The basilar membrane is lined with inner and outer hair cells, which encode sounds of different frequencies through mechano-electrical transduction. Tonotopy ensures that high frequencies are detected at the base and low frequencies at the apex of the cochlea. Damage to the cochlea, for example due to noise or ototoxins, leads to permanent sensorineural hearing loss. Research into cell regeneration and cochlear implants aims to restore function.
Communicative accessibility means that people with hearing loss have unrestricted access to linguistic content, for example through sign language, subtitles, inductive hearing systems, or real-time transcription. It encompasses technical, architectural, and organizational measures in public spaces, media, and digital offerings. The goal is equal participation in education, culture, and everyday life. Legal requirements mandate accessibility in public institutions and online services. Audiologists and hearing care professionals advise on suitable aids and installations.
Compensation methods are used to compensate for hearing loss through technical or therapeutic means. They range from hearing aids and implants to auditory training and environmental adjustments. Digital signal processors use multiband compression, noise reduction, and directional microphones to amplify speech components. Therapeutic compensation includes central auditory processing training to promote neural plasticity. A combination of technical and rehabilitative compensation achieves the best results for speech comprehension.
Compression dynamics describe how a hearing aid responds to different input levels: quiet signals are amplified more than loud ones in order to make optimal use of the wearer's dynamic range. Important parameters include compression ratio, knee point, and attack/release time. A fast attack time protects against impulse noise, while a slow release time preserves natural sound transitions. Individual fine-tuning adapts the dynamics to the hearing loss profile and hearing preferences. Mismatches can impair speech comprehension and sound quality.
In conductive hearing loss, sound transmission through the outer ear or middle ear is impaired, for example due to cerumen impaction, tympanic membrane perforation, or otosclerosis. Those affected have normal bone conduction but elevated air conduction thresholds on the audiogram. Treatment options include surgical reconstruction, removal of obstructions, or bone conduction hearing aids. Tympanometry and the Rinne test help to distinguish between conductive and sensorineural hearing loss. The prognosis is usually very good if treatment is successful.
The head-related transfer function (HRTF) describes how the head, ears, and torso filter sound depending on frequency, thereby generating directional cues. It is essential for spatial hearing and virtual reality audio. Individual HRTFs are recorded with microphones at the ear or calculated to produce realistic 3D audio effects. In hearing aid development, HRTF models are used to maintain natural localization despite the device. Adaptive algorithms can adjust HRTFs to head movements in real time.
Headphones are sound transducers that are positioned directly on the ear and transmit sound to the eardrum in an isolated manner. They are used in audiometry (routine testing and research) and as accessories for hearing aid streamers. Closed designs offer high shielding against ambient noise, while open designs provide a more natural sound. Calibrated measurement headphones ensure standardized sound levels during threshold tests. Hygiene and comfort are important for accurate and reliable measurements.
The force law of hair cells describes the nonlinear relationship between the deflection of hair cell stereocilia and the electrical response triggered. Small deflections produce disproportionately large receptor potentials, which explains the sensitivity of the cochlear amplifier. When certain deflection limits are exceeded, the characteristic curve flattens out to provide protection against overstimulation. Changes to this law due to damage affect the dynamic range and frequency resolution. Biophysical models help to optimize hearing aid compression.
A crystal calibrator generates a defined sound pressure level (usually 94 dB SPL at 1 kHz) in a closed adapter to test microphone sensitivities. It uses piezoelectric crystals for stable frequency and amplitude. Calibration before each measurement ensures accuracy in audiometry and room acoustics. Regular traceability to national standards ensures measurement consistency. Documentation of calibration is part of quality control in laboratories and clinics.
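The 94 dB SPL calibration level is not arbitrary: it corresponds to a sound pressure of 1 Pa relative to the standard reference pressure of 20 µPa, as this short check shows:

```python
import math

P0 = 20e-6  # reference sound pressure in Pa (standard for dB SPL)

def spl_db(pressure_pa):
    """Sound pressure level in dB SPL relative to 20 micropascals."""
    return 20 * math.log10(pressure_pa / P0)

# 1 Pa corresponds to roughly 94 dB SPL, the usual calibrator level:
print(round(spl_db(1.0), 2))  # 93.98
```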
Auditory short-term memory stores acoustic information for seconds to minutes in order to process speech and sounds. It enables the understanding of sentences by retaining previous words in memory. Impairments lead to difficulties with longer passages of speech and complex listening situations. Tests such as dichotic number span measure auditory memory performance. Auditory training and cognitive exercises can improve short-term memory functions.
L
The labyrinth in the inner ear consists of the bony and membranous parts and comprises the cochlea, vestibule, and semicircular canals. It serves both sound transduction (cochlea) and balance perception (vestibular organ). The spaces filled with endolymph transmit mechanical stimuli to hair cells, which convert them into electrical signals. Diseases such as labyrinthitis or Meniere's disease lead to dizziness, nausea, and hearing loss. Imaging techniques and functional tests (caloric, VEMP) examine the integrity of the labyrinth.
Labyrinthitis is an inflammation of the inner ear, typically caused by a virus or bacteria, and affects both the hearing and balance organs. Symptoms include acute vertigo, nausea, vomiting, and often unilateral hearing loss or tinnitus. Diagnosis includes audiometry, vestibular function tests, and, if necessary, MRI to rule out other causes. Treatment combines antiviral or antibiotic medications with corticosteroids and vestibular rehabilitation training. In most cases, vestibular function recovers partially, but residual damage can leave persistent dizziness or hearing loss.
Noise pollution refers to exposure to harmful or disturbing sound levels in the environment and at work. It is measured in dB A and weighted over time (e.g., LEX,8h). Chronic noise pollution leads to stress, sleep disorders, and work-related hearing loss. National and international guidelines set limits for industrial, traffic, and recreational noise. Preventive measures include noise barriers, hearing protection, and quiet zones in cities.
A noise indicator is a key figure that quantifies noise pollution, e.g., Lden (day-evening-night), Lnight, or Lday. It integrates noise levels and time proportions to assess health risks. Municipal noise maps use indicators to identify areas of high pollution and plan protective measures. Specific indicators such as LEX,8h apply to workplaces. Indicators form the basis for noise action plans and environmental reporting.
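The Lden indicator combines the three daily periods with the evening and night penalties defined in the EU Environmental Noise Directive; a minimal sketch with illustrative input levels:

```python
import math

def lden(lday, levening, lnight):
    """Day-evening-night level per the EU Environmental Noise Directive:
    the evening gets a +5 dB and the night a +10 dB penalty, weighted by
    the 12 h / 4 h / 8 h durations of the three periods."""
    return 10 * math.log10(
        (12 * 10 ** (lday / 10)
         + 4 * 10 ** ((levening + 5) / 10)
         + 8 * 10 ** ((lnight + 10) / 10)) / 24)

print(round(lden(60, 58, 52), 1))  # 61.3
```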
A sound level meter is a measuring device that records sound pressure levels in real time and evaluates them in dB. Professional Class 1 and Class 2 meters comply with standards (IEC 61672) for precision and frequency weighting (A, C filters). They are used in occupational safety, environmental monitoring, and room acoustics. Calibration with external calibrators ensures measurement accuracy. Mobile versions and apps offer simple indicators, but do not achieve laboratory quality.
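The A-weighting filter these meters apply follows an analytic curve defined in IEC 61672; this sketch evaluates it directly, normalized so that 1 kHz receives no correction:

```python
import math

def _ra(f):
    """Analytic A-weighting response from IEC 61672."""
    f2 = f * f
    return (12194.0**2 * f2 * f2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2))

def a_weighting_db(f):
    """A-weighting correction in dB, normalized to 0 dB at 1 kHz."""
    return 20.0 * math.log10(_ra(f) / _ra(1000.0))

print(round(a_weighting_db(1000), 1))  # 0.0
print(round(a_weighting_db(100), 1))   # -19.1
```

Low frequencies are strongly attenuated, mirroring the ear's reduced sensitivity there, which is exactly why dB A values correlate better with perceived loudness than unweighted levels.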
Noise protection includes technical, structural, and organizational measures to dampen sound sources or minimize sound propagation. Examples include noise barriers, absorbent materials, and traffic calming measures. Personal hearing protection (earplugs, earmuffs) supplements structural protection. Standards regulate minimum requirements for sound insulation in buildings. Effective noise protection improves the quality of living and working and prevents hearing damage.
Noise protection regulations are legal frameworks at national or EU level that set limits and procedures for noise monitoring. They define permissible levels in industrial, traffic, and residential areas, as well as night and day times. Residents can file noise complaints and enforce measures such as speed limits or noise barriers. Local authorities draw up noise action plans based on regulations. Violations are punished with fines.
Noise-induced hearing loss is an occupational disease caused by chronic exposure to noise, leading to sensorineural hearing loss, particularly in the high-frequency range. It manifests as a declining audiogram curve from about 3 kHz onwards, often with a characteristic notch around 4 kHz. Prevention through hearing protection and regular preventive audiometry is required by law. Treatment involves hearing aids that specifically compensate for high-frequency loss. Rehabilitation includes auditory training and workplace adjustments.
Noise prevention aims to minimize noise pollution before it causes damage to health. It includes risk assessment, planning protective measures, and informing those affected. Technical prevention measures include quieter machines, structural insulation, and traffic control. Personal prevention measures include hearing protection and rules of conduct. Monitoring and regular measurements ensure the effectiveness of the measures.
Auditory latency is the time between a sound stimulus and a measurable response in the auditory system, e.g., evoked potentials or conscious perception. Latency times provide information about the functional status of peripheral and central auditory pathways. Prolonged latencies indicate demyelination, tumors, or neuropathic damage. In hearing aids, signal processing latency is minimized to ensure audio-video synchrony. Standard values exist for ABR, MLR, and CAEP components.
Lateral inhibition is a neural principle whereby activated neurons inhibit their neighbors to increase contrast and edge sharpness. In the auditory system, it improves frequency selectivity by attenuating adjacent frequency channels. This results in clearer sound and speech comprehension, especially in complex sound environments. Disruptions in lateral inhibition can cause broader sound fields and poorer discrimination. Models of this effect are incorporated into hearing aid filter designs.
Lateralization refers to the apparent perception that a sound source is located to the left or right of the center of the body, based on interaural time differences (ITD) and level differences (ILD). The brain compares minimal differences in travel time and volume between both ears to determine direction. Lateralization is essential for spatial hearing and situational orientation, for example in traffic. When fitting hearing aids, it is important to ensure that binaural synchronization is maintained so as not to distort lateralization. Tests in the sound field measure lateralization accuracy and help to identify processing disorders.
Loudness is the subjective auditory perception of the intensity of a sound, which does not correlate linearly with sound pressure (dB SPL). Psychoacoustic models such as the Zwicker model describe how frequency and level together determine the perceived loudness in sone. Loudness scales (see below) standardize this perception for technical applications and hearing aid fitting. Loudness depends on context, duration, and frequency spectrum; different loudness levels can be perceived at the same level. Hearing aid compression optimizes loudness perception by amplifying soft sounds and attenuating loud ones.
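The sone scale mentioned above follows, for levels above roughly 40 phon, Stevens' rule that every 10-phon increase doubles perceived loudness; a minimal sketch:

```python
def phon_to_sone(phon):
    """Stevens' power-law mapping (valid above ~40 phon): each 10-phon
    increase doubles the perceived loudness in sone; 40 phon = 1 sone."""
    return 2 ** ((phon - 40) / 10)

print(phon_to_sone(40))  # 1.0 (reference: 1 sone)
print(phon_to_sone(60))  # 4.0 (two doublings over the reference)
```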
In loudness scaling, test subjects subjectively rate the perceived loudness of test signals on a numerical or verbal scale. Methods such as category scaling or magnitude estimation provide functions that convert sound pressure into loudness (sone). These functions are used to calibrate hearing aids to ensure the desired loudness experience. Differences in scaling reveal individual loudness sensitivity and hyperacusis tendencies. Standardized scales (DIN 45631) ensure comparability between tests.
Speakers convert electrical audio signals into sound waves and are key components in free-field audiometry and sound reinforcement systems. Important parameters include frequency response, distortion factor, and directional characteristics. Calibrated studio monitors deliver precise levels for hearing tests, while consumer loudspeakers are optimized for sound aesthetics. Coaxial or dipole loudspeakers are often used in hearing studies to minimize reflections. Loudspeaker placement in a room influences reverberation and listening comfort and is planned acoustically.
Quality of life with hearing loss encompasses physical, psychological, and social dimensions, including communication skills, self-esteem, and social participation. Hearing loss increases the risk of isolation, depression, and cognitive impairment. Tools such as the HHIE (Hearing Handicap Inventory for the Elderly) quantify subjective stress. Interventions (hearing aids, rehabilitation, psychosocial support) aim to improve all areas of quality of life. Long-term studies show that early intervention significantly improves quality of life.
Line impedance is the complex resistance of an acoustic path, e.g., the middle ear or audio cable, to sound or signal transmission. It consists of resistive and reactive components and varies depending on frequency. In tympanometry, middle ear impedance is measured to assess vibration capacity and the ossicular chain. Deviations indicate stiffening (otosclerosis) or fluid accumulation. In hearing aid technology, impedance matching is used to ensure maximum performance and minimum reflections.
An auditory lexicon is the mental representation of sound patterns, words, and their meanings stored in the brain. It enables rapid word recognition and speech comprehension by comparing acoustic inputs with stored entries. Models of speech processing distinguish between phonological and semantic lexicons. Disorders such as aphasia or central auditory processing disorders impair access to the lexicon. In rehabilitation, lexicon access is trained through speech exercises and auditory training.
Lip reading is the technique of visually deciphering spoken sounds and words based on the movement of the lips, jaw, and facial muscles. It helps people with hearing impairments to improve their speech comprehension in quiet and noisy environments. Successful lip reading requires not only visual training but also knowledge of phonetics and speech rhythms. In practice, those affected combine lip reading with hearing aids or cochlear implants to achieve maximum communication ability. Speech therapists offer systematic exercises to synchronize visual and auditory impressions.
Lip synchronization refers to the adjustment of audio and video tracks so that lip movements and spoken sound are precisely synchronized. A lack of synchronization (lip-sync error) interferes with speech comprehension and can lead to cognitive overload. In subtitling and hearing aids with video streaming, precise lip synchronization is essential for correctly assigning speech sources. Technically, the delay is measured digitally and compensated for in milliseconds. Good synchronization improves the perceived naturalness and acceptance of audiovisual content.
The logarithmic scale represents values in exponential increments, allowing large data ranges to be displayed compactly. In audiology, sound pressure levels are expressed logarithmically in decibels (dB) because the ear's loudness sensitivity is itself roughly logarithmic. A doubling of perceived loudness corresponds to approximately +10 dB, which remains easy to grasp on a logarithmic scale. Audiograms and frequency responses of hearing aids use this scaling to clearly visualize hearing thresholds and amplification profiles. Logarithmic representations facilitate the comparison of different levels and frequency ranges.
Speech therapy in auditory rehabilitation focuses on the linguistic and communicative abilities of people with hearing loss. Speech therapists train articulation, pronunciation, and sound comprehension using auditory-visual methods, including lip reading and sound therapy. They develop individual therapy plans to promote speech comprehension in everyday situations. In addition, they use auditory training and cognitive strategies to compensate for central processing disorders. Close collaboration with audiologists and psychologists ensures a holistic treatment approach.
Air conduction is the main hearing pathway, in which sound waves pass through the air into the ear canal, cause the eardrum to vibrate, and are transmitted to the inner ear via the ossicular chain. Tone and speech audiometry measure air conduction thresholds using headphones to determine the extent of hearing loss. Deviations between air and bone conduction indicate sound conduction problems or middle ear disorders. The air conduction curve in the audiogram forms the basis of every audiological diagnosis. Otoscopy findings are correlated with air conduction data to identify pathologies.
An air conduction audiogram is a graphical representation of hearing thresholds across frequencies, measured by air conduction testing. It shows individual hearing curves and defines the degree of hearing loss, e.g., mild, moderate, or severe. The curve distinguishes between air conduction and bone conduction in order to differentiate between the causes of hearing loss. Standardized test frequencies range from 125 Hz to 8 kHz, and up to 16 kHz for high-frequency audiometry. Audiograms are essential for selecting and fitting hearing aids.
Airborne sound is sound that propagates through the air as a pressure wave and is heard via the outer ear. It differs from structure-borne sound in that the sound source consists of fluctuations in air molecule pressure. In room acoustics, airborne sound levels, reflection, and absorption are analyzed in order to optimize reverberation and echo. Hearing tests and noise measurements are based on airborne sound measurements using microphones. Hearing protection aims to reduce airborne sound levels below the limits that are safe for health.
M
The macular organs (sacculus and utriculus) are parts of the vestibular labyrinth and register linear acceleration and gravitational influences. They contain sensory hair cells whose stereocilia are embedded in a gelatinous membrane weighted with otoliths (calcium carbonate crystals). Shifts in the otoliths when the head is tilted or accelerated bend the stereocilia, triggering nerve impulses. This information is transmitted to the brain via the vestibular portion of the VIII cranial nerve and combined with visual and proprioceptive data to determine position. Damage to the macular organs leads to unsteadiness when standing and walking, as well as pathological swaying.
A malformation syndrome of the ear includes congenital malformations of the outer ear, middle ear, or inner ear, often in the context of genetic syndromes such as Goldenhar syndrome or Treacher Collins syndrome. Those affected show auricular malformations (microtia, anotia), ear canal atresia, or cochlear malformations. Hearing loss ranges from mild conductive hearing loss to complete deafness, depending on the extent of the malformation. Treatment includes surgical reconstruction, bone conduction hearing aids, or cochlear implants. Multidisciplinary care by ENT surgeons, audiologists, and plastic surgeons is crucial for functional and aesthetic results.
The mandibular reflex, also known as the chin reflex, is triggered by tapping on the lower jaw and tests trigeminal motor function. Although primarily a neurological test, the chewing muscles influence the auditory canal due to their proximity and can contribute to ear pain and tinnitus in cases of temporomandibular joint disorders. An increase or decrease in the reflex may indicate central or peripheral nerve lesions. In ENT diagnostics, it is combined with other cranial nerve reflexes to differentiate between headache and ear symptoms. Treatment for malfunction is provided through craniomandibular therapy and physical therapy.
Masking is the superimposition of a test signal with a noise or tone mask to prevent the untested ear from responding during audiometry (cross-hearing). It is necessary when the level difference between air and bone conduction allows unwanted perception in the opposite ear. Masking levels are calculated according to standardized rules to ensure the validity of threshold determination. In psychoacoustics, masking also refers to the suppression of quieter sounds by loud neighboring frequencies. In hearing aids, targeted masking is used to cover up tinnitus or reduce background noise.
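The standardized rules mentioned above can be condensed into a common textbook rule of thumb for air-conduction testing, sketched below. All values are in dB HL; the 40 dB interaural attenuation is an assumption typical for supra-aural headphones (insert earphones attenuate considerably more), and the function name is illustrative:

```python
def needs_masking(ac_test_ear, bc_nontest_ear, interaural_attenuation=40):
    """Simplified air-conduction masking rule: mask the non-test ear when
    the test signal, reduced by interaural attenuation, could still reach
    the non-test ear's bone conduction threshold (cross-hearing)."""
    return ac_test_ear - interaural_attenuation >= bc_nontest_ear

print(needs_masking(ac_test_ear=70, bc_nontest_ear=10))  # -> True
print(needs_masking(ac_test_ear=45, bc_nontest_ear=10))  # -> False
```

Clinical protocols add further rules for bone conduction and for choosing the masking level itself; this sketch only covers the cross-hearing decision.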
The mastoid (mastoid process) is the bony protrusion behind the auricle, which contains air-filled cells and is part of the temporal bone. It serves as a buffer for middle ear infections, but can itself become inflamed in cases of chronic otitis media (mastoiditis). Clinically, the mastoid is palpated for pressure pain and swelling to detect complications. Imaging techniques (CT) show cell structure and the extent of inflammatory processes. Surgical mastoidectomy removes diseased tissue and preserves hearing function.
The external auditory canal (meatus acusticus externus) is the passage that conducts sound from the auricle to the eardrum. It consists of bony and cartilaginous parts and is lined with skin and cerumen glands. Cerumen formation and exostoses can narrow the canal and lead to sound conduction disorders. Otoscopic examination checks the width, skin condition, and foreign bodies. In hearing aid fitting, a precise fit of the earmold in the meatus is crucial for attenuation and freedom from feedback.
The medial superior olive, part of the superior olivary complex in the brainstem, is a central switching station for binaural auditory processing. It compares interaural time differences (ITDs) to determine the direction of low-frequency sound sources. Neurons in this nucleus fire in phase with the sound waves and transmit information to higher auditory centers. Lesions lead to directional hearing disorders and reduced speech comprehension in noise. Research uses electrode recordings to analyze the precise temporal coding in the olivary nuclei.
In audiology, the membrane usually refers to the eardrum, a three-layered structure that converts sound energy into mechanical vibrations. It separates the outer ear from the middle ear and transmits vibrations to the inner ear via the ossicular chain. Changes in thickness, tension, or integrity—such as perforations—affect impedance and hearing ability. The membrane also plays a role in otoacoustic emissions, as its reflections can be measured. Surgical repairs (tympanoplasty) reconstruct damaged membranes to restore sound conduction.
The tectorial membrane is a gelatinous covering in the organ of Corti that lies over the hair cells and deflects their stereocilia during sound stimulation. It converts traveling waves of the basilar membrane into shearing movements of the hair cell stereocilia, which triggers mechano-electrical transduction. Differences in the stiffness and mass of the tectorial membrane along the cochlea influence frequency selectivity. Damage or detachment of this membrane leads to hearing loss and impairs tonotopy. Research approaches are investigating biomaterials for the regeneration of the tectorial membrane after noise damage.
Ménière's disease is an inner ear disorder characterized by episodes of vertigo, fluctuations in hearing, tinnitus, and ear pressure. Pathophysiologically, it involves endolymphatic hydrops, i.e., an excess accumulation of endolymph in the cochlear duct and semicircular canals. The diagnosis is based on clinical criteria, audiograms, and the exclusion of other causes. Treatment includes diuretics, intratympanic gentamicin administration for vestibular ablation, and vestibular training. Despite its chronic course, symptom control can significantly improve quality of life.
The mesotympanum is the middle section of the tympanic cavity in the middle ear between the epitympanum and the hypotympanum. It contains the ossicular chain and the stapes attachment at the oval window. Pathologies such as effusion or cholesteatoma often manifest in the mesotympanum and impair sound conduction. Surgical procedures (tympanotomy) aim to clean and ventilate this area. Tympanometry can indirectly estimate the pressure and volume in the mesotympanum.
Misophonia is a neurological-psychiatric disorder in which certain everyday sounds (e.g., chewing, typing) trigger intense negative emotions such as anger or disgust. Those affected react with an increased stress response, which severely limits social interaction and quality of life. The exact mechanisms are still unclear; a misconnection between auditory areas and the limbic system is suspected. Treatment approaches include cognitive behavioral therapy, tinnitus desensitization, and mindfulness exercises. Audiological examinations rule out organic hearing disorders to confirm the diagnosis.
The middle ear is an air-filled cavity that contains the eardrum, ossicular chain (malleus, incus, stapes), and Eustachian tube. It adapts sound pressure from air conduction to fluid conduction in the cochlea and protects against loud noises through reflexes. Diseases such as otitis media, otosclerosis, or cholesteatoma impair sound conduction and lead to hearing loss. Diagnosis is made by otoscopy, tympanometry, and audiometry. Surgical procedures such as stapedotomy or ear tubes improve ventilation and conductivity.
Otitis media is an inflammatory disease of the tympanic cavity, often caused by viruses or bacteria. It causes earache, fever, hearing loss, and can lead to fluid build-up. Chronic otitis media can lead to complications such as perforation of the eardrum or cholesteatoma. Treatment includes antibiotics, pain therapy, and, in the case of effusion, ear tubes. Prevention through vaccination (pneumococcus) and treatment of throat infections reduces the incidence.
The modiolus is the central bony axis of the cochlea around which the cochlear turns wind. It contains nerve fibers of the auditory nerve that run from the hair cells to the brainstem. The close spatial arrangement in the modiolus facilitates electrical stimulation during cochlear implantation. Pathologies such as fibrosis of the modiolus can impair implant function. In imaging, the modiolus is measured to plan surgical approaches.
Monaural hearing refers to hearing with only one ear, which eliminates binaural advantages such as localization and noise suppression. Those affected often compensate by moving their head and using visual cues. Audiologically, a markedly asymmetric audiogram is evident, and masking of the better ear is required to obtain valid thresholds for the poorer ear. Monaural fitting with only one hearing aid can maintain speech comprehension in quiet environments, but is severely limited in noisy environments. Support strategies include FM systems and room acoustics optimization.
Mondini dysplasia is a congenital malformation of the cochlea with reduced turns (usually 1–1.5 instead of 2.5). It belongs to the spectrum of inner ear malformations and leads to sensorineural hearing loss to varying degrees. Vestibular structures are also frequently affected, which can cause dizziness. Diagnosis includes high-resolution CT and hearing tests, and treatment often involves cochlear implantation. Early intervention improves speech development and balance function.
Ménière's disease is a chronic, recurrent disorder of the inner ear in which endolymphatic hydrops is accompanied by periodic attacks of vertigo, ear pressure, tinnitus, and fluctuating hearing loss. The term "Ménière's syndrome" is also used when the symptoms are incomplete or secondary to other diseases. The diagnosis is based on clinical criteria and the exclusion of other causes of vertigo using audiometry and balance tests. Treatment approaches include a low-salt diet, diuretics, intratympanic gentamicin injections, and vestibular rehabilitation training. Despite treatment, irreversible hearing loss in the affected frequency ranges may occur in the long term.
The stapedius muscle is the smallest striated skeletal muscle in the body and originates at the pyramidal eminence in the tympanic cavity of the temporal bone. It is connected to the stapes by a tendon and reflexively pulls it back in response to high-level sound. This contraction, the stapedius reflex, reduces sound transmission to the inner ear and protects it from harmful loud noises. Its function is tested in tympanometry by measuring the change in middle ear impedance during acoustic stimulation. Impairment of the stapedius muscle, for example due to nerve lesions, leads to increased noise sensitivity and hearing disorders.
The stapedius reflex is an acoustically triggered muscle reflex in which the stapedius muscle contracts at levels above approximately 70–90 dB SPL. This stiffening of the ossicular chain dampens loud impulses and protects the sensitive hair cells in the inner ear. The reflex is measured diagnostically using tympanometry devices that record changes in impedance and reflex latency. A missing or asymmetrical reflex may indicate otosclerosis, facial nerve lesion, or central auditory pathway disorder. Reflex parameters provide important information for the differential diagnosis of middle ear and neural pathologies.
Myoelectric stimulation uses electrical impulses to activate specific muscles and provide therapeutic training or relaxation. In ENT practice, it can be used to treat tinnitus, chronic muscle tension in the jaw and facial area, and to improve Eustachian tube function. Electrodes apply weak direct or alternating currents through the skin, triggering muscle contractions. Patients report pain relief and improved functionality after regular sessions. Scientific studies are currently investigating optimal stimulation parameters and long-term effects.
Myringitis is an inflammation of the eardrum that can be caused by viral or bacterial infection, excessive heat, or chemical irritants. Those affected complain of acute ear pain, redness, and swelling of the eardrum, and occasionally fluid discharge. Clinically, myringitis can be recognized otoscopically by a cloudy or hyperemic membrane. Treatment includes analgesics, topical antibiotics if necessary, and avoidance of further irritants. Complications such as perforation or chronic inflammation are rare but possible.
Myringoplasty is a surgical procedure to reconstruct the eardrum in cases of perforation, usually with the aid of a connective tissue graft (e.g., fascia or perichondrium). The aim is to restore sound conduction and prevent recurrent otorrhea. Access is often retroauricular or endaural, followed by microsurgical suturing and covering of the defect. Success rates for permanent eardrum closure are over 85%. Postoperative audiometry checks hearing gain, and hygiene measures reduce the risk of infection.
Myringoscopy is the visual inspection of the eardrum and tympanic cavity using an otoscope or surgical microscope. It allows the color, translucency, perforations, and other pathologies of the membrane to be assessed. If necessary, samples can be taken via an instrument channel for microbiological or histological examination. Myringoscopy is routine in ENT clinics and forms the basis of all middle ear diagnostics. Clinically, the findings guide further treatment decisions, such as tympanostomy tube insertion or myringoplasty.
Myringotomy is a small incision in the eardrum to drain acute effusion or pus from the middle ear. It is often performed in combination with the insertion of a tympanostomy tube to ensure permanent ventilation. It is indicated for acute otitis media, chronic effusions, and pressure-related pain. The procedure is performed on an outpatient basis under local anesthesia and takes only a few minutes. It usually provides immediate pressure relief and improved hearing.
N
Reverberation refers to the persistence of sound in a room after the source stops, caused by reflections from walls, ceilings, and furnishings. The reverberation time (RT60) is the time it takes for the sound pressure level to decrease by 60 dB and influences speech intelligibility and sound quality in rooms. Excessively long reverberation times obscure speech signals and make understanding difficult, while excessively short reverberation times create a "dead" sound. In sound reinforcement and room acoustics planning, materials and geometries are selected to create balanced reverberation behavior. Hearing aid users benefit from optimized reverberation control, as it reduces the burden on central speech processing.
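The RT60 defined above can be estimated with Sabine's classic formula, RT60 = 0.161 · V / A, where V is the room volume in m³ and A the total absorption in metric sabins. A minimal sketch with hypothetical room values:

```python
def rt60_sabine(volume_m3, absorption_sabins):
    """Sabine's estimate of reverberation time in seconds (metric units)."""
    return 0.161 * volume_m3 / absorption_sabins

# Hypothetical classroom: 200 m^3 volume, 40 m^2 of total absorption
print(rt60_sabine(200, 40))  # about 0.8 s, a comfortable value for speech
```

Adding absorptive material (larger A) shortens the reverberation time, which is exactly the lever used in room acoustics planning.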
Post-amplification is an adaptive amplification measure in hearing aids that reacts to recognized speech signals with a time delay in order to emphasize quiet passages. Unlike real-time compression, it intervenes retrospectively when speech energy falls below the comfort threshold. This improves speech comprehension in dynamic situations without unintentionally amplifying loud impulses. Parameters such as delay time and amplification strength are individually adjusted to the hearing profile. Clinical studies show that post-amplification is particularly beneficial in situations where volume changes occur rapidly.
Proximity hearing loss describes the decreasing ability to hear soft sounds as the distance from the sound source increases. It is based on the inverse distance law of free-field sound propagation, by which the sound pressure level decreases by 6 dB for every doubling of distance. People with hearing loss are more affected by this effect because their need for amplification of soft signals increases. In audiology, this effect is taken into account when calibrating hearing aid amplification for different distances. Room acoustic measures and near-field microphones can compensate for the effect.
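The 6 dB-per-doubling rule corresponds to L2 = L1 − 20 · log10(d2/d1) under free-field conditions. A minimal sketch (the levels and distances are illustrative):

```python
import math

def level_at_distance(l1_db, d1_m, d2_m):
    """Free-field inverse distance law: about -6 dB per doubling of distance."""
    return l1_db - 20 * math.log10(d2_m / d1_m)

print(level_at_distance(70, 1, 2))  # 70 dB at 1 m -> about 64 dB at 2 m
print(level_at_distance(70, 1, 4))  # -> about 58 dB at 4 m
```

Real rooms deviate from this law because reflections add a diffuse field that dominates beyond the critical distance.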
Near-field communication (NFC) is a wireless radio technology in the high-frequency range (13.56 MHz) that operates over short distances of a few centimeters. In hearing care, NFC is used to configure hearing aids via smartphone or tablet and to switch programs. The technology enables secure pairing without visible cables and saves battery power thanks to short transmission distances. Fitting apps use NFC to transfer audiogram data and filter settings. NFC increases user-friendliness and autonomy in hearing aid management.
Nerve fiber latency refers to the time it takes for an action potential to travel along an afferent auditory pathway from the inner ear to the brainstem. It depends on fiber diameter, myelination, and temperature. Delays in the millisecond range are normal and are documented in ABR measurements. Prolonged latencies indicate demyelination, inflammation, or tumors along the auditory pathway. Accurate measurement of nerve fiber latency helps to locate lesions and monitor the course of therapy.
Neural hearing refers to the central processing of acoustic signals in the brainstem and cortex, beyond the peripheral hair cell function. It includes functions such as time and level difference evaluation, pattern recognition, and speech interpretation. Even with an intact ear, neural hearing can be impaired (e.g., central auditory processing disorder), which manifests itself in a normal audiogram but poor speech comprehension. Test procedures such as dichotic listening and evoked potentials examine neural processing levels. Rehabilitation aims at neural plasticity through auditory training and cognitive therapies.
Vestibular neuritis is an inflammatory lesion of the vestibular part of the VIII cranial nerve, usually caused by a virus. It causes sudden onset of severe vertigo, nausea, and unsteadiness, without primary hearing loss. Vestibular function tests (caloric, VEMP) show ipsilateral deficits. Treatment includes corticosteroids, vestibular rehabilitation, and symptomatic medication. The prognosis is usually favorable, as central compensation mechanisms restore balance in the long term.
A neurinoma (schwannoma) is a benign tumor arising from the Schwann cells of peripheral nerves, occasionally involving the auditory system. In the cerebellopontine angle, schwannomas of the VIII cranial nerve are known as acoustic neuromas (vestibular schwannomas). They compress the auditory and vestibular nerves and lead to unilateral hearing loss, tinnitus, and vertigo. Diagnosis is made by MRI with contrast medium, and treatment is by microsurgical resection or stereotactic radiotherapy. Early detection improves preservation of nerve function and quality of life.
Neuroplasticity is the ability of the nervous system to adapt structurally and functionally to changed stimuli or damage. In the auditory system, it manifests itself after hearing loss or cochlear implantation through the reorganization of cortical areas. Targeted auditory training and rehabilitation promote plastic processes and improve speech comprehension. Plastic changes can be documented using imaging techniques (fMRI). Plasticity is a prerequisite for successful auditory rehabilitation, but it decreases with age.
Neurotoxicity refers to damage to nerve tissue caused by chemical substances, including ototoxins such as aminoglycosides, cisplatin, or solvents. In the ear, these substances cause damage to hair cells, synapse loss, and neuronal degeneration. Early detection is achieved through otoacoustic emissions and ABR monitoring during therapy. Protective strategies include dose reduction, ototoxicity-protective adjuvants, and regular hearing checks. Long-term effects range from tinnitus to irreversible sensorineural hearing loss.
Nonlinear distortions occur when a system processes sound signals disproportionately to the input signal, generating harmonics and intermodulation products. In hearing aids, they can impair sound fidelity and speech comprehension if amplifier stages or transducers are not operating optimally. Total harmonic distortion measurements quantify the degree of nonlinear distortion and assist in the selection and calibration of hearing systems. Digital signal processors use linear pre-equalization and feedback suppression to minimize distortion. Severe distortion can also increase neurological processing effort and promote listener fatigue.
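Total harmonic distortion as mentioned above relates the RMS sum of the harmonic amplitudes to the fundamental. A minimal sketch with made-up amplitudes (the function name and values are illustrative):

```python
import math

def thd_percent(fundamental, harmonics):
    """Total harmonic distortion in percent: RMS of the harmonic
    amplitudes divided by the fundamental amplitude."""
    return 100 * math.sqrt(sum(h * h for h in harmonics)) / fundamental

# Fundamental 1.0 V with harmonics of 0.03 V and 0.04 V:
# sqrt(0.0009 + 0.0016) = 0.05 -> 5 % THD
print(round(thd_percent(1.0, [0.03, 0.04]), 2))  # -> 5.0
```

Hearing aid standards typically specify maximum THD values at defined input levels and frequencies.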
Noise cancellation is an active technique for suppressing ambient noise, in which a microphone picks up the interference signal, inverts it in real time, and mixes it with the useful signal. The result is that annoying low-frequency and constant noises—such as aircraft noise or air conditioning hum—are effectively reduced. In hearing aids and headphones, noise cancellation improves speech comprehension in noisy environments and reduces listening effort. Adaptive algorithms continuously adjust the filter settings to changing noise levels. Disadvantages may include a slightly reduced spatial hearing impression and battery consumption.
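The phase-inversion principle can be shown with a toy numeric sketch. Real systems estimate the noise adaptively and imperfectly; here the noise estimate is assumed to be perfect, and all sample values are made up:

```python
# Interference picked up by the reference microphone
noise = [0.5, -0.2, 0.8, -0.1]
# Useful signal (e.g., speech) at the same sample instants
speech = [0.1, 0.3, -0.2, 0.4]
# What the listener would hear without cancellation
mixture = [s + n for s, n in zip(speech, noise)]

# Invert the noise estimate and add it to the mixture
anti_noise = [-n for n in noise]
output = [m + a for m, a in zip(mixture, anti_noise)]
print(output)  # recovers the speech samples (up to float rounding)
```

In practice the residual noise depends on how accurately and quickly the adaptive filter tracks the interference, which is why cancellation works best for low-frequency, steady sounds.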
A noiser is an integrated tinnitus masker in modern hearing aids that emits a soft noise signal to mask ear noises and promote habituation. The spectrum and volume of the noiser can be individually adjusted to the wearer's tinnitus characteristics. Continuous, pleasant noise reduces the focus on tinnitus and alleviates cognitive stress. Noiser programs can be activated manually or controlled automatically by sound detection. Clinical studies show that integrated noiser functions significantly improve sleep quality and quality of life for tinnitus patients.
The nomenclature of audiological tests includes standardized terms for procedures such as tone audiometry, speech audiometry, otoacoustic emissions (OAE), and evoked potentials (ABR, CAEP). Uniform terminology facilitates communication between audiologists, ENT doctors, and researchers. It clearly defines test parameters such as frequency range, level, masking, and stimulus type. International standards (ISO, IEC) and professional associations publish guidelines on correct nomenclature. Consistent naming ensures the reproducibility and comparability of test results.
Normal hearing refers to hearing thresholds within the reference limits of 0 to 20 dB HL across the frequency spectrum from 125 Hz to 8 kHz. People with normal hearing can reliably perceive speech and everyday sounds without technical aids. Audiometric tests confirm normal hearing through symmetrical air and bone conduction curves without significant threshold deviations. Even with normal hearing, subtle central auditory processing problems (e.g., hidden hearing loss) can occur. The term serves as a starting point for classifying degrees of hearing loss and making decisions about treatment.
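The grading of hearing thresholds can be sketched as a simple lookup. The 20 dB HL limit for normal hearing follows this glossary; the remaining boundaries are illustrative WHO-style values, and cutoffs vary between classification schemes:

```python
def classify_hearing(threshold_db_hl):
    """Rough grading of a pure-tone threshold in dB HL (illustrative cutoffs)."""
    if threshold_db_hl <= 20:
        return "normal"
    if threshold_db_hl <= 40:
        return "mild"
    if threshold_db_hl <= 60:
        return "moderate"
    if threshold_db_hl <= 80:
        return "severe"
    return "profound"

print(classify_hearing(15))  # -> normal
print(classify_hearing(55))  # -> moderate
```

Clinically, the grade is usually derived from an average over several frequencies (e.g., 0.5, 1, 2, and 4 kHz) rather than a single threshold.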
The norm curve in the audiogram is the standardized line that represents the normal hearing threshold across frequencies and serves as a comparative reference. Deviations of the measured threshold curve from this line indicate the degree and pattern of hearing loss. Norm curves are based on population surveys and reference levels according to ISO and ANSI standards. In the fitting software, the norm curve visualizes target amplification profiles for hearing aids. Audiologists use the deviations for individual audiogram comparison and to set fitting goals.
The standard threshold is the reference level (0 dB HL) defined in audiology for the minimum audible sound pressure amplitude under standard conditions. It varies slightly with test frequency, but is internationally standardized to ensure comparability of test results. Values above the standard threshold define the degrees of hearing loss. The standard threshold is the basis for calibrating audiometers and hearing system parameters. It also serves as a reference value in otoacoustic emission tests and evoked potentials.
The cochlear nucleus is the first central switching station of the auditory pathway in the brainstem, where fibers of the vestibulocochlear nerve terminate. It is divided into ventral and dorsal nucleus complexes, which handle different aspects of acoustic signal processing, such as temporal fine structure and spectral information. From here, projections continue to the superior olivary complex, lateral lemniscus, and subsequent auditory centers. Lesions in the cochlear nucleus lead to central auditory processing disorders despite intact peripheral function. Intraoperative evoked potentials (BERA) measure the integrity of the cochlear nucleus and its connections.
The frequency of hearing aid use describes how often and in what situations wearers use their hearing systems. Optimal use (daily, several hours) correlates strongly with treatment success, speech comprehension, and quality of life. Audiologists record usage patterns using questionnaires, wear time tracking in the device, and smart app statistics. Common barriers include stigma, handling problems, and limited comfort. Interventions such as training programs and individualized adjustments significantly increase acceptance of use.
Nystagmus is an involuntary, rhythmic eye movement pattern, often in response to vestibular stimuli or neural lesions. It can be spontaneous, positional, or caloric, and can vary in direction and speed. The analysis of nystagmus characteristics (e.g., direction, latency, decay time) provides differentiated information about peripheral and central vestibular pathologies. Video nystagmography (VNG) and Frenzel glasses are standard diagnostic tools. Therapeutically, vestibular rehabilitation and pharmacological interventions aim to reduce pathological nystagmus patterns.
O
An objective tinnitus signal refers to tinnitus noises generated by measurable physiological sources in the ENT area, such as vascular turbulence or muscle contractions. Unlike subjective tinnitus, objective noises can be recorded acoustically using special microphones or stethoscopes. The causes are often vascular malformations, muscle spasms in the middle ear, or Eustachian tube spasms. The diagnosis is made through parallel hearing tests and imaging such as duplex sonography or CT angiography. Depending on the cause, vascular embolization, muscle injections, or surgical corrections are used as treatment.
The ear is divided into the outer ear (ear canal and auricle), middle ear (eardrum, ossicles, Eustachian tube), and inner ear (cochlea and vestibular organ). It picks up sound, converts it mechanically and electrochemically, and transmits nerve impulses to the brain. It is also responsible for balance and spatial orientation. Diseases affecting any part of the ear can cause hearing loss, tinnitus, or vertigo. Interdisciplinary care involves ENT specialists, audiologists, and, in the case of balance problems, neurologists or physical therapists.
An ear impression is an exact negative mold of the external ear canal and the auricle, which serves as the basis for custom-made earpieces, hearing protection, and in-ear monitors. It is created directly in the ear canal using soft, skin-friendly impression material. A precise impression ensures a snug fit and optimal sound, prevents feedback, and minimizes pressure points. Incorrect impressions can lead to leaks, uncomfortable pressure, or poor sound quality. Specially trained audiologists check the impression and optimize it if necessary.
Ear candles are hollow, flammable tubes that are inserted into the ear canal and lit to create negative pressure, which is supposed to draw out earwax and impurities. However, scientific studies have shown that this method is ineffective and can cause dangerous burns, scalds, and perforations of the eardrum. ENT associations advise against ear candles and recommend medical ear cleaning procedures instead. Proper cleaning is performed under microscopic view or using cerumen-dissolving drops. In the case of recurring problems, specialist medical examination and conservative therapies are more effective.
Ear care aims to gently clean the external ear canal and outer ear and protect them from infection. Only water- or oil-based ear drops that dissolve earwax are recommended, along with wiping the external ear canal with a soft cloth. Inserting cotton swabs or other objects deep into the ear can push cerumen deeper, causing injuries to the ear canal or damage to the eardrum. In the case of stubborn earwax or foreign bodies, an ENT doctor should perform the cleaning. Regular check-ups prevent cerumen obturans and acute otitis externa.
Earwax (cerumen) is a mixture of secretions from the cerumen glands and dead skin cells that acts as a natural protective film in the ear canal. It traps dust, dirt, and microorganisms and has antimicrobial properties. Normally, cerumen is transported out of the ear canal by the self-cleaning migration of the canal skin, assisted by jaw movements. However, overproduction or incorrect self-cleaning leads to the formation of plugs (cerumen obturans) and sound conduction disorders. When removing plugs, ENT doctors use rinsing, suction, or drops to prevent damage.
Earache (otalgia) can be caused by diseases of the outer ear (e.g., otitis externa), middle ear (otitis media), but also by dental or jaw-related problems. It manifests itself as a stabbing, throbbing, or burning pain, often accompanied by a feeling of pressure or hearing loss. Diagnostics include otoscopy, functional tests, and, if the cause is unclear, dental or neurological examination. Treatment depends on the underlying disease and includes analgesics, antibiotics, compresses, or surgical intervention if necessary. The primary goals are to reduce pain and avoid complications.
Ear disorders encompass all pathological conditions of the outer ear, middle ear, and inner ear, from cerumen obturans and otitis media to sensorineural hearing loss. They can be acute or chronic and cause symptoms such as hearing loss, tinnitus, dizziness, or pain. Diagnosis requires an ENT examination, audiometry, tympanometry, and, depending on the suspected condition, imaging. Treatment ranges from conservative measures (medication, physiotherapy/audiotherapy) to surgical procedures. Prevention through vaccinations (e.g., pneumococcal), hearing protection, and regular check-ups reduces the disease burden.
Ear noises encompass all subjectively perceived sounds without an external sound source, including tonal tinnitus, pulsatile sounds, and muscular noises. They are caused by changes in the inner ear, middle ear, vascular flow, or central processing disorders. Pulse-synchronous noises often indicate vascular causes, while tonal sounds indicate cochlear or central dysregulation. Diagnosis is made through medical history, objective measurements (OAE, AEP), and imaging techniques. Therapies range from sound therapy and cognitive behavioral therapy to medication and invasive procedures, depending on the cause.
The shape of the outer ear varies greatly anatomically and is classified according to the contour, height, and depth of the helix, antihelix, and cavum conchae. Typical variants are the loop-shaped (reduced helix) and shell-shaped (deep concha) outer ear. Shape and size influence the head-related transfer function (HRTF) and thus spatial hearing and frequency filtering. When manufacturing earmolds for hearing aids or hearing protection, the individual shape of the auricle must be taken into account precisely. Plastic surgery corrections (otoplasty) can treat aesthetic or functional problems, such as protruding ears or microtia deformities.
Otohypertension refers to increased pressure in the middle ear, which causes the eardrum to bulge outward and impairs its ability to vibrate. Common causes include Eustachian tube ventilation disorders, inflammatory effusions, or postoperative changes. Symptoms include a feeling of pressure, hearing loss, and occasionally a feeling of fullness in the ear. Diagnostically, tympanometry shows a peak shifted toward positive pressures with reduced compliance. Treatment aims to equalize pressure through tube function training, balloon dilation, or ear tube implantation.
Otology is the medical field that deals with diseases of the ear, its function, and treatment. It includes the diagnosis and treatment of hearing disorders, vertigo, ear pain, and ear malformations. Otologists work closely with audiologists, neurotologists, and ENT surgeons to ensure interdisciplinary care. Procedures include audiometry, tympanometry, microsurgical procedures, and implantations. Research in otology ranges from molecular repair mechanisms to innovative hearing implants.
Otomastoiditis is inflammation of the mastoid bone resulting from untreated or chronic otitis media. It manifests itself through severe pain behind the ear, fever, swelling, and often discharge from the ear canal. Diagnosis is made using CT to detect bone destruction and abscesses. Treatment includes high-dose antibiotics and often surgical mastoidectomy to remove necrotic tissue. If left untreated, it can lead to life-threatening complications such as brain abscess.
Otoneurology is an interdisciplinary branch of neurology and ENT medicine that deals with disorders of the inner ear and its central connections. It focuses on vertigo, vestibular disorders, and central auditory processing disorders. Diagnostic procedures include videonystagmography, vestibular evoked potentials, and imaging techniques. Therapeutic approaches combine medication, surgery, and rehabilitation. Otoneurologists work closely with physical therapists for vestibular training.
Otopexy refers to the surgical fixation of the auricle or middle ear structures, for example after trauma or in cases of congenital anomalies. In the outer ear, it is used to bring protruding ears into an anatomically correct position (otoplasty). In the middle ear, otopexy can stabilize the ossicular chain or implants to optimize sound conduction. The procedure is minimally invasive and performed under microscopic control. Postoperative checks ensure the mobility and function of the fixed structures.
An earmold is a custom-made shell made of silicone or acrylic that seals the ear canal and holds the hearing aid control unit or receiver. It ensures optimal sound, prevents feedback, and offers comfort through precise adaptation to the individual ear geometry. Earmolds are made from an ear impression and are regularly reworked to ensure a good fit and seal. Different designs (open, closed) affect ventilation and acoustic properties. Cleaning and care are essential to prevent material aging and cerumen buildup.
Otorhinolaryngology (ear, nose, and throat medicine) is the surgical and medical specialty for diseases of the ear, nose, and throat. It covers the diagnosis and treatment of hearing and balance disorders, sinusitis, voice and swallowing disorders, and tumors in the head and neck area. ENT doctors perform endoscopic examinations, microsurgical procedures, and implantations. Interdisciplinary collaboration with neurology, dentistry, and oncology is common. Further training covers otology, rhinology, phoniatrics, and pediatric audiology.
Otosclerosis is a bony remodeling at the stapes footplate that leads to stiffening of the ossicular chain and conductive hearing loss. Initially, those affected often experience tinnitus and mild hearing loss, later developing the typical air-bone gap in the audiogram. Treatment consists of stapedotomy with prosthesis implantation or conservative monitoring in mild cases. Genetic factors and hormonal influences play a role in the pathogenesis. The long-term prognosis after surgery is generally good, with hearing improvement of up to 30 dB.
Otoscopy is the visual examination of the external auditory canal and eardrum using an otoscope. It allows the skin, cerumen, signs of inflammation, perforations, and foreign bodies to be assessed. Pneumatic otoscopy also tests eardrum mobility in response to pressure changes. It is a fundamental part of every ENT examination and should be performed before audiometry. Findings lead to further diagnostics or therapeutic steps such as cleaning, drops, or surgical interventions.
The otoscopic findings document all visible changes in the ear canal and eardrum, such as redness, edema, perforation, or fluid levels. They include a description of the location, size, and morphology of pathologies, as well as functional tests such as pneumatic eardrum mobility. Standardized report forms ensure comparability and traceability. Deviations from normal findings trigger targeted therapies, e.g., antibiotics for otitis or surgery for cholesteatoma. Regular check-ups are essential for chronic middle ear diseases.
P
Pediatric hearing aid acoustics is the discipline that deals with acoustic care and hearing aid fitting for children. It takes into account age-specific characteristics such as ear canal anatomy, growing ear molds, and childhood hearing loss profiles. Diagnostic procedures are designed to be playful, such as child-friendly audiometry or otoacoustic emissions as screening tools. Hearing aid programs are preset to suit children before fine adjustments are made in everyday life. Close cooperation with educators, parents, and early intervention specialists ensures optimal language development and participation in social life.
Pediatric audiology encompasses the diagnosis, treatment, and care of hearing disorders in infants, children, and adolescents. It relies on objective testing methods such as OAE screening and AEP measurements, as young children often do not respond reliably to standard audiometry. From preschool age onwards, playful hearing tests are used to determine hearing thresholds and speech comprehension in an age-appropriate manner. Pediatric audiologists fit hearing aids, accompany speech and language therapy, and monitor developmental milestones. An interdisciplinary team including ENT doctors, speech therapists, and teachers ensures holistic support.
The spiral papilla, also known as the organ of Corti, is located on the basilar membrane in the cochlea and is the actual sound perception organ. It consists of inner and outer hair cells, supporting cells, and the gelatinous tectorial membrane above the stereocilia. Sound-induced traveling waves in the basilar membrane bend the stereocilia, triggering mechano-electrical transduction. The inner hair cells encode sound information, while the outer hair cells act as active amplifiers. Damage to the spiral papilla leads to sensorineural hearing loss and impairs frequency resolution.
In partial tone audiometry, the hearing threshold is determined using continuous tones, which the patient signals by pressing a button. Unlike impulse audiometry, the tester gradually scans through different frequencies and levels to draw a precise threshold curve. The procedure is suitable for detailed diagnostics, for example, in cases of suspected cochlear nonlinearities or hidden hearing loss. It detects adaptation and fatigue effects of the auditory system. Modern audiometers support automated partial tone protocols for consistent results.
A pathological hearing threshold is present when the determined hearing threshold per frequency deviates from the standard values by more than 20 dB HL over the long term. It indicates the presence of hearing loss and determines its degree (mild, moderate, severe). Pathological thresholds can develop gradually (age, noise) or acutely (acoustic trauma, sudden hearing loss). Conductive and sensorineural loss are differentiated by comparing air and bone conduction thresholds. Follow-up checks show progression or therapy effects and guide treatment decisions.
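The comparison of air and bone conduction thresholds described above can be sketched in code. This is an illustrative sketch, not a clinical decision tool; the 20 dB HL cutoff follows the entry, while the 10 dB air-bone-gap criterion and all identifiers are assumptions:

```python
# Illustrative sketch (not a clinical tool): classify one frequency's hearing
# thresholds from air conduction (AC) and bone conduction (BC) values in dB HL.
# The 20 dB HL cutoff follows the glossary entry; the 10 dB air-bone-gap
# criterion and all names are assumptions.

def classify_threshold(ac_db_hl: float, bc_db_hl: float) -> str:
    if ac_db_hl <= 20:
        return "normal"
    air_bone_gap = ac_db_hl - bc_db_hl
    if bc_db_hl <= 20 and air_bone_gap >= 10:
        return "conductive"        # inner ear normal, transmission impaired
    if air_bone_gap < 10:
        return "sensorineural"     # AC and BC equally elevated
    return "mixed"                 # both elevated, with a remaining gap

print(classify_threshold(45, 10))  # conductive
print(classify_threshold(50, 45))  # sensorineural
```

Real audiograms are judged across several frequencies and alongside tympanometry, so a per-frequency label like this is only a starting point.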
The tympanic cavity (cavitas tympani) is the air-filled space in the middle ear that lies behind the eardrum and contains the ossicular chain and the oval and round windows. It connects to the nasopharynx via the Eustachian tube and serves to equalize pressure. Pathologies such as effusion (otitis media with effusion) lead to increased pressure and sound conduction disorders. Tympanometry measures the compliance of the tympanic cavity and provides information on the ventilation status and middle ear pressure. Surgical access is often gained via the posterior auditory canal for direct intervention in the tympanic cavity.
Eardrum tubes (tympanostomy tubes) are small plastic or metal cannulas that are anchored in the eardrum to ensure permanent ventilation of the tympanic cavity. They are indicated for recurrent otitis media effusions, tube dysfunction, or risk of cholesteatoma. They enable pressure equalization, secretion drainage, and reduce middle ear infections. The tubes are inserted on an outpatient basis under local anesthesia and usually fall out spontaneously after 6–12 months. Follow-up examinations ensure that the eardrum is closed and hearing function is restored.
Auditory perception encompasses all processes from sound reception to conscious interpretation in the brain. It includes detection, discrimination, recognition, and localization of sound sources. Psychophysical methods measure perception using threshold and discrimination tests, while neurophysiological methods record evoked potentials. Disorders of auditory perception can occur despite normal peripheral function (e.g., central auditory processing disorder). Hearing training and cognitive interventions aim to rehabilitate perception abilities.
Perilymph is the sodium-rich fluid in the scala vestibuli and scala tympani of the cochlea, which conducts mechanical vibrations in the cochlea and enables pressure equalization. It surrounds the membranous ducts, which contain the endolymph, and forms an electrochemical separation between the two fluids. Injuries to the oval or round window can cause a perilymph fistula, leading to vertigo and hearing loss. Perilymph pressure fluctuations are indirectly detected during electrocochleography testing. Research is investigating perilymph biomarkers as indicators of hearing damage.
Peritubalitis is an inflammation of the tissue surrounding the Eustachian tube, often as a result of chronic rhinopharyngitis or tubal catarrh. It leads to edema formation, tubal stenosis, and middle ear pressure effusions. Patients complain of pressure, hearing loss, and recurrent otitis media. Diagnosis is made through tube function tests, endoscopic inspection, and tympanometry. Treatment includes anti-inflammatory nasal drops, tube dilation, and, if necessary, ear tubes.
Perceptive hearing loss (sensorineural hearing loss) is caused by damage to the hair cells in the cochlea or the auditory nerve fibers. It shows up on the audiogram as equally high air and bone conduction thresholds and cannot be corrected surgically. Causes include noise trauma, aging, genetic defects, or ototoxins. Those affected complain of reduced speech comprehension, especially in noisy environments, and benefit from hearing aids or cochlear implants. Rehabilitation measures also include auditory training to strengthen central processing mechanisms.
Ringing in the ears is a form of tinnitus in which sufferers perceive a high-frequency, tonal noise. It can occur in one or both ears and varies in volume and frequency. Causes range from noise damage and otosclerosis to changes in the central auditory pathway. Diagnostics include tinnitus pitch and loudness matching, OAE, and tinnitus screening to determine frequency and level. Treatment approaches include sound enrichment with noise generators, cognitive behavioral therapy, and, if indicated, medication.
The pharyngotympanic tube connects the middle ear to the nasopharynx and enables pressure equalization and ventilation. It opens when swallowing or yawning and otherwise prevents the backflow of secretions into the middle ear. Dysfunctions lead to tube catarrh, middle ear effusion, and hearing loss. Functional tests such as tube function testing and tympanometry assess its ability to open. Balloon dilation and ear tubes are used therapeutically to prevent long-term complications.
The phonatory reflex refers to the involuntary adjustment of voice volume and pitch to the perceived volume of one's own voice. When speaking in a noisy environment, people automatically increase their volume (Lombard effect) to ensure speech intelligibility. This reflex is controlled by auditory feedback loops in the brain. Hearing loss disrupts the phonatory reflex, resulting in changes in voice level and articulation. Speech therapy can retrain the reflex function and improve speech intelligibility.
A phoneme is the smallest meaningful unit of sound in a language, e.g., /p/ vs. /b/ in English. Phonemes are encoded in the auditory system as specific frequency and time patterns and retrieved from the linguistic lexicon. In audiometry and speech therapy, phoneme tests are used to assess articulation and perception abilities. Hearing aid programs often emphasize phoneme-relevant frequency bands to optimize speech comprehension. Misperceptions of individual phonemes are typical in cases of high-frequency hearing loss or central processing disorders.
Phonosurgery encompasses microsurgical procedures on the ear that are intended to improve hearing or alleviate tinnitus, such as stapedotomy, myringoplasty, or implant placement. The aim is to reconstruct the ossicular chain, eardrum, or direct auditory nerve stimulation. Precision and preservation of residual hearing are paramount, often supported by intraoperative monitoring. Postoperative audiometry and tympanometry document the success of the procedure. Innovations such as endoscopic techniques reduce tissue trauma and rehabilitation time.
Phonotypy refers to individual physiological conditions and motor patterns of sound formation, i.e., how speakers articulate phonemes. It includes lip, tongue, and jaw movements as well as glottis shape. Hearing loss often causes unconscious changes in phonotypy, leading to unclear pronunciation. Speech therapy analyzes phonotypy and provides targeted training in articulation patterns. Video and biofeedback improve awareness of sound formation processes.
The pinna is the visible outer ear made of elastic cartilage that captures sound waves and directs them into the ear via the ear canal. Its complex folds create frequency-dependent filter effects that help locate sound sources in the vertical plane. Variations in the shape of the pinna result in individual HRTFs and influence spatial hearing. When fitting hearing aids, the pinna adaptation of the otoplasty must be taken into account to ensure comfort and sound fidelity. Reconstructive surgery (otoplasty) corrects malformations or injuries to the pinna.
In tinnitus, the plateau phenomenon refers to a phase in which the pitch and volume of the ear noise remain stable over a period of time before fluctuating again. This stability provides diagnostic certainty in tinnitus screening and facilitates sound therapy settings. Plateau phases vary in duration from minutes to hours and can be interrupted by stress or noise. Therapeutically, plateaus are used to precisely adjust noise profiles and promote habituation. Documenting plateau duration helps to monitor the progression of tinnitus.
The brachial plexus is a network of nerves originating from spinal nerves C5–T1 that innervates the shoulder and arm. Although anatomically located outside the ear area, the accessory nerve (cranial nerve XI) can be manipulated near the brachial plexus during surgery on the mastoid or cerebellopontine angle. Injuries lead to weakness in the shoulder and pain, which can indirectly promote postural changes and tension in the neck, jaw, and ear area. Interdisciplinary planning in otoneurosurgery minimizes plexus damage. Postoperative physical therapy ensures functional preservation and pain reduction.
The padding of an earpiece is usually made of soft silicone or foam and ensures an optimal fit in the ear canal. It dampens mechanical pressure peaks, prevents pressure points, and increases wearing comfort during prolonged listening. At the same time, the padding influences the acoustic tightness and thus the feedback-free performance and frequency response of the hearing system. Different degrees of hardness and material thicknesses allow individual adaptation to the anatomy of the ear and hearing loss profile. Regular replacement prevents material fatigue and hygiene-related sound changes.
The postauricular and other auricular muscles (mm. auriculares anterior, superior, and posterior) are tiny, often rudimentary muscles around the auricle. In some people, they can move the ear slightly, thereby slightly affecting the position of the otoplasty. Their contraction does not usually play a significant role in hearing, but can be observed in certain reflexes and mimic movements. In rare cases, spasms of these muscles lead to objective tinnitus ("pulsating clicking sound"). EMG measurements of these muscles can reveal muscular causes of tinnitus.
The potential distribution in electrocochleography (ECochG) describes the amplitudes and latencies of cochlear and nerve potentials along the scala tympani. A transtympanic needle electrode or an ear canal electrode is used to measure the summation potential (SP) and action potential (AP). The SP/AP ratio serves as an indicator of endolymphatic hydrops in Meniere's disease. In addition, the distribution of potentials across different stimulation levels shows the functional reserve of the hair cells. ECochG potential patterns help to differentiate between cochlear and retrocochlear pathologies.
The pre-canalicular shape refers to a variant of the outer ear in which the ear canal entrance is particularly narrow or barely covered by the concha. This anatomy can make it difficult to insert ITE hearing aids and increases the risk of cerumen impaction in the cartilaginous canal. When taking impressions, the impression material must be carefully inserted into this area and removed again to ensure complete earmolds. Audiologists often choose open earmold designs for the pre-canal form to optimize ventilation and reduce feedback. Surgical corrections are only indicated in exceptional cases where there are functional problems.
The prevalence of hearing disorders indicates the proportion of people affected in a defined population and varies depending on age, noise exposure, and region. According to the WHO, around 5% of the global population suffers from hearing loss requiring treatment, with this figure rising to over 30% among people aged 65 and over. In industrialized countries, age-related hearing loss (presbycusis) is the most common cause, while infectious causes are more prevalent in developing regions. Prevalence studies form the basis for health planning, care provision, and prevention programs. Long-term data show an increase in age- and noise-induced hearing disorders due to demographic change and environmental factors.
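The prevalence figures above are simple proportions; a minimal sketch with invented cohort numbers chosen purely for illustration, not real epidemiological data:

```python
# Minimal sketch: point prevalence as affected / population. The cohort
# numbers are invented for illustration, not real epidemiological data.

def prevalence(cases: int, population: int) -> float:
    return cases / population

p = prevalence(430, 8600)   # hypothetical screening cohort
print(f"{p:.1%}")           # 5.0%
```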
Presbycusis is age-related sensorineural hearing loss, which typically begins with a decline in high-frequency hearing. It is caused by degeneration of hair cells, synaptic wear and tear, and reduced microvascular perfusion of the cochlea. Symptoms include poorer speech comprehension in noise, reduced loudness perception, and tinnitus. Treatment involves hearing aids with high-frequency amplification and central auditory training to promote neural plasticity. Prevention through noise protection and avoidance of ototoxicity can delay the onset.
Pseudohyperacusis refers to an apparent hypersensitivity to sound in which measurements show normal loudness discomfort levels, but patients perceive loud sounds as painful. It is psychogenic or caused by attention and anxiety disorders and is not attributable to peripheral damage. Objective tests (OAE, AEP) are crucial for differential diagnosis in order to rule out true hyperacusis. Treatment includes education, cognitive behavioral therapy, and gradual desensitization with controlled noise exposure. Interdisciplinary care by audiologists and psychologists improves the prognosis.
Psychophysical methods identify correlations between physical stimulus parameters (level, frequency) and subjective perception (loudness, pitch, masking). Standard procedures include threshold determination (hearing threshold), loudness scaling, and difference threshold measurement (JND tests). Adaptive methods dynamically adjust stimuli to test subjects' responses and optimize measurement duration and accuracy. They form the basis for standard curves, hearing aid calibration, and psychoacoustic modeling. Validity depends on test subjects' attention, the test environment, and the stimulus protocol.
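An adaptive procedure of the kind mentioned above can be sketched as a simple 1-up/1-down staircase that converges on the threshold. The deterministic simulated listener, step size, and reversal count below are illustrative assumptions, not a standardized clinical protocol:

```python
# Hedged sketch of a simple adaptive (1-up/1-down) staircase for threshold
# estimation: the level drops after each "heard" response and rises after
# each miss; the threshold is estimated as the mean of the reversal levels.
# The simulated listener, step size, and reversal count are illustrative.

def staircase(true_threshold_db, start_db=60.0, step_db=5.0, reversals=8):
    level, direction, turn_levels = start_db, -1, []
    while len(turn_levels) < reversals:
        heard = level >= true_threshold_db   # deterministic simulated listener
        new_direction = -1 if heard else +1  # descend after "heard", ascend after a miss
        if new_direction != direction:
            turn_levels.append(level)        # record a reversal
        direction = new_direction
        level += direction * step_db
    return sum(turn_levels) / len(turn_levels)

print(staircase(32.0))  # 32.5 -> close to the simulated 32 dB threshold
```

Real adaptive protocols add response variability, catch trials, and shrinking step sizes, which this sketch omits for brevity.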
Psychoacoustics is the science of sound perception by the human ear and brain. It investigates phenomena such as loudness perception, masking, pitch resolution, and spatial hearing. Psychoacoustic findings are incorporated into the development of hearing aids, audio compression (MP3), and room acoustics design. Methodologically, it combines physical measurements, behavioral studies, and neural modeling. Fields of application range from hearing diagnostics and sound design to tinnitus and hyperacusis therapy procedures.
Q
A Q-band is a narrowly defined frequency interval that is selectively processed by a bandpass or notch filter. In hearing aids, Q-bands are used to specifically amplify or attenuate specific speech or interference frequencies (e.g., tinnitus frequencies). The bandwidth of a Q-band is defined by the Q factor: the higher the Q factor, the narrower the band. Narrowband filters minimize unwanted effects on adjacent frequencies and allow for precise sound shaping. Adaptive hearing systems dynamically adjust Q-bands to changing listening situations to ensure optimal intelligibility.
The Q factor or quality factor of a filter describes the ratio of center frequency to bandwidth and quantifies the sharpness of the resonance. A high Q factor means a narrow bandwidth with steep edges and a pronounced resonance peak, while a low Q factor produces wider, flatter filter bands. In hearing aids, the Q factor is set for bell and notch filters to emphasize speech formants or suppress tinnitus frequencies. However, excessive Q values can cause phase distortion and sound artifacts. Fine-tuning the Q factor is part of hearing aid fitting to optimize naturalness and comfort.
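The defining ratio can be computed directly in either direction. The 4 kHz notch (e.g., on a tinnitus pitch) and the bell-filter values below are illustrative assumptions:

```python
# The defining ratio, Q = f_center / bandwidth, and its inversion.
# All numeric values are illustrative assumptions.

def q_factor(f_center_hz: float, bandwidth_hz: float) -> float:
    return f_center_hz / bandwidth_hz

def bandwidth_hz(f_center_hz: float, q: float) -> float:
    return f_center_hz / q

print(q_factor(4000.0, 200.0))     # 20.0 -> narrow notch
print(bandwidth_hz(1000.0, 1.41))  # ~709 Hz -> broad bell filter
```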
Q-mapping is a method for representing spectral information in which the frequency spectrum is divided into bands with a constant Q factor. Unlike linear or octave-based analyses, Q bands adjust their width proportionally to the center frequency, enabling consistent relative resolution across the entire spectrum. In audiology, Q-mapping allows precise characterization of otoacoustic emissions and masking effects. It is also used in room acoustics to identify resonance modes and room modes. Software-based Q-mapping tools visualize complex spectral data in a clear and interactive way.
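The constant relative resolution described above means band centers are spaced geometrically. A minimal sketch, with the 125 Hz–8 kHz range and 3 bands per octave chosen purely for illustration:

```python
# Sketch: with a constant Q factor, band centers form a geometric series
# (log-spaced frequencies). Range and bands-per-octave are illustrative.

def constant_q_centers(f_start, f_stop, bands_per_octave):
    ratio = 2 ** (1 / bands_per_octave)   # constant center-to-center ratio
    centers, f = [], f_start
    while f <= f_stop * 1.0001:           # small tolerance for float rounding
        centers.append(round(f, 1))
        f *= ratio
    return centers

centers = constant_q_centers(125.0, 8000.0, 3)
print(centers[:4])  # [125.0, 157.5, 198.4, 250.0]
```

Contrast this with linear analysis, where every band has the same absolute width and relative resolution degrades toward high frequencies.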
A Q-peak refers to the maximum resonance point within a narrowband filter or acoustic system. It marks the frequency at which the amplification or attenuation is strongest. In hearing aids, Q-peaks can cause unwanted sound coloration if resonance peaks are not carefully controlled. During filter calibration, the Q-peak is used to identify and reduce problematic resonances (e.g., housing reflections). In room acoustics, the analysis of Q-peaks reveals standing waves and room modes that can be reduced by sound-absorbing measures.
The Q value is a dimensionless parameter that describes the quality or efficiency of an element in various audio engineering contexts. In filter design, it corresponds to the quality factor (see Q factor), and in loudspeaker systems, it corresponds to the ratio of resonance frequency to bandwidth. A high Q value in loudspeakers indicates narrow bass resonances, which can lead to booming effects. In hearing aid development, the Q value is used in the evaluation of microphone designs and amplifier circuits. Consistent Q values are a prerequisite for reproducible sound quality and system stability.
Distressing noises are acoustic stimuli that are perceived as extremely disturbing or painful, such as drilling noises, shrill screeches, or sudden loud impulses. They often exceed the discomfort threshold and can contribute to stress reactions, auditory fatigue, and hyperacusis. In audiotherapy, such noises are specifically used in desensitization programs to gradually raise the tolerance threshold. Environmental and occupational safety guidelines define limit values to minimize distressing noises. Technical measures such as silencers, insulation, and active noise cancellation effectively reduce exposure.
Hearing quality encompasses objective parameters such as hearing threshold, dynamic range, and frequency resolution, as well as subjective aspects such as sound fidelity, comfort, and satisfaction. It is assessed using audiometric tests, questionnaires (e.g., SSQ scale), and everyday observations. High hearing quality enables precise speech comprehension, enjoyment of music, and reliable localization of sound sources. Hearing aid fitting aims to optimize all quality dimensions by fine-tuning filters, compression, and microphone modes. Regular follow-up checks and hearing training ensure sustainable hearing quality.
Cross-sensitivity refers to the influence of signals in adjacent bands on perception in a frequency band, such as masking effects. It occurs when filter-related slope steepness is insufficient and energy "spills over" into adjacent channels. In hearing aid development, filter qualities and slope steepness are selected to minimize crosstalk. Psychoacoustic tests measure masking level differences to determine individual crosstalk patterns. Fitting software takes this data into account to reduce overlap and improve speech intelligibility.
Cross-coupling describes interactions between the auditory and vestibular systems, such as when loud sound stimuli trigger vestibular reflexes. Sound-induced vibrations can stimulate endolymphatic movements and cause nystagmus or nausea ("Tullio phenomenon"). This phenomenon is used in diagnostics to detect labyrinth fistulas or perilymph leaks. Avoiding extreme air pressure or sound peaks reduces unwanted vestibular reactions. Treatment focuses on vestibular rehabilitation to reduce cross-stimulation.
A quiet zone is an acoustically shielded area in which background noise is below the threshold of hearing, often used for sensitive audiometry or OAE measurements. It is achieved through sound insulation, decoupling, and active noise cancellation. In research, a quiet zone creates ideal conditions for precise psychoacoustic experiments. In clinical practice, quiet zones ensure reproducible hearing test results without environmental artifacts. Standards define maximum permissible background noise levels for quiet zones in medical facilities.
In audiology, "quotient" is often used for ratio values, such as SP/AP quotient in ECochG or speech reception quotient in speech intelligibility tests. The SP/AP quotient (summation potential/action potential) serves as a diagnostic marker for endolymphatic hydrops. A speech reception quotient indicates the ratio of correctly understood words to the total number and quantifies speech comprehension. Quotients enable standardized comparisons between patients and measurements. They are an integral part of diagnostic reports and care decisions.
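Both quotients are plain ratios; a sketch with invented example values (not patient data), noting that clinical SP/AP cutoffs vary between laboratories:

```python
# Two audiological quotients as plain ratios. Example values are invented,
# and clinical SP/AP cutoffs vary between laboratories.

def sp_ap_ratio(sp_uv: float, ap_uv: float) -> float:
    """Summation potential / action potential amplitude ratio (ECochG)."""
    return sp_uv / ap_uv

def speech_reception_quotient(correct: int, total: int) -> float:
    """Correctly repeated words divided by the number presented."""
    return correct / total

print(sp_ap_ratio(0.25, 0.5))             # 0.5
print(speech_reception_quotient(17, 20))  # 0.85
```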
R
Radial fibers are afferent nerve fibers that originate from the inner hair cells and run radially to the spiral ganglion in the modiolus. They transmit the primary auditory signals with high precision in terms of temporal and amplitude information to the auditory nerve. Radial fibers have large, myelinated axons that enable fast conduction speeds and are crucial for speech intelligibility. Damage to these fibers, for example due to noise or ototoxins, leads to hidden hearing loss despite normal hearing thresholds. Research approaches aim to protect or regenerate radial fibers in order to compensate for synaptic wear and tear.
Raphael's ligaments (ligamenta spiralia interni) are fine connective tissue structures in the modiolar region of the cochlea that stabilize nerve fibers and blood vessels. They run radially between the modiolus and the basilar membrane and support the spatial arrangement of the afferent and efferent fibers. Their integrity is important for undistorted signal transmission and nutrient supply to the hair cells. Histological studies show that aging processes and inflammation can contribute to degeneration of Raphael's ligaments. A better understanding of this structure could open up new therapeutic approaches for sensorineural hearing loss.
The noise floor is the lowest constant background noise of an electronic or acoustic system in the absence of an input signal. In hearing aids, it defines the lower limit of amplification, as otherwise quiet ambient noises would be masked by the inherent noise. A low noise floor is desirable in order to make weak signals clearly audible without the user perceiving a constant hum. Technical measures such as noise reduction algorithms and high-quality components reduce the noise floor. Audiological measurements document the noise floor during calibration and quality control of hearing systems.
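A noise floor is usually reported on a decibel scale. A sketch of the conversion from an RMS sound pressure to dB SPL relative to the standard 20 µPa reference; the example pressures are illustrative:

```python
import math

# Sketch: reporting a noise floor in dB SPL from an RMS sound pressure,
# relative to the standard 20 uPa reference. Example pressures are
# illustrative, not measured values.

P_REF_PA = 20e-6  # 20 uPa reference sound pressure

def spl_db(p_rms_pa: float) -> float:
    return 20 * math.log10(p_rms_pa / P_REF_PA)

print(spl_db(20e-6))        # 0.0 dB SPL, the reference itself
print(round(spl_db(2e-3)))  # 40 dB SPL
```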
Reafferent signals are sensory feedback signals that arise during speech when sound reaches the ear via air and bone conduction. They enable self-monitoring of volume, pitch, and articulation and control the phonatory reflex. Hearing loss or masking of these signals impairs speech modulation, resulting in a voice that is too loud or too soft. In cochlear implant users, reafferentation is partially restored through direct electrical stimulation. Research is investigating how enhanced reafferent feedback can improve speech therapy outcomes.
Recruitment refers to a pathologically altered perception of loudness in which loud sounds are suddenly perceived as much louder, while soft sounds are not heard. This effect occurs in sensorineural hearing loss when the compression properties of the cochlea are disturbed. Clinically, recruitment is measured using loudness scaling tests and Békésy audiometry. In hearing aids, recruitment is compensated for by customized compression algorithms to improve comfort and intelligibility. Without compensation, those affected perceive loud noises as unpleasant or painful.
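The compression used against recruitment can be sketched as a static input-output rule: below a knee point the gain is linear, and above it the output grows by only 1/ratio dB per input dB. The knee point and compression ratio below are illustrative assumptions, not fitting-formula values:

```python
# Hedged sketch of wide dynamic range compression (WDRC) as used to
# counteract recruitment: linear up to a knee point, then only 1/ratio dB
# of output growth per input dB. Knee and ratio are illustrative.

def wdrc_output_db(input_db: float, knee_db: float = 45.0, ratio: float = 3.0) -> float:
    if input_db <= knee_db:
        return input_db                           # linear below the knee
    return knee_db + (input_db - knee_db) / ratio # compressed above it

print(wdrc_output_db(40.0))  # 40.0 -> unchanged
print(wdrc_output_db(75.0))  # 55.0 -> 30 dB above the knee squeezed into 10 dB
```

Real hearing aids apply such curves per frequency channel with attack and release time constants, which this static sketch omits.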
Reflex audiometry measures acoustically evoked muscle reflexes in the middle ear (stapedius reflex) and facial nerve area to test the function of the middle ear and brainstem pathways. A test stimulus (tone, broadband noise) triggers an impedance change, which is recorded using tympanometry. Reflex threshold and latency provide information about sound conduction, nerve integrity, and central processing. Asymmetric or absent reflexes indicate otosclerosis, nerve lesion, or central disorders. Reflex audiometry supplements tone and speech audiometry with objective diagnostic data.
The ear control loop describes feedback loops between the hearing system, brain, and feedback mechanisms such as the stapedius reflex. It regulates amplification, protective reflexes, and phonatory adjustments to maintain homeostasis in the auditory system. Disruptions in the control loop lead to hyperacusis, tinnitus, or poor volume control. Computer models of the control loop support the development of adaptive hearing aid technologies. Understanding the dynamics of the control loop is crucial for targeted therapies and rehabilitation strategies.
The stimulus threshold is the minimum stimulus level (sound pressure, voltage) that triggers a measurable physiological or psychological response. In audiology, it corresponds to the hearing threshold, defined for each frequency in the audiogram. In evoked potential measurements (ABR, ECochG), the stimulus threshold is also referred to as the lowest level that still generates a signal. Stimulus thresholds are basic data for fitting decisions and fitting algorithms in hearing aids. Changes over time document progression or therapy effects.
Auditory stimulus transmission involves mechanical, chemical, and electrical processes from the outer ear to the auditory cortex. Sound waves are transmitted via the eardrum and ossicular chain to the cochlear fluids, where hair cells generate electrochemical signals. Afferent nerve fibers transmit action potentials via brainstem stations to the cortex. Each relay extracts specific features such as time or level differences. Disruptions along the chain lead to different forms of hearing loss and processing deficits.
Resonance is the amplification of vibrations when the excitation frequency matches the natural frequency of a system. In the ear, resonances in the ear canal and cavum conchae cause certain speech frequencies around 2–4 kHz to be emphasized. Technical resonators such as Helmholtz filters in hearing aids use the same principle to shape sound spectra. Excessive resonance can lead to sound coloration and feedback. Room acoustic resonances (room modes) are controlled by damping and diffusers.
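The 2–4 kHz emphasis mentioned above can be approximated by treating the ear canal as a tube closed at the eardrum; a minimal sketch, assuming a typical canal length of about 2.5 cm (the exact value varies between individuals):

```python
# Quarter-wave resonance of a tube closed at one end (ear canal model).
# Assumed values: speed of sound c = 343 m/s, canal length L = 0.025 m.
def quarter_wave_resonance(c=343.0, length=0.025):
    """First resonance frequency f = c / (4 * L) of a closed tube."""
    return c / (4.0 * length)

f = quarter_wave_resonance()
print(f"Ear canal resonance ≈ {f:.0f} Hz")  # ≈ 3430 Hz, within the 2–4 kHz range
```

The result falls squarely in the speech-relevant band, which is why unamplified hearing already emphasizes these frequencies.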
Residual hearing refers to the remaining, still usable hearing in cases of hearing loss and is defined in the audiogram as the difference between the hearing threshold and the comfort threshold. It determines which signal components can be perceived without amplification and which must be supplemented by a hearing aid. Greater residual hearing improves speech comprehension and facilitates hearing aid acceptance. Measurements of residual hearing are taken into account when selecting compression parameters and amplification limits. Changes in residual hearing over time indicate progression or therapeutic success.
A retrocochlear lesion affects the auditory system beyond the cochlea, usually in the area of the vestibulocochlear nerve or higher up in the brainstem. It leads to central auditory pathway disorders, which can manifest in objective tests (e.g., ABR latency prolongation) and in speech comprehension tests. Those affected often have discordant findings, such as a normal otoacoustic emission pattern but disturbed evoked potentials. Causes include acoustic neuromas, multiple sclerosis, or vascular infarcts. Diagnosis requires imaging techniques such as MRI for localization and follow-up.
The retrolabyrinthine space, also known as the vestibular space, lies behind the bony labyrinth and encompasses cranial nerves, vessels, and connective tissue between the labyrinth and the cerebellopontine angle. It is clinically relevant in cases of tumors (e.g., acoustic neuroma) and inflammatory processes that cause vertigo and hearing loss. Surgery in this area requires careful intraoperative monitoring of auditory brainstem responses. Anatomical knowledge of the retrolabyrinthine space is essential for access in otoneurosurgery. Postoperative imaging checks the completeness of resection and complications.
The receptors in the inner ear are the inner and outer hair cells on the basilar membrane of the organ of Corti. They convert mechanical vibrations into electrochemical signals by stereocilia opening mechanosensitive ion channels. Inner hair cells primarily encode acoustic information, while outer hair cells implement the cochlear amplifier through active feedback. Damage to these receptors, for example due to noise or ototoxins, leads to sensorineural hearing loss and reduced frequency resolution. Research aims to regenerate receptors through gene therapy or stem cells.
Reciprocal inhibition in the stapedius reflex describes the neurological counteraction whereby activation of the stapedius muscle inhibits contraction of the tensor tympani muscle. This reciprocal inhibition optimizes middle ear mechanics by preventing excessive damping and enabling reflex adaptation to different types of noise. An intact reciprocal mechanism ensures balanced protective reflexes in response to impulsive and continuous sound. Pathological disturbances of reciprocal inhibition can lead to reduced reflex amplitude and increased noise sensitivity. The examination is performed using combined reflex audiometry and EMG measurements.
In the superior olive complex in the brainstem, there are reciprocal connections between the left and right nuclear areas, which exchange interaural time and level information contralaterally. This networking enables binaural processing and precise localization of sound sources. Each half of the nucleus inhibits the opposite side depending on the level difference in order to achieve contrast enhancement. Reciprocal connections are fundamental for functions such as the binaural masking advantage. Lesions in this network lead to central auditory processing disorders and poorer directional perception.
A directional microphone is a type of microphone that primarily picks up sound from a specific direction—usually from the front—and attenuates noise coming from the sides or rear. In modern hearing aids, it improves the signal-to-noise ratio by reducing background noise from other directions. Different directional characteristics (cardioid, supercardioid, omnidirectional) allow adaptation to specific listening situations. Adaptive systems automatically switch between directional and omnidirectional modes depending on the ambient noise. Directional microphones improve speech comprehension, especially in noisy environments.
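First-order directional characteristics can be described by a simple polar equation; a sketch, where the supercardioid coefficient (0.37) is a typical textbook value, not taken from this glossary:

```python
import math

def polar_sensitivity(theta_deg, pattern="cardioid"):
    """Relative sensitivity s(θ) = a + (1 - a)·cos(θ) for first-order patterns."""
    a = {"omnidirectional": 1.0, "cardioid": 0.5, "supercardioid": 0.37}[pattern]
    return a + (1.0 - a) * math.cos(math.radians(theta_deg))

print(polar_sensitivity(0))              # 1.0 — full sensitivity to the front
print(polar_sensitivity(180))            # 0.0 — cardioid null to the rear
print(round(polar_sensitivity(90), 2))   # 0.5 — sound from the side is attenuated
```

An omnidirectional setting (a = 1) makes the cosine term vanish, which is exactly what adaptive systems switch back to in quiet surroundings.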
The Rinne test is a clinical hearing test in which a tuning fork is held alternately on the mastoid (bone conduction) and in front of the ear (air conduction). A positive Rinne result (air conduction better than bone conduction) indicates normal hearing or sensorineural hearing loss. A negative result indicates conductive hearing loss in the tested ear. The test can be performed quickly and is used to make an initial differentiation between conductive and sensorineural hearing loss. The Weber test is also used to check for lateralization.
The tube sound, also known as tympanic sound, is a hollow or tube-like sound pattern that the patient perceives during ear irrigation or when the eardrum is thin. It occurs when sound waves in the air-filled middle ear are modulated by fluid. Clinically, hearing this phenomenon helps to diagnose effusion or eardrum perforation. Specific test tones can reproduce the tube sound in audiometry. Therapeutically, tympanic sound symptoms are addressed with targeted otitis treatment or tympanostomy tube insertion.
Acoustic feedback occurs when the signal emitted by the loudspeaker is picked up again by the microphone and amplified once more, resulting in a feedback loop with whistling or humming. In hearing aids and public address systems, feedback is suppressed by adaptive algorithms, tight-fitting earmolds, or directional microphones. Mechanical measures such as sealing and microphone placement minimize the risk of feedback. Uncontrolled feedback can severely impair listening comfort and speech comprehension. Modern systems detect feedback early and adjust filters in real time.
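Adaptive feedback suppression typically estimates the loudspeaker-to-microphone path and subtracts the predicted feedback from the microphone signal. A toy NLMS sketch (the 4-tap feedback path, step size, and signal lengths are invented for illustration; real systems also handle an external input signal):

```python
import numpy as np

rng = np.random.default_rng(0)
true_path = np.array([0.0, 0.3, -0.2, 0.1])   # assumed toy feedback path
x = rng.standard_normal(2000)                 # loudspeaker (receiver) signal
# Microphone picks up only the fed-back signal (external input omitted).
mic = np.convolve(x, true_path)[:len(x)]

w = np.zeros(4)                               # adaptive estimate of the path
mu = 0.05                                     # NLMS step size
for n in range(4, len(x)):
    xv = x[n-3:n+1][::-1]                     # last 4 loudspeaker samples
    y_hat = w @ xv                            # predicted feedback
    e = mic[n] - y_hat                        # cancelled microphone signal
    w += mu * e * xv / (xv @ xv + 1e-8)       # normalized LMS update

print(np.round(w, 2))                         # ≈ true_path after convergence
```

Once `w` matches the feedback path, the residual `e` no longer re-enters the amplification loop, which is what prevents the whistling described above.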
The round window is a flexible membrane opening at the end of the scala tympani that allows pressure relief when the oval window is mechanically stimulated by the stapes. It ensures that the fluid volume in the cochlea remains constant and enables traveling waves on the basilar membrane. Injuries or stiffening of the round window, for example due to surgery or trauma, lead to sound conduction problems and can trigger a perilymph fistula. Clinically, the round window is used as an access point for cochlear implants. Pathological changes can be detected in CT scans and by tympanogram analysis.
The round window membrane is a thin, gelatinous membrane that closes the round window and provides mechanical flexibility. It transmits pressure fluctuations from the scala tympani to the perilymph and acts as a passive relief valve. Its elasticity and thickness vary along the membrane and influence impedance adaptation. Damage leads to perilymph loss, vertigo, and hearing loss. In microsurgical procedures, the membrane is reconstructed with filling materials in cases of perilymph fistula to restore tightness.
S
Sound is a mechanical wave consisting of pressure and density fluctuations that propagates in elastic media such as air or liquid. The frequency and amplitude of these waves determine the pitch and volume that the human ear perceives via mechanical and neural transduction. Sound is used in audiology to test hearing ability (audiometry) and calibrate hearing aids. Excessive sound pressure levels can cause damage to hair cells and noise-induced hearing loss. Technical applications range from ultrasound diagnostics to room acoustics and noise protection.
Sound absorption is the conversion of sound energy into heat when it hits absorbent materials. Absorbers such as mineral wool or acoustic foam reduce reverberation time and reflections in rooms. The degree of absorption is measured using the absorption coefficient α (0–1) per frequency. In listening rooms and sound reinforcement systems, targeted absorption ensures better speech intelligibility. Measurements are taken in anechoic chambers or using impulse response analysis on site.
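The effect of added absorption on reverberation is often estimated with Sabine's formula; a minimal sketch with invented room dimensions and absorption coefficients:

```python
def sabine_rt60(volume_m3, surfaces):
    """Sabine estimate RT60 = 0.161 * V / A, where A = Σ (area * α)."""
    A = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / A

# Toy room: 100 m³; 80 m² of walls at α = 0.1, 20 m² of absorber panels at α = 0.8
print(round(sabine_rt60(100, [(80, 0.1), (20, 0.8)]), 2))  # ≈ 0.67 s
```

Replacing the absorber panels with more bare wall would shrink A and lengthen the reverberation time accordingly.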
Sound adaptation refers to the adjustment of the auditory system to sustained stimuli, whereby the perception of continuous sound decreases over time. It protects against sensory overload and enables focus on new signals. Adaptation effects manifest themselves in shifted loudness perceptions and altered hearing thresholds for continuous tones. In hearing aid technology, adaptation characteristics are taken into account in compression algorithms in order to maintain natural sound. Disturbances in adaptation can lead to hyperacusis or auditory fatigue.
Sound propagation describes how sound waves propagate in a medium, influenced by velocity, attenuation, and reflection. In air, the speed of sound is approximately 343 m/s at 20 °C. Propagation laws (inverse square law) explain level decay with distance. Room geometry, absorption, and diffusion shape the sound field and influence reverberation and early reflections. Sound propagation models form the basis for sound reinforcement planning, noise protection, and acoustic simulations.
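The inverse square law mentioned above can be sketched directly: each doubling of distance from a point source in the free field costs about 6 dB.

```python
import math

def level_at_distance(level_ref_db, r_ref_m, r_m):
    """Free-field point source: level drops by 20*log10(r/r_ref) dB."""
    return level_ref_db - 20.0 * math.log10(r_m / r_ref_m)

print(round(level_at_distance(94.0, 1.0, 2.0), 1))  # 88.0 — 6 dB per doubling
print(round(level_at_distance(94.0, 1.0, 4.0), 1))  # 82.0
```

Reflections and absorption in real rooms make the measured decay deviate from this idealized free-field curve.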
Sound pressure is the local pressure change relative to static atmospheric pressure, measured in pascals (Pa). It is the physical quantity that causes the eardrum to vibrate, enabling hearing. The audible range extends from approximately 20 µPa (0 dB SPL) to around 20 Pa (120 dB SPL), near the threshold of pain. Sound pressure measurements are central to audiometry, room acoustics, and noise measurement. Microphones and artificial ears are used to calibrate sound pressure for precise measurements and standards compliance.
The sound pressure level (SPL) is the logarithmic representation of sound pressure in decibels: 20·log₁₀(p/p₀), with reference p₀ = 20 µPa. It forms the basis for dB(A) and dB(C) ratings in environmental and occupational safety. SPL measuring devices show real-time levels and time curves to document noise exposure. In hearing aid fitting, amplification is adjusted to the expected SPL in everyday situations. Levels above 85 dB(A) are considered harmful to health during prolonged exposure.
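The conversion between pascals and dB SPL follows directly from the formula above; a minimal sketch:

```python
import math

P0 = 20e-6  # reference sound pressure, 20 µPa

def pascals_to_spl(p):
    """Sound pressure level L = 20 * log10(p / p0) in dB SPL."""
    return 20.0 * math.log10(p / P0)

def spl_to_pascals(level_db):
    """Inverse: p = p0 * 10^(L / 20)."""
    return P0 * 10.0 ** (level_db / 20.0)

print(pascals_to_spl(20e-6))           # 0.0 — threshold of hearing
print(round(pascals_to_spl(0.02), 1))  # 60.0 — roughly conversational speech
print(round(spl_to_pascals(94.0), 3))  # ≈ 1 Pa, a common calibrator level
```

The logarithmic scale compresses the enormous range of audible pressures into manageable numbers.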
Sound conduction is the mechanical transmission of sound energy through the middle ear, i.e., the eardrum and ossicular chain. It converts airborne sound into fluid movements in the cochlea and overcomes the impedance difference. The impedance-matching gain of the middle ear amounts to approximately 30 dB. Disorders such as perforation or otosclerosis reduce conduction and cause conductive hearing loss. Conduction properties are examined using tympanometry and bone conduction audiometry.
Acoustic emissions are sound waves generated by an object or organ itself, e.g., otoacoustic emissions from the cochlea. They serve as non-invasive diagnostic signals for hair cell function and system integrity. In the quality control of acoustic devices, unwanted emissions are checked as an indicator of mechanical faults. Emission spectra help to detect resonances and leaks in housings. Measurement methods require high sensitivity and a soundproof environment.
A sound field is the spatial distribution of sound pressure and particle motion in a room. A distinction is made between free field, diffuse field, and near field, depending on the reflection and distance characteristics. Sound fields are analyzed in test chambers and lecture halls to optimize acoustic parameters such as SPL distribution and reverberation time. In audiometry, sound fields are measured to ensure standardized test conditions. Simulation tools calculate sound fields for sound system design and noise protection.
Sound frequency is the number of vibration cycles per second, measured in hertz (Hz). It determines the pitch that the human ear perceives between approximately 20 Hz and 20 kHz. Frequency analysis is central to audiometry, OAE and EEG measurements, and hearing aid design. The cochlea and auditory cortex are organized tonotopically, with each frequency having a specific location for processing. The frequency response of devices and rooms is measured to ensure sound neutrality or targeted filtering.
A sound indicator is a key figure or graphic that summarizes sound exposure or acoustic parameters such as SPL, noise exposure, or reverberation time. Examples include daily noise level Lday or Speech Transmission Index (STI). Indicators serve as a basis for decision-making regarding noise protection measures and room acoustics optimization. In sound reinforcement systems, a real-time indicator provides an overview of critical frequencies and levels. Standards define threshold values for various indicators to ensure health and intelligibility.
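Indicators such as Leq are energetic, not arithmetic, averages of level values; a minimal sketch with invented hourly levels, showing how a single loud hour dominates the result:

```python
import math

def leq(levels_db):
    """Equivalent continuous level: energetic mean of dB values."""
    mean_energy = sum(10 ** (L / 10) for L in levels_db) / len(levels_db)
    return 10 * math.log10(mean_energy)

# Eight hours at 70 dB plus one hour at 100 dB
print(round(leq([70] * 8 + [100]), 1))  # ≈ 90.5 — far above the arithmetic mean
```

The arithmetic mean of the same values would be only about 73 dB, which is why noise regulations rely on energetic averaging.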
Sound intensity is the sound energy transported per unit area and is measured in watts per square meter (W/m²). It objectively describes how much acoustic power hits a surface and correlates with the perceived loudness. In noise measurement, intensity is used to calculate levels and exposure values according to standards such as ISO 9612. Clinically, it helps to determine exposure limits for hearing protection. Weaker intensities require higher amplification by hearing systems, while strong intensities can trigger reflex protection.
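For a plane wave, intensity follows from sound pressure and the medium's characteristic impedance; a sketch, assuming air at roughly 20 °C (ρ ≈ 1.204 kg/m³, c ≈ 343 m/s):

```python
import math

def intensity_from_pressure(p_rms, rho=1.204, c=343.0):
    """Plane-wave intensity I = p² / (ρ·c), in W/m²."""
    return p_rms ** 2 / (rho * c)

I0 = 1e-12                             # reference intensity, W/m²
I = intensity_from_pressure(20e-6)     # at the hearing threshold pressure
print(I)                               # ≈ 1e-12 W/m²
print(round(10 * math.log10(I / I0), 1))  # intensity level ≈ 0 dB
```

The fact that 20 µPa maps to roughly the reference intensity of 10⁻¹² W/m² is why the pressure and intensity level scales coincide in air.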
Sound conduction refers to the air- or bone-conducted path through which sound reaches the inner ear. In air conduction, sound is transmitted through the ear canal, eardrum, and ossicular chain; in bone conduction, it is transmitted directly to the cochlea through skull vibrations. Comparing air and bone conduction thresholds in the audiogram allows differentiation between conductive and sensorineural hearing loss. Conductive pathologies such as eardrum perforations typically elevate the air-conduction thresholds while bone conduction remains normal. The efficiency of both pathways forms the basis for hearing solutions, such as bone conduction hearing systems.
Conductive hearing loss occurs when the transmission of sound through air or bone conduction to the inner ear is impaired. Causes include cerumen impaction, tympanic membrane perforation, otosclerosis, or middle ear infections. On an audiogram, it shows up as a spread between normal bone conduction thresholds and increased air conduction thresholds. Treatment options include surgical reconstruction (myringoplasty), removal of obstructions, or bone conduction hearing aids. The prognosis is usually good, as the sensory function in the inner ear remains intact.
Sound localization is the ability to determine the direction of a sound source in space. The brain uses interaural time and level differences (ITD, ILD) as well as spectral filter effects through the outer ears. Precise directional hearing increases safety in everyday life and supports communication in noisy environments. Hearing aids with binaural networking preserve these cues by processing signals from both ears synchronously. Tests in an anechoic chamber quantify localization accuracy and help to identify central processing disorders.
Sound masking describes the effect whereby a loud sound prevents the perception of a simultaneous, quieter sound of the same or a similar frequency. It is used psychoacoustically to prevent cross-hearing in audiometry and to provide deliberate tinnitus maskers in hearing aids. The masking level difference phenomenon shows how binaural processing reduces masking. Compression algorithms take masking into account to make speech signals optimally audible in the presence of background noise. However, incorrectly set masking can unintentionally cover up parts of speech.
The sound level is the logarithmic representation of sound pressure in decibels (dB SPL) and describes the perceived loudness. It is calculated using 20·log₁₀(p/p₀) with reference p₀ = 20 µPa. In noise protection practice, frequency-weighted levels (dB(A), dB(C)) are used for different assessment purposes. Level meters with integration modes record time sequences (Leq, Lmax, Lmin). When fitting hearing aids, audiologists adjust the amplification to typical sound levels in everyday life.
Sound reflection occurs when sound waves are reflected back at an interface (e.g., wall, floor). Reflections determine the spatial sound image and influence reverberation time and early reflections. In room acoustics, absorbers, diffusers, and resonators are used to control reflection patterns and optimize speech intelligibility. Excessive reflections lead to echoes and sound blurring, while too few make the room seem dead. Measurements of the impulse response allow the visualization of reflection times and intensities.
Sound insulation encompasses measures to reduce harmful or disruptive noise in the environment, at work, and at home. Technical solutions range from noise barriers and absorbers to soundproof windows and in-ear hearing protection. Standards for compliance with sound insulation classes (see below) apply in public buildings. Personal protection such as earplugs prevents noise damage in the workplace and during leisure activities. The planning and simulation of sound insulation measures use sound propagation models for effective implementation.
Sound insulation classes (e.g., DIN 4109 classes) classify components such as walls, windows, or doors according to their sound insulation index (Rw) in levels. Each class defines minimum sound insulation requirements in order to comply with legal requirements for living and working spaces. Higher classes (e.g., 4–5) are mandatory in noisy areas to ensure quiet and communication conditions. Sound insulation classes help architects and acousticians with material selection and construction. Laboratory measurements and construction site tests verify compliance with the specified values.
Noise protection regulations are legal frameworks at state or federal level that specify permissible noise levels for residential, commercial, and industrial areas. They define nighttime and daytime limits (e.g., Lden, Lnight) and require local authorities to implement noise action plans. Violations can result in fines, and affected citizens are entitled to noise protection measures. Manufacturers and planners of infrastructure facilities must carry out environmental impact assessments with noise evaluations. Regulations ensure long-term quality of life and living.
The term sound temperature refers to the equivalent temperature at which the average kinetic energy of acoustic particle movements corresponds to that of a thermal noise signal. In thermoacoustics, it is used to describe the inherent noise of electronic circuits. Lower effective sound temperatures are desirable for sensitive measurement microphones and microphone preamplifiers. It influences the signal-to-noise ratio in OAE and AEP measurements. Technical noise reduction and shielding lower the effective sound temperature.
Sound transmission describes the transmission of sound through walls, ceilings, or other building structures. It is quantified by transmission loss (TL) in dB, which indicates how much the level on the receiving side is reduced. Material thickness, density, and stiffness determine the transmission properties. In building acoustics, suspended ceilings and soundproof walls are designed to minimize the transmission of noise between rooms. Measurements in the laboratory and on site verify the design values.
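Transmission loss of single-leaf walls is often first estimated with the empirical mass law; a sketch (the −47 dB constant is a commonly quoted diffuse-field approximation; real constructions deviate, for instance near the coincidence frequency):

```python
import math

def mass_law_tl(surface_mass_kg_m2, freq_hz):
    """Diffuse-field mass law estimate: TL ≈ 20*log10(m·f) − 47 dB."""
    return 20.0 * math.log10(surface_mass_kg_m2 * freq_hz) - 47.0

print(round(mass_law_tl(10, 500), 1))  # 10 kg/m² wall at 500 Hz → ≈ 27 dB
print(round(mass_law_tl(20, 500), 1))  # doubling the mass adds ≈ 6 dB
```

The same +6 dB applies per doubling of frequency, which is why heavy walls perform much better at high frequencies than at low ones.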
A first-order sound wave is a spherical wave that propagates undisturbed in all directions from a point source. Its sound pressure level follows the inverse square law (6 dB level drop per doubling of distance). This ideal type is assumed in free-field measurements when reflections are negligible. In practice, ideal spherical spreading is only approximated in the far field of small sources and in anechoic chambers. It forms the basis for calibrations of sound sources and level meters.
Sound waves are longitudinal mechanical waves in which particles are excited to vibrate along the direction of propagation. They consist of compressive and rarefactive zones, whose periodicity defines the frequency. They are characterized by parameters such as wavelength, frequency, amplitude, and phase. In audiology, sound waves are used both as test stimuli (tones, noise) and for diagnosis (impulse response, OAE). Technical applications range from ultrasound imaging to acoustic sensor systems.
Acoustic impedance is the product of density and sound velocity in a medium and describes how much it impedes sound transmission. It determines which part of a sound wave is reflected or transmitted at an interface. Impedance differences between air and ear fluid are overcome in the middle ear by means of the ossicular chain. Deviations in acoustic impedance, e.g., due to fluid in the middle ear, alter the tympanogram curve. In hearing technology, impedance matching is used to optimally couple loudspeakers and microphones.
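The air-to-fluid mismatch can be quantified with the power reflection coefficient at a planar interface; a sketch using approximate characteristic impedances for air and water (the latter standing in for the cochlear fluids):

```python
import math

def power_reflectance(z1, z2):
    """Fraction of incident sound power reflected: R = ((z2−z1)/(z2+z1))²."""
    r = (z2 - z1) / (z2 + z1)
    return r * r

z_air = 413.0      # ρ·c of air at ~20 °C, approx. (rayl)
z_water = 1.48e6   # ρ·c of water, approx. (rayl)
R = power_reflectance(z_air, z_water)
print(round(R, 4))                       # ≈ 0.9989 — almost total reflection
print(round(10 * math.log10(1 - R), 1))  # transmitted fraction ≈ −30 dB
```

This roughly 30 dB transmission deficit is exactly what the lever action and area ratio of the ossicular chain compensate for.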
Objective audiometry is a measurement method in which acoustic or electrical stimuli are applied to the ear and the resulting evoked responses (OAE, AEP) are recorded. It enables the estimation of hearing thresholds without the active cooperation of the patient. In infant hearing screening, it is used as an automated ABR procedure. The analysis of waveform and latency allows conclusions to be drawn about peripheral and central auditory pathway function. It complements tone and speech audiometric diagnostics, especially in uncooperative patients.
Narrowband noise is noise whose spectrum is limited to a narrow frequency band, typically used to mask or test specific frequency ranges. In audiometry, it serves as a masker for determining air and bone conduction thresholds when there is a risk of cross-hearing. In psychoacoustics, narrowband noise is used to investigate masking effects and critical bandwidths. In hearing aids, adaptive narrowband filters can suppress noise in defined bands. Narrowband noise helps to test frequency selectivity and channel separation.
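Narrowband noise can be generated by zeroing the FFT bins of white noise outside the target band; a sketch (sample rate, duration, and band edges are invented for illustration):

```python
import numpy as np

def narrowband_noise(fs, duration_s, f_lo, f_hi, seed=0):
    """White noise restricted to [f_lo, f_hi] Hz via FFT-domain masking."""
    rng = np.random.default_rng(seed)
    n = int(fs * duration_s)
    spectrum = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spectrum[(freqs < f_lo) | (freqs > f_hi)] = 0.0  # remove out-of-band energy
    return np.fft.irfft(spectrum, n)

# A masker centered on 1 kHz with a 200 Hz bandwidth
noise = narrowband_noise(44100, 1.0, 900, 1100)
```

In a real audiometer, the masker level would additionally be calibrated in dB relative to the test tone; this sketch only shows the spectral shaping.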
The cochlea is the spiral-shaped inner ear organ in which sound is transduced into nerve impulses. Hair cells are located on the basilar membrane, which encode sounds of different frequencies depending on the location of the deflection (tonotopy). The fluid movements in the scala vestibuli and tympani activate the hair cells and generate electrical signals. These signals travel via the auditory nerve to the cortex, where they are perceived as sounds and speech. Diseases of the cochlea lead to sensorineural hearing loss and are an indication for cochlear implants.
The shoulder-head reflex is a vestibulospinal reflex in which head movements involuntarily trigger a counter-movement of the shoulder muscles in order to maintain balance and stability. It is initiated by vestibular receptors in the semicircular canals and otolith organs. Disorders of this reflex manifest themselves in unsteady gait and postural instability. Clinically, it is tested as part of the neurological examination of patients with vertigo. Vestibular training can rehabilitate the reflex in cases of lesions.
Hearing loss refers to a reduction in hearing that impairs everyday life and communication. It is classified as mild, moderate, severe, or profound, based on the shift in the hearing threshold on the audiogram. There are many causes: conductive, sensorineural, or combined forms. Treatment includes medical, surgical, and technical measures such as hearing aids or implants. Early detection and continuous care improve speech development and quality of life.
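Severity grading is based on the pure-tone average (PTA); a sketch using one commonly cited (WHO-style) set of cut-offs — the exact boundaries and the frequencies averaged vary between guidelines:

```python
def pta(thresholds_db_hl):
    """Pure-tone average over the 0.5, 1, 2 and 4 kHz thresholds (dB HL)."""
    return sum(thresholds_db_hl) / len(thresholds_db_hl)

def grade(pta_db):
    # Cut-offs are one widely used scheme, assumed here for illustration.
    if pta_db <= 25: return "normal"
    if pta_db <= 40: return "mild"
    if pta_db <= 60: return "moderate"
    if pta_db <= 80: return "severe"
    return "profound"

print(grade(pta([40, 45, 55, 60])))  # moderate (PTA = 50 dB HL)
```

Because a single average hides the audiogram's shape, the grade is always interpreted together with the full frequency-specific thresholds.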
Sensorineural hearing loss is caused by damage to hair cells, the auditory nerve, or central auditory pathways. It manifests as elevated air and bone conduction thresholds in the audiogram without an air-bone gap. Causes include noise trauma, age, ototoxins, or genetic defects. Technical treatment involves hearing aids or cochlear implants, while rehabilitative measures include auditory training. Sensorineural loss is usually permanent, as hair cells do not regenerate in humans.
Speech audiometry tests speech comprehension by presenting words or sentences at a defined sound pressure level or signal-to-noise ratio. Results are given as a percentage of correctly understood words or as the speech reception threshold (SRT). They supplement tone audiograms with functional aspects of everyday hearing. Test environments can be free field or headphones; masking ensures ear separation. Speech audiometry is crucial for fine-tuning hearing aids and proving their effectiveness.
Speech comprehension is the ability to recognize spoken language and process it semantically. It depends on peripheral hearing function, central processing, and cognitive abilities. Disorders can occur despite normal hearing thresholds, e.g., in cases of central auditory processing disorders. Measurement is performed using standardized tests (e.g., the Freiburg Monosyllable Test) in quiet and noisy environments. Hearing aids and implants aim to maximize speech comprehension in real-life situations.
The stapedius reflex is the contraction of the stapedius muscle in response to loud stimuli, which stiffens the ossicular chain and protects the inner ear. It can be measured in reflex audiometry via impedance changes. The reflex threshold and latency provide information about middle ear function and brain stem integrity. A missing or asymmetrical reflex indicates otosclerosis, nerve lesion, or central disorder. The reflex contributes to the attenuation of impulsive sound peaks.
The stapes is the smallest bone in the human body and the third link in the ossicular chain. It transmits vibrations from the incus to the oval window of the cochlea. Its leverage effect amplifies sound pressure by approximately 1.3 times. In otosclerosis, the attachment region of the stapes often ossifies, causing conductive hearing loss. In stapedotomy surgery, part of the stapes is removed and replaced with a prosthesis to restore sound transmission.
Silence refers to the absence of perceptible sound sources and is used in audiometry as a test condition for threshold determination. A true silent room achieves background levels below 20 dB SPL and minimizes background noise. Silence is necessary for objective measurements such as OAE and AEP detection. Psychoacoustically, absolute silence leads to increased perception of internal noises such as tinnitus. In tinnitus therapy, controlled silence is used as a contrast stimulus to promote habituation.
Noise is any unwanted sound that interferes with the understanding of useful signals such as speech. Its characteristics include level, frequency spectrum, and temporal structure. Noise reduction algorithms and directional microphones are used in hearing aids to reduce noise. Masking studies investigate how noise affects speech comprehension. Optimal signal-to-noise ratios are crucial for hearing comfort and communication ability.
Subjective tinnitus is an auditory perception without an external sound source that only the affected person can hear. It is caused by spontaneous neural activity in the cochlea or central auditory pathways. Common accompanying symptoms include sleep disorders, concentration problems, and psychological stress. Treatment includes sound enrichment, cognitive behavioral therapy, and tinnitus retraining. Objective measurements are not possible; progression is documented using questionnaires and loudness matches.
T
The telecoil is a coil in the hearing aid that receives electromagnetic signals from induction loop systems (e.g., in theaters or churches) and feeds them directly into the hearing system. It bypasses microphones and significantly improves the signal-to-noise ratio by filtering out ambient noise. The telecoil is activated manually or automatically, depending on the hearing aid model. Induction loops built to the relevant standard (IEC 60118-4) generate a nominal magnetic field strength of 100 mA/m, to which the telecoil is matched. The telecoil is essential for barrier-free communication in public facilities.
Daily hearing fluctuations describe natural changes in hearing threshold or tinnitus level throughout the day. They result from circadian rhythms, hormone levels, and fluctuations in middle ear and cochlear fluids. Patients often report better hearing in the morning and increased tinnitus in the evening. In diagnostics, repeated measurements at different times of the day are recommended in order to obtain representative findings. Treatment plans take fluctuations into account by adjusting hearing aid programs and noise generator settings over time.
The tegmen tympani is the thin bony roof of the tympanic cavity and separates the middle ear from the middle cranial fossa. It protects the brain from inflammation originating in the middle ear and serves as an access point for certain neurotological surgeries. Defects in the tegmen can lead to cerebrospinal fluid fistulas and cerebral infections. Imaging techniques (CT, MRI) are used to check the integrity of the tegmen in cases of chronic otitis media. Surgical reconstruction with autologous or alloplastic materials restores the barrier function.
Temporal resolution is the ability of the auditory system to perceive closely spaced sound events as separate. It is measured using tests such as gap detection or double-click audiometry. Good temporal resolution is crucial for speech comprehension in fast speech passages and for music perception. Temporal resolution is often reduced in cases of central auditory processing disorders or hidden hearing loss. Auditory training can improve the neural processing of temporally fine stimuli.
The temporal lobe is the area of the brain where the primary auditory cortex (Heschl's gyrus) is located. It processes basic sound characteristics such as frequency and volume and is involved in speech comprehension (Wernicke's area). Lesions in the temporal lobe lead to auditory agnosia, speech comprehension disorders, and tinnitus processing difficulties. Functional imaging (fMRI, PET) shows activation patterns during acoustic and linguistic tasks. The plasticity of the temporal lobe enables successful rehabilitation after hearing loss and implantations.
Therapeutic listening is the targeted use of acoustic stimuli—such as music, speech exercises, or noise—to treat hearing disorders and tinnitus. It combines auditory training, desensitization, and cognitive therapy approaches. Programs are individualized and can be carried out in clinical sessions or via app-supported home training. The aim is to improve speech comprehension, reduce tinnitus distress, and promote neural plasticity. Studies show long-term effects on hearing comfort and quality of life.
Tinnitus is the perception of sounds (e.g., whistling, hissing) without an external sound source. It is caused by spontaneous neural activity in the auditory system, often following damage to hair cells or central maladjustments. Tinnitus can be pulsatile, tonal, or noise-like and varies in volume and severity. Diagnosis includes medical history, tinnitus screening (frequency and level determination), and exclusion of organic causes. Treatment approaches range from sound therapy and tinnitus retraining to cognitive behavioral therapy.
Tinnitus retraining therapy (TRT) combines sound therapy with psychological counseling to promote habituation to tinnitus. A noiser or broad noise is played continuously or situationally to mask the tinnitus signal and enable neural adaptation. At the same time, cognitive strategies are learned to reduce negative reactions to tinnitus. The process usually takes 12–18 months and shows a significant reduction in tinnitus distress in many patients. Regular evaluations adjust sound profiles and counseling content.
The tinnitus generator is the specific location or mechanism in the auditory system that produces tinnitus, e.g., damaged hair cells, increased central gain control, or somatosensory influences. It can be localized using electrocochleography, OAE mapping, or imaging techniques. Knowledge of the generator enables targeted therapies, such as focal drug administration or neurostimulation. In complex cases, multiple generators exist at the peripheral and central levels. Research uses animal models to decipher generators and their interactions.
A tinnitus masker is a device or function that generates an external noise signal to mask the tinnitus. Maskers can be broadband noise, notch filter noise, or narrowband tinnitus spectral sounds. The aim is to suppress the tinnitus signal in the consciousness and promote habituation. Integrated maskers in hearing aids allow situational activation and adjustment of volume and spectrum. Masker therapy improves sleep and concentration in tinnitus patients.
Tinnitus perception encompasses the subjective experience of tinnitus, including sound characteristics, volume, localization, and emotional response. It is assessed using questionnaires (e.g., TFI, THI) and acoustic matching procedures. Perception dimensions only partially correlate with objective measurements, as cognitive and emotional factors play a major role. Therapy success is mainly evaluated based on changes in tinnitus perception. Long-term tracking of perception helps to individualize therapy approaches and make adjustments.
Tone audiometry is the standard method for determining hearing thresholds for pure tones via air and bone conduction. Test tones at defined frequencies (125 Hz–8 kHz) are presented to the test subject via headphones or bone conduction; the minimum perceived levels are recorded in the audiogram. It differentiates between conductive and sensorineural hearing loss by comparing both transmission paths. Automated and manual protocols ensure precision and reproducibility. The results form the basis for hearing aid fitting and diagnosis of middle and inner ear pathologies.
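The comparison of air and bone conduction described above can be sketched in code. This is a minimal illustration, not a clinical tool: the function name `air_bone_gap`, the threshold values, and the 10 dB gap criterion (a common rule of thumb for suspecting a conductive component) are all assumptions for demonstration.

```python
# Illustrative sketch: deriving an air-bone gap from pure-tone thresholds (dB HL).
# Frequencies and threshold values are made up, not clinical data.

def air_bone_gap(air_db_hl, bone_db_hl):
    """Per-frequency difference between air and bone conduction thresholds."""
    return {f: air_db_hl[f] - bone_db_hl[f] for f in air_db_hl}

air = {500: 40, 1000: 45, 2000: 40}   # air-conduction thresholds (dB HL)
bone = {500: 10, 1000: 10, 2000: 15}  # bone-conduction thresholds (dB HL)

gaps = air_bone_gap(air, bone)
# A gap above ~10 dB at several frequencies suggests a conductive component.
conductive_suspected = all(g > 10 for g in gaps.values())
```

When bone conduction is normal and air conduction is elevated, as in this sketch, the pattern points toward a middle or outer ear problem rather than a sensorineural one.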
Pitch resolution describes the ability to perceive two tones of different frequencies as separate. It is determined psychoacoustically using frequency discrimination tests and is expressed as the smallest detectable frequency difference (Δf). Good resolution is essential for understanding music and perceiving speech, as it allows formants and melodic patterns to be differentiated. Cochlear damage impairs resolution, resulting in a blurred sound impression. Hearing aid and implant strategies aim to preserve remaining tonotopic precision.
Pitch perception is the ability to determine the absolute or relative pitch of a sound, such as in melodies or telephone conversations. Tests such as melody discrimination or musical intervals assess this ability. It depends on coherent processing in the cochlea and auditory cortex. Disorders occur in cases of central auditory processing disorders or after a stroke in the temporal lobe. Musical auditory training can improve pitch perception through plasticity.
The scale test is a psychoacoustic procedure in which test subjects must recognize or reproduce successive scales (ascending/descending). It tests pitch recognition, sequence memory, and musical abilities. In audiology, it is used to assess sound quality and temporal processing in hearing aid users. Differences in test performance before and after hearing aid fitting demonstrate the success of the fitting in musical scenarios. Variations with different intervals analyze frequency resolution in detail.
Scale hearing refers to the perception and cognitive processing of scales as a musical structure. It includes recognizing scale type (major, minor), intervals, and melodic progressions. Neuroimaging shows specific activation patterns in the temporal lobe and associated areas. Hearing loss reduces scale listening due to impaired frequency and time resolution. Rehabilitative music therapy uses scale exercises to promote auditory processing and quality of life.
Tonotopy is the systematic spatial mapping of frequencies along the cochlea (base = high frequencies, apex = low frequencies) and in the auditory cortex. It forms the basis for frequency coding in hearing and enables precise filtering in hearing aids. Tonotopic maps in the cortex show how auditory stimuli of different frequencies are topographically mapped. Damage to certain regions of the cochlea leads to frequency-specific hearing loss. Cochlear implants utilize tonotopy by stimulating electrodes along the cochlea in a frequency-specific manner.
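The base-to-apex frequency mapping can be approximated with the Greenwood function, f = A(10^(a·x) − k), where x is the relative distance from the apex. The constants below are the commonly cited fit for the human cochlea; the helper name `greenwood_frequency` is chosen for illustration.

```python
# Greenwood place-frequency map for the human cochlea (common fit:
# A = 165.4, a = 2.1, k = 0.88; x = relative distance from apex, 0.0 .. 1.0).
def greenwood_frequency(x):
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

# Apex codes low frequencies, base codes high frequencies:
low = greenwood_frequency(0.0)   # roughly 20 Hz at the apex
high = greenwood_frequency(1.0)  # roughly 20.7 kHz at the base
```

This exponential mapping is why a cochlear implant's electrode position along the cochlea, not just its stimulation rate, determines the perceived pitch.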
Tone threshold elevation refers to the raising of the hearing threshold for tones in certain frequency ranges, visible in the audiogram as hearing loss. It can be mild (20–40 dB), moderate (41–70 dB), or severe (>70 dB). Causes include noise trauma, presbycusis, or ototoxic damage to hair cells. The elevation shows which frequencies are affected and guides targeted amplification in hearing systems. Follow-up measurements document progression or recovery after therapy.
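The grading given above (mild 20–40 dB, moderate 41–70 dB, severe >70 dB) can be expressed as a small classifier. The function name and the "normal" label below the 20 dB boundary are assumptions for illustration.

```python
# Sketch of the severity grading stated in the entry above
# (mild 20-40 dB, moderate 41-70 dB, severe >70 dB threshold elevation).
def grade_hearing_loss(threshold_db_hl):
    if threshold_db_hl < 20:
        return "normal"
    if threshold_db_hl <= 40:
        return "mild"
    if threshold_db_hl <= 70:
        return "moderate"
    return "severe"
```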
Toxic hearing loss is caused by ototoxins such as aminoglycoside antibiotics, cisplatin, or solvents, which destroy hair cells and synaptic connections. It usually begins in the high-frequency range and progresses downward with further exposure. Early detection via OAE monitoring during therapy can reduce irreversible damage. Protective strategies include dose adjustment, otoprotective substances, and regular audiological checks. Long-term consequences range from tinnitus to permanent sensorineural hearing loss.
The tragus is the cartilaginous protrusion in front of the ear canal, which partially shields the entrance and serves as natural sound insulation. It influences interaural level differences and thus the localization of sound sources. Clinically, it serves as an anatomical reference point for otoscopy and tragus reflex testing. Pressure on the tragus can cause pain during examination and indicate inflammation of the ear canal. In otoplasty design, the contour of the tragus is precisely replicated to ensure a seal and comfort.
The tragus reflex (also known as the otalgia reflex) is a pain response triggered by pressure on the tragus or traction on the earlobe. A positive reflex indicates inflammation or pressure pain in the external auditory canal (otitis externa). It supplements otoscopy with a functional test of the skin and sensitivity in the canal. In terms of differential diagnosis, it helps to distinguish otogenic pain from causes related to the teeth or temporomandibular joint. The reflex is triggered by light finger pressure; intensification in the case of pathology is typical.
TEOAE (transient evoked otoacoustic emissions) are acoustic responses of the cochlea to short clicks or pulses, measured in the external auditory canal. They are generated by active feedback from the outer hair cells and are an objective indicator of cochlear health. TEOAE screening is used in newborn hearing screening because it works without active cooperation. The absence of TEOAE indicates damage to the outer hair cells and possible sensorineural hearing loss. Measurement takes place within a few milliseconds after stimulation and offers high sensitivity and specificity.
Transmission sound refers to sound that is transmitted from one room to another through walls, ceilings, or other structures. It is examined in construction to ensure noise protection between apartments or offices. Measured variables are transmission loss (TL) and the weighted sound reduction index (Rw). Structural measures such as double-leaf walls, resilient substructures, and insulation layers minimize transmission sound. Standards specify minimum requirements for residential and work areas.
Transmission sound loss is the difference between the incoming and outgoing sound pressure levels at a partition wall, expressed in dB. It characterizes the sound insulation properties of building components. Higher values indicate better insulation. Tests are carried out in laboratories with standardized sound fields; field measurements validate the results on site. Transmission sound loss is crucial for sound insulation classes and building acoustics planning.
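The level difference described above relates directly to the fraction of sound power passing the partition: TL = 10·log10(1/τ). The helper names and the 85/35 dB example values below are illustrative assumptions.

```python
# Sketch: transmission loss as a level difference across a partition,
# and its relation to the transmitted power fraction tau (TL = 10*log10(1/tau)).
# The example levels are made up for illustration.
def transmission_loss(level_in_db, level_out_db):
    return level_in_db - level_out_db

def transmitted_fraction(tl_db):
    """Fraction of incident sound power passing the partition for a given TL."""
    return 10 ** (-tl_db / 10)

tl = transmission_loss(85.0, 35.0)  # 50 dB of transmission loss
tau = transmitted_fraction(tl)      # only 1e-5 of the power gets through
```

A 50 dB wall thus transmits only one hundred-thousandth of the incident sound power, which is why the logarithmic dB scale is used for sound insulation classes.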
The eardrum (membrana tympani) is a thin, semi-transparent membrane that separates the outer ear from the middle ear and converts sound into mechanical vibrations. It consists of three layers: skin, connective tissue, and mucous membrane. Intact mobility and tension are essential for effective sound conduction. Perforations or scarring impair impedance matching and lead to conductive hearing loss. Surgical reconstruction (myringoplasty) restores integrity and function.
A tympanic membrane perforation is a defect in the tympanic membrane caused by infection, trauma, or barotrauma. It appears otoscopically as a hole or tear and leads to conductive hearing loss and an increased risk of infection. Small perforations can heal spontaneously, while larger ones require myringoplasty. Tympanometry documents a perforation via a flat curve with an abnormally large equivalent ear canal volume. Postoperative monitoring ensures successful closure and hearing gain.
A tympanogram is a graphical representation of middle ear impedance as a function of external air pressure. It is produced during tympanometry when the eardrum is stimulated with varying pressure and compliance is measured. Typical curve types (A, B, C) indicate a normal middle ear, effusion, or Eustachian tube dysfunction. Tympanograms help to differentiate between sound conduction disorders and assess the need for ear tubes. Normal values vary depending on age and measurement system.
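The A/B/C curve typing can be sketched as a simple rule: a flat curve (very low peak compliance) suggests type B, a clear peak at strongly negative pressure suggests type C, otherwise type A. The cut-offs below (compliance in ml, peak pressure in daPa) are illustrative textbook-style values, not a clinical standard, and the function name is hypothetical.

```python
# Hedged sketch of tympanogram curve typing. The thresholds are
# illustrative assumptions; real norms vary with age and measurement system.
def classify_tympanogram(peak_compliance_ml, peak_pressure_dapa):
    if peak_compliance_ml < 0.2:
        return "B"  # flat curve: effusion or perforation suspected
    if peak_pressure_dapa < -100:
        return "C"  # peak at negative pressure: Eustachian tube dysfunction
    return "A"      # peak near ambient pressure: normal middle ear
```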
Tympanometry is the measurement of middle ear impedance by varying the air pressure in the ear canal. It assesses eardrum mobility and the ventilation status of the tympanic cavity. A tympanometer generates a tympanogram, which allows conclusions to be drawn about fluids, perforations, or functional disorders of the Eustachian tube. It is rapid, objective, and complements audiometry and otoscopy in ENT diagnostics. Normative curves help to identify pathologies such as otitis media with effusion.
Tympanoplasty is the surgical reconstruction of the eardrum and ossicular chain to restore sound conduction. Procedures range from classic myringoplasty (eardrum closure) to combined tympanomastoidoplasty for cholesteatoma. The goals are to seal the middle ear, control infection, and improve hearing. The procedure is performed under a microscope, often using autologous graft material. Long-term success is monitored by audiometry and imaging.
U
Hypersensitivity in an acoustic context describes an increased perception of volume, in which even normal everyday noises are experienced as unpleasant or painful. It can be a consequence of hyperacusis, but can also occur temporarily after exposure to noise or stress-related central modifications. Diagnostically, discomfort thresholds (UCL) are determined to quantify the degree of hypersensitivity. Therapeutic approaches include gradual desensitization with controlled noise stimuli and cognitive behavioral therapy to reduce emotional distress. In hearing aid fitting, compression is carefully adjusted so as not to exacerbate hypersensitivity.
A conductive hearing disorder refers to any impairment in which sound is not transmitted efficiently through the outer and middle ear to the inner ear. Causes include cerumen impaction, tympanic membrane perforations, or ossicular fixations such as otosclerosis. Clinically, this manifests as an air-bone gap on the audiogram: bone conduction thresholds remain normal while air conduction thresholds are elevated. Treatment depends on the cause: surgical reconstruction, removal of obstructions, or use of bone conduction hearing systems. Regular tympanometry and otoscopy monitor the success of the therapy.
Auditory adaptation is the decrease in volume perception during continuous or repeated sound stimulation in order to protect the auditory system from permanent overstimulation. It manifests itself as an increase in the hearing threshold for continuous tones or noise over time. Adaptation mechanisms occur in hair cells, cochlear synapses, and central auditory pathways. In hearing aid technology, adaptive compression algorithms are being developed that mimic these natural processes in order to maintain sound consistency. Lack of or delayed adaptation can lead to fatigue and discomfort.
Auditory fatigue refers to the temporary reduction in loudness perception and hearing acuity after prolonged exposure to sound, especially at high levels. It manifests itself in increased hearing thresholds and reduced discrimination ability, which recover after periods of rest. The mechanisms involved are hair cell fatigue, synaptic exhaustion, and central adaptation processes. Audiologically, fatigue is quantified with tests before and after noise exposure in order to determine risk limits for hearing protection. Rehabilitation through staggered listening breaks and programmed "recovery noise" supports regeneration.
Auditory filtering describes the ability of the ear to separate relevant sound components (e.g., speech) from background noise based on frequency, time, and spatial cues. In the cochlea, the basilar membrane together with receptor and neural filters emphasizes or attenuates certain frequency bands. Central filtering mechanisms in the auditory pathway and cortex select signals according to meaning and context. In hearing aids, this is technically replicated by multiband filters, noise reduction, and directional microphones. Efficient filtering improves speech comprehension in noisy environments and reduces cognitive load.
Auditory localization is the ability to determine the direction and distance of a sound source. It is based on interaural time differences (ITD) and interaural level differences (ILD), as well as spectral filter effects of the outer ear and head-body transfer functions. Central processing centers in the brainstem (olive complex) combine these cues to enable spatial hearing. Damage to binaural signal processing leads to localization limitations and reduced situational awareness. Hearing systems with binaural networking support natural localization by maintaining cues synchronously.
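The ITD cue can be approximated with Woodworth's spherical-head model, ITD = (r/c)·(θ + sin θ). The head radius of 0.0875 m is an assumed average, and the function name is chosen for illustration.

```python
import math

# Woodworth's spherical-head approximation for the interaural time difference:
# ITD = (r/c) * (theta + sin(theta)), with r = head radius (m), c = speed of
# sound (m/s), theta = source azimuth. r = 0.0875 m is an assumed average.
def itd_seconds(azimuth_deg, head_radius_m=0.0875, c=343.0):
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# A source directly to the side (90 degrees) yields roughly 650 microseconds,
# near the upper end of naturally occurring ITDs.
itd_side = itd_seconds(90)
```

Because the maximum natural ITD is well under a millisecond, binaurally networked hearing systems must preserve timing with sub-millisecond precision to keep this cue usable.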
Auditory masking describes the phenomenon whereby loud sounds mask quiet sounds of the same or similar frequencies, preventing them from being heard. This creates critical bands within the ear where masking energy is particularly effective. Masking is used as a diagnostic tool in audiometry and in hearing aids for tinnitus masking or noise suppression. Adaptive masking filters take individual critical bandwidths into account for effective noise suppression. Psychoacoustic masking effects are fundamental to compression and noise management algorithms.
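The width of the critical bands mentioned above is often modeled with the equivalent rectangular bandwidth (ERB) of the auditory filter, ERB = 24.7·(4.37·f/1000 + 1) Hz; the formula is the widely used Glasberg and Moore fit, and the helper name is illustrative.

```python
# Equivalent rectangular bandwidth (ERB) of the auditory filter at centre
# frequency f in Hz, per the commonly used fit ERB = 24.7*(4.37*f/1000 + 1).
def erb_hz(f_hz):
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

# Around 1 kHz the critical band is roughly 130 Hz wide; masking energy
# falling inside this band is most effective at hiding a 1 kHz target tone.
erb_1k = erb_hz(1000.0)
```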
Auditory plasticity is the ability of the auditory system to adapt structurally and functionally to changes in acoustic stimuli or hearing loss. It includes the formation of new synapses, cortical reorganization, and changes in auditory pathway connections. Plasticity enables recovery after sudden hearing loss, adaptation to hearing aids and cochlear implants, and the learning of new hearing strategies. Rehabilitation training and musical listening promote plastic processes and improve speech comprehension and sound perception. Plasticity decreases with age, which is why early intervention is recommended.
The auditory threshold is the minimum audible sound pressure level for a stimulus at a given frequency and duration. It is documented in the audiogram as the hearing threshold for tones (dB HL) and forms the basis for diagnosing hearing loss. Shifts in the threshold of more than 20 dB from the norm indicate hearing impairment. Different types of thresholds—absolute, terminal, and discomfort thresholds—characterize the entire dynamic hearing experience. Repeated threshold measurements enable progress monitoring during therapy or noise protection measures.
Auditory processing encompasses all central neural mechanisms that transform and interpret acoustic signals from the cochlea to the cortex. It includes temporal and spectral analysis, pattern recognition, and speech comprehension. Disorders of processing—such as central auditory processing disorders—lead to comprehension difficulties despite normal peripheral function. Diagnostic procedures such as evoked potentials and dichotic tests examine the processing levels. Rehabilitation through auditory training uses plastic adaptation to strengthen deficient processing components.
Auditory perception refers to the conscious experience of sound characteristics such as volume, pitch, timbre, and spatial location. It arises through the integration of peripheral stimuli and cognitive processes in the auditory cortex and associated areas. Perceptual phenomena such as gestalt formation (auditory scene analysis) and attention control determine which sound sources are in focus. Perception is measured psychophysically using threshold and discrimination tests. Impairments occur in cases of tinnitus, hidden hearing loss, or central disorders and require targeted training.
Ultrasonic frequencies are sound frequencies above the human hearing range (>20 kHz). Although not consciously audible, they can cause resonances and nonlinear effects in the acoustics of the outer and inner ear. In otoacoustics, ultra-high-frequency emissions (up to 100 kHz) are used to test outer hair cell function with high resolution. Ultrasound is used in medicine (Doppler sonography) and materials testing, but not for conventional hearing tests. Research is investigating the possible biological effects of ultra-high frequencies from hearing aids and ambient noise.
Ambient noise refers to all acoustic signals in the environment that are not part of the target stimulus, such as traffic noise, conversations, or machine noise. These signals affect speech comprehension, listener fatigue, and the performance of hearing aids. Audiologists measure signal-to-noise ratios (SNR) in typical everyday situations in order to optimize treatment concepts. Noise reduction algorithms and directional microphones in hearing aids reduce disruptive ambient noise. In room planning, noise maps and acoustic simulations are used to control ambient noise levels.
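The signal-to-noise ratio mentioned above is a simple level ratio in dB. A minimal sketch, with a hypothetical helper computing SNR from RMS sound pressures:

```python
import math

# SNR in dB from the RMS pressures of target signal and ambient noise.
# For pressures the factor is 20 (power ratios would use 10).
def snr_db(signal_rms, noise_rms):
    return 20 * math.log10(signal_rms / noise_rms)

# Equal signal and noise pressure -> 0 dB SNR;
# speech at ten times the noise pressure -> +20 dB SNR.
equal = snr_db(1.0, 1.0)
favourable = snr_db(10.0, 1.0)
```

For context, normal-hearing listeners cope with SNRs around 0 dB in conversation, while hearing-impaired listeners often need a distinctly positive SNR, which is what directional microphones and noise reduction try to provide.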
The uncomfortable level (UCL) is the sound pressure level at which a sound is perceived as unpleasant or painful. It typically lies in the range of 80–100 dB HL and varies individually with frequency and hearing status. UCL measurements are important for setting the maximum output power of hearing aids to avoid overamplification. Deviations may indicate hyperacusis or central auditory dysregulation. Follow-up checks of the UCL help to adjust comfort parameters to the situation.
The just-noticeable difference (JND) is the smallest perceptible difference in an acoustic stimulus, e.g., in volume or frequency. It is determined using methods such as paired comparison and is frequency- and level-dependent. Typical loudness JNDs are around 1 dB, while frequency JNDs are 0.2–1% of the carrier frequency. In hearing aids, JND values are used in the fine-tuning of compression and filter bandwidths. Increased JNDs indicate reduced resolution and can explain speech comprehension problems.
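The relative frequency JND (a Weber fraction) translates into an absolute frequency step at a given carrier. A minimal sketch; the helper name and the 0.5% default (within the 0.2–1% range cited above) are assumptions for illustration.

```python
# Converting a relative frequency JND (Weber fraction) into the smallest
# detectable frequency step at a given carrier. The 0.5% default is an
# illustrative value within the 0.2-1% range typical for frequency JNDs.
def frequency_jnd_hz(carrier_hz, weber_fraction=0.005):
    return carrier_hz * weber_fraction

# At a 1 kHz carrier with a 0.5% Weber fraction, frequency steps smaller
# than about 5 Hz go unnoticed.
jnd_1k = frequency_jnd_hz(1000.0)
```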
V
Valid hearing threshold determination reliably records the minimum audible sound pressure levels of a hearing test subject at defined frequencies. It requires standardized test conditions (quiet booth, calibrated audiometer) and clear instructions to the patient. Validity is increased by checking test-retest consistency and clinical plausibility, for example through cross-checks with otoacoustic emissions. Psychometric methods such as catch trials can reveal psychogenic response patterns. Only valid thresholds form a reliable basis for diagnosis and hearing aid fitting.
Validation audiometry comprises objective and subjective test procedures that check the consistency between measured audiograms and everyday experiences. It combines standard audiometry with speech audiometry, OAE screening, and self-assessment questionnaires (e.g., APHAB). The aim is to verify the success of the fitting and the quality of the adjustment, as well as to identify any discrepancies. Adaptive test sets simulate realistic hearing situations to ensure that the results are practical. The results are used to readjust the hearing system parameters and to document the quality of the fitting.
The vanish effect describes the temporary disappearance or attenuation of tinnitus when a specific sound signal is played, often immediately after the stimulus ends. This phenomenon indicates cortical reorganization and central inhibitory pathways that modulate the tinnitus generator network. It is used in studies to identify effective masking profiles and investigate neural plasticity. Clinically, the vanish effect can provide information about suitable sound therapy parameters. Long-term use of the identified stimuli can contribute to permanent habituation.
A variable filter dynamically adjusts its center frequency, bandwidth, and slope to changing acoustic environments. In hearing aids, it allows speech to be emphasized in noisy situations and reduces background noise. Algorithms continuously analyze the input signal and adjust filters in real time to optimize the balance between speech intelligibility and natural sound. Adaptive filters can also detect feedback peaks and initiate countermeasures. Using machine learning approaches, modern systems learn user preferences in order to individualize filter strategies.
Processing in the brain refers to the central analysis, integration, and interpretation of auditory signals after peripheral transduction. It involves pathways in the brainstem, thalamus, and primary and secondary auditory cortex areas. Here, time and level differences, speech patterns, and music-specific information are extracted and linked to memory content. Plasticity enables adaptation to hearing loss or hearing aids through the reorganization of neural networks. Disorders at this level lead to central auditory processing disorders and require targeted therapies.
The masking effect describes the suppression of soft sounds by loud noise or tones present at the same time. It is psychoacoustically essential for masking phenomena and determines which sounds remain audible in complex sound mixtures. In audiometry, targeted masking prevents cross-hearing and isolates the ear being tested. In hearing aids, controlled maskers are used to mask tinnitus or attenuate disturbing frequencies. Masking patterns are determined individually to achieve an optimal balance between signal preservation and noise suppression.
Ossification describes pathological bone remodeling in the middle ear, usually characteristic of otosclerosis, which leads to fixation of the ossicular chain. The stapes footplate is particularly frequently affected, which greatly reduces sound conduction. The audiogram shows a prototypical air-bone gap. Therapeutically, the fixation is corrected by stapedotomy, whereby the ossified stapes is bypassed and replaced with a prosthesis. Long-term follow-ups confirm the stability of the reconstruction and the hearing gain.
An amplifier circuit in hearing aids consists of a preamplifier, signal processor, and output stage, which amplifies weak microphone signals to an audible level. Digital amplifier circuits enable multiband compression, feedback management, and adaptive filtering. Linearity and output power determine sound fidelity and maximum volume. Signal-to-noise ratio and total harmonic distortion are critical parameters for amplifier quality. Modern ASICs integrate amplifiers and DSPs in small form factors with low power consumption.
Amplification describes the increase in the sound pressure level of an input signal to make it audible to the residual hearing. In hearing systems, this is done on a frequency-dependent basis in line with the hearing loss profiles in an audiogram. Compression algorithms ensure that loud signals are not overcompressed and quiet signals are amplified appropriately. Amplification can be linear (same factor) or nonlinear (dynamic adjustment). The aim is to achieve maximum speech intelligibility with a subjectively natural sound.
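The nonlinear (dynamic) case can be sketched as a simple input/output rule: linear gain below a compression knee, reduced growth above it. The gain, knee point, and compression ratio below are illustrative assumptions, not a fitting prescription.

```python
# Minimal sketch of nonlinear (wide dynamic range compression style) gain:
# linear amplification below the knee, compressed growth above it.
# gain_db, knee_db, and ratio are illustrative, not fitting-rule values.
def output_level_db(input_db, gain_db=20.0, knee_db=50.0, ratio=3.0):
    if input_db <= knee_db:
        return input_db + gain_db  # linear region: full gain applied
    # Above the knee, each dB of input adds only 1/ratio dB of output.
    return knee_db + gain_db + (input_db - knee_db) / ratio

quiet = output_level_db(40.0)  # 60 dB: quiet input gets the full 20 dB gain
loud = output_level_db(80.0)   # 80 dB: loud input is compressed, not 100 dB
```

This is exactly the trade-off the entry describes: soft sounds are lifted into the audible range while loud sounds are kept below the uncomfortable level.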
The vestibular system comprises the saccule, utricle, and three semicircular canals in the inner ear and registers acceleration and head movements. It sends signals via the vestibular portion of the VIII cranial nerve to the brain stem and cerebellum to control balance and eye reflexes. Dysfunctions lead to dizziness, nystagmus, and balance disorders. Diagnostic procedures include caloric testing, VEMP, and video nystagmography. Vestibular rehabilitation trains central compensation and stabilizes gait and balance control.
Vestibular vertigo is a spinning or tilting sensation caused by disorders of the vestibular system in the inner ear or its central connections. Causes can include vestibular neuritis, Meniere's disease, or vestibular migraine equivalent. Accompanying symptoms include nausea, nystagmus, and balance disorders. Diagnostics include caloric testing, VEMP, and video nystagmography to distinguish peripheral from central causes. Treatment involves corticosteroids, vestibular rehabilitation, and, in recurrent cases, intratympanic gentamicin therapy.
The vestibular system consists of the otolith organs (sacculus, utriculus) and the three semicircular canals, which register linear and rotational accelerations. It sends information about head movements and position to the brain stem, cerebellum, and somatosensory cortex to control balance and spatial orientation. Reflexes such as the vestibulo-ocular reflex ensure stable gaze during head movement. Disorders lead to dizziness, unsteadiness, and nausea. Rehabilitation promotes central compensation through exercise programs and neurofeedback.
The vestibulo-ocular reflex (VOR) stabilizes the image on the retina by driving eye movements opposite to head movements. It has a very short latency (<10 ms) and is realized via direct connections between the vestibular nuclei and oculomotor neurons. An intact VOR is essential for clear vision while walking or running. Pathological VOR parameters (gain, phase) are measured with the video head impulse test (vHIT). Therapy for VOR weakness includes targeted gaze stabilization training.
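The gain parameter measured in vHIT is the ratio of compensatory eye velocity to head velocity; a healthy reflex approaches 1. A minimal sketch with a hypothetical helper and illustrative velocities:

```python
# VOR gain as the ratio of compensatory eye velocity to head velocity.
# A gain near 1.0 indicates a healthy reflex; values well below 1.0
# suggest vestibular hypofunction. Velocities here are illustrative.
def vor_gain(eye_velocity_deg_s, head_velocity_deg_s):
    return abs(eye_velocity_deg_s) / abs(head_velocity_deg_s)

# Eye moves at 95 deg/s against a 100 deg/s head impulse -> gain 0.95.
gain = vor_gain(-95.0, 100.0)
```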
Vibration sensitivity is the perception of mechanical vibrations transmitted via Pacini and Meissner corpuscles in the skin and deeper tissues. In the ear, vibration sensitivity is used in bone conduction audiometry, where a sound transducer generates vibrations in the mastoid. The threshold is typically 0.2–0.5 g at 250–500 Hz. Changes in vibration perception can indicate neuropathic or vestibular disorders. Vibration measurements support the diagnosis of bone conduction pathways and tactile feedback in hearing systems.
Vibration conduction (bone conduction) transmits sound by stimulating the cochlea directly with vibrations of the skull, without involving the eardrum. It is tested audiometrically to distinguish between sound conduction and sound perception disorders. Implantable bone conduction devices (BAHS, Bonebridge) use vibration conduction to treat middle ear pathologies. Efficiency depends on the location and frequency of vibration; mastoid implants offer better low bass. Vibration conduction also plays a role in the somatosensory interaction of the vestibular system.
A vibration plate generates low-frequency whole-body vibrations for the rehabilitation of vestibular and musculoskeletal functions. In hearing rehabilitation, it is used experimentally to combine vestibular stimulation with auditory training. Vibration parameters (frequency, amplitude) are selected so that they activate the balance system without causing nausea. Studies show improved VOR gain and gait stability after combined vibration-vestibular training. Its use is still being clinically tested, but promises multisensory therapeutic effects.
A virtual acoustic environment (VAE) simulates realistic 3D sound fields via headphones or speaker systems using HRTF-based rendering. It is used in hearing research and training to safely represent complex everyday situations (restaurants, streets). VAEs allow controlled manipulation of background noise, sound source movement, and reverberation. In hearing aid development, adaptive algorithms are tested under realistic conditions. Users benefit from individualized simulations for targeted rehabilitation.
Visual reinforcement describes the support of hearing through visual information, such as lip reading, gestures, or text subtitles. Multisensory integration in the superior temporal sulcus improves speech comprehension in noisy situations. Augmented reality systems project real-time transcriptions into the field of vision to optimize visual reinforcement. Neuroplasticity promotes neural connections between visual and auditory areas in cases of hearing loss. Training combines auditory and visual stimuli to strengthen cross-modal compensation.
The voicing feature distinguishes voiced (e.g., /b/, /d/) from unvoiced consonants (e.g., /p/, /t/) based on vocal fold vibration. Voiced sounds exhibit a fundamental frequency in the spectrum, while unvoiced sounds mainly correspond to turbulent noise. In speech audiometry, voicing recognition is tested to diagnose high-frequency losses and time resolution problems. Hearing aid programs emphasize voicing-relevant frequency bands to compensate for articulation deficits. Misperception of voicing leads to speech comprehension errors, especially in noisy environments.
The vocal tract comprises the throat, oral cavity, and nasal cavity, which act as variable resonators to form speech sounds. Changes in the shape and length of the vocal tract produce different formants that characterize vowels. Acoustic models of the vocal tract are used in hearing research and speech synthesis. Resonance shifts caused by hearing aid earmolds can minimally alter vowel formants. Speech therapy training takes vocal tract mechanics into account in order to specifically promote articulation in cases of hearing loss.
W
Perception in an auditory context refers to the conscious process by which the brain interprets acoustic stimuli and translates them into sensory impressions. It encompasses the detection, discrimination, and cognitive processing of volume, pitch, and timbre. Auditory perception is closely linked to attention and memory, which enables complex tasks such as speech comprehension in noisy environments. Disorders, such as central auditory processing disorders, manifest themselves despite normal peripheral function. Rehabilitative training programs improve perceptual performance through targeted multisensory integration exercises.
A sound transducer (speaker, headphones, or bone conduction transducer) converts electrical signals into acoustic waves and vice versa. In audiometry, calibrated transducers are used to ensure defined sound pressure levels at test frequencies. The quality and linearity of the transducer determine the precision of hearing threshold measurements and OAE detection. Miniature transducers (receivers) are integrated into hearing aids, which deliver speech signals directly into the ear canal. Transducer designs optimize frequency response, low distortion, and energy consumption.
The waiting area is a soundproof antechamber in front of the testing booth where patients are prepared acoustically and psychologically before the test. It minimizes the influence of door noises and ambient noise on the test conditions. It usually contains control panels for the audiologist and visual communication devices for the patient. A correctly designed waiting area is part of the standard requirements (DIN standards) for audiological laboratories. It also serves to explain test procedures and reassure patients before tests.
The Weber test is a simple tuning fork test for lateralizing bone conduction sound. The vibrating fork is placed in the center of the crown or forehead; the patient indicates in which ear they hear the sound louder. In cases of conductive hearing loss, the sound is lateralized to the affected ear; in cases of sensorineural hearing loss, it is lateralized to the healthy ear. The Weber test complements the Rinne test for distinguishing between sound conduction and sound perception disorders. It can be performed quickly and leads to targeted further diagnostics.
Modern hearing aids offer several programs (e.g., quiet, restaurant, music) that adjust acoustic parameters such as compression and microphone characteristics. Programs can be changed manually using buttons on the device, via remote control, or automatically via environment analysis. Automatic program changes recognize acoustic scenarios and adjust seamlessly to optimize speech comprehension and comfort. Training users in program changes improves self-management and hearing satisfaction. Log files document program change frequency for fine-tuning.
Bilateral hearing loss refers to a situation in which both ears are hard of hearing, but to varying degrees or in different ways (e.g., one ear conductive, the other sensorineural). This asymmetry affects lateralization ability and binaural processing. Audiologically, separate air and bone conduction curves are recorded for both ears and masked during testing to avoid cross-hearing. Fitting strategies must adjust each ear individually and ensure binaural synchronization. Asymmetric loss requires special attention to directional microphone and compression parameters.
Soft cerumen is a moist, usually yellowish form of earwax that is easier to remove from the ear canal than hard, dark cerumen. It is caused by high activity of the cerumen glands and can lead to blockages if produced in excess. Treatment involves using cerumen-dissolving drops (e.g., oil- or water-based) and gentle rinsing. Regular check-ups prevent blockages and conductive hearing loss. In hearing aid fitting, soft cerumen can promote feedback if the ear molds do not fit tightly.
White noise contains all audible frequencies at equal power and is perceived psychoacoustically as a uniform "hissing" sound. It is used in hearing therapy as a masker for tinnitus and in sleep aids to promote relaxation. In speech audiometry, white noise serves as a competing masker. Technically, it is used to calibrate loudspeakers and microphones to identify frequency response deviations. White noise can cause hearing damage at excessive volumes.
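The defining property "equal power at all frequencies" can be illustrated with a minimal numpy sketch; the sample rate, seed, and band edges below are arbitrary illustrative choices, not values from the text:

```python
import numpy as np

# Sketch: generate one second of Gaussian white noise and check that its
# average power is roughly the same in a low and a high frequency band.
rng = np.random.default_rng(0)
fs = 16000                       # sample rate in Hz (illustrative)
noise = rng.standard_normal(fs)  # one second of white noise

# With N = fs samples, FFT bin k corresponds to k Hz.
spectrum = np.abs(np.fft.rfft(noise)) ** 2
low_band = spectrum[100:1000].mean()    # average power, 100-1000 Hz
high_band = spectrum[6000:7900].mean()  # average power, 6000-7900 Hz
# The two band averages agree up to statistical fluctuation.
```

Averaging over many bins is what makes the flatness visible; any single bin fluctuates strongly.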
The waveform represents the sound pressure or electrical signal voltage over time and shows amplitude, period, and pulse characteristics. In audiometry, waveforms of clicks and tones are visualized for quality assurance of stimuli. Waveform analysis helps to detect artifacts and distortions and to adjust stimuli. In signal processing, time and frequency domain analysis (Fourier transform) are used for diagnosis and filter development. Clear waveforms are a prerequisite for reproducible measurements of evoked potentials.
The wavelength is the spatial distance between two consecutive phase-equivalent points of a sound wave, calculated as the speed of sound divided by frequency. High frequencies have short wavelengths and are more directional, which is important for localization cues. When wavelengths are comparable to the dimensions of the head, the head creates interaural differences that the brain uses for directional detection. In room acoustics, wavelengths influence the effectiveness of absorbers and diffusers; low frequencies with long wavelengths are more difficult to attenuate. Knowledge of wavelength is essential for loudspeaker placement and acoustic design planning.
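The definition "speed of sound divided by frequency" can be worked through numerically; the speed of sound of 343 m/s assumes air at about 20 °C:

```python
# Worked example of wavelength = speed of sound / frequency.
# c = 343 m/s assumes air at roughly 20 degrees C.
def wavelength(frequency_hz, speed_of_sound=343.0):
    """Return the wavelength in metres for a tone of the given frequency."""
    return speed_of_sound / frequency_hz

# Low audiometric frequencies have long wavelengths, high ones short:
print(f"{wavelength(125):.2f} m")   # 125 Hz -> 2.74 m
print(f"{wavelength(8000):.3f} m")  # 8 kHz  -> 0.043 m
```

The two orders of magnitude between these values explain why low frequencies are so much harder to absorb in rooms.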
A waveguide directs sound or electromagnetic waves in a defined direction with minimal energy loss. In audiology, acoustic waveguides are used in earphones or hearing aids to direct sound to the eardrum in a focused manner. Technical waveguides in hearing aids shape the sound field at the microphone input to achieve directionality. The dimensions and material of the waveguide determine the cutoff frequency and attenuation. Optimized waveguides improve the signal-to-noise ratio and speech intelligibility.
Resistive impedance is the real part of acoustic or electrical impedance that describes energy loss due to friction or ohmic resistance. In middle ear mechanics, it corresponds to the damping properties of the ossicular chain and membranes. In tympanometry, an increased resistance component influences the shape of the impedance curve and indicates stiffness or fluid. In hearing aid circuits, low resistance reduces noise and improves energy efficiency. Impedance matching minimizes reflections at interfaces.
Wind noise suppression is a signal processing function in hearing aids and microphones that detects and reduces turbulent sound from wind at the microphone opening. Algorithms detect characteristic low-frequency components and activate adaptive filters or microphone switching. This improves speech intelligibility outdoors without manual intervention. Mechanical windshields (foam caps) complement digital suppression. Effectiveness is verified in real-world field tests at various wind speeds.
A windscreen is a physical cover (e.g., foam, fur) that is placed over microphones or loudspeakers to dampen wind noise. It prevents turbulent air movements at the microphone inlet and reduces low-frequency noise. Windscreen materials are acoustically transparent for speech frequencies but dampen disruptive air pressure peaks. In hearing aids and audio recorders, windscreens improve recording quality in free-field conditions. Regular replacement prevents contamination and material wear.
The angle of sound refers to the direction from which a sound source arrives relative to the body or device axis. Binaural cues such as interaural time and level differences encode this angle in the auditory system. Hearing systems with multi-microphone arrays reconstruct sound angles to adaptively control directional microphones. Measurements in the sound field determine directional characteristics and frontal amplification. Precise angle determination improves localization and speech comprehension in complex environments.
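The interaural time difference mentioned above can be estimated with a common spherical-head approximation (Woodworth's formula); the head radius of 0.0875 m is a typical textbook value, not a figure from the entry:

```python
import math

# Sketch of the interaural time difference (ITD) for a distant source,
# using the spherical-head approximation ITD = r * (theta + sin(theta)) / c.
# Head radius r = 0.0875 m and c = 343 m/s are illustrative textbook values.
def itd_seconds(angle_deg, head_radius=0.0875, c=343.0):
    theta = math.radians(angle_deg)
    return head_radius * (theta + math.sin(theta)) / c

print(f"{itd_seconds(0) * 1e6:.0f} us")   # frontal source: 0 us
print(f"{itd_seconds(90) * 1e6:.0f} us")  # source at the side: ~656 us
```

The sub-millisecond scale of these values shows why multi-microphone arrays need precise timing to reconstruct sound angles.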
Efficiency in hearing aid technology describes the ratio of acoustic output power to electrical input power. High efficiency means longer battery life and less heat generation. Influencing factors include microphone sensitivity, amplifier circuits, and receiver efficiency. Manufacturers optimize circuit topologies and components to achieve efficiencies of >50%. High efficiency is particularly important for small in-ear systems with limited space and battery capacity.
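The ratio defined in this entry is straightforward to compute; the power figures below are invented for illustration and do not come from any manufacturer data:

```python
# Illustrative calculation of electroacoustic efficiency
# (acoustic output power / electrical input power); values are invented.
def efficiency(acoustic_out_mw, electrical_in_mw):
    """Return efficiency as a fraction between 0 and 1."""
    return acoustic_out_mw / electrical_in_mw

eta = efficiency(0.6, 1.0)  # hypothetical receiver: 0.6 mW out per 1.0 mW in
print(f"{eta:.0%}")         # -> 60%, above the >50% target named in the entry
```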
A power amplifier is an amplifier circuit that provides most of the sound amplification in hearing aids. It follows the preamplifier and filter stages and drives the loudspeaker (receiver). Properties such as linearity, noise figure, and distortion factor determine sound quality and listening comfort. Modern power amplifiers integrate feedback suppression and dynamic compression. Optimized layouts minimize crosstalk and electromagnetic interference.
A word discrimination test assesses how well subjects can distinguish between similar words, for example by listening to minimal pairs ("bat" vs. "pat"). It measures central processing performance and speech comprehension beyond the pure hearing threshold. The results help to identify specific deficits in consonant or vowel differentiation. Test environments vary the signal-to-noise ratio to simulate everyday situations. Discrimination results are incorporated into adaptation strategies for filters and compression in hearing aids.
The speech reception threshold (SRT) is the lowest level at which 50% of a list of predetermined words can be correctly reproduced. It is measured in dB SPL or dB HL and correlates with hearing thresholds from tone audiometry. Deviations between the SRT and pure-tone thresholds indicate speech comprehension problems or cognitive deficits. The SRT is essential for adjusting amplification in speech ranges in hearing aids. Regular SRT checks document the success of the treatment.
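The 50% criterion can be estimated from measured level/score pairs by interpolation; the following is a minimal sketch, and the level/score data are invented for illustration:

```python
# Sketch: estimate the speech reception threshold (SRT) by linearly
# interpolating the presentation level at which the word score crosses 50%.
def estimate_srt(levels_db, scores_pct, target=50.0):
    """Interpolate the level (dB) where the score reaches the target percentage."""
    pairs = list(zip(levels_db, scores_pct))
    for (l0, s0), (l1, s1) in zip(pairs, pairs[1:]):
        if s0 <= target <= s1:  # the 50% crossing lies in this interval
            return l0 + (target - s0) * (l1 - l0) / (s1 - s0)
    raise ValueError("score never crosses the target percentage")

# Invented example data: word scores measured at four presentation levels.
print(estimate_srt([10, 20, 30, 40], [5, 30, 70, 95]))  # -> 25.0 dB
```

Clinical procedures use standardized word lists and adaptive level rules; the interpolation above only illustrates the 50% definition itself.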
Word identification measures the percentage of correctly recognized words in standardized tests at a specified level or signal-to-noise ratio. It reflects functional speech comprehension and central processing ability. The results form the basis for fine-tuning hearing aids and assessing rehabilitation progress. Different word lists (monosyllabic, multisyllabic) test different levels of complexity. Test repetitions in background noise quantify everyday performance.
Word spectral analysis breaks down speech signals into their frequency spectrum and shows formants, harmonics, and noise components. It helps to identify phoneme-relevant frequency bands and tune hearing aid filters accordingly. Research is being conducted into spectral adjustments by hearing aids and their influence on speech comprehension. Software-supported spectral analysis visualizes real-time changes in speech production and perception. The results are incorporated into adaptive signal processing algorithms and speech coding techniques.
X
The X-axis in an audiogram represents the frequency of the test tone, typically from 125 Hz to 8 kHz (up to 16 kHz in high-frequency audiometry). It is logarithmically scaled to clearly show the broad range of human hearing and to visualize tonotopy. Each point on the X-axis corresponds to a test frequency at which the hearing threshold is determined. In combination with the Y-axis (hearing threshold in dB HL), this results in the individual hearing curve. The display allows quick identification of frequency-specific hearing loss patterns such as high-frequency or low-frequency losses.
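The logarithmic scaling mentioned in the entry means the standard test frequencies sit one octave apart, so they are evenly spaced on the X-axis; a one-line sketch:

```python
# Sketch: the standard audiogram frequencies form octave steps (doubling),
# which is why a logarithmic X-axis spaces them evenly.
freqs = [125 * 2**k for k in range(7)]  # 125 Hz up to 8000 Hz
print(freqs)  # -> [125, 250, 500, 1000, 2000, 4000, 8000]
```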
X-linked inheritance describes genetic disorders in which the responsible gene is located on the X chromosome and the frequency and severity vary depending on gender. Men (XY) are more frequently and severely affected because they only have one X chromosome, while women (XX) are usually carriers and show mild or no symptoms. Known X-linked hearing disorders include certain forms of otosclerosis and rare syndromes with hearing loss. Molecular genetic diagnostics use blood or saliva samples to identify mutations on the X chromosome. Genetic counseling is essential to assess family risks and initiate early measures such as newborn screening.
Y
Y-chromosomal mutations are rare genetic changes on the Y chromosome that can lead to isolated or syndromic hearing disorders in men. Since women do not have a Y chromosome, they are not affected by such mutations, whereas men usually show a pronounced phenotype. Mutations often affect genes involved in the development of hair cells or cochlear signal transmission. Targeted sequencing of the Y chromosome is performed for diagnostic purposes when other inheritance patterns have been ruled out. Genetic counseling clarifies carrier status and risk for male offspring.
Y-frequency shift refers to the psychophysical phenomenon whereby very loud sounds are perceived as having a slightly higher pitch. It occurs because cochlear nonlinearities and the activity of the outer hair cells alter the effective tonotopy on the basilar membrane. Measurements of the shift are made using comparison tones and pitch matching procedures. This effect is relevant for the fine tuning of hearing aids, as amplification profiles at high levels can slightly alter the pitch. In research, the study of Y-shift helps to better understand cochlear compression mechanisms.
The Y value is a specific indicator in the audiogram that quantifies the ratio of speech intelligibility at different signal-to-noise ratios. It is often expressed as the percentage difference between recognition rates at +5 dB and +10 dB SNR. A high Y value indicates robust speech intelligibility even in noisy environments, while a low value indicates difficulties under such conditions. Audiologists use the Y value to optimize hearing aid compression and noise reduction. It supplements classic threshold indicators with a functional assessment of the hearing aid situation.
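Taking the entry's definition at face value, the computation is a simple difference of recognition rates; the scores below are invented example data:

```python
# Illustrative computation of the Y value as defined in the entry:
# the percentage-point difference between word recognition rates
# measured at +5 dB and +10 dB SNR. Scores are invented example data.
def y_value(score_plus5_pct, score_plus10_pct):
    return score_plus10_pct - score_plus5_pct

print(y_value(60.0, 85.0))  # -> 25.0 percentage points
```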
Z
Central auditory processing involves the neural mechanisms in the brainstem, thalamus, and auditory cortex that interpret acoustic signals from the cochlea. This is where time and level differences are analyzed, patterns are recognized, and speech is understood. Disorders of this processing manifest themselves in symptoms such as poor speech comprehension in noise, despite normal peripheral function. Diagnostic procedures such as evoked potentials (ABR, MLR, CAEP) and dichotic hearing tests examine central processing pathways. Rehabilitation aims to promote neural plasticity through targeted auditory training and cognitive therapy.
The central loudness control regulates the subjective perception of volume in the brain and adapts it to environmental conditions. It integrates information from both ears and prioritizes relevant signals to ensure comfort and protection. Dysfunctions lead to hyperacusis or insufficient compression in hearing aids. Measurements of the discomfort threshold (UCL) and loudness scaling tests provide information about central loudness adjustments. Modern hearing aid models mimic this control through adaptive compression and automatic level adjustment.
The central auditory memory stores acoustic impressions—words, melodies, and sound patterns—for seconds to minutes to enable speech comprehension and music reproduction. It links auditory stimuli with semantic and emotional memory content in the temporal lobe and hippocampus. Impairments, e.g., due to dementia or traumatic brain injury, lead to difficulties in following longer passages of speech. Tests such as the Auditory Continuous Performance Test measure auditory memory span and memory performance. Auditory training and mnemonic strategies can strengthen central auditory memory.
Central nervous hearing loss is caused by lesions in the auditory cortex or brain stem and manifests itself in poor speech comprehension despite normal hearing thresholds. Causes include stroke, multiple sclerosis, or tumors in the central auditory pathways. Audiologically, OAEs are normal, while evoked potentials are delayed and dichotic listening tests are impaired. Therapy involves rehabilitation of central processing functions through targeted hearing and speech training. Interdisciplinary care with neurologists and audiologists is crucial.
Cervical reflexes are muscle-neuronal reactions in the neck and shoulder area that are triggered by vestibular stimuli, e.g., during head acceleration. They help to stabilize the head-trunk position and are measured in clinical vestibular diagnostics using EMG recordings. Changes in reflex amplitude or latency indicate peripheral or central vestibular disorders. Tests such as the vestibulospinal reflex (VSR) complement caloric and vHIT tests. Rehabilitation trains cervical reflex pathways to restore head stability.
Room volume refers to typical everyday noise levels indoors, usually between 30 and 50 dB A. It includes quiet conversation, typewriter clicks, or background music. Audiologically, room volume is used as a reference point for hearing aid amplification to ensure comfort in living spaces. Standards recommend not overcompensating hearing aid amplification at these levels to avoid feedback. Measurements in the living environment help to define individual fitting parameters.
Zinc-air batteries are small, high-performance batteries that are widely used in hearing aids. They use oxygen from the air as cathode material, which enables high energy density and long running times. Activation occurs by removing a protective film; decreasing voltage indicates consumption. Disadvantages include limited service life after activation and sensitivity to moisture. Modern hearing aids optimize consumption through energy-saving modes and inform the wearer about the remaining battery life.
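A back-of-envelope runtime estimate follows from capacity divided by current draw; both figures below are illustrative assumptions, not manufacturer specifications:

```python
# Rough runtime estimate for a zinc-air cell: capacity / average current.
# The capacity and current values are illustrative assumptions only.
def runtime_hours(capacity_mah, current_ma):
    return capacity_mah / current_ma

# Hypothetical cell with 180 mAh capacity at an average draw of 1.2 mA:
print(f"{runtime_hours(180, 1.2):.0f} h")  # -> 150 h
```

Real runtimes are shorter once streaming, humidity, and the post-activation self-discharge mentioned in the entry are taken into account.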
Pineal gland volume regulation is a hypothetical, scientifically unproven idea that melatonin rhythms in the pineal gland could influence hearing sensitivity. To date, there are no reliable studies proving a direct link between melatonin levels and hearing thresholds. Instead, research focuses on circadian fluctuations in vestibular functions and hormone balance mechanisms. Clinically relevant are daily hearing fluctuations, which are more likely to be due to pressure and fluid changes in the ear. Therefore, the pineal gland currently plays no role in hearing medicine.
Circular hearing loss is a rare finding in which the audiogram shows concentric drops around a mid-frequency, meaning that both sides of a peak are reduced. It indicates band-shaped damage to the basilar membrane or specific hair cell damage. Causes can include ototoxic substances or certain noise patterns. DPOAE mapping and electrocochleography are used for differential diagnosis. Treatment requires targeted filtering and amplification in the affected frequency band.
Sibilants are high-frequency consonants such as /s/, /ʃ/ and /z/, which are formed by turbulent airflow at the teeth. They have strong energy in the 4–8 kHz range and are particularly susceptible to high-frequency loss. In speech audiometry, sibilant recognition is tested in order to optimize high-frequency amplification in the hearing aid. Misperception of sibilants leads to intelligibility problems, especially in German. Fitting software emphasizes sibilant frequencies to improve discrimination.
Vestibular-related tremors are subtle, involuntary oscillations of the eyes (nystagmus) or head caused by malfunctions in the vestibular system. They occur when signals are processed incorrectly in the semicircular canals or central vestibular nuclei. Clinically, tremors are observed during caloric tests or head impulse tests. Their characteristics (direction, frequency) provide information about the location of the lesion. Vestibular rehabilitation aims to suppress pathological oscillations through adaptation and substitution.
Sensitivity to drafts describes the phenomenon whereby sudden air movements in the ear canal trigger cold stimuli and can cause ear pain or increased tinnitus. It is caused by irritation of exposed nerve endings when there is little earwax protection or perforation. Those affected report sharp pains or pressure fluctuations when windows are opened for ventilation or fans are in operation. It is recommended to protect the ear canal from strong drafts with soft earplugs or hearing protection. In severe cases, an ENT doctor will check the integrity of the eardrum and treat any inflammation.
An auxiliary amplifier is an external device that further amplifies the hearing aid signal, such as an FM receiver or Bluetooth streamer. It increases speech levels in difficult situations such as lectures or the theater by feeding the useful signal directly into the hearing aid. Modern auxiliary amplifiers connect wirelessly and synchronize with the hearing aid's automatic volume control. They extend the dynamic range beyond the internal amplifier circuitry. Audiologists configure auxiliary amplifier profiles according to the listening environment and user needs.
Zygomatic tension refers to the activity of the zygomaticus major muscle during smiling and facial expressions, which runs via facial nerves close to the ear canal. Strong muscle contractions can mechanically narrow the ear canal and cause short-term changes in air conduction audiometry. In tone audiometry, attention is paid to relaxing the facial muscles in order to avoid artifacts. Zygomatic tension can play a role in mimic-induced objective tinnitus (snapping sounds). Clinically, facial expressions are controlled in order to rule out unconscious disruptive factors during hearing tests.