HÖRST
glossary
S
Sound is a mechanical wave consisting of pressure and density fluctuations that propagates in elastic media such as air or liquids. The frequency and amplitude of these waves determine the pitch and volume that the human ear perceives via mechanical and neural transduction. Sound is used in audiology to test hearing ability (audiometry) and calibrate hearing aids. Excessive sound pressure levels can cause damage to hair cells and noise-induced hearing loss. Technical applications range from ultrasound diagnostics to room acoustics and noise protection.
Sound absorption is the conversion of sound energy into heat when sound strikes absorbent materials. Absorbers such as mineral wool or acoustic foam reduce reverberation time and reflections in rooms. Absorption is quantified by the frequency-dependent absorption coefficient α (0–1). In listening rooms and sound reinforcement systems, targeted absorption ensures better speech intelligibility. Absorption coefficients are measured in reverberation chambers or estimated via impulse response analysis on site.
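As a worked illustration, the reverberation time of a room can be estimated from its absorption coefficients with Sabine's formula RT60 = 0.161·V/A, where A is the equivalent absorption area. The following minimal Python sketch uses hypothetical room dimensions and illustrative α values:

```python
def sabine_rt60(volume_m3, surfaces):
    """Sabine reverberation time: RT60 = 0.161 * V / A, where A is the
    equivalent absorption area, A = sum(area_i * alpha_i), in m^2."""
    absorption_area = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / absorption_area

# Hypothetical 6 m x 5 m x 3 m room with illustrative alpha values at 1 kHz.
surfaces = [
    (30.0, 0.05),  # floor: parquet
    (30.0, 0.70),  # ceiling: acoustic panels
    (66.0, 0.10),  # walls: plaster
]
print(f"RT60 ≈ {sabine_rt60(90.0, surfaces):.2f} s")  # ≈ 0.50 s
```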
Sound adaptation refers to the adjustment of the auditory system to sustained stimuli, whereby the perception of continuous sound decreases over time. It protects against sensory overload and enables focus on new signals. Adaptation effects manifest themselves in shifted loudness perceptions and altered hearing thresholds for continuous tones. In hearing aid technology, adaptation characteristics are taken into account in compression algorithms in order to maintain natural sound. Disturbances in adaptation can lead to hyperacusis or auditory fatigue.
Sound propagation describes how sound waves propagate in a medium, influenced by velocity, attenuation, and reflection. In air, the speed of sound is approximately 343 m/s at 20 °C. Propagation laws (inverse square law) explain level decay with distance. Room geometry, absorption, and diffusion shape the sound field and influence reverberation and early reflections. Sound propagation models form the basis for sound reinforcement planning, noise protection, and acoustic simulations.
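Both rules of thumb translate directly into arithmetic: a point source loses 20·log₁₀(r₂/r₁) dB between two distances, and sound at 343 m/s covers a distance d in d/343 seconds. A minimal sketch with illustrative numbers:

```python
import math

def level_drop_db(r1_m, r2_m):
    """Inverse square law for a point source: ΔL = 20·log10(r2 / r1)."""
    return 20.0 * math.log10(r2_m / r1_m)

print(f"{level_drop_db(1.0, 2.0):.1f} dB")   # doubling the distance: ≈ 6.0 dB
print(f"{level_drop_db(1.0, 10.0):.1f} dB")  # 1 m -> 10 m:          ≈ 20.0 dB

# Travel time in air at 20 °C (c ≈ 343 m/s): a 686 m round trip echoes after 2 s.
print(f"{686.0 / 343.0:.1f} s")
```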
Sound pressure is the local pressure change relative to static atmospheric pressure, measured in pascals (Pa). It is the physical quantity that causes the eardrum to vibrate, enabling hearing. The audible range extends from approximately 20 µPa (0 dB SPL) to around 20 Pa (120 dB SPL). Sound pressure measurements are central to audiometry, room acoustics, and noise measurement. Microphones and artificial ears are calibrated to defined sound pressures for precise hearing measurements and standards compliance.
The sound pressure level (SPL) is the logarithmic representation of sound pressure in decibels: SPL = 20·log₁₀(p/p₀), with reference p₀ = 20 µPa. It forms the basis for the A- and C-weighted levels (dB(A), dB(C)) used in environmental and occupational safety. SPL meters show real-time levels and time curves to document noise exposure. In hearing aid fitting, amplification is adjusted to the expected SPL in everyday situations. Levels above 85 dB(A) are considered harmful to health during prolonged exposure.
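The defining formula is easy to evaluate; the following sketch converts a few illustrative pressures to dB SPL (1 Pa lands at roughly 94 dB SPL, the usual calibrator level):

```python
import math

P0 = 20e-6  # reference sound pressure in Pa (20 µPa)

def spl_db(pressure_pa):
    """Sound pressure level: SPL = 20·log10(p / p0)."""
    return 20.0 * math.log10(pressure_pa / P0)

print(f"{spl_db(20e-6):.0f} dB SPL")  # hearing threshold: 0 dB SPL
print(f"{spl_db(1.0):.0f} dB SPL")    # ≈ 94 dB SPL (typical calibrator tone)
print(f"{spl_db(20.0):.0f} dB SPL")   # ≈ 120 dB SPL, near the pain threshold
```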
Sound conduction is the mechanical transmission of sound energy through the middle ear, i.e., the eardrum and ossicular chain. It converts airborne sound into fluid movements in the cochlea and overcomes the impedance difference between air and inner-ear fluid. This impedance matching provides a gain of roughly 30 dB. Disorders such as perforation or otosclerosis reduce conduction and cause conductive hearing loss. Conduction properties are examined using tympanometry and bone conduction audiometry.
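The ~30 dB figure can be reproduced from common textbook approximations: an effective eardrum-to-oval-window area ratio of about 17:1 combined with an ossicular lever ratio of about 1.3:1. A short sketch (both ratios are typical approximations, not exact anatomical constants):

```python
import math

area_ratio = 17.0   # eardrum area / oval window area (textbook approximation)
lever_ratio = 1.3   # ossicular lever ratio (textbook approximation)

# Pressure gain in dB from the combined transformer action of the middle ear.
gain_db = 20.0 * math.log10(area_ratio * lever_ratio)
print(f"middle-ear gain ≈ {gain_db:.1f} dB")  # ≈ 26.9 dB, i.e., roughly 30 dB
```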
Acoustic emissions are sound waves generated by an object or organ itself, e.g., otoacoustic emissions from the cochlea. They serve as non-invasive diagnostic signals for hair cell function and system integrity. In the quality control of acoustic devices, unwanted emissions are checked as an indicator of mechanical faults. Emission spectra help to detect resonances and leaks in housings. Measurement methods require high sensitivity and a soundproof environment.
A sound field is the spatial distribution of sound pressure and particle motion in a room. A distinction is made between free field, diffuse field, and near field, depending on the reflection and distance characteristics. Sound fields are analyzed in test chambers and lecture halls to optimize acoustic parameters such as SPL distribution and reverberation time. In audiometry, sound fields are measured to ensure standardized test conditions. Simulation tools calculate sound fields for sound system design and noise protection.
Sound frequency is the number of vibration cycles per second, measured in hertz (Hz). It determines the pitch that the human ear perceives between approximately 20 Hz and 20 kHz. Frequency analysis is central to audiometry, OAE and EEG measurements, and hearing aid design. The cochlea and auditory cortex are organized tonotopically, with each frequency having a specific location for processing. The frequency response of devices and rooms is measured to ensure sound neutrality or targeted filtering.
A sound indicator is a metric or graphic that summarizes sound exposure or acoustic parameters such as SPL, noise exposure, or reverberation time. Examples include the day noise level Lday and the Speech Transmission Index (STI). Indicators serve as a basis for decisions on noise protection measures and room acoustics optimization. In sound reinforcement systems, real-time indicators provide an overview of critical frequencies and levels. Standards define threshold values for various indicators to safeguard health and intelligibility.
Sound intensity is the sound energy transported per unit area and is measured in watts per square meter (W/m²). It objectively describes how much acoustic power hits a surface and correlates with the perceived loudness. In noise measurement, intensity is used to calculate levels and exposure values according to standards such as ISO 9612. Clinically, it helps to determine exposure limits for hearing protection. Weaker intensities require higher amplification by hearing systems, while strong intensities can trigger reflex protection.
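Analogous to SPL, intensity is expressed as a level relative to the reference I₀ = 10⁻¹² W/m²: L_I = 10·log₁₀(I/I₀). A minimal sketch with illustrative values:

```python
import math

I0 = 1e-12  # reference sound intensity in W/m²

def intensity_level_db(intensity_w_m2):
    """Sound intensity level: L_I = 10·log10(I / I0)."""
    return 10.0 * math.log10(intensity_w_m2 / I0)

print(f"{intensity_level_db(1e-12):.0f} dB")  # threshold of hearing: 0 dB
print(f"{intensity_level_db(1e-3):.0f} dB")   # loud machinery:      90 dB
print(f"{intensity_level_db(1.0):.0f} dB")    # harmful range:      120 dB
```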
Sound conduction refers to the air- or bone-conducted path through which sound reaches the inner ear. In air conduction, sound is transmitted through the ear canal, eardrum, and ossicular chain; in bone conduction, it is transmitted directly to the cochlea through skull vibrations. Comparing air and bone conduction thresholds in the audiogram allows differentiation between conductive and sensorineural hearing loss. Conductive disorders, such as eardrum perforations, typically elevate the air conduction thresholds while bone conduction remains normal. The efficiency of both pathways forms the basis for hearing solutions such as bone conduction hearing systems.
Conductive hearing loss occurs when the mechanical transmission of sound through the outer or middle ear to the inner ear is impaired. Causes include cerumen impaction, tympanic membrane perforation, otosclerosis, or middle ear infections. On an audiogram, it shows up as an air-bone gap: normal bone conduction thresholds with elevated air conduction thresholds. Treatment options include surgical reconstruction (myringoplasty), removal of obstructions, or bone conduction hearing aids. The prognosis is usually good, as the sensory function in the inner ear remains intact.
Sound localization is the ability to determine the direction of a sound source in space. The brain uses interaural time and level differences (ITD, ILD) as well as spectral filter effects through the outer ears. Precise directional hearing increases safety in everyday life and supports communication in noisy environments. Hearing aids with binaural networking preserve these cues by processing signals from both ears synchronously. Tests in an anechoic chamber quantify localization accuracy and help to identify central processing disorders.
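The size of the ITD cue can be estimated with the classic Woodworth spherical-head model, ITD = (a/c)·(θ + sin θ); the head radius below is a common approximation, not a measured value:

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, c_m_s=343.0):
    """Woodworth spherical-head model: ITD = (a / c) * (theta + sin(theta)),
    with azimuth theta measured from straight ahead."""
    theta = math.radians(azimuth_deg)
    return head_radius_m / c_m_s * (theta + math.sin(theta))

print(f"{itd_seconds(90) * 1e6:.0f} µs")  # fully lateral source: ≈ 656 µs
print(f"{itd_seconds(30) * 1e6:.0f} µs")  # 30° off-center:       ≈ 261 µs
```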
Sound masking describes the effect whereby a loud sound prevents the perception of a simultaneous, quieter sound of the same or a similar frequency. Psychoacoustically, it is used to prevent cross-hearing in audiometry and, in hearing aids, to provide deliberate maskers for tinnitus. The masking level difference phenomenon shows how binaural processing reduces masking. Compression algorithms take masking into account to keep speech signals optimally audible in the presence of background noise. However, incorrectly set masking can unintentionally cover up parts of speech.
The sound level is the logarithmic representation of sound pressure in decibels (dB SPL) and describes the perceived loudness. It is calculated as 20·log₁₀(p/p₀) with reference p₀ = 20 µPa. In noise protection practice, frequency-weighted levels (dB(A), dB(C)) are used for different assessment purposes. Level meters with integration modes record time histories (Leq, Lmax, Lmin). When fitting hearing aids, audiologists adjust the amplification to typical sound levels in everyday life.
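The equivalent continuous level Leq is the level of the mean square pressure over the measurement interval, Leq = 10·log₁₀(mean(p²)/p₀²). A sketch with hypothetical samples showing how a single loud event dominates the result:

```python
import math

P0 = 20e-6  # reference sound pressure in Pa

def leq_db(pressure_samples_pa):
    """Equivalent continuous level: Leq = 10·log10(mean(p²) / p0²)."""
    mean_square = sum(p * p for p in pressure_samples_pa) / len(pressure_samples_pa)
    return 10.0 * math.log10(mean_square / P0 ** 2)

# 99 quiet samples at 60 dB SPL (0.02 Pa) plus one event at 100 dB SPL (2 Pa):
samples = [0.02] * 99 + [2.0]
print(f"Leq ≈ {leq_db(samples):.1f} dB SPL")  # ≈ 80 dB, dominated by the event
```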
Sound reflection occurs when sound waves are reflected back at an interface (e.g., wall, floor). Reflections determine the spatial sound image and influence reverberation time and early reflections. In room acoustics, absorbers, diffusers, and resonators are used to control reflection patterns and optimize speech intelligibility. Excessive reflections lead to echoes and sound blurring, while too few make the room seem dead. Measurements of the impulse response allow the visualization of reflection times and intensities.
Sound insulation encompasses measures to reduce harmful or disruptive noise in the environment, at work, and at home. Technical solutions range from noise barriers and absorbers to soundproof windows and in-ear hearing protection. Standards for compliance with sound insulation classes (see below) apply in public buildings. Personal protection such as earplugs prevents noise damage in the workplace and during leisure activities. The planning and simulation of sound insulation measures use sound propagation models for effective implementation.
Sound insulation classes (e.g., DIN 4109 classes) classify components such as walls, windows, or doors into grades according to their weighted sound reduction index (Rw). Each class defines minimum sound insulation requirements in order to comply with legal requirements for living and working spaces. Higher classes (e.g., 4–5) are mandatory in noisy areas to ensure quiet and communication conditions. Sound insulation classes help architects and acousticians with material selection and construction. Laboratory measurements and construction site tests verify compliance with the specified values.
Noise protection regulations are legal frameworks at state or federal level that specify permissible noise levels for residential, commercial, and industrial areas. They define nighttime and daytime limits (e.g., Lden, Lnight) and require local authorities to implement noise action plans. Violations can result in fines, and affected citizens are entitled to noise protection measures. Manufacturers and planners of infrastructure facilities must carry out environmental impact assessments with noise evaluations. Such regulations safeguard quality of life and housing in the long term.
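The Lden indicator mentioned above combines day, evening, and night levels (12 h/4 h/8 h) with +5 dB and +10 dB penalties for evening and night, following the EU Environmental Noise Directive definition. A sketch with illustrative input levels:

```python
import math

def lden_db(l_day, l_evening, l_night):
    """Day-evening-night level:
    Lden = 10·log10((12·10^(Lday/10) + 4·10^((Levening+5)/10)
                     + 8·10^((Lnight+10)/10)) / 24)"""
    return 10.0 * math.log10(
        (12 * 10 ** (l_day / 10)
         + 4 * 10 ** ((l_evening + 5) / 10)
         + 8 * 10 ** ((l_night + 10) / 10)) / 24
    )

# With 60/55/50 dB the penalties make all three periods weigh equally:
print(f"Lden ≈ {lden_db(60, 55, 50):.1f} dB")  # = 60.0 dB
```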
The term sound temperature refers to the equivalent temperature at which the average kinetic energy of acoustic particle movements corresponds to that of a thermal noise signal. In thermoacoustics, it is used to describe the inherent noise of electronic circuits. Lower effective sound temperatures are desirable for sensitive measurement microphones and microphone preamplifiers. It influences the signal-to-noise ratio in OAE and AEP measurements. Technical noise reduction and shielding lower the effective sound temperature.
Sound transmission describes the passage of sound through walls, ceilings, or other building structures. It is quantified by the transmission loss (TL) in dB, which indicates how much the level on the receiving side is reduced. Material thickness, density, and stiffness determine the transmission properties. In building acoustics, suspended ceilings and soundproof walls are designed to minimize the transmission of noise between rooms. Laboratory measurements between two reverberation rooms and on-site building acoustics tests verify the design results.
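On site, the apparent sound reduction index is obtained from the level difference between two rooms, corrected for the partition area S and the receiving room's equivalent absorption area A: R' = L1 − L2 + 10·log₁₀(S/A). A sketch with hypothetical measurement values:

```python
import math

def sound_reduction_index_db(l1_db, l2_db, partition_area_m2, absorption_area_m2):
    """Apparent sound reduction index: R' = L1 - L2 + 10·log10(S / A)."""
    return l1_db - l2_db + 10.0 * math.log10(partition_area_m2 / absorption_area_m2)

# Hypothetical field test: 95 dB in the source room, 50 dB in the receiving
# room, a 10 m² partition, and 8 m² equivalent absorption area.
print(f"R' ≈ {sound_reduction_index_db(95, 50, 10.0, 8.0):.1f} dB")  # ≈ 46.0 dB
```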
A first-order sound wave is a spherical wave that propagates undisturbed in all directions from a point source. Its sound pressure level follows the inverse square law (6 dB level drop per doubling of distance). This ideal type is assumed in free-field measurements when reflections are negligible. In practice, it is approximated only under free-field conditions, as in anechoic chambers. It forms the basis for calibrations of sound sources and level meters.
Sound waves are longitudinal mechanical waves in which particles are excited to vibrate along the direction of propagation. They consist of compressive and rarefactive zones, whose periodicity defines the frequency. They are characterized by parameters such as wavelength, frequency, amplitude, and phase. In audiology, sound waves are used both as test stimuli (tones, noise) and for diagnosis (impulse response, OAE). Technical applications range from ultrasound imaging to acoustic sensor systems.
Acoustic impedance is the product of density and sound velocity in a medium and describes how much it impedes sound transmission. It determines which part of a sound wave is reflected or transmitted at an interface. Impedance differences between air and ear fluid are overcome in the middle ear by means of the ossicular chain. Deviations in acoustic impedance, e.g., due to fluid in the middle ear, alter the tympanogram curve. In hearing technology, impedance matching is used to optimally couple loudspeakers and microphones.
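The numbers behind the middle-ear problem are quickly checked: with Z = ρ·c, the power reflection coefficient at a plane interface is R = ((Z₂ − Z₁)/(Z₂ + Z₁))². The sketch below uses water as a stand-in for inner-ear fluid:

```python
def characteristic_impedance(density_kg_m3, speed_m_s):
    """Characteristic acoustic impedance Z = rho * c, in Pa·s/m (rayl)."""
    return density_kg_m3 * speed_m_s

z_air = characteristic_impedance(1.2, 343.0)        # ≈ 412 rayl
z_water = characteristic_impedance(1000.0, 1480.0)  # ≈ 1.48e6 rayl

# Fraction of incident sound power reflected at the air/water interface:
r = ((z_water - z_air) / (z_water + z_air)) ** 2
print(f"reflected ≈ {r:.4f}")  # ≈ 0.9989, i.e., ~30 dB lost without matching
```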
Objective audiometry is a measurement method in which acoustic or electrical stimuli are applied to the ear and the resulting evoked responses (OAE, AEP) are recorded. It enables the estimation of hearing thresholds without the active cooperation of the patient. In newborn hearing screening, it is used as an automated ABR procedure. The analysis of waveform and latency allows conclusions to be drawn about peripheral and central auditory pathway function. It complements tone and speech audiometric diagnostics, especially in patients unable to cooperate.
Narrowband noise is noise whose spectrum is limited to a narrow frequency band, typically used to mask or test specific frequency ranges. In audiometry, it serves as a masker for determining air and bone conduction thresholds when there is a risk of cross-hearing. In psychoacoustics, narrowband noise is used to investigate masking effects and critical bandwidths. In hearing aids, adaptive narrowband filters can suppress noise in defined bands. Narrowband noise helps to test frequency selectivity and channel separation.
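As an illustration of how such a stimulus can be produced digitally, the sketch below band-limits white noise to a one-third-octave band around 1 kHz (sampling rate, filter order, and band edges are illustrative choices; assumes NumPy and SciPy):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44100                                     # sampling rate in Hz
fc = 1000.0                                    # band center frequency in Hz
fl, fu = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)  # third-octave band edges

# 4th-order Butterworth bandpass applied to one second of white noise.
sos = butter(4, [fl, fu], btype="bandpass", fs=fs, output="sos")
noise = sosfiltfilt(sos, np.random.default_rng(0).standard_normal(fs))
print(f"band: {fl:.0f}-{fu:.0f} Hz, RMS = {noise.std():.3f}")
```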
The cochlea is the spiral-shaped inner ear organ in which sound is transduced into nerve impulses. Hair cells are located on the basilar membrane, which encode sounds of different frequencies depending on the location of the deflection (tonotopy). The fluid movements in the scala vestibuli and tympani activate the hair cells and generate electrical signals. These signals travel via the auditory nerve to the cortex, where they are perceived as sounds and speech. Diseases of the cochlea lead to sensorineural hearing loss and are an indication for cochlear implants.
The shoulder-head reflex is a vestibulospinal reflex in which head movements involuntarily trigger a counter-movement of the shoulder muscles in order to maintain balance and stability. It is initiated by vestibular receptors in the semicircular canals and otolith organs. Disorders of this reflex manifest themselves in unsteady gait and postural instability. Clinically, it is tested as part of the neurological examination of patients with vertigo. Vestibular training can rehabilitate the reflex in cases of lesions.
Hearing loss refers to a reduction in hearing that impairs everyday life and communication. It is classified as mild, moderate, severe, or profound, based on the shift in the hearing threshold on the audiogram. There are many causes: conductive, sensorineural, or combined forms. Treatment includes medical, surgical, and technical measures such as hearing aids or implants. Early detection and continuous care improve speech development and quality of life.
Sensorineural hearing loss is caused by damage to hair cells, the auditory nerve, or central auditory pathways. It manifests in the audiogram as equally elevated air and bone conduction thresholds without an air-bone gap. Causes include noise trauma, age, ototoxins, or genetic defects. Technical treatment involves hearing aids or cochlear implants, while rehabilitative measures include auditory training. Sensorineural loss is usually permanent, as hair cells do not regenerate in humans.
Speech audiometry tests speech comprehension by presenting words or sentences at a defined sound pressure level or signal-to-noise ratio. Results are given as a percentage of correctly understood words or as the speech reception threshold (SRT). They supplement tone audiograms with functional aspects of everyday hearing. Test environments can be free field or headphones; masking ensures ear separation. Speech audiometry is crucial for fine-tuning hearing aids and proving their effectiveness.
Speech comprehension is the ability to recognize spoken language and process it semantically. It depends on peripheral hearing function, central processing, and cognitive abilities. Disorders can occur despite normal hearing thresholds, e.g., in cases of central auditory processing disorders. Measurement is performed using standardized tests (e.g., the Freiburg Monosyllable Test) in quiet and noisy environments. Hearing aids and implants aim to maximize speech comprehension in real-life situations.
The stapedius reflex is the contraction of the stapedius muscle in response to loud stimuli, which stiffens the ossicular chain and protects the inner ear. It can be measured in reflex audiometry via impedance changes. The reflex threshold and latency provide information about middle ear function and brain stem integrity. A missing or asymmetrical reflex indicates otosclerosis, nerve lesion, or central disorder. The reflex contributes to the attenuation of impulsive sound peaks.
The stapes is the smallest bone in the human body and the third link in the ossicular chain. It transmits vibrations from the incus to the oval window of the cochlea. The lever action of the ossicular chain amplifies the transmitted force by a factor of approximately 1.3. In otosclerosis, the stapes footplate often ossifies at its attachment, causing conductive hearing loss. In stapedotomy surgery, part of the stapes is removed and replaced with a prosthesis to restore sound transmission.
Silence refers to the absence of perceptible sound sources and is used in audiometry as a test condition for threshold determination. A true quiet room achieves background levels below 20 dB SPL and minimizes ambient noise. Silence is necessary for objective measurements such as OAE and AEP detection. Psychoacoustically, absolute silence leads to increased perception of internal noises such as tinnitus. In tinnitus therapy, controlled silence is used as a contrast stimulus to promote habituation.
Noise is any unwanted sound that interferes with the understanding of useful signals such as speech. Its characteristics include level, frequency spectrum, and temporal structure. Noise reduction algorithms and directional microphones are used in hearing aids to reduce noise. Masking studies investigate how noise affects speech comprehension. Optimal signal-to-noise ratios are crucial for hearing comfort and communication ability.
Subjective tinnitus is an auditory perception without an external sound source that only the affected person can hear. It is caused by spontaneous neural activity in the cochlea or central auditory pathways. Common accompanying symptoms include sleep disorders, concentration problems, and psychological stress. Treatment includes sound enrichment, cognitive behavioral therapy, and tinnitus retraining. Objective measurements are not possible; progression is documented using questionnaires and loudness matching.