HÖRST
Glossary
S
Sound is a mechanical wave of pressure and density fluctuations that propagates in elastic media such as air or liquid. The frequency and amplitude of these waves determine pitch and volume, which the human ear perceives via mechanical and neuronal transduction. Sound is used in audiology to test hearing ability (audiometry) and to calibrate hearing aids. Excessive sound pressure levels can cause hair cell damage and noise-induced hearing loss. Technical applications range from ultrasound diagnostics to room acoustics and noise protection.
Sound absorption is the conversion of sound energy into heat when it hits absorbent materials. Absorbers such as mineral wool or acoustic foam reduce reverberation time and reflections in rooms. The degree of absorption is measured using the absorption coefficient α (0–1) per frequency. In listening rooms and sound reinforcement systems, targeted absorption ensures better speech intelligibility. Measurements are carried out in reverberation chambers or via impulse response analysis on site.
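As a rough illustration of how absorption coefficients and reverberation time relate, the following sketch uses Sabine's classic formula RT60 = 0.161 · V / A with A = Σ αᵢ·Sᵢ; the room dimensions and α values are purely illustrative, not taken from this glossary:

```python
def reverberation_time(volume_m3, surfaces):
    """Sabine's formula: RT60 = 0.161 * V / A, with A = sum(S_i * alpha_i).

    surfaces: list of (area in m^2, absorption coefficient alpha) pairs.
    """
    a_total = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / a_total

# Illustrative 100 m^3 room (all values are assumptions for the example):
surfaces = [
    (40.0, 0.05),   # hard floor: 40 m^2, alpha = 0.05
    (40.0, 0.60),   # absorbent acoustic ceiling: alpha = 0.60
    (120.0, 0.10),  # walls
]
rt = reverberation_time(100.0, surfaces)  # ~0.42 s
```

Adding absorption (raising the α values) shortens the reverberation time, which is exactly the effect exploited in listening rooms.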
Sound adaptation is the adjustment of the auditory system to continuous stimuli, whereby the perception of a continuous sound decreases over time. It protects against stimulus overload and enables the auditory system to focus on new signals. Adaptation effects show up as shifted loudness perception and altered hearing thresholds for continuous sounds. In hearing aid technology, adaptation properties are taken into account in compression algorithms in order to maintain a natural sound. Disturbances of adaptation can lead to hyperacusis or auditory fatigue.
Sound propagation describes how sound waves spread in a medium, influenced by speed, attenuation and reflection. In air, the speed of sound is around 343 m/s at 20 °C. Propagation laws (inverse square law) explain the drop in level with distance. Room geometry, absorption and diffusion shape the sound field and influence reverberation and initial reflections. Models of sound propagation are the basis for sound planning, noise protection and acoustic simulations.
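The inverse square law mentioned above can be expressed as a level difference between two distances from a point source in the free field; a minimal sketch:

```python
import math

def level_drop_db(r1, r2):
    """Free-field point source: level change in dB when moving
    from distance r1 to distance r2 (inverse square law)."""
    return 20 * math.log10(r2 / r1)

# Doubling the distance costs about 6 dB:
drop = level_drop_db(1.0, 2.0)  # ~6.02 dB
```

In real rooms the drop is smaller, because reflections add a diffuse field on top of the direct sound.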
Sound pressure is the local change in pressure relative to the static atmospheric pressure, measured in pascals (Pa). It is the physical quantity that causes the eardrum to vibrate and thus makes hearing possible. The audible range extends from around 20 µPa (0 dB SPL) to around 20 Pa (120 dB SPL). Sound pressure measurements are central to audiometry, room acoustics and noise measurement. Microphones and artificial ears are calibrated in terms of sound pressure to enable precise hearing measurements and compliance with standards.
The sound pressure level (SPL) is the logarithmic representation of the sound pressure in decibels: L = 20·log₁₀(p/p₀), with reference p₀ = 20 µPa. It forms the basis for dB(A) and dB(C) assessments in environmental protection and occupational safety. SPL meters show real-time levels and time histories to document noise exposure. In hearing aid fitting, amplification is adjusted to the expected SPL in everyday situations. Levels above 85 dB(A) are considered harmful for prolonged exposure.
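The SPL formula can be sketched directly in code, using the reference pressure p₀ = 20 µPa from the definition above:

```python
import math

P0 = 20e-6  # reference sound pressure: 20 micropascals

def spl_db(p_pa):
    """Sound pressure level: L = 20 * log10(p / p0), p0 = 20 uPa."""
    return 20 * math.log10(p_pa / P0)

threshold = spl_db(20e-6)   # 0 dB SPL (hearing threshold)
calibrator = spl_db(1.0)    # ~94 dB SPL (1 Pa, a common calibrator level)
```

The logarithmic scale compresses the enormous pressure range of hearing into a handy number range.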
Sound conduction is the mechanical transmission of sound energy through the middle ear, i.e. the eardrum and ossicular chain. It converts airborne sound into fluid movements in the cochlea and overcomes the impedance difference between air and inner ear fluid. This impedance matching provides a gain of around 30 dB. Disorders such as eardrum perforation or otosclerosis reduce conduction and cause conductive hearing loss. Conduction properties are examined using tympanometry and bone conduction audiometry.
Acoustic emissions are sound waves generated by an object or organ itself, e.g. otoacoustic emissions from the cochlea. They serve as non-invasive diagnostic signals for hair cell function and system integrity. In the quality control of acoustic devices, unwanted emissions are tested as an indicator of mechanical faults. Emission spectra help to detect resonances and leaks in housings. Measurement methods require high sensitivity and a soundproof environment.
A sound field is the spatial distribution of sound pressure and particle movement in a room. A distinction is made between free field, diffuse field and near field, depending on the reflection and distance characteristics. Sound fields are analyzed in test chambers and listening rooms in order to optimize acoustic parameters such as SPL distribution and reverberation time. In audiometry, sound fields are measured to ensure standardized test conditions. Simulation tools calculate sound fields for sound system design and noise protection.
Sound frequency is the number of oscillation cycles per second, measured in hertz (Hz). It determines the pitch, which the human ear perceives between approximately 20 Hz and 20 kHz. Frequency analysis is central to audiometry, OAE and EEG measurements and hearing aid design. The cochlea and auditory cortex are organized tonotopically, with each frequency having a specific processing location. The frequency response of devices and rooms is measured to ensure tonal neutrality or targeted filtering.
A sound indicator is a key figure or graphic that summarizes sound exposure or acoustic parameters such as SPL, noise exposure or reverberation time. Examples are the daytime noise level Lday or the Speech Transmission Index (STI). Indicators serve as a basis for decisions on noise protection measures and room acoustics optimization. In sound reinforcement systems, real-time indicators give an overview of critical frequencies and levels. Standards define threshold values for various indicators to ensure health and intelligibility.
Sound intensity is the sound energy transported per unit area and is measured in watts per square meter (W/m²). It objectively describes how much acoustic power hits a surface and correlates with the perceived loudness. In noise measurement, the intensity is used to calculate levels and exposure values according to standards such as ISO 9612. Clinically, it helps to determine exposure limits for hearing protection. Weaker intensities require higher amplification by hearing aids, while strong intensities can trigger reflex protection.
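Assuming plane-wave conditions, intensity follows from the RMS sound pressure via I = p²/(ρc); the characteristic impedance of air used below (~413 Pa·s/m at 20 °C) is an assumed textbook value:

```python
import math

RHO_C = 413.0  # characteristic impedance of air at 20 C, Pa*s/m (assumed value)
I0 = 1e-12     # reference intensity, W/m^2

def intensity(p_rms_pa):
    """Plane-wave sound intensity from RMS pressure: I = p^2 / (rho * c)."""
    return p_rms_pa ** 2 / RHO_C

def intensity_level_db(i_w_m2):
    """Intensity level: L_I = 10 * log10(I / I0), I0 = 1e-12 W/m^2."""
    return 10 * math.log10(i_w_m2 / I0)

i = intensity(1.0)           # 1 Pa RMS -> ~2.4e-3 W/m^2
li = intensity_level_db(i)   # ~93.8 dB
```

For a plane wave in air, intensity level and sound pressure level come out nearly identical, which is why the two are often used interchangeably in practice.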
Sound conduction refers to the air- or bone-conducted pathway through which sound reaches the inner ear. In air conduction, sound is transmitted through the auditory canal, eardrum and ossicular chain, whereas in bone conduction, sound is transmitted directly to the cochlea through skull vibrations. Comparison of air and bone conduction thresholds in the audiogram makes it possible to differentiate between conductive and sensorineural hearing loss. Disturbances in sound conduction - e.g. eardrum perforations - lead to a typical lowering of the air conduction curve. The efficiency of both paths is the basis for fitting solutions, such as bone conduction hearing systems.
Conductive hearing loss occurs when the transmission of sound in air or bone conduction to the inner ear is impaired. Causes include cerumen plugs, eardrum perforation, otosclerosis or middle ear infections. The audiogram shows a spread between normal bone conduction thresholds and increased air conduction thresholds. Treatment options include surgical reconstruction (myringoplasty), removal of obstructions or bone conduction hearing systems. Prognosis is usually good as the sensory system in the inner ear is preserved.
Sound localization is the ability to determine the direction of a sound source in space. The brain uses interaural time and level differences (ITD, ILD) as well as spectral filter effects through the auricles. Precise directional hearing increases safety in everyday life and supports communication in noisy environments. Hearing aids with binaural networking preserve these cues by processing signals from both ears synchronously. Tests in an anechoic chamber quantify localization fidelity and help to detect central processing disorders.
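The interaural time difference can be approximated with the classic Woodworth spherical-head model; the head radius used below is an assumed typical value, not a measured one:

```python
import math

HEAD_RADIUS = 0.0875  # m, typical effective head radius (assumption)
C = 343.0             # speed of sound in air at 20 C, m/s

def itd_seconds(azimuth_deg):
    """Woodworth spherical-head model:
    ITD = (a / c) * (theta + sin(theta)), theta = azimuth in radians."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / C) * (theta + math.sin(theta))

side = itd_seconds(90)   # source fully to the side: ~0.66 ms
front = itd_seconds(0)   # source straight ahead: 0
```

Even the maximum ITD is under a millisecond, which illustrates how finely the auditory system must resolve timing to localize sound.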
Sound masking describes the effect that a loud sound prevents the perception of a simultaneous, quieter sound of the same or a neighboring frequency. It is used psychoacoustically to avoid cross-hearing in audiometry and, in hearing aids, to provide deliberate maskers for tinnitus. The masking level difference phenomenon shows how binaural processing reduces masking. In compression algorithms, masking is taken into account to keep speech signals optimally audible in background noise. However, incorrectly set masking can unintentionally cover up speech components.
The sound level is the logarithmic representation of the sound pressure in decibels (dB SPL) and correlates with perceived loudness. It is calculated as 20·log₁₀(p/p₀) with reference p₀ = 20 µPa. In noise protection practice, frequency-weighted levels (dB(A), dB(C)) are used. Level meters with integration modes record time curves (Leq, Lmax, Lmin). When fitting hearing systems, audiologists adjust the amplification to typical everyday sound levels.
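The equivalent continuous level Leq mentioned above averages sound energy rather than dB values, so short loud events dominate the result; a minimal sketch with made-up level samples:

```python
import math

def leq_db(levels_db):
    """Energy-equivalent continuous level:
    Leq = 10 * log10(mean of 10^(L/10)) over equal-length intervals."""
    mean_energy = sum(10 ** (l / 10) for l in levels_db) / len(levels_db)
    return 10 * math.log10(mean_energy)

# One loud minute dominates an hour of quiet (illustrative values):
leq = leq_db([50] * 59 + [100])  # ~82 dB, far above the arithmetic mean
```

This energy averaging is why brief impulse noise contributes so strongly to documented noise exposure.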
Sound reflection occurs when sound waves are thrown back at a boundary surface (e.g. wall, floor). Reflections shape the acoustic impression of a room and determine reverberation time and early reflections. In room acoustics, absorbers, diffusers and resonators are used to control reflection patterns and optimize speech intelligibility. Too many reflections lead to echoes and a smeared sound; too few make the room seem dead. Measurements of the impulse response allow reflection times and strengths to be visualized.
Noise protection includes measures to reduce harmful or disruptive noise in the environment, at work and at home. Technical solutions range from noise barriers and absorbers to soundproof windows and in-ear hearing protection. In public buildings, standards apply for compliance with sound insulation classes (see below). Personal protection such as earplugs prevents noise damage in the workplace and during leisure activities. Planning and simulation of sound insulation measures use sound propagation models for effective implementation.
Sound insulation classes (e.g. DIN 4109 classes) classify components such as walls, windows or doors into levels according to their sound reduction index (Rw). Each class defines minimum requirements for sound insulation in order to meet legal requirements for living and working spaces. Higher classes (e.g. 4-5) are prescribed in noisy areas to ensure quiet and communication conditions. Sound insulation classes help architects and acousticians when selecting materials and constructions. Laboratory measurements and construction site tests verify compliance with the specified values.
Noise protection ordinances are legal regulations at state or federal level that define permissible noise levels for residential, commercial and industrial areas. They set day-time and night-time limit values (e.g. Lden, Lnight) and oblige local authorities to carry out noise action planning. Violations can result in fines, and affected citizens are entitled to noise protection measures. Manufacturers and planners of infrastructure facilities must carry out environmental impact assessments including noise evaluation. The ordinances safeguard residential quality and quality of life in the long term.
The term sound temperature refers to the equivalent temperature at which the average kinetic energy of acoustic particle movements corresponds to that of a thermal noise signal. In thermoacoustics, it is used to describe the inherent noise of electronic circuits. Lower effective sound temperatures are desirable for sensitive measurement microphones and microphone preamplifiers. It influences the signal-to-noise ratio in OAE and AEP measurements. Technical noise suppression and shielding reduce the effective sound temperature.
Sound transmission describes the passage of sound through walls, ceilings or other building structures. It is quantified by the transmission loss (TL) in dB, which indicates how much the level is reduced on the receiving side. Material thickness, density and stiffness determine the transmission properties. In building acoustics, suspended ceilings and soundproof walls are planned to minimize the transmission of noise between rooms. Laboratory measurements between a source room and a receiving room, as well as on-site tests, verify construction results.
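Transmission loss relates transmitted to incident sound power; a minimal sketch of the defining relation TL = 10·log₁₀(1/τ):

```python
import math

def transmission_loss_db(tau):
    """Transmission loss: TL = 10 * log10(1 / tau),
    where tau is the ratio of transmitted to incident sound power."""
    return 10 * math.log10(1.0 / tau)

tl = transmission_loss_db(0.001)  # only 1/1000 of the power gets through -> 30 dB
```

A wall with 30 dB transmission loss thus blocks 99.9 % of the incident sound power, yet still lets clearly audible sound through, which is why higher sound insulation classes are demanded in noisy areas.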
A first-order sound wave is a spherical wave that propagates undisturbed in all directions from a point source. Its sound pressure drop follows the inverse square law (6 dB level drop per doubling of distance). This ideal type is assumed in free-field measurements if reflections are negligible. In practice, first order is only achieved in the near field and in anechoic chambers. It forms the basis for calibrations of sound sources and level meters.
Sound waves are longitudinal mechanical waves in which particles are excited to oscillate along the direction of propagation. They are composed of compressive and rarefactive zones whose periodicity defines the frequency. They are characterized by parameters such as wavelength, frequency, amplitude and phase. In audiology, sound waves are used both as test stimuli (tones, noise) and for diagnosis (impulse response, OAE). Technical applications range from ultrasound imaging to acoustic sensor systems.
Acoustic impedance is the product of the density and speed of sound of a medium and describes the extent to which it impedes sound transmission. It determines which part of a sound wave is reflected or transmitted at a boundary surface. The impedance difference between air and the inner ear fluid is overcome in the middle ear by the ossicular chain. Deviations in acoustic impedance, e.g. due to fluid in the middle ear, change the tympanogram curve. In hearing technology, impedance matching is used to optimally couple loudspeakers and microphones.
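A small sketch of characteristic impedance Z = ρ·c and the resulting power reflection at a boundary (normal incidence; the material values are rounded textbook figures, with water standing in for the inner ear fluid):

```python
def char_impedance(density_kg_m3, speed_m_s):
    """Characteristic acoustic impedance Z = rho * c, in Pa*s/m."""
    return density_kg_m3 * speed_m_s

def power_reflection(z1, z2):
    """Fraction of incident sound power reflected at a boundary
    (normal incidence): R = ((Z2 - Z1) / (Z2 + Z1))^2."""
    return ((z2 - z1) / (z2 + z1)) ** 2

z_air = char_impedance(1.2, 343.0)        # ~412 Pa*s/m
z_water = char_impedance(1000.0, 1480.0)  # ~1.48e6 Pa*s/m
r = power_reflection(z_air, z_water)      # ~0.999: almost everything reflects
```

Without the middle ear's impedance matching, nearly all airborne sound power would bounce off the fluid-filled cochlea, which is exactly the mismatch the ossicular chain compensates.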
Schauditometry is an objective measurement procedure in which mechanical or electrical stimuli are applied to the ear and the resulting evoked potentials (OAE, AEP) are recorded. It enables the diagnosis of hearing thresholds without the active cooperation of the patient. In infant hearing screening, schauditometry is used as an automatic ABR procedure. The analysis of waveform and latency allows conclusions to be drawn about peripheral and central auditory pathway function. Schauditometry complements sound and speech audiometry diagnostics, especially in uncooperative patients.
Narrowband noise is noise whose spectrum is limited to a narrow frequency band, typically used for masking or testing specific frequency ranges. In audiometry, it is used as a masker to determine air and bone conduction thresholds in cross-hearing situations. In psychoacoustics, narrowband noise is used to investigate masking effects and critical bandwidths. In hearing aids, adaptive narrowband filters can suppress noise in defined bands. Narrowband noise helps to test frequency selectivity and channel separation.
The cochlea is the spiral-shaped inner ear organ in which sound is transduced into nerve impulses. Hair cells are located on the basilar membrane, which encode sounds of different frequencies depending on the place of deflection (tonotopy). The fluid movements in the scala vestibuli and tympani activate the hair cells and generate electrical signals. These signals reach the cortex via the auditory nerve, where they are perceived as sounds and speech. Diseases of the cochlea lead to sensorineural hearing loss and are an indication for cochlear implants.
The shoulder-head reflex is a vestibulospinal reflex in which head movements involuntarily trigger a counter-movement of the shoulder muscles in order to maintain balance and stability. It is initiated via vestibular receptors in the semicircular canals and otolith organs. Disorders of this reflex are reflected in an unsteady gait and postural instability. It is clinically tested as part of the neurological examination of vertigo patients. Vestibular training can rehabilitate the reflex in lesions.
Hearing loss is a hearing impairment that affects everyday life and communication. It is categorized into mild, moderate, severe and bordering on deafness, based on the shift of the hearing threshold in the audiogram. There are many causes: conductive, sensorineural or combined forms. Therapy includes medical, surgical and technical measures such as hearing aids or implants. Early detection and continuous care improve speech development and quality of life.
Sensorineural hearing loss is caused by damage to the hair cells, auditory nerve or central auditory pathways. It is indicated by increased air and bone conduction thresholds in the audiogram without an air-bone gap. Causes include noise trauma, age, ototoxins or genetic defects. Technical treatment uses hearing aids or cochlear implants; rehabilitation includes auditory training. Sensorineural loss is usually permanent, as human hair cells do not regenerate.
Speech audiometry tests speech comprehension by presenting words or sentences at a defined sound pressure level or signal-to-noise ratio. Results are given as a percentage of correctly understood words or as Speech Reception Threshold (SRT). They supplement sound audiograms with functional aspects of hearing in everyday life. Test environments can be free-field or headphones; masking ensures ear separation. Speech audiometry is crucial for hearing aid fine-tuning and fitting verification.
Speech comprehension is the ability to recognize and semantically process spoken language. It depends on peripheral auditory function, central processing and cognitive abilities. Disorders occur despite normal hearing thresholds, e.g. in the case of central auditory processing disorders. Measurement is carried out using standardized tests (e.g. the Freiburg monosyllabic test) in quiet and in noise. Hearing aid and implant fittings aim to maximize speech comprehension in real-life situations.
The stapedius reflex is the contraction of the stapedius muscle in response to loud stimuli, which stiffens the ossicular chain and protects the inner ear. It can be measured in reflex audiometry via impedance changes. Reflex threshold and latency provide information about middle ear function and brainstem integrity. An absent or asymmetrical reflex indicates otosclerosis, nerve lesion or central disorder. The reflex contributes to the attenuation of impulse-like sound peaks.
The stapes is the smallest ossicle in the human body and the third link in the ossicular chain. It transmits vibrations from the anvil to the oval window of the cochlea. Its lever effect amplifies the sound pressure by about 1.3 times. In otosclerosis, the attachment region of the stapes often ossifies, causing conductive hearing loss. Surgical stapedotomy involves removing part of the stapes and replacing it with a prosthesis to restore sound transmission.
Silence refers to the absence of perceptible sound sources and is used in audiometry as a test condition for threshold determination. A true quiet room achieves background levels below 20 dB SPL and minimizes background noise. Silence is required for objective measurements such as OAE and AEP detection. Psychoacoustically, absolute silence leads to increased perception of internal sounds such as tinnitus. In tinnitus therapy, controlled silence is used as a contrast stimulus to promote habituation.
Noise is any unwanted sound that impedes the understanding of useful signals such as speech. Characteristics are level, frequency spectrum and temporal structure. Noise reduction algorithms and directional microphones are used in hearing aids to reduce noise. Masking studies investigate how noise affects speech understanding. Optimal signal-to-noise ratios are crucial for hearing comfort and communication ability.
Subjective tinnitus is a sound perception without an external sound source that only the affected person hears. It is caused by spontaneous neuronal activity in the cochlea or central auditory pathways. Common accompanying symptoms are sleep disorders, concentration problems and psychological stress. Therapy includes sound enrichment, cognitive behavioral therapy and tinnitus retraining. Objective measurement is not possible; progress is documented using questionnaires and loudness matching.