How Your Ears Pinpoint Danger in Traffic When Hearing a Car Horn

TL;DR

  • Your brain localizes sound using three major cues: tiny time differences between ears, level differences, and subtle spectral fingerprints created by the shape of your outer ear.[^1][^2]
  • These cues only work well when the sound has enough bandwidth—a wide spread of frequencies to “grab onto.” Pure beeps and single tones are much harder to localize.[^3][^4]
  • The pinna (the visible outer ear) acts like a 3D acoustic antenna, sculpting sound differently depending on whether it comes from front vs. back, above vs. below.[^5][^6]
  • In echoey real-world environments, the brain uses the precedence effect to lock onto the first-arriving sound, which is crucial for hearing where a horn is really coming from.[^7][^8]
  • Car-horn-like sounds are especially effective because they are both broadband and instantly recognizable as “road danger.” Bicycle horns that mimic this timbre tap into the same localization and recognition machinery.[^9][^10]

In our first article on sound and reaction time, we looked at how the auditory system plugs straight into fight-or-flight circuits and beats vision on speed. This second article takes up the next question that drivers and people on bikes actually care about:

Once you hear the horn, how do you know where it’s coming from?

To understand why some horns work much better than others, we need to unpack how the brain reconstructs 3D space from mere pressure waves at the two eardrums.


1. Three dimensions, three classes of cues

Auditory localization is about recovering three things: left–right, up–down, and near–far. The nervous system solves this with three broad families of cues:[^1][^2]

  1. Interaural Time Differences (ITDs) – tiny differences in arrival time between your two ears.
  2. Interaural Level Differences (ILDs) – differences in loudness, mostly at higher frequencies, caused by the “head shadow.”
  3. Monaural spectral cues – direction-dependent filtering by your pinna and head that imprints subtle peaks and dips in the sound spectrum.

All three are complementary:

  • ITDs are most useful for low and mid frequencies (think engine rumble).
  • ILDs shine at higher frequencies, where your head blocks sound more strongly.
  • Spectral pinna cues are critical for front–back and up–down discrimination, and they lean heavily on higher frequencies as well.[^2][^5][^6]

This combination is sometimes called the duplex theory of sound localization: phase/time cues at low frequencies, level cues at high frequencies, plus spectral pinna fingerprints layered on top.[^3]

From the perspective of safety signals, there’s a key takeaway already:

If you want a horn that people can localize quickly and accurately, it must deliver usable information to all three systems—time, level, and spectrum.

That’s exactly what broadband, car-like horn sounds do.


2. Interaural time and level: the horizontal “steering wheel”

Imagine a horn sounding off to your right. Because your ears are separated by about 18–20 cm, the sound hits your right ear slightly earlier and slightly louder than your left. Your brain can detect both differences with remarkable precision.[^1][^2]

2.1 Interaural Time Differences (ITDs)

  • For a source directly to the side, the ITD is on the order of 600–700 microseconds (millionths of a second).[^1]
  • Special neurons in the brainstem act as coincidence detectors, firing maximally when inputs from each ear arrive together; the pattern of activity across these neurons encodes azimuth (left–right position).

ITDs work best for frequencies below ~1.5 kHz, where the wavelength of the sound is long compared to head size and phase differences are unambiguous.[^3]
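
To make the numbers concrete, here is a minimal sketch of the classic spherical-head (Woodworth) approximation for ITD; the head radius and speed of sound are round-number assumptions, not measured values:

```python
import math

def woodworth_itd(azimuth_deg: float, head_radius_m: float = 0.09,
                  speed_of_sound_m_s: float = 343.0) -> float:
    """Approximate ITD (in seconds) for a distant source at a given
    azimuth, using the spherical-head (Woodworth) model: the wave must
    travel an extra straight-line plus wrap-around path to the far ear."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound_m_s) * (math.sin(theta) + theta)

# A source directly to the side (90 degrees) gives roughly 670 microseconds,
# consistent with the 600-700 microsecond figure above.
print(f"{woodworth_itd(90.0) * 1e6:.0f} microseconds")
```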

2.2 Interaural Level Differences (ILDs)

At higher frequencies, your head casts an acoustic shadow. A sound coming from the right will be noticeably quieter at the left ear:

  • ILDs can exceed 20 dB at the highest audible frequencies.
  • The auditory system uses ILDs as a strong cue for lateral position when ITDs become ambiguous at high frequencies.[^2][^3]

Put together, ITDs and ILDs give a fairly accurate horizontal “bearing” for most natural sounds. But they have some blind spots:

  • Pure tones (single-frequency beeps) provide very weak ILD information at low frequencies and can create ambiguous patterns at higher ones.
  • ITDs and ILDs alone cannot fully disambiguate front vs. back (the “cone of confusion” problem) or up vs. down.
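
A quick back-of-the-envelope sketch of the first blind spot: a pure tone’s interaural phase becomes fully ambiguous once half its period is shorter than the largest ITD your head can produce. The ~0.7 ms maximum ITD is an assumption here, and this crude criterion marks complete ambiguity; in practice, phase cues degrade gradually up toward roughly 1.5 kHz.

```python
def itd_phase_ambiguous(freq_hz: float, max_itd_s: float = 0.0007) -> bool:
    """True when half the tone's period is shorter than the largest
    possible ITD, so the brain cannot tell which cycle at one ear
    matches which cycle at the other."""
    return 0.5 / freq_hz < max_itd_s

for f in (250, 500, 1500, 4000):
    status = "ambiguous" if itd_phase_ambiguous(f) else "usable"
    print(f"{f} Hz: ITD phase cue {status}")
```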

This is where the pinna comes in.


3. The pinna: a 3D acoustic antenna on each side of your head

The visible part of your ear is not just decorative cartilage. It is a carefully evolved directional filter.

As sound arrives from different directions, it bounces off the ridges and cavities of the pinna before entering the ear canal. This creates direction-dependent spectral coloring—specific frequencies are amplified or attenuated in characteristic ways.[^5][^6]

These spectral signatures, together with head and torso effects, are summarized in what engineers call Head-Related Transfer Functions (HRTFs)—essentially a lookup table that maps direction → frequency response.[^2]
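
As a cartoon of the “lookup table” idea, the sketch below renders a mono sound at a direction by convolving it with that direction’s left/right impulse responses. The two-entry table and three-tap filters are toy stand-ins: real HRTFs are measured per listener over hundreds of directions.

```python
import numpy as np

# Toy HRTF table: direction -> (left-ear, right-ear) impulse responses.
# These three-tap filters are illustrative placeholders only.
HRTF_TABLE = {
    "front": (np.array([1.0, 0.3, 0.1]), np.array([1.0, 0.3, 0.1])),
    "right": (np.array([0.4, 0.2, 0.1]), np.array([1.0, 0.5, 0.2])),
}

def spatialize(mono: np.ndarray, direction: str) -> tuple[np.ndarray, np.ndarray]:
    """Apply the direction's HRTF pair: the filtered left/right signals
    carry the level and spectral differences the brain decodes."""
    left_ir, right_ir = HRTF_TABLE[direction]
    return np.convolve(mono, left_ir), np.convolve(mono, right_ir)

burst = np.random.randn(256)            # broadband burst: rich spectral cues
left_ear, right_ear = spatialize(burst, "right")
```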

3.1 Vertical and front–back localization

Studies on humans and animal models show that:[^5][^6]

  • Pinna cues are crucial for elevation (up vs. down) and front–back discrimination.
  • When pinna shape is altered (e.g., by molds, surgery, or microphone placement behind the ear), vertical and front–back localization degrade significantly.
  • Over time, the brain can partially re-learn new pinna/HRTF mappings, but performance is never quite as good as with the original “hardware.”

A 2020 study of cochlear-implant users showed that adding pinna-imitating microphone directionality improved localization, particularly for front–back judgments, compared to standard behind-the-ear microphones.[^5] More recent work in normal-hearing listeners found that the pinna enhances angular discrimination in the central frontal region—the area most relevant for oncoming traffic.[^6]

3.2 Why broadband is essential for pinna cues

Pinna-based spectral cues live primarily in the mid-to-high frequency range, where the ear and head do the most sculpting. If a sound doesn’t contain those frequencies, the brain has nothing to work with.[^2][^5]

  • A broadband noise burst (like a car horn) produces rich, direction-specific spectral patterns.
  • A pure low-frequency tone may carry ITD information, but almost no spectral cues for elevation or front–back.
  • A narrow high-frequency beep offers only limited ITD information and can be ambiguous when reflections are present.
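
The sketch below makes the contrast in the list above concrete. Two toy “pinna filters” differ only by the position of a single high-frequency notch (real pinna filtering is far more complex; the 6 and 8 kHz centers are illustrative), and an energy-weighted comparison shows how much direction evidence each sound carries:

```python
import numpy as np

fs = 44_100
n = 8192
freqs = np.fft.rfftfreq(n, 1 / fs)

def notch_gain(center_hz: float, width_hz: float = 1500.0) -> np.ndarray:
    """Magnitude response with one spectral notch: a cartoon of the
    direction-dependent notches the pinna carves into incoming sound."""
    return 1.0 - 0.9 * np.exp(-((freqs - center_hz) / width_hz) ** 2)

front, back = notch_gain(8000.0), notch_gain(6000.0)    # illustrative centers

def direction_evidence(sig: np.ndarray) -> float:
    """Energy-weighted difference between 'front' and 'back' filtering:
    how much spectral evidence about direction this sound delivers."""
    s = np.abs(np.fft.rfft(sig, n))
    return float(np.sum(s * np.abs(front - back)) / np.sum(s))

noise = np.random.randn(n)                              # broadband burst
tone = np.sin(2 * np.pi * 500 * np.arange(n) / fs)      # low pure tone
print("broadband burst:", round(direction_evidence(noise), 3))
print("500 Hz tone:    ", round(direction_evidence(tone), 3))
```

The broadband burst samples the filters across the whole spectrum, so the notch positions stand out; the low tone never touches the notch region and delivers essentially no direction evidence.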

This is why minimum warning sounds for vehicles—especially quiet EVs—are specified to include low and high components rather than just a single tone: they need to be both detectable and localizable.[^10]


4. Broadband sounds localize better (and feel more “real”)

From a safety standpoint, the most important property of a horn sound is not just loudness, but how quickly and accurately people can tell where it’s coming from.

Several lines of research converge on the same conclusion:[^3][^4]

  • Localization performance improves with increasing bandwidth. Broad frequency ranges give the brain access to both ITD and ILD cues as well as pinna-based spectral cues.
  • Eye- and head-movement studies show that people orient faster and more precisely to broadband bursts than to narrowband or tonal sounds, especially in noisy backgrounds.[^4]
  • When spectral content is poor (e.g., narrowband tones), people compensate by moving their heads more to create artificial motion cues, which takes time.[^3]
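
One way to see the bandwidth effect in miniature, under the simplifying assumption that ITD extraction resembles cross-correlation (the engineering analogue of the brainstem’s coincidence detectors): delay one “ear” copy of a signal and ask how sharply the correct lag stands out.

```python
import numpy as np

fs = 44_100
delay = 30                          # interaural delay in samples (~0.68 ms)

def lag_sharpness(left: np.ndarray, right: np.ndarray) -> float:
    """Cross-correlate the two ear signals and report how strongly the
    best lag stands out from the rest (a crude sharpness measure)."""
    xc = np.abs(np.correlate(left, right, mode="full"))
    return float(xc.max() / np.median(xc))

n = 4096
noise = np.random.randn(n + delay)
tone = np.sin(2 * np.pi * 2000 * np.arange(n + delay) / fs)

for name, sig in (("broadband noise", noise), ("2 kHz pure tone", tone)):
    left, right = sig[:-delay], sig[delay:]   # right ear hears it first
    print(f"{name}: peak-to-median ratio {lag_sharpness(left, right):.0f}")
```

The noise burst produces one towering correlation peak at the true delay; the tone produces a comb of near-identical peaks, one per cycle, which is exactly the ambiguity listeners experience.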

Think about how different it feels to localize:

  • A single-frequency phone beep somewhere in a busy office vs.
  • A broadband clap or shout.

You can almost “feel” the location of the clap; the beep seems to hover ambiguously until you look around. In traffic, ambiguity costs time.

The ideal horn sound is like an acoustic flare: broad, abrupt, and information-rich. It should make your nervous system say “That’s over there” in as few milliseconds as possible.


5. Real streets are echoey: the precedence effect

City streets are filled with reflective surfaces—buildings, cars, the road itself. Every horn blast produces a direct sound plus a whole constellation of echoes. Yet we usually perceive a single, stable location rather than a confusing cloud of phantom sources.

This stability comes from the precedence effect (also called the “law of the first wavefront”).[^7][^8]

When the same sound arrives multiple times with small delays (within tens of milliseconds):

  • The auditory system fuses them into a single percept.
  • The perceived direction is dominated by the earliest-arriving sound, even if later echoes are louder.
  • Localization is thus tied to the direct path rather than the reflections, which is exactly what you want for hazards.

In practice:

  • A horn blast from a car or bike to your right reaches your right ear first via the direct line-of-sight path.
  • Reflections from walls, parked cars, or trucks arrive slightly later and are largely suppressed for localization purposes.
  • The result is a robust sense that “the horn is over there,” even in a reverberant canyon of parked SUVs.

Broadband signals again help here: sharp onsets and rich spectra make it easier for the auditory system to identify the true first wavefront and discount the rest.[^7][^8]
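
A toy illustration of “law of the first wavefront” logic (not a model of the actual neural circuitry): localize from onset times, and a louder echo that arrives a few milliseconds late simply never gets consulted.

```python
import numpy as np

fs = 44_100
n = int(0.05 * fs)                       # 50 ms of signal per ear
left, right = np.zeros(n), np.zeros(n)

right[100] = 1.0                         # direct click: right ear first
left[100 + 26] = 1.0                     # left ear ~0.6 ms later

echo = 100 + int(0.008 * fs)             # a LOUDER echo 8 ms later,
left[echo] = 1.5                         # reflected off a wall to the left
right[echo + 26] = 1.5

def first_onset(x: np.ndarray, thresh: float = 0.5) -> int:
    """Index of the first sample crossing the threshold: the wavefront."""
    return int(np.argmax(np.abs(x) >= thresh))

itd = (first_onset(left) - first_onset(right)) / fs
print(f"onset-based ITD: {itd * 1e6:+.0f} microseconds (positive = right leads)")
```

Because only the earliest onset is measured, the verdict is “source on the right,” even though the left-wall echo carries more energy.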


6. Recognizable horn timbres: localization meets learning

So far we’ve talked mostly about geometry and physics. But there’s another layer on top: learning how particular sounds interact with your own ears.

Every time you hear a car horn in the real world and see where it came from, your auditory system is quietly updating a map: “this is what that horn timbre sounds like after being filtered by my head and pinnae, from this direction and distance.” Over years, it learns to separate:

  • Features that belong to the horn itself (its intrinsic spectrum and dual-tone structure), from
  • Features added by your anatomy (the pinna- and head-related filtering discussed above).

For familiar, car-like horn sounds, this learned separation makes localization more precise. Tiny differences between your two ears—subtle spectral ripples and level changes created by your unique ear shape—are easier to interpret because your brain already “knows” what aspects of the spectrum should stay the same as the source moves, and what aspects should change with direction.[^2][^3][^4][^9]

With novel or synthetic warning sounds, that prior experience is missing. The nervous system cannot easily tell which spectral quirks come from the source itself and which are imposed by reflections or the pinnae. As a result, localization is often slower and less accurate, and people rely more on head movements or vision to resolve ambiguities—especially in reverberant or noisy streets.[^3][^4]

We explore the “what does this sound mean and how should I react?” side of recognition in more detail in our article on reaction time and horn perception. Here, the key point is that recognizable, car-horn-like timbres don’t just tell you that something is wrong—they give the localization system a well-trained template to compare against.

For people on bikes, a horn that closely mimics the spectral shape and dual-tone character of a car horn (like the Loud Mini from Loud Bicycle) therefore leverages both geometry and learning: drivers’ brains have practiced localizing that specific class of broadband signals for years and can lock onto its direction quickly, often before they consciously realize it is coming from a bicycle rather than a car.[^9][^10]


7. Design lessons for safer horns (and quieter streets)

Pulling all of this together, we can articulate a few design principles:

  1. Broadband over beeps. Warning sounds should cover a broad frequency range, with both low and high components, to feed ITD, ILD, and pinna cues.
  2. Sharp onsets, short bursts. Clear starts and stops make the precedence effect more effective and allow people to localize the direct sound quickly, without a long tail of reverberant clutter.
  3. Recognizable yet restrained timbre. Sounds that belong to a well-understood “danger” category (like traditional car-horn timbres) support faster interpretation, but they should be reserved for genuine emergencies to avoid desensitization.
  4. Compatibility across users. People with hearing loss often retain better sensitivity at some frequencies than others; broadband signals are more likely to land somewhere they can actually hear.
  5. Context matters. In dense urban areas with high background noise, broadband horns help cut through the mix—but a long-term goal should be quieter streets overall, where necessary emergency sounds don’t have to fight a constant roar.
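
As a rough sketch of principles 1 and 2 in code: a dual-tone burst whose harmonics spread energy upward, with a fast attack and release. The 420/500 Hz pair, the seven harmonics, and the 5 ms ramps are illustrative choices, not a published horn specification.

```python
import numpy as np

fs = 44_100

def horn_burst(dur_s: float = 0.4, f1: float = 420.0, f2: float = 500.0) -> np.ndarray:
    """Dual-tone burst: two fundamentals plus decaying harmonics give
    broadband energy; short attack/release ramps keep the onset sharp."""
    t = np.arange(int(fs * dur_s)) / fs
    sig = np.zeros_like(t)
    for f in (f1, f2):
        for k in range(1, 8):                       # harmonics 1..7
            sig += np.sin(2 * np.pi * f * k * t) / k
    attack = np.minimum(1.0, t / 0.005)             # 5 ms attack
    release = np.minimum(1.0, (t[-1] - t) / 0.005)  # 5 ms release
    return 0.1 * sig * attack * release

burst = horn_burst()    # an array ready to be written out as audio
```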

For cyclists specifically:

  • A true emergency horn that sounds like a car horn gives you the best chance that a driver can rapidly localize you and react, especially when they can’t see you yet (blind corners, mirrors, A-pillars, etc.).
  • Using it sparingly and purposefully keeps it from becoming just another annoying noise and preserves its biological punch.

In the end, sound localization is not a bolt-on feature—it’s baked into the structure of our ears, heads, and brains. Horns that work with that structure (broadband, directional, and instantly meaningful) give everyone on the street a better chance of getting home in one piece.


References

[^1]: Carlini, A., Bordeau, C., & Ambard, M. (2024). “Auditory localization: a comprehensive practical review.” Frontiers in Psychology.

[^2]: Risoud, M., et al. (2018). “Sound source localization.” European Annals of Otorhinolaryngology.

[^3]: “Sound localization.” Wikipedia (duplex theory overview).

[^4]: Zheng, Y., et al. (2022). “Sound Localization of Listeners With Normal Hearing: Effects of Stimulus Bandwidth.” American Journal of Audiology.

[^5]: Fischer, T., et al. (2020). “Pinna-imitating microphone directionality improves sound localization and speech understanding in noise in cochlear implant users.” Journal of Clinical Medicine.

[^6]: “The pinna enhances angular discrimination in the frontal horizontal plane.” Journal of the Acoustical Society of America, 2022.

[^7]: Brown, A. D., et al. (2014). “The precedence effect in sound localization.” Frontiers in Neuroscience.

[^8]: Shinn-Cunningham, B. (2013). “Auditory Precedence Effect.” In Encyclopedia of Computational Neuroscience.

[^9]: Lemaitre, G., et al. (2009). “The sound quality of car horns: designing new representative sounds.” Acta Acustica united with Acustica.

[^10]: U.S. National Highway Traffic Safety Administration (NHTSA). “Minimum Sound Requirements for Hybrid and Electric Vehicles.” Federal Motor Vehicle Safety Standards, 2013.
