Apple has been granted a patent (number 11,197,119) for “acoustically effective room volume.” Its goal is to make the HomePod mini and any future acoustic/visual devices more effective at producing 3D surround sound.
About the patent
In the patent filing, Apple notes that virtual reality (VR) and augmented reality (AR) technologies have emerged as powerful tools for a wide variety of applications, e.g., in science, design, medicine, gaming and engineering, as well as in more visionary applications such as the creation of “virtual spaces” that aim to simulate the look and sound of their real-world environment counterparts.
However, the tech giant says that most of the innovation in recent years has been focused on creating virtual visual renderings (e.g., VR headsets, video gaming systems, and the like). In order to make such virtual environments as realistic and immersive as possible, Apple says it’s important to consider multiple sensory stimuli beyond just visual stimuli, e.g., the simulation of sound stimuli, and even smell and/or touch stimuli.
For example, in simulations, spatial audio signals may be generated that take into account various models of sound wave reflections, as well as models of sound wave reverberations, in three-dimensional environments. Such spatial audio may be generated, e.g., using Digital Audio Workstation (DAW) software or the like, and may be used for various applications, such as room planning and/or musical or architectural sound simulations.
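To give a rough sense of what modeling a sound wave reflection involves, here is a minimal sketch (not Apple’s method) of the classic image-source technique: a reflection off a flat wall is modeled by mirroring the source across the wall and computing the arrival delay and distance attenuation of the mirrored path. The function name and the single-wall setup are illustrative assumptions.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def first_order_reflection(source, listener, wall_x):
    """Model one wall reflection via the image-source method: mirror the
    source across a wall at x = wall_x, then return the reflection's
    arrival delay (seconds) and its 1/r distance-attenuation factor."""
    image = (2 * wall_x - source[0], source[1], source[2])
    r = math.dist(image, listener)  # length of the reflected path
    return r / SPEED_OF_SOUND, 1.0 / r

# A source 1 m from a wall at x = 0, with the listener 4 m away.
delay, gain = first_order_reflection((1.0, 2.0, 1.5), (4.0, 2.0, 1.5), wall_x=0.0)
```

A full simulator would repeat this for every reflective surface (and higher reflection orders), then sum the delayed, attenuated copies into the output signal.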
Changes in a room’s architecture or scene composition can have a significant impact on the way that sound waves in the room should be simulated at any given instant, which is why Apple says there’s a need for improved techniques for the physically accurate auralization of virtual 3D environments in real time.
This includes environments wherein any (or all) of the sound sources, the sound receiver, and the geometry/surfaces in the virtual environment may be dynamically changing as the sound sources are being simulated. Such techniques may also be applied in AR scenarios, e.g., wherein additional sound information is added to a listener’s real-world environment to accurately simulate the presence of a “virtual” sound source that is not actually present in the listener’s real-world environment; mixed reality scenarios; sound visualization applications; room planning; and/or 3D sound mixing applications.
Summary of the patent
Here’s Apple’s abstract of the patent: “This disclosure relates to techniques for generating physically accurate auralization of sound propagation in complex environments, while accounting for important wave effects, such as sound absorption, sound scattering, and airborne sound insulation between rooms. According to some embodiments, techniques may be utilized to determine more accurate, e.g., “acoustically-effective” room volumes that account for open windows, open doors, acoustic dead space, and the like. According to other embodiments disclosed herein, techniques may be utilized to perform optimized hybrid acoustical ray tracing, including grouping coherent rays by processing core.
“According to other embodiments disclosed herein, techniques may be utilized to translate simulated ray tracing results into natural-sounding reverberations by deriving and resampling spatial-time-frequency energy probability density functions that more accurately account for the laws of physics and then converting this data into a spatial impulse response function, which may then be used for realistic 3D audio reproduction, e.g., via headphones or loudspeakers.”
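The abstract’s final step, using a spatial impulse response for realistic 3D audio reproduction, boils down to convolving a dry (anechoic) signal with the room’s impulse response. As a rough illustration of that step only (not the patent’s derivation of the impulse response itself), here is a minimal sketch; the toy impulse response below is an assumption for demonstration.

```python
import numpy as np

def auralize(dry_signal, impulse_response):
    """Render a dry signal in a simulated space by convolving it with
    the room's impulse response."""
    return np.convolve(dry_signal, impulse_response)

# Toy example: a unit click played through an impulse response that
# contains the direct sound plus one echo at half amplitude.
dry = np.array([1.0, 0.0, 0.0])
ir = np.array([1.0, 0.0, 0.5])
wet = auralize(dry, ir)  # the click, then its echo two samples later
```

In a real renderer the impulse response would be much longer (seconds of reverberation at audio sample rates), and for binaural playback over headphones a separate response would be applied per ear.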
Article provided with permission from AppleWorld.Today