Customize Spatial Audio with TrueDepth Camera
This announcement came and went fairly quickly, but it had us scratching our heads immediately. The idea, it seems, is that spatial audio sounds more realistic if it can take into account aspects of the physicality of the listener that affect their perception of space. Apparently, this is a thing—called Head-Related Transfer Functions—and by capturing data using the iPhone’s TrueDepth camera, Apple could personalize the otherwise average HRTF that combines data from thousands of people.
— Read on tidbits.com/2022/06/13/seven-head-scratching-features-from-wwdc-2022/
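The short version of how these work: an HRTF is a pair of filters, one per ear, encoding how your head, torso, and outer ears color a sound arriving from a given direction. Rendering a spatial source then comes down to convolving the source with the corresponding head-related impulse responses (HRIRs). Here's a minimal Python sketch; the HRIR arrays are random placeholders standing in for measured (or, now, personalized) data:

```python
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    """Place a mono source at one direction by convolving it with the
    HRIR pair measured (or personalized) for that direction.

    Returns a (samples, 2) array suitable for headphone playback.
    """
    left = fftconvolve(mono, hrir_left, mode="full")
    right = fftconvolve(mono, hrir_right, mode="full")
    return np.stack([left, right], axis=-1)

# Placeholder HRIRs: real ones come from a measured database or,
# per Apple's feature, a capture of your own ears.
rng = np.random.default_rng(0)
hrir_l = rng.standard_normal(256) * np.exp(-np.arange(256) / 32)
hrir_r = np.roll(hrir_l, 8) * 0.7  # crude interaural delay and level cues
stereo = render_binaural(rng.standard_normal(48000), hrir_l, hrir_r)
```

Moving sources and head tracking mean interpolating between HRIR pairs and cross-fading, which is where most of a real renderer's complexity lives; the personalization Apple is promising swaps the averaged HRIR set for one derived from your own anatomy.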
I worked with HRTFs in grad school, trying to implement the filters in the wavelet domain (more here), so this is interesting to me. It looks like we'll be able to use some combination of the camera and LiDAR to capture the pinnae and derive personal HRTFs from that.
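If the wavelet bit sounds abstract, the idea of subband filtering is: decompose the signal, apply a cheap per-band operation, and reconstruct. A toy illustration with PyWavelets (flat per-band gains standing in for a real HRTF magnitude response; this is just the general idea, not the actual filters from that work):

```python
import numpy as np
import pywt

def subband_filter(signal, gains, wavelet="db4"):
    """Toy wavelet-domain filter: DWT, scale each subband, inverse DWT.

    gains[0] scales the coarsest approximation band; the rest scale the
    detail bands from coarse to fine. A real HRTF approximation would
    fit per-band filters to the measured response, not flat gains.
    """
    coeffs = pywt.wavedec(signal, wavelet, level=len(gains) - 1)
    return pywt.waverec([g * c for g, c in zip(gains, coeffs)], wavelet)

# Example: tilt a noise burst toward the low bands.
x = np.random.default_rng(1).standard_normal(4096)
y = subband_filter(x, gains=[1.0, 0.8, 0.5, 0.3, 0.1])
```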
I cannot wait. Guess I'll need to explore spatial music, and maybe movies, now.