At the 104th MPEG meeting the 3DG Chair summarized the needs of an Audio Augmented Reality system as follows:
- Analyze scene and capture appropriate metadata (e.g. acoustic reflection properties) and attach it to a scene
- Attach directional metadata to audio objects
- HRTF information for the user
- The audio augmented reality system should be of sufficiently low complexity (or we have sufficiently high computational capability) that there is very low latency between the real audio and the augmented reality audio.
The 3DG Chair clarified that less than 50 ms is a realistic requirement for 3D audio latency.
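For context, the 50 ms budget can be expressed in samples; the 48 kHz rate and block size below are illustrative assumptions, not values from the minutes:

```python
# Illustrative latency-budget arithmetic (assumed values, not from the minutes).
SAMPLE_RATE_HZ = 48_000    # a common audio sampling rate
LATENCY_BUDGET_S = 0.050   # the < 50 ms requirement stated by the 3DG Chair

budget_samples = int(SAMPLE_RATE_HZ * LATENCY_BUDGET_S)
print(budget_samples)      # 2400 samples of end-to-end headroom

# With a hypothetical 512-sample processing block, the budget allows:
BLOCK_SIZE = 512
print(budget_samples // BLOCK_SIZE)  # 4 full blocks of processing
```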
The 3DG Chair gave an example of Augmented Reality: a person has a mobile device with a transparent visual display. The person sees and hears the real world around him, and can look through the transparent visual display (i.e. "see-through" mode) to see the augmented reality (e.g. with an avatar rendered in the scene).
The avatar has a pre-coded audio stream (e.g. an MPEG-4 AAC bitstream) that it can play out, and ARAF knows its spatial location and orientation (i.e. which way it is facing). The required audio metadata is:
- Radiation properties of the avatar (i.e. radiation power as a function of angle and distance)
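As an illustration of what such metadata could drive, here is a minimal radiation-model sketch: a hypothetical cardioid-style angular pattern combined with inverse-square distance attenuation. The pattern shape and parameters are assumptions for illustration, not anything specified by ARAF:

```python
import math

def radiated_power_gain(angle_rad: float, distance_m: float,
                        directivity: float = 0.5) -> float:
    """Hypothetical radiation model: power gain of an avatar source as a
    function of angle off its facing direction and listener distance.

    directivity = 0.0 gives an omnidirectional source; 1.0 a full cardioid.
    Distance follows a simple inverse-square law (free-field assumption).
    """
    angular = (1.0 - directivity) + directivity * (1.0 + math.cos(angle_rad)) / 2.0
    return angular / max(distance_m, 1e-6) ** 2

# On-axis at 1 m: full gain; directly behind at 1 m: halved by directivity.
print(radiated_power_gain(0.0, 1.0))      # 1.0
print(radiated_power_gain(math.pi, 1.0))  # 0.5
```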
The avatar audio could be presented via headphones or earbuds. The Audio Chair noted that ARAF may want to incorporate a head tracker so that the avatar can remain fixed within the physical-world coordinate system. The 3DG Chair noted that if "see-through" mode is used, then the orientation of the mobile device screen would be sufficient. In that case, audio could be presented via the mobile device speakers.
The Audio Chair noted that the avatar could function as a "virtual loudspeaker" in MPEG-H 3D Audio, such that the 3D Audio renderer could be used for presentation via headphones. However, 3D Audio is not able to add environmental effects (e.g. room reverberation) that are separate from the avatar sound. Furthermore, 3D Audio cannot add such environmental effects based on listener location or orientation (e.g. reverberation changes when the listener moves closer to a reflecting surface).
Yeshwant Muthusamy, Samsung, noted that Khronos OpenSL ES provides a 3D audio API that does support e.g. reverberation based on user location and might offer a solution. However, the Audio Chair noted that ARAF does not know the acoustic properties (i.e. surface reflective properties) of the real-world environment (unless it can deduce them from the visual scene), and thus audio effects based on the real-world environment are not possible, whether using MPEG-H 3D Audio or Khronos APIs. 3DG experts will consider this problem and report back at a future joint meeting.
The following steps are defined:
Step 1: Integrate and test the BIFS 3D Audio nodes
Step 2: Integrate the MPEG-H engine that will be able to synthesize the audio signal at a specific position
Step 3: Investigate audio propagation of virtual sources, taking into consideration the physical medium properties
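The geometric core of Step 2 can be sketched in its simplest form: delay the signal by the propagation time and attenuate it by distance, assuming free-field propagation in air at 343 m/s. This is a simplification for illustration only; a real MPEG-H renderer would additionally apply HRTFs, interpolation, and environmental effects:

```python
import math

SPEED_OF_SOUND_M_S = 343.0  # propagation speed in air, the assumed medium
SAMPLE_RATE_HZ = 48_000     # assumed rendering sample rate

def render_at_position(signal: list[float], source_xyz, listener_xyz) -> list[float]:
    """Free-field sketch: delay and attenuate a mono source for a listener.

    Only the geometric part of position-dependent synthesis is modeled here.
    """
    distance = math.dist(source_xyz, listener_xyz)
    delay = round(distance / SPEED_OF_SOUND_M_S * SAMPLE_RATE_HZ)  # in samples
    gain = 1.0 / max(distance, 1.0)  # 1/r amplitude law, clamped inside 1 m
    return [0.0] * delay + [s * gain for s in signal]

# A source 3.43 m away arrives 10 ms (480 samples) late and attenuated.
out = render_at_position([1.0, 0.5], (3.43, 0.0, 0.0), (0.0, 0.0, 0.0))
print(len(out))  # 480 delay samples + 2 signal samples = 482
```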
BIFS 3D exists in three implementations (Technicolor, Orange, Fraunhofer).
Marius will ask the owners of the implementations for contributions.