Abstract:
|
The analysis of acoustic scenes requires several
functionalities, of which recognition (of speech, speakers,
and other acoustic events) and spatial localization are
perhaps the two most relevant. To reduce invasiveness, the
microphones are placed far from the sound sources, and
possibly grouped in arrays, which may be distributed,
rather than arranged, in the room. Aiming at increased
performance, the model-based approach usually employed for
sound recognition or detection can be extended to other
co-occurring tasks such as source localization, so that
both tasks can be carried out jointly, using the same
formulation and processing. In this paper, we illustrate
that point by presenting together several new model-based
techniques that address the problems of overlapped-sound
recognition, multi-source localization, and channel
selection. They are briefly described and tested in a
smart-room environment with a multiple-microphone-array
setup. |