US8909523B2 - Method and acoustic signal processing system for interference and noise suppression in binaural microphone configurations - Google Patents


Info

Publication number
US8909523B2
Authority
US
United States
Prior art keywords
noise
spectral density
power spectral
auto power
estimate
Prior art date
Legal status
Active, expires
Application number
US13/154,738
Other versions
US20110307249A1 (en)
Inventor
Walter Kellermann
Klaus Reindl
Yuanhang Zheng
Current Assignee
Sivantos Pte Ltd
Original Assignee
Siemens Medical Instruments Pte Ltd
Priority date
Filing date
Publication date
Application filed by Siemens Medical Instruments Pte Ltd
Publication of US20110307249A1
Assigned to SIEMENS MEDICAL INSTRUMENTS PTE. LTD. Assignors: REINDL, KLAUS; ZHENG, YUANHANG; KELLERMANN, WALTER
Application granted
Publication of US8909523B2
Assigned to Sivantos Pte. Ltd. (change of name from SIEMENS MEDICAL INSTRUMENTS PTE. LTD.)
Legal status: Active
Adjusted expiration

Classifications

    • G10L 21/0208: Noise filtering (speech enhancement, e.g. noise reduction or echo cancellation)
    • G10L 2021/02165: Two microphones, one receiving mainly the noise signal and the other one mainly the speech signal
    • G10L 2021/02168: Noise filtering characterised by the method used for estimating noise, the estimation exclusively taking place during speech pauses
    • G10L 2021/065: Aids for the handicapped in understanding
    • H04R 25/407: Circuits for combining signals of a plurality of transducers
    • H04R 25/552: Binaural hearing aids using an external connection, either wireless or wired
    • H04R 2225/43: Signal processing in hearing aids to enhance the speech intelligibility
    • H04R 2430/03: Synergistic effects of band splitting and sub-band processing
    • H04R 2430/25: Array processing for suppression of unwanted side-lobes in directivity characteristics, e.g. a blocking matrix
    • H04R 3/005: Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones

Definitions

  • the invention offers the advantage over existing methods that no assumption about the properties of noise and interference components is made. Moreover, instead of introducing heuristic parameters to constrain the speech enhancement algorithm to compensate for noise estimation errors, the invention directly focuses on reducing the bias of the estimated noise and interference components and thus improves the noise reduction performance of speech enhancement algorithms. Moreover, the invention helps to reduce distortions for both, the target speech components and the residual noise and interference components.
  • FIG. 1 a block diagram of an acoustic signal processing system for binaural noise reduction without bias correction according to prior art
  • FIG. 2 a block diagram of an acoustic signal processing system for binaural noise reduction with bias correction
  • FIG. 3 an overview about four test scenarios
  • FIG. 4 a diagram of SIR improvement for the invented system depicted in FIG. 2 .
  • the core of the invention is a method to obtain a noise PSD estimate with reduced bias.
  • equation 3 can be written in the time-frequency domain with the common noise PSD estimate Ŝññ (equation 5).
  • the estimated bias ΔŜññ (equation 7) is given as the difference between the obtained common noise PSD estimate Ŝññ and the optimum noise PSD estimate Ŝnono.
  • the noise PSD estimation bias ΔŜññ is described by the correlation of the noise components in the individual microphone signals x1, x2. As long as the correlation of the noise components in the individual channels x1, x2 is high, this bias ΔŜññ is also high. Only for ideally uncorrelated noise components will the bias ΔŜññ be zero. As the noise PSD estimation bias ΔŜññ is signal-dependent (equation 7 depends on the PSD estimates of the source signals Ŝsqsq) and speech signals are highly non-stationary, equation 7 can hardly be estimated at all times and all frequencies. Only if the target speaker s1 is inactive can the noise PSD estimation bias ΔŜññ be obtained, as the microphone signals x1, x2 then contain only noise and interference components, and thus the bias of the noise PSD estimate Ŝññ can be reduced.
  • a valuable quantity is the well-known Magnitude Squared Coherence (MSC) of the noise components.
  • a noise PSD estimate Ŝn̂n̂ with reduced bias can be obtained as follows.
  • a target voice activity detector VAD for each time-frequency bin is necessary (just as in standard single-channel noise suppression) to have access to the quantities described previously. If the target speaker is inactive (s1 ≈ 0), the microphone signals x1, x2 filtered by the BM can directly be used as noise estimate. The PSD estimate Ŝvpvp of the filtered microphone signals then directly yields the noise PSD.
  • the MSC of the noise components in the right and left channel x 1 , x 2 is estimated.
  • the estimated MSC is applied to decide whether the common noise PSD estimate Ŝññ (equation 5) exhibits a strong or a low bias.
  • the MSC of the filtered noise components in the right and left channel x1, x2 is given by
  • MSC = |Ŝv,n1v,n2|² / (Ŝv,n1v,n1 · Ŝv,n2v,n2).
  • Ŝññ (equation 5) represents an estimate with a strong bias whenever the noise components in the two channels are strongly correlated.
  • to obtain the bias reduced estimate Ŝn̂n̂, three different quantities need to be estimated, namely the MSC, a target VAD for each time-frequency bin, and an estimate of Ŝv,n1v,n1 + Ŝv,n2v,n2.
  • FIG. 2 shows a block diagram of an acoustic signal processing system for binaural noise reduction with bias correction according to the invention described above.
  • the system for blind binaural signal extraction comprises a two microphone setup, a right microphone M 1 and a left microphone M 2 .
  • the system can be part of binaural hearing aid devices with a single microphone at each ear.
  • the mixing of the original sources s q is modeled by a filter denoted by an acoustic mixing system AMS.
  • the acoustic mixing system AMS captures reverberation and scattering at the user's head.
  • a blocking matrix BM forces a spatial null to a certain direction φtar which is assumed to be the target speaker location, assuring that the source signal s1 arriving from this direction can be suppressed well.
  • the output of the blocking matrix BM is an estimated common noise signal ñ, an estimate for all noise and interference components.
  • the microphone signals x1, x2, the common noise signal ñ, and a voice activity detection signal VAD are used as input for a noise power spectral density estimation unit PU.
  • in the unit PU, the noise and interference PSDs Ŝv,npv,np, p ∈ {1, 2} as well as the common noise PSD Ŝññ and the MSC are calculated. These calculated values are input to a bias reduction unit BU.
  • in the bias reduction unit, the common noise PSD Ŝññ is modified according to equation 13 in order to obtain the desired bias reduced common noise PSD Ŝn̂n̂.
  • the bias reduced common noise PSD Ŝn̂n̂ is then used to drive speech enhancement filters w1, w2 which transfer the microphone signals x1, x2 into enhanced binaural output signals y1, y2.
  • the estimate of the MSC of the noise components is considered to be based on an ideal VAD.
  • the MSC of the noise components is given in the time-frequency domain by
  • MSC[v,n] = |Ŝn1n2[v,n]|² / (Ŝn1n1[v,n] · Ŝn2n2[v,n]).
  • v denotes the frequency bin and n is the frame index.
  • Ŝn1n2[v,n] represents the cross PSD of the noise components n1[v,n] and n2[v,n].
  • Ŝnpnp[v,n], p ∈ {1, 2} denotes the auto PSD of np[v,n], p ∈ {1, 2}.
  • the noise components np[v,n], p ∈ {1, 2} are only accessible during the absence of the target source; consequently, the MSC can only be estimated at these time-frequency points and is calculated as
  • MSC[vI,n] = |Ŝv,n1v,n2[vI,n]|² / (Ŝv,n1v,n1[vI,n] · Ŝv,n2v,n2[vI,n]).
  • the time-frequency points [vI,n] represent the set of those time-frequency points where the target source is inactive, and, correspondingly, [vA,n] denote those time-frequency points dominated by the active target source. Note that here v,np[vI,n] is used instead of np[vI,n], since in equation 13 the coherence of the filtered noise components is considered. Besides, in order to have reliable estimates, the obtained MSC is recursively averaged with a time constant 0 < λ < 1:
  • MSC‾[vI,n] = λ · MSC‾[vI,n−1] + (1 − λ) · MSC[vI,n].
  • the second term to be estimated for equation 13 is the sum of the power of the noise components contained in the individual microphone signals.
  • Ŝv1v1[vI,n] + Ŝv2v2[vI,n] ≈ Ŝv,n1v,n1[vI,n] + Ŝv,n2v,n2[vI,n].
  • this correction function is then used to correct the original noise PSD estimate Ŝññ[vI,n] to obtain an estimate of the separated noise PSD Ŝv,n1v,n1[vI,n] + Ŝv,n2v,n2[vI,n] that is necessary for equation 13.
  • the estimates are recursively averaged with a time constant 0 < λ < 1.
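The recursive averaging described above can be written as a one-line update per frame. The sketch below is illustrative and not part of the patent; the value λ = 0.9 is an assumed example, not taken from the text:

```python
def recursive_average(prev, current, lam=0.9):
    """First-order recursive (exponential) averaging used to smooth the
    MSC and PSD estimates over frames:
        avg[n] = lam * avg[n-1] + (1 - lam) * current[n]
    with a time constant 0 < lam < 1 (lam = 0.9 is an assumed value)."""
    return lam * prev + (1.0 - lam) * current
```

A larger λ averages over more frames, which stabilizes the estimate at the cost of slower tracking of non-stationary noise.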
  • the proposed scheme (FIG. 2) with the enhanced noise estimate (equation 24) and the improved Wiener filter (equation 25) is evaluated in four different scenarios with a hearing aid as illustrated in FIG. 3.
  • the desired target speaker is denoted by s and is located in front of the hearing aid user.
  • the interfering point sources are denoted by ni, i ∈ {1, 2, 3} and background babble noise is denoted by nbp, p ∈ {1, 2}. From Scenario 1 to Scenario 3, the number of interfering point sources ni is increased. In Scenario 4, additional background babble noise nbp is added (in comparison to Scenario 3).
  • the SIR (signal-to-interference ratio) of the input signal decreases from −0.3 dB to −4 dB.
  • the signals were recorded in a living-room-like environment with a reverberation time of T60 ≈ 300 ms.
  • an artificial head was equipped with Siemens Life BTE hearing aids without processors. Only the signals of the frontal microphones of the hearing aids were recorded.
  • the sampling frequency was 16 kHz and the distance between the sources and the center of the artificial head was approximately 1.1 m.
  • FIG. 4 illustrates the SIR improvement for a living-room-like environment (T60 ≈ 300 ms) and 256 subbands.
  • the SIR improvement is defined as the difference between the output SIR and the input SIR, computed per channel from long-time signal powers.
  • σ²s,out,p and σ²n,out,p represent the (long-time) signal power of the speech components and the residual noise and interference components at the output of the proposed scheme (FIG. 2), respectively.
  • σ²s,in,p and σ²n,in,p represent the (long-time) signal power of the speech components and the noise and interference components at the input.
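The SIR-improvement computation just described can be sketched as follows; the function names and the per-channel power arguments are illustrative assumptions, not identifiers from the patent:

```python
import numpy as np

def sir_db(speech_power, noise_power):
    """Signal-to-interference ratio in dB from long-time signal powers."""
    return 10.0 * np.log10(speech_power / noise_power)

def sir_improvement(s_out, n_out, s_in, n_in):
    """SIR improvement in dB: output SIR minus input SIR, computed from
    the long-time powers of the speech components and the (residual)
    noise and interference components."""
    return sir_db(s_out, n_out) - sir_db(s_in, n_in)
```

For example, if the speech-to-interference power ratio rises from 1:1 at the input to 10:1 at the output, the improvement is 10 dB.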
  • the first column in FIG. 4 for each scenario shows the SIR improvement obtained for the scheme depicted in FIG. 1 without the proposed method for bias reduction.
  • the noise estimate is obtained by equation 2 and the spectral weights bp[v,n], p ∈ {1, 2} are obtained by using a BSS-based algorithm.
  • the spectral weights for the speech enhancement filter are obtained by equation 3.
  • the second column in FIG. 4 represents the maximum performance achieved by the invented method to reduce the bias of the common noise estimate (equations 13 and 25). Here, it is assumed that all terms that in reality need to be estimated are known.
  • the last column depicts the SIR improvement achieved by the invented approach with the estimated MSC (equations 17 and 18), the estimated noise PSD (equation 24), and the improved speech enhancement filter given by equation 25.
  • the target VAD for each time-frequency bin is still assumed to be ideal. It can be seen that the proposed method can achieve about 2 to 2.5 dB maximum improvement compared to the original system, where the bias of the common noise PSD is not reduced. Even with the estimated terms (last column), the proposed approach can still achieve an SIR improvement close to the maximum performance.

Abstract

A method determines a bias reduced noise and interference estimation in a binaural microphone configuration with a right and a left microphone signal at a time-frame with a target speaker active. The method includes a determination of the auto power spectral density estimate of the common noise formed of noise and interference components of the right and left microphone signals and a modification of the auto power spectral density estimate of the common noise by using an estimate of the magnitude squared coherence of the noise and interference components contained in the right and left microphone signals determined at a time frame without a target speaker active. An acoustic signal processing system and a hearing aid implement the method for determining the bias reduced noise and interference estimation. The noise reduction performance of speech enhancement algorithms is improved by the invention. Further, distortions of the target speech signal and residual noise and interference components are reduced.

Description

CROSS-REFERENCE TO RELATED APPLICATION
This application claims the priority, under 35 U.S.C. §119, of European patent application EP 10005957, filed Jun. 9, 2010; the prior application is herewith incorporated by reference in its entirety.
BACKGROUND OF THE INVENTION Field of the Invention
The present invention relates to a method and an acoustic signal processing system for noise and interference estimation in a binaural microphone configuration with reduced bias. Moreover, the present invention relates to a speech enhancement method and hearing aids.
Until recently, only bilateral speech enhancement techniques were used for hearing aids, i.e., the signals were processed independently for each ear, so the binaural human auditory system could not be matched. Bilateral configurations may distort crucial binaural information that is needed to localize sound sources correctly and to improve speech perception in noise. Due to the availability of wireless technologies for connecting both ears, several binaural processing strategies are currently under investigation. Binaural multi-channel Wiener filtering approaches preserving binaural cues for the speech and noise components are state of the art. For multi-channel techniques, determining the noise components in each individual microphone is desirable. Since, in practice, it is almost impossible to obtain these separate noise estimates, the combination of a common noise estimate with single-channel Wiener filtering techniques to obtain binaural output signals is investigated.
FIG. 1 depicts a well-known system for blind binaural signal extraction and a two microphone setup (M1, M2). Hearing aid devices with a single microphone at each ear are considered. The mixing of the original sources sq[k] is modeled by a filter of length M denoted by an acoustic mixing system AMS.
This leads to the microphone signals xp[k]
x_p[k] = Σ_{q=1}^{Q} Σ_{κ=0}^{M−1} h_qp[κ] s_q[k−κ] + n_bp[k],  p ∈ {1, 2},  (1)
where hqp[κ], κ = 0, . . . , M−1 denote the coefficients of the filter model from the q-th source sq[k], q = 1, . . . , Q to the p-th sensor xp[k], p ∈ {1, 2}. The filter model captures reverberation and scattering at the user's head. The source s1[k] is seen as the target source to be separated from the remaining Q−1 interfering point sources sq[k], q = 2, . . . , Q, and babble noise denoted by nbp[k], p ∈ {1, 2}. In order to extract desired components from the noisy microphone signals xp[k], a reliable estimate for all noise and interference components is necessary. A blocking matrix BM forces a spatial null to a certain direction φtar, which is assumed to be the target speaker location, to assure that the source signal s1[k] arriving from that direction can be suppressed well. Thus, an estimate for all noise and interference components is obtained, which is then used to drive speech enhancement filters wi[k], i ∈ {1, 2}. The enhanced binaural output signals are denoted by yi[k], i ∈ {1, 2}.
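The convolutive mixing model of equation (1) can be sketched in a few lines. This is an illustrative reconstruction only; the function name and array shapes are assumptions, not from the patent:

```python
import numpy as np

def mix_sources(sources, filters, babble=None):
    """Convolutive mixing model of equation (1): each microphone signal
    x_p[k] is the sum of the Q source signals convolved with the
    length-M impulse responses h_qp, plus optional babble noise n_bp.

    sources : array of shape (Q, K)    -- original source signals s_q[k]
    filters : array of shape (Q, 2, M) -- impulse responses h_qp[kappa]
    babble  : optional array of shape (2, K) -- babble noise n_bp[k]
    """
    Q, K = sources.shape
    x = np.zeros((2, K))
    for p in range(2):
        for q in range(Q):
            # truncate the full convolution back to the signal length K
            x[p] += np.convolve(sources[q], filters[q, p])[:K]
        if babble is not None:
            x[p] += babble[p]
    return x
```

With a unit impulse as the only source and filters [1, 0.5] and [0.25, 0], the two microphone signals are simply the impulse responses themselves.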
For all speech enhancement algorithms a good noise estimate is the key for the best possible noise reduction. For binaural hearing aids and a two-microphone setup, the easiest way to obtain a noise estimate is to subtract both channels x1[k], x2[k] assuming that the desired signal component is the same in both channels. There are also more sophisticated solutions that can also deal with reverberation. Generally, the noise estimate ñ[v,n] is given in the time-frequency domain by
ñ[v,n] = Σ_{p=1}^{2} bp[v,n] · xp[v,n] = Σ_{p=1}^{2} vp[v,n],  (2)
where v and n denote the frequency band and the block index, respectively. bp[v,n], p ∈ {1, 2} denote the spectral weights of the blocking matrix BM. Since with such blocking matrices only a common noise estimate ñ[v,n] is available, it is essential to compute a single speech enhancement filter applied to both microphone signals x1[k], x2[k]. A well-known single Wiener filter approach is given in the time-frequency domain by
w[v,n] = w1[v,n] = w2[v,n] = 1 − μ · Ŝññ[v,n] / (Ŝv1v1[v,n] + Ŝv2v2[v,n]),  (3)
where μ is a real number and can be chosen to achieve a trade-off between noise reduction and speech distortion. Ŝññ[v,n] and Ŝvpvp[v,n], p ∈ {1, 2} denote auto power spectral density (PSD) estimates of the estimated noise signal ñ[v,n] and of the filtered microphone signals. The microphone signals are filtered with the coefficients of the blocking matrix according to equation 2.
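The Wiener-type gain of equation (3) can be sketched as follows. The gain floor is an added practical safeguard against negative gains from estimation errors, not part of the patent's formula:

```python
import numpy as np

def wiener_weight(S_nn, S_v1v1, S_v2v2, mu=1.0, floor=0.0):
    """Single Wiener-type gain of equation (3), applied identically to
    both microphone signals:
        w = 1 - mu * S_nn / (S_v1v1 + S_v2v2)
    mu trades noise reduction against speech distortion; the optional
    floor clips negative gains that arise from estimation errors."""
    w = 1.0 - mu * S_nn / (S_v1v1 + S_v2v2)
    return np.maximum(w, floor)
```

If the noise PSD estimate equals half the total filtered-signal PSD, the gain is 0.5; an overestimated noise PSD would otherwise drive the gain negative, which the floor prevents.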
The noise estimation procedures (e.g. subtracting the signals from both channels x1[k], x2[k], or more sophisticated approaches based on blind source separation) lead to an unavoidable systematic error (a bias).
SUMMARY OF THE INVENTION
It is accordingly an object of the invention to provide a method and acoustic signal processing system for interference and noise suppression in binaural microphone configurations which overcome the above-mentioned disadvantages of the heretofore-known devices and methods of this general type and which provide for noise and interference estimation in a binaural microphone configuration with reduced bias. It is a further object to provide a related speech enhancement method and a related hearing aid.
With the foregoing and other objects in view there is provided, in accordance with the invention, a method for a bias reduced noise and interference estimation in a binaural microphone configuration with a right and a left microphone signal at a timeframe with a target speaker active. The method comprises the following method steps:
determining the auto power spectral density estimate of a common noise estimate comprising noise and interference components of the right and left microphone signals and
modifying the auto power spectral density estimate of the common noise estimate by using an estimate of the magnitude squared coherence of the noise and interference components contained in the right and left microphone signals determined at a time frame without a target speaker active.
The method uses a target voice activity detection and exploits the magnitude squared coherence of the noise components contained in the individual microphones. The magnitude squared coherence is used as criterion to decide if the estimated noise signal obtains a large or a weak bias.
According to a further preferred embodiment of the method, the magnitude squared coherence (MSC) is calculated as
MSC = |Ŝv,n1v,n2|² / (Ŝv,n1v,n1 · Ŝv,n2v,n2),
where Ŝv,n1v,n2 is the cross power spectral density of the noise and interference components, filtered by a blocking matrix, contained in the right and left microphone signals, Ŝv,n1v,n1 is the auto power spectral density of the blocking-matrix filtered noise and interference components contained in the right microphone signal, and Ŝv,n2v,n2 is the auto power spectral density of the blocking-matrix filtered noise and interference components contained in the left microphone signal.
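A minimal sketch of the MSC estimate, assuming access to STFT frames of the blocking-matrix filtered noise components taken from target-inactive frames (the function name and array shapes are illustrative):

```python
import numpy as np

def magnitude_squared_coherence(N1, N2):
    """MSC of the (blocking-matrix filtered) noise components in the
    right and left channel, per frequency bin:
        MSC = |S_n1n2|^2 / (S_n1n1 * S_n2n2)
    N1, N2 : arrays of shape (frames, bins) with complex STFT
    coefficients from frames without target activity.
    Returns values in [0, 1]: 1 for fully coherent noise,
    0 for uncorrelated noise."""
    S12 = np.mean(N1 * np.conj(N2), axis=0)   # cross PSD estimate
    S11 = np.mean(np.abs(N1) ** 2, axis=0)    # auto PSD, right channel
    S22 = np.mean(np.abs(N2) ** 2, axis=0)    # auto PSD, left channel
    return np.abs(S12) ** 2 / (S11 * S22)
```

Identical channels give MSC = 1; channels whose cross terms cancel over frames give MSC = 0.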
In accordance with an additional feature of the invention, the bias reduced auto power spectral density estimate Ŝn̂n̂ of the common noise is calculated as
Ŝn̂n̂ = MSC · (Ŝv,n1v,n1 + Ŝv,n2v,n2) + (1 − MSC) · Ŝññ,
where Ŝññ is the auto power spectral density estimate of the common noise estimate.
In accordance with an additional feature of the invention, the above object is solved by a further method for a bias reduced noise and interference estimation in a binaural microphone configuration with a right and a left microphone signal. At timeframes during which a target speaker is active, the bias reduced auto power spectral density estimate is determined according to the method for a bias reduced noise and interference estimation according to the invention, and at time frames during which the target speaker is inactive, the bias reduced auto power spectral density estimate is calculated as Ŝn̂n̂ = Ŝv,n1v,n1 + Ŝv,n2v,n2.
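The two branches (target active vs. inactive) can be combined in one vectorized sketch; the boolean mask and function name are illustrative assumptions, not from the patent:

```python
import numpy as np

def bias_reduced_noise_psd(msc, S_nn_common, S_n1n1, S_n2n2, target_active):
    """Bias-reduced noise auto-PSD per frequency bin.

    When the target speaker is active, the common noise PSD is blended
    with the sum of the per-channel noise PSDs according to the MSC:
        S = MSC * (S_n1n1 + S_n2n2) + (1 - MSC) * S_nn_common
    When the target is inactive, the filtered microphone signals contain
    only noise, so the sum of the per-channel PSDs is used directly."""
    blended = msc * (S_n1n1 + S_n2n2) + (1.0 - msc) * S_nn_common
    direct = S_n1n1 + S_n2n2
    return np.where(target_active, blended, direct)
```

For MSC = 1 (fully coherent noise) the common estimate is fully replaced by the per-channel sum; for MSC = 0 the common estimate is trusted as-is.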
In accordance with a preferred embodiment of the invention, the bias reduced auto power spectral density estimate is determined in different frequency bands.
According to the present invention, the above object is further solved by a method for speech enhancement with a method described above, wherein the bias reduced auto power spectral density estimate is used for calculating filter weights of a speech enhancement filter.
With the above and other objects in view there is also provided, in accordance with the invention, an acoustic signal processing system for a bias reduced noise and interference estimation at a timeframe in which a target speaker is active with a binaural microphone configuration comprising a right and left microphone with a right and a left microphone signal. The system comprises:
a power spectral density estimation unit determining the auto power spectral density estimate of the common noise estimate comprising noise and interference components of the right and left microphone signals; and
a bias reduction unit modifying the auto power spectral density estimate of the common noise estimate by using an estimate of the magnitude squared coherence of the noise and interference components contained in the right and left microphone signals determined at a time frame without a target speaker active.
According to a further preferred embodiment of the acoustic signal processing system, the bias reduced auto power spectral density estimate Ŝ_{n̂n̂} of the common noise is calculated as
Ŝ_{n̂n̂} = MSC · (Ŝ_{v,n₁v,n₁} + Ŝ_{v,n₂v,n₂}) + (1 − MSC) · Ŝ_{ññ},
where Ŝ_{ññ} is the auto power spectral density estimate of the common noise.
In accordance with again an added feature of the invention, the acoustic signal processing system further comprises a speech enhancement filter with filter weights which are calculated by using the bias reduced auto power spectral density estimate.
With the above and other objects in view there is also provided, in accordance with the invention, a hearing aid with an acoustic signal processing system as outlined above.
Finally, there is provided a computer program product with a computer program which comprises software means for executing a method for bias reduced noise and interference estimation according to the invention, when the computer program is executed in a processing unit.
The invention offers the advantage over existing methods that no assumption about the properties of the noise and interference components is made. Moreover, instead of introducing heuristic parameters to constrain the speech enhancement algorithm to compensate for noise estimation errors, the invention directly focuses on reducing the bias of the estimated noise and interference components and thus improves the noise reduction performance of speech enhancement algorithms. Moreover, the invention helps to reduce distortions for both the target speech components and the residual noise and interference components.
The above described methods and systems are preferably employed for the speech enhancement in hearing aids. However, the present application is not limited to such use only. The described methods can rather be utilized in connection with other binaural/dual-channel audio devices.
Other features which are considered as characteristic for the invention are set forth in the appended claims.
Although the invention is illustrated and described herein as embodied in a method and acoustic signal processing system for interference and noise suppression in binaural microphone configurations, it is nevertheless not intended to be limited to the details shown, since various modifications and structural changes may be made therein without departing from the spirit of the invention and within the scope and range of equivalents of the claims.
The construction and method of operation of the invention, however, together with additional objects and advantages thereof will be best understood from the following description of specific embodiments when read in connection with the accompanying drawings.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING
FIG. 1 is a block diagram of an acoustic signal processing system for binaural noise reduction without bias correction according to the prior art;
FIG. 2 is a block diagram of an acoustic signal processing system for binaural noise reduction with bias correction;
FIG. 3 is an overview of the four test scenarios; and
FIG. 4 is a diagram of the SIR improvement for the invented system depicted in FIG. 2.
DETAILED DESCRIPTION OF THE INVENTION
The core of the invention is a method to obtain a noise PSD estimate with reduced bias.
In the following, for the sake of clarity, the block index n as well as the subband index v are omitted. Assuming that the necessary noise estimate ñ is obtained by equation 2, equation 3 can be written in the time-frequency domain as
w = 1 − μ · [ Σ_{q=2}^{Q} (|b₁|²·|h_{q1}|² + |b₂|²·|h_{q2}|² + 2·Re{b₁b₂*·h_{q1}h_{q2}*}) · Ŝ_{s_q s_q} ] / [ Σ_{q=1}^{Q} (|b₁|²·|h_{q1}|² + |b₂|²·|h_{q2}|²) · Ŝ_{s_q s_q} ],   (4)
where h_{qp} denotes the spectral weight from source q = 1, …, Q to microphone p, p ∈ {1, 2}, for the frequency band v. s₁ is assumed to be the desired source and s_q, q = 2, …, Q, denote interfering point sources. According to equation (4), optimum noise suppression can only be achieved if the noise components in the numerator are the same as those in the denominator. Assuming optimum desired-speech suppression by the blocking matrix BM and defining s₁ as the desired speech signal to be extracted from the noisy signals x_p, p ∈ {1, 2}, we derive a noise PSD estimation bias ΔŜ_{ññ}. The common noise PSD estimate Ŝ_{ññ} is identified from equations 2, 3, and 4 as
Ŝ_{ññ} = Σ_{q=2}^{Q} (|b₁|²·|h_{q1}|² + |b₂|²·|h_{q2}|² + 2·Re{b₁b₂*·h_{q1}h_{q2}*}) · Ŝ_{s_q s_q}.   (5)
Applying the well-known standard Wiener filter theory to equation (4), the optimum noise PSD estimate Ŝ_{n_o n_o} that would be necessary to achieve the best possible noise suppression reads
Ŝ_{n_o n_o} = Σ_{q=2}^{Q} (|b₁|²·|h_{q1}|² + |b₂|²·|h_{q2}|²) · Ŝ_{s_q s_q}.   (6)
The estimated bias ΔŜññ is then given as the difference between the obtained common noise PSD estimate Ŝññ and the optimum noise PSD estimate Ŝn o n o and reads
ΔŜ_{ññ} = Ŝ_{ññ} − Ŝ_{n_o n_o} = Σ_{q=2}^{Q} 2·Re{b₁b₂*·h_{q1}h_{q2}*} · Ŝ_{s_q s_q}.   (7)
Equation (7) shows that the noise PSD estimation bias ΔŜ_{ññ} is determined by the correlation of the noise components in the individual microphone signals x₁, x₂. As long as this correlation is high, the bias ΔŜ_{ññ} is also high; only for ideally uncorrelated noise components does the bias vanish. Since the noise PSD estimation bias is signal-dependent (equation (7) depends on the PSD estimates Ŝ_{s_q s_q} of the source signals) and the interfering speech signals are highly non-stationary, equation (7) can hardly be estimated at all times and all frequencies. Only when the target speaker s₁ is inactive can the noise PSD estimation bias ΔŜ_{ññ} be obtained, since the microphone signals x₁, x₂ then contain only noise and interference components, and thus the bias of the noise PSD estimate Ŝ_{ññ} can be reduced.
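The relation between equations (5), (6) and (7) can be checked numerically. The sketch below builds random complex blocking-matrix weights and propagation weights (all values are synthetic, chosen only for illustration; source index 0 plays the role of the target q = 1) and verifies that the bias is exactly the sum of the cross terms.

```python
import numpy as np

rng = np.random.default_rng(1)

Q = 4                  # source 0 is the target; sources 1..3 interfere
b = rng.standard_normal(2) + 1j * rng.standard_normal(2)            # BM weights b1, b2
h = rng.standard_normal((Q, 2)) + 1j * rng.standard_normal((Q, 2))  # spectral weights h_qp
S_s = rng.uniform(0.5, 2.0, Q)                                      # source PSDs

def direct_term(q):
    # |b1|^2 |h_q1|^2 + |b2|^2 |h_q2|^2, present in eqs. (5) and (6)
    return abs(b[0])**2 * abs(h[q, 0])**2 + abs(b[1])**2 * abs(h[q, 1])**2

def cross_term(q):
    # 2*Re{b1 b2* h_q1 h_q2*}, the term that creates the bias in eq. (5)
    return 2.0 * np.real(b[0] * np.conj(b[1]) * h[q, 0] * np.conj(h[q, 1]))

S_common = sum((direct_term(q) + cross_term(q)) * S_s[q] for q in range(1, Q))  # eq. (5)
S_opt = sum(direct_term(q) * S_s[q] for q in range(1, Q))                       # eq. (6)
bias = S_common - S_opt                                                          # eq. (7)
```

Setting the cross terms to zero (uncorrelated noise components in the two channels) makes the bias vanish, which is exactly the statement of the paragraph above.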
In order to obtain a bias reduced noise PSD estimate even when the target speaker s₁ is active, reliable parameters related to the noise PSD estimation bias ΔŜ_{ññ} that can be evaluated during target activity need to be estimated. This is important because the interferers are speech signals, which are highly non-stationary. It is therefore not sufficient to estimate the noise PSD estimation error ΔŜ_{ññ} during target speech pauses only.
According to the invention, a valuable quantity is the well-known Magnitude Squared Coherence (MSC) of the noise components. On the one hand, if the MSC is low (close to zero), then |ΔŜ_{ññ}| (equation 7) is low, since the cross-correlation between the noise components in the right and left channels x₁, x₂ is weak. On the other hand, if the MSC is close to one, the noise PSD estimation bias |ΔŜ_{ññ}| (equation 7) becomes quite high, as the noise components contained in the microphone signals x₁, x₂ are strongly correlated. Using the MSC, it is thus possible to decide whether the common noise estimate exhibits a strong or a weak bias ΔŜ_{ññ}.
In summary, a noise PSD estimate Ŝññ with reduced bias can be obtained by:
    • using the microphone signals x₁, x₂, filtered by the blocking matrix, as noise and interference estimate during target speech pauses; and
    • applying the MSC of the noise and interference components of the microphone signals, estimated during target speech pauses, to decide whether the common noise estimate exhibits a strong or a weak bias.
We now describe how to reduce the bias ΔŜ_{ññ} when the target speaker is active and the MSC is close to one. First of all, a target voice activity detector VAD for each time-frequency bin is necessary (just as in standard single-channel noise suppression) to have access to the quantities described previously. If the target speaker is inactive (s₁ ≡ 0), the microphone signals x₁, x₂ filtered by the blocking matrix BM can directly be used as noise estimate. The PSD estimate Ŝ_{v_p v_p} of the filtered microphone signals is then given by
Ŝ_{v_p v_p} = Ŝ_{v,n_p v,n_p} = Σ_{q=2}^{Q} |b_p|²·|h_{qp}|² · Ŝ_{s_q s_q},  p ∈ {1, 2},   (8)
where Ŝ_{v,n_p v,n_p} describes the noise components of the right and left channel x₁, x₂, respectively, after filtering by the blocking matrix BM. Thus, the noise PSD estimate with reduced bias Ŝ_{n̂n̂} is given by
Ŝ ññ v,n 1 v,n 1 v,n 2 v,n 2   (9)
Moreover, during target speech pauses, the MSC of the noise components in the right and left channel x₁, x₂ is estimated. The estimated MSC is applied to decide whether the common noise PSD estimate Ŝ_{ññ} (equation 5) exhibits a strong or a weak bias. The MSC of the filtered noise components in the right and left channel x₁, x₂ is given by
MSC = |Ŝ_{v,n₁v,n₂}|² / (Ŝ_{v,n₁v,n₁} · Ŝ_{v,n₂v,n₂})   (10)
and is always in the range 0 ≤ MSC ≤ 1. MSC = 1 indicates ideally correlated signals, whereas MSC = 0 means ideally de-correlated signals. If the MSC is low, the common noise PSD estimate Ŝ_{ññ} given by equation 5 is already an estimate with low bias, and thus we can use:
Ŝññññ.  (11)
If the MSC is close to one, Ŝ_{ññ} (equation 5) represents an estimate with strong bias, since |ΔŜ_{ññ}| (equation 7) becomes quite high. In this case, the following combination is proposed to obtain the bias reduced noise PSD estimate Ŝ_{n̂n̂}:
Ŝ_{n̂n̂} = MSC · (Ŝ_{v,n₁v,n₁} + Ŝ_{v,n₂v,n₂}) + (1 − MSC) · Ŝ_{ññ},   (12)
where Ŝv,n 1 v,n 1 v,n 2 v,n 2 is an estimate taken from the most recent data frame with s1=0. In general, the noise PSD estimate with reduced bias Ŝññ is given by
Ŝ_{n̂n̂} = α · (Ŝ_{v,n₁v,n₁} + Ŝ_{v,n₂v,n₂}) + (1 − α) · Ŝ_{ññ},   (13)
where α = 1 if the target speaker is inactive and α = MSC otherwise. Obtaining Ŝ_{n̂n̂} thus requires estimating three different quantities: the MSC, a target VAD decision for each time-frequency bin, and an estimate of Ŝ_{v,n₁v,n₁} + Ŝ_{v,n₂v,n₂}.
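The general combination rule of equation 13 can be sketched per time-frequency bin as follows; the function name and the toy input values are illustrative, not taken from the patent.

```python
import numpy as np

def combined_noise_psd(S_sep_sum, S_common, msc, target_active):
    """Eq.-(13)-style combination per bin: alpha = 1 during target
    speech pauses, alpha = MSC while the target speaker is active."""
    alpha = np.where(target_active, np.asarray(msc, float), 1.0)
    return (alpha * np.asarray(S_sep_sum, float)
            + (1.0 - alpha) * np.asarray(S_common, float))

# frame 0: speech pause -> the separated-noise sum is used directly;
# frame 1: target active with MSC = 0.5 -> equal-weight blend.
S_hat = combined_noise_psd([4.0, 4.0], [8.0, 8.0], [0.9, 0.5],
                           target_active=[False, True])
```

Vectorizing over bins with `np.where` keeps the per-bin decision cheap, which matters in a subband hearing-aid pipeline.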
FIG. 2 shows a block diagram of an acoustic signal processing system for binaural noise reduction with bias correction according to the invention described above. The system for blind binaural signal extraction comprises a two-microphone setup with a right microphone M1 and a left microphone M2. For example, the system can be part of binaural hearing aid devices with a single microphone at each ear. The mixing of the original sources s_q is modeled by a filter denoted as the acoustic mixing system AMS. The acoustic mixing system AMS captures reverberation and scattering at the user's head. The source s₁ is regarded as the target source to be separated from the remaining Q−1 interfering point sources s_q, q = 2, …, Q, and from babble noise denoted by n_{b_p}, p ∈ {1, 2}. In order to extract the desired components from the noisy microphone signals x_p, a reliable estimate of all noise and interference components is necessary. A blocking matrix BM forces a spatial null toward a certain direction Φ_tar, which is assumed to be the target speaker location, ensuring that the source signal s₁ arriving from this direction is suppressed well. The output of the blocking matrix BM is an estimated common noise signal ñ, an estimate of all noise and interference components.
The microphone signals x₁, x₂, the common noise signal ñ, and a voice activity detection signal VAD are used as input for a noise power spectral density estimation unit PU. In the unit PU, the noise and interference PSDs Ŝ_{v,n_p v,n_p}, p ∈ {1, 2}, as well as the common noise PSD Ŝ_{ññ} and the MSC are calculated. These values are passed to a bias reduction unit BU, in which the common noise PSD Ŝ_{ññ} is modified according to equation 13 in order to obtain the desired bias reduced common noise PSD Ŝ_{n̂n̂}.
The bias reduced common noise PSD Ŝ_{n̂n̂} is then used to drive the speech enhancement filters w₁, w₂, which transfer the microphone signals x₁, x₂ into the enhanced binaural output signals y₁, y₂.
Estimation of the MSC
The estimation of the MSC of the noise components is considered to be based on an ideal VAD. The MSC of the noise components is given in the time-frequency domain by
MSC[v,n] = |Ŝ_{n₁n₂}[v,n]|² / (Ŝ_{n₁n₁}[v,n] · Ŝ_{n₂n₂}[v,n]),   (14)
where v denotes the frequency bin and n is the frame index. Ŝ_{n₁n₂}[v,n] represents the cross PSD of the noise components n₁[v,n] and n₂[v,n], and Ŝ_{n_p n_p}[v,n], p ∈ {1, 2}, denotes the auto PSD of n_p[v,n]. The noise components n_p[v,n] are only accessible during the absence of the target source; consequently, the MSC can only be estimated at these time-frequency points and is calculated as:
MSC[v_I,n] = |Ŝ_{v,n₁v,n₂}[v_I,n]|² / (Ŝ_{v,n₁v,n₁}[v_I,n] · Ŝ_{v,n₂v,n₂}[v_I,n])   (15)
= |Ŝ_{v₁v₂}[v_I,n]|² / (Ŝ_{v₁v₁}[v_I,n] · Ŝ_{v₂v₂}[v_I,n]),   (16)
where v,n_p[v_I,n], p ∈ {1, 2}, are the filtered noise components and v_p[v_I,n], p ∈ {1, 2}, are the filtered microphone signals x₁, x₂. The time-frequency points [v_I,n] represent the set of those time-frequency points where the target source is inactive; correspondingly, [v_A,n] denotes those time-frequency points dominated by the active target source. Note that v,n_p[v_I,n] is used here instead of n_p[v_I,n], since equation 13 considers the coherence of the filtered noise components. Besides, in order to obtain reliable estimates, the obtained MSC is recursively averaged with a time constant 0 < β < 1:
MSC[v_I,n] = β · MSC[v_I,n−1] + (1 − β) · |Ŝ_{v₁v₂}[v_I,n]|² / (Ŝ_{v₁v₁}[v_I,n] · Ŝ_{v₂v₂}[v_I,n]).   (17)
Since the noise components are not accessible at time-frequency points with an active target source, the MSC cannot be updated there and keeps the value estimated at the same frequency bin of the previous frame:
MSC[v_A,n] = MSC[v_A,n−1].   (18)
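The update-or-hold tracking of equations 17 and 18 can be sketched as a simple recursion over frames (shown here for a single frequency bin; the initial value of zero and the value of β are assumptions for illustration).

```python
import numpy as np

def track_msc(S12, S11, S22, target_active, beta=0.5):
    """Recursive MSC estimate: updated during target speech pauses
    (eq. 17), held at its previous value while the target speaker is
    active (eq. 18)."""
    out = np.zeros(len(S12))
    prev = 0.0                      # assumed initial value
    for n in range(len(S12)):
        if target_active[n]:
            out[n] = prev           # hold, eq. (18)
        else:
            inst = abs(S12[n]) ** 2 / (S11[n] * S22[n])
            out[n] = beta * prev + (1.0 - beta) * inst   # eq. (17)
        prev = out[n]
    return out

# Perfectly coherent noise (instantaneous MSC = 1); frame 2 has the
# target active, so the estimate is held there.
msc = track_msc([1.0] * 4, [1.0] * 4, [1.0] * 4,
                target_active=[False, False, True, False])
```

The recursion converges toward the instantaneous coherence during pauses and simply freezes during target activity, exactly mirroring the two update rules.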
Estimation of the Separated Noise PSD
The second term to be estimated for equation 13 is the sum of the powers of the noise components contained in the individual microphone signals. During target speech pauses, due to the absence of the target speech signal, these components are directly accessible, yielding
Ŝ_{v₁v₁}[v_I,n] + Ŝ_{v₂v₂}[v_I,n] = Ŝ_{v,n₁v,n₁}[v_I,n] + Ŝ_{v,n₂v,n₂}[v_I,n].
Now, a correction function is introduced given by
f_Corr[v_I,n] = (Ŝ_{v₁v₁}[v_I,n] + Ŝ_{v₂v₂}[v_I,n]) / Ŝ_{ññ}[v_I,n].   (19)
This correction function f_Corr[v_I,n] is then used to correct the original noise PSD estimate Ŝ_{ññ}[v_I,n] in order to obtain the estimate of the separated noise PSD Ŝ_{v,n₁v,n₁}[v_I,n] + Ŝ_{v,n₂v,n₂}[v_I,n] that is necessary for equation 13. Again, in order to obtain a reliable estimate of the correction function, the estimates are recursively averaged with a time constant 0 < γ < 1:
f_Corr[v_I,n] = γ · f_Corr[v_I,n−1] + (1 − γ) · (Ŝ_{v₁v₁}[v_I,n] + Ŝ_{v₂v₂}[v_I,n]) / Ŝ_{ññ}[v_I,n].   (20)
An estimate of Ŝv,n 1 v,n 1 [vI,n]+Ŝv,n 2 v,n 2 [vI,n] can now be obtained by
Ŝ_{v,n₁v,n₁}[v_I,n] + Ŝ_{v,n₂v,n₂}[v_I,n] = Ŝ_{v₁v₁}[v_I,n] + Ŝ_{v₂v₂}[v_I,n] = f_Corr[v_I,n] · Ŝ_{ññ}[v_I,n].   (21)
However, at the time-frequency points of active target speech, Ŝ_{v₁v₁}[v_A,n] + Ŝ_{v₂v₂}[v_A,n] = Ŝ_{v,n₁v,n₁}[v_A,n] + Ŝ_{v,n₂v,n₂}[v_A,n] no longer holds, and the correction function (equation 19) cannot be updated. But, since the PSD estimates are obtained by time-averaging, the spectra of the signals can be assumed to be similar for neighboring frames. Therefore, at the time-frequency points of active target speech, one can take the correction function estimated at the same frequency bin in the previous frame:
f_Corr[v_A,n] = f_Corr[v_A,n−1],   (22)
such that Ŝv,n 1 v,n 1 [vA,n]+Ŝv,n 2 v,n 2 [vA,n] can be estimated by:
Ŝ_{v,n₁v,n₁}[v_A,n] + Ŝ_{v,n₂v,n₂}[v_A,n] = f_Corr[v_A,n] · Ŝ_{ññ}[v_A,n].   (23)
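Equations 19 to 23 amount to tracking a correction factor during speech pauses and holding it during target activity. A minimal single-bin sketch (function name, initial value, and γ are illustrative assumptions):

```python
import numpy as np

def separated_noise_psd(S_v1, S_v2, S_common, target_active, gamma=0.5):
    """Track the correction function f_Corr during speech pauses
    (eqs. 19/20), hold it during target activity (eq. 22), and return
    f_Corr * S_common per frame (eqs. 21/23)."""
    f = np.zeros(len(S_common))
    prev = 0.0                                # assumed initial value
    for n in range(len(S_common)):
        if target_active[n]:
            f[n] = prev                       # hold, eq. (22)
        else:
            inst = (S_v1[n] + S_v2[n]) / S_common[n]       # eq. (19)
            f[n] = gamma * prev + (1.0 - gamma) * inst     # eq. (20)
        prev = f[n]
    return f * np.asarray(S_common, float)   # eqs. (21)/(23)

# the separated noise power is half the common estimate in every pause
est = separated_noise_psd([1.0] * 3, [1.0] * 3, [4.0] * 3,
                          target_active=[False, False, True])
```

In a real system the recursion would warm up during the first speech pauses; the zero initialization here only keeps the example short.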
Now, based on the estimated MSC and the estimated noise PSD, the improved common noise estimate can be calculated by:
Ŝ_{n̂n̂}[v,n] = MSC[v,n] · (Ŝ_{v,n₁v,n₁}[v,n] + Ŝ_{v,n₂v,n₂}[v,n]) + (1 − MSC[v,n]) · Ŝ_{ññ}[v,n].   (24)
Then, the original speech enhancement filter given by equation 3 can be recalculated with a noise PSD estimate that exhibits a reduced bias:
w_Imp[v,n] = 1 − μ · Ŝ_{n̂n̂}[v,n] / (Ŝ_{v₁v₁}[v,n] + Ŝ_{v₂v₂}[v,n]),   (25)
where Ŝ_{n̂n̂}[v,n] is obtained by equation (24).
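The improved spectral weight of equation 25 is a one-liner per bin. The sketch below adds an optional spectral floor `w_min`, which is a common practical safeguard but an assumption here, not part of the patent text.

```python
import numpy as np

def improved_wiener_weight(S_noise, S_v1, S_v2, mu=1.0, w_min=0.0):
    """Spectral weight of the eq.-(25) form, driven by the bias-reduced
    noise PSD estimate. w_min is an assumed optional floor to avoid
    negative gains (not specified in the patent text)."""
    w = 1.0 - mu * np.asarray(S_noise, float) / (
        np.asarray(S_v1, float) + np.asarray(S_v2, float))
    return np.maximum(w, w_min)

w = improved_wiener_weight(1.0, 1.0, 1.0)                # noise is half the power
w_floored = improved_wiener_weight(3.0, 1.0, 1.0, w_min=0.1)
```

Without a floor, an overestimated noise PSD can push the weight negative; the floor trades a little residual noise for reduced speech distortion.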
Evaluation
In the sequel, the proposed scheme (FIG. 2) with the enhanced noise estimate (equation 24) and the improved Wiener filter (equation 25) is evaluated in various scenarios with a hearing aid, as illustrated in FIG. 3. The desired target speaker, denoted by s, is located in front of the hearing aid user. The interfering point sources are denoted by n_i, i ∈ {1, 2, 3}, and background babble noise is denoted by n_{b_p}, p ∈ {1, 2}. From Scenario 1 to Scenario 3, the number of interfering point sources n_i is increased. In Scenario 4, additional background babble noise n_{b_p} is added (in comparison to Scenario 3).
Corresponding to scenarios 1 to 4, the SIR (signal-to-interference ratio) of the input signal decreases from −0.3 dB to −4 dB. The signals were recorded in a living-room-like environment with a reverberation time of about T₆₀ ≈ 300 ms. For the recordings, an artificial head was equipped with Siemens Life BTE hearing aids without processors, and only the signals of the frontal microphones of the hearing aids were recorded. The sampling frequency was 16 kHz and the distance between the sources and the center of the artificial head was approximately 1.1 m.
FIG. 4 illustrates the SIR improvement for a living-room-like environment (T60≈300 ms) and 256 subbands. The SIR improvement is defined by
SIR_gain = (1/2) · Σ_{p=1}^{2} (SIR_{out,p} − SIR_{in,p})   (26)
= (1/2) · Σ_{p=1}^{2} (σ²_{s,out,p}/σ²_{n,out,p} − σ²_{s,in,p}/σ²_{n,in,p}),   (27)
with all quantities evaluated in dB. σ²_{s,out,p} and σ²_{n,out,p} represent the (long-time) signal powers of the speech components and of the residual noise and interference components at the output of the proposed scheme (FIG. 2), respectively; σ²_{s,in,p} and σ²_{n,in,p} represent the corresponding powers of the speech components and the noise and interference components at the input.
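The SIR improvement of equations 26 and 27 reduces to a channel-averaged dB difference. A small sketch with illustrative power values:

```python
import numpy as np

def sir_gain_db(p_s_out, p_n_out, p_s_in, p_n_in):
    """Mean over both channels of output SIR minus input SIR, in dB
    (the eq.-(26)/(27) form, with long-time powers as inputs)."""
    sir_out = 10.0 * np.log10(np.asarray(p_s_out, float) / np.asarray(p_n_out, float))
    sir_in = 10.0 * np.log10(np.asarray(p_s_in, float) / np.asarray(p_n_in, float))
    return float(np.mean(sir_out - sir_in))

# speech power raised 10x relative to noise in both channels -> 10 dB gain
gain = sir_gain_db([10.0, 10.0], [1.0, 1.0], [1.0, 1.0], [1.0, 1.0])
```

Averaging over the two channels keeps the measure binaural: an algorithm that improves one ear while degrading the other scores accordingly.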
The first column in FIG. 4 for each scenario shows the SIR improvement obtained for the scheme depicted in FIG. 1, i.e., without the proposed method for bias reduction. The noise estimate is obtained by equation 2 and the spectral weights b_p[v,n], p ∈ {1, 2}, are obtained by using a BSS-based algorithm. The spectral weights for the speech enhancement filter are obtained by equation 3. The second column in FIG. 4 represents the maximum performance achieved by the invented method to reduce the bias of the common noise estimate (equations 13 and 25); here, it is assumed that all terms that in reality need to be estimated are known. The last column depicts the SIR improvement achieved by the invented approach with the estimated MSC (equations 17 and 18), the estimated noise PSD (equation 24), and the improved speech enhancement filter given by equation 25. It should be noted that the target VAD for each time-frequency bin is still assumed to be ideal. It can be seen that the proposed method can achieve about 2 to 2.5 dB maximum improvement compared to the original system, in which the bias of the common noise PSD is not reduced. Even with the estimated terms (last column), the proposed approach still achieves an SIR improvement close to the maximum performance.
These results show that the novel method for reducing the noise bias of the common noise estimate according to the invention works well in practical applications and achieves a high improvement compared to an approach in which the noise PSD estimation bias is not taken into account.

Claims (14)

The invention claimed is:
1. A method for determining a bias reduced noise and interference estimation in a binaural microphone configuration, the method which comprises:
receiving with the binaural microphone configuration a right microphone signal and a left microphone signal during a time-frame with a target speaker active;
determining an auto power spectral density estimate of a common noise containing noise components and interference components of the right and left microphone signals; and
modifying the auto power spectral density estimate of the common noise by using an estimate of a magnitude squared coherence of the noise components and interference components contained in the right and left microphone signals determined during a time frame without a target speaker active.
2. The method according to claim 1, which comprises calculating the magnitude squared coherence estimate MSC as
MSC = |Ŝ_{v,n₁v,n₂}|² / (Ŝ_{v,n₁v,n₁} · Ŝ_{v,n₂v,n₂}),
where:
Ŝ_{v,n₁v,n₂} is a cross power spectral density of the noise and interference components contained in the right and left microphone signals filtered by a blocking matrix;
Ŝv,n 1 v,n 1 is the auto power spectral density of the noise and interference components contained in the right microphone signal filtered by the blocking matrix; and
Ŝv,n 2 v,n 2 is the auto power spectral density of the noise and interference components contained in the left microphone signal filtered by the blocking matrix.
3. The method according to claim 1, which comprises calculating the bias reduced auto power spectral density estimate Ŝ_{n̂n̂} of the common noise as

Ŝ_{n̂n̂} = MSC · (Ŝ_{v,n₁v,n₁} + Ŝ_{v,n₂v,n₂}) + (1 − MSC) · Ŝ_{ññ},
where Ŝ_{ññ} is the auto power spectral density estimate of the common noise.
4. A method for a bias reduced noise and interference estimation in a binaural microphone configuration with a right microphone signal and a left microphone signal, the method which comprises:
at time frames with a target speaker inactive, calculating the bias reduced auto power spectral density estimate Ŝ_{n̂n̂} as

Ŝ_{n̂n̂} = Ŝ_{v,n₁v,n₁} + Ŝ_{v,n₂v,n₂},
where
Ŝv,n 1 v,n 1 is the auto power spectral density of the noise and interference components contained in the right microphone signal filtered by the blocking matrix; and
Ŝv,n 2 v,n 2 is the auto power spectral density of the noise and interference components contained in the left microphone signal filtered by the blocking matrix; and
at time frames with the target speaker active, carrying out the method according to claim 1 to determine the bias reduced auto power spectral density estimate Ŝññ.
5. The method according to claim 4, which comprises determining the bias reduced auto power spectral density estimate in different frequency bands.
6. The method according to claim 1, which comprises determining the bias reduced auto power spectral density estimate in different frequency bands.
7. A speech enhancement method, which comprises:
providing a speech enhancement filter; and
performing the method according to claim 1 for determining a bias reduced auto power spectral density estimate; and
utilizing the bias reduced auto power spectral density estimate for calculating filter weights of the speech enhancement filter.
8. A speech enhancement method, which comprises:
providing a speech enhancement filter; and
performing the method according to claim 4 for determining a bias reduced auto power spectral density estimate; and
utilizing the bias reduced auto power spectral density estimate for calculating filter weights of the speech enhancement filter.
9. An acoustic signal processing system for a bias reduced noise and interference estimation at a timeframe with a target speaker active, comprising:
a binaural microphone configuration including a right microphone and a left microphone respectively outputting a right microphone signal and a left microphone signal;
a power spectral density estimation unit connected to receive the right and left microphone signals from said binaural microphone configuration and configured for determining an auto power spectral density estimate of a common noise containing noise and interference components of the right and left microphone signals; and
a bias reduction unit connected to said power spectral density estimation unit and configured for modifying the auto power spectral density estimate of the common noise by using an estimate of a magnitude squared coherence of the noise and interference components contained in the right and left microphone signals determined at a time frame without a target speaker active.
10. The acoustic signal processing system according to claim 9, wherein the bias reduced auto power spectral density estimate Ŝ_{n̂n̂} of the common noise is calculated as

Ŝ_{n̂n̂} = MSC · (Ŝ_{v,n₁v,n₁} + Ŝ_{v,n₂v,n₂}) + (1 − MSC) · Ŝ_{ññ},
where
MSC is the magnitude squared coherence of the noise and interference components;
Ŝññ is the auto power spectral density estimate of the common noise estimate;
Ŝv,n 1 v,n 1 is the auto power spectral density of the noise and interference components contained in the right microphone signal filtered by a blocking matrix; and
Ŝv,n 2 v,n 2 is the auto power spectral density of the noise and interference components contained in the left microphone signal filtered by the blocking matrix.
11. The acoustic signal processing system according to claim 10, which comprises a speech enhancement filter with filter weights that are calculated by using the bias reduced auto power spectral density estimate.
12. The acoustic signal processing system according to claim 9, which comprises a speech enhancement filter with filter weights that are calculated by using the bias reduced auto power spectral density estimate.
13. A hearing aid, comprising the acoustic signal processing system according to claim 9.
14. A computer program product, comprising a non-transitory computer program with computer-executable software means configured to execute the method according to claim 1 when the computer program is loaded onto and executed in a processing unit.
US13/154,738 2010-06-09 2011-06-07 Method and acoustic signal processing system for interference and noise suppression in binaural microphone configurations Active 2033-10-09 US8909523B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP20100005957 EP2395506B1 (en) 2010-06-09 2010-06-09 Method and acoustic signal processing system for interference and noise suppression in binaural microphone configurations
EP10005957 2010-06-09

Publications (2)

Publication Number Publication Date
US20110307249A1 US20110307249A1 (en) 2011-12-15
US8909523B2 true US8909523B2 (en) 2014-12-09

Family

ID=42666546

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/154,738 Active 2033-10-09 US8909523B2 (en) 2010-06-09 2011-06-07 Method and acoustic signal processing system for interference and noise suppression in binaural microphone configurations

Country Status (3)

Country Link
US (1) US8909523B2 (en)
EP (1) EP2395506B1 (en)
DK (1) DK2395506T3 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9736599B2 (en) 2013-04-02 2017-08-15 Sivantos Pte. Ltd. Method for evaluating a useful signal and audio device

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010091077A1 (en) * 2009-02-03 2010-08-12 University Of Ottawa Method and system for a multi-microphone noise reduction
WO2013101088A1 (en) * 2011-12-29 2013-07-04 Advanced Bionics Ag Systems and methods for facilitating binaural hearing by a cochlear implant patient
KR101934999B1 (en) * 2012-05-22 2019-01-03 삼성전자주식회사 Apparatus for removing noise and method for performing thereof
US9210499B2 (en) * 2012-12-13 2015-12-08 Cisco Technology, Inc. Spatial interference suppression using dual-microphone arrays
WO2014138774A1 (en) 2013-03-12 2014-09-18 Hear Ip Pty Ltd A noise reduction method and system
CN103475986A (en) * 2013-09-02 2013-12-25 南京邮电大学 Digital hearing aid speech enhancing method based on multiresolution wavelets
EP3113508B1 (en) * 2014-02-28 2020-11-11 Nippon Telegraph and Telephone Corporation Signal-processing device, method, and program
DE102015211747B4 (en) * 2015-06-24 2017-05-18 Sivantos Pte. Ltd. Method for signal processing in a binaural hearing aid
US10425745B1 (en) * 2018-05-17 2019-09-24 Starkey Laboratories, Inc. Adaptive binaural beamforming with preservation of spatial cues in hearing assistance devices
US10629226B1 (en) * 2018-10-29 2020-04-21 Bestechnic (Shanghai) Co., Ltd. Acoustic signal processing with voice activity detector having processor in an idle state

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5400409A (en) * 1992-12-23 1995-03-21 Daimler-Benz Ag Noise-reduction method for noise-affected voice channels
US6473733B1 (en) * 1999-12-01 2002-10-29 Research In Motion Limited Signal enhancement for voice coding
US20030014248A1 (en) * 2001-04-27 2003-01-16 Csem, Centre Suisse D'electronique Et De Microtechnique Sa Method and system for enhancing speech in a noisy environment
US20080159559A1 (en) * 2005-09-02 2008-07-03 Japan Advanced Institute Of Science And Technology Post-filter for microphone array
US20100166199A1 (en) * 2006-10-26 2010-07-01 Parrot Acoustic echo reduction circuit for a "hands-free" device usable with a cell phone
US7953596B2 (en) * 2006-03-01 2011-05-31 Parrot Societe Anonyme Method of denoising a noisy signal including speech and noise components
US8098844B2 (en) * 2002-02-05 2012-01-17 Mh Acoustics, Llc Dual-microphone spatial noise suppression
US8116478B2 (en) * 2007-02-07 2012-02-14 Samsung Electronics Co., Ltd Apparatus and method for beamforming in consideration of actual noise environment character
US8121311B2 (en) * 2007-11-05 2012-02-21 Qnx Software Systems Co. Mixer with adaptive post-filtering
US8195246B2 (en) * 2009-09-22 2012-06-05 Parrot Optimized method of filtering non-steady noise picked up by a multi-microphone audio device, in particular a “hands-free” telephone device for a motor vehicle
US8238575B2 (en) * 2008-12-12 2012-08-07 Nuance Communications, Inc. Determination of the coherence of audio signals
US8296136B2 (en) * 2007-11-15 2012-10-23 Qnx Software Systems Limited Dynamic controller for improving speech intelligibility
US8392184B2 (en) * 2008-01-17 2013-03-05 Nuance Communications, Inc. Filtering of beamformed speech signals
US8620672B2 (en) * 2009-06-09 2013-12-31 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for phase-based processing of multichannel signal
US8660281B2 (en) * 2009-02-03 2014-02-25 University Of Ottawa Method and system for a multi-microphone noise reduction


Non-Patent Citations (10)

* Cited by examiner, † Cited by third party
Title
Carter, G. Clifford, C. Knapp, and Albert H. Nuttall. "Statistics of the estimate of the magnitude-coherence function." IEEE Transactions on Audio and Electroacoustics 21.4 (1973): 388-389. *
European Patent Office Search Report, dated Sep. 20, 2010.
Freudenberger, Jürgen, Sebastian Stenzel, and Benjamin Venditti. "A noise PSD and cross-PSD estimation for two-microphone speech enhancement systems." Statistical Signal Processing, 2009. SSP'09. IEEE/SP 15th Workshop on. IEEE, 2009. *
Guérin, Alexandre, Régine Le Bouquin-Jeannés, and Gérard Faucon. "A two-sensor noise reduction system: applications for hands-free car kit." EURASIP Journal on Applied Signal Processing 2003 (2003): 1125-1134. *
Hu, Rong, et al., "Fast Noise Compensation for Speech Separation in Diffuse Noise", Acoustics, Speech and Signal Processing, ICASSP Proceedings, 2006 IEEE International Conference in Toulouse, France, pp. 866, IEEE, Piscataway, NJ, USA (ISBN: 978-1-4244-0469-8).
Le Bouquin, Régine, et al., "On Using the Coherence Function for Noise Reduction", Signal Processing V: Theories and Applications. Proceedings of EUSIPCO-90 Fifth European Signal Processing Conference, Sep. 18-21, 1990, pp. 1103-1106, vol. 1, Elsevier, Amsterdam, Netherlands (ISBN: 978-0-444-88636-1).
McCowan, Iain A., and Hervé Bourlard. "Microphone array post-filter based on noise field coherence." IEEE Transactions on Speech and Audio Processing 11.6 (2003): 709-716. *
Reindl, K., et al., "Speech Enhancement for Binaural Hearing Aids Based on Blind Source Separation", Proceedings of the 4th International Symposium on Communication, ISCSP, Mar. 3-5, 2010, pp. 1-6.
Wittkop, Thomas, and Volker Hohmann. "Strategy-selective noise reduction for binaural digital hearing aids." Speech Communication 39.1 (2003): 111-138. *
Zhang, Xuefeng, et al., "Decision Based Noise Cross Power Spectral Density Estimation for Two-Microphone Speech Enhancement Systems", 2005, pp. 813-816, Intel China Research Center, Beijing, China.

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9736599B2 (en) 2013-04-02 2017-08-15 Sivantos Pte. Ltd. Method for evaluating a useful signal and audio device

Also Published As

Publication number Publication date
EP2395506B1 (en) 2012-08-22
US20110307249A1 (en) 2011-12-15
EP2395506A1 (en) 2011-12-14
DK2395506T3 (en) 2012-09-10

Similar Documents

Publication Publication Date Title
US8909523B2 (en) Method and acoustic signal processing system for interference and noise suppression in binaural microphone configurations
US7761291B2 (en) Method for processing audio-signals
EP3701525B1 (en) Electronic device using a compound metric for sound enhancement
US7158933B2 (en) Multi-channel speech enhancement system and method based on psychoacoustic masking effects
US9064502B2 (en) Speech intelligibility predictor and applications thereof
US10218327B2 (en) Dynamic enhancement of audio (DAE) in headset systems
EP1465456B1 (en) Binaural signal enhancement system
EP2916321B1 (en) Processing of a noisy audio signal to estimate target and noise spectral variances
EP3509325A2 (en) A hearing aid comprising a beam former filtering unit comprising a smoothing unit
US10614788B2 (en) Two channel headset-based own voice enhancement
Luts et al. Multicenter evaluation of signal enhancement algorithms for hearing aids
US10154353B2 (en) Monaural speech intelligibility predictor unit, a hearing aid and a binaural hearing system
JP5659298B2 (en) Signal processing method and hearing aid system in hearing aid system
CN107147981B (en) Single ear intrusion speech intelligibility prediction unit, hearing aid and binaural hearing aid system
US9378754B1 (en) Adaptive spatial classifier for multi-microphone systems
EP3869821B1 (en) Signal processing method and device for earphone, and earphone
Doclo et al. Binaural speech processing with application to hearing devices
US8634581B2 (en) Method and device for estimating interference noise, hearing device and hearing aid
As' ad et al. Binaural beamforming with spatial cues preservation for hearing aids in real-life complex acoustic environments
US20220240026A1 (en) Hearing device comprising a noise reduction system
Tang et al. Binaural-cue-based noise reduction using multirate quasi-ANSI filter bank for hearing aids
Grimm et al. Wind Noise Reduction for a Closely Spaced Microphone Array
CN112424863A (en) Voice perception audio system and method

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: SIEMENS MEDICAL INSTRUMENTS PTE. LTD., SINGAPORE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KELLERMANN, WALTER;REINDL, KLAUS;ZHENG, YUANHANG;SIGNING DATES FROM 20110506 TO 20110509;REEL/FRAME:033476/0532

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: SIVANTOS PTE. LTD., SINGAPORE

Free format text: CHANGE OF NAME;ASSIGNOR:SIEMENS MEDICAL INSTRUMENTS PTE. LTD.;REEL/FRAME:036089/0827

Effective date: 20150416

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551)

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8