Signal processing tips from Hackaday

Signal processing is an electrical engineering subfield that focuses on analyzing, modifying, and synthesizing signals such as sound, images, and biological measurements. Electronic signal processing was first revolutionized by the MOSFET and then by the single-chip digital signal processor (DSP). Digital signal processing is the processing of digitized, discrete-time sampled signals. Processing is done by general-purpose computers or by digital circuits such as ASICs, field-programmable gate arrays, or specialized digital signal processors (DSP chips).

Hackaday has published an interesting series of articles on signal processing; here are some picks from it:

RTFM: ADCs And DACs
https://hackaday.com/2019/10/16/rtfm-adcs-and-dacs/

DSP Spreadsheet: IQ Diagrams
https://hackaday.com/2019/11/15/dsp-spreadsheet-iq-diagrams/

Sensor Filters For Coders
https://hackaday.com/2019/09/06/sensor-filters-for-coders/

DSP Spreadsheet: FIR Filtering
https://hackaday.com/2019/10/03/dsp-spreadsheet-fir-filtering/

Fourier Explained: [3Blue1Brown] Style!
https://hackaday.com/2019/07/13/fourier-explained-3blue1brown-style/

DSP Spreadsheet: Frequency Mixing
https://hackaday.com/2019/11/01/dsp-spreadsheet-frequency-mixing/

Spice With A Sound Card
https://hackaday.com/2019/07/03/spice-with-a-sound-card/
- check also RTspice, a real-time netlist-based audio circuit plugin, at https://github.com/thadeuluiz/RTspice

Reverse Engineering The Sound Blaster
https://hackaday.com/2019/06/19/reverse-engineering-the-sound-blaster/

FM Signal Detection The Pulse-Counting Way
https://hackaday.com/2019/08/28/fm-signal-detection-the-pulse-counting-way/

Here is an extra item, not from Hackaday: an interesting online signal processing tool for generating sounds:
https://z.musictools.live/#95

Comments

  1. Tomi Engdahl says:

    Virtual Oscilloscope
    This online virtual oscilloscope allows you to visualise live sound input and get to grips with how to adjust the display.
    https://academo.org/demos/virtual-oscilloscope/

  2. Tomi Engdahl says:

    Parametric Press Unravels The JPEG Format
    https://hackaday.com/2023/02/14/parametric-press-unravels-the-jpeg-format/

    This is the first we’ve heard of Parametric Press — a digital magazine with some deep dives into a variety of subjects (such as particle physics, “big data” and such) that have interactive elements or simulations of various types embedded within each story.

    The first one that sprung up in our news feed is a piece by [Omar Shehata] on the humble JPEG image format. In it, he explains the how and why of the JPEG encoding process, allowing the reader to play with the various concepts along the way, in real time, within the browser.

    https://parametric.press/issue-01/unraveling-the-jpeg/

  3. Tomi Engdahl says:

    A Guide to Choosing the Right Signal-Processing Technique
    March 29, 2023
    From audio beamforming to blind source separation, this article discusses the pros and cons of the different techniques for signal processing in your device design.
    https://www.electronicdesign.com/technologies/analog/article/21262990/audiotelligence-a-guide-to-choosing-the-right-signalprocessing-technique

    What you’ll learn:

    What signal-processing techniques are available?
    How do the different signal-processing techniques work?
    Tips on choosing the right signal-processing technique for your application.

    Noise is all around us—at work and at home—making it difficult to pick out and clearly hear one voice amid the cacophony, especially as we reach middle age. Electronic devices have the same issue: Audio signals picked up by their microphones are often contaminated with interference, noise, and reverberation. Signal-processing techniques, such as beamforming and blind source separation, can come to the rescue. But what’s the best option, and for which applications?

    Intelligible speech is crucial for a wide variety of electronic devices, ranging from phones, computers, hearing-assistance devices, and conferencing systems to transcription services, car infotainment, and home assistants. But a one-size-fits-all approach isn’t the way to get the best performance out of such widely different devices.

    Variations in factors such as the number of microphones and the size of the microphone array will have an effect on which signal-processing technique is the most appropriate. The choice requires consideration not just of the performance you need, but the situation in which you need the application to work, as well as the physical constraints of the product you have in mind.

    Audio Beamforming

    Audio beamforming is one of the most versatile multi-microphone methods for emphasizing a particular source in an acoustic scene. Beamformers can be divided into two types, depending on how they work: data-independent or adaptive.

    One of the simplest forms of data-independent beamformers is a delay-and-sum beamformer, where the microphone signals are delayed to compensate for the different path lengths between a target source and the different microphones. This means that when the signals are summed, the target source coming from a certain direction will experience coherent combining, and it’s expected that signals arriving from other directions will suffer, to some extent, from destructive combining.
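
    To make this concrete, here is a minimal delay-and-sum sketch in Python/NumPy (an illustration added here, not from the article; it assumes a linear far-field array with a known arrival angle and rounds to integer-sample delays, whereas practical beamformers use fractional delays):

    import numpy as np

    def delay_and_sum(mic_signals, mic_positions_m, angle_rad, fs=48000, c=343.0):
        # mic_signals: (n_mics, n_samples); mic_positions_m: positions along the array axis
        n_mics, n_samples = mic_signals.shape
        delays_s = mic_positions_m * np.sin(angle_rad) / c    # plane-wave path differences
        delays_smp = np.round((delays_s - delays_s.min()) * fs).astype(int)
        out = np.zeros(n_samples)
        for sig, d in zip(mic_signals, delays_smp):
            out[:n_samples - d] += sig[d:]                    # time-align, then sum coherently
        return out / n_mics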

    However, in many audio consumer applications, these types of beamformers will be of little benefit because they need the wavelength of the signal to be small compared with the size of the microphone array. They work well in top-of-the-range conferencing systems with microphone arrays 1 m in diameter containing hundreds of microphones to cover the wide dynamic range of wavelengths. But such systems are expensive to produce and therefore only suitable for the business conferencing market.

    Consumer devices, on the other hand, usually contain just a few microphones in a small array. Consequently, delay-and-sum beamformers struggle because the wavelengths of speech are large compared with the small microphone array.

    Another problem is the fact that sound doesn’t move in straight lines—a given source has multiple different paths to the microphones, each with differing amounts of reflection and diffraction. This means that simple delay-and-sum beamformers aren’t very effective at extracting a source of interest from an acoustic scene.

    Adaptive Beamformers

    Adaptive beamformers are a more advanced beamforming technique. One example is the minimum variance distortionless response (MVDR) beamformer. It tries to pass the signal arriving from the target direction in a distortionless way, while attempting to minimize the power at the output of the beamformer. This has the effect of trying to preserve the target source while attenuating the noise and interference.
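
    For reference, the textbook MVDR solution at a single frequency bin is w = R^{-1} d / (d^H R^{-1} d), where R is the spatial covariance matrix and d the steering vector. A small NumPy sketch (illustrative only; the diagonal loading is a common robustness trick, not something the article specifies):

    import numpy as np

    def mvdr_weights(R, d, diag_load=1e-6):
        # R: (n_mics, n_mics) covariance at one bin; d: (n_mics,) steering vector
        n = R.shape[0]
        R = R + diag_load * (np.trace(R).real / n) * np.eye(n)  # guard against mismatch
        Rinv_d = np.linalg.solve(R, d)
        return Rinv_d / (d.conj() @ Rinv_d)  # unit gain toward d, minimum output power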

    Such a technique can work well in ideal laboratory conditions, but in the real world, microphone mismatch and reverberation can lead to inaccuracy in modeling the effect of the source location relative to the array. The result is that these beamformers often perform poorly because they will start cancelling parts of the target source.

    A voice activity detector could be added to address the target cancellation problem, and the adaptation of the beamformer can be turned off when the target source is active. This typically works well when there’s just one target source. However, if there are multiple competing speakers, this technique has limited effectiveness.
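
    A sketch of that gating idea (hypothetical helper names, an illustration added here; it assumes per-bin STFT snapshots and an external VAD decision):

    import numpy as np

    def update_covariance(R, frame, vad_is_speech, alpha=0.95):
        # Freeze adaptation while the target talks, so the beamformer
        # never learns to cancel the target source.
        if not vad_is_speech:
            R = alpha * R + (1 - alpha) * np.outer(frame, frame.conj())
        return R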

    Many modern devices use another beamforming technique called adaptive sidelobe cancellation, which tries to null out the sources that aren’t from the direction of interest. These are state-of-the-art in modern hearing aids, allowing the user to concentrate on sources directly in front of them.

    Blind Source Separation

    An alternative approach to improving speech intelligibility in noisy environments is to use blind source separation (BSS) (see video below). Time-frequency masking BSS estimates the time-frequency envelope of each source and then attenuates the time-frequency points that are dominated by interference and noise.
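
    A minimal sketch of the masking step in Python/SciPy (an illustration added here; the per-source magnitude estimates target_mag_est and noise_mag_est are assumed to come from an upstream estimator, which is where the actual separation work happens):

    import numpy as np
    from scipy.signal import stft, istft

    def apply_tf_mask(x, target_mag_est, noise_mag_est, fs=16000):
        f, t, X = stft(x, fs=fs, nperseg=512)
        mask = target_mag_est / (target_mag_est + noise_mag_est + 1e-12)  # soft mask in [0, 1]
        _, y = istft(X * mask, fs=fs, nperseg=512)
        return y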

    Another type of BSS uses linear multichannel filters. The acoustic scene is separated into its constituent parts using statistical models of how sources generally behave. BSS then calculates a multichannel filter whose output best fits these statistical models. In doing so, it intrinsically extracts all of the sources in the scene, not just one.

    The multichannel filter method can handle microphone mismatch and will deal well with reverberation and multiple competing speakers. It doesn’t need any prior knowledge of the sources, the microphone array, or the acoustic scene, since all of these variables are absorbed into the design of the multichannel filter. Changing a microphone, or a calibration error, simply changes the optimal multichannel filter.
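
    As a toy demonstration of blind linear unmixing (instantaneous mixing only; real acoustic BSS has to handle convolutive mixtures, typically per frequency bin in the STFT domain):

    import numpy as np
    from sklearn.decomposition import FastICA

    t = np.linspace(0, 1, 8000)
    s1 = np.sin(2 * np.pi * 440 * t)           # "speaker 1"
    s2 = np.sign(np.sin(2 * np.pi * 97 * t))   # "speaker 2"
    A = np.array([[1.0, 0.6], [0.4, 1.0]])     # unknown mixing (two mics)
    X = np.c_[s1, s2] @ A.T                    # observed microphone signals
    est = FastICA(n_components=2, random_state=0).fit_transform(X)
    # est recovers the sources up to scaling and permutation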

    Because BSS works from the audio data rather than the microphone geometry, it’s a very robust approach that’s insensitive to calibration issues and can generally achieve much higher separation of sources in real-world situations than any beamformer. And, because it separates all sources irrespective of direction, it can be used to automatically follow a multi-way conversation.

    BSS Drawbacks

    However, BSS is not without its problems. For most BSS algorithms, the number of sources that can be separated depends on the number of microphones in the array. In addition, because it works from the data, BSS needs a consistent frame of reference.

    As a result, the technique is limited to devices that have a stationary microphone array. Examples include a tabletop hearing device, a microphone array for fixed conferencing systems, or video calling from a phone or tablet that’s being held steady in your hands or on a table.

    When there’s background chatter, BSS will generally separate the most dominant sources in the mix, which may include the annoyingly loud person at the next table. So, to work effectively, BSS needs to be combined with an ancillary algorithm to determine which of the sources are the sources of interest.

    On its own, BSS separates sources very well, but it doesn’t reduce the background noise by more than about 9 dB. To obtain really good performance, it must be paired with a noise-reduction technique.

    Many solutions for noise reduction use artificial intelligence (AI), as in Zoom and other conferencing systems, to analyze the signal in the time-frequency domain and identify which components are due to the signal and which are due to noise. This can work well with just a single microphone. The big problem with this technique, though, is that it extracts the signal by dynamically gating the time-frequency content, which can lead to unpleasant artifacts at poor signal-to-noise ratios (SNRs), and it may introduce considerable latency.
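
    A bare-bones spectral-gating suppressor illustrates the time-frequency gating idea (a sketch added here, not Zoom's or any vendor's algorithm; it assumes the first half-second of the recording is noise only):

    import numpy as np
    from scipy.signal import stft, istft

    def spectral_gate(x, fs=16000, noise_seconds=0.5, over_sub=1.5, floor=0.1):
        f, t, X = stft(x, fs=fs, nperseg=512)            # hop = 256 samples by default
        n_noise = max(1, int(noise_seconds * fs / 256))  # noise-only frames at the start
        noise_mag = np.abs(X[:, :n_noise]).mean(axis=1, keepdims=True)
        gain = np.maximum(1.0 - over_sub * noise_mag / (np.abs(X) + 1e-12), floor)
        _, y = istft(X * gain, fs=fs, nperseg=512)
        return y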

    A low-latency noise-suppression algorithm combined with BSS, on the other hand, gives up to 26 dB of noise suppression and makes products suitable for real-time use—with a latency of just 5 ms and a more natural sound with fewer distortions than AI solutions. Hearing devices, in particular, need ultra-low latency to keep lip sync; it’s extremely off-putting for users if the sound they hear lags behind the mouth movements of the person they are talking to.

    The number of electronic devices that need to receive clear audio to work effectively is rising every year.

    Ask the Audio Experts: Separating Sounds with Blind Source Separation
    https://www.youtube.com/watch?v=qd7G-Xlktdw

  4. Tomi Engdahl says:

    Hackaday Prize 2023: Learn DSP With The Portable All-in-One Workstation
    https://hackaday.com/2023/05/16/hackaday-prize-2023-learn-dsp-with-the-portable-all-in-one-workstation/

    Learning Digital Signal Processing (DSP) techniques traditionally involves working through a good bit of mathematics and signal theory. To promote a hands-on approach, [Clyne] developed the DSP PAW (Portable All-in-one Workstation). DSP PAW hardware and software provide a complete learning environment for any computer where DSP algorithms can be entered as C++ code through an Arduino-like IDE.

    The DSP PAW demonstrating attenuation controlled by a potentiometer.

    The DSP PAW hardware comprises a custom board that plugs onto an STM32 NUCLEO Development Board from STMicroelectronics.

    DSP PAW
    Design, study, and analyze DSP algorithms from anywhere.
    https://hackaday.io/project/190725-dsp-paw

  5. Tomi Engdahl says:

    DIY Programmable Guitar Pedal Rocks The Studio & Stage
    https://hackaday.com/2023/05/16/diy-programmable-guitar-pedal-rocks-the-studio-stage/

    Ever wondered how to approach making your own digital guitar effects pedal? [Steven Hazel] and a friend have done exactly that, using an Adafruit Feather M4 Express board and a Teensy Audio Adapter board together to create a DIY programmable digital unit that looks ready to drop into an enclosure and get put right to work in the studio or on the stage.

    The bulk of the work is done with two parts, and can be prototyped easily on a breadboard.

    [Steven] also made a custom PCB to mount everything, including all the right connectors, but the device can be up and running with not much more than the two main parts and a breadboard.

    Building a Guitar Pedal Prototype with a Feather
    https://blog.blacklightunicorn.com/building-a-guitar-pedal-with-the-adafruit-feather-m4/

