
Insights Into FFTs, Imaginary Numbers, and Accurate Spectrographs

I’m attempting to thoroughly re-write the data assessment portions of my QRSS VD software, and rather than rushing to code it (like I did last time) I’m working hard on every step trying to optimize the code. I came across some notes I made about Fast Fourier Transformations from the first time I coded the software, and thought I’d post some code I found helpful. Of particular satisfaction is an email I received from Alberto, I2PHD, the creator of Argo (the “gold standard” QRSS spectrograph software for Windows). In it he notes:

I think that [it is a mistake to] throw away the imaginary part of the FFT. What I do in Argo, in Spectran, in Winrad, in SDRadio and in all of my other programs is compute the magnitude of the [FFT] signal, then compute the logarithm of it, and only then I do a mapping of the colors on the screen with the result of this last computation.

Alberto, I2PHD (the creator of Argo)

UPDATE IN SEPTEMBER, 2020 (10 years later): I now understand that magnitude = sqrt(real^2 + imag^2) and this post is a bit embarrassing to read! Check out my .NET FFT library FftSharp for a more advanced discussion on this topic.

These concepts are simple to visualize when graphed. Here I’ve written a short Python script to listen to the microphone (which is being fed a 2kHz sine wave), perform the FFT, and graph the real FFT component, the imaginary FFT component, and their combined magnitude. The output is:

Of particular interest to me is the beautiful complementarity of the two curves. It makes me wonder what types of data can be extracted by the individual curves (or perhaps their difference?) down the road. I wonder if phase measurements would be useful in extracting weak carriers from beneath the noise floor?
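For anyone curious, both pieces of information are already sitting in the complex FFT result. Here’s a minimal sketch (using a synthetic 2kHz tone rather than sound card data, so it isn’t part of the script below) of how magnitude and phase could be pulled out with NumPy:

import numpy

rate = 44100
t = numpy.arange(1024)/float(rate)
pcm = numpy.sin(2*numpy.pi*2000*t)  # synthetic 2kHz tone

fft = numpy.fft.fft(pcm)
magnitude = numpy.abs(fft)    # sqrt(real^2 + imag^2)
phase = numpy.angle(fft)      # radians, -pi to +pi
powerDb = 10*numpy.log10(magnitude**2 + 1e-12)  # log scale (tiny offset avoids log(0))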

Here’s the code I used to generate the image above. Note that my microphone device was set to listen to my stereo output, and I generated a 2kHz sine wave using the command speaker-test -t sine -f 2000 on a PC running Linux. I hope you find it useful!

import numpy
import pyaudio
import pylab

### RECORD AUDIO FROM MICROPHONE ###
rate = 44100
soundcard = 1  # CUSTOMIZE THIS!!!
p = pyaudio.PyAudio()
strm = p.open(format=pyaudio.paInt16, channels=1, rate=rate,
              input_device_index=soundcard, input=True)
strm.read(1024)  # prime the sound card this way
pcm = numpy.frombuffer(strm.read(1024), dtype=numpy.int16)

### DO THE FFT ANALYSIS ###
fft = numpy.fft.fft(pcm)
fftr = 10*numpy.log10(abs(fft.real))[:len(pcm)//2]
ffti = 10*numpy.log10(abs(fft.imag))[:len(pcm)//2]
fftb = 10*numpy.log10(numpy.sqrt(fft.imag**2+fft.real**2))[:len(pcm)//2]
freq = numpy.fft.fftfreq(len(pcm))[:len(pcm)//2]
freq = freq*rate/1000  # make the frequency scale

### GRAPH THIS STUFF ###
pylab.subplot(411)
pylab.title("Original Data")
pylab.grid()
pylab.plot(numpy.arange(len(pcm))/float(rate)*1000, pcm, 'r-', alpha=1)
pylab.xlabel("Time (milliseconds)")
pylab.ylabel("Amplitude")
pylab.subplot(412)
pylab.title("Real FFT")
pylab.xlabel("Frequency (kHz)")
pylab.ylabel("Power")
pylab.grid()
pylab.plot(freq, fftr, 'b-', alpha=1)
pylab.subplot(413)
pylab.title("Imaginary FFT")
pylab.xlabel("Frequency (kHz)")
pylab.ylabel("Power")
pylab.grid()
pylab.plot(freq, ffti, 'g-', alpha=1)
pylab.subplot(414)
pylab.title("Real+Imaginary FFT")
pylab.xlabel("Frequency (kHz)")
pylab.ylabel("Power")
pylab.grid()
pylab.plot(freq, fftb, 'k-', alpha=1)
pylab.show()
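The soundcard index near the top is system-specific. If you aren’t sure which number to use, a quick throwaway sketch like this (just PyAudio’s standard device listing, not part of the script above) will print the available input devices:

import pyaudio

p = pyaudio.PyAudio()
for i in range(p.get_device_count()):
    info = p.get_device_info_by_index(i)
    if info["maxInputChannels"] > 0:  # only show devices that can record
        print(i, info["name"])
p.terminate()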

After fighting for a while with a “shifty baseline” in the FFT, I came to another realization. Let me first describe the problem: taking the FFT of different regions of the 2kHz wave, I got traces with the peak in the same location but with completely different “baselines”.

Like many things, I re-invented the wheel. Since I knew the PCM values weren’t changing, the only variable was the starting/stopping point of the linear sample. “Hard edges”, I imagined, must be the problem. I then wrote the following function to shape the PCM audio like a triangle, silencing the edges and sweeping the volume up toward the middle of the sample:

def shapeTriangle(data):
    # ramp linearly up to the middle of the sample, then back down
    half = len(data)//2
    triangle = numpy.array(list(range(half)) + list(range(half))[::-1]) + 1
    return data*triangle

After shaping the data BEFORE applying the FFT, the subsequent traces became MUCH more acceptable. Observe:

Now that I’ve done all this experimentation/thinking, I remembered that this is nothing new! Everyone talks about shaping the wave to minimize hard edges before taking the FFT. They call it windowing. Another case of me re-inventing the wheel because I’m too lazy to read others’ work. However, in my defense, I learned a lot by trying all this stuff – far more than I would have learned simply by copying someone else’s code into my script. Experimentation is the key to discovery!
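For what it’s worth, NumPy already ships with the classic window shapes, so the same idea doesn’t require a hand-rolled triangle. A minimal sketch (with a random array standing in for a chunk of microphone audio):

import numpy

pcm = numpy.random.randn(1024)     # stand-in for one chunk of audio samples
window = numpy.hanning(len(pcm))   # numpy.bartlett() gives the triangle shape I used above
fft = numpy.fft.fft(pcm * window)  # taper the edges, then transform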


Smoothing Window Data Averaging in Python - Moving Triangle Technique

⚠️ SEE UPDATED POST: Signal Filtering in Python

While I wrote a previous post on linear data smoothing with Python, those scripts were never fully polished. Fred (KJ4LFJ) asked me about this today and I felt bad I had nothing to send him. The script below isn’t fully polished either, but at least it’s clean. I’ve been using this method for all of my smoothing recently. Funny enough, none of my code was clean enough to copy and paste, so I wrote this from scratch tonight. It’s a function that takes a list (of any size), smooths it with a triangle window (of any size, set by “degree”), and returns the smoothed data, with or without flanking copies of data to make it the same length as the input. The script also graphs the original data vs. smoothed traces of varying degrees. The output is below. I hope it helps whoever wants it!

import numpy
import pylab


def smoothTriangle(data, degree, dropVals=False):
    """
    performs moving triangle smoothing with a variable degree.
    note that if dropVals is False, output length will be identical
        to input length, but with copies of data at the flanking regions
    """
    triangle = numpy.array(list(range(degree)) + [degree] + list(range(degree))[::-1]) + 1
    smoothed = []
    for i in range(degree, len(data)-degree*2):
        point = data[i:i+len(triangle)]*triangle
        smoothed.append(sum(point)/sum(triangle))
    if dropVals:
        return smoothed
    smoothed = [smoothed[0]]*(degree+degree//2)+smoothed
    while len(smoothed) < len(data):
        smoothed.append(smoothed[-1])
    return smoothed


### CREATE SOME DATA ###
data = numpy.random.random(100)  # make 100 random numbers from 0-1
data = numpy.array(data*100, dtype=int)  # scale them to integers from 0 to 99
for i in range(100):
    data[i] = data[i]+i**((150-i)/80.0)  # give it a funny trend

### GRAPH ORIGINAL/SMOOTHED DATA ###
pylab.plot(data, "k.-", label="original data", alpha=.3)
pylab.plot(smoothTriangle(data, 3), "-", label="smoothed d=3")
pylab.plot(smoothTriangle(data, 5), "-", label="smoothed d=5")
pylab.plot(smoothTriangle(data, 10), "-", label="smoothed d=10")
pylab.title("Moving Triangle Smoothing")
pylab.grid(alpha=.3)
pylab.axis([20, 80, 50, 300])
pylab.legend()
pylab.show()
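The same triangle-weighted average can also be written as a convolution, which is much shorter. This is just a sketch of the idea; it centers the window on each point, so the alignment and edge handling come out slightly different than the function above:

import numpy


def smoothTriangleConvolve(data, degree):
    """Triangle-weighted moving average using numpy.convolve ("same" mode keeps the length)."""
    triangle = numpy.concatenate((numpy.arange(1, degree+1),
                                  [degree+1],
                                  numpy.arange(degree, 0, -1)))
    return numpy.convolve(data, triangle/triangle.sum(), mode="same")

Calling smoothTriangleConvolve(data, 5) on the data above gives nearly the same trace as smoothTriangle(data, 5), just with different behavior at the ends.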

Simple Python Spectrograph with PyGame

While thinking of ways to improve my QRSS VD high-definition spectrograph software, I often wish I had a better way to display large spectrographs. Currently I’m using PIL (the Python Imaging Library) with TK and it’s slow as heck. I looked into the PyGame project, and it seems to be designed with speed in mind. I whipped-up this quick demo: a simple-case audio spectrograph which takes in audio from your sound card and graphs it, time vs. frequency. This method is far superior to the method I was using previously to display the data, because while QRSS VD can only update the entire GUI (500px by 8,000 px) every 3 seconds, early tests with PyGame suggest it can do it about 20 times a second (wow!). With less time/CPU going into the GUI, the program can be more responsive and my software can be less of a drain.

import sys
import pygame
import numpy
import threading
import pyaudio
import scipy
import scipy.fftpack
rate = 12000  # try 5000 for HD data, 48000 for realtime
soundcard = 2
windowWidth = 500
fftsize = 512
currentCol = 0
scooter = []
overlap = 5  # 1 for raw, realtime - 8 or 16 for high-definition


def graphFFT(pcm):
    global currentCol, data
    ffty = scipy.fftpack.fft(pcm)  # convert WAV to FFT
    ffty = abs(ffty[0:len(ffty)//2])/500  # FFT is mirror-imaged
    # ffty=(scipy.log(ffty))*30-50 # if you want uniform data
    print "MIN:t%stMAX:t%s" % (min(ffty), max(ffty))
    for i in range(len(ffty)):
        if ffty[i] < 0:
            ffty[i] = 0
        if ffty[i] > 255:
            ffty[i] = 255
    scooter.append(ffty)
    if len(scooter) < 6:
        return
    scooter.pop(0)
    ffty = (scooter[0]+scooter[1]*2+scooter[2]*3+scooter[3]*2+scooter[4])/9  # weighted average of the last 5 FFTs
    data = numpy.roll(data, -1, 0)
    data[-1] = ffty[::-1]
    currentCol += 1
    if currentCol == windowWidth:
        currentCol = 0


def record():
    p = pyaudio.PyAudio()
    inStream = p.open(format=pyaudio.paInt16, channels=1, rate=rate,
                      input_device_index=soundcard, input=True)
    linear = [0]*fftsize
    while True:
        linear = linear[fftsize//overlap:]  # drop the oldest samples
        pcm = numpy.frombuffer(inStream.read(
            fftsize//overlap), dtype=numpy.int16)
        linear = numpy.append(linear, pcm)
        graphFFT(linear)


pal = [(max((x-128)*2, 0), x, min(x*2, 255)) for x in range(256)]  # 8-bit palette mapping intensity to color
print(max(pal), min(pal))
data = numpy.array(numpy.zeros((windowWidth, fftsize//2)), dtype=int)
# data=Numeric.array(data) # for older PyGame that requires Numeric
pygame.init()  # crank up PyGame
pygame.display.set_caption("Simple Spectrograph")
screen = pygame.display.set_mode((windowWidth, fftsize//2))
world = pygame.Surface((windowWidth, fftsize//2), depth=8)  # MAIN SURFACE
world.set_palette(pal)
t_rec = threading.Thread(target=record)  # make thread for record()
t_rec.daemon = True  # daemon mode forces thread to quit with program
t_rec.start()  # launch thread
clk = pygame.time.Clock()
while 1:
    for event in pygame.event.get():  # check if we need to exit
        if event.type == pygame.QUIT:
            pygame.quit()
            sys.exit()
    pygame.surfarray.blit_array(world, data)  # place data in window
    screen.blit(world, (0, 0))
    pygame.display.flip()  # RENDER WINDOW
    clk.tick(30)  # limit to 30FPS
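One detail worth pointing out in the code above: the record() thread keeps a rolling buffer so consecutive FFTs overlap, which is what lets new columns appear much faster than non-overlapping FFTs would allow. Stripped down to just that idea (a sketch, not a drop-in replacement), the buffering looks roughly like this:

import numpy

fftsize = 512
overlap = 5  # each FFT re-uses 4/5 of the previous chunk's samples
buffer = numpy.zeros(fftsize)


def addSamples(newSamples):
    """Slide the analysis window forward by one chunk and return one FFT column."""
    global buffer
    buffer = numpy.append(buffer[len(newSamples):], newSamples)  # drop oldest, append newest
    return numpy.fft.fft(buffer)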

Simple-Case PyGame Example

I’m starting to investigate PyGame as an alternative to PIL and TK for my QRSS VD spectrograph project. This sample code makes a box bounce around a window.

import pygame, sys

pygame.init()  # load pygame modules
size = width, height = 320, 240  # size of window
speed = [2, 2]  # speed and direction
screen = pygame.display.set_mode(size)  # make window
s = pygame.Surface((100, 50))  # create surface 100px by 50px
s.fill((33, 66, 99))  # color the surface blue
r = s.get_rect()  # get the rectangle bounds for the surface
clock = pygame.time.Clock()  # make a clock
while 1:  # infinite loop
    clock.tick(30)  # limit framerate to 30 FPS
    for event in pygame.event.get():  # if something clicked
        if event.type == pygame.QUIT:  # if EXIT clicked
            sys.exit()  # close cleanly
    r = r.move(speed)  # move the box by the "speed" coordinates
    # if we hit a wall, change direction
    if r.left < 0 or r.right > width:
        speed[0] = -speed[0]
    if r.top < 0 or r.bottom > height:
        speed[1] = -speed[1]
    screen.fill((0, 0, 0))  # redraw the background black
    screen.blit(s, r)  # render the surface into the rectangle
    pygame.display.flip()  # update the screen

MEPT Insulation Improves Stability

While it may not be perfect, it’s a whole lot better. Below is a capture from this morning of my signal (the waves near the bottom). Compare that to how it was before and you should notice a dramatic improvement! The MEPT is inside a metal box inside a 1-inch-thick Styrofoam box. Very cool!