SWHarden.com

The personal website of Scott W Harden

Realtime image pixelmap from Numpy array data in Qt

Consider realtime spectrograph software like QRSS VD. Its primary function is to scroll a potentially huge, data-rich image across the screen. In Python, this is often easier said than done. If you're not careful, you can tackle this problem inefficiently and get terrible frame rates (<5 FPS) or eat a huge amount of system resources (I often get complaints that QRSS VD takes up a lot of processor time, and 99% of it is drawing the images). In the past, I've done it at least five different ways (one, two, three, four, five). Note that "four" seems to be the absolute fastest option so far. I've been contemplating for a while the best way to rapidly draw color-mapped 8-bit data in a Python program. Now that I'm doing a majority of my graphical development with PyQt and QtDesigner (packaged with PythonXY), I ended up with a solution that looks like this (plotting random data with a colormap):

1.) in QtDesigner, create a form with a scrollAreaWidget

2.) in QtDesigner, add a label inside the scrollAreaWidget

3.) in code, resize the label and the scrollAreaWidgetContents to fit the data (disable "widgetResizable")

4.) in code, create a QImage from a 2D numpy array (dtype=uint8)

5.) in code, set label pixmap to QtGui.QPixmap.fromImage(QImage)

That’s pretty much it! Here are some highlights of my program. Note that the code for the GUI is in a separate file, and must be downloaded from the ZIP provided at the bottom. Hope it helps someone else out there who might want to do something similar!

import ui_main
import sys
from PyQt4 import QtCore, QtGui
from PyQt4 import Qt
import PyQt4.Qwt5 as Qwt
from PIL import Image
import numpy
import time

spectroWidth=1000
spectroHeight=1000

a=numpy.random.random(spectroHeight*spectroWidth)*255
a=numpy.reshape(a,(spectroHeight,spectroWidth))
a=numpy.require(a, numpy.uint8, 'C')

COLORTABLE=[]
for i in range(256): COLORTABLE.append(QtGui.qRgb(i/4,i,i/2))

def updateData():
    global a
    a=numpy.roll(a,-5)
    QI=QtGui.QImage(a.data, spectroWidth, spectroHeight, QtGui.QImage.Format_Indexed8)
    QI.setColorTable(COLORTABLE)
    uimain.label.setPixmap(QtGui.QPixmap.fromImage(QI))

if __name__ == "__main__":
    app = QtGui.QApplication(sys.argv)
    win_main = ui_main.QtGui.QWidget()
    uimain = ui_main.Ui_win_main()
    uimain.setupUi(win_main)

    # SET UP IMAGE
    uimain.IM = QtGui.QImage(spectroWidth, spectroHeight, QtGui.QImage.Format_Indexed8)
    uimain.label.setGeometry(QtCore.QRect(0,0,spectroWidth,spectroHeight))
    uimain.scrollAreaWidgetContents.setGeometry(QtCore.QRect(0,0,spectroWidth,spectroHeight))

    # SET UP RECURRING EVENTS (QTimer.start() takes an interval in milliseconds)
    uimain.timer = QtCore.QTimer()
    uimain.timer.start(.1)
    win_main.connect(uimain.timer, QtCore.SIGNAL('timeout()'), updateData)

    ### DISPLAY WINDOWS
    win_main.show()
    sys.exit(app.exec_())

Wireless Microcontroller / PC Interface for $3.21

Here I demonstrate a dirt-cheap method of transmitting data from any microchip to any PC using $3.21 in parts. I've had this idea for a while, but finally got it working tonight. On the transmit side, I'm having an ATMEL AVR microcontroller (ATMega48) transmit data (every number from 0 to 200, over and over) wirelessly using 433 MHz wireless modules. The PC receives the data through the microphone port of a sound card, and a cross-platform Python script I wrote decodes the data from the audio and graphs it on the screen. I did something similar back in 2011, but it wasn't wireless, and the software wasn't nearly as robust as it is now.

This is a proof-of-concept demonstration, and part of a larger project. I think there's a need for this type of thing though! It's unnecessarily hard to transfer data from an MCU to a PC as it is. There's USB (for AVR, V-USB is a nightmare and requires a precise, specific clock speed; DIP AVRs don't have native USB, and some PIC DIP chips do but then you have to go through driver hell), USART RS-232 over a serial port (but who has serial ports these days?), or USART over USB with RS-232 interface chips (like the FTDI FT-232, but surface mount only), and those also require precise, specific clock speeds. Pretend I just want to measure temperature once a minute. Do I really want to etch circuit boards and solder SMT components? Well, kinda, but I don't like feeling forced to. Sometimes you just want a no-nonsense way to get some numbers from your microchip to your computer. This project is a funky out-of-the-box alternative to traditional methods, and one that I hope will raise a few eyebrows.

Ultimately, I designed this project to eventually allow multiple "bursting" data transmitters to routinely transmit on the same frequency, thanks to syncing and forced sync-loss (read on). It's part of what I'm tongue-in-cheek calling the Scott Harden RF Protocol (SH-RFP). In my goal application, I wish to have about 5 wireless temperature sensors all transmitting data to my PC. The receive side has some error checking in that it makes sure pulse sizes are intelligent and symmetrical (unlike random noise), and since each number is sent twice (the second time in reverse), there's another layer of error detection. This is *NOT* a robust and accurate method to send critical data. It's a cheap way to send data. It is very range limited, and is only intended to work over a distance of ten or twenty feet. First, let's see it in action!

The RF modules are pretty simple. At $1.56 on eBay (with free shipping), they're cheap too! I won't go into detail documenting the ins and outs of these things (that's done well elsewhere). Briefly, you give them +5V (VCC) and 0V (GND), and when you flip the data pin (ATAD) of the transmitter module on and off, the receiver module's DATA pin reflects the same state. The receiver uses a gain circuit which continuously increases gain until a signal is detected, so if you're not transmitting it WILL decode noise and start flipping its output pin. Note that persistent high or low states are prone to noise too, so any protocol you use these things for should have rapid state transitions. It's also suggested that you maintain an average 50% duty cycle. These modules use amplitude shift keying (ASK) to transmit data wirelessly. The graphic below shows what that looks like at the RF level. Transmit and receive are improved by adding a quarter-wavelength vertical antenna to the "ANT" solder pad. At 433 MHz that's about 17 cm, so I'm using a 17 cm piece of copper wire as an antenna.
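For reference, that antenna length comes straight from the quarter-wavelength formula:

λ/4 = c / (4f) = (3.00 × 10^8 m/s) / (4 × 433 × 10^6 Hz) ≈ 0.173 m ≈ 17 cm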

Transmitting from the microcontroller is easy as pie! It's just a matter of copying in a few lines of C. It doesn't rely on USART, SPI, I2C, or any other hardware protocol. Part of why I developed this method is because I often use the ATTiny44A, which doesn't have a USART for serial interfacing. The "SH-RFP" is easy to implement just by adding a few lines of code. I can handle that. How does it work? I can define it simply by a few rules:

To send a packet:

Decoding is the same thing in reverse. I use a $1.29 eBay sound card (with free shipping) to get the signal into the PC. Synchronization is required to let the PC know that real data (not noise) is starting. Sending the same number twice (once with reversed bit polarity) is a proof-checking mechanism that lets us throw out data that isn't accurate.

On the software side, I'm using PyAudio to collect data from the sound card, and the Python(x,y) distribution to handle analysis with numpy and scipy, plotting with QwtPlot, and general GUI functionality with PyQt. I think that's about everything.

The demonstration interface is pretty self-explanatory. The top-right shows a sample piece of data. The top-left is a histogram of the number of samples of each pulse width. A clean signal should have three pulse widths (A=0, B=1, C=break). Note that you're supposed to look at the peaks to determine the best lengths to tell the software to use to distinguish A, B, and C. This was intentionally not hard-coded, because I want to rapidly switch from one microcontroller platform to another which may be operating at a different clock speed; if all of a sudden it's running three times slower, it will be no problem to handle on the PC side. Slick, huh? The bottom-left shows data values coming in. The bottom-right graphs those values. Rate reporting lets us know that I'm receiving over 700 good data points a second. That's pretty cool, especially considering I'm recording at 44,100 Hz.
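As a rough illustration of those two checks, a decoder might classify pulse widths and validate the repeated byte something like this (the cutoff values and 8-bit framing here are placeholders, not the exact numbers used in the GitHub code, and "reversed bit polarity" is treated as a bitwise inversion of the first copy):

def classify_pulse(width, cutoff_ab=30, cutoff_bc=80):
    """Map a pulse width (in audio samples) to a symbol: 0, 1, or 'break'.
    The cutoffs are placeholders -- pick them from the histogram peaks."""
    if width < cutoff_ab:
        return 0          # short pulse  -> bit 0 (the "A" peak)
    if width < cutoff_bc:
        return 1          # medium pulse -> bit 1 (the "B" peak)
    return 'break'        # long pulse   -> gap between numbers (the "C" peak)

def check_pair(first_bits, second_bits):
    """Accept a value only if the second copy is the bit-inverted first copy."""
    inverted = all(a != b for a, b in zip(first_bits, second_bits))
    if len(first_bits) == len(second_bits) and inverted:
        # convert the MSB-first bit list to an integer
        return sum(bit << i for i, bit in enumerate(reversed(first_bits)))
    return None           # mismatch -> discard as noise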

All source code (C files for an ATMega48 and Python scripts for the GUI) can be viewed here: SHRFP project on GitHub

If you use these concepts, hardware, or ideas in your project, let me know about it! Send me an email showing me your project – I’d love to see it. Good luck!


Realtime FFT Audio Visualization with Python

WARNING: this project is largely outdated, and some of the modules are no longer supported by modern distributions of Python. For a more modern, cleaner, and more complete GUI-based viewer of realtime audio data (and the FFT frequency data), check out my Python Real-time Audio Frequency Monitor project.

I'm no stranger to visualizing linear data in the frequency domain. From the high-definition spectrograph suite I wrote in my first year of dental school (QRSS-VD, which differentiates tones to sub-Hz resolution) to the various scripts over the years (which go into FFT imaginary number theory, linear data signal filtering with Python, and real-time audio graphing with wckgraph), I've tried dozens of combinations of techniques to capture data, analyze it, and display it with Python. Because I'm now branching into making microcontroller devices which measure and transfer analog data to a computer, I need a way to rapidly visualize data obtained in Python. Since my microcontroller device isn't up and running yet, linear data from a PC microphone will have to do. Here's a quick and dirty start-to-finish project anyone can tease apart to figure out how to do some of these not-so-intuitive processes in Python. To my knowledge, this is a cross-platform solution too. For the sound card interaction, it relies on the cross-platform sound card interface library PyAudio. My Python distro is 2.7 (Python(x,y)), but Python(x,y) doesn't [yet] supply PyAudio.

The code behind it is a little jumbled, but it works. For recording, I wrote a class "SwhRecorder" which uses threading to continuously record audio and save it as a numpy array. When the class is loaded and started, your GUI can wait until it sees newAudio become True, then it can grab audio directly, or use fft() to pull the spectral component (which is what I do in the video). Note that my fft() relies on numpy.fft.fft(). The return is a nearly-symmetrical mirror image of the frequency components, which (get ready to cringe, mathematicians) I simply split into two arrays, reverse one of them, and add together. To turn this absolute value into dB, I'd take log10(fft) and multiply it by 20. You know, if you're into that kind of thing, you should really check out a post I made about FFT theory and analyzing audio data in Python.
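As a quick, self-contained illustration of that folding trick (using a throwaway chunk of random numbers rather than real audio; the variable names are just for demonstration):

import numpy

chunk = numpy.random.randn(4096)              # stand-in for one buffer of PCM audio
spectrum = numpy.abs(numpy.fft.fft(chunk))    # nearly-symmetrical mirror image
left, right = numpy.split(spectrum, 2)        # split the mirror into two halves...
folded = left + right[::-1]                   # ...reverse one half and add them
db = 20 * numpy.log10(folded)                 # optional: convert amplitude to dB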

Here’s the meat of the code. To run it, you should really grab the zip file at the bottom of the page. I’ll start with the recorder class:

import matplotlib
matplotlib.use('TkAgg') # THIS MAKES IT FAST!
import numpy
import scipy
import struct
import pyaudio
import threading
import pylab

class SwhRecorder:
    """Simple, cross-platform class to record from the microphone."""

    def __init__(self):
        """minimal garb is executed when class is loaded."""
        self.RATE=48100
        self.BUFFERSIZE=2**12 #1024 is a good buffer size
        self.secToRecord=.1
        self.threadsDieNow=False
        self.newAudio=False

    def setup(self):
        """initialize sound card."""
        #TODO - windows detection vs. alsa or something for linux
        #TODO - try/except for sound card selection/initiation

        self.buffersToRecord=int(self.RATE*self.secToRecord/self.BUFFERSIZE)
        if self.buffersToRecord==0: self.buffersToRecord=1
        self.samplesToRecord=int(self.BUFFERSIZE*self.buffersToRecord)
        self.chunksToRecord=int(self.samplesToRecord/self.BUFFERSIZE)
        self.secPerPoint=1.0/self.RATE

        self.p = pyaudio.PyAudio()
        self.inStream = self.p.open(format=pyaudio.paInt16,channels=1,
            rate=self.RATE,input=True,frames_per_buffer=self.BUFFERSIZE)
        self.xsBuffer=numpy.arange(self.BUFFERSIZE)*self.secPerPoint
        self.xs=numpy.arange(self.chunksToRecord*self.BUFFERSIZE)*self.secPerPoint
        self.audio=numpy.empty((self.chunksToRecord*self.BUFFERSIZE),dtype=numpy.int16)

    def close(self):
        """cleanly back out and release sound card."""
        self.p.close(self.inStream)

    ### RECORDING AUDIO ###

    def getAudio(self):
        """get a single buffer size worth of audio."""
        audioString=self.inStream.read(self.BUFFERSIZE)
        return numpy.fromstring(audioString,dtype=numpy.int16)

    def record(self,forever=True):
        """record secToRecord seconds of audio."""
        while True:
            if self.threadsDieNow: break
            for i in range(self.chunksToRecord):
                self.audio[i*self.BUFFERSIZE:(i+1)*self.BUFFERSIZE]=self.getAudio()
            self.newAudio=True
            if forever==False: break

    def continuousStart(self):
        """CALL THIS to start running forever."""
        self.t = threading.Thread(target=self.record)
        self.t.start()

    def continuousEnd(self):
        """shut down continuous recording."""
        self.threadsDieNow=True

    ### MATH ###

    def downsample(self,data,mult):
        """Given 1D data, return the binned average."""
        overhang=len(data)%mult
        if overhang: data=data[:-overhang]
        data=numpy.reshape(data,(len(data)/mult,mult))
        data=numpy.average(data,1)
        return data

    def fft(self,data=None,trimBy=10,logScale=False,divBy=100):
        if data is None:
            data=self.audio.flatten()
        left,right=numpy.split(numpy.abs(numpy.fft.fft(data)),2)
        ys=numpy.add(left,right[::-1])
        if logScale:
            ys=numpy.multiply(20,numpy.log10(ys))
        xs=numpy.arange(self.BUFFERSIZE/2,dtype=float)
        if trimBy:
            i=int((self.BUFFERSIZE/2)/trimBy)
            ys=ys[:i]
            xs=xs[:i]*self.RATE/self.BUFFERSIZE
        if divBy:
            ys=ys/float(divBy)
        return xs,ys

    ### VISUALIZATION ###

    def plotAudio(self):
        """open a matplotlib popup window showing audio data."""
        pylab.plot(self.audio.flatten())
        pylab.show()

And now here’s the GUI launcher:

import ui_plot
import sys
import numpy
from PyQt4 import QtCore, QtGui
import PyQt4.Qwt5 as Qwt
from recorder import *

def plotSomething():
    if SR.newAudio==False:
        return
    xs,ys=SR.fft()
    c.setData(xs,ys)
    uiplot.qwtPlot.replot()
    SR.newAudio=False

if __name__ == "__main__":
    app = QtGui.QApplication(sys.argv)

    win_plot = ui_plot.QtGui.QMainWindow()
    uiplot = ui_plot.Ui_win_plot()
    uiplot.setupUi(win_plot)
    uiplot.btnA.clicked.connect(plotSomething)
    #uiplot.btnB.clicked.connect(lambda: uiplot.timer.setInterval(100.0))
    #uiplot.btnC.clicked.connect(lambda: uiplot.timer.setInterval(10.0))
    #uiplot.btnD.clicked.connect(lambda: uiplot.timer.setInterval(1.0))
    c=Qwt.QwtPlotCurve()
    c.attach(uiplot.qwtPlot)

    uiplot.qwtPlot.setAxisScale(uiplot.qwtPlot.yLeft, 0, 1000)

    uiplot.timer = QtCore.QTimer()
    uiplot.timer.start(1.0)

    win_plot.connect(uiplot.timer, QtCore.SIGNAL('timeout()'), plotSomething)

    SR=SwhRecorder()
    SR.setup()
    SR.continuousStart()

    ### DISPLAY WINDOWS
    win_plot.show()
    code=app.exec_()
    SR.close()
    sys.exit(code)

Note that by commenting out the FFT line and using "c.setData(SR.xs,SR.audio)" you can plot linear PCM data to visualize sound waves like this:
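In other words, the plot routine becomes something like this:

def plotSomething():
    if SR.newAudio==False:
        return
    c.setData(SR.xs, SR.audio)   # raw PCM samples instead of the FFT
    uiplot.qwtPlot.replot()
    SR.newAudio=False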

Download Source Code

Finally, here's the zip file. It contains everything you need to run the program on your own computer (including the UI scripts, which are not shown on this page).

DOWNLOAD: SWHRecorder.zip

If you make a cool project based on this one, I’d love to hear about it. Good luck!


Realtime Data Plotting in Python

WARNING: this project is largely outdated, and some of the modules are no longer supported by modern distributions of Python. For a more modern, cleaner, and more complete GUI-based viewer of realtime audio data (and the FFT frequency data), check out my Python Real-time Audio Frequency Monitor project. I love using Python for handling data. Displaying it isn't always as easy. Python is fast to write, and numpy, scipy, and matplotlib are an incredible combination. I love matplotlib for displaying data and use it all the time, but when it comes to realtime data visualization, matplotlib (admittedly) falls behind. Imagine trying to plot sound waves in real time. Matplotlib simply can't handle it. I've recently been making progress toward this end with PyQwt and the Python(x,y) distribution. It is a cross-platform solution which should perform identically on Windows, Linux, and MacOS. Here's an example of what it looks like plotting some dummy data (a sine wave) being transformed with numpy.roll().

How did I do it? Easy. First, I made the GUI with QtDesigner (which comes with Python(x,y)) and saved it as a .ui file. I then used the pyuic4 command to generate a Python script from the .ui file (a plain pyuic4 call is shown below). In reality, I use a little helper script I wrote that builds .py files from .ui files and starts a little "ui.py" file which imports all of the ui classes. It's overkill for this, but I'll put it in the ZIP anyway. Here's what the GUI looks like in QtDesigner:
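For reference, the direct pyuic4 conversion looks something like this (the file names here are just examples):

pyuic4 plot.ui -o ui_plot.py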

After that, I tie everything together in a little script which updates the plot in real time. It takes inputs from button click events and tells a clock (QTimer) how often to update/replot the data. Replotting it involves just rolling it with numpy.roll(). Check it out:

import ui_plot #this was generated by pyuic4 command
import sys
import numpy
from PyQt4 import QtCore, QtGui
import PyQt4.Qwt5 as Qwt

numPoints=1000
xs=numpy.arange(numPoints)
ys=numpy.sin(3.14159*xs*10/numPoints) #this is our data

def plotSomething():
    global ys
    ys=numpy.roll(ys,-1)
    c.setData(xs, ys)
    uiplot.qwtPlot.replot()

if __name__ == "__main__":
    app = QtGui.QApplication(sys.argv)
    win_plot = ui_plot.QtGui.QMainWindow()
    uiplot = ui_plot.Ui_win_plot()
    uiplot.setupUi(win_plot)

    # tell buttons what to do when clicked
    uiplot.btnA.clicked.connect(plotSomething)
    uiplot.btnB.clicked.connect(lambda: uiplot.timer.setInterval(100.0))
    uiplot.btnC.clicked.connect(lambda: uiplot.timer.setInterval(10.0))
    uiplot.btnD.clicked.connect(lambda: uiplot.timer.setInterval(1.0))

    # set up the QwtPlot (pay attention!)
    c=Qwt.QwtPlotCurve()  #make a curve
    c.attach(uiplot.qwtPlot) #attach it to the qwtPlot object
    uiplot.timer = QtCore.QTimer() #start a timer (to call replot events)
    uiplot.timer.start(100.0) #set the interval (in ms)
    win_plot.connect(uiplot.timer, QtCore.SIGNAL('timeout()'), plotSomething)

    # show the main window
    win_plot.show()
    sys.exit(app.exec_())

AVR Programming in 64-bit Windows 7

A majority of the microcontroller programming I do these days involves writing C for the ATMEL AVR series of microcontrollers. I respect PIC, but I find the open/free atmosphere around AVR to be a little more supportive of individual, non-commercial, cross-platform programmers like myself. With that being said, I've had a few bumps along the way getting unofficial AVR programmers to work in Windows 7. Previously, I had great success with an $11 (shipped) clone AVRISP-mkII programmer from fun4diy.com. It was the heart of a little AVR development board I made and grew to love (which had a drop-in chip slot and also a little breadboard all in one), seen in a few random blog posts over the years. Recently it began giving me trouble because, despite downloading and installing various drivers and packages, I couldn't get it to work with Windows Vista or Windows 7. I needed to find another option. I decided against the official programmer/software because the programmer is expensive (for a college student) and the software (AVR Studio 6) is terribly bloated for LED-blink type applications. "AStudio61.exe" is 582.17 MB. Are you kidding me? Half a gig to program a microchip with 2 KB of memory? Ridiculous. I don't use Arduino because I'm comfortable working in C and happy reading datasheets. Furthermore, I like programming chips hot off the press, without requiring a special bootloader.

I got everything running on Windows 7 x64 with the following:

Here’s the “hello world” of microchip programs (it simply blinks an LED). I’ll assume the audience of this page knows the basics of microcontroller programming, so I won’t go into the details. Just note that I’m using an ATMega48 and the LED is on pin 9 (PB6). This file is named “blink.c”.

#define F_CPU 1000000UL
#include <avr/io.h>
#include <util/delay.h>

int main (void)
{
    DDRB = 255;          // set every pin on port B as an output
    while(1)
    {
        PORTB ^= 255;    // toggle every pin on port B (including the LED on PB6)
        _delay_ms(500);  // wait half a second
    }
}

Here’s how I compiled the code:

avr-gcc -mmcu=atmega48 -Wall -Os -o blink.elf blink.c
avr-objcopy -j .text -j .data -O ihex blink.elf blink.hex

In reality, it is useful to put these commands in a text file and call it "compile.bat". Here's how I program the AVR: I used AVRDudess! I've been using raw AVRDude for years. It's a little rough around the edges, but this GUI is pretty convenient. I don't even feel the need to include the command to program it from the command line! If I encourage nothing else by this post, I encourage (a) people to use and support AVRDudess, and (b) AVRDudess to continue developing itself as a product nearly all hobby AVR programmers will use. Thank you, 21-year-old Zak Kemble.

And finally, the result: a blinking LED. I'm up and running programming AVR microcontrollers in 64-bit Windows 7 with an unofficial programmer, without ever needing to install the bloated AVR Studio software.