The personal website of Scott W Harden

Using MOD Files in LTSpice

This page shows how to use the LM741 op-amp model file in LTSpice. This is surprisingly unintuitive, but it is a good thing to know how to do. Model files can often be downloaded from vendor websites, but LTSpice only comes pre-loaded with models of common LT components.

Step 1: Download a Model (.mod) File

I found LM741.MOD available on TI's LM741 product page.

Save it wherever you want, but you will need to know the full path to this file later.

Step 2: Determine the Name

Open the model file in a text editor and look for the line starting with .SUBCKT. The top of LM741.MOD looks like this:

* connections:      non-inverting input
*                   |   inverting input
*                   |   |   positive power supply
*                   |   |   |   negative power supply
*                   |   |   |   |   output
*                   |   |   |   |   |
*                   |   |   |   |   |
.SUBCKT LM741/NS    1   2  99  50  28

The last line tells us that the name of this model's sub-circuit is LM741/NS.

Step 3: Include the Model File

Click the ".op" button on the toolbar, then add a .include directive followed by the full path to the model file. After clicking OK, place the text somewhere on your LTSpice circuit diagram.
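
For example, if the model file were saved to a hypothetical location like C:\spice\models, the directive placed on the schematic would read:

.include C:\spice\models\LM741.MOD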

Step 4: Insert a General Purpose Part

We know the part we are including is a 5-pin op-amp, so we can start by placing a generic component. Notice the description says you must give the value a name and include this file. We will do this in the next step.

Step 5: Configure the Component to use the Model

Right-click the op-amp and update its Value to match the name of the subcircuit we read from the model file earlier.

Step 6: Simulate Your Circuit

Your new component will run using the properties of the model you downloaded.

This article's source was last edited on September 27, 2020.

Exponential Fit with Python

Fitting an exponential curve to data is a common task, and in this example we'll use Python and SciPy to determine parameters for a curve fitted to arbitrary X/Y points. You can follow along using the fit.ipynb Jupyter notebook.

import numpy as np
import scipy.optimize
import matplotlib.pyplot as plt

xs = np.arange(12) + 7
ys = np.array([304.08994, 229.13878, 173.71886, 135.75499,
               111.096794, 94.25109, 81.55578, 71.30187, 
               62.146603, 54.212032, 49.20715, 46.765743])

plt.plot(xs, ys, '.')
plt.title("Original Data")

To fit an arbitrary curve we must first define it as a function. We can then call scipy.optimize.curve_fit, which will tweak the function's parameters (starting from the initial guesses we provide) to best fit the data. In this example we will use a single exponential decay function.

def monoExp(x, m, t, b):
    return m * np.exp(-t * x) + b

In biology / electrophysiology, biexponential functions are often used to separate fast and slow components of exponential decay, which may be caused by different mechanisms and occur at different rates. In this example we will only fit the data to a function with a single exponential component (a monoexponential function), but the idea is the same.

# perform the fit
p0 = (2000, .1, 50) # start with values near those we expect
params, cv = scipy.optimize.curve_fit(monoExp, xs, ys, p0)
m, t, b = params
sampleRate = 20_000 # Hz
tauSec = (1 / t) / sampleRate

# plot the results
plt.plot(xs, ys, '.', label="data")
plt.plot(xs, monoExp(xs, m, t, b), '--', label="fitted")
plt.title("Fitted Exponential Curve")

# inspect the parameters
print(f"Y = {m} * e^(-{t} * x) + {b}")
print(f"Tau = {tauSec * 1e6} µs")

Y = 2666.499 * e^(-0.332 * x) + 42.494
Tau = 150.422 µs

Extrapolating the Fitted Curve

We can use the calculated parameters to extend this curve to any position by passing X values of interest into the function we used during the fit.

The value at time 0 is simply m + b because the exponential component becomes e^(0) which is 1.

xs2 = np.arange(25)
ys2 = monoExp(xs2, m, t, b)

plt.plot(xs, ys, '.', label="data")
plt.plot(xs2, ys2, '--', label="fitted")
plt.title("Extrapolated Exponential Curve")
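
As a quick check, the extrapolated curve's value at time 0 equals m + b, since the exponential term becomes 1. This minimal sketch reuses the fitted parameters from above:

# the fitted curve at x = 0 is m + b (about 2709 for this fit)
print(monoExp(0, m, t, b))
print(m + b)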

Constraining the Infinite Decay Value

What if we know our data decays to 0? In that case it's not ideal to fit to an exponential decay function that lets the b component be whatever it wants. Indeed, our fit from earlier calculated the ideal b to be 42.494, but what if we know it should be 0? The solution is to fit using an exponential function where b is constrained to 0 (or whatever value you know it to be).

def monoExpZeroB(x, m, t):
    return m * np.exp(-t * x)

# perform the fit using the function where B is 0
p0 = (2000, .1) # start with values near those we expect
paramsB, cv = scipy.optimize.curve_fit(monoExpZeroB, xs, ys, p0)
mB, tB = paramsB
sampleRate = 20_000 # Hz
tauSec = (1 / tB) / sampleRate

# inspect the results
print(f"Y = {mB} * e^(-{tB} * x)")
print(f"Tau = {tauSec * 1e6} µs")

# compare this curve to the original
ys2B = monoExpZeroB(xs2, mB, tB)
plt.plot(xs, ys, '.', label="data")
plt.plot(xs2, ys2, '--', label="fitted")
plt.plot(xs2, ys2B, '--', label="zero B")

Y = 1245.580 * e^(-0.210 * x)
Tau = 237.711 µs

The curves produced are very different at the extremes (especially when time is 0), even though they appear to both fit the data points nicely. Which curve is more accurate? That depends on your application. A hint can be gained by inspecting the time constants of these two curves.

Parameter   Fitted B      Fixed B
m           2666.499      1245.580
t           0.332         0.210
Tau         150.422 µs    237.711 µs
b           42.494        0

By inspecting Tau I can gain insight into which method may be better to use in my application. I expect Tau to be near 250 µs, leading me to trust the fixed-B method over the fitted-B method. Choosing the correct method has great implications for the value of m (which is also the value of the curve when time is 0).

This article's source was last edited on September 24, 2020.

Signal Filtering in Python

Over a decade ago I posted code demonstrating how to filter data in Python, but there have been many improvements since then. My original posts (1, 2, 3, 4) required creating discrete filtering functions, but modern approaches can leverage NumPy and SciPy to do this more easily and efficiently. In this article we will use scipy.signal.filtfilt to apply low-pass, high-pass, and band-pass filters to reduce noise in an ECG signal (stored in ecg.wav, created as part of my Sound Card ECG project).

Moving-window filtering methods often result in a filtered signal that lags behind the original data (a phase shift). By filtering the signal twice in opposite directions, filtfilt cancels out this phase shift to produce a filtered signal which is nicely aligned with the input data.

import scipy.io.wavfile
import scipy.signal
import numpy as np
import matplotlib.pyplot as plt

# read ECG data from the WAV file
sampleRate, data = scipy.io.wavfile.read('ecg.wav')
times = np.arange(len(data))/sampleRate

# apply a 3-pole lowpass filter at 0.1x Nyquist frequency
b, a = scipy.signal.butter(3, 0.1)
filtered = scipy.signal.filtfilt(b, a, data)

# plot the original data next to the filtered data

plt.figure(figsize=(10, 4))

plt.subplot(121)
plt.plot(times, data)
plt.title("ECG Signal with Noise")
plt.margins(0, .05)

plt.subplot(122)
plt.plot(times, filtered)
plt.title("Filtered ECG Signal")
plt.margins(0, .05)

plt.tight_layout()
plt.show()
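
To see the phase shift that filtfilt avoids, one could filter the same data with a single forward pass using scipy.signal.lfilter and compare the two results. This is a minimal sketch reusing the b, a, data, times, and filtered variables from above:

# a single forward pass lags behind the original data
forwardOnly = scipy.signal.lfilter(b, a, data)

plt.plot(times, data, alpha=.5, label="data")
plt.plot(times, forwardOnly, label="lfilter (single pass)")
plt.plot(times, filtered, label="filtfilt (two passes)")
plt.legend()
plt.show()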

Cutoff Frequency

The second argument passed into the butter method customizes the cut-off frequency of the Butterworth filter. This value (Wn) is a number between 0 and 1 representing the fraction of the Nyquist frequency to use for the filter. Note that the Nyquist frequency is half of the sample rate. As this fraction increases, the cutoff frequency increases. You can get fancy and express this value as 2 * desired cutoff (Hz) / sample rate.

plt.plot(data, '.-', alpha=.5, label="data")

for cutoff in [.03, .05, .1]:
    b, a = scipy.signal.butter(3, cutoff)
    filtered = scipy.signal.filtfilt(b, a, data)
    label = f"{int(cutoff*100):d}%"
    plt.plot(filtered, label=label)

plt.legend()
plt.axis([350, 500, None, None])
plt.title("Effect of Different Cutoff Values")
plt.show()
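
You can also design the filter around a cutoff expressed in Hz rather than as a fraction of the Nyquist frequency. This minimal sketch assumes a hypothetical 40 Hz cutoff and reuses the sampleRate and data variables from above:

cutoffHz = 40  # hypothetical desired cutoff frequency
wn = cutoffHz / (sampleRate / 2)  # same as 2 * cutoffHz / sampleRate

bHz, aHz = scipy.signal.butter(3, wn)
filteredHz = scipy.signal.filtfilt(bHz, aHz, data)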

Improve Edges with Gustafsson’s Method

Something weird happens at the edges. There's not enough data "off the page" to know how to smooth those points, so what should be done?

Padding is the default behavior: the edges are padded with duplicates of the edge data points, and the trace is smoothed as if those data points existed. The drawback of this is that one stray data point at the edge will greatly affect the shape of your smoothed data.

Gustafsson’s Method may be superior to padding. The advantage of this method is that stray points at the edges do not greatly influence the smoothed curve at the edges. This technique is described in a 1994 paper by Fredrik Gustafsson. "Initial conditions are chosen for the forward and backward passes so that the forward-backward filter gives the same result as the backward-forward filter." Interestingly this paper demonstrates the method by filtering noise out of an EKG recording.

# A small portion of data will be inspected for demonstration
segment = data[350:400]

filtered = scipy.signal.filtfilt(b, a, segment)
filteredGust = scipy.signal.filtfilt(b, a, segment, method="gust")

plt.plot(segment, '.-', alpha=.5, label="data")
plt.plot(filtered, 'k--', label="padded")
plt.plot(filteredGust, 'k', label="Gustafsson")
plt.legend()
plt.title("Padded Data vs. Gustafsson’s Method")
plt.show()

Band-Pass Filter

Low-pass and high-pass filters can be selected simply by customizing the third argument passed into the butter method. The second argument indicates frequency (as a fraction of the Nyquist frequency, half the sample rate). Passing a list of two values as the second argument allows for band-pass filtering of a signal.

b, a = scipy.signal.butter(3, 0.05, 'lowpass')
filteredLowPass = scipy.signal.filtfilt(b, a, data)

b, a = scipy.signal.butter(3, 0.05, 'highpass')
filteredHighPass = scipy.signal.filtfilt(b, a, data)

b, a = scipy.signal.butter(3, [.01, .05], 'band')
filteredBandPass = scipy.signal.filtfilt(b, a, data)
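
A quick way to compare these results is to plot all three filtered signals together (a minimal sketch reusing the variables defined above):

plt.plot(data, alpha=.4, label="data")
plt.plot(filteredLowPass, label="low-pass")
plt.plot(filteredHighPass, label="high-pass")
plt.plot(filteredBandPass, label="band-pass")
plt.legend()
plt.show()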

Filter using Convolution

Another way to low-pass a signal is to use convolution. In this method you create a window (typically a bell-shaped curve) and convolve the window with the signal. The wider the window is the smoother the output signal will be. Also, the window must be normalized so its sum is 1 to preserve the amplitude of the input signal.

There are different ways to handle what happens to data points at the edges (see numpy.convolve for details), but setting mode to 'valid' deletes these points to produce an output signal slightly smaller than the input signal.

# create a normalized Hanning window
windowSize = 40
window = np.hanning(windowSize)
window = window / window.sum()

# filter the data using convolution
filtered = np.convolve(window, data, mode='valid')

plt.subplot(131)
plt.plot(window)
plt.title("Window")

plt.subplot(132)
plt.plot(data)
plt.title("Data")

plt.subplot(133)
plt.plot(filtered)
plt.title("Filtered")
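
Because 'valid' mode trims the edges, the filtered trace is shorter than the original and shifted by about half the window width. This minimal sketch shows one way to build an aligned time axis for overlaying the two traces, reusing the variables above:

# each output point corresponds to the center of the window,
# so shift the filtered trace by half the window width
offset = (windowSize - 1) / 2
filteredTimes = (np.arange(len(filtered)) + offset) / sampleRate

plt.plot(times, data, alpha=.5, label="data")
plt.plot(filteredTimes, filtered, label="filtered")
plt.legend()
plt.show()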

Different window functions filter the signal in different ways. Hanning windows are typically preferred because they have a mostly Gaussian shape but touch zero at the edges. For a discussion of the pros and cons of different window functions for spectral analysis using the FFT, see my notes on FftSharp.
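
To compare window shapes directly, NumPy's built-in window functions can be plotted side by side (a minimal sketch):

windowSize = 40
plt.plot(np.hanning(windowSize), label="Hanning")
plt.plot(np.hamming(windowSize), label="Hamming")
plt.plot(np.blackman(windowSize), label="Blackman")
plt.legend()
plt.title("Window Functions")
plt.show()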


This article's source was last edited on October 10, 2020.

Test React Apps in Azure Pipelines

Azure Pipelines makes it easy to run tests in the cloud, but I found that new React projects made with create-react-app fail to test properly in the cloud using the simple npm test command. Attempting this would display "No tests found related to files changed since last commit" then hang forever.

I solved this problem and got my React app to test properly in the cloud by adding -- --watchAll=false after npm test. This is my final azure-pipelines.yml file:

trigger:
  - master

pool:
  vmImage: "ubuntu-latest"

steps:
  - task: NodeTool@0
    inputs:
      versionSpec: "10.x"
    displayName: "Install Node.js"

  - script: npm install
    displayName: "Install NPM"

  - script: npm run build
    displayName: "Build"

  - script: npm test -- --watchAll=false
    displayName: "Test"

A working React app that tests properly with Azure Pipelines is GitHub.com/swharden/AliCalc.

This article's source was last edited on September 23, 2020.

Deleted from The Wayback Machine

This week my website was removed from the Wayback Machine. The Wayback Machine is an impressive website that lets you view what a website looked like years ago. Part of Archive.org's Internet Archive, it holds entertainingly old versions of most webpages. Just look at Amazon.com in the year 2000 for a good laugh.

I started this blog as a child twenty years ago, and after seeing what the Wayback Machine pulled up I realized that it may be best that the thoughts I had as a child stay in the past. I have personal copies of all my old blog posts, but with the wisdom of age and hindsight I'd much prefer that material stay off the internet. Luckily I was able to get my website removed from the Wayback Machine, and this post documents how I did it.

For those of you wanting to do the same, this is how I did it: I sent an email to info@archive.org stating the following:

Please remove my website [MY URL] from the Wayback Machine. [MY URL]/robots.txt has been updated to indicate I do not wish this website to be archived.

https://lookup.icann.org/ shows that [MY URL] points to [HOSTING COMPANY] nameservers, and I have attached a recent invoice from [HOSTING COMPANY] as evidence that I own this domain.

If additional evidence or action is required (e.g., DMCA takedown notice) please let me know.

Thank you!
Scott

I'm not sure if editing robots.txt was necessary, but I felt it lent credibility to my claim that I controlled the content of this domain. That file contains the following text. In the past I read this was all it took to get a website de-listed from the Wayback Machine, but I added this same file to another domain name of mine and it has not been de-listed.

User-agent: archive.org_bot
Disallow: /

User-agent: ia_archiver
Disallow: /

I attached a PDF of an invoice from the present year showing a credit card payment to my hosting company for the domain. Interestingly I did not have to show a history of domain ownership. I downloaded the invoice from my hosting company's billing page that day, and it displays my home address but not my email address.

Six days later, my site was removed. This is the email I received:

FROM: Office Manager (Internet Archive)

Hello,

The following has now been submitted for exclusion from the Wayback Machine at web.archive.org: [MY SITE]

Please allow up to a day for the automated portions of the process to run their course and for the changes to take effect.

– The Internet Archive Team

I reviewed a lot of websites before settling on my strategy. I was surprised to see some people issuing DMCA takedown notices to Archive.org, and was happy to find this was not required in my case.

⚠️ WARNING: This may not be permanent. I'm not sure what will happen if I lose my domain name (and robots.txt file) in the future. It is possible that my site is still being archived while not being available on the Wayback Machine, and that some time in the future my site will be re-listed.

If you have updated information, send me an email so I can update this page! In the meantime, I hope this information will be useful for others interested in curating their historical online presence.

This article's source was last edited on September 22, 2020.