Azure Pipelines makes it easy to run tests in the cloud, but I found that new React projects made with create-react-app fail to test properly in the cloud using the simple npm test command. Attempting this displays No tests found related to files changed since last commit, then hangs forever.
The problem is that create-react-app runs Jest in interactive watch mode by default, which waits for keyboard input that never arrives on a CI server. I solved this and got my React app to test properly in the cloud by adding -- --watchAll=false after npm test, which makes Jest run all tests once and exit. This is my final azure-pipelines.yml file:
```yaml
trigger:
- master

pool:
  vmImage: "ubuntu-latest"

steps:
- task: NodeTool@0
  inputs:
    versionSpec: "10.x"
  displayName: "Install Node.js"

- script: npm install
  displayName: "Install NPM"

- script: npm run build
  displayName: "Build"

- script: npm test -- --watchAll=false
  displayName: "Test"
```
This week my website was removed from the Wayback Machine. The Wayback Machine is an impressive website that lets you view what a website looked like years ago. Part of Archive.org’s Internet Archive, it holds entertainingly old versions of most webpages. Just look at Amazon.com in the year 2000 for a good laugh.
I started this blog as a child twenty years ago, and after seeing what the Wayback Machine pulled up I realized it may be best that the thoughts I had as a child stay in the past. I have personal copies of all my old blog posts, but with the wisdom of age and hindsight I’d much prefer that material stay off the internet. Luckily I was able to get my website removed from the Wayback Machine, and this post documents how I did it.
For those of you wanting to do the same: I sent an email to info@archive.org stating the following:
Please remove my website [MY URL] from the Wayback Machine.
[MY URL]/robots.txt has been updated to indicate I do not wish
this website to be archived.
https://lookup.icann.org/ shows that [MY URL] points to
[HOSTING COMPANY] nameservers, and I have attached a recent
invoice from [HOSTING COMPANY] as evidence that I own this domain.
If additional evidence or action is required (e.g., DMCA takedown
notice) please let me know.
Thank you!
Scott
I’m not sure if editing robots.txt was necessary, but I felt it gave credence to the fact that I had control over the content of this domain. In the past I read that a robots.txt exclusion was all it took to get a website de-listed from the Wayback Machine, but I added the same file to another domain of mine and it has not been de-listed. That file contains the following text:
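(Shown here in its conventional form; ia_archiver is the user agent of the Internet Archive’s crawler.)

```
# Ask the Internet Archive's crawler to skip this entire site
User-agent: ia_archiver
Disallow: /
```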
I attached a PDF invoice from the present year showing a credit card payment to my hosting company for the domain. Interestingly, I did not have to show a history of domain ownership. I downloaded the invoice from my hosting company’s billing page that day, and it displays my home address but not my email address.
Six days later, my site was removed. This is the email I received:
FROM: Office Manager (Internet Archive)
Hello,
The following has now been submitted for exclusion from the
Wayback Machine at web.archive.org: [MY SITE]
Please allow up to a day for the automated portions of the process
to run their course and for the changes to take effect.
– The Internet Archive Team
I reviewed a lot of websites before settling on my strategy. I was surprised to see some people issuing DMCA takedown notices to Archive.org, and was happy to find this was not required in my case. Here are some of the resources I found helpful:
Archive.org forums - many recent discussions about how to have websites removed. Ironically posting on a public forum may draw more attention to a sensitive website before it is shut down, so this doesn’t seem like a great strategy. However, it does seem to work for some.
⚠️ WARNING: This may not be permanent. I’m not sure what will happen if I lose my domain name (and its robots.txt file) in the future. It is possible that my site is still being archived while not being displayed on the Wayback Machine, and that at some time in the future my site will be re-listed.
If you have updated information, send me an email so I can update this page! In the meantime, I hope this information will be useful for others interested in curating their historical online presence.
After fifteen years using WordPress, I’m leaving it for a simpler alternative: flat markdown files. I made the change for several reasons. First, I was disappointed by how frequently I had to update WordPress (and upgrade my database) to stay current with security updates. Second, I didn’t like how abstract post content was: the text of posts was stored in SQL tables, image URLs weren’t easily accessible (posts point to content IDs, and the URLs are stored in another table), and images and media were scattered all over the filesystem because the default image placement changed several times over the years. Finally, I found that logging in to a web front-end just to write a post was a barrier that prevented me from writing more frequently.
I have been very active on GitHub over the last few years and used their platform to share my code instead of this website. Lots of code and notes belong in repositories there, yes, but sometimes I create neat things which would be better represented as one-off posts on my personal website. Some of my repositories have collected notes like these, so I look forward to migrating a lot of that content here. My hope is that the new system I put together will make it easier to share content by writing it in Markdown using the editors I’m already working in every day.
The system I’m using now is pretty simple. Every post is a folder, and each folder contains a markdown file along with all of the images and files that post references. At the top of the markdown file is a little header with information like title, date, and categories (tags). I use a PHP script to route HTTP requests: if a requested folder lacks index.html but has index.md, I serve that file using Parsedown to convert it to HTML. I also add a few tweaks to do things like convert YouTube links to embedded videos and add syntax highlighting to code blocks. Backups are easy (I just zip the folder), and the whole website could be committed to source control. I’m leaning away from that because it’s about 1GB (lots of images), but I’ll consider it. Also, the URL of a post is just the path to its folder.
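A minimal sketch of this kind of router, assuming Parsedown.php sits alongside it and the web server rewrites requests for missing files to this script (the file and variable names here are illustrative, not my exact code):

```php
<?php
// index.php — minimal markdown router (illustrative sketch, not the exact script)
require __DIR__ . '/Parsedown.php';

// Map the requested URL path onto a folder under the document root
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
$folder = rtrim($_SERVER['DOCUMENT_ROOT'] . $path, '/');

if (is_file("$folder/index.html")) {
    readfile("$folder/index.html");      // a static copy wins if it exists
} elseif (is_file("$folder/index.md")) {
    $parsedown = new Parsedown();
    echo $parsedown->text(file_get_contents("$folder/index.md")); // render markdown on the fly
} else {
    http_response_code(404);
    echo 'Page not found';
}
```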
There’s a clear path toward generating a static site. Since a folder’s index.md is only parsed and served when the folder lacks index.html, switching to a static site just means pre-converting every markdown file to HTML, and switching back means deleting the generated HTML files. I’ll probably keep refining the PHP script until the conversions are reliably producing the output I want, then convert most of the old pages to static files. The cool thing about this method is that it lets me serve some posts statically and others dynamically.
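Pre-converting could be as simple as walking the tree and rendering each index.md; a sketch under the same assumptions as above:

```php
<?php
// build.php — pre-render every index.md into a static index.html (illustrative sketch)
require __DIR__ . '/Parsedown.php';

$parsedown = new Parsedown();
$files = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator(__DIR__, FilesystemIterator::SKIP_DOTS)
);

foreach ($files as $file) {
    if ($file->getFilename() === 'index.md') {
        $html = $parsedown->text(file_get_contents($file->getPathname()));
        file_put_contents($file->getPath() . '/index.html', $html);
        echo 'built ' . $file->getPath() . "/index.html\n";
    }
}
```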
The conversion from WordPress to Markdown was semi-automated, but still labor-intensive.
I first dumped the database to a SQL file, parsed out the content and metadata (URL, title, date, and privacy status), then created the filesystem and markdown files.
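That first step might look something like this sketch. Here I query the database directly with PDO rather than parsing the SQL dump file as I actually did, since it shows the same idea more compactly (credentials, the table prefix, and the header format are illustrative):

```php
<?php
// export.php — dump WordPress posts into per-post markdown folders
// (illustrative sketch; it queries the database directly instead of
// parsing a SQL dump, and assumes the default wp_ table prefix)
$db = new PDO('mysql:host=localhost;dbname=wordpress;charset=utf8mb4', 'user', 'password');

$rows = $db->query(
    "SELECT post_name, post_title, post_date, post_status, post_content
     FROM wp_posts
     WHERE post_type = 'post'"
);

foreach ($rows as $row) {
    $folder = __DIR__ . '/posts/' . $row['post_name'];
    if (!is_dir($folder)) {
        mkdir($folder, 0755, true);
    }

    // Little metadata header at the top, then the raw post content
    // (still HTML at this point; markdown cleanup came later, by hand)
    $header = "title: {$row['post_title']}\n"
            . "date: {$row['post_date']}\n"
            . "status: {$row['post_status']}\n\n";
    file_put_contents("$folder/index.md", $header . $row['post_content']);
}
```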
I then had to manually inspect every markdown file and reformat it, converting inline HTML to markdown (mostly images, galleries, and divs used for alignment). In many cases code formatting had been damaged over the years, so lots of my old code was run through an autoformatter.
I also had to hunt down the media (images, MP3s, ZIP files, etc.) for every post, copy it into the post’s folder, and update the URLs to be relative. This was especially hard for galleries, which only point to meta content IDs (stored in a separate database table), and because my database had been damaged somewhere along the way I sometimes really struggled to find the right content.
I also added tags to indicate categories, carefully reviewing content and code and marking posts as “old” if they contained out-of-date examples (lots of Python 2 code) or code that I now deem to be of very poor quality. Part of me wanted to delete (hide) the old posts with bad code, but I decided to leave them up. They’re a reminder of how long I’ve worked at improving my craft, and my revulsion at code I wrote in the past is an indication of how much I’ve learned since.
This process took me about 10 hours a day for 3 days in a row.
Along the way I had a few laughs at the ridiculousness of some of my old content. I think it’s probably a good thing to encourage teenagers to have personal websites, but I also encourage professionals and employers not to give too much credence to ramblings written by a person decades ago that Google happens to remember. I didn’t delete any content, but I marked most of the posts I made as a teenager as private and only exposed the ones that discuss this website.
After reviewing all of my posts I now have a really good understanding of the evolution of the technologies I used to serve my website over the years. Here’s a summary of the major events:
It started as a blog on GeoCities, with the oldest surviving post dating to June 2001. Back then adding content meant editing HTML files and using FTP to upload changes.
In 2002 I started hosting my website from a server at my house. Initially it was served with Windows/IIS using ASP for comments pages. On October 19, 2002 I switched to FreeBSD/Apache using PHP for comments pages.
I started using Movable Type (a Perl-based CMS that publishes flat files) on Aug 25, 2003.
I migrated to WordPress (a CMS that stored posts in a database) in 2005.
In 2020 I converted all my posts to Markdown using PHP to dynamically generate HTML (with an avenue to generate flat-file output).
I built a frequency counter with a USB interface based around a 74LV8154 32-bit counter, an FTDI FT230XS (USB serial adapter), and an ATMega328 microcontroller. I’ve used this same counter IC in some old projects (1, 2, 3, 4, 5), but this time I decided to design the circuit a little more carefully, make a PCB, and use all surface-mount technology (SMT).
The micro USB port provides power and PC connectivity, and when running, the device sends the measured frequency to the computer every second. All the parameters can be customized in software, and source code is on the USB-Counter GitHub page.
I also added support for a 7-segment LED display. The counter works fine without the screen attached, but using the screen lets this device serve as a frequency counter without requiring a computer. This display is a MAX7219-driven display module which currently runs for $2 each on Amazon when ordered in packs of 5.
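For what it’s worth, driving one of these MAX7219 modules from the ATMega328’s SPI port only takes a handful of register writes. A minimal sketch (pin assignments here are illustrative; the actual display code is on the USB-Counter GitHub page):

```c
// Minimal MAX7219 setup over SPI (illustrative sketch, not the exact firmware).
// Assumes the module's LOAD/CS pin is wired to PB2.
#include <avr/io.h>

static void max7219_write(uint8_t reg, uint8_t value)
{
    PORTB &= ~(1 << PB2);                 // pull LOAD low
    SPDR = reg;                           // send register address
    while (!(SPSR & (1 << SPIF))) {}      // wait for transfer to finish
    SPDR = value;                         // send register data
    while (!(SPSR & (1 << SPIF))) {}
    PORTB |= (1 << PB2);                  // rising edge latches the data
}

static void max7219_init(void)
{
    DDRB |= (1 << PB2) | (1 << PB3) | (1 << PB5); // CS, MOSI, SCK as outputs
    SPCR = (1 << SPE) | (1 << MSTR);              // enable SPI, master mode
    max7219_write(0x09, 0xFF);  // decode mode: Code B (digits 0-9) on all digits
    max7219_write(0x0A, 0x07);  // medium brightness
    max7219_write(0x0B, 0x07);  // scan limit: all 8 digits
    max7219_write(0x0C, 0x01);  // leave shutdown mode (normal operation)
}
```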
One advantage of this counter is that it is never reset. The circuit uses a 32-bit counter IC, and every gate cycle it transmits the current cumulative count to the computer over USB. Because every input cycle is counted, high-precision frequency measurements over long periods of time are possible. For example, 1000 repeated measurements with a 1 Hz gate allow frequency measurement to a precision of 0.01 Hz.
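Because the count is cumulative, the frequency over any window is just the difference between two readings divided by the elapsed time, and unsigned 32-bit subtraction handles counter rollover for free (so long as fewer than 2^32 input cycles elapse between readings). A small C sketch of the idea (names are illustrative):

```c
#include <stdint.h>
#include <stdio.h>

// Frequency from two cumulative 32-bit counter readings (illustrative sketch).
// Unsigned subtraction yields the correct cycle count even if the counter
// rolled over between the two readings.
double frequency_hz(uint32_t count_now, uint32_t count_prev, double elapsed_sec)
{
    uint32_t cycles = count_now - count_prev;  // wraps correctly on rollover
    return cycles / elapsed_sec;
}

int main(void)
{
    // Readings 1000 seconds apart: the +/-1 count uncertainty is divided
    // across the whole window, which is what enables the precision above.
    uint32_t first = 4294000000u;        // near the top of the 32-bit counter
    uint32_t later = first + 10000000u;  // wraps past 2^32
    printf("%.3f Hz\n", frequency_hz(later, first, 1000.0));
    return 0;
}
```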
An optional external 1PPS gate can be used for more precise timing. The microcontroller is capable of generating gate cycles in software, but precision is then limited to that of the TCXO used to clock the microcontroller (2.5 PPM). For higher-precision gating, a resistor may be lifted and an external gate applied (e.g., a 1PPS GPS signal).
By clocking the microcontroller at 14.7456 MHz with a temperature-compensated crystal oscillator (TCXO) I’m able to communicate with the PC easily at 115200 baud, and with some clever timer settings and interrupts I’m able to toggle an output pin every 14,745,600 cycles to produce a fairly accurate 1PPS signal.
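Here’s a sketch of one way to do that on the ATMega328 (the real firmware is on the USB-Counter GitHub page; this is just the general idea). A /256 prescaler turns 14,745,600 cycles into exactly 57,600 timer ticks per second, which fits comfortably in 16-bit Timer1:

```c
// 1PPS output from a 14.7456 MHz clock (illustrative sketch, not the exact
// firmware). Timer1 runs in CTC mode at clk/256, so a compare match every
// 57,600 ticks fires exactly once per second.
#include <avr/io.h>
#include <avr/interrupt.h>

ISR(TIMER1_COMPA_vect)
{
    PORTB ^= (1 << PB0);  // toggle the gate pin; edges are 1 second apart
}

int main(void)
{
    DDRB |= (1 << PB0);                   // gate pin as output
    TCCR1A = 0;                           // no hardware PWM outputs
    TCCR1B = (1 << WGM12) | (1 << CS12);  // CTC mode, clk/256 prescaler
    OCR1A = 57600 - 1;                    // 14,745,600 / 256 = 57,600 ticks/s
    TIMSK1 = (1 << OCIE1A);               // interrupt on compare match A
    sei();                                // enable interrupts
    for (;;) {}                           // everything happens in the ISR
}
```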
According to the SN74LV8154 datasheet, the minimum expected maximum input frequency (fMAX) is 40 MHz. To count higher frequencies, a high-speed prescaler could be added to the input to divide the input signal down to a frequency this counter can measure. This was discussed in the original issue that kicked off this project, and Onno Hoekstra (PA2OHH) recommended the SAB6456 divide-by-64/divide-by-256 prescaler, which supports input frequencies up to 1 GHz (divided by 64, a 1 GHz input becomes about 15.6 MHz, well within this counter’s range). However, present availability seems to be limited. A similar chip, or even a pair of flip-flops that work in the GHz range, could achieve this functionality.
By populating one of two input paths with components this device can serve as a sensitive frequency counter (with a small-signal amplifier front-end) or a pulse counter (with a simple 50 ohm load at the front-end).
An optional amplifier front-end has been added to turn weak input into strong square waves suitable for driving the TTL counter IC. It is designed for continuously running input, and will likely self-oscillate if it is not actively driven.
⚠️ WARNING: There is an error in this schematic. The protection diodes should be the other way around.
This simulation shows a small 1 MHz signal fed into the high-impedance front-end being amplified to easily satisfy TTL levels. The 1k resistor (R3) could be swapped out for a 50 ohm resistor for a more traditional input impedance if desired. LTSpice source files are in the GitHub repository in case you want to refine the simulation.
This device doesn’t work when plugged into a power-only USB wall adapter (no data lines). It seems an active USB connection is required for the 3.3V regulator built into the FTDI chip to deliver power, so the next revision should use a discrete regulator: add a 7805 so the board can be powered from either 12V or USB, and use a 78L33 (not the regulator on the FTDI chip) to power everything else.
For a standalone (LED) device, no USB connection would be needed: make a version that accepts 12V and displays the result on the LED display, make the optional external gate easy to access, and break out the TX pin so PC logging remains easy.
The design is similar (a CMOS buffer driving an IRF510), but I used perfboard to make this one and placed it in an enclosure. There’s no low-pass filter on the amplifier itself, but I put a 30m-band low-pass filter in-line with the coax before the antenna. It’s currently outputting 20 V peak-to-peak into 50 ohms (1 watt).