This article describes how I use GitHub Actions to deploy content using FTP without any third-party dependencies. Code executed in continuous deployment pipelines may have access to secrets (like FTP credentials and SSH keys). Supply-chain attacks are becoming more frequent, including self-sabotage by open-source authors. Without 2FA, the code of well-intentioned maintainers is one stolen password away from becoming malicious. For these reasons I find it imperative to eliminate third-party Actions from my CI/CD pipelines wherever possible.
⚠️ WARNING: Third-party Actions in the GitHub Actions Marketplace may be compromised to run malicious code and leak secrets. There are dozens of public actions claiming to facilitate FTP deployment. I advise avoiding third-party actions in your CI/CD pipeline whenever possible.
This article assumes you have at least some familiarity with GitHub Actions, but if you’ve never used them before I recommend taking 5 minutes to work through the Quickstart for GitHub Actions.
This workflow demonstrates how to use LFTP inside a GitHub Action to transfer files/folders with FTP without requiring a third-party dependency. Users can copy/paste this workflow and edit it as needed according to the LFTP manual.
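Here is a minimal sketch of such a deployment step. The secret names, local folder, and remote path are placeholders you would adapt to your own setup:

```yaml
- name: 🚀 Deploy via LFTP
  run: |
    # install lftp on the runner (not assumed to be preinstalled)
    sudo apt-get update && sudo apt-get install -y lftp
    # mirror the local ./public folder up to the remote server
    lftp -u "${{ secrets.FTP_USERNAME }},${{ secrets.FTP_PASSWORD }}" \
         -e "mirror --reverse --verbose ./public /public_html; bye" \
         ${{ secrets.FTP_HOSTNAME }}
```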
Extra steps can be taken to record the host’s public certificate, store it as a GitHub Encrypted Secret, load it into the GitHub Action runner, and configure LFTP to compare against it at run time.
1: Acquire your host’s entire certificate chain. The -showcerts argument was critically important for me.
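For an FTPS server using explicit TLS on the standard port, the command looks something like this (your port and protocol may differ):

```bash
# print every certificate in the chain offered by the server
openssl s_client -connect example.com:21 -starttls ftp -showcerts
```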
To avoid storing passwords to disk you can pass them in with each lftp command using the -u argument. See the LFTP Documentation for details.
Although potentially insecure, some GitHub Marketplace Actions offer compelling features: One of the most popular is SamKirkland’s FTP Deploy Action which has advanced features like the use of server-stored JSON files to store file hashes to detect and selectively re-upload changed files. I encourage you to check them out, even though I try to avoid passing my secrets through third-party actions wherever possible.
I created a badge to dynamically display stats for any public GitHub repository using HTML and Vanilla JavaScript. I designed it so anyone can have their own badge by copying two lines of HTML into their website.
I don’t write web frontend code often, so after getting this idea I decided to see how far I could take it. I treated this little project as an opportunity to get some experience exploring a stack I don’t interact with often, and to see if I could take it all the way to something that would look nice and scale infinitely for free. This article documents what I learned along the way.
```html
<!-- paste anywhere in your site -->
<a href="http://github.com/USER/REPO" id="github-stats-badge">GitHub</a>
<script src="https://swharden.github.io/repo-badge/badge.js" defer></script>
```
Because the defer attribute is set on the script element, the JavaScript will not run until after the page loads. This ensures all the elements it will interact with are present before it starts editing the DOM. Note that the HTML added by the user is a link to the GitHub project, so even if the JS fails completely this link remains functional and useful.
The a with id github-stats-badge is identified and the href is read to determine the user and name of the repository to display on the badge
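A sketch of that lookup (the variable names are mine for illustration):

```javascript
// find the badge anchor and derive the user and repository from its link
const badge = document.getElementById("github-stats-badge");
const [user, repo] = new URL(badge.href).pathname.split("/").filter(s => s.length);
```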
CSS is assembled in a style element and appended to the head
JavaScript deletes the content of the original a and replaces it with nested div, a, and span elements to build the badge in the DOM dynamically. Each stats block is hidden by setting its opacity to zero, preventing the user from seeing elements before they are filled with real data. This also fills out the dimensions of the badge to prevent the page from shifting as its components load individually.
Asynchronous requests are sent to GitHub’s RESTful API endpoints using fetch() and the JSON responses are parsed to get the latest release tag, star count, and number of forks
Information from the API is loaded into span elements and the opacity is set to one (with CSS transitions) so it fades in after the HTTP request returns a valid result. The fade-in effect makes the delayed appearance seem intentional, when in reality it’s just buying time for the HTTP request to complete its round-trip. Without this fade, the rapid appearance of text (or the replacement of dummy text with real values) is much more jarring.
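The stats request looks something like this (showStats is a hypothetical stand-in for the display code; the latest-release endpoint is queried the same way, sketched further below):

```javascript
// query GitHub's REST API for star and fork counts
fetch(`https://api.github.com/repos/${user}/${repo}`)
  .then(response => response.json())
  .then(info => showStats(info.stargazers_count, info.forks_count));
```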
I expect the HTTP request to return a JSON document with a tag_name element, but if it does not I build my own object containing that element (filled with dummy data) and pass it along.
The display code (which sets the text, increases opacity, and sets the link) doesn’t actually know whether the request succeeded or failed.
This is how I ensure the badge is always left in a presentable state.
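A sketch of that normalization (the dummy tag and the showRelease name are illustrative):

```javascript
// query the latest release, but guarantee the display code always
// receives an object containing a tag_name, even on failure
fetch(`https://api.github.com/repos/${user}/${repo}/releases/latest`)
  .then(response => response.json())
  .catch(() => ({})) // a network error yields an empty object
  .then(release => "tag_name" in release ? release : { tag_name: "1.2.3" })
  .then(release => showRelease(release.tag_name));
```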
I don’t use CSS fading that often, but I found it produced a fantastic result here. Here’s the magic bit of CSS that enables fading effects as JavaScript twiddles the opacity.
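The relevant rule looks something like this (the selector is my assumption about how the badge’s text spans are addressed):

```css
#github-stats-badge span {
  opacity: 0;
  /* any later change to opacity is animated over half a second */
  transition: opacity 0.5s ease-in-out;
}
```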
GitHub has official MIT-licensed icons available as SVG files. These are fantastic because you can view their source and it’s plain text! You can copy that plain text directly into an HTML document, or in my case wrap it in JavaScript so I can serve it dynamically.
Note that the NS method and xmlns attribute are critical for SVG elements to work in the browser. For more information check out Mozilla’s Namespaces crash course
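A sketch of building the icon (the path data would be copied from GitHub’s octicon SVG source):

```javascript
// SVG elements must be created with createElementNS() in the SVG namespace;
// plain createElement() would produce unknown HTML elements that render nothing
const xmlns = "http://www.w3.org/2000/svg";
const svg = document.createElementNS(xmlns, "svg");
svg.setAttribute("viewBox", "0 0 16 16");
const path = document.createElementNS(xmlns, "path");
path.setAttribute("d", "M8 0C3.58 0 ..."); // truncated octicon path data
svg.appendChild(path);
```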
The non-minified plain-text JavaScript file is less than 8 kB. This could be improved by minification and/or gzip compression, but I may choose not to do this.
I appreciate HTML and JS that are human-readable, especially when they were written by hand. Perhaps a good compromise would be to offer badge.js and badge.min.js, but even this would add complexity by necessitating a build step which is not currently required.
I organized this project so it could be served using GitHub Pages. Basically you just check a box on the GitHub repository settings page, then docs/index.html will be displayed when you go to USER.github.io/REPO in a browser. Building/publishing is performed automatically using GitHub Actions, and it works immediately without having to manually create a workflow yaml file.
Although GitHub Pages supports fancy markdown-based flat-file static website generation using Jekyll, I chose to create a project page using hand-crafted HTML, CSS, and Vanilla JS with no framework or build system. Web0 for the win!
GitHub stores and serves the content (with edge caching) so I’m protected in the unlikely case where this project goes viral and millions of people start downloading my JavaScript file. GitHub will scale horizontally as needed to infinity to meet the demand from increased traffic, and all the services I’m using are free.
Although the project page is simple, I wanted it to look nice. There are so many things to consider when making a new webpage! Here are a few that make my list, and most of them don’t apply to this small one-page website but I thought I’d share my whole list anyway.
Altogether the project page looks great and the badge seems to function as expected! I’ll continue to watch the repository so if anyone opens an issue or creates a pull request offering improvements I will be happy to review it.
This little Vanilla JS project touched a lot of interesting corners of web frontend development, and I’m happy I got to explore them today!
This article explores my recreation of the classic screensaver Mystify your Mind implemented using C#. I used SkiaSharp to draw graphics and FFMpegCore to encode frames into high definition video files suitable for YouTube.
The Mystify Sandbox application has advanced options allowing exploration of various configurations outside the capabilities of the original screensaver. Interesting configurations can be exported as video (x264-encoded MP4 or WebM format) or viewed in full-screen mode resembling an actual screensaver.
The original Mystify implementation did not clear the screen between frames. With GDI, large fills (clearing the background) are expensive, and drawing many polygons probably challenged performance in the 90s. Instead only the leading wire was drawn, and the trailing wire was drawn over in black. This strategy results in lines which appear to have single-pixel breaks on a black background (magenta arrow). It may not have been particularly visible on the CRT monitors of the 90s, but it is quite noticeable on LCD screens today.
Observing videos of the classic screensaver I noticed that corners don’t bounce symmetrically off edges: after every bounce they change their speed slightly. This can be seen in the history of corners reflecting off the edges of the screen (green arrow). I recreated this behavior using a weighted random number generator.
I used an HSL-to-RGB method to generate colors from hue (variable), saturation (always 100%), and luminosity (always 50%). By slowly and repeatedly ramping hue from 0% to 100% I achieved a rainbow gradient effect. Increasing the color change speed (% change for every new wire) cycles the colors faster, and very high values produce polygons whose visible history spans a gradient of colors. The fade effect is achieved by increasing the alpha of wire snapshots as they are drawn from old to new.
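A sketch of the idea (GetRainbowColors is a hypothetical helper; SKColor.FromHsl() is real SkiaSharp API taking hue 0-360 and saturation/luminosity 0-100):

```csharp
using SkiaSharp;

// generate a run of colors whose hue ramps upward and wraps around,
// with saturation pinned at 100% and luminosity at 50%
static SKColor[] GetRainbowColors(int count, float colorChangeSpeed)
{
    SKColor[] colors = new SKColor[count];
    float hue = 0;
    for (int i = 0; i < count; i++)
    {
        hue = (hue + colorChangeSpeed) % 1f;
        colors[i] = SKColor.FromHsl(hue * 360, 100, 50);
    }
    return colors;
}
```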
The FFMpegCore package is a C# wrapper for FFMpeg that can encode video from frames piped into it. Using this strategy required creation of a SkiaSharp.SKBitmap wrapper that implements FFMpegCore.Pipes.IVideoFrame. For a full explanation and example code see C# Data Visualization: Render Video with SkiaSharp.
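That article has the full details, but the wrapper looks roughly like this (a sketch; the "bgra" format string assumes the bitmap uses SkiaSharp’s default Bgra8888 color type):

```csharp
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using FFMpegCore.Pipes;
using SkiaSharp;

// expose an SKBitmap's raw pixel bytes to FFMpegCore's frame pipe
class SKBitmapFrame : IVideoFrame, IDisposable
{
    private readonly SKBitmap Bmp;
    public SKBitmapFrame(SKBitmap bmp) => Bmp = bmp;
    public int Width => Bmp.Width;
    public int Height => Bmp.Height;
    public string Format => "bgra";
    public void Serialize(Stream pipe) => pipe.Write(Bmp.Bytes, 0, Bmp.Bytes.Length);
    public Task SerializeAsync(Stream pipe, CancellationToken token) =>
        pipe.WriteAsync(Bmp.Bytes, 0, Bmp.Bytes.Length, token);
    public void Dispose() => Bmp.Dispose();
}
```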
It’s amusing to see retro screensavers running on modern gear! I can run this graphics model simulation at full-screen resolutions using thousands of wires at real-time frame rates. The most natural density of shapes for my 3440x1440 display was 20 wires with a history of 5.
Rendering the 2D image and encoding HD video using the x264 codec occupies all my CPU cores and runs a little above 500 frames per second. Encoding 24 hours of video (over 2 million frames) took this system 1 hour and 12 minutes and produced a 15.3 GB MP4 file. Encoding WebM format is considerably slower, with the same system only achieving an encoding rate of 12 frames per second.
Increasing the rate of color transition produces a rainbow effect within the visible history of polygons. The effect is made more striking by increasing the history length and decreasing the speed so the historical lines are closer together.
If the speed is greatly decreased and the number of historical records is greatly increased, the resulting shape has little or no gap between historical traces and appears like a solid object. If fading is enabled (where the opacity of older traces fades to transparent) the resulting effect is very interesting.
Adding 100 shapes produces a chaotic but interesting effect. This may be the first time the world has seen Mystify like this!
EDIT: All these lines are very stressful on the video encoder and produce large file sizes to achieve high quality (25 MB for 10 seconds). I’m showing this one as a JPEG but click here to view mystify-100.webm if you’re on a good internet connection.
This article describes how I safely use GitHub Actions to build a static website with Hugo and deploy it using SSH without any third-party dependencies. Code executed in continuous deployment pipelines may have access to secrets (like FTP credentials and SSH keys). Supply-chain attacks are becoming more frequent, including self-sabotage by open-source authors. Without 2FA, the code of well-intentioned maintainers is one stolen password away from becoming malicious. For these reasons I find it imperative to eliminate third-party Actions from my CI/CD pipelines wherever possible.
⚠️ WARNING: Third-party Actions in the GitHub Actions Marketplace may be compromised to run malicious code and leak secrets. There are hundreds of public actions claiming to help with Hugo, SSH, and Rsync execution. I advise avoiding third-party actions in your CI/CD pipeline whenever possible.
This article assumes you have at least some familiarity with GitHub Actions, but if you’ve never used them before I recommend taking 5 minutes to work through the Quickstart for GitHub Actions.
This is my cicd-website.yaml workflow for building a Hugo website and deploying it with SSH. Most people can just copy/paste what they need from here, but the rest of the article will discuss the purpose and rationale for each of these sections in more detail.
The on section determines which triggers will initiate this workflow (building/deploying the site). The following will run the workflow after every push to the GitHub repository. The workflow_dispatch allows the workflow to be triggered manually through the GitHub Actions web interface.
```yaml
on:
  workflow_dispatch:
  push:
```
I store my hugo site in the subfolder ./website, so if I wanted to only rebuild/redeploy when the website files are changed (and not other files in the repository) I could add a paths filter. If your repository has multiple branches you likely want a branches filter as well.
```yaml
on:
  workflow_dispatch:
  push:
    paths:
      - "website/**"
    branches:
      - main
```
This step defines the Hugo version I want as a temporary environment variable, downloads that release’s binary from the Hugo Releases page on GitHub, extracts it, and moves the executable to /usr/local/bin so it can subsequently be run from any folder.
```yaml
- name: ✨ Setup Hugo
  env:
    HUGO_VERSION: 0.92.2
  run: |
    mkdir ~/hugo
    cd ~/hugo
    curl -L "https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/hugo_${HUGO_VERSION}_Linux-64bit.tar.gz" --output hugo.tar.gz
    tar -xvzf hugo.tar.gz
    sudo mv hugo /usr/local/bin
```
I store my hugo site in the subfolder ./website, so when I build the site I must define the source folder. Check out the Hugo build commands page for documentation about all the available options.
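A minimal build step could look like this (--minify is optional):

```yaml
- name: 🛠️ Build Site
  run: hugo --source website --minify
```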
This part is likely the most confusing for new users, so I’ll keep it as minimal as possible. Before you start, I recommend you follow your hosting provider’s guide for setting up SSH. Once you can SSH from your own machine, it will be much easier to set it up in GitHub Actions.
To protect you from leaking your private key to a compromised host, you can retrieve your host’s public key and check against it later to be sure it does not change.
If you don’t want this protection, add -o StrictHostKeyChecking=no to the ssh command rsync invokes (via its -e option), as shown at the top of the page.
If you do want this protection, use the following steps to store the host identity as a GitHub Encrypted Secret.
To get the keys for your host, run the following command:
ssh-keyscan example.com
My hosting provider (SiteGround) uses a non-standard SSH port, so I must specify it with:
ssh-keyscan -p 18765 example.com
The host’s public keys will be written to the console as a block of text like this:
These commands will create text files in your .ssh folder containing your private key and the public keys of your host. Later rsync will complain if your private key is in a file with general read/write access, so the install command is used to create an empty file with user-only read/write access (chmod 600), then an echo command is used to populate that file with your private key information.
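A sketch, assuming the private key and the ssh-keyscan output were saved as secrets named PRIVATE_SSH_KEY and KNOWN_HOSTS:

```yaml
- name: 🔐 Load SSH Keys
  run: |
    # create an empty file with user-only read/write permission, then fill it
    install -m 600 -D /dev/null ~/.ssh/id_rsa
    echo "${{ secrets.PRIVATE_SSH_KEY }}" > ~/.ssh/id_rsa
    # record the host keys captured earlier with ssh-keyscan
    echo "${{ secrets.KNOWN_HOSTS }}" > ~/.ssh/known_hosts
```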
Rsync is an application for synchronizing files over networks which is available on most Linux distributions. It only sends files whose modification times or file sizes differ, so it can be used to efficiently deploy changes to very large websites.
Notable details of my deployment step (sketched below):
- I store my remote destination as a GitHub Encrypted Secret - not because it’s private, but so I don’t accidentally mess it up by incorrectly managing my workflow YAML (which could result in remote data deletion)
- I display a small stats section after finishing (see screenshot)
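A sketch of that step (the non-standard port and the website/public output folder reflect my setup; --stats prints the summary mentioned above):

```yaml
- name: 🚀 Deploy Site
  run: rsync --archive --delete --compress --stats -e 'ssh -p 18765' website/public/ ${{ secrets.REMOTE_DEST }}
```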
The hosting provider SiteGround has a Dynamic Cache service that automatically caches static content. The dynamic cache can be cleared manually through the web interface, but that is a frustrating manual process. To clear the dynamic cache programmatically from a GitHub Action, use an SSH command to engage the site-tools-client application:
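I believe the command takes roughly this form (the domain id, user, and port here are assumptions for illustration; consult SiteGround’s site-tools-client documentation for the exact arguments):

```yaml
- name: 🧹 Clear Server Cache
  run: ssh user@example.com -p 18765 "site-tools-client domain update id=1 flush_cache=1"
```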
That’s a lot to figure out and set up the first time, but once you have your SSH keys ready and some YAML you can copy/paste across multiple projects it’s not that bad.
I find rsync to be extremely fast compared to something like FTP run in GitHub Actions, and I’m very satisfied that I can achieve all these steps using Linux console commands without depending on any third-party Actions.
The official Hosting and Deployment site has information for:
Google Cloud, AWS, Azure, Netlify, GitHub Pages, KeyCDN, Render CDN, Bitbucket, Firebase, GitLab, and Rsync over SSH.
A collection of my personal notes related to Hugo is in my code-notes/Hugo repository.
I recently had the need to determine the IP address of the server running my GitHub Action. Knowing this may be useful for matching up individual workflow runs with specific entries in log files, or for temporarily whitelisting the action runner’s IP during testing.
I found that a cURL request to ipify.org can achieve this simply:
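The api.ipify.org endpoint returns the caller’s public IP as plain text, so a workflow step could be as simple as this (the step name is arbitrary):

```yaml
- name: 🔎 Show Runner IP
  run: curl https://api.ipify.org
```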
There are published/shared Actions which do something similar (e.g., haythem/public-ip) but whenever possible I avoid these because they are a potential vector for supply chain attacks (a compromised action could access secrets in environment variables).