Site.js

Small web construction set.

Person lying on the ground, working on a laptop with the Site.js logo on screen

Develop, test, sync, and deploy (using a single tool that comes in a single binary).

Site.js is a small personal web tool for Linux, macOS, and Windows 10.

Most tools today are built for startups and enterprises. Site.js is built for people.

Like this? Fund us!

Small Technology Foundation is a tiny, independent not-for-profit.

We exist in part thanks to patronage by people like you. If you share our vision and want to support our work, please become a patron or donate to us today and help us continue to exist.

Feature Highlights

Note: Production use via startup daemon is only supported on Linux distributions with systemd.

Install

Copy and paste the following commands into your terminal:

(Note: all commands should be run in your regular account, not as root. As of 15.4.0, Site.js will refuse to run if launched from the root account.)

Native binaries

Before you pipe any script into your computer, always view the source code (Linux and macOS, Windows) and make sure you understand what it does.

Linux

wget -qO- https://sitejs.org/install | bash

(To use curl instead, see the macOS instructions, below.)

macOS

curl -s https://sitejs.org/install | bash

Windows 10 with PowerShell running under Windows Terminal

iex(iwr -UseBasicParsing https://sitejs.org/install.txt).Content

Node.js

npm i -g @small-tech/site.js

Alpha and Beta channels

On Linux and macOS, in addition to the release build channel, there are also alpha and beta build channels available. Pass either alpha or beta as an argument to the Bash pipe to install the latest build from the respective channel.

For example, to install the latest beta build on Linux:

wget -qO- https://sitejs.org/install | bash -s -- beta

Note: On Macs, wget is not installed by default but curl is, so you can use that instead:

curl -s https://sitejs.org/install | bash -s -- beta

Alpha builds are strictly for local testing and should not, under any circumstances, be used in production. We do not test Alpha builds in production.

Servers deployed using release builds check for updates every six hours whereas beta builds check every 10 minutes.

Note that the latest alpha or beta build available may be older than the latest release build. You can check the date on the build via the version command.

System Requirements

Linux

Any recent Linux distribution should work. However, Site.js is most thoroughly tested at Small Technology Foundation on Ubuntu 20.04/Pop!_OS 20.04 (development and staging) and Ubuntu 18.04 LTS (production).

There are builds available for x64, ARM, and ARM64.

For production use, systemd is required.

macOS

macOS 10.14.x Mojave and macOS 10.15.x Catalina are supported (the latter as of Site.js 12.5.1).

Production use is not possible under macOS.

Windows 10

The current version of Windows 10 is supported with PowerShell running under Windows Terminal.

Windows Subsystem for Linux (WSL) is not supported. (You can install and run Site.js under WSL but seamless TLS certificate handling for local servers will not work out of the box as WSL and Windows 10 do not share certificate stores. If you do want to use Site.js under WSL, you have to first install Site.js on Windows 10 and run a local server (site) to create the certificate authority and certificates, then install and run Site.js under WSL and then manually copy the contents of ~/.small-tech.org/site.js/tls/local/ from Windows 10 to WSL.)

Production use is not possible under Windows.

Dependencies

Site.js tries to seamlessly install the dependencies it needs when run. That said, there are certain basic components it expects on a Linux-like system. These are:

  • sudo
  • bash (on Linux, macOS, etc.) or PowerShell running under Windows Terminal (on Windows 10).
  • wget or curl (on Linux and macOS) are required to download the installation script when installing Site.js using the one-line installation command. On Linux, you can install either via your distribution’s package manager (e.g., sudo apt install wget on Ubuntu-like systems). macOS comes with curl installed.

If it turns out that any of these prerequisites is a widespread cause of first-run woe, we can look into having them installed automatically in the future. Please open an issue if any of these affects you during your deployments or in everyday use.

Automatically-installed dependencies

For production use, passwordless sudo is required. On systems where the sudo configuration directory is set to /etc/sudoers.d, Site.js will automatically install this rule. On other systems, you might have to set it up yourself.
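
If you do need to set the rule up yourself, a generic passwordless sudo rule looks something like the following. This is only a sketch that assumes your account is called laura; the exact rule Site.js installs may differ.

# Add a drop-in rule granting passwordless sudo to the account “laura”.
echo 'laura ALL=(ALL:ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/laura

# Verify the syntax of the new file before relying on it.
sudo visudo -c -f /etc/sudoers.d/laura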

For localhost servers, the bundled mkcert requires certutil and the Network Security Services (NSS) dynamic libraries. Site.js will attempt to automatically install the required libraries using popular package managers. Please note that this previously failed on PinePhones running UBPorts as NSS was missing from the apt package manager for that distribution; that issue has since been resolved.

Update (as of version 12.9.5; properly functioning as of version 12.9.6)

To seamlessly update the native binary if a newer version exists:

site update

This command will automatically restart a running Site.js daemon if one exists. If you are running Site.js as a regular process, it will continue to run and you will run the newer version the next time you launch a regular Site.js process.

Note: There is a bug in the semantic version comparison in the original release with the update feature (version 12.9.5) that will prevent upgrades between minor versions (i.e., between 12.9.5 and 12.10.x and beyond). This was fixed in version 12.9.6. If you’re still on 12.9.5 and you’re reading this after we’ve moved to 12.10.0 and beyond, please stop Site.js if it’s running and install the latest Site.js manually.

Automatic updates in production (as of version 12.10.0)

Production servers started with the enable command will automatically check for updates on first launch and then again at a set interval (currently every 6 hours) and update themselves as and when necessary.

This is a primary security feature given that Site.js is meant for use by individuals, not startups or enterprises with operations teams that can (in theory, at least) maintain servers with the latest updates.

Uninstall

To uninstall the native binary (and any created artifacts, like TLS certificates, systemd services, etc.):

site uninstall

Use

Development (servers @localhost)

Regular server

Start serving the current directory at https://localhost as a regular process using locally-trusted certificates:

$ site

Note that if your current working directory is inside a special subfolder of your site (.dynamic, .hugo, .wildcard, .db) Site.js (as of version 15.4.0) magically does the right thing and serves the site root instead of the folder you’re in. If you really do want to serve one of these folders or a subfolder thereof, specifically state your intent by passing the current folder (.) as an argument.

The above caveat aside, the command above is a shorthand for the full form of the serve command:

$ site serve . @localhost:443

Note: As of 15.4.0, Site.js will refuse to serve the root directory or your home directory for security reasons.

To serve on a different port

Just specify the port explicitly as in the following example:

$ site @localhost:666

That, again, is shorthand for the full version of the command, which is:

$ site serve . @localhost:666

Accessing your local server over the local area network

You can access local servers via their IPv4 address over a local area network.

This is useful when you want to test your site with different devices without having to expose your server over the Internet using a service like ngrok. For example, if your machine’s IPv4 address on the local area network is 192.168.2.42, you can just enter that IP to access it from, say, your iPhone.

To access your local machine from a different device on your local area network, you must transfer the public key of your generated local root certificate authority to that device and install and trust it.

For example, if you’re on an iPhone, hit the /.ca route in your browser:

http://192.168.2.42/.ca

The browser will download the local root certificate authority’s public key and prompt you to install the profile on your iPhone. You then have to go to Settings → Profile Downloaded → Tap Install when the Install Profile pop-up appears showing you the mkcert certificate you downloaded. Then, go to Settings → General → About → Certificate Trust Settings → Turn on the switch next to the mkcert certificate you downloaded. You should now be able to hit https://192.168.2.42 and see your site from your iPhone.

You can also transfer your key to your other devices manually. You can find the key at ~/.small-tech/site.js/tls/local/rootCA.pem after you’ve created a local server at least once. For more details on transferring your key to other devices, please refer to the relevant section in the mkcert documentation.
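
For example, to copy the key to another machine over SSH (the account and host names below are purely illustrative):

scp ~/.small-tech/site.js/tls/local/rootCA.pem laura@my-other-machine.local:~/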

Proxy server

You can use Site.js as a reverse proxy for HTTP and WebSocket connections. This is useful if you have a web app written in any language that only supports HTTP (not TLS) that you want to deploy securely.

For example, the following is a simple HTTP server written in Python 3 (server.py) that runs insecurely on port 3000:

from http.server import HTTPServer, BaseHTTPRequestHandler

class MyRequestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'Hello, from Python!')

server = HTTPServer(('localhost', 3000), MyRequestHandler)
server.serve_forever()

Run it (at http://localhost:3000) with:

$ python3 server.py

Then, proxy it securely from https://localhost using:

$ site :3000

Again, this is a convenient shortcut. The full form of this command is:

$ site serve :3000 @localhost:443

This will create and serve the following proxies:

  • http://localhost:3000 → https://localhost
  • ws://localhost:3000 → wss://localhost

Testing (servers @hostname)

Regular server

Start serving the my-site directory at your hostname as a regular process using globally-trusted Let’s Encrypt certificates:

$ site my-site @hostname

Note that as of 13.0.0, Site.js will refuse to start the server if your hostname (or the domain you specified manually using the --domain option and any aliases you may have specified using the --aliases option) fails to resolve or is unreachable. This should help you diagnose and fix typos in domain names as well as DNS misconfiguration and propagation issues. As of 14.1.0, you can use the --skip-domain-reachability-check flag to override this behaviour and skip the pre-flight checks.
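
For example, to skip the pre-flight checks (say, while you are waiting for a DNS change to propagate):

$ site my-site @hostname --skip-domain-reachability-check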

Proxy server

Start serving http://localhost:1313 and ws://localhost:1313 at your hostname:

$ site :1313 @hostname

macOS notes

To set your hostname under macOS (e.g., to example.small-tech.org), run the following command:

$ sudo scutil --set HostName example.small-tech.org

Windows 10 notes

On Windows 10, you must add quotation marks around @hostname and @localhost. So the first example, above, would be written in the following way on Windows 10:

$ site my-site "@hostname"

Also, Windows 10, unlike Linux and macOS, does not have the concept of a hostname. The closest thing to it is your full computer name. Setting your full computer name is a somewhat convoluted process so we’ve documented it here for you.

How to set your full computer name on Windows 10

Say you want to set your hostname to my-windows-laptop.small-tech.org:

  1. Control Panel → System And Security → System → Change Settings link (next to Computer name) → [Change…] Button
  2. Under Computer name, enter your subdomain (my-windows-laptop)
  3. [More…] Button → enter your domain name (small-tech.org) in the Primary DNS suffix of this computer field.
  4. Press the various [OK] buttons to dismiss the various modal dialogues and restart your computer.

Making your server public

Use a service like ngrok (Pro+) to point a custom domain name to your temporary staging server. Make sure you set your hostname file (e.g., in /etc/hostname or via hostnamectl set-hostname <hostname> or the equivalent for your platform) to match your domain name. The first time you hit your server via your hostname it will take a little longer to load as your Let’s Encrypt certificates are being automatically provisioned by Auto Encrypt.

When you start your server, it will run as a regular process. It will not be restarted if it crashes or if you exit the foreground process or restart the computer.

Deployment

Pull and push

As of version 14.4.0, you can use the simplified pull and push commands if your local and remote setup adheres to the following Small Web conventions:

Local
  • The name of your local working folder is the same as your domain (if not, specify the domain using the --domain option)
  • Your SSH key is either found at ~/.ssh/id_{your domain}_ed25519 or you have an id_ed25519 or id_rsa file in your ~/.ssh folder. (The former is a Small Web convention, the latter is a general fallback convention.)
Remote
  • Account name: site
  • Folder being served: /home/site/public

If those requirements are met, from within your site’s folder on your local machine, you can pull (download) your site using:

site pull

And you can push (deploy) your site using:

site push

The legacy sync command will continue to work as before and is documented below.

Sync

Site.js can help you deploy your site to your live server with its sync feature.

$ site my-demo --sync-to=my-demo.site

The above command will:

  1. Generate any Hugo content that might need to be generated.
  2. Sync your site from the local my-demo folder via rsync over ssh to the host my-demo.site.

Without any customisations, the sync feature assumes that your account on your remote server has the same name as your account on your local machine and that the folder you are watching (my-demo, in the example above) is located at /home/your-account/my-demo on the remote server. Also, by default, the contents of the folder will be synced, not the folder itself. You can change these defaults by specifying a fully-qualified remote connection string as the --sync-to value.

The remote connection string has the format:

remoteAccount@host:/absolute/path/to/remoteFolder

For example:

$ site my-folder --sync-to=someOtherAccount@my-demo.site:/var/www

If you want to sync not the folder’s contents but the folder itself, use the --sync-folder-and-contents flag. e.g.,

$ site my-local-folder --sync-to=me@my.site:my-remote-folder --sync-folder-and-contents

The above command will result in the following directory structure on the remote server: /home/me/my-remote-folder/my-local-folder. It also demonstrates that if you specify a relative folder, Site.js assumes you mean the folder exists in the home directory of the account on the remote server.

(As of 15.4.0) If the sync command cannot connect in 5 seconds, it will time out. If this happens, check that you have the correct host and account details specified. If you do, there might be a problem with your connection.

(As of 16.1.0) It’s a common mistake to start the sync without specifying the --sync-from option while you’re not in the root of your site but in one of the well-known subfolders (e.g., .hugo if you’re working on a Hugo site, or the .dynamic folder if you happen to be in it because you’re working on a site that uses DotJS). In these instances, Site.js will detect the mistake, understand that you want to sync the site rather than the subfolder, and behave accordingly. If you really do want to sync one of the well-known subfolders for some reason, state your intent explicitly by passing the folder via the --sync-from option. Note that this magic rewriting of the sync path does not happen when you specify a folder explicitly using the --sync-from option.

Live Sync

With the Live Sync feature, you can have Site.js watch for changes to your content and sync them to your server in real-time (e.g., if you want to live blog something or want to keep a page updated with local data you’re collecting from a sensor).

To start a live sync server, provide the --live-sync flag to your sync request.

For example:

$ site my-demo --sync-to=my-demo.site --live-sync

The above command will start a local development server at https://localhost. Additionally, it will watch the folder my-demo for changes and sync any changes to its contents via rsync over ssh to the host my-demo.site.

Production

Available on Linux distributions with systemd (most Linux distributions, but not these ones or on macOS or Windows).

For production use, passwordless sudo is required. On systems where the sudo configuration directory is set to /etc/sudoers.d, Site.js will automatically install this rule. On other systems, you might have to set it up yourself.

Please make sure that you are NOT running as root. (As of 15.4.0, Site.js will refuse to run if launched from the root account.)

On your live, public server, you can start serving the my-site directory at your hostname as a daemon that is automatically run at system startup and restarted if it crashes with:

$ site enable my-site

The enable command sets up your server to start automatically when your computer starts and to restart automatically if it crashes.

For example, if you run the command on a connected server that has the ar.al domain pointing to it and ar.al set in /etc/hostname, you will be able to access the site at https://ar.al. (Yes, of course, ar.al runs on Site.js.) The first time you hit your live site, it will take a little longer to load as your Let’s Encrypt certificates are being automatically provisioned by Auto Encrypt.

By default, the automatic TLS certificate provisioning gets certificates for your naked domain only, which it bases on your hostname.

If you want to serve your site at a domain that’s different to your hostname, specify it using the --domain option.

If you also want certificates for the www subdomain, specify it using the --aliases option. You can specify multiple subdomains to provision certificates for by separating them using commas (without spaces).
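
For example, the following serves the my-site folder at my.site and also provisions certificates for (and redirects from) www.my.site and blog.my.site (the domain names are, of course, illustrative):

$ site enable my-site --domain=my.site --aliases=www,blog.my.site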

Note: As of 13.0.0, the enable command will run pre-flight checks and refuse to install the service if the domain name and any aliases you have specified are not reachable. As of 14.1.0, you can use the --skip-domain-reachability-check flag to override this behaviour and skip the pre-flight checks. If you use this flag, the server launched by the installed service will also not check for reachability. This is useful if you want to set up a server via a script prior to DNS propagation. Just make sure you haven’t made any typos in any of the domain names as you will not be warned about any mistakes.

Note: As of 16.2.0, you can now also run proxy servers in production. To proxy whatever is running over HTTP and WS on port 8080 at https://your.domain, do:

$ site enable :8080

When the server is enabled, you can also use the following commands:

  • start: Start server.
  • stop: Stop server.
  • restart: Restart server.
  • disable: Stop server and remove from startup.
  • logs: Display and tail server logs (press Ctrl+C to exit).
  • status: Display detailed server information.

Site.js uses systemd to start and manage the daemon. Beyond the commands listed above that Site.js supports natively (and proxies to systemd), you can make use of all systemd functionality via the systemctl and journalctl commands.
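
For example, since the daemon is installed as site.js.service, you can inspect it directly (a sketch; output will vary):

sudo systemctl status site.js
sudo journalctl -u site.js --since today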

Build and test from source

Site.js is built using and supports Node.js 12 LTS (currently version 12.16.2).

It has also been tested to work with the latest LTS (14.x).

The build is created using Nexe and our own pre-built Nexe base Node.js binaries hosted on SiteJS.org. Please make sure that the version of your Node.js runtime matches the currently supported version stated above to ensure that the correct Nexe binary build is downloaded and used by the build script.

Install the source and run tests

# Clone and install.
mkdir site.js && cd site.js
git clone https://github.com/small-tech/site.js.git app
cd app
./install

# Make sure your computer is reachable from your
# hostname if you’re going to run the tests.
# (e.g., using PageKite or ngrok, etc.)

# Run tests.
npm test

Note that if you have a large amount of logs, the logs tests might fail due to a timeout. In this case, try clearing your journalctl logs:

journalctl --rotate
journalctl --vacuum-time=1s

Install as global Node.js module

After you install the source and run tests:

# Install the binary as a global module
npm i -g

# Serve the test site locally (visit https://localhost to view).
site test/site

Note: for commands that require root privileges (i.e., enable and disable), Site.js will automatically restart itself using sudo and Node must be available for the root account. If you’re using nvm, you can enable this via:

# Replace v12.16.2 with the version of node you want to make available globally.
sudo ln -s "$NVM_DIR/versions/node/v12.16.2/bin/node" "/usr/local/bin/node"
sudo ln -s "$NVM_DIR/versions/node/v12.16.2/bin/npm" "/usr/local/bin/npm"

If you forget to do this and run site enable, you will find the following error in the systemctl logs: /etc/systemd/system/site.js.service:15: Executable "node" not found in path. The command itself will fail with:

Error: Command failed: sudo systemctl start site.js
Failed to start site.js.service: Unit site.js.service has a bad unit file setting.
See system logs and 'systemctl status site.js.service' for details.

Native binaries

After you install the source and run tests:

# Build the native binary for your platform.
# To build for all platforms, use npm run build -- --all
npm run build

# Serve the test site (visit https://localhost to view).
# e.g., Using the Linux binary with version <binary-version>
# in the format (YYYYMMDDHHmmss).
dist/linux/<binary-version>/site test/site

Build and install native binary locally

After you install the source and run tests:

npm run install-locally

Update the Nexe base binary for your platform/architecture and Node version

(You will most likely not need to do this.)

npm run update-nexe

Deploying Site.js itself

(You will most likely not need to do this.)

# To cross-compile binaries for Linux (x64), macOS, and Windows
# and also copy them over to the Site.js web site for deployment.
npm run deploy

Note that the deployment script requires a modern version of git to work (e.g., will not work on elementary OS 5.1.7 which is based on Ubuntu 18.04.) To install the latest version of git on Ubuntu-esque operating systems (like elementary OS):

sudo apt install software-properties-common
sudo add-apt-repository ppa:git-core/ppa
sudo apt update
sudo apt install git

Syntax

site [command] [folder|:port] [@host[:port]] [--options]
  • command: serve | enable | disable | start | stop | restart | logs | status | update | uninstall | version | help
  • folder|:port: Path of folder to serve (defaults to current folder) or port on localhost to proxy.
  • @host[:port]: Host (and, optionally, port) to serve at. Valid hosts are @localhost and @hostname.
  • --options: Settings that alter command behaviour.

Key: [] = optional   | = or

Commands:

  • serve: Serve specified folder (or proxy specified :port) on specified @host (at :port, if given). The order of arguments is:

    1. what to serve,
    2. where to serve it at. e.g.,

    site serve my-folder @localhost

    If a port (e.g., :1313) is specified instead of my-folder, start an HTTP/WebSocket proxy.

  • enable: Start server as daemon with globally-trusted certificates and add to startup.

  • disable: Stop server daemon and remove from startup.

  • start: Start server as daemon with globally-trusted certificates.

  • stop: Stop server daemon.

  • restart: Restart server daemon.

  • logs: Display and tail server logs.

  • status: Display detailed server information.

  • update: Check for Site.js updates and update if new version is found.

  • uninstall: Uninstall Site.js.

  • version: Display version and exit.

  • help: Display help screen and exit.

If command is omitted, behaviour defaults to serve.

Options:

For both the serve and enable commands:

  • --domain: The main domain to serve (defaults to system hostname if not specified).

  • --aliases: Comma-separated list of additional domains to obtain TLS certificates for and respond to. These domains point to the main domain via a 302 redirect. Note that as of 13.0.0, the www alias is not added automatically. To specify it, you can use the shorthand form: --aliases=www

  • --skip-domain-reachability-check: Do not run pre-flight check for domain reachability.

  • --access-log-errors-only: Display only errors in the access log (HTTP status codes 4xx and 5xx). Successful access requests (1xx, 2xx, and 3xx) are not logged. This is useful during development if you feel overwhelmed by the output and miss other, non-access-related errors.

  • --access-log-disable: Completely disable the access log. No access requests, not even errors will be logged. Be careful when using this in production as you might miss important errors.

For the serve command:

  • --sync-to: The host to sync to.

  • --sync-from: The folder to sync from (only relevant if --sync-to is specified).

  • --live-sync: Watch for changes and live sync them to a remote server (only relevant if --sync-to is specified).

  • --sync-folder-and-contents: Sync folder and contents (default is to sync the folder’s contents only).

For the enable command:

  • --ensure-can-sync: Ensure server can rsync via ssh.

All command-line arguments are optional. By default, Site.js will serve your current working folder over port 443 with locally-trusted certificates.

When you serve a site at @hostname or use the enable command, globally-trusted Let’s Encrypt TLS certificates are automatically provisioned for you using Auto Encrypt the first time you hit your hostname. The hostname for the certificates is automatically set from the hostname of your system. (Note that, as of 13.0.0, the www subdomain is no longer provisioned automatically; specify it using --aliases=www if you want it.)

Usage examples

Develop using locally-trusted TLS certificates

  • Serve current folder*: site
    (also: site serve, site serve ., site serve . @localhost, site serve . @localhost:443)
  • Serve folder demo (shorthand): site demo
  • Serve folder demo on port 666: site serve demo @localhost:666
  • Proxy localhost:1313 to https://localhost*: site :1313
    (also: site serve :1313 @localhost:443)
  • Sync demo folder to my.site: site demo --sync-to=my.site
  • Ditto, but use account me on my.site: site demo --sync-to=me@my.site
  • Ditto, but sync to remote folder ~/www: site demo --sync-to=me@my.site:www
  • Ditto, but specify absolute path: site demo --sync-to=me@my.site:/home/me/www
  • Live sync current folder to my.site: site --sync-to=my.site --live-sync

Stage and deploy using globally-trusted Let’s Encrypt certificates

Regular process:

  • Serve current folder: site @hostname
  • Serve current folder at specified domain: site @hostname --domain=my.site
  • Serve current folder also at aliases: site @hostname --aliases=www,other.site,www.other.site
  • Serve folder demo*: site demo @hostname
    (also: site serve demo @hostname)
  • Proxy localhost:1313 to https://hostname: site serve :1313 @hostname

Start-up daemon:

  • Install and serve current folder as daemon: site enable
  • Ditto & also ensure it can rsync via ssh: site enable --ensure-can-sync
  • Get status of daemon: site status
  • Start server: site start
  • Stop server: site stop
  • Restart server: site restart
  • Display server logs: site logs
  • Stop and uninstall current daemon: site disable

General:

  • Check for updates and update if found: site update

* Alternative, equivalent forms listed (some commands have shorthands).

Native support for an Evergreen Web

What if links never died? What if we never broke the Web? What if it didn’t involve any extra work? It’s possible. And, with Site.js, it’s effortless.

The Archival Cascade

(As of version 13.0.0) If you have static archives of previous versions of your site, you can have Site.js automatically serve them for you.

Just put them into folders named .archive-1, .archive-2, etc.

If a path cannot be found in your current site, Site.js will search for it first in .archive-2 and, if it cannot find it there either, in .archive-1.

Paths in your current site will override those in .archive-2 and those in .archive-2 will, similarly, override those in .archive-1.

If you use the archival cascade, old links will never die; and if you do replace them with newer content in newer versions, the newer content will take precedence.
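
For example, a site with two earlier static archives laid out using this convention might look like the following (a purely illustrative layout):

my-site/
  ├ .archive-1/     (oldest archived version)
  ├ .archive-2/     (more recent archived version)
  └ index.html      (current version of your site)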

Legacy method (pre version 13.0.0)

In older versions, the convention for specifying the archival cascade was as follows:

|- my-site
|- my-site-archive-1
|- my-site-archive-2
|- etc.

This legacy method of specifying the archival cascade is still supported but may be removed in a future release. Please use the recommended method outlined above instead.

Native 404 → 302 support

But what if the previous version of your site is a dynamic site and you either don’t want to lose the dynamic functionality or you simply cannot take a static backup? No worries. Just move it to a different subdomain or domain and make your 404s into 302s.

Site.js has native support for the 404 to 302 technique to ensure an evergreen web. Just serve the old version of your site (e.g., your WordPress site, etc.) from a different subdomain and tell Site.js to forward any unknown requests on your new static site to that subdomain so that all your existing links magically work.

To do so, create a simple file called 4042302 in the root directory of your web content and add the URL of the server that is hosting your older content. e.g.,

/4042302

https://the-previous-version-of.my.site

You can chain the 404 → 302 method any number of times to ensure that none of your links ever break without expending any additional effort to migrate your content.

For more information and examples, see 4042302.org.

Custom error pages

Screenshot of the custom 404 error page included in the unit tests

Custom static 404 and 500 error pages

You can specify a custom error page for 404 (not found) and 500 (internal server error) errors. To do so, create a folder with the status code you want off of the root of your web content (i.e., /404 and/or /500) and place at least an index.html file in the folder. You can also, optionally, put any assets you want to display on your error pages into those folders and load them in via relative URLs. Your custom error pages will be served with the proper error code and at the URL that was being accessed.
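
For example, a site with both custom error pages might be laid out as follows (styles.css is just an illustrative asset):

my-site/
  ├ 404/
  │   ├ index.html
  │   └ styles.css
  ├ 500/
  │   └ index.html
  └ index.html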

If you want to display the path that could not be found in your custom 404 page, use the following template placeholder somewhere on your page and it will be automatically substituted:

THE_PATH

e.g., The example from the test site shown in the screenshot uses the following code:

<p><strong>Sorry, I can’t find</strong> THE_PATH</p>

Custom Hugo 404 error page

As of version 15.4.0, if your site uses the Hugo static site generator, you can create a custom Hugo 404 error page.

Put a 404.html page in your layouts/ folder so that it gets created in your .generated folder when the site is built and it will be used instead of the default 404 page.

Note: If you have both a custom static 404 page (defined at /404/index.html) and a custom Hugo 404 page, the Hugo 404 page will take precedence.

Default 404 and 500 error pages

If you do not create custom error pages, the built-in default error pages will be displayed for 404 and 500 errors.

When creating your own servers (see API), you can generate the default error pages programmatically using the static methods Site.default404ErrorPage() and Site.default500ErrorPage(), passing in the missing path or the error message as the argument, respectively, to get the HTML string of the error page returned.
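
For example, when building your own server using the API, a minimal sketch of generating these pages might look like the following (the require path assumes the @small-tech/site.js npm package named above; treat it as an assumption):

const Site = require('@small-tech/site.js')

// Both static methods return an HTML string.
const notFoundPage = Site.default404ErrorPage('/missing-page')
const internalErrorPage = Site.default500ErrorPage('Something went wrong')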

Ephemeral statistics

When Site.js launches, you will see a line similar to the following in the console:

📊    ❨site.js❩ For statistics, see https://localhost/b64bd821d521b6a65a307c2b83060766

This is your private, cryptographically secure URL where you can access ephemeral statistics about your site. If you want to share your statistics, link to them publicly. If you want to keep them private, keep the URL secret.

Note: As of version 15.4.0, you can remind yourself of the statistics URL while running the Site.js daemon in production using the site status command while the server is active.

Screenshot of the statistics page

The statistics are ephemeral as they are only kept in memory and they reset any time your server restarts.

The statistics are very basic and they’re there only to give an idea about which parts of your site are most popular as well as to highlight missing pages, etc. They’re not there so you can spy on people (if you want to do that, this is not the tool for you).

Static site generation

As of version 13.0.0, Site.js includes the Hugo static site generator.

To create a new Hugo site and start serving it:

# Create a folder to hold your site and switch to it.
mkdir my-site
cd my-site

# Generate empty Hugo site.
site hugo new site .hugo

# Create the most basic layout template possible.
echo 'Hello, world!' > .hugo/layouts/index.html

# Start Site.js
site

When you hit https://localhost, you should see the ‘Hello, world!’ page.

This basic example doesn’t take advantage of any of the features that you’d want to use Hugo for (like markdown authoring, list page creation, etc.). For a slightly more advanced one that does, see the Basic Hugo Blog example.

Of course, if you already know how Hugo works, just download a theme and set up your configuration and you’ll be up and running in no time. Everything in your .hugo folder works exactly as it does in any other Hugo site.

Note: During development, this feature uses Site.js’s live reload instead of Hugo’s. Your web page must have at least a <body> tag for it to work.

How it works

If Site.js finds a folder called .hugo in your site’s root, it will build it using its integrated Hugo instance (you don’t need to install Hugo separately) and place the generated files into a folder called .generated in your site’s root. It will also automatically serve these files.

One difference with plain Hugo is that if you set a baseURL in your configuration, it will be ignored as Site.js sets the baseURL automatically to the correct value based on whether you are running locally in development or at your hostname during staging or production.

Note: You should add .generated to your .gitignore file so as not to accidentally add the generated content into your source code repository.

You can pass any command you would normally pass to Hugo using Site.js’s integrated Hugo instance:

site hugo [any valid Hugo command]
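
For example, to check which version of Hugo is integrated:

site hugo version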

Please see the Hugo documentation for detailed information on how Hugo works.

Mounting Hugo sites

Site.js will automatically mount files in the .hugo directory at your site’s root.

If you want the generated Hugo site to be mounted at a different path, include the path structure you want in the name of the hugo folder, separating paths using two dashes. For example:

  • .hugo → /
  • .hugo--docs → /docs
  • .hugo--second-level--blog → /second-level/blog

You can include any number of Hugo sites in your site and mount them at different paths and the results will be weaved together into the .generated folder. We call this feature… ahem… Hugo Weaving (we’ll show ourselves out).

All regular Site.js functionality is still available when using Hugo generation. So you can, for example, have your blog statically-generated using Hugo and extend it using locally-hosted dynamic comments.

Note: Hugo’s Multilingual Multihost mode is not supported.

Dynamic sites

You can add dynamic functionality by specifying HTTPS and WebSocket (WSS) routes in two ways: either using DotJS – a simple file system routing convention à la PHP, but for JavaScript – or through code in a routes.js file.

In either case, your dynamic routes go into a directory named .dynamic in the root of your site.

DotJS

DotJS maps JavaScript modules defined in .js and .cjs files within a file system hierarchy to routes on your web site in a manner that will be familiar to anyone who has ever used PHP.

GET-only (simplest approach)

The easiest way to get started with dynamic routes is to simply create a JavaScript file in a folder called .dynamic in the root folder of your site. Any routes added in this manner will be served via HTTPS GET.

For example, to have a dynamic route at https://localhost, create the following file:

.dynamic/
    └ index.js

Inside index.js, all you need to do is to export your route handler:

let counter = 0

module.exports = (request, response) => {
  response
    .html(`
      <h1>Hello, world!</h1>
      <p>I’ve been called ${++counter} time${counter > 1 ? 's': ''} since the server started.</p>
    `)
}

To test it, run a local server (site) and go to https://localhost. Refresh the page a couple of times to see the counter increase.

Congratulations, you’ve just made your first dynamic route using DotJS.

In the above example, index.js is special in that the file name is ignored and the directory that the file is in becomes the name of the route. In this case, since we put it in the root of our site, the route becomes /.

Usually, you will have more than just the index route (or your index route might be a static one). In those cases, you can either use directories with index.js files in them to name and organise your routes or you can use the names of .js files themselves as the route names. Either method is fine but you should choose one and stick to it in order not to confuse yourself later on (see Precedence, below).

So, for example, if you wanted to have a dynamic route that showed the server CPU load and free memory, you could create a file called .dynamic/server-stats.js in your web folder with the following content:

const os = require('os')

function serverStats (request, response) {

  const loadAverages = `<p> ${os.loadavg().reduce((a, c, i) => `${a}\n<li><strong>CPU ${i+1}:</strong> ${c}</li>`, '<ul>') + '</ul>'}</p>`

  const freeMemory = `<p>${os.freemem()} bytes</p>`

  const page = `<html><head><title>Server statistics</title><style>body {font-family: sans-serif;}</style></head><body><h1>Server statistics</h1><h2>Load averages</h2>${loadAverages}<h2>Free memory</h2>${freeMemory}</body></html>`

  response.html(page)
}

module.exports = serverStats

Site.js will load your dynamic route at startup and you can test it by hitting https://localhost/server-stats using a local web server. Each time you refresh, you should get the latest dynamic content.

Note: You could also have named your route .dynamic/server-stats/index.js and still hit it from https://localhost/server-stats. It’s best to keep to one or other convention (either using file names as route names or directory names as route names). Using both in the same app will probably confuse you (see Precedence, below).

Specifying parameters

Your DotJS routes can also define named parameters that will be passed to your routes when they are triggered.

To specify a named parameter, separate it from the rest of the route name using an underscore (_). At use, named parameters are provided to the route via the request path and are made available in the route callback as properties on the request.params object.

For example, to have a route that greets people by their first name, create a file called:

.dynamic/hello_name.js

And add the following content:

module.exports = (request, response) => {
  response.html(`<h1>Hello, ${request.params.name}!</h1>`)
}

Now run a local server (site) and hit https://localhost/hello/Laura to see Hello, Laura! in the browser.

You can also specify static path fragments that must be included verbatim in between parameters. You do this by using two underscores (__) instead of one.

For example, to have a route that returns the author ID and book ID that it is passed in a JSON structure, create a file called:

.dynamic/author/index_authorId__book_bookId.js

(Note: you can also call it .dynamic/author_authorId__book_bookId.js. Just make sure you pick one convention and stick to it so you don’t confuse yourself later on.)

Then, add the following content to it:

module.exports = (request, response) => {
  response.json({
    authorId: request.params.authorId,
    bookId: request.params.bookId
  })
}

Now run a local server (site) and hit:

https://localhost/author/philip-pullman/book/his-dark-materials

To see the following JSON object returned:

{
  "authorId": "philip-pullman",
  "bookId": "his-dark-materials"
}

DotJS parameters save you from having to use advanced routing if all you want are named parameters for your routes. The only time you should have to use the latter is if you want to use regular expressions in your route definitions.

Using node modules

Since Site.js contains Node.js, anything you can do with Node.js, you can do with Site.js, including using node modules and npm. To use custom node modules, initialise your .dynamic folder using npm init and use npm install. Once you’ve done that, any modules you require() from your DotJS routes will be properly loaded and used.

Say, for example, that you want to display a random ASCII Cow using the Cows module (because why not?) To do so, create a package.json file in your .dynamic folder (e.g., use npm init to create this interactively). Here’s a basic example:

{
  "name": "random-cow",
  "version": "1.0.0",
  "description": "Displays a random cow.",
  "main": "index.js",
  "author": "Aral Balkan <mail@ar.al> (https://ar.al)",
  "license": "AGPL-3.0-or-later"
}

Then, install the cows node module using npm:

npm i cows

This will create a directory called node_modules in your .dynamic folder and install the cows module (and any dependencies it may have) inside it. Now is also a good time to create a .gitignore file in the root of your web project and add the node_modules directory to it if you’re using Git for source control so that you do not end up accidentally checking in your node modules. Here’s how you would do this using the command-line on Linux-like systems:

echo 'node_modules' >> .gitignore

Now, let’s create the route. We want it reachable at https://localhost/cows (of course), so let’s put it in:

.dynamic/
    └ cows
        └ index.js

And, finally, here’s the code for the route itself:

const cows = require('cows')()

module.exports = function (request, response) {
  const randomCowIndex = Math.floor(Math.random() * cows.length)
  const randomCow = cows[randomCowIndex]

  function randomColor () {
    const c = () => (Math.round(Math.random() * 63) + 191).toString(16)
    return `#${c()}${c()}${c()}`
  }

  response.html(`
    <!doctype html>
    <html lang='en'>
    <head>
      <meta charset='utf-8'>
      <meta name='viewport' content='width=device-width, initial-scale=1.0'>
      <title>Cows!</title>
      <style>
        html { font-family: sans-serif; color: darkgrey; background-color: ${randomColor()}; }
        body {
          display: grid; align-items: center; justify-content: center;
          height: 100vh; vertical-align: top; margin: 0;
        }
        pre { font-size: 24px; color: ${randomColor()}; mix-blend-mode: difference;}
      </style>
    </head>
    <body>
        <pre>${randomCow}</pre>
    </body>
    </html>
  `)
}

Now if you run site on the root of your web folder (the one that contains the .dynamic folder) and hit https://localhost/cows, you should get a random cow in a random colour every time you refresh.

If including HTML and CSS directly in your dynamic route makes you cringe, feel free to require your templating library of choice and move them to external files. As hidden folders (directories that begin with a dot) are ignored in the .dynamic folder and its subfolders, you can place any assets (HTML, CSS, images, etc.) into a directory that starts with a dot and load them in from there.

For example, if I wanted to move the HTML and CSS into their own files in the example above, I could create the following directory structure:

.dynamic/
    └ cows
        ├ .assets
        │     ├ index.html
        │     └ index.css
        └ index.js

For this example, I’m not going to use an external templating engine but will instead rely on the built-in template string functionality in JavaScript along with eval() (which is perfectly safe to use here as we are not processing external input).

So I move the HTML to the index.html file (and add a template placeholder for the CSS in addition to the existing random cow placeholder):

<!doctype html>
<html lang='en'>
<head>
  <meta charset='utf-8'>
  <meta name='viewport' content='width=device-width, initial-scale=1.0'>
  <title>Cows!</title>
  <style>${css}</style>
</head>
<body>
    <pre>${randomCow}</pre>
</body>
</html>

And, similarly, I move the CSS to its own file, index.css:

html {
  font-family: sans-serif;
  color: darkgrey;
  background-color: ${randomColor()};
}

body {
  display: grid;
  align-items: center;
  justify-content: center;
  height: 100vh;
  vertical-align: top;
  margin: 0;
}

pre {
  font-size: 24px;
  mix-blend-mode: difference;
  color: ${randomColor()};
}

Then, finally, I modify my cows route to read in these two template files and to dynamically render them in response to requests. My index.js now looks like this:

// These are run when the server starts so sync calls are fine.
const fs = require('fs')
const cssTemplate = fs.readFileSync('cows/.assets/index.css')
const htmlTemplate = fs.readFileSync('cows/.assets/index.html')
const cows = require('cows')()

module.exports = function (request, response) {
  const randomCowIndex = Math.floor(Math.random() * cows.length)
  const randomCow = cows[randomCowIndex]

  function randomColor () {
    const c = () => (Math.round(Math.random() * 63) + 191).toString(16)
    return `#${c()}${c()}${c()}`
  }

  function render (template) {
    return eval('`' + template + '`')
  }

  // We render the CSS template first…
  const css = render(cssTemplate)

  // … because the HTML template references the rendered CSS template.
  const html = render(htmlTemplate)

  response.html(html)
}

When you save this update, Site.js will automatically reload the server with your new code (version 12.9.7 onwards). When you refresh in your browser, you should see exactly the same behaviour as before.

As you can see, you can create quite a bit of dynamic functionality just by using DotJS with its most basic file-based routing mode. However, with this convention you are limited to GET routes. To use both GET and POST routes, you have to do a tiny bit more work, as explained in the next section.

GET and POST routes

If you need POST routes (e.g., you want to post form content back to the server) in addition to GET routes, the directory structure works a little differently. In this case, you have to create a .get directory for your GET routes and a .post directory for your POST routes.

Otherwise, the naming and directory structure conventions work exactly as before.

So, for example, if you have the following directory structure:

site/
  └ .dynamic/
        ├ .get/
        │   └ index.js
        └ .post/
            └ index.js

Then a GET request for https://localhost will be routed to site/.dynamic/.get/index.js and a POST request for https://localhost will be routed to site/.dynamic/.post/index.js.

These two routes are enough to cover your needs for dynamic routes and form handling.
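
As a minimal sketch of a form round-trip (assuming that, as with standard Express body parsing, submitted form fields are available on request.body; the message field name is made up for illustration):

// site/.dynamic/.get/index.js
// Serves a simple form via GET.
module.exports = (request, response) => {
  response.html(`
    <form method='POST' action='/'>
      <input type='text' name='message'>
      <button type='submit'>Send</button>
    </form>
  `)
}

// site/.dynamic/.post/index.js
// Handles the form submission via POST.
module.exports = (request, response) => {
  response.html(`<h1>You sent: ${request.body.message}</h1>`)
}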

WebSocket (WSS) routes

Site.js is not limited to HTTPS; it also supports secure WebSockets.

To define WebSocket (WSS) routes alongside HTTPS routes, modify your directory structure so it resembles the one below:

site/
  └ .dynamic/
        ├ .https/
        │   ├ .get/
        │   │   └ index.js
        │   └ .post/
        │       └ index.js
        └ .wss/
            └ index.js

Note that all we’ve done is to move our HTTPS .get and .post directories under a .https directory and we’ve created a separate .wss directory for our WebSocket routes.

Here’s how you would implement a simple echo server that sends a copy of the message it receives from a client to that client:

module.exports = (client, request) => {
  client.on('message', (data) => {
    client.send(data)
  })
}

You can also broadcast messages to all or a subset of connected clients. Here, for example, is a naïve single-room chat server implementation that broadcasts messages to all connected WebSocket clients (including the client that originally sent the message and any other clients that might be connected to different WebSocket routes on the same server):

module.exports = function (currentClient, request) {
  currentClient.on('message', message => {
    this.getWss().clients.forEach(client => {
      client.send(message)
    })
  })
}

To test it out, run Site.js and then open up the JavaScript console in a couple of browser windows and enter the following code into them:

const socket = new WebSocket('wss://localhost/chat')
socket.onmessage = message => console.log(message.data)
// Once the connection is open, send a message (you can also call socket.send() manually later).
socket.onopen = () => socket.send('Hello!')

For a slightly more sophisticated example that doesn’t broadcast a client’s own messages to itself and selectively broadcasts to only the clients in the same “rooms”, see the Simple Chat example. And here’s a step-by-step tutorial that takes you through how to build it.

Here’s a simplified listing of the code for the server component of that example:

module.exports = function (client, request) {
  // A new client connection has been made.
  // Persist the client’s room based on the path in the request.
  client.room = this.setRoom(request)

  console.log(`New client connected to ${client.room}`)

  client.on('message', message => {
    // A new message has been received from a client.
    // Broadcast it to every other client in the same room.
    const numberOfRecipients = this.broadcast(client, message)

    console.log(`${client.room} message broadcast to ${numberOfRecipients} recipient${numberOfRecipients === 1 ? '' : 's'}.`)
  })
}

Custom Middleware

As of version 16.5.0, you can add any piece of standard Express middleware to your server by defining it as a module in a .middleware directory in your project.

For example, to have your server allow all cross-origin requests, define the following middleware in .middleware/allow-all-cors.js:

module.exports = (request, response, next) => {
  response.header('Access-Control-Allow-Origin', '*')
  response.header('Access-Control-Allow-Headers', 'Origin, X-Requested-With, Content-Type, Accept')
  next()
}

This gives you full flexibility in customising your server however you like.

Persisting data on the server with JavaScript Database (JSDB)

The chat examples so far have been ephemeral; the chat log is not stored anywhere. While that has its uses, it does mean, for example, that someone coming into a conversation after it has already started will not see what was said. You can easily implement that feature using the bundled JavaScript Database (JSDB).

JSDB is a transparent, in-memory, streaming write-on-update JavaScript database for Small Web applications that persists to a JavaScript transaction log.

What that means in practice is that it’s very simple to use and great for storing small pieces of data on the server. (Note that whenever possible, you should store data on the client not the server for privacy reasons.)

Your Site.js server has a global database called db that you can use from any route.

Here’s how you would persist the data in our simple chat example using JSDB:

// Ensure the messages table exists.
if (!db.messages) {
  db.messages = []
}

module.exports = function (client, request) {
  // A new client connection has been made.
  // Persist the client’s room based on the path in the request.
  client.room = this.setRoom(request)

  console.log(`New client connected to ${client.room}`)

  // Send new clients all existing messages.
  client.send(JSON.stringify(db.messages))

  client.on('message', message => {
    // Persist the message.
    db.messages.push(message)

    // A new message has been received from a client.
    // Broadcast it to every other client in the same room.
    const numberOfRecipients = this.broadcast(client, message)

    console.log(`${client.room} message broadcast to ${numberOfRecipients} recipient${numberOfRecipients === 1 ? '' : 's'}.`)
  })
}

Here’s a breakdown of the changes:

  1. You implement a global check that occurs when the module of your route is loaded to create the messages table (in this case, an array, although it can also be an object):

    if (!db.messages) {
      db.messages = []
    }
    
  2. When a new client joins, you serialise the messages array in JSON format and send it to that client.

    client.send(JSON.stringify(db.messages))
    
  3. When a message is sent by a client, you persist it in the messages table.

    db.messages.push(message)
    

If none of this feels like you’re using a database, that’s by design. JSDB is in-process, in-memory, and JavaScript through and through. It uses proxies to make it feel like you’re just working with plain old JavaScript objects. It even persists the data as JavaScript code (not JSON) in a format called JavaScript Data Format (JSDF).

And you’re not limited to only persisting and loading data, you can also query it. You do so using the JavaScript Query Language (JSQL).

Just like the other aspects of JSDB, JSQL is designed for ease of use. For most regular use, it should feel like you’re asking a question in plain English.

For example, if you wanted to get all the messages sent by the person whose nickname is Aral, you would write the following:

db.messages.where('nickname').is('Aral').get()

The result would be an array of messages.

Similarly, if you wanted just the first message that contained the word kitten, you would write:

db.messages.where('text').includes('kitten').getFirst()

As of version 16.6.0, Site.js stores its JSDB tables with the .cjs extension instead of .js for better compatibility when used in mixed CommonJS/ESM projects.

You can learn more about JSDB in the JSDB documentation.

Advanced routing (routes.js file)

DotJS should get you pretty far for simpler use cases, but if you need full flexibility in routing (to use regular expressions in defining route paths, for example, or for initialising global objects that need to survive for the lifetime of the server), simply define a routes.js in your .dynamic folder:

site/
  └ .dynamic/
        └ routes.js

The routes.js file should export a function that accepts a reference to the Express app created by Site.js and defines its routes on it. For example:

module.exports = app => {
  // HTTPS route with a parameter called thing.
  app.get('/hello/:thing', (request, response) => {
    response.html(`<h1>Hello, ${request.params.thing}!</h1>`)
  })

  // WebSocket route: echoes messages back to the client that sent them.
  app.ws('/echo', (client, request) => {
    client.on('message', (data) => {
      client.send(data)
    })
  })
}

When using the routes.js file, you can use all of the features in express and our fork of express-ws (which itself wraps ws).
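
For example, Express lets you define route paths using regular expressions, which is one of the main reasons to reach for a routes.js file instead of DotJS. A small sketch (the route and response text are illustrative):

module.exports = app => {
  // Matches any path that ends in “fly” (e.g., /butterfly, /dragonfly).
  app.get(/.*fly$/, (request, response) => {
    response.html('<p>You found a fly route!</p>')
  })
}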