The latest posts from Chargen.One.

from steve

An Amiga Workbench desktop with this article being written in a text editor

Having decided that I'm unlikely to get the Aston Martin DB5 in Gunmetal grey I wanted as a kid, I got the next best thing and bought an Amiga 4000. Normally when I tell people this, I get one of two reactions:

  • You jammy, jammy sod
  • You bought a what?

For those who've never experienced the Amiga first-hand, number 2 is understandable. Most people who have are in the first camp.

For those in the 2nd camp, the Amiga 4000 is the final set of models in the classic Amiga series. This is the Amiga equivalent of a Ferrari F40: sleek, spectacular, crazy expensive to run for what it is, and entirely impractical. There are some mildly insane design choices and bugs, and because Commodore cheaped out at the last minute, several pretty fatal things can happen to it if it isn't taken care of exceptionally well over its lifetime.

Owning an Amiga 4000 is not like owning an Amiga 1200 or 500. This isn't a machine built for gaming. After all, I can game just fine with near perfect emulation thanks to WHDLoad, and a Raspberry Pi is pretty much the fastest gaming Amiga you can get.

I'm using the Amiga 4000 for productivity, mostly creative. Yes, you read that right. No, I'm not insane. It's 2019, and I've bought an Amiga to use for actual day to day creative things. As shown in the screenshot above, this article was even written on the Amiga.

To be creative I need to move files back and forth. The Amiga 4000 is the only regularly used device I have with a floppy drive, so that's out as a medium.

Thankfully the Amiga 4000 has a DVD-rewriter, and transferring files over DVD/CD works well. The Amiga uses the Joliet filesystem rather than UDF, and has some slight preferences for odd CD writing configurations. On the whole, it works.

Burning CDs for small amounts of data gets old after a while though, and I'd prefer some sort of network connectivity, at least till the MNT ZZ9000 comes online. The easiest and cheapest way to do this is with an X-Surf 100, an Ethernet card available for around 100 Euro. As I won't have a use for the X-Surf after my ZZ9000 arrives, I'm trying a serial link to a Raspberry Pi instead. Here's how it's set up.


First you'll need some hardware. Some of this you can build yourself or you could buy the parts and salvage things lying around like I did. You will need:

  • A Raspberry Pi, power cable, Micro SD card, Raspbian etc.
  • An Amiga with a 25-pin serial port
  • A 9-pin to 25-pin serial adapter. I used this one from Amigakit
  • A USB-Serial cable

Stage 1, Basic Connectivity

To start, connect the Raspberry Pi to the USB cable, the USB cable to the 25 pin adapter, and the 25 pin adapter to the Amiga. Congrats, your Amiga is now physically linked to the Pi!

I'm using Term v4.8 from Aminet to get basic terminal emulation running. You'll want to configure serial settings as follows:

Amiga Term configured for 115200 xfer with 8/n1 and no flow control.

You'll be asked about enabling RTS/CTS; things seem to work fine with it switched off.

Note: Term needs paths defined (Settings -> Paths from the menu) or your files won't be saved. Also make sure you save your settings from the pulldown Settings menu.

On the Raspberry Pi you'll need to install screen via apt-get. In a console, enter the following:

screen /dev/ttyUSB0 115200

The device name might vary dependent upon the USB-Serial converter you're using or how you're connected. It could be /dev/ttyAMA0 or /dev/ttyACM0 in some cases. Check dmesg and the contents of /dev/ if you get stuck.
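If you're unsure which node appeared, a small sketch like this can probe the usual candidates. The device names are the common Linux ones mentioned above and are an assumption; check dmesg if none match.

```shell
# Probe the usual Linux serial device names and report the first one
# present; candidate names are assumptions for a typical Pi setup.
found=""
for dev in /dev/ttyUSB0 /dev/ttyACM0 /dev/ttyAMA0; do
    if [ -e "$dev" ]; then
        found="$dev"
        break
    fi
done
echo "serial device: ${found:-none found, check dmesg}"
```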

Hello From the Amiga on the PC

If you type in the screen window, you should see the text echoed in the Term session on the Amiga. If you type on the Amiga you should see text show up on the screen session on the Pi.

Hello from the PC on the Amiga

It'd be a little boring if this was all you could do. Let's transfer some files. I downloaded DeliTracker and the update to 2.34 onto the Raspberry Pi with wget. In the screen session, I pressed Ctrl-A and typed in the following:

: exec sz -b delitracker232.lha

The Term session should spring to life and start receiving a file, which will be saved in the path you specified earlier. Extract delitracker, run the installer and repeat with the update file. Of course, now you have a mod music player, it's only fair that you should go to UnExoticA and get some tunes to play.
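One assumption worth checking first: sz and rz come from the lrzsz package, which Raspbian doesn't always ship by default. A quick guard before attempting a transfer:

```shell
# Check that the ZMODEM tools used above are actually installed;
# on Raspbian/Debian they are provided by the lrzsz package.
if command -v sz >/dev/null 2>&1 && command -v rz >/dev/null 2>&1; then
    zmodem_status="installed"
else
    zmodem_status="missing - try: sudo apt-get install lrzsz"
fi
echo "lrzsz: $zmodem_status"
```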

Sending files back

Sending files back to the Raspberry Pi is pretty easy; getting screen to receive them is only slightly more involved. Drag and drop the file you want to send onto Term's “term Upload queue” icon. On the Raspberry Pi's screen session, press Ctrl-A and enter : exec !! rz.

Transfer config on the Amiga side

The Amiga's term window will ask you about the file you're about to send. Set it to binary transfer and it'll land in the directory where you originally launched screen.

PC Receiving a file

Going further

You could run a full login terminal on the Raspberry Pi over serial and use that to log into the Pi via Term. While it's certainly cute, it reduces a very expensive Amiga 4000 to a dumb terminal. Instead I plan on using the Pi as a support system for the Amiga, where it does things that are menial, boring or just too slow for the Amiga to take care of. The next thing for me to do is to get TCP/IP networking via PPP, which I'll cover in another post.

In the meantime, here's the Amiga in its home, on the right of this picture.

My Battlestation setup, with the Amiga on the right


from h3artbl33d

You might have noticed that if you run NextCloud on OpenBSD with the chroot option enabled in php-fpm.conf, the occ command and cronjob fail miserably. That can be fixed!

You might have stumbled upon the following error if you have tried to run the occ command or that the cronjob fails:

Your data directory is invalid
Ensure there is a file called ".ocdata" in the root of the data directory.

Cannot create "data" directory
This can usually be fixed by giving the webserver write access to the root directory. See

That is due to the chroot option in php-fpm.conf. Both the occ command and the cronjob use the cli interpreter, rather than fpm. Disabling that feels like giving a piece of your sanity to the devil. So, let's fix that! Fire up your favorite editor and open config/config.php in the NextCloud docroot. You specifically want to edit the datadirectory variable:

'datadirectory' => '/ncdata',

Change this one to:

'datadirectory' => ((php_sapi_name() == 'cli') ? '/var/www' : '') . '/ncdata',

...and it's fixed! Basically, what this does is prepend /var/www if a NextCloud function is called from the command line.
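The reason this works is that PHP reports which SAPI is running: occ and the cronjob run under cli, while web requests run under fpm. You can see this for yourself (the check is guarded in case the php binary isn't on your path):

```shell
# php_sapi_name() returns "cli" when PHP is invoked from the command
# line, which is exactly what the config.php conditional keys off.
if command -v php >/dev/null 2>&1; then
    sapi=$(php -r 'echo php_sapi_name();')
else
    sapi="php binary not found"
fi
echo "SAPI: $sapi"
```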


from steve


In a previous article I wrote about how I've changed my relationship with my phone. One of the benefits of degoogling is that your device is a little less spied upon. The downside of running Lineage for microG is that certain functionality is a bit harder to come by.

I like minimal notifications, but there are things happening I want to know about. On iOS I used Prowl to tell me about reboots. As there's no F-Droid client, I found myself without an emergency notification system. I saw Gotify in the F-Droid app and thought I'd give it a go. So far, I'm pretty happy with it.

I recently rebuilt an old unused box to self-host low-priority services. I'm a big fan of self-hosting, having been burnt several times by online services. I'm not against online services making a living, but I'd rather own my stuff than rent.

The box was rebuilt to use docker and docker-compose. I find docker a double-edged sword: you either have to maintain your own docker repository or trust someone else's. This box only runs low-priority services, so I'm OK running images from other people's repositories.

Installing Gotify Server With Docker-Compose

I set up caddy as a front-end service to manage letsencrypt. I prefer nginx but for docker, Caddy's fine. I also use ouroboros to auto-update images when new ones come out. If I'm going to use other people's repos I may as well get some value out of it.

Creating a gotify docker-compose entry was easy. I've included ouroboros and the caddy frontend in mine below:

version: '3'
services:
  ouroboros:
    container_name: ouroboros
    hostname: ouroboros
    image: pyouroboros/ouroboros
    environment:
      - CLEANUP=true
      - INTERVAL=300
      - LOG_LEVEL=info
      - SELF_UPDATE=true
      - IGNORE=mongo influxdb postgres mariadb
      - TZ=Europe/London
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  caddy:
    container_name: caddy
    image: abiosoft/caddy:no-stats
    restart: unless-stopped
    volumes:
      - ./caddy/Caddyfile:/etc/Caddyfile
      - ./caddy/caddycerts:/etc/caddycerts
      - ./caddy/data:/data:ro
    ports:
      - "80:80"
      - "443:443"
    env_file:
      - ./caddy/caddy.env

  gotify:
    container_name: gotify
    image: gotify/server
    restart: unless-stopped
    volumes:
      - ./apps/gotify/data:/app/data

My caddy config needed an additional section for the new host:

{
  root /data
  log stdout
  errors stdout
  proxy / gotify:80 {
    websocket
  }
}

Hostnames have been changed to protect the innocent. When using caddy, specify websocket in the proxy section. The Android app uses websockets to handle notifications.

A quick docker-compose up -d and I was up and running. The default username and password is admin/admin. Change that first, then create a user account to receive notifications.

After creating the user account, log out of admin, and log back in as the new user. Notifications are per-application and per-user. You'll have to send notifications for each user. I hope group notifications will be possible at some point.

Gotify notifications

I added a cute puppy picture to my app, making unexpected reboots all the more cute. I then installed the Gotify app from F-Droid and added my server. I checked the app and server logs for HTTP 400 errors, which would stop notifications from working.

A Portable Commandline Notification Tool

I wrote a quick python-based tool to send notifications from the command line. You can use the official gotify client tool, or even curl. I wanted something portable that would work without 3rd-party libraries.

#!/usr/bin/env python
# - A python gotify client using only built-in modules
import json, urllib, urllib2, argparse
parser = argparse.ArgumentParser(description='gotify python client')
parser.add_argument('-p','--priority', help="priority number (higher's more intrusive)", type=int, required=True)
parser.add_argument('-t','--title', help="title notification", required=True)
parser.add_argument('-m','--message', help="message to display", required=True)
parser.add_argument('-v','--verbose', help="print response", action='store_true')
args = parser.parse_args()
url = ''
data = urllib.urlencode({"message": args.message, 
			"priority": args.priority,
			"title": args.title})
req = urllib2.Request(url, data)
resp = urllib2.urlopen(req)
if args.verbose:
	print resp.read()

If you use the script, don't forget to change the token value in the url variable to one for your app.
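As mentioned above, curl works too. Here's a sketch of the same notification via curl; the hostname and token below are placeholders you'd need to replace with your own server URL and application token.

```shell
# Placeholder URL and token: substitute your own Gotify server and
# application token before use.
url="https://gotify.example.org/message?token=CHANGEME"
# Gotify's /message endpoint accepts form fields for title, message
# and priority; the request will fail against the placeholder host.
curl -s -X POST "$url" \
    -F "title=test" \
    -F "message=hello from curl" \
    -F "priority=8" || echo "request failed (placeholder URL)"
```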

The final thing to do is to set up a reboot notification for the box. On OpenBSD we can do this with a cron job. I've copied the script into /usr/local/bin and set up a cron job as a normal user to run on reboot:

@reboot python2 /usr/local/bin/ -p 8 -t "" -m "Rebooted at `date`"

Now if we reboot the system, we can check that it's working by looking in /var/cron/log:

Apr 6 16:11:10 chargen cron[41858]: (asdf) CMD (python2 /usr/local/bin/ -p 8 -t "" -m "Rebooted at `date`")

Please note that some OSes only run @reboot jobs for root. If you're having trouble, check your cron daemon supports non-root @reboot jobs.

If you're wondering what else I plan to use this for, it's not really much. I like only having serious event notifications and want to keep it minimal. Some of the things I'll use this for include:

  • Reboot notifications across servers
  • New device connected to home networks
  • Motioneye detected movement in the conservatory

For pretty much everything else, there's email and I can pick that up in slow time.


from V6Shell (Jeff)

the latest release as of Thursday, 2019/03/28 =^)

#original #UNIX command interpreter ( #cli aka shell )

Links to all immediately relevant files for this release are available via the primary Sources page at . It has everything a person might need/want to get started with etsh-5.4.0; other useful files include:

... There is a screenshot below this paragraph (see caption); the shells/etsh OpenBSD package/port installs using PREFIX=/usr/local and SYSCONFDIR=/etc by default.

pev-example.png caption: etsh-5.4.0 running as an interactive login shell and executing the pev script for fun

Enjoy! =)

Jeff ( for short )


from High5!

Recently we wrote a post on Moving back to Lighttpd and Michael Dexter thought I could spend my time wisely and do a short write-up on our use of dehydrated with Lighttpd.

In order to start with dehydrated we of course need to install it:

# pkg install dehydrated

Once it's all installed you can find the dehydrated configuration in /usr/local/etc/dehydrated.

The hosts and domains you want to get certificates for need to be added to domains.txt, one certificate per line, with any extra names listed after the first. A hypothetical example:

example.org www.example.org

The first host/domain listed will be used as the filename under which the keys and certificates are stored. There are a number of examples in the file itself if you want to get funky.


If you want to restart services or do anything special, for example when new certificates are generated, there is a hook script file. This script allows you to hook into any part of the process and run commands during that part of the process.

The hook we are using is deploy_cert(). We are going to use this hook for:

  • creating a combined PEM certificate for Lighttpd
  • changing the owner to www
  • restarting Lighttpd

What that looks like is something like this:

deploy_cert() {
    cat "${KEYFILE}" "${CERTFILE}" > "${BASEDIR}/certs/${DOMAIN}/combined.pem"
    chown -R www "${KEYFILE}" "${FULLCHAINFILE}" "${BASEDIR}/certs/${DOMAIN}/combined.pem"
    service lighttpd restart
}

The last part that is needed is to make sure this is run every day with cron.

@daily  root /usr/local/bin/dehydrated -c

In most cases this will be all that is needed to get going with dehydrated.


You will need to let Lighttpd know about dehydrated and point it to the acme-challenge directory under .well-known. You can do this with an alias like:

alias.url += ("/.well-known/acme-challenge/" => "/usr/local/www/dehydrated/")

The Lighttpd config we are using for SSL/TLS is the following:

$SERVER["socket"] == ":443" {
  ssl.engine = "enable"
  ssl.pemfile = "/usr/local/etc/dehydrated/certs/"
  "" = "/usr/local/etc/dehydrated/certs/"
  ssl.dh-file = "/usr/local/etc/ssl/dhparam.pem"
  "" = "secp384r1"
  setenv.add-response-header = (
    "Strict-Transport-Security" => "max-age=31536000; includeSubdomains",
    "X-Frame-Options" => "SAMEORIGIN",
    "X-XSS-Protection" => "1; mode=block",
    "X-Content-Type-Options" => "nosniff",
    "Referrer-Policy" => "no-referrer",
    "Feature-Policy" => "geolocation none; midi none; notifications none; push none; sync-xhr none; microphone none; camera none; magnetometer none; gyroscope none; speaker none; vibrate none; fullscreen self; payment none; usb none;"
  )
}
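The config references a dhparam file via ssl.dh-file. If you don't already have one, openssl can generate it. The sketch below writes to a temporary file with a small key size so it finishes quickly; for real use you'd want the path from the config and 2048 bits or more, as noted in the comment.

```shell
# Generate Diffie-Hellman parameters for Lighttpd's ssl.dh-file.
# For production, the equivalent would be:
#   openssl dhparam -out /usr/local/etc/ssl/dhparam.pem 2048
tmpfile=$(mktemp)
openssl dhparam -out "$tmpfile" 512 2>/dev/null || echo "fallback" > "$tmpfile"
wc -c "$tmpfile"
```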

To finish it all off you can now run dehydrated, which in most cases would be:

# dehydrated -c

The complete Lighttpd config can be found in our Git Repository.


from OpenBSD Amsterdam

The post written about rdist(1) on sparked us to write one as well. It's a great, underappreciated tool, and we wanted to show how we wrapped doas(1) around it.

There are two services in our infrastructure for which we were looking to keep the configuration in sync and to reload the process when the configuration had indeed changed. There is a pair of nsd(8)/unbound(8) hosts and a pair of hosts running relayd(8)/httpd(8) with carp(4) between them.

We didn't have a requirement to go full configuration management with tools like Ansible or Salt Stack. And there wasn't any interest in building additional logic on top of rsync or repositories.

Enter rdist(1): rdist is a program to maintain identical copies of files over multiple hosts. It preserves the owner, group, mode, and mtime of files if possible and can update programs that are executing.

The only tricky part with rdist(1) is that copying files and restarting services owned by a privileged user has to be done by root. Our solution to the problem was to wrap doas(1) around rdist(1).

We decided to create a separate user account for rdist(1) to operate with on the destination host, for example:

ns2# useradd -m rupdate

Create an ssh key on the source host where you want to copy from:

ns1# ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_rdist

Copy the public key to the destination host for the rupdate user in .ssh/authorized_keys.

In order to wrap doas(1) around rdistd(1) we have to rename the original file. It's the only way we were able to do this.

Move rdistd to rdistd-orig on the destination host:

ns2# mv /usr/bin/rdistd /usr/bin/rdistd-orig

Create a new shell script rdistd with the following:

#!/bin/sh
/usr/bin/doas /usr/bin/rdistd-orig -S

Make it executable:

ns2# chmod 555 /usr/bin/rdistd

Add rupdate to doas.conf(5) like:

permit nopass rupdate as root cmd /usr/bin/rdistd
permit nopass rupdate as root cmd /usr/bin/rdistd-orig
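On OpenBSD, doas(1) can dry-run rule matching with -C, which is a handy way to confirm the rules above match before relying on the wrapper (run it as the rupdate user; the output is one of permit, permit nopass or deny). The sketch is guarded so it degrades gracefully where doas isn't installed.

```shell
# Ask doas(1) which rule would match the wrapped rdistd command.
if command -v doas >/dev/null 2>&1; then
    match=$(doas -C /etc/doas.conf /usr/bin/rdistd-orig -S 2>/dev/null \
        || echo "no matching rule")
else
    match="doas not available on this system"
fi
echo "doas says: $match"
```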

Once that is all done we can create the files needed for rdist(1).

To copy the nsd(8) and unbound(8) configuration we created a distfile like:

HOSTS = ( )

FILES = ( /var/nsd )

EXCL = ( nsd.conf *.key *.pem )

${FILES} -> ${HOSTS}
	install ;
	except /var/nsd/db ;
	except /var/nsd/etc/${EXCL} ;
	except /var/nsd/run ;
	special "logger rdist update: $REMFILE" ;
	cmdspecial "rcctl reload nsd" ;

/var/unbound/etc/unbound.conf -> ${HOSTS}
	install ;
	special "logger rdist update: $REMFILE" ;
	cmdspecial "rcctl reload unbound" ;

The distfile describes the destination HOSTS, the FILES which need to be copied, and the paths to be EXCLuded. When it runs it will copy the selected FILES to the destination HOSTS, except for the directories listed.

The install command is used to copy out-of-date files and/or directories.

The except command is used to update all of the files in the source list except for the files listed in name list.

The special command is used to specify sh(1) commands that are to be executed on the remote host after the file in name list is updated or installed.

The cmdspecial command is similar to the special command, except it is executed only when the entire command is completed instead of after each file is updated.

In our case the unbound(8) config doesn't change very often, so we used a label to update it only when needed:

ns1# rdist unbound

To keep our relayd(8)/httpd(8) in sync we did something like:

HOSTS = ( )

FILES = ( /etc/acme /etc/ssl /etc/httpd.conf /etc/relayd.conf /etc/acme-client.conf )

${FILES} -> ${HOSTS}
	install ;
	special "logger rdist update: $REMFILE" ;
	cmdspecial "rcctl restart relayd httpd" ;

If you want cron(8) to pick this up via the daily(8) system script you can save the file as /etc/Distfile.

To make sure the correct username and key are used you can add this to your .ssh/config file:

	User rupdate
	IdentityFile ~/.ssh/id_ed25519_rdist

If you don't store the distfile in /etc, you can add the following to your .profile:

alias rdist='rdist -f ~/distfile'

Running rdist will result in the following type of logging on the destination host:

==> /var/log/daemon <==
Nov 13 09:59:15 name2 rdistd-orig[763]: ns2: startup for

==> /var/log/messages <==
Nov 13 09:59:15 ns2 rupdate: rdist update: /var/nsd/zones/reverse/

==> /var/log/daemon <==
Nov 13 09:59:16 ns2 nsd[164]: zone read with success                     

You can follow us on Twitter and Mastodon.