Using an Apple Keyboard with Xubuntu

7 August 2019

This is how I got my external Apple Keyboard (pictured) to work with Xubuntu in a similar way to how it works with OSX on my MacBook at work. It's not perfect, but it avoids a lot of the frustration of muscle-memory OSX keyboard shortcuts misfiring on Linux. I am using a British-layout keyboard, so these changes might need adjusting depending on your locale.

Apple keyboard - used under CC-BY-SA from https://commons.wikimedia.org/wiki/File:Apple_Keyboard_with_Numeric_Keyboard_9612.jpg

Key re-mapping

I used XKB to change the following mappings, with the physical key on the left and the key the OS perceives on the right:

  • Swap Cmd <-> Left Ctrl
  • Left Ctrl -> Super/Menu/“Windows” key
  • Caps Lock -> Left Ctrl (I find this much more comfortable on a Mac keyboard, though since we have swapped Cmd and Ctrl anyway it is less useful here).
  • Left Alt+3 -> Hash (#)
  • Swap ± and ` (the default keymap appeared to have them the wrong way around)

I followed a useful guide to XKB to learn how to make the necessary modifications.

I made the following change to /usr/share/X11/xkb/symbols/gb:

diff -ur usr/share/X11/xkb/symbols/gb /usr/share/X11/xkb/symbols/gb
--- usr/share/X11/xkb/symbols/gb	2018-10-25 12:10:20.000000000 +0100
+++ /usr/share/X11/xkb/symbols/gb	2019-05-21 22:31:21.540369459 +0100
@@ -167,10 +167,10 @@
 
     key <AE02> {	[               2,              at,         EuroSign	]	};
     key <AE03> {	[               3,        sterling,       numbersign	]	};
-    key <TLDE> {	[         section,       plusminus ]	};
-    key <LSGT> {	[           grave,      asciitilde ]	};
+    key <LSGT> {	[         section,       plusminus ]	};
+    key <TLDE> {	[           grave,      asciitilde ]	};
 
-    include "level3(ralt_switch)"
+    include "level3(lalt_switch)"
     include "level3(enter_switch)"
 };
 

NB: Changing files in /usr/share is not generally encouraged (your changes will affect other users on the system and can be overwritten by software upgrades) but I found this to be the most expedient solution at the time. Make a backup of /usr/share/X11/xkb/symbols/gb first by running:

cp /usr/share/X11/xkb/symbols/gb{,.bak}

I then edited /etc/default/keyboard to contain the following:

# Only XKBVARIANT and XKBOPTIONS needed to be changed
XKBMODEL="pc105"
XKBLAYOUT="gb"
XKBVARIANT="mac"
XKBOPTIONS="ctrl:swap_lwin_lctl,ctrl:nocaps"

BACKSPACE="guess"
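
To apply these settings you can log out and back in, or test them immediately with setxkbmap (a sketch matching the options above; at boot, /etc/default/keyboard takes over):

setxkbmap -model pc105 -layout gb -variant mac -option ctrl:swap_lwin_lctl,ctrl:nocaps

Running sudo dpkg-reconfigure keyboard-configuration afterwards should also propagate the settings system-wide.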

Window Switching

Open the Xfce Settings manager -> Window Manager -> Keyboard:

  • Switch window for same application: Ctrl + ` (reality is Cmd + `)
  • Cycle windows: Ctrl + Tab (reality is Cmd + Tab)
  • Cycle windows (reverse): Ctrl + Shift + Tab (reality is Cmd + Shift + Tab)

Spotlight

Open the “Keyboard -> Application Shortcuts” settings menu in Xfce. Set xfce4-popup-whiskermenu to Ctrl + space (on your keyboard this will physically be Cmd + space).

Screenshots

I commonly take screenshots of an area of the screen with Cmd + Ctrl + Shift + 4 on OSX. You can achieve similar functionality by adding an Application Shortcut (as in the last step) in Xfce for xfce4-screenshooter -r -c bound to Ctrl + Shift + Super + 4.
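
If you prefer the command line, the same shortcut can be created with xfconf-query (a sketch; the property path follows Xfce's convention for custom command shortcuts, where <Primary> means Ctrl):

xfconf-query -c xfce4-keyboard-shortcuts \
  -p "/commands/custom/<Primary><Shift><Super>4" \
  -n -t string -s "xfce4-screenshooter -r -c"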

Changing fn key mode

I prefer to swap the fn key mode so that F keys do not activate their media functions unless the fn key is depressed.

This can be done by editing /etc/modprobe.d/hid_apple.conf to contain the following:

options hid_apple fnmode=2

Reboot for the change to take effect.
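
To try the setting without a reboot, you should be able to write the module parameter directly (a sketch, assuming the hid_apple module is loaded; you may need to unplug and reconnect the keyboard for it to apply):

echo 2 | sudo tee /sys/module/hid_apple/parameters/fnmode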

Changing mouse scroll speed

This is not strictly keyboard-related, but I found that the default mouse scroll rate was much slower on Linux than on OSX. I changed it using instructions from the Unix Stack Exchange.
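
For reference, a common approach (and close to what I ended up with) uses imwheel, which multiplies each wheel notch into several scroll events. A minimal sketch of ~/.imwheelrc:

# Scroll 3 lines per wheel notch in all windows
".*"
None, Up,   Button4, 3
None, Down, Button5, 3

Then run imwheel -b "4 5" so that only the scroll-wheel buttons are intercepted.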

Leftovers

Other things I’d like to do if I were to refine this setup:

  • Make F13 to F19 usable
  • Mimic the behaviour of the excellent SizeUp for OSX. I believe some of this is already possible in Xfce; however, I found problematic conflicts with other applications over my chosen shortcuts of:
    • Cmd+Alt+left arrow -> window to left of screen
    • Cmd+Alt+right arrow -> window to right of screen
    • Cmd+Alt+m -> maximise
  • Get my common VSCode motion shortcuts working (should be possible in VSCode settings; a sketch follows this list):
    • Cmd + up - start of file
    • Cmd + down - end of file
    • Cmd + right - end
    • Cmd + left - home
    • Alt + right/left - left/right one word
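
For the VSCode item above, a keybindings.json sketch; after the remap, physical Cmd arrives in the OS as Ctrl, so these bind the perceived keys (the commands are standard VSCode cursor commands):

// keybindings.json
[
  { "key": "ctrl+up",    "command": "cursorTop",           "when": "editorTextFocus" },
  { "key": "ctrl+down",  "command": "cursorBottom",        "when": "editorTextFocus" },
  { "key": "ctrl+right", "command": "cursorEnd",           "when": "editorTextFocus" },
  { "key": "ctrl+left",  "command": "cursorHome",          "when": "editorTextFocus" },
  { "key": "alt+right",  "command": "cursorWordEndRight",  "when": "editorTextFocus" },
  { "key": "alt+left",   "command": "cursorWordStartLeft", "when": "editorTextFocus" }
]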


Resizing a whole directory of images recursively on OSX

1 August 2019

This is a quick script to resize all JPEGs in a folder recursively and output them to another folder. On subsequent runs it only resizes new images, to save time. The example settings cap the width at 1280px (images are only ever shrunk, not enlarged) and the file size at roughly 200KB.

  1. Install Homebrew, then the dependencies:

brew install imagemagick jpegoptim

  2. Download or copy and paste the following script as resize-pics.sh:

    #!/bin/bash
    
    set -o nounset
    set -o errexit
    
    # Resolve the directory containing this script
    DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
    
    cd "$DIR"
    
    export SOURCE_DIR="Pics"
    export TARGET_DIR="Pics Resized"
    
    mkdir -p "$TARGET_DIR"
    
    function resizeimage {
    
        # Strip the leading source directory from the found path
        IMAGE_PATH="${1#*/}"
        OUTPUT_PATH="${TARGET_DIR}/${IMAGE_PATH}"
    
        # Recreate any subdirectory structure under the target directory
        if [[ "$OUTPUT_PATH" =~ '/' ]]
        then
            mkdir -p "$(dirname "$OUTPUT_PATH")"
        fi
    
        # Skip images that were already resized on a previous run
        if [ ! -e "${OUTPUT_PATH}" ]
        then
            echo "Resizing ${IMAGE_PATH}"
            # Shrink to max width 1280px; the '>' flag means never enlarge
            convert -resize '1280x>' "${SOURCE_DIR}/${IMAGE_PATH}" "$OUTPUT_PATH"
            # Then optimise towards a target file size of 200KB
            jpegoptim -S200 "$OUTPUT_PATH"
        fi
    }
    
    export -f resizeimage
    
    find "$SOURCE_DIR" -type f \( -iname '*.jpg' -o -iname '*.jpeg' \) -exec bash -c 'resizeimage "$@"' bash {} \;
    
    read -rp "Done. Press Enter to exit"

  3. Customize SOURCE_DIR (default Pics) and TARGET_DIR (default Pics Resized) in the script.

  4. Make the script executable:

chmod u+x resize-pics.sh
  5. Either run the script from Terminal as ./resize-pics.sh, or set it up to be double-clickable in Finder (see https://stackoverflow.com/questions/5125907/how-to-run-a-shell-script-in-os-x-by-double-clicking).


Debugging HTTP 502 errors on Google Kubernetes Engine

27 January 2019

This is a walkthrough of how I debugged and fixed intermittent HTTP 502 (Bad Gateway) errors on Google Kubernetes Engine (GKE).

Infrastructure Setup

  • GKE Kubernetes cluster
    • 2 Nodes
    • 1 Deployment, scaled to two pods. Each pod running a single Node.js-based HTTP server application
    • 1 Service
    • 1 GCE Ingress. It manages Google Cloud HTTP Load Balancers via the Kubernetes Ingress API. I was using a Network Endpoint Group (NEG) as a backend, which allows pods to be connected to from the load balancer directly.
The vanilla HTTP Load Balancer architecture. In my setup, NEGs replace Instance Groups.
NEGs with containers.

Application Server Setup

Requests resulted in HTTP 502s seemingly at random. Running the load test suite was sufficient to reproduce the issue almost every time.

The HTTP Load Balancing docs have information about timeouts and retries. The load balancer keeps idle TCP connections open for up to 10 minutes, so the application server's idle timeout must be longer than this to avoid race conditions. My initial Node.js code to do this was as follows, but it did not resolve the issue.

// https://nodejs.org/api/http.html#http_event_connection
server.on('connection', function(socket) {
  // Set the socket timeout to 620 seconds
  socket.setTimeout(620e3);
});

Checking for Known Issues

There was an open issue on the GCE Ingress GitHub with several ideas.

Some suggestions related to switching from externalTrafficPolicy: Cluster (the default for Services) to externalTrafficPolicy: Local. By default, the GCE ingress creates an Instance Group targeting all nodes in the cluster, so any node not running a pod of the target Service must forward traffic to another node that is. Using Network Endpoint Groups avoids this situation, as the pods are targeted directly.

There were also suggestions that nodes might be being terminated while receiving traffic (common if using pre-emptible machine types). That was not the issue in my case.

Checking the Logs

Stackdriver Logging creates logs for much of Google Cloud Platform by default, including HTTP Load Balancers:

resource.type="http_load_balancer"
httpRequest.status=502

The jsonPayload.statusDetails field had the value backend_connection_closed_before_data_sent_to_client in all cases, indicating that the backend (my application) had closed the connection unexpectedly.

This was puzzling, since I had set the socket timeout in my application code. I opened a telnet session from within the container to the server without sending any data, and the connection was indeed closed after 620 seconds, indicating that the socket timeout was set correctly.

The Server’s View

To see what was happening to these failed requests from the server’s view, I installed tshark (the CLI companion to Wireshark). I scaled down the deployment to a single pod, monitored the network traffic during a load-test run, and saved the output to a pcap file. kubectl cp makes it blessedly easy to download files from Kubernetes containers. I then opened the pcap file locally in the Wireshark GUI.
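
The commands were along these lines (pod name and interface are illustrative):

# Capture traffic inside the pod; tshark must be installed in the container
kubectl exec -it my-app-pod -- tshark -i eth0 -w /tmp/lb.pcap

# Copy the capture out of the pod for analysis in the Wireshark GUI
kubectl cp my-app-pod:/tmp/lb.pcap ./lb.pcap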

Looking for HTTP 502 errors in the trace would not be fruitful, because these errors were being sent by the load balancer, not the server. I tagged each request with a random X-My-Uuid header, and logged failed UUIDs during the load-test run.

Using a failed UUID as a display filter in Wireshark let me track down one of the failed requests. I then filtered the trace to only show packets from the same TCP connection.
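
In Wireshark terms, that was roughly (placeholders in angle brackets):

# Find the failed request by its tagged header
http contains "X-My-Uuid: <failed uuid>"

# Then restrict the view to the TCP connection it belongs to
tcp.stream == <stream index of the matching packet>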

The Wireshark trace for the TCP connection containing the failed request. The second column is elapsed time in seconds. 130.211.0.143 is the load balancer; 10.56.5.6 is my server.

Two requests were served correctly in the space of 4 seconds. The failed request came 5 seconds later and resulted in a TCP RST from the server, closing the connection. This is the backend_connection_closed_before_data_sent_to_client seen in the Stackdriver logs.

Debugging the server and searching the Node.js source of the HTTP server module yielded the following likely-looking code in the ‘response finished’ callback:

// ...
} else if (state.outgoing.length === 0) {
  if (server.keepAliveTimeout && typeof socket.setTimeout === 'function') {
    socket.setTimeout(0);
    socket.setTimeout(server.keepAliveTimeout);
    state.keepAliveTimeoutSet = true;
  }
}
// ...

server.keepAliveTimeout (default 5 seconds) was replacing the socket timeout I had set in the connection event listener whenever a request was received! This default keep-alive timeout was apparently introduced in Node 8; previously there was no default timeout.

Setting the keep-alive timeout as follows resolved the issue:

server.keepAliveTimeout = 620e3;

I made a PR to the connection event listener docs to hopefully save someone some time in future.



Port Forwarding Behind a Carrier Grade NAT

6 May 2017

Hosting Internet-accessible services conventionally requires a public IP address. In many cases, consumer Internet subscriptions are provided with a dynamic (rather than static) IP address to alleviate IPv4 address exhaustion. The dynamic IP problem can be solved by using a dynamic DNS service such as No-IP, which gives you a fixed hostname for finding your server.

Most home Internet setups incorporate a router with NAT (Network Address Translation). This allows multiple devices to share a single public IP. It also means that for inbound connections, port forwarding is needed to link external ports to specific devices on the private network. However, some ISPs, including my own (Hyperoptic in the UK), implement Carrier Grade NAT (CGNAT). This means that multiple customers share a public IP address, and port forwarding is not possible. This is a major pain if you want to run public Internet services from home.

Fortunately there are workarounds. The easiest is to use a pre-packaged reverse-tunnelling solution such as Ngrok; there is a free version, but it has limitations such as a randomised hostname for your service, so I rolled my own system.

High Level Steps

  • Set up SSHD on your public server and allow TCP forwarding.
  • Set your home device up to connect persistently to the public server and allow remote tunnelling.

These steps are based on Ubuntu Server 16.10, so some steps may vary depending on your Linux distribution.

Setting up your public server

Edit /etc/ssh/sshd_config. Ensure that the line AllowTcpForwarding yes is present (if there is no mention of AllowTcpForwarding this is okay too as the default is allow). Also ensure that the line GatewayPorts clientspecified is present (otherwise the remote tunnel will only be accessible from localhost on the public server).
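
The relevant lines in /etc/ssh/sshd_config should end up looking like this (restart sshd afterwards, e.g. sudo service ssh restart):

AllowTcpForwarding yes
GatewayPorts clientspecified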

Create an SSH user and set up public key authentication. See a guide such as this one on DigitalOcean.

Ensure that if you have a firewall (including at service provider level, such as AWS Security Groups) the TCP port you want to access publicly is open.
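
For example, with ufw on the public server (using port 2048, as later in this guide):

sudo ufw allow 2048/tcp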

Setting up your home device

As an example, we’ll run SSHD on the home device, so you can SSH straight into the home device via the public IP.

Install SSHD on your home device (many guides online).

Connect to the public server with SSH and set up the remote tunnel:

ssh -nNTv -R 0.0.0.0:2048:localhost:22 server.example.com

(explanation)

You should now be able to run

ssh -p 2048 server.example.com

to SSH into your home server!

Making it resilient

SSH does not handle unreliable connections very well by default, so you can use autossh, which automatically restarts ssh if the connection to the external server fails.

Install autossh on the home device:

sudo apt-get update && sudo apt-get install autossh

Run autossh to connect to the public server:

autossh -M 0 -o ServerAliveInterval=30 -o ServerAliveCountMax=3 -nNTv -R 0.0.0.0:2048:localhost:22 server.example.com

(explainshell can’t do autossh arguments at the moment, but you can see the autossh man page).

Running autossh on startup

It’s useful to have autossh run on startup, so if your device restarts (as the Raspberry Pi can do often) the connection will be re-established. The steps here depend on whether you are using SysVinit or systemd. These steps work for SysVinit, which is what my Raspbian Wheezy installation uses (a systemd sketch appears at the end of this section).

Create a passwordless SSH key on the home device:

ssh-keygen -t rsa -b 4096 -f id-autossh-rsa -q -N ""

(explanation)

chmod 700 id-autossh-rsa

(make permissions strict enough for ssh to accept them)

Add the public key to the user’s authorized_keys file on the public server:

no-pty,no-X11-forwarding,permitopen="255.255.255.255:9",command="/bin/false" <contents of id-autossh-rsa.pub>

The sshd_config man page and sshd man page explain the options used. Essentially we only allow remote tunnels to be opened when using this key and disable running a useful shell. (Thanks to this article for the idea to disable the opening of local tunnels.)

Edit /etc/init.d/autossh and add the following, adjusting the TUNNEL_* and KEY_PATH variables to match your setup:

#! /bin/sh
# author: Andrew Moss
# date: 06/05/2017
# source: https://gist.github.com/Clement-TS/48ae8d23f6452cd1a3a071640c1bd07b
# source: https://gist.github.com/suma/8134207
# source: http://stackoverflow.com/questions/34094792/autossh-pid-is-not-equal-to-the-one-in-pidfile-when-using-start-stop-daemon

### BEGIN INIT INFO
# Provides:          autossh
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: autossh initscript
# Description:       establish a tunnelled connection for remote access
### END INIT INFO

. /etc/environment
. /lib/init/vars.sh
. /lib/lsb/init-functions

TUNNEL_HOST=server.example.com
TUNNEL_USER=andrew
TUNNEL_PORT=2048
KEY_PATH=/home/andrew/.ssh/id-autossh-rsa

NAME=autossh
DAEMON=/usr/lib/autossh/autossh
AUTOSSH_ARGS="-M 0 -f"
SSH_ARGS="-nNTv -o ServerAliveInterval=30 -o ServerAliveCountMax=3 -o IdentitiesOnly=yes -o StrictHostKeyChecking=no \
         -i $KEY_PATH -R 0.0.0.0:$TUNNEL_PORT:localhost:22 $TUNNEL_USER@$TUNNEL_HOST"

DESC="autossh for reverse ssh"
SCRIPTNAME=/etc/init.d/$NAME
DAEMON_ARGS=" $AUTOSSH_ARGS $SSH_ARGS"

# Export PID for autossh
AUTOSSH_PIDFILE=/var/run/$NAME.pid
export AUTOSSH_PIDFILE

do_start() {
    start-stop-daemon --start --background --name $NAME --exec $DAEMON --test > /dev/null || return 1
    start-stop-daemon --start --background --name $NAME --exec $DAEMON -- $DAEMON_ARGS    || return 2
}

do_stop() {
    start-stop-daemon --stop --name $NAME --retry=TERM/5/KILL/9 --pidfile $AUTOSSH_PIDFILE
    # Capture the exit status before any other command overwrites $?
    RETVAL="$?"
    rm -f "$AUTOSSH_PIDFILE"
    [ "$RETVAL" = 2 ] && return 2
    start-stop-daemon --stop --oknodo --retry=0/5/KILL/9 --exec $DAEMON
    [ "$?" = 2 ] && return 2
    return "$RETVAL"
}

case "$1" in
  start)
    log_daemon_msg "Starting $DESC" "$NAME"
    do_start
    case "$?" in
        0|1) log_end_msg 0 ;;
        2) log_end_msg 1 ;;
    esac
    ;;
  stop)
    log_daemon_msg "Stopping $DESC" "$NAME"
    do_stop
    case "$?" in
        0|1) log_end_msg 0 ;;
        2) log_end_msg 1 ;;
    esac
    ;;
  status)
    status_of_proc "$DAEMON" "$NAME" && exit 0 || exit $?
    ;;
  *)
    echo "Usage: $SCRIPTNAME {start|stop|status|restart}" >&2
    exit 3
    ;;
esac

Now run the following to have this run on startup, and also start it now:

sudo chmod +x /etc/init.d/autossh
sudo update-rc.d -f autossh defaults 90 90 > /dev/null 2>&1
sudo service autossh start

(Thanks to Clement-TS, whose init script this section derives from)
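
For comparison, on a systemd-based distribution a unit file along these lines should achieve the same (a sketch using the same values as the init script above):

# /etc/systemd/system/autossh-tunnel.service
[Unit]
Description=autossh reverse tunnel
After=network-online.target
Wants=network-online.target

[Service]
# Retry even if the very first connection attempt fails
Environment=AUTOSSH_GATETIME=0
ExecStart=/usr/lib/autossh/autossh -M 0 -nNTv \
    -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
    -o IdentitiesOnly=yes -i /home/andrew/.ssh/id-autossh-rsa \
    -R 0.0.0.0:2048:localhost:22 andrew@server.example.com
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

Enable and start it with sudo systemctl enable autossh-tunnel followed by sudo systemctl start autossh-tunnel.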

Conclusion

There are some limitations to the remote tunnelling approach:

  • Reverse tunnelling adds latency to your home services because all traffic needs to be routed through the public server. If you are running high-traffic services it may also cost you.
  • These solutions do not work for UDP traffic (I’m planning to do another article about UDP tunnelling over TCP).

However, it works pretty well with the robustness of autossh and is cost-efficient if you have a VPS or other server already running.



A Brief Overview of Container Orchestration

4 March 2017

This article aims to provide an overview of some of the problems encountered when running containers in production.

Containers isolate applications by providing separate user-spaces (rather than entirely separate operating system instances, as in full virtualisation). This can yield benefits in security, repeatability[1] and efficient resource utilisation.

I know about Docker, but what is an ‘orchestration system’, and why would I need one?

Orchestration systems help you run production services in containers as part of a cluster. They can be thought of as the next layer up in the operational stack from manual container usage.

Service Registration and Health Checks

A service might consist of one or more container definitions[2], which define the container images to be run as well as additional metadata such as CPU and memory limits and storage attachments.

Container orchestration systems allow registering containers as part of a service, which acts as a logical unit for autoscaling and load balancing; the orchestrator works to keep a desired number of containers running. The individual containers should be considered ephemeral (a good practice in general when running server applications[3]), as they can be terminated and replaced at any time. The adage that containers should be cattle, not pets encapsulates this philosophy.

Exposing an interface for the orchestration system to check your containers' health is crucial for many features to work effectively. A simple HTTP endpoint can be used to check if a container responds in a timely manner with a 200 OK, indicating it is able to service user requests.
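
As a concrete example, this is roughly how such a check is declared in Kubernetes (a sketch; the path and port are assumptions about your application):

# Fragment of a Kubernetes container spec
readinessProbe:
  httpGet:
    path: /healthz   # assumed health endpoint
    port: 8080       # assumed application port
  periodSeconds: 10
  failureThreshold: 3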

Service Discovery may also be integrated to allow your applications to find each other easily in the cluster without additional tooling.

Scheduling

Placement strategies allow schedulers to decide which servers[4] your containers will run on.

These can vary depending on the goals of your service. You may want to spread containers as diffusely as possible across the available server pool to minimise the impact of a crashed server. Or you might want to bin pack containers into as few servers as possible to reduce costs.

Deployments & Upgrades

Real applications need to be deployed more than once. Container orchestration systems often provide mechanisms for:

  • Automated blue/green[5] redeployments of services, integrating with health checks to verify that the new containers are working before terminating all the old ones.
  • Automatic restarting of crashed containers (if a whole server has crashed, for example, Docker’s built-in restart is not sufficient)
  • Connection draining from old containers to avoid interruptions to user sessions.
  • Rapid rollbacks if needed.

Auto Scaling

One of the big advantages of cloud computing is the ability to elastically adjust capacity based on demand, bringing cost savings in troughs and meeting demand at peak times. For container clusters, this involves adding or removing containers as well as the underlying servers which provide the resources.

Automatic scaling actions may be defined based on:

  • CPU/Memory Usage - what resources are the containers actually using?
  • CPU/Memory Reservation - what do the container definitions say that the containers need?
  • Time schedules - if your demand is predictable you can preemptively ‘warm up’ more containers to increase service capacity.

Grouping of Containers

It is often useful to group a set of containers with different definitions together to work as a whole, for example, having a web server container and a log drain container running side-by-side. A Kubernetes pod (services are collections of pods) and an Amazon ECS task definition can both group multiple container definitions.
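
As a sketch, a Kubernetes pod grouping the web server and log drain example looks like this (image names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-drain
spec:
  containers:
    - name: web
      image: nginx:1.11
    - name: log-drain
      image: example.com/log-drain:latest   # illustrative sidecar image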

Notes on Software and Providers

I wrote this article as part of research into available options and am not intimately familiar with all of these products. If you spot anything I’ve written which seems incorrect, please let me know. I have used ECS most heavily out of the following.

Each entry below lists notes, then the billing model, then related technology:

  • Kubernetes: At the heart of many other offerings; seems like a solid bet for portability, and is probably the most popular tool in its class. Billing: Open Source. Related: Google Borg.
  • Docker Swarm: Now part of Docker Engine as of 1.12. Billing: Open Source.
  • Google Container Engine: Hosted Kubernetes with additional integrations with Google Cloud. Billing: flat fee per cluster hour + compute. Related: Kubernetes.
  • Amazon ECS: Largely proprietary, heavily integrated with other AWS products (ALB, IAM, ASG). Billing: compute usage hours (EC2). Related: the host agent is open source (ecs-agent).
  • Microsoft Azure Container Service: Billing: compute usage hours. Related: Docker Swarm, DC/OS, or Kubernetes.
  • Apache Mesos: Not specific to containers; pitched as a ‘distributed systems kernel’ for co-ordinating compute resources generically. Billing: Open Source.
  • Marathon: Container orchestration built on Mesos.
  • Mesosphere: Makers of DC/OS (Data Center Operating System), which uses Mesos. Billing: Enterprise (support plans & deployment footprint based). Related: Apache Mesos.
  • Rancher: Open source with multiple base options; seems to bear some similarity to a self-hosted Azure Container Service. Billing: Open Source & premium support. Related: Kubernetes, Swarm, Mesos.

Conclusion

I’ve outlined some of the problems that this plethora of tools (many of which you may have heard of) are trying to solve. The feature sets are broadly similar across several of them, so I would simply advise reading the docs thoroughly and evaluating the risk of vendor lock-in when choosing how to invest your time.


  1. The full runtime environment of your application is defined in one place, rather than being an accumulation of scripting and manual changes to servers over time. ↩︎

  2. In Amazon ECS, these are called Task Definitions. ↩︎

  3. https://12factor.net/disposability ↩︎

  4. In Amazon ECS, these are called Container Instances. ↩︎

  5. https://martinfowler.com/bliki/BlueGreenDeployment.html ↩︎

  6. https://docs.docker.com/engine/reference/run/#/expose-incoming-ports ↩︎

