
VMware Clarity/Angular modal dialogs

The VMware Clarity Design System documentation is a bit vague about how modal dialogs should be handled. The examples presented are not really appropriate for real-world applications, where dialogs need to be reusable components, usually containing forms. So, here’s a more realistic example of how to use modal dialogs in Clarity applications.

The code is available as a stackblitz at https://stackblitz.com/github/rogerarmstrong/clarity-sample-modal.

What we want to achieve is a modal component which can be called from anywhere in the application and which takes a model object as input and returns a modified model as output (i.e. the dialog does nothing with the object except allow the user to edit it – the caller has control over what to do with the edited model object).

The application is simple – it displays a user’s details and an edit button which opens a dialog allowing the first and last name to be edited.

(Screenshots: click the button to show the dialog; edit the user details; the user details are updated.)

The application consists of two components: “home” and “edit-user-dialog”.

In the home component html, we insert the <edit-user-dialog> tag.
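A minimal sketch of that template (the onEditUser handler name is illustrative):

    <!-- home.component.html (sketch) -->
    <h3>{{ user.firstName }} {{ user.lastName }}</h3>
    <button class="btn btn-primary" (click)="onEditUser()">Edit User</button>

    <!-- the reusable dialog, referenced from the component via @ViewChild -->
    <edit-user-dialog></edit-user-dialog>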


In the home component code, we reference the edit-user-dialog via a @ViewChild. When the Edit User button is clicked, we invoke the open method on the modal, passing the user object we wish to edit. We then subscribe to the onOK event from the modal, which emits the modified user returned from the dialog.
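A sketch of the home component (the dialog class name and the user fields are illustrative; open and onOK are as described above):

    // home.component.ts (sketch)
    import { Component, ViewChild } from '@angular/core';
    import { take } from 'rxjs/operators';
    import { EditUserDialogComponent } from './edit-user-dialog.component';

    @Component({
      selector: 'app-home',
      templateUrl: './home.component.html'
    })
    export class HomeComponent {
      // reference to the dialog declared in the template
      @ViewChild(EditUserDialogComponent) editUserDialog: EditUserDialogComponent;

      user = { firstName: 'Jane', lastName: 'Doe' };

      onEditUser() {
        // hand the dialog a copy – the caller decides what to do with the edited result
        this.editUserDialog.open({ ...this.user });
        this.editUserDialog.onOK
          .pipe(take(1)) // one result per open()
          .subscribe(editedUser => this.user = editedUser);
      }
    }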

The dialog html is simple – a form allowing the properties of the user to be edited.
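A sketch using Clarity’s clr-modal (the field names match the user object above):

    <!-- edit-user-dialog.component.html (sketch) -->
    <clr-modal [(clrModalOpen)]="opened">
      <h3 class="modal-title">Edit User</h3>
      <div class="modal-body">
        <form>
          <label>First name</label>
          <input #firstNameInput type="text" name="firstName" [(ngModel)]="user.firstName" />
          <label>Last name</label>
          <input type="text" name="lastName" [(ngModel)]="user.lastName" />
        </form>
      </div>
      <div class="modal-footer">
        <button type="button" class="btn btn-outline" (click)="close()">Cancel</button>
        <button type="button" class="btn btn-primary" (click)="ok()">OK</button>
      </div>
    </clr-modal>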

The dialog component has an open and close method.
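A sketch of the dialog component (the firstNameInput reference and the any-typed user are illustrative):

    // edit-user-dialog.component.ts (sketch)
    import { Component, ElementRef, EventEmitter, Output, ViewChild } from '@angular/core';

    @Component({
      selector: 'edit-user-dialog',
      templateUrl: './edit-user-dialog.component.html'
    })
    export class EditUserDialogComponent {
      // emits the edited user back to whoever opened the dialog
      @Output() onOK = new EventEmitter<any>();
      @ViewChild('firstNameInput') firstNameInput: ElementRef;

      opened = false;
      user: any = {};

      open(user: any) {
        this.user = user;
        this.opened = true;
        // see note below: setTimeout lets the modal render before we set focus
        setTimeout(() => this.firstNameInput.nativeElement.focus());
      }

      close() {
        this.opened = false;
      }

      ok() {
        this.onOK.emit(this.user);
        this.close();
      }
    }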

Notes: setTimeout is needed to reliably set focus in a modal (over repeated opens) because Angular provides no “DOM ready” hook to tell us when focus can be set.

Let’s Encrypt in 15 minutes

I was looking for a simple way to use Let’s Encrypt to enable https for a web site and I found a Docker image, nmarus/docker-haproxy-certbot, which met my needs.

Remember, Let’s Encrypt represents a complete break from traditional certificate issuers in that:
(a) it’s free.
(b) certificate creation, installation and renewal are fully automated.

These are huge advantages over working with traditional certificate issuers, and anyone who deploys anything to the internet should immediately take advantage of them. Let’s Encrypt’s audacious goal is to improve the whole internet by getting everyone to use https.

Let’s Encrypt provides a “certbot” which handles the whole lifecycle of the certificates for you. There’s plenty of Let’s Encrypt documentation on how to install the certbot into popular web servers (like apache) or proxy servers (like HAProxy). However, what we are doing below is packaging a HAProxy instance with certbot installed as a Docker container, so you can simply put it in front of one or more web properties you want certificates for. That way, you don’t need to touch your existing server or proxy configuration to use Let’s Encrypt certificates.

In our case, our web site was already a Docker container, so I just had to modify the docker-compose file from:
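(A sketch – the service and image names are placeholders for our actual site.)

    # docker-compose.yml (before)
    version: '2'
    services:
      web-site:
        image: example/web-site
        ports:
          - "80:80"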

to:
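(Again a sketch – the container-side mount points are assumptions that should be checked against the image’s documentation.)

    # docker-compose.yml (after) – haproxy-certbot now owns ports 80/443
    version: '2'
    services:
      haproxy-certbot:
        image: nmarus/haproxy-certbot
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - ~/data/config:/config
          - ~/data/letsencrypt:/etc/letsencrypt
          - ~/data/certs:/usr/local/etc/haproxy/certs.d
        links:
          - web-site
      web-site:
        image: example/web-site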

Where haproxy-certbot is our certificate-issuing, SSL-terminating transparent proxy, which takes care of all certificate-related activities and then passes http requests on to our original service.

I first needed to create the three directories “~/data/config”, “~/data/letsencrypt” and “~/data/certs” on my docker host (which the haproxy-certbot container needs for persistent storage of its proxy configuration file and the certificates).

I then took the example haproxy.cfg file provided (see https://hub.docker.com/r/nmarus/haproxy-certbot), copied it to the ~/data/config directory and changed the backend “my_http_backend” to:
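(Roughly the following – the server line points at the linked web-site container on port 80.)

    backend my_http_backend
        mode http
        balance roundrobin
        option forwardfor
        server web-site web-site:80 check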

This means that the proxy server now forwards requests to port 80 (http) on the address “web-site”, which is the address of the web-site container, provided to the proxy container via the docker links instruction.

I brought both containers up with “docker-compose up -d”, and checked that my web-site was still available over http.

At this point, our new proxy is passing through http requests to the backend. But it is also ready to handle the requests which Let’s Encrypt will use to verify that you own the domain and issue you the requested certificate.

I then asked Let’s Encrypt to create a certificate with the command:
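(Something along these lines – the domain and email are placeholders, and the exact helper script name should be checked against the image’s README.)

    docker exec haproxy-certbot certbot-certonly \
        --domain www.example.com \
        --email admin@example.com \
        --dry-run

Run it once with --dry-run as a test, then repeat without it to request the real certificate.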

This caused Let’s Encrypt to verify that I really owned the domain by visiting the address I provided and checking that it reaches the container. Let’s Encrypt issued the certificate and the certificate was stored (in the ~/data/certs directory which I provided to the container).

I then refreshed the proxy with:
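(Assuming the refresh helper shipped with the image – again, the exact name is worth checking in its README.)

    docker exec haproxy-certbot haproxy-refresh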

And I was then immediately able to visit the website with https.

Let’s Encrypt certificates are short-lived (90 days), but the haproxy-certbot container automatically renews them for you before they expire.

On another server I had several different microservices for which I wanted https access, so I configured a second haproxy instance as the back-end rather than a web-site. That way, I had one proxy instance handling SSL termination/certificate administration and another routing requests to the various microservices (based on HAProxy host header rules).

The great thing about this approach is that you don’t have to mess around with your existing http services or proxies, instead you simply put this container in front of them.

Backing up ESXi VMs with ghettoVCB

We’ve been using the free ESXi ghettoVCB backup utility for the last 5 years to back up about 150 VMs daily without a glitch. ghettoVCB snapshots the VM, copies away the files (with a configurable retention period) and then removes the snapshot. The resulting backup is a point-in-time copy of the VM, which means that when you need it, you can run the backup copy directly with ESXi without having to restore it first. ghettoVCB is fast and reliable (it copies sparse disks correctly to an NFS backup share, so the resulting backup is as compact as the original VM disks).
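A typical invocation from the ESXi shell looks something like this (the VM list and configuration file names are ours, not defaults):

    # back up the VMs listed in vms_to_backup using the settings in ghettoVCB.conf
    ./ghettoVCB.sh -f vms_to_backup -g ghettoVCB.conf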

ghettoVCB has no deduplication capabilities, so it’s usually not appropriate for offsite backup of VMs. We use borg to archive the ghettoVCB backups offsite. This has the advantage that you get indefinite retention offsite (since borg does very efficient deduplication). You can also mount any borg backup (using its FUSE mounting), so you can run any backed up VM straight out of the archive.
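For example, an archive can be browsed in place via borg’s FUSE mount (the repository path and archive name are placeholders):

    # mount a single archive read-only, browse or start the VM from it, then unmount
    borg mount /backup/vm-repo::vm-backup-2018-01-07 /mnt/restore
    borg umount /mnt/restore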

ghettoVCB is available at https://github.com/lamw/ghettoVCB

Highly recommended!

Borg backup

Borg backup (https://github.com/borgbackup) is an open source backup tool which, in addition to the usual backup features like strong client-side encryption and compression, has several important characteristics which make it particularly suitable for handling large offsite backups (like virtual machine backups):

  1. Deduplication: this ensures that even if the source files move or change names, they will not be re-backed up unnecessarily.
  2. The backup can be moved. Borg backups are just directories – this means that you can make the first, full backup locally, copy it to the destination via a USB disk and then continue incremental backups over the network.

We are currently using borg for several offsite backups, including a weekly offsite backup of VMware ghettoVCB local backups (VM image clones), and the resulting backup traffic is less than 10% of the file size of the local backups (this obviously depends on how much change occurs between backups, but 90% deduplication is the average over the 15 VMs we back up offsite).

The steps to do an initial backup of the directory (data_dir) locally, move it to a remote server (server1) and resume incremental backups to that server (via ssh as user1@server1) are as follows:
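(A sketch – repository paths, archive names and host names are placeholders.)

    # 1. create the repository and take the first (full) backup locally
    borg init --encryption=repokey /backup/repo
    borg create --stats --compression lz4 /backup/repo::first-backup ~/data_dir

    # 2. move the whole repository directory to the remote server (USB disk, rsync, ...)
    rsync -a /backup/repo/ user1@server1:/backup/repo/

    # 3. subsequent incremental backups go straight to the remote repository over ssh
    borg create --stats --compression lz4 user1@server1:/backup/repo::{now} ~/data_dir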

Notes:
(1) borg should be installed on both the client machine and the server machine.
(2) When you move the backup (repository), the name of the directory you move it to does not need to match the original directory name.

Thanks to Gabriel for recommending borg!

Clustered cron jobs

We’ve been recently running rest APIs on active-active server pairs (docker containers running on pairs of VMs on separate hosts) with postgres-BDR (multi-master bidirectional replication) for fault-tolerant storage and a pair of fault-tolerant HAProxy instances for incoming request routing. This is a robust setup which provides zero downtime during rolling updates or hardware maintenance or failure.

However, clustering scheduled jobs (i.e. ensuring that scheduled jobs execute exactly once) becomes a problem in this configuration. Multi-master replicated databases avoid a single point of failure, but they are not suitable for use with database-based clustered schedulers like Quartz, so we needed to consider other options. There are complex clustered job schedulers, but we wanted to keep it simple and use Linux crond for scheduling. We finally settled on using keepalived to maintain a single master across the cluster and scheduling the cron jobs identically on all servers, using a small if_master.sh script to ensure that a crontab entry only runs if the server is currently the keepalived master. The crontab entries then look like:
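(The scheduled job itself is a placeholder.)

    # identical on every server: run the nightly job at 02:00, but only on the keepalived master
    0 2 * * *  /usr/local/bin/if_master.sh /usr/local/bin/nightly-job.sh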

if_master.sh checks whether the server currently holds the keepalived floating IP address and only executes the given command if it does.
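A minimal sketch, assuming the floating IP is the virtual_ipaddress defined in keepalived.conf:

    #!/bin/sh
    # if_master.sh (sketch) – run the given command only if this host holds the keepalived floating IP
    FLOATING_IP="10.0.0.100"   # placeholder: the virtual_ipaddress from keepalived.conf

    if ip addr show | grep -q "inet ${FLOATING_IP}/"; then
        exec "$@"
    fi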

The same crontab entries are installed on every server, but they do nothing on a server that is not currently the keepalived master.

This was the simplest solution we could find which allows us to keep using cron and configure crontabs on all servers identically.


Finally, perfect wifi coverage at home

We live in an old house, on three levels. It’s always been a challenge to achieve consistent wifi coverage throughout the house. We neglected to install ethernet cabling when we renovated and have been struggling with wifi issues ever since. We tried power-line networking (Devolo, TP-Link) and, although it worked most of the time, it provided very inconsistent performance and it was impossible to figure out why. We then reverted to a central wireless router and range extenders (Apple, TP-Link). Coverage was pretty bad in many parts of the house. Last weekend, we installed the AmpliFi HD mesh networking system from Ubiquiti and we finally have the full performance of our internet provider (40-60Mbps LTE, depending on the time of day) from any or all devices, anywhere in the house.


Replacing VMware with KVM

I’ve been using an i5 Intel NUC at home as a home server.

I initially installed ESX on the NUC and ran an Ubuntu VM with iptables, DNS, DHCP etc. However, I wanted to put the firewall between the home network and the LTE router, so I needed two network interfaces. The NUC only has one, so I thought I’d use VLANs to split the network. That turned out to be pretty complicated to manage, so I ended up buying a USB3 ethernet adapter (AX88179) for the NUC instead.

Getting that to work with ESX was a pain (I tried pass-through, but couldn’t get it to work reliably), so in the end I replaced ESX on the NUC with KVM. Worked great – the USB ethernet adapter installed out of the box with Ubuntu and the whole configuration only took an hour to set up.

As with ESX, I still need an extra VM on my MacBook (Ubuntu desktop under Fusion) to run virt-manager to manage KVM (instead of the Windows VM where I used to run the VMware client to manage the ESX installation).

I used to back up the VMs with ghettoVCB to a Synology via NFS (which worked completely reliably). Now I’ll use LVM snapshots and file copy to back up the KVM VMs to the same Synology.
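Roughly along these lines (the volume group, logical volume and target path are placeholders):

    # snapshot the running VM's logical volume, copy it away, then drop the snapshot
    lvcreate --snapshot --name vm1-snap --size 5G /dev/vg0/vm1
    dd if=/dev/vg0/vm1-snap of=/mnt/synology/vm1.img bs=4M
    lvremove -f /dev/vg0/vm1-snap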

Powerline networking with mixed Devolo and TP-LINK

We’ve been using Devolo powerline networking at home for years – we have an old house with wifi-proof walls. We’d been having plenty of problems with them, possibly due to overheating of the wifi plugs. After replacing the dLan 500 adapters with the more expensive Devolo dLan Pro adapters without any improvement, I finally decided to try TP-LINK AV500s instead.

The TP-LINKs are much cheaper (base plug plus two WLAN repeaters for < EUR 100). They are also compatible with the Devolo dLan 500s, so I’m still using a few of the Devolos (for a printer and in the cellar). Unfortunately they don’t perform better than the Devolos – I still get about 30Mbit throughput (measured with iperf between two devices connected by ethernet to the TP-LINKs). Should’ve installed CAT5 when wiring the house :(.

Addendum: the Devolos did not play so well with the TP-LINK network, so I ended up replacing them with some additional TP-LINKs. For the time being, the network seems to run reliably.