Advanced Installation

This section goes further in explaining how to set up your bastion. You should have completed the basic installation first.

Encryption & signature GPG keys

Note

This section is a prerequisite to both the Rotation, encryption & backup of ttyrec files and the Configuring keys, accounts & groups remote backup steps further down this documentation.

There are 2 pairs of GPG keys being used by the bastion:

  • The bastion GPG key

    • The private key is used by the bastion to sign the ttyrec files

    • The public key is used by the admins to verify the signature and prove non-repudiation and non-tampering of the ttyrec files

  • The admins GPG key

    • The public key is used by the bastion to encrypt the backups and the ttyrec files

    • The private key is used by the admins to decrypt the backups (when a restore operation is needed) and the ttyrec files (when they need to be inspected)

Generating the bastion GPG key

Generate a GPG key that will be used by the bastion to sign files; this might take a while, especially if the server is idle:

 /opt/bastion/bin/admin/setup-gpg.sh --generate

 gpg: directory `/root/.gnupg' created
 gpg: Generating GPG key, it'll take some time.

 Not enough random bytes available.  Please do some other work to give
 the OS a chance to collect more entropy! (Need 39 more bytes)
 ..........+++++
 gpg: /root/.gnupg/trustdb.gpg: trustdb created
 gpg: key A4480F26 marked as ultimately trusted
 gpg: done
 gpg: checking the trustdb
 gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
 gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u

 Configuration file /etc/bastion/osh-encrypt-rsync.conf.d/50-gpg-bastion-key.conf updated:
 8<---8<---8<---8<---8<---8<--
 # autogenerated with /opt/bastion/bin/admin/setup-gpg.sh at Wed Mar 21 10:03:08 CET 2018
 {
     "signing_key_passphrase": "************",
     "signing_key": "5D3CFDFFA4480F26"
 }
 --->8--->8--->8--->8--->8--->8

 Done.

While it's working, you can proceed to the section below.

Generating and importing the admins GPG key

You should import on the bastion one or more public GPG keys that will be used for encryption. If you don't already have a GPG key for this, you can generate one. As this is the admin GPG key, don't generate it on the bastion itself, but on the administrator's own workstation (yours?) instead.

If you're running a reasonably recent GnuPG version (and the bastion is, too), i.e. GnuPG >= 2.1.x, you can generate an Ed25519 key by running:

 myname='John Doe'
 email='jd@example.org'
 bastion='mybastion4.example.org'
 pass=$(pwgen -sy 12 1)
 echo "The passphrase for the key will be: $pass"
 gpg --batch --pinentry-mode loopback --passphrase-fd 0 --quick-generate-key "$myname <$email>" ed25519 sign 0 <<< "$pass"
 fpr=$(gpg --list-keys "$myname <$email>" | grep -Eo '[A-F0-9]{40}')
 gpg --batch --pinentry-mode loopback --passphrase-fd 0 --quick-add-key "$fpr" cv25519 encr 0 <<< "$pass"

 gpg: key 3F379CA7ECDF0537 marked as ultimately trusted
 gpg: directory '/home/user/.gnupg/openpgp-revocs.d' created
 gpg: revocation certificate stored as '/home/user/.gnupg/openpgp-revocs.d/3DFB21E3857F562A603BD4F83F379CA7ECDF0537.rev'

If either you or the bastion is running an older version of GnuPG, or if you're unsure and/or prefer compatibility over speed or security, you can fall back to an RSA 4096 key:

 myname='John Doe'
 email='jd@example.org'
 bastion='mybastion4.example.org'
 pass=$(pwgen -sy 12 1)
 echo "The passphrase for the key will be: $pass"
 printf "Key-Type: RSA\nKey-Length: 4096\nSubkey-Type: RSA\nSubkey-Length: 4096\nName-Real: %s\nName-Comment: %s\nName-Email: %s\nExpire-Date: 0\nPassphrase: %s\n%%echo Generating GPG key\n%%commit\n%%echo done\n" \
   "$myname ($bastion)" "$(date +%Y)" "$email" "$pass" | gpg --gen-key --batch

 The passphrase for the key will be: ************
 gpg: Generating GPG key

 Not enough random bytes available.  Please do some other work to give
 the OS a chance to collect more entropy! (Need 119 more bytes)
 .....+++++

 gpg: key D2BDF9B5 marked as ultimately trusted
 gpg: done

Of course, in both snippets above, adjust the myname, email and bastion variables accordingly. Write down the passphrase in a secure vault: all bastion admins will need it to decrypt ttyrec files later for inspection, and also to decrypt the backups should a restore be needed. Once the key has been generated, export the public key with:

gpg -a --export "$myname <$email>"

Copy it to your clipboard, then, back on the bastion, paste it at the prompt of the following command:

 /opt/bastion/bin/admin/setup-gpg.sh --import

Also export the admins' private GPG key to a secure vault (if you want the same key to be shared by all the admins):

 gpg --export-secret-keys --armor "$myname <$email>"

Rotation, encryption & backup of ttyrec files

Note

The above section Encryption & signature GPG keys is a prerequisite to this one.

The configuration file is located at /etc/bastion/osh-encrypt-rsync.conf. You can ignore the signing_key, signing_key_passphrase and recipients options, as these were auto-filled when you generated the GPG keys: the setup scripts dropped configuration files in the /etc/bastion/osh-encrypt-rsync.conf.d directory, and any file there takes precedence over the global configuration file.
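
For instance, you can list the drop-in files that were auto-generated during the previous steps, and check which options they pin (the exact file names may differ on your setup):

ls -l /etc/bastion/osh-encrypt-rsync.conf.d/
cat /etc/bastion/osh-encrypt-rsync.conf.d/*.conf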

Once you are done with your configuration, you might want to test it by running:

/opt/bastion/bin/cron/osh-encrypt-rsync.pl --config-test

Or even go further by starting the script in dry-run mode:

/opt/bastion/bin/cron/osh-encrypt-rsync.pl --dry-run

Configuring keys, accounts & groups remote backup

Note

The above section Encryption & signature GPG keys is a prerequisite to this one, otherwise your backups will NOT be automatically encrypted, which is something you probably want to avoid.

Everything that is needed to restore a bastion from backup (keys, accounts, groups, etc.) is backed up daily in /root/backups by default.

If you want to push these backups to a remote location, which is strongly advised, you have to specify where to scp the backup archives. The configuration file is /etc/bastion/osh-backup-acl-keys.conf, and you should set the PUSH_REMOTE and PUSH_OPTIONS options.
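
As an illustration only, with a hypothetical remote host, destination path and dedicated SSH key (adjust these to your environment), the relevant part of /etc/bastion/osh-backup-acl-keys.conf could look like this:

 # where to scp the backup archives to (hypothetical values)
 PUSH_REMOTE="backup@backupserver.example.org:/var/backups/bastion/"
 # extra options passed to the copy command, e.g. a dedicated private key
 PUSH_OPTIONS="-i /root/.ssh/id_backup"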

To verify that the script is correctly able to connect remotely (and also validate the remote hostkey), start the script manually:

 /opt/bastion/bin/cron/osh-backup-acl-keys.sh

 Pushing backup file (/root/backups/backup-2020-05-25.tar.gz.gpg) remotely...
 backup-2020-05-25.tar.gz.gpg
 100%   21MB  20.8MB/s   00:00

Also verify that the extension is .gpg, as seen above, which indicates that the script successfully encrypted the backup.

Logs/Syslog

It is advised to use syslog for The Bastion application logs. This can be configured in /etc/bastion/bastion.conf with the parameter enableSyslog.
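
For reference, the corresponding setting in /etc/bastion/bastion.conf is a boolean; once enabled, the relevant line looks like this:

"enableSyslog": true,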

A default syslog-ng configuration is provided, if you happen to use syslog-ng. The file can be found at etc/syslog-ng/conf.d/20-bastion.conf.dist in the repository; please read the comments in the file to know how to integrate it properly into your system.
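
For instance, assuming the repository is cloned in /opt/bastion and that your syslog-ng loads snippets from /etc/syslog-ng/conf.d/, installing it could be as simple as:

cp /opt/bastion/etc/syslog-ng/conf.d/20-bastion.conf.dist /etc/syslog-ng/conf.d/20-bastion.conf
systemctl restart syslog-ng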

Clustering (High Availability)

Bastions can work in a cluster, with N instances. In that case, there is one master instance, where any modification command can be used (creating accounts, deleting groups, granting accesses, etc.), and N-1 slave instances, where only read-only actions are permitted. Any of these instances may be promoted to master, should the need arise.

Note that any instance can be used to connect to infrastructures, so in effect all instances can always be used at the same time. You may set up a DNS round-robin hostname, with all the instances IPs declared, so that clients automatically choose a random instance, without having to rely on another external component such as a load-balancer. Note that if you do this, you'll need all the instances to share the same SSH host keys.
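
As an example, such a round-robin setup in a BIND-style zone file could look like this (hypothetical name and IPs):

bastion.example.org.    IN A    192.0.2.11
bastion.example.org.    IN A    192.0.2.12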

Before setting up the slave instance, you should have the two bastions up and running (follow the normal installation documentation). Then, to set up the synchronization between the instances, proceed as explained below.

Allowing the master to connect to the slave

On the slave, set the readOnlySlaveMode option in the /etc/bastion/bastion.conf file to true:

run this on the SLAVE:
vim /etc/bastion/bastion.conf

This will instruct this bastion instance to deny any modification plugin, so that changes can only be done through the master.
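
Once set, the relevant line of the slave's /etc/bastion/bastion.conf (a JSON file) should look like this:

"readOnlySlaveMode": true,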

Then, append the master bastion synchronization public SSH keyfile, found in ~root/.ssh/id_master2slave.pub on the master instance, to ~bastionsync/.ssh/authorized_keys on the slave, with the following prefix: from="IP.OF.THE.MASTER",restrict

Hence the file should look like this:

run this on the SLAVE:
cat ~bastionsync/.ssh/authorized_keys
from="198.51.100.42",restrict ssh-ed25519 AAA[...]

Pushing the accounts and groups files to the slave

Check that the key setup has been done correctly by launching the following command under the root account:

run this on the MASTER:
rsync -v --rsh "ssh -i /root/.ssh/id_master2slave" /etc/passwd /etc/group bastionsync@IP.OF.THE.SLAVE:/root/
group
passwd

sent 105,512 bytes  received 8,046 bytes  75,705.33 bytes/sec
total size is 1,071,566  speedup is 9.44

If this works correctly, you'll have two new files in the /root directory of the slave instance. We'll need those for the next step, which is verifying that the UIDs/GIDs of the slave instance match the master instance's. Indeed, syncing the /etc/passwd and /etc/group files can have adverse effects on a newly installed machine where the packages were not installed in the same order as on the master, possibly leading to mismatching UIDs/GIDs for the same users/groups.

The next step ensures these are matching between the master and the slave before actually enabling the synchronization.

Ensuring the UIDs/GIDs are in sync

Now that we have the master's /etc/passwd and /etc/group files in the slave's /root folder, we can use a helper script to check that the UIDs/GIDs match between the master and the slave. This script's job is to detect any discrepancy and, if there is one, generate another script, tailored to your case, to fix it:

run this on the SLAVE:
/opt/bastion/bin/admin/check_uid_gid_collisions.pl --master-passwd /root/passwd --master-group /root/group --output /root/syncids.sh
WARN: local orphan group: local group 50 (with name 'staff') is only present locally, if you want to keep it, create it on the master first or it'll be erased

There is at least one warning, see above.
If you want to handle them, you may still abort now.
Type 'YES' to proceed regardless.

In the example above, the script warns us that some accounts or groups exist only on the slave instance, and not at all on the master. In this case, it's up to you to decide what to do. If you choose to ignore it, these accounts and groups will be erased on the first synchronization, as the master will push its own accounts and groups to the slave instance. Such a discrepancy shouldn't happen as long as you're using the same OS and distro on both sides. It may happen if you have installed more packages on the slave instance than on the master, as some packages also create system groups or accounts. A possible fix is to install the same packages on the master, and/or simply to add the account(s) and/or group(s) on the master, so that they're synchronized everywhere.
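
For instance, to keep the 'staff' group from the warning above, you could first create it on the master with the same GID, then re-run the check on the slave (hypothetical example):

run this on the MASTER:
groupadd -g 50 staff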

If you type 'YES' or simply don't have any warnings, you should see something like this:

(output continued)
Name collision on UID: master UID 38 exists on local but with a different name (master=gnats local=list)
-> okay, offsetting local UID 38 to 50000038
Differing name attached to same UID: master UID 38 doesn't exist on local, but its corresponding name 'gnats' does, with local UID 41
Name collision on UID: master UID 39 exists on local but with a different name (master=list local=irc)
-> okay, offsetting local UID 39 to 50000039
[...]
You may now review the generated script (/root/syncids.sh) and launch it when you're ready.
Note that you'll have to reboot once the script has completed.

The generated script is found at the location you've specified, which is /root/syncids.sh if you used the command line suggested above. Reviewing this script is important, as it is the one that will modify the UIDs/GIDs on your slave instance to sync them with the master's, including propagating these changes to your filesystem (changing file ownership, and restoring SUID/SGID bits where needed).

Once you're ready (note that you'll have to reboot the slave right after), you may run the generated script:

run this on the SLAVE:
bash /root/syncids.sh

We'll change the UIDs/GIDs of files, when needed, in the following mountpoints: / /home /run /run/lock /run/snapd/ns /run/user/1001 /run/user/1001/doc /run/user/1001/gvfs
If you'd like to change this list, please edit this script and change the 'fslist' variable in the header.
Otherwise, if this sounds reasonable (e.g. there is no remotely mounted filesystem that you don't want us to touch), say 'YES' below:

Please review the listed mountpoints (obviously, they'll differ from the ones above). As stated, you may edit the script to adjust them if needed. If any UID/GID needs to be changed to be in sync with the master, the script will ensure the changes are propagated to the specified filesystems. You might want to exclude network-mounted filesystems and the like, if any; the script does its best to do this for you, but you should check that it got it right.

Then, the script may list the daemons and running processes that it'll need to kill before doing the changes, as Linux forbids changing UIDs/GIDs when they're used by a process. This is why a reboot is needed at the end.

(output continued)
The following processes/daemons will need to be killed before swapping the UIDs/GIDs:
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
kernoops    2484  0.0  0.0  11264   440 ?        Ss   Apr11   0:04 /usr/sbin/kerneloops
whoopsie    2467  0.0  0.0 253440 11860 ?        Ssl  Apr11   0:00 /usr/bin/whoopsie -f
colord      2227  0.0  0.0 249220 13180 ?        Ssl  Apr11   0:00 /usr/libexec/colord
geoclue     2091  0.0  0.1 905392 20268 ?        Ssl  Apr11   1:09 /usr/libexec/geoclue
rtkit       1789  0.0  0.0 153156  2644 ?        SNsl Apr11   0:00 /usr/libexec/rtkit-daemon
syslog      1445  0.0  0.0 224548  4572 ?        Ssl  Apr11   0:02 /usr/sbin/rsyslogd -n -iNONE
systemd+    1305  0.0  0.0  91016  4088 ?        Ssl  Apr11   0:00 /lib/systemd/systemd-timesyncd

If you want to stop them manually, you may abort now (CTRL+C) and do so.
Press ENTER to continue.

As stated, ensure that it's alright that these daemons are killed. You may want to terminate them manually if needed, otherwise the script will simply send a SIGTERM to these processes.

(output continued)
[...]
Restoring SUID/SGID flags where needed...
[...]
UID/GID swapping done, please reboot now.

As instructed, you may now reboot.

Note

If you're currently restoring from a backup, you may stop here and resume the Restoring from backup procedure.

Enabling the synchronization

Now that the master and the slave UIDs/GIDs are matching, we may enable the synchronization daemon:

run this on the MASTER:
vim /etc/bastion/osh-sync-watcher.sh

You may review the whole configuration, but the two main items to check are:

  • enabled, which should be set to 1

  • remotehostlist, which should contain the list of hosts/IPs of the slave instances, separated by spaces (see the excerpt below)
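
As an indication, a minimal excerpt of /etc/bastion/osh-sync-watcher.sh configured for one slave could look like this (the IP is hypothetical, and the other options can usually be left at their defaults):

enabled=1
remotehostlist="192.0.2.42"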

If the synchronization daemon was not already enabled and started (i.e. this is the first slave instance you're setting up for this master), then you should configure it to start on boot, and you may also start it manually right now:

run this on the MASTER:
systemctl enable osh-sync-watcher
systemctl start osh-sync-watcher

Otherwise, if the daemon is already enabled and active, you can just restart it so it picks up the new configuration:

run this on the MASTER:
systemctl restart osh-sync-watcher

Now, you can check the logs, as shown below. If you configured syslog instead (which is encouraged), the logfile location depends on your syslog daemon configuration; if you're using our bundled syslog-ng configuration, the output is logged to /var/log/bastion/bastion-scripts.log.

run this on the MASTER:
tail -F /var/log/bastion/osh-sync-watcher.log
Apr 12 18:11:25 bastion1.example.org osh-sync-watcher.sh[3346532]: Starting sync!
Apr 12 18:11:25 bastion1.example.org osh-sync-watcher.sh[3346532]: 192.0.2.42: [Server 1/1 - Step 1/3] syncing needed data...
Apr 12 18:11:27 bastion1.example.org osh-sync-watcher.sh[3346532]: 192.0.2.42: [Server 1/1 - Step 1/3] sync ended with return value 0
Apr 12 18:11:27 bastion1.example.org osh-sync-watcher.sh[3346532]: 192.0.2.42: [Server 1/1 - Step 2/3] syncing lastlog files from master to slave, only if master version is newer...
Apr 12 18:11:28 bastion1.example.org osh-sync-watcher.sh[3346532]: 192.0.2.42: [Server 1/1 - Step 2/3] sync ended with return value 0
Apr 12 18:11:28 bastion1.example.org osh-sync-watcher.sh[3346532]: 192.0.2.42: [Server 1/1 - Step 3/3] syncing lastlog files from slave to master, only if slave version is newer...
Apr 12 18:11:30 bastion1.example.org osh-sync-watcher.sh[3346532]: 192.0.2.42: [Server 1/1 - Step 3/3] sync ended with return value 0
Apr 12 18:11:39 bastion1.example.org osh-sync-watcher.sh[3346532]: All secondaries have been synchronized successfully
Apr 12 18:11:39 bastion1.example.org osh-sync-watcher.sh[3346532]: Watching for changes (timeout: 120)...

Your new slave instance is now ready!

Creating SSHFP DNS records

If you want to use SSHFP to help authenticate your bastion's public host keys by publishing their checksums in your DNS, here is how to generate the correct records (replace bastion.name with your bastion's FQDN):

awk 'tolower($1)~/^hostkey$/ {system("ssh-keygen -r bastion.name -f "$2)}' /etc/ssh/sshd_config
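
The output consists of SSHFP resource records ready to be pasted into your zone file. Their general shape is as follows, where the first number identifies the key algorithm (e.g. 1 for RSA, 4 for Ed25519) and the second one the digest type (e.g. 2 for SHA-256); the digests below are placeholders:

bastion.name IN SSHFP 1 2 <sha256 digest of the RSA host key, in hex>
bastion.name IN SSHFP 4 2 <sha256 digest of the Ed25519 host key, in hex>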

You should then publish these records in your DNS zone. It is also a good idea to secure your DNS zone with DNSSEC, but this is outside the scope of this manual.

Hardening the SSH configuration

Using our SSH templates is a good start in any case. If you want to go further, there are a lot of online resources to help you harden your SSH configuration and audit a running SSHd server. As the field evolves continuously, we don't want to recommend one in particular here, as it might rapidly get out of date, but searching for "ssh audit" on GitHub is probably a good start. Of course, this also depends on your environment, and you might not be able to harden your SSHd configuration as much as you would like.

Note that for The Bastion, both sides can be independently hardened: the ingress part is handled in sshd_config, and the egress part is handled in ssh_config.

2FA root authentication

The bastion supports TOTP (Time-based One-Time Password) to further secure high-profile accesses. This section covers the configuration of 2FA root authentication on the bastion itself. TOTP can also be enabled for regular bastion users, but this is covered in another section. To enable 2FA root authentication, run this on the bastion:

script -c "google-authenticator -t -Q UTF8 -r 3 -R 15 -s /var/otp/root -w 2 -e 4 -D" /root/qrcode

Of course, you can check the --help and adjust the options accordingly; the example above has sane defaults, but you might want to tweak them if needed. Now, scan this QR code with your phone, using a TOTP application. You might want to keep a copy of the QR code somewhere safe, in case you need to enroll it on another phone later, by exporting a base64 version of it:

gzip -c /root/qrcode | base64 -w150

Copy this in your password manager (for example). You can then delete the /root/qrcode file.
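
Should you need to display the QR code again later, simply reverse the operation (assuming you saved the base64 output to a file, hypothetically named qrcode.b64 here):

base64 -d qrcode.b64 | gunzip > /root/qrcode
cat /root/qrcode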

You then have two configuration adjustments to make.

  • First, ensure you have installed the provided /etc/pam.d/sshd file, or at least added the corresponding line enabling the TOTP PAM plugin to your own configuration.

  • Second, ensure that your /etc/ssh/sshd_config file calls PAM for root authentication. In the provided templates, there is a commented snippet to do it. The uncommented snippet looks like this:

# 2FA has been configured for root, so we force pubkey+PAM for it
Match User root
    AuthenticationMethods publickey,keyboard-interactive:pam

Note that first, the usual publickey method will be used, then control will be passed to PAM. This is where the /etc/pam.d/sshd configuration will apply.

Now, you should be asked for the TOTP code the next time you try to log in through SSH as root. In case something goes wrong with the new configuration, be sure to keep your existing connection open, so you can fix the problem without falling back to console access.

Once this has been tested, you can (and probably should) also protect direct root console access to your machine with TOTP, by including a snippet similar to this one:

# TOTP config
auth    [success=1 default=ignore]  pam_google_authenticator.so secret=/var/otp/${USER}
auth    requisite                   pam_deny.so
# End of TOTP Config

inside your /etc/pam.d/login file.

Of course, when using TOTP, it is paramount to ensure your server's clock is properly synchronized through NTP.
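
On systemd-based distributions, a quick way to check is:

timedatectl status

The output should report the system clock as synchronized; if it doesn't, enable NTP synchronization (for instance with timedatectl set-ntp true) or set up your preferred NTP daemon.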