Common approaches to securing Linux servers and what runs on them.

Kevin M. Gallagher
8 min read · Feb 4, 2018


Are we always doing everything that is necessary to secure, and I mean really seriously secure, any valuable server containing sensitive information on the internet? According to Shodan, the answer is no. Some have reason to cling to an intense threat model, and others may be more flippant. Both white and black hats sometimes have cause to be secretive about their methods. But achieving real endpoint security is not just the familiar mantra of using 2FA and password managers, installing updates and not clicking on suspicious links. Experts know it’s a lot more complicated than what we tell the public.

This post is simply a variation of a talk I’ve previously given twice at conferences, and is geared toward those who are new to, or learning about, Linux security. I am not really discussing web application security here; after all, we have the OWASP Top Ten to teach developers about input sanitization, SQL injection, CSRF, XSS, session management and whatnot. I’m talking about what you’ll want to consider having in place if you are worried about highly advanced attackers and need to guard against the possibility of malicious code or privileged scripts being executed at any point in time, a remote intruder already in your systems, or even an insider threat lying in wait to steal and exfiltrate all of your important data.

I’ll mention some basic concepts, and include associated tools. So here’s what you should ask yourself:

  • Access control lists (ACLs)…

Is your infrastructure split into groups with varying role-based access levels to different systems, or do users possess entirely homogenous privileges?

acl: getfacl+setfacl

  • System auditing…

Can you account entirely for what users executed while logged in to one of your machines?

see: auditd, go-audit
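A sketch of what that looks like with auditd (the rule file path and key names are illustrative); rules like these record every execve and any write to /etc/passwd:

```
# /etc/audit/rules.d/exec.rules (loaded by augenrules at boot; needs root)
# Log every program execution on 64-bit syscalls, tagged "exec-log"
-a always,exit -F arch=b64 -S execve -k exec-log

# Watch /etc/passwd for writes and attribute changes
-w /etc/passwd -p wa -k passwd-changes
```

Query the resulting records later with, e.g., `ausearch -k exec-log`.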

  • Static analysis and fuzzing…

So you’re running some C/C++. Obviously that shit is not memory safe, and memory-safety bugs account for the vast majority of serious vulnerabilities that arise. Has that code been audited, statically analyzed, or better yet, fuzzed to make sure it’s solid? Have you considered developing in Rust instead?

see: afl, Radamsa, Sulley, boofuzz, Coverity Scan, Valgrind, sanitizers

  • Network segmentation…

Is your backend properly separated from your frontend and load balancers? Are the hosts that have no need to be reachable from the open internet actually without a route to them? Have you taken the time to set up a company VPN and give your machines private, internal addresses?

see: iptables, ufw
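As a sketch with ufw (the subnets, port numbers and tiers here are illustrative; run as root on the backend host):

```
# Default-deny inbound, then open only what the app tier needs
ufw default deny incoming
ufw default allow outgoing
ufw allow from 10.0.2.0/24 to any port 5432 proto tcp  # app servers only
ufw allow from 10.0.9.0/24 to any port 22 proto tcp    # admin VPN subnet
ufw enable
```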

  • Compartmentalization…

Do your employees use their work computers for personal activities like gaming or running applications not related to their work? Or do they have something like a virtual machine or container each for messaging, browsing, development…?

see: Qubes, VirtualBox

  • File permissions and umask…

File permissions are familiar to anyone with a basic understanding of Unix. If you don’t need a particular user or group to have read, write, or execute permission on something, then remove that permission and always go with the most restrictive model that still works. A restrictive umask ensures newly created files start out that way, too.

see: chmod, umask
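A quick sketch of umask in action: files created under a restrictive umask never start out group- or world-readable.

```shell
# Run in a subshell so the umask change doesn't leak into your session
rm -f /tmp/secret.txt
(
  umask 077             # strip all group/other bits from new files
  touch /tmp/secret.txt
)
stat -c '%a' /tmp/secret.txt   # prints 600: owner read/write only
```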

  • Containers…

Alright, containerization is in theory great for security. But I want to know who’s got permission to build and push images into production. Are those images signed and verified, and monitored for security updates and CVEs?

see: Docker, LXC

  • Utilizing threat intelligence…

The IPv4 address space produces a constant stream of malicious traffic: known-bad IPs, botnets ready to DDoS, and automated exploit scans. You can either collect intelligence about such activity yourself or subscribe to a product feed or blacklists. But how well do your termination points and firewalls react to and incorporate this information?

see: awesome-threat-intelligence

  • Firewall and packet filters…

How often have you audited your iptables rules or what your router/firewall is enforcing, or even run verification tests against them? Have you set up the fleet so that machines can only talk to those which they absolutely need to talk to?

see: pfSense, OPNsense

  • DNS and domain registrar…

How much effort have you put towards locking this down? Will you be alerted upon unauthorized changes to your nameservers or DNS zone file? Further, have you enabled DNSSEC, for whatever that’s worth?

Personally, I recommend Namecheap as a registrar and Cloudflare for performant DNS.

  • Physical access…

What does physical access get an attacker? If I’m law enforcement with a court order, datacenter staff, or your hosting provider, can I freely read the contents of your server? Not if you’re using full-disk encryption. Moreover, when your disks are decommissioned or replaced, are they going to be wiped? If someone plugs a USB drive into your 1U rack, are you going to get an alert about it?

see: LUKS/cryptsetup

  • Do you have deterministic builds?

When a developer builds your code and pushes it into production, can you verify that the binary artifact is what it was intended to be and the source code or dynamically-linked dependencies have not been maliciously modified at some point?

see: Gitian, ReproducibleBuilds

  • Verifying digital signatures…

No doubt you’ll sometimes fetch software from a website instead of through your package manager. Did you compare the checksums/hashes or verify the signature on that download before your team member went ahead and built or installed it?

see: Making and verifying signatures with GnuPG
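For example, checking a download against a published checksum list (filenames are illustrative; the SHA256SUMS file should itself arrive over HTTPS or carry a GPG signature you verify first):

```shell
# Verify the detached signature on the checksum list, if one is provided:
#   gpg --verify SHA256SUMS.asc SHA256SUMS
# Then confirm the downloaded artifact matches its published hash:
sha256sum -c SHA256SUMS --ignore-missing
```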

  • Have you sandboxed your application(s)?

Does it have an AppArmor profile or seccomp filter or RBAC policy specifying what it can and cannot do in terms of system calls and access rights?

see: seccomp, AppArmor
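A minimal AppArmor profile sketch (the binary path, rules and log path are all illustrative; real profiles are best generated with aa-genprof and then tightened by hand):

```
# /etc/apparmor.d/usr.bin.mywebapp (hypothetical application)
#include <tunables/global>

/usr/bin/mywebapp {
  #include <abstractions/base>

  network inet stream,         # TCP sockets only
  /usr/bin/mywebapp mr,        # read and map its own binary
  /var/www/mywebapp/** r,      # read application files
  /var/log/mywebapp/*.log w,   # append to its own logs
  deny /etc/shadow r,          # explicit deny, belt and braces
}
```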

  • TLS and crypto configuration…

Have you removed insecure ciphersuites and algorithms entirely from the picture (e.g. MD5, SHA1, RC4) and insisted on supporting only the strongest available? Select the best ciphers, HMACs and key exchange algorithms possible within your compatibility requirements. Prefer elliptic curve to RSA if available. Defaults are probably not good enough. This applies to OpenSSH, GnuPG, OpenVPN, etc.

There’s no excuse to not have transport-layer security on most services exposed to the internet anymore, as one can readily obtain free certificates with Let’s Encrypt.

see: Applied Crypto Hardening, Bulletproof SSL and TLS, Server-side TLS
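An nginx fragment in that spirit (the exact cipher list is an illustrative modern baseline; generate one suited to your client compatibility from the Applied Crypto Hardening guide or Mozilla’s Server-side TLS recommendations):

```
# Inside a server {} or http {} block
ssl_protocols TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305';
```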

  • Keys and secrets management…

I’m sorry, but that private key shouldn’t be trusted if it’s been around for a decade and lived on all of your personal computers. Consider moving certain highly prized keys into cold storage or across the air gap. If your employees all have their own keys, think about adopting a solution to synchronize them across the domain. Secrets should be moved out of version control.

see: GPGSync, sops, Vault

  • HTTP security headers…

There are quite a few of these, with varying or arguable utility, but there are enough websites out there still without any at all that it’s worth mentioning.

Here’s a list: X-Frame-Options, X-XSS-Protection, X-Content-Type-Options, X-Download-Options, X-Permitted-Cross-Domain-Policies, Content-Security-Policy, Referrer-Policy, Strict-Transport-Security, Public-Key-Pins

see: Mozilla web security guidelines
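An nginx fragment as a starting point (values are illustrative; a real Content-Security-Policy has to be tuned to your actual asset origins):

```
# Inside a server {} block; "always" emits headers on error responses too
add_header X-Frame-Options "DENY" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header Content-Security-Policy "default-src 'self'" always;
```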

  • File integrity monitoring…

Are you periodically checking that critical files have not been modified, and generating alerts for changes?

see: Tripwire, OSSEC
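Even without a dedicated tool, the core idea is just a baseline of cryptographic hashes re-checked on a schedule; a bare-bones sketch (paths illustrative, and the baseline belongs off-host or on read-only media):

```shell
# Take a baseline of everything under /etc (run once, as root)
find /etc -type f -exec sha256sum {} + > /var/lib/etc-baseline.sha256

# Re-check from cron: any modified file is reported as FAILED
sha256sum --quiet -c /var/lib/etc-baseline.sha256 \
  || echo "ALERT: file integrity check failed"  # pipe this to your alerting
```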

  • Intrusion detection…

Okay, so you have some kind of integrity monitoring, but are you just running the tool with the default rule set and haven’t taken the time to actually train it on the specifics of your application deployment?

see: Comparison of host-based intrusion detection systems, Snort

  • Vulnerability assessment…

Anyone can subscribe to the correct mailing lists and watch for new exploits as they get disclosed and patched. But when’s the last time you ran anything that checked your stuff for active CVEs?

see: Nessus, CoreOS clair

  • Security of the base system…

If you are a big enough target, then do you actually trust Debian/Ubuntu or RHEL or whatever company’s third-party software repository you’ve added to always deliver you flawless, non-malicious packages? Here’s a thought: you can host your own repositories, pin to specific versions and upgrade stuff only after it’s been tested.

Better yet, run an extremely minimal OS based on Alpine or LinuxKit. The more you reduce your “attack surface”, the less likely you are to be exploited.

  • LSMs (Linux Security Modules)

Meaning AppArmor, SELinux, Landlock or Smack. Have they done anything for you lately?

  • Linux kernel hardening and enhancement…

Check out PaX and the grsecurity patch set. It contains too many neat features to list here, so I direct you straight to their website to learn more about their fine work.

It goes without saying that the kernel and CPU microcode are both items which should not languish in legacy versions, due to multiple privilege escalation bugs in Linux, and recent issues like Spectre and Meltdown.

I’ve been maintaining a list of security-relevant options for systemd service units.

see: Linux Kernel Runtime Guard, Kernel Self Protection Project
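In the systemd vein, a drop-in sketch (the service name is illustrative; every directive here is real, but each one needs testing against your workload before production):

```
# /etc/systemd/system/myapp.service.d/hardening.conf
[Service]
NoNewPrivileges=yes
PrivateTmp=yes
ProtectSystem=strict
ProtectHome=yes
ProtectKernelModules=yes
ProtectControlGroups=yes
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
SystemCallFilter=@system-service
```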

  • Removing unnecessary devices…

If you’re not using Thunderbolt or Firewire or the WiFi adapter, or anything which has DMA (Direct Memory Access), then there’s no reason to load those kernel modules.

see: Kernel module blacklisting
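As a sketch (module names vary by kernel and hardware; check lsmod for what’s actually loaded):

```
# /etc/modprobe.d/blacklist-dma.conf
# "blacklist" stops automatic loading; the "install ... /bin/false"
# lines prevent even an explicit modprobe from succeeding.
blacklist firewire-core
blacklist thunderbolt
install firewire-core /bin/false
install thunderbolt /bin/false
```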

  • Are you aggregating, parsing and alerting upon your logs?

Maybe you are sending all of your logs somewhere, but you don’t have alerts on certain lines or conditions and someone needs to manually go and check them. Logs are great; the data is interesting, so do something with it. Write the Logstash filters and grok patterns, don’t just leave that stuff unexamined.

see: Filebeat, rsyslog, Logstash
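Even a cron-driven shell check beats unread logs; a toy sketch (log path and threshold are illustrative) that flags repeated SSH authentication failures:

```shell
#!/bin/sh
# Alert when failed SSH logins in the auth log exceed a threshold.
LOG=/var/log/auth.log
THRESHOLD=5

fails=$(grep -c 'Failed password' "$LOG")
if [ "$fails" -gt "$THRESHOLD" ]; then
  echo "ALERT: $fails failed SSH logins in $LOG"  # pipe to mail or a webhook
fi
```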

  • How well are you monitoring resource usage?

RAM, CPU load, free disk space. This is pretty basic, but it’s key to detecting unusual activity anywhere in the fleet, so it’s worth mentioning.

see: Metricbeat, Prometheus node_exporter, Nagios, Osquery

  • Infrastructure tests…

Okay, so people are familiar with various aspects of software testing, but far fewer write infrastructure tests. How can you continually ensure the state of your system is as you intended it to be?

see: Serverspec, Testinfra
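The simplest version is a script of assertions run from CI or cron against every host; a sketch (the particular checks are illustrative):

```shell
#!/bin/sh
# Minimal infrastructure test: exit nonzero (and say why) on drift.
fail() { echo "FAIL: $1" >&2; exit 1; }

[ "$(stat -c '%a' /etc/passwd)" = "644" ] || fail "/etc/passwd permissions"
grep -q '^root:x:0:0:' /etc/passwd        || fail "root entry missing"
echo "all checks passed"
```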

  • Platform and firmware security…

Your BIOS and other low-level interfaces are subject to bugs. Intel® AMT and Management Engine should be disabled, as well as Computrace. There are useful frameworks for analyzing the security of system firmware and hardware components.


  • Protecting the remote shell…

The common guidelines apply to sshd: disable root login, use keys instead of passwords, and set up brute force protection. Listening on an alternate port is actually not all that helpful. A better solution would be to place it behind a VPN, an authenticated Tor hidden service, or require a port-knocking procedure.

see: fail2ban, denyhosts, sshguard, Secure Secure Shell
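As an sshd_config sketch (the AllowGroups group name is illustrative; reload sshd after editing, and keep an existing session open while you test):

```
# /etc/ssh/sshd_config (fragment)
PermitRootLogin no
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
MaxAuthTries 3
AllowGroups ssh-users
```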

  • Webserver best practices…

You don’t want to leak information about what version you’re running. Set server_tokens off; for nginx and ServerSignature Off for Apache. Look for and remove the X-Powered-By header if it’s there.

When running a complex application that’s reliant upon dynamic scripting languages, consider running a WAF (Web Application Firewall) like ModSecurity. Cloudflare provides this service at scale to its customers.
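Sketches for both servers (the proxy/fastcgi lines strip X-Powered-By coming back from an upstream or PHP-FPM):

```
# nginx (http or server context)
server_tokens off;
proxy_hide_header X-Powered-By;
fastcgi_hide_header X-Powered-By;

# Apache (main config)
ServerSignature Off
ServerTokens Prod
```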

  • Secondary factors…

I highly recommend the YubiKey, which has a variety of useful functionality. It can be configured to output a static password (well-suited for PAM user login or mounting volumes encrypted with a passphrase), HOTP, or Universal 2nd Factor (U2F), or it can work as an OpenPGP smartcard. These devices are indispensable for any sysadmin, as there’s no sense in keeping keys on your hard drive when they can be stored on a smartcard instead. I have published a detailed YubiKey GPG+SSH setup guide.

  • DNS resolution…

What are the contents of /etc/resolv.conf? Quad9 is an alternative to Google Public DNS or OpenDNS which blocks clients from accessing malicious domains, similar to how Chrome protects users from sites that serve malware via Safe Browsing. Set your nameserver to 9.9.9.9 to try it out.
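On a host that isn’t running a local caching resolver, that’s a one-line change (9.9.9.9 is Quad9’s primary anycast address; note that resolvconf or systemd-resolved may manage this file, in which case configure the upstream there instead):

```
# /etc/resolv.conf
nameserver 9.9.9.9
nameserver 149.112.112.112   # Quad9 secondary
```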

  • Audit trusted parties…

Beyond keeping your system’s trusted root certificate store up to date, you should also check your package manager every once in a while to see which third parties are trusted, whether their repository signing keys are sufficiently strong (many still use 1024-bit DSA), and remove any that are expired.

try: apt-key list, rpm -qa gpg-pubkey

  • Signing git commits and tags…

Nearly everybody’s using git for version control these days. When you make a new release, is it based off of a GPG-signed git tag? You can also sign individual commits if you like.

see: Signing tags using GPG, Git signing, Git tools — signing your work
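A command sketch (the key ID and tag name are illustrative; this assumes a GPG secret key is already on the machine):

```
# One-time setup
git config --global user.signingkey 0xDEADBEEFCAFE1234
git config --global commit.gpgsign true   # sign every commit by default

# Sign a release tag, then verify it
git tag -s v1.2.0 -m "release v1.2.0"
git tag -v v1.2.0
```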



Kevin M. Gallagher

Linux sysadmin/DevOps/SRE, privacy & transparency activist. 0xB604C32AD5D7C6D8