Linux:Secure Authentication

Edit: I finally wrote the script for automating this. It can be found on my Git page.

In my experience, Linux authentication is one of those problems with so many answers that it’s hard to define even a range of methodologies that could be considered right, let alone narrow it down to one or two. I’ve been dealing with this quite a bit at work recently and would like to post an idea I had here. Just be warned: this idea was not accepted for our solution, despite no one being able to give me more than one reason not to use it, which I will detail at the end of this post along with any other exploits I can imagine for this authentication methodology.

In a perfect world…​

In a perfect world, chroot environments would work securely, and our app developers and third-party vendors would write code on par with Apache or OpenSSH, which can be started as root and spawn child processes in user space for security. All application files would fit nicely into the defined standards for Linux filesystem organization, so we could package everything up nicely and deploy using repo servers. To top it all off, all applications would roll their own logs instead of filling up /var/log or somewhere on /, since they rarely follow standards. However, this is rarely if ever the case (I’ve never seen it, at least).

What I’ve seen up to this point is third-party applications that install themselves exclusively in /opt; applications that are hard-coded not to start unless running as uid 0 (root); binary startup scripts that situate themselves in /etc/rc.d/init.d/ (wtf guys?); and just general stubbornness about where the program is located.

Securing an Application Server

The first step I typically take to secure an application is to run it in user space as a service account with access only to its own directory in the /apps mount point. I use that approach on my own servers and it has served me very well. However, with this we have a few problems.
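As a rough sketch of what that setup looks like (the account name, paths, and start script here are purely illustrative, assuming a dedicated /apps mount point):

  # create the service account with its home under /apps and lock its password
  useradd -d /apps/myapp -m -s /bin/bash myapp_svc
  passwd -l myapp_svc
  chown -R myapp_svc:myapp_svc /apps/myapp
  chmod 750 /apps/myapp
  # the startup script then drops to the service account instead of running as root
  su - myapp_svc -c '/apps/myapp/bin/start.sh'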

Accessing Service Accounts

While security does tend to introduce complications and interruptions into workflow, it shouldn’t be catastrophic. If your security measures are so strict that your users can’t do what they need to, you’re doing it wrong. Simply running in userspace introduces several problems. A few, for example…​

  1. How do your users get to their service accounts in a secure way (no shared passwords or keys)?

  2. How do your users transfer files to and from their servers since they can’t directly access the service accounts?

  3. How do you manage this web of shared account access without it consuming much of your time?

Specifically, a solution is needed for the users to access their service accounts in an accountable and auditable way without hindering their ability to do their jobs [too much].

This is a problem that I and some fellow engineers have struggled with for a while now. Here are a few common service account authentication mechanisms that I’m sure we’ve all seen, and that aren’t necessarily the greatest.

Service Account Passwords

  1. They need to be shared for multiple users to have access

  2. They can be shared without the admins knowing (no accountability)

  3. They have to be routinely changed, which causes a huge headache for everyone involved, OS and app admins alike

Service Account Keys

  1. They need to be shared for multiple users to have access

  2. They can be shared without the admins knowing (no accountability)

  3. They have to be routinely changed, which causes a slightly lesser headache than passwords for everyone involved, OS and app admins alike

Sudo

Sudo provides a pretty clean solution to the problem. It allows you to limit who has access to the service account as well as log who uses it and when. Just put your application admins into their own group and give that group explicit access to run ONE command…​

sudo su - service_account
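In sudoers terms, that could look something like this (the group and account names are placeholders):

  # /etc/sudoers.d/myapp -- the app admin group may become only its own service account
  %myapp_admins ALL = (root) /bin/su - myapp_svc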

This one is tremendously popular for very obvious reasons. However, despite using sudo, this one still has problems.

  1. Your end users can’t perform file transfers between their boxes since they can’t directly access their service accounts without a key or password

  2. We still lack accountability. Once the user is in a sudo’d shell, their commands are no longer logged.

  3. Managing this across an environment can be very time consuming unless you have a source on a server that you propagate out, but then you have to deal with server compliance.

Granted, there is a pretty obvious Unixy solution to this, but it involves your users all being in the same group as your service account, mucking around with umasks that unset themselves on reboot unless explicitly set, and making sure your sticky bit sticks.
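For reference, that Unixy route looks roughly like the following (names are again placeholders), and it’s exactly the group/umask bookkeeping I’d rather not babysit:

  # shared group, setgid directory, and a permissive umask for every member
  groupadd myapp
  usermod -aG myapp alice
  chgrp -R myapp /apps/myapp
  chmod 2775 /apps/myapp                 # setgid so new files inherit the group
  echo 'umask 0002' > /etc/profile.d/myapp-umask.sh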

There is another way though.

My Poorly Formed Idea

My idea uses a combination of the crontab, jump hosts, ssh keys, and segregated networks.

Start with two or more segregated networks: one for administration and one or more for operations. You will probably want three for operations: production, QA, and dev.

From there, put your servers in your operations networks and set up firewall or routing rules to only allow ssh traffic (port 22, or whatever port you prefer) between the administration network and the operations networks. Your operations networks should now only be accessible to users using the applications and to admins coming in from the administration network over ssh.
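On the operations side, the filtering can be as simple as something like this (the 10.10.0.0/24 administration subnet is made up for illustration):

  # on each operations server: accept ssh only from the administration subnet
  iptables -A INPUT -p tcp --dport 22 -s 10.10.0.0/24 -j ACCEPT
  iptables -A INPUT -p tcp --dport 22 -j DROP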

Next, build out a jump box on your administration network. One per application would be ideal for separation of concerns, but one for all apps should work well also. For the sake of simplicity, we’ll assume a single jump host.

Next, put all of your service accounts on that jump host with their own home directories in /apps. This assumes you have defined and reserved UIDs and GIDs for each of your service accounts so they can coexist on one system without conflicts. Then give each user group sudo access to sudo su - <service_account> into its respective service account on the jump host.
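Assuming a reserved UID/GID (5001 here is made up), creating an account identically on the jump host and the operations servers looks like this, with the same style of sudoers rule shown earlier granting each admin group access to its own account:

  # identical UID/GID everywhere so files and keys line up across hosts
  groupadd -g 5001 myapp_svc
  useradd -u 5001 -g 5001 -d /apps/myapp_svc -m -s /bin/bash myapp_svc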

At this point, the application admins/owners still don’t have access to their service accounts on the operations servers. Here’s where they get that access, using rotating ssh keys. Write a script that generates a new ssh key (I’ll post the source for mine later), sshes out to each box using the key being replaced, pushes the new key, and then, using the new key, removes the old key and any others. This allows you to schedule key changes automatically using cron; with that in place, just have the script swap out each service account’s key every X minutes (15 or 30 is what I have in mind). Once you’ve got the key exchange working, modify the sshd_config files throughout your environment to disallow all user login over ssh with passwords; that way, if your users do set a password to try to circumvent your security, it won’t be accepted anyway. You can also just disable password changing.
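I’ll post the real script later, but a minimal sketch of the rotation logic (the host list, key paths, and schedule are all assumptions on my part) looks something like this, run from each service account on the jump host:

  #!/bin/bash
  # rotate-key.sh -- swap this service account's ssh key on every operations server
  # Assumes the current key is ~/.ssh/id_rsa and is already authorized on each host.
  HOSTS="ops-web01 ops-web02 ops-db01"   # hypothetical operations servers
  NEWKEY="$HOME/.ssh/id_rsa.new"

  # 1. generate the replacement key pair
  ssh-keygen -q -t rsa -b 4096 -N '' -f "$NEWKEY"
  NEWPUB=$(cut -d' ' -f2 "${NEWKEY}.pub")

  for host in $HOSTS; do
      # 2. push the new public key out using the key that is about to be retired
      ssh -i "$HOME/.ssh/id_rsa" "$host" 'cat >> ~/.ssh/authorized_keys' < "${NEWKEY}.pub"
      # 3. connect back with the new key and strip out every other authorized key
      ssh -i "$NEWKEY" "$host" "grep -F '$NEWPUB' ~/.ssh/authorized_keys > ~/.ssh/ak.tmp && mv ~/.ssh/ak.tmp ~/.ssh/authorized_keys"
  done

  # 4. the new key becomes the current key in one quick overwrite
  mv "$NEWKEY" "$HOME/.ssh/id_rsa"
  mv "${NEWKEY}.pub" "$HOME/.ssh/id_rsa.pub"

The crontab entry and the sshd_config change that back it up would then be along these lines:

  # in the service account's crontab on the jump host: rotate every 30 minutes
  */30 * * * * /apps/myapp_svc/bin/rotate-key.sh

  # in /etc/ssh/sshd_config on every operations server: keys only, no passwords
  PasswordAuthentication no
  ChallengeResponseAuthentication no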

Pros

Operations Networks Become a Black Box

With this method, there is only one way in to every single operations box, and that one way in is on a secured subnet, presumably accessible only through a VPN or when on site.

File Transfers are Seamless

Users can use scp or sftp to transfer files seamlessly, using the jump host as the medium. If the keys are always regenerated as id_rsa, or the ssh config file is set up for each account, key regeneration won’t affect anyone: it takes milliseconds to overwrite the old key with the new one, so any new outbound connections will use the new key. End users shouldn’t even notice.
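If the rotating key is kept under a non-default name instead, a per-account ssh config entry on the jump host keeps all of this invisible to the users (the host pattern and key name are examples):

  # ~/.ssh/config for the service account on the jump host
  Host ops-*
      User myapp_svc
      IdentityFile ~/.ssh/id_rsa_rotating
      IdentitiesOnly yes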

Safety Despite Turnover

If your company has any measure of turnover, you’ve undoubtedly gone through the password and key change process after an employee leaves the team. With this method, you’re automatically changing the key every X minutes, so even if a departing employee does take a copy of the key, it’ll only be valid for a very short while.

Lower Licensing Costs

Many companies, through the use of additional software such as OpenLDAP, Samba, or some other third-party product, put their Linux/Unix servers on their Windows domain. A perk of this is that it gives your AD users access to Linux without having to manage a few hundred or thousand passwd, group, and shadow files. The downside is that if a third-party product is used, it costs a lot of money in licenses. With the jump host rotating key model, you can put just the jump host(s) on the domain and leave all operations servers off of it. That saves on licensing costs, maintenance time, and software installs. It also removes one more service running on your operations boxes, which removes one more access point for exploitation. Additionally, the fewer pieces of software running on a server, the less chance an update will break the applications it’s hosting.

Clean Home Directories

Next up, clean home directories. If you have an entire team of developers and/or application admins logging into every operations system, /home is going to be very large on lots of systems, costing money for backups (if you back home directories up, that is), wasting storage space (which is fairly cheap these days, though), and scattering your users’ files across systems, making them cumbersome for everyone to manage, including non system admins. With the jump host rotating key method, all of your home directories are on one host, so file management for the support staff is much easier.

Cons

Single Point of Failure

This is the one objection I heard from people at work. It can be mitigated in at least two ways. One is by having one jump host per application; it still beats putting hundreds or thousands of systems in AD and all the management and licensing costs that go with that. Another is to have a secondary jump host and set up rsync to synchronize the primary jump host with the backup, using the backup as a hot standby.
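The standby synchronization can be as simple as a couple of root cron entries on the primary (the hostname and paths are placeholders, and this assumes root ssh keys are already set up between the two jump hosts):

  # keep the standby jump host's home directories and sudo rules in step with the primary
  0 * * * * rsync -a --delete /apps/ standby-jump:/apps/
  5 * * * * rsync -a --delete /etc/sudoers.d/ standby-jump:/etc/sudoers.d/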

Single Point of Global Access

This is the one problem with this idea that I think is most relevant and potentially exploitable. However, if your administration boxes are on a network that is not reachable from anywhere but controlled locations, this shouldn’t be too big of a deal. Even if a mistake is made in the networking security or routing and a malicious user gets to a jump host, they still have to get into the service accounts, which are inaccessible except through sudo, which means the malicious user has to exploit an existing user account. Without that account’s password, though, they can’t sudo, so they would only have access to that one user’s files. Even if they could sudo, they would still only have access to the service accounts that user works with, so their impact would be minimal unless that user works on very high-profile applications. To sum it up, there are three very solid security measures in place (network segregation, user accounts, limited sudo access requiring passwords) that a malicious user has to get through before having any really impacting access.

Category:Linux Category:Security Category:Authentication