What happened to the cattle?

In this blog post I am going to go over some good and bad ideas for working with servers in general, because I feel that on this particular topic most of the common recommendations are bad or inaccurate.

First of all, I believe you should have a proper bootstrapping process for your servers, and that process should also be responsible for enrolling each machine with your configuration management master (if, of course, you are using a master/slave kind of tool). From then on you should not need to ssh into a machine, except perhaps when something unusual happens. Your logs should already be forwarded to a centralized logging stack (e.g. Elasticsearch, Logstash, Kibana).
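To make that concrete, here is a rough sketch of what such a bootstrap step could look like. It assumes a Puppet-style master and a Logstash endpoint; the hostnames, package names, and file paths are made up for illustration and are not from any particular setup.

```python
#!/usr/bin/env python3
"""Minimal bootstrap sketch: enroll a fresh server with a config management
master and point its logs at a central stack. Hostnames, package names, and
paths below are assumptions for illustration only."""
import subprocess

CONFIG_MASTER = "puppet.example.internal"    # hypothetical master hostname
LOGSTASH_HOST = "logstash.example.internal"  # hypothetical log collector

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def install_agents():
    # Assuming a Debian/Ubuntu base image; swap for yum/dnf as needed.
    run(["apt-get", "update"])
    run(["apt-get", "install", "-y", "puppet-agent", "filebeat"])

def enroll_with_master():
    # Point the agent at the master; the master drives everything after this.
    run(["puppet", "config", "set", "server", CONFIG_MASTER, "--section", "agent"])
    run(["systemctl", "enable", "--now", "puppet"])

def forward_logs():
    # Ship syslog and auth logs to the central stack (Logstash in this sketch).
    with open("/etc/filebeat/filebeat.yml", "w") as f:
        f.write(
            "filebeat.inputs:\n"
            "  - type: log\n"
            "    paths: [/var/log/syslog, /var/log/auth.log]\n"
            f"output.logstash:\n  hosts: ['{LOGSTASH_HOST}:5044']\n"
        )
    run(["systemctl", "enable", "--now", "filebeat"])

if __name__ == "__main__":
    install_agents()
    enroll_with_master()
    forward_logs()
```

In practice you would run something like this from cloud-init or your provisioning tool rather than by hand, so the machine never needs an interactive login in the first place.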

In terms of specific configuration, having everyone log in with a single shared user account is a terrible idea. You can't audit who did what, ever. You have to remember to remove people from authorized_keys when they leave, and also make sure they haven't left themselves a backdoor -- a cron job that reinstates their key, an extra user account, even just changing the root password to something else.

User account management is a pain, which is why we have things like LDAP. Everyone gets their own user account, you can audit who does what on every machine, and for anything that requires root, sudo will log what people do (of course, if you let people have root shells, that's harder). The only people who get a local account (and/or root, though I still think root should just have a random password that no one knows) are a few sysadmins. When someone leaves, you kill their account in the LDAP server.
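If you are wondering what "killing the account" looks like in practice, here is a minimal sketch using the ldap3 Python library. The server address, bind DN, and directory layout (ou=people,dc=example,dc=com) are assumptions for illustration; adapt them to your own directory.

```python
"""Minimal offboarding sketch using the ldap3 library. All names and DNs
here are placeholders, not a reference to any real directory."""
from ldap3 import Server, Connection

def remove_user(uid, admin_password):
    server = Server("ldaps://ldap.example.com")
    conn = Connection(server,
                      user="cn=admin,dc=example,dc=com",
                      password=admin_password,
                      auto_bind=True)
    dn = f"uid={uid},ou=people,dc=example,dc=com"

    # Delete the entry; every machine that authenticates against the
    # directory loses this user's access at the same time.
    conn.delete(dn)
    print(conn.result)
    conn.unbind()

if __name__ == "__main__":
    import getpass
    remove_user("departed.user", getpass.getpass("LDAP admin password: "))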

Keeping your systems up to date while only accepting security updates is a good policy. Fail2ban is a good tool (but it's a starting point; you should be doing other things to detect suspicious behavior). Blindly updating systems in production is a terrible idea; things break in the open source world all the time when you do this. Never ever use unattended-upgrades. You just need to stay on top of security updates. Period. No excuses.
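As a small illustration of "staying on top of security updates", here is a sketch that reports pending security updates on a Debian/Ubuntu host. It uses apt-get's simulate mode (so it changes nothing) and assumes the usual convention that security packages come from a "-security" pocket.

```python
"""Quick check for pending security updates on a Debian/Ubuntu host.
A sketch only: it assumes apt-get is present and that security packages
come from a '-security' pocket."""
import subprocess

def pending_security_updates():
    # -s / --simulate prints what would be upgraded without changing anything.
    out = subprocess.run(["apt-get", "-s", "upgrade"],
                         capture_output=True, text=True, check=True).stdout
    updates = []
    for line in out.splitlines():
        # Simulated upgrade lines look roughly like:
        # Inst openssl [3.0.2-0ubuntu1] (3.0.2-0ubuntu1.15 Ubuntu:22.04/jammy-security [amd64])
        if line.startswith("Inst ") and "-security" in line:
            updates.append(line.split()[1])
    return updates

if __name__ == "__main__":
    pkgs = pending_security_updates()
    if pkgs:
        print(f"{len(pkgs)} security update(s) pending: {', '.join(pkgs)}")
    else:
        print("No pending security updates.")
```

Wire something like this into your monitoring and you get the "no excuses" visibility without handing production over to blind automatic upgrades.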

A different approach to working with servers is baking pre-configured machine images. As an example, you can refer to the Netflix tech blog and their post about how they get by without config management: http://techblog.netflix.com/2013/03/ami-creation-with-aminator.html
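For a rough sense of what "baking" looks like, here is a minimal boto3 sketch that turns an already configured EC2 instance into an AMI that new servers can boot from fully ready. This is not the Aminator workflow from the post above, just an illustration of the idea; the instance ID and image name are placeholders.

```python
"""Minimal image-baking sketch using boto3. Instance ID and names are
placeholders; this is an illustration, not the Aminator workflow."""
import time
import boto3

def bake_image(instance_id, name):
    ec2 = boto3.client("ec2")
    resp = ec2.create_image(
        InstanceId=instance_id,
        Name=name,
        Description="Golden image baked from a pre-configured instance",
        NoReboot=False,  # reboot so the filesystem is quiesced before the snapshot
    )
    image_id = resp["ImageId"]

    # Wait until the AMI is available before anything tries to launch from it.
    waiter = ec2.get_waiter("image_available")
    waiter.wait(ImageIds=[image_id])
    return image_id

if __name__ == "__main__":
    ami = bake_image("i-0123456789abcdef0", f"web-golden-{int(time.time())}")
    print("Baked image:", ami)
```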

TLDR: You should never even need to ssh into a machine to configure anything anyway. Rolling out a new server should be handled fully automatically. The first time you log into the server, it should be completely ready to go -- in fact, it should be ready to go without you needing to log into it at all. This takes a small amount of up-front effort, and will pay off immediately when you bring up your second server (whether via configuration management or by baking golden images).