Steve Davis
Denyhosts 2.6 (< 6.el5) Misses a Large Number of Attacks

UPDATE (2015/03/09)

There was a release of denyhosts that finally fixes this bug! (Maybe the RHEL EPEL maintenance crew read my blog post, haha). So as long as you are patched up to 2.6-6.el5 (or the equivalent if you are running a newer version of CentOS/RHEL/etc.), you will not have this issue. Thanks EPEL team!

Original article for reference

So, I hesitate to call this a vulnerability, but it definitely is a critical issue.

How many of you use DenyHosts to thwart SSH brute force attacks? Did you install an rpm package from your server-oriented distribution, like CentOS or RHEL via EPEL (and probably others)? Was that version 2.6?

If so, listen up!

We are a small business without dedicated IT support. We know the importance of IT, and how a specialized IT manager would benefit our business. However, we are pretty capable at most IT related tasks, and for our small number of machines (6 workstations and 3 servers) we feel the costs would outweigh the benefits in our situation. The following is a direct example of where that line of thinking has failed us (don't judge!).

SSH

We rely on SSH in our organization. It makes system management a snap (if you know the command line), is secure, and enables a lot of tasks (remote development, data transfer, collaboration). We have a lot of faith in the SSH protocol and OpenSSH implementation. For this reason, we have enabled SSH logins to the internet at large, unfiltered, for years.

Brute force bots (a.k.a. those automated jerks on the internet)

Years ago, along with our policy of open access to port 22 through our firewall came a steady stream of automated brute force attacks on our server. This was picked up by our SysAdmin (i.e. me) and appropriate measures to mitigate these attacks were researched. At the time, a program called DenyHosts was very popular and easy to install and use on all of our Linux machines. So we did.

Not enough

After installing DenyHosts, these attacks continued but would get identified, denied, and quickly abandoned. Our bot attack traffic immediately dropped to an acceptable level.

However, there was a point in time where this seemed to change. It was long enough ago that I don't remember exactly when, but not long enough to forget that at some point the number of attacks on our servers started to increase and seemed to avoid getting blocked by DenyHosts. At the time no one was getting through (actually in 11 years we have never been remotely compromised through an SSH vector) -- but they were definitely affecting the traffic to the server and causing usability issues.

I sprang into action! And... couldn't figure out a good solution. Since I still wanted to maintain an open SSH policy, I explored other options. As it turned out, the attacks were all coming from a single Class A IP range. Well. You guessed it: I blocked that entire range of addresses.
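(For the curious, the block itself was nothing fancier than a single firewall rule along these lines; the /8 shown is a stand-in for illustration, not the real range:)

# iptables -I INPUT -s 203.0.0.0/8 -j DROP   # placeholder /8, not the actual range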

Now, I'm sure some of you are chuckling at this point, or maybe even out-right laughing at this... but it worked. The attacks immediately dropped off, and things went back to normal.

Fast-forward to a Server Reboot

Like most Linux admins I know, unless we have a critical kernel security vulnerability, our server stays up. That can mean months and months between reboots, and this period was no different. The thing is, that Class A firewall block I put in place was temporary.

You see, I had always told myself that I would come back and reexamine the problem and come up with a different, better, solution. But life as a small business owner got in the way, and that well-intentioned idea fell so far down on the backlog of tasks, that it fell right off.

Recently, our ISP told us we would experience some downtime with our server. A simple maintenance downtime, but it would require a server power-off (they were upgrading the firmware on our RAID card). This occurred without issue, but after that reboot, I started to notice some irregularities. A dropped SSH connection once in a while, or a slow connection. Infrequent at first, but seemingly increasing in frequency as the days went on.

Finally, last week it clicked: I checked the logs and we were getting SSH brute force attacked, yet Denyhosts was again not blocking the attacks.

The Attacks

So I began examining what was going on.

# /etc/init.d/denyhosts restart

No effect...

# vim /var/log/secure

No abnormalities, looks the same as always.

So I staged a fake attack from one of our own machines. Blocked. Hmm...
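("Staged a fake attack" just means hammering sshd from another of our machines with a bogus account until DenyHosts reacted; roughly along these lines, with testhost standing in for the server's real name:)

$ for i in 1 2 3 4 5; do ssh bogususer@testhost; done   # type a wrong password at each prompt

After a handful of failures the source IP landed in /etc/hosts.deny and further connections were refused.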

Closer look at the log files again...

Feb 26 13:35:05 alpha1 sshd[1176]: Failed password for invalid user tomcat from 207.110.31.76 port 43029 ssh2
Feb 26 13:35:05 alpha1 sshd[1186]: Invalid user tomcat from 207.110.31.76
Feb 26 13:35:12 alpha1 sshd[1212]: Failed password for invalid user tomcat from 207.110.31.76 port 46336 ssh2
Feb 26 13:35:12 alpha1 sshd[1217]: Invalid user tomcat from 207.110.31.76
Feb 26 13:35:15 alpha1 sshd[1200]: Failed password for root from 103.41.124.15 port 40675 ssh2
Feb 26 13:35:15 alpha1 sshd[1202]: Failed password for root from 103.41.124.18 port 37659 ssh2
Feb 26 13:35:15 alpha1 sshd[1198]: Failed password for root from 103.41.124.15 port 52201 ssh2
(This continues over and over in the logs)
Feb 26 13:35:15 alpha1 sshd[1237]: refused connect from 207.110.31.76 (207.110.31.76)   <-- Blocked

Hmm, all of the attacks that are getting through (i.e. not being blocked) are exclusively attempting to brute force the root account.

More digging confirmed it: that is definitely the pattern. Attacks against the root account keep going unblocked, while attacks against invalid accounts are blocked within minutes.

Denyhosts v2.10

So what was going on? Time to dig into the source code!

While preparing to review the source (and looking for anyone else experiencing the same problem), I stumbled upon an apparently newly maintained version of Denyhosts on Github. What to make of this? The original v2.6 Denyhosts website doesn't seem to explicitly mention that the project is abandoned, but it certainly has not been updated in a long time. Moreover, v2.6 still exists in many repositories (including the latest RHEL EPEL7 repository), and no one seems to have moved to this version 2.10 from Github.

Even more confusing, perusing the Github source code led me to another, nearly identical Sourceforge site:

denyhost.sourceforge.net

(Notice the missing s from the URL... *sarcasm* yeah, that isn't confusing *end sarcasm*).

Not sure which to look at, I decided to confine my search to the 2.6 version. It didn't take long to narrow down the cause, write a quick Python script, set up some test log files, and confirm the problem. There is a broken regular expression in the Denyhosts v2.6 code that makes it incompatible with the latest log file output from ssh.

I suppose this might not actually have been broken at the time it was written, as the log format of SSH might have changed after the release of v2.6 of Denyhosts. But regardless, this regular expression and the current log output from ssh are incompatible (and have been for quite some time):

FAILED_ENTRY_REGEX = re.compile(r"""Failed (?P<method>\S*) for (?P<invalid>invalid user |illegal user )?(?P<user>.*) from (::ffff:)?(?P<host>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})$""")

Feb 26 13:35:15 alpha1 sshd[1200]: Failed password for root from 103.41.124.15 port 40675 ssh2

Notice the trailing "port XXXXX ssh2" part? Yeah... that doesn't work: the $ anchor expects the line to end right after the IP address, so these entries never match and the failed attempts are never counted.
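If you want to reproduce it yourself, here is a minimal sketch (just the regex check, not the full Denyhosts parsing pipeline, and with the group names restored as shown above):

import re

# The Denyhosts 2.6 pattern; Denyhosts applies it to the sshd message after stripping the syslog prefix.
FAILED_ENTRY_REGEX = re.compile(r"""Failed (?P<method>\S*) for (?P<invalid>invalid user |illegal user )?(?P<user>.*) from (::ffff:)?(?P<host>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})$""")

old_style = "Failed password for root from 103.41.124.15"
new_style = "Failed password for root from 103.41.124.15 port 40675 ssh2"

print(FAILED_ENTRY_REGEX.search(old_style))  # matches, so the failure would be counted
print(FAILED_ENTRY_REGEX.search(new_style))  # None: the failure is silently ignored

Sure enough, the old-style line matches and the new-style line does not. Root brute force attempts in the modern log format are never counted, so the attacker's IP never racks up enough strikes to be denied.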

Fixed?

So now that I had found the problem, let's see if it has ever been fixed...
https://github.com/denyhosts/denyhosts/commit/e31b18662976a0097c5a83b53f...

Sure enough, the latest version of denyhosts from Sourceforge/Github does not have this problem. According to the project's news section, v2.7 includes a "minor DoS security fix". Yes, yes it sure does...
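For reference, the idea behind the fix is simply to tolerate the trailing port/protocol noise. A sketch of the shape of it (my illustration, not the exact upstream regex):

FAILED_ENTRY_REGEX = re.compile(r"""Failed (?P<method>\S*) for (?P<invalid>invalid user |illegal user )?(?P<user>.*?) from (::ffff:)?(?P<host>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})( port \d+)?( ssh\d*)?$""")

With the optional " port \d+" and " ssh\d*" groups at the end, both the old and the new sshd formats match.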

The takeaway

While the new site certainly spells out that it is important to upgrade to >= v2.7 of Denyhosts, there are a large number of repositories and big names out there that haven't made that switch. It isn't clear to me whether that is in the works or not, so I wanted to spell it out on our blog. But more importantly, I also wanted to point out that simply relying on the security updates from our distributions and upstream package maintainers, without any (automated or manual) oversight of our network security, is a bad idea. It was easy for me to leave this as a low priority because we are using a stable, server-focused, actively maintained distribution (CentOS), but this episode has reminded me that despite that relative level of comfort, I must keep my SysAdmin duties a high priority, or else we need to bite the bullet and hire a dedicated IT staffer.
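As a small first step in that direction, even a dumb scheduled check would have caught this months earlier. A rough sketch of the kind of thing I mean (paths assume a CentOS/RHEL box, and the threshold is arbitrary):

import re
from collections import Counter

# Count failed sshd logins per source IP in the current secure log.
failed = Counter()
ip_re = re.compile(r"Failed \S+ for .* from (\d{1,3}(?:\.\d{1,3}){3})")
with open("/var/log/secure") as log:
    for line in log:
        m = ip_re.search(line)
        if m:
            failed[m.group(1)] += 1

# Flag noisy IPs that Denyhosts never got around to blocking.
with open("/etc/hosts.deny") as f:
    denied = f.read()
for ip, count in failed.most_common():
    if count > 20 and ip not in denied:
        print("%s: %d failures and still not blocked" % (ip, count))

Cron that up with a mail alert and a regression like this one shows itself within a day, instead of after an ISP-forced reboot.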
