AppSec Blog

My Top 6 Honeytokens

A few years ago, I was looking for a new developer to join our team. The hard part, of course, was finding a developer who was up to the task. I don't believe much of what people say in their resumes, so instead I had candidates show me a site they had coded and give me permission to take a closer look. Many of the sites failed miserably after only some minor probing. The result was a small guide I published to summarize my experience, Web Application Auditing over Lunch. Sadly, a lot of attacks are just that simple. But then again, let's see how we can turn this around and use it against the attacker. An attacker will likely try simple tricks just like the ones outlined in the guide. As a countermeasure, I use a set of "honeytokens": simple tripwires that alert me to an attacker.

  1. Don't hand session credentials to automated clients: Whenever a browser identifies itself as "wget" or as a search engine, don't bother setting a session cookie for it. These clients shouldn't log in anyway. Yes, it is easy to fake the user agent, but many attackers don't bother. (See the first sketch after this list.)
  2. Add fake admin pages to robots.txt: Add a fake admin page as "Disallowed" to your robots.txt file. We all know, of course, that robots.txt should not be used as a security tool. But many websites still use it that way, and as a result attackers use it as a road map to attack a site. Whenever someone hits your fake "admin" page, you know they are up to no good. (Second sketch below.)
  3. Add fake cookies: Add a fake "admin" cookie and set it to "FALSE" or "No". This is a classic mistake attackers are looking for. But you are of course not using this cookie to assign admin privileges. Instead, you detect attacks whenever the cookie's value changes. (Third sketch below.)
  4. Add "spider loops": Little redirect loops to send spiders in a loop. Be nice, and add "NOFOLLOW" tags to not annoy legit search engines too much. See if anybody falls for it. It is kind of like a La Brea tarpit for web application vulnerability scanners.
  5. Add fake hidden passwords as HTML comments: On your login page, add a comment like <!-- NOTE: developer test login, user: alkhewr password: 234kjw --> ... and wait for someone to use it :-) (Fifth sketch below.)
  6. "Hidden" form fields: This is different from the <input type="hidden"> form field. Instead, add a regular form field <input type="text"> but set the style to "display: none" . That way, the form field will not be visible to normal browsers. But vulnerability scanners will happily fill it in. Note that this can be a problem for "audio browsers" used by the blind. You may want to pre-fill the form with something like "do not change this field".
Now, isn't this all just about hiding more serious vulnerabilities? Security through obscurity? Sure ... it is ... but it works. In my opinion, a good defensive technique is easy to implement but hard to bypass. These techniques take only minutes to implement, yet they break most automated tools and will cost an attacker hours of time.

You have to decide for yourself what to do once you have detected an attacker. Maybe just hope the attacker goes away? A more aggressive, but dangerous, approach is to shun attackers automatically. It all depends on how willing you are to risk locking out the wrong person. The hidden URLs in robots.txt are particularly dangerous here: someone who discovers the trick may spam the hidden URL to your customers and lock them out of the site if they click on it. If you decide to shun attackers, have a plan ready to mass "unblock" a large number of false positives.
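
If you do go the shunning route, blocks that expire on their own make false positives much more forgiving. A minimal sketch of that idea, in-memory only and with the mass-unblock plan built in (a real deployment would drive the firewall instead):

    import time

    BAN_SECONDS = 7200  # e.g., a two-hour block; pick your own duration
    _blocked = {}       # ip -> expiry timestamp

    def shun(ip, duration=BAN_SECONDS):
        """Block an IP for a limited time instead of forever."""
        _blocked[ip] = time.time() + duration

    def is_blocked(ip):
        expiry = _blocked.get(ip)
        if expiry is None:
            return False
        if time.time() > expiry:
            del _blocked[ip]  # the block ages out; false positives recover
            return False
        return True

    def unblock_all():
        """The mass-unblock plan: one call clears every block."""
        _blocked.clear()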

Got your own tricks like that? Let us know and write a comment below!

7 Comments

Posted June 05, 2009 at 7:27 PM

planetheidi

Nice. I'm adding this to the talk I'm giving at Toorcamp:

http://www.toorcamp.org/content/A9

Posted June 06, 2009 at 3:40 PM

Michael Condon

I've never tried any of the fake admin techniques, but I've always used "self-protecting code". Knowing how crawlers, site mirrors, etc. work, I've developed code that I add as standard PHP copies in every module. I've used a triangular redirect for a long time, which doesn't send any messages - it just keeps going until a client-side timeout. I know that certain modules should only be invoked by a specific list of others, and that they will only be using a PUT (for example), so if something else happens they go into the "Bermuda Triangle".
I also never use URL parameters, so if I get a result from QUERY_STRING, it's not me.
I have a few other techniques, but not for public disclosure.

Posted June 06, 2009 at 4:47 PM

BebeZed

Classic! I've used the fake admin thing before... and had the page log offenders. Great fun! Thanks for sharing these ideas -- each one is a gem!

Posted June 08, 2009 at 6:24 PM

John O

This isn't security by obscurity in any sense of the word. These are simple detection measures, and every one of them can be linked to an alerting system, so it is similar to an IDS if anything. Most wouldn't consider Snort catching a small portion of an attack and alerting the organization to be obscurity. These tactics are GREAT and show proactive thinking. Thanks for the great article.

Posted June 08, 2009 at 9:32 PM

ethicalhack3r

Reject user-agents from popular tools.

acunetix, webscarab, w3af, nikto, etc...

Posted June 11, 2009 at 7:32 PM

McFlowers

I use Fail2Ban to firewall clearly nefarious and/or unwanted traffic. The following will issue a two-hour firewall block on the offending IP:

- Access to robots.txt
- More than ten 404 responses in ten minutes
- Blacklisted user agents
- Access to /w00tw00t\.at.ISC.SANS
- Requests beginning with http://

Posted June 22, 2010 at 1:14 PM

Clerkendweller

Sorry for this being a bit late in the day, but we've been talking about honey traps over on the OWASP AppSensor project

http://www.owasp.org/index.php/Category:OWASP_AppSensor_Project

mailing list

https://lists.owasp.org/pipermail/owasp-appsensor-project/2010-June/000108.html

I believe item 4 above is slightly incorrect if this refers to the REL attribute value of HTML A (link) tags. The NOFOLLOW attribute value in links does not mean "do not follow"; it refers to an anti-comment-spam measure and means "do not pass on page rank". Wikipedia has a good description:

http://en.wikipedia.org/wiki/Nofollow

If these URLs were in robots.txt, then that might be what's intended here.
