Archive for the ‘Information Security’ Category

Critical Infrastructure and the Internet (and by extension, the Cloud)

September 1, 2013

Dan Geer has written a nice article (http://queue.acm.org/detail.cfm?id=2479677) about putting critical infrastructure (like power grids) online. Unfortunately, it seems to ignore the economics involved: using the Internet is vastly cheaper than anything else, such as dedicated physical communications infrastructure (imagine the cost of laying your own network to connect a few power plants and control centers versus just buying Internet access from a provider). I suspect this pattern of “ZOMG! critical infrastructure is attached to the Internet” will be repeated for the Cloud, with the same economic drivers resulting in stuff being shoved into the Cloud anyway.


Your Logical Fallacy Is

September 1, 2013

I can’t wait to teach my kids this stuff and send them to school: https://yourlogicalfallacyis.com/

And in case the site ever goes down, here’s a local high-res copy of the poster: FallaciesPoster24x36


Is Microsoft spamming anyone else about robots.txt blocking Bing?

June 5, 2013

So Microsoft spammed me about my robots.txt again:

from: Jyoti Bhagavatula (HCL America Inc) <[email protected]>
to: “[email protected]” <[email protected]>
date: Wed, Jun 5, 2013 at 11:11 AM
subject: Robots.txt blocking Bing crawler: http://www.seifried.org/robots.txt

Hello,

I am contacting you on behalf of the Bing Search engine (http://www.bing.com/) in regards to your robots.txt file:

http://www.seifried.org/robots.txt

Our customers have alerted us that your website was partially absent from our results and we have discovered that you are blocking our crawler, named BingBot, via a disallow directive in your robots.txt file:

User-agent: msnbot
Disallow: /

User-agent: bingbot
Disallow: /

We would be pleased if you could edit your robots.txt file to allow our crawler to fetch and index your content properly, which will in turn increase traffic to your site via our search results, by including the following section:

User-agent: Bingbot
Disallow:

I find this pretty annoying. First, they don’t even take the time to look at my website and see that the contact email is pretty obviously [email protected]; instead they spam my DNS WHOIS email contact. Secondly, I’ve already told them several times that I don’t want to let them index my site, first via robots.txt and then in email replies to the spam they send me.

Is anyone else getting these emails?

Make Money Fast with BitCoin!

March 20, 2013

So I noticed these ads at slashdot.org (http://butterflylabs.com/landing/landing-ls.php) for high-speed crypto devices designed to mine BitCoins. They literally make money, fast. But if they are actually capable of mining BitCoins cost effectively, why is the company selling them? Wouldn’t it make more sense to simply run them and harvest the BitCoins themselves?

I did some back-of-the-envelope math and they don’t look that cost effective (unless the price of BitCoins rises, but of course then all bets are off and in theory even an Arduino would be cost effective). So I can only conclude that while this is some potentially cool technology, it is not cost effective. If I was going to try to make money with BitCoins, I would buy a ton of BitCoins, then sell them rapidly to crash the price (since the BitCoin market is still not terribly liquid), and then buy as the price bottoms out. Or I’d hack an exchange and steal a ton of BitCoins. Much like a casino, the only way to reliably make money with BitCoins is to cheat.
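For the curious, the napkin math looks roughly like this. Every number below is an illustrative assumption (hash rate, difficulty, and price all move constantly), not a Butterfly Labs spec:

#!/bin/sh
# Expected revenue: BTC/day = hashrate * 86400 / (difficulty * 2^32) * reward
HASHRATE=5000000000   # 5 GH/s, an assumed device speed
DIFFICULTY=4800000    # network difficulty, early-2013 ballpark
REWARD=25             # block subsidy at the time
PRICE=60              # assumed USD per BitCoin
BTC_PER_DAY=$(echo "$HASHRATE * 86400 / ($DIFFICULTY * 2^32) * $REWARD" | bc -l)
echo "BTC/day: $BTC_PER_DAY"
echo "USD/day: $(echo "$BTC_PER_DAY * $PRICE" | bc -l)"

Divide the device’s price by the USD/day figure for a naive payback period, and remember that difficulty climbs as more hashing power comes online, so the real payback is worse.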

Also if you want to get into BitCoins (beyond cheating/speculating) I suggest you read about deflationary spirals: https://en.bitcoin.it/wiki/Deflationary_spiral

All in all, considering that BitCoins are wholly unregulated, that the exchanges keep getting compromised, and the long-term deflationary issues, I would imagine most of us are better off investing in pretty much anything other than BitCoins.

Google Chrome and Kerberos on Linux

November 24, 2012

So we all know you can enable Kerberos by adding the “--auth-server-whitelist” option to the command line:

google-chrome --auth-server-whitelist="*.example.org"

But you can also make it permanent. Simply create a directory (in Linux) called /etc/opt/chrome/policies/managed/ and within it drop a JSON file such as example-corp.json with the following contents:

{ "AuthServerWhitelist": "*.example.org",
"AuthNegotiateDelegateWhitelist": "*.example.org" }

And voila, no need to fiddle with the command line options every time you start Chrome. Plus, as an administrator you can simply deploy that file automatically across all your workstations and not have to bother the users; things will just work.
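Putting it together, a minimal sketch of the whole setup run as root (the policy directory is the one described above; *.example.org is a placeholder for your own domain):

mkdir -p /etc/opt/chrome/policies/managed
cat > /etc/opt/chrome/policies/managed/example-corp.json << 'EOF'
{
  "AuthServerWhitelist": "*.example.org",
  "AuthNegotiateDelegateWhitelist": "*.example.org"
}
EOF

You should then be able to confirm Chrome picked the policy up by visiting chrome://policy.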

Fedora 16 with SELinux running WordPress with WP Super Cache

January 4, 2012

Updated (Jan 5, 2012): chcon changes the SELinux security context on a file, but a restorecon would wipe that out; you need to actually run semanage to change the policy, then run restorecon to make it “permanent”. Thanks to [email protected] for pointing this out to me.

So I recently started upgrading all the CloudSecurityAlliance web servers from F14 (with SELinux enabled) to F16 (with SELinux enabled). But I ran into a nasty little problem: WP Super Cache was broken. The error message that came up was:

Error: Your cache directory (/var/www/html/wp-content/cache/)
or /var/www/html/wp-content need to be writable for this
plugin to work. Double-check it.

Cannot continue... fix previous problems and retry.

Well, shoot. The file permissions were correct; Apache had write permission to the directory and so on. But it was unable to write… ah, it must be SELinux. The quickest way to test this is to simply disable SELinux enforcement for a second:

setenforce Permissive

and reload the WP Super Cache control page. Ah, it works, so we know it’s an SELinux problem. The good news is that this is easy to fix: simply set the label on the directories and files you want the httpd process to be able to write to (and don’t forget to re-enable SELinux after you disabled it in the previous step).
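If you’d rather confirm the denials without flipping enforcement off, the audit log records them; a quick check, assuming auditd is running (it is by default on Fedora):

# Show recent SELinux AVC denials involving httpd
ausearch -m avc -ts recent | grep httpd
# Re-enable enforcement once you're done testing
setenforce Enforcing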

You can use chcon to change the context on the directories:

chcon -R -t httpd_sys_rw_content_t /var/www/html/wp-content/cache/
chcon -R -t httpd_sys_rw_content_t /var/www/html/wp-content/plugins/
chcon -R -t httpd_sys_rw_content_t /var/www/html/wp-content/themes
chcon -R -t httpd_sys_rw_content_t /var/www/html/wp-content/uploads/
chcon -t httpd_sys_rw_content_t /var/www/html/wp-content/wp-cache-config.php

For a more permanent change, however, use semanage to update the targeted policy:

semanage -S targeted -i - << _EOF
fcontext -a -t httpd_sys_rw_content_t /var/www/html/wp-content/cache(/.*)?
fcontext -a -t httpd_sys_rw_content_t /var/www/html/wp-content/uploads(/.*)?
fcontext -a -t httpd_sys_rw_content_t /var/www/html/wp-content/plugins(/.*)?
fcontext -a -t httpd_sys_rw_content_t /var/www/html/wp-content/themes(/.*)?
fcontext -a -t httpd_sys_rw_content_t /var/www/html/wp-content/wp-cache-config.php
fcontext -a -t httpd_sys_rw_content_t /var/www/html/wp-content/blogs.dir(/.*)?
_EOF
restorecon -R -v /var/www/html

The “/var/www/html/wp-content/blogs.dir(/.*)?” entry above is only needed if you run WordPress multi-site; it makes multi-site uploads work properly.
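To double-check that the rules made it into the local policy and that restorecon applied them:

# List the local fcontext rules we just added
semanage fcontext -l | grep wp-content
# Confirm the cache directory now carries the rw label
ls -dZ /var/www/html/wp-content/cache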

Be mindful of SELinux policy updates: they shouldn’t overwrite the changes you made via semanage, but you may want to re-run restorecon after an update to pick up the latest and greatest policies. Also, you probably won’t have semanage installed by default:

yum install policycoreutils-python

Ok that was easy. But how do we find out what label needs to be applied? Well the ls command can give us a hint:

ls -dZ /var/www/html
drwxr-xr-x. root root system_u:object_r:httpd_sys_content_t:s0 /var/www/html

So we’re dealing with httpd (obviously, but for other servers and services it may not be as simple, so I’m doing this step by step). We can check for httpd-related contexts using grep:

grep httpd /etc/selinux/targeted/contexts/files/file_contexts

This will return a huge list of stuff. If you examine the output, however, you will notice lines like:

/var/lib/drupal.*       system_u:object_r:httpd_sys_rw_content_t:s0
/var/cache/mediawiki(/.*)?      system_u:object_r:httpd_cache_t:s0

So it would appear we have two main choices: httpd_sys_rw_content_t and httpd_cache_t. There are a number of other related labels as well: httpd_mediawiki_rw_content_t, httpd_git_rw_content_t, httpd_bugzilla_rw_content_t, httpd_mojomojo_rw_content_t and httpd_dspam_content_rw_t, to name a few. Any of these will work, but I chose httpd_sys_rw_content_t as it’s pretty obvious at first glance what it’s for.

For more details on Red Hat Enterprise Linux (this also applies to Fedora) with SELinux and confined services check the Red Hat documentation:

Managing_Confined_Services/sect-Managing_Confined_Services-The_Apache_HTTP_Server-Types.html

It feels slow – testing and verifying your network connection

May 1, 2010

So I have a backup network link (working from home means you need two network links) and it was feeling kind of slow. I had a Linksys BEFSX41 connected to it, which according to the specifications is an OK unit (does VPN, etc.), but in practice it felt really slow (web browsing was not fun). So I thought: let’s test this objectively.

The first thing, obviously, was to check the speed: am I getting what I paid for? A quick visit to www.speedtest.net showed that I was indeed getting the 4 megabits down and 1 megabit up that I pay for (it’s a wireless link, so not super fast, but I don’t have to worry about backhoe fade). So if I’m getting good upload/download speeds, why would it feel slow?

DNS

Luckily DNSSEC has been in the news a lot recently, and several DNS testing sites have come up in various blogs/conversations/etc. So I headed over to the ICSI Netalyzr, which promises to “Debug your Internet.” It’s a Java-based test and takes a while, but I have to say the results are worth it. It checks connection speed, filtering, DNS speed and filtering, and a few other things. It turns out DNS lookups were horribly slow (on the order of several thousand milliseconds… aka seconds). No wonder web browsing felt slow!

It turns out the BEFSX41 intercepts DNS lookups and proxies them: good for filtering, terrible for performance.
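You can measure this directly without Netalyzr: dig prints the latency of each query, so a quick loop against the router’s DNS proxy tells the story (substitute your router’s address for 192.168.1.1 and any hostname you like for www.example.org):

# Time five lookups through the router's DNS proxy
for i in 1 2 3 4 5; do
  dig @192.168.1.1 www.example.org | grep "Query time"
done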

So I tried out a D-Link EBR-2310, which had even worse DNS performance. To add insult to injury, it doesn’t support routing properly: on the BEFSX41 I can specify static routes, i.e. a router on 192.168.1.1 can get to 10.1.2.0/255.255.255.0 through the machine at 192.168.1.2, but the EBR-2310 simply doesn’t support any static routes. It also does the DNS proxy intercept, only worse than the BEFSX41 (about twice as slow, in other words completely unusable).

So off to the store I went for a Netgear RP614v4, hoping that because it was a relatively recent device it would have slightly better hardware and firmware. Luckily I was right. It’s a fairly limited device; you can set it up as a DHCP server, but you don’t really have many (well, any) options as to what it serves out via DHCP (domain, DNS servers, default gateway, etc. all come from a brain-dead default set). But it does DNS lookups in an average of 70-80ms (as opposed to 1-3 seconds).

On my main subnet, Internet access is brokered through a pretty vanilla OpenBSD machine (apart from having IPv6 enabled it’s pretty bog standard) and DNS lookups etc. are much faster. If anything, this experience has taught me that if you want performance, go find a small cheap machine, load it up with OpenBSD, and be happy. Time to buy a Soekris, I suppose. Oh, and if you want DNSSEC, these hardware firewalls aren’t going to do the trick: they all pretty much only support short DNS replies, meaning that longer DNSSEC replies will be truncated (and thus broken). To test this you can use the OARC reply size test:

dig +short rs.dns-oarc.net txt

I also decided to test my network links for traffic shaping and the like; it turns out my primary ISP shapes traffic and my backup ISP doesn’t. To see whether yours does, check out the EFF page covering this.

IPv6 and OpenBSD (Part 2)

May 1, 2010

So now that you’re online with an IPv6-enabled OpenBSD machine, what can you do? The first thing I ran into is that not all OpenBSD FTP sites are IPv6 enabled. The following is a list of IPv6-capable FTP sites for OpenBSD:

  • anga.funkfeuer.at
  • ftp5.usa.openbsd.org
  • ftp.arcane-networks.fr
  • ftp.belnet.be
  • ftp.esat.net
  • ftp.estpak.ee
  • ftp.eu.openbsd.org
  • ftp.freenet.de
  • ftp.fsn.hu
  • ftp.heanet.ie
  • ftp.irisa.fr
  • ftp.kddlabs.co.jp
  • ftp.nluug.nl
  • ftp.obsd.si
  • ftp.openbsd.dk
  • ftp.piotrkosoft.net
  • piotrkosoft.net
  • ftp.rediris.es
  • ftp.task.gda.pl
  • ftp.tcc.edu.tw
  • ftp.ulak.net.tr
  • mirror.aarnet.edu.au
  • mirror.bytemark.co.uk
  • mirror.corbina.net
  • mirror.planetunix.net
  • mirrors.nic.funet.fi
  • mirror.switch.ch
  • stacken.kth.se
  • http://www.obsd.si

What I find most interesting is how few North American sites are represented as compared to the European and Asian sites.
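If you want to check a mirror yourself it only takes a moment; a quick sketch using standard tools (any hostname from the list above will do in place of ftp.nluug.nl):

# Does the mirror publish an IPv6 (AAAA) address?
host -t aaaa ftp.nluug.nl
# Can we actually reach it over IPv6?
ping6 -c 3 ftp.nluug.nl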

Verisign certificate authority finally fixes (part of the) domain verification problem

April 20, 2010

So a few months ago I decided to see how easy it was to buy an SSL certificate for a domain I didn’t own. It turns out it was very easy, because at least one large certificate authority (CA), RapidSSL (owned by Verisign), allowed a large number of email addresses to be used for verification (such as [email protected], [email protected], etc.).

The original article is available here: http://www.linux-magazine.com/w3/issue/114/054-055_kurt.pdf. I also contacted the Mozilla/Firefox people through the mozilla.dev.security.policy mailing list to let them know about it and to show the proof (the emails from RapidSSL to me). Betanews ran a nice article (http://www.betanews.com/article/Security-researcher-Trivially-easy-to-buy-SSL-certificate-for-domain-you-dont-own/1270072287), and not much else happened for a while.

Things finally got rolling when a Bugzilla report was filed (Bug 556468, basically a copy of an older bug, Bug 477783, concerning Equifax doing much the same thing). A Verisign representative confirmed they had removed a bunch of the problematic email addresses, but [email protected] was still valid (this turned out to be a mistake, as you’ll soon see).

Fast forward about two weeks, and someone copied my article, threw in a few screen shots, and submitted it to Slashdot (http://news.slashdot.org/story/10/04/18/1218212/Become-an-SSLAdmin-In-a-Few-Easy-Steps). Personally I don’t mind if people copy my work and build off of it, but when they portray it as their own original work, with no credit or original source mentioned, that is a bit annoying.

Annoying as that was, it had the benefit of showing the world that the [email protected] email address (remember, the one Verisign didn’t remove) still worked, which resulted in it being disabled rather quickly:

VeriSign will be removing the following generic approver email options for GeoTrust and RapidSSL as of tonight or tomorrow night:

– ssladmin, sysadmin, and info

So I guess in the end it worked: certificates are hopefully a little more secure now. The sad thing is that I spent several dozen hours basically holding vendors’ feet to the fire over something they should have been doing all along.

Oh, and there is no way to find out whether a certificate authority has issued a certificate for your domain to someone else. Unlike DNS and the like, there is no way to query what certificates have been issued.

Mapping the Internet / scanning every web server

April 20, 2010

This is something I’ve always wanted to do: data sets such as every ping-able IP, every server with port 80 exposed, lookups on every domain name or IP, and so on are very useful. But the bandwidth and computation needed for this were often out of reach. Now, with services such as EC2, they are within reach: figure roughly 1 KB sent and 1 KB received per IP, and a class A scan (a /8, about 16.8 million addresses) would only take 32 gigabytes of traffic in total. Figure one month of Small Linux instance machine time at Amazon and you’re looking at $63.60 (cheaper if you use a reserved instance or a spot instance!). So for a few thousand dollars you can now easily scan the entire Internet, or create other similarly large data sets, for a reasonable price.
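The arithmetic as a sketch; the per-IP traffic figure is the assumption from above, and the hourly rate is an approximation of the 2010 Small instance price, so treat both as illustrative:

#!/bin/sh
# Traffic for scanning one class A (/8): ~1 KB sent + ~1 KB received per IP
IPS=16777216          # 2^24 addresses in a /8
BYTES_PER_IP=2048     # 1 KB each way
echo "Traffic: $(( IPS * BYTES_PER_IP / 1024 / 1024 / 1024 )) GiB"
# One month of a Small Linux instance at an assumed ~$0.085/hour
echo "Cost: \$$(echo "0.085 * 24 * 31" | bc) for 31 days"

That works out to 32 GiB of traffic and roughly the $63.60 of instance time quoted above.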