Creating a self-signed, wildcard SSL certificate for Chrome 58+

Chrome 58+ requires a Subject Alternative Name (SAN) entry to be present in the SSL certificate for the domain name you want to secure. SAN replaces the Common Name field, which has some security holes (like being able to define a certificate for *.co.uk, which is not possible with SAN).

I’ll be using macOS and OpenSSL 1.1.1d installed via Homebrew.

Recent OpenSSL versions add the basicConstraints=critical,CA:TRUE X509v3 extension by default, which prevents certificates generated this way from working in Chrome 58+. We need to disable that first.

Edit /usr/local/etc/openssl@1.1/openssl.cnf (your path may vary) and comment out the following line:

[ req ]
# x509_extensions = v3_ca # The extensions to add to the self signed cert

And then you are off to generate the certificate. I’ll be using the *.example.net domain name here.

/usr/local/Cellar/openssl@1.1/1.1.1d/bin/openssl req \
  -x509 \
  -newkey rsa:4096 \
  -sha256 \
  -days 7000 \
  -nodes \
  -out cert.pem \
  -keyout key.pem \
  -subj "/C=US/O=Org/CN=*.example.net" \
  -addext "basicConstraints=critical,CA:FALSE" \
  -addext "authorityKeyIdentifier=keyid,issuer" \
  -addext "keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment" \
  -addext "subjectAltName=DNS:example.net,DNS:*.example.net"

This will generate two files: key.pem, the private key (without a passphrase), and cert.pem, the actual certificate.

Verify that the certificate has the required X509v3 extensions, including the Subject Alternative Name:

$ openssl x509 -in cert.pem -noout -text

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            70:4c:28:...
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, O=Org, CN=*.example.net
        Validity
            Not Before: Oct  2 15:48:10 2019 GMT
            Not After : Dec  1 15:48:10 2038 GMT
        Subject: C=US, O=Org, CN=*.example.net
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (4096 bit)
                Modulus:
                    00:a7:b5:01...
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Authority Key Identifier:
                DirName:/C=US/O=Org/CN=*.example.net
                serial:70:4C:...

            X509v3 Key Usage:
                Digital Signature, Non Repudiation, Key Encipherment, Data Encipherment
            X509v3 Subject Alternative Name:
                DNS:example.net, DNS:*.example.net
    Signature Algorithm: sha256WithRSAEncryption
         59:1d:96:...

The last step is to import the certificate (cert.pem) into the keychain (I’m using the login keychain) and trust it.
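
If you prefer the command line over Keychain Access, something like this should do the trick (the login keychain path is an assumption and may differ between macOS versions):

security add-trusted-cert -r trustRoot -k ~/Library/Keychains/login.keychain-db cert.pem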

So easy. So hard.

Run guard-jasmine-headless-webkit without X server

You write specs for your JavaScript, right? If not, you really should.

jasmine-headless-webkit really helps with that. guard-jasmine-headless-webkit makes it all even more enjoyable, although there’s one caveat – it’s not so easy to set it all up.

There is a great guide for that, but it lacks some important details on running guard-jasmine-headless-webkit without graphical interface (X server).

Assuming you already have Xvfb installed, execute this command to run Xvfb in the background:

Xvfb :0 -screen 0 1024x768x24 > /dev/null 2>&1 &

Then you need to set the DISPLAY shell variable so that guard-jasmine-headless-webkit automatically connects to the virtual framebuffer. Here’s the excerpt from my .bash_profile (it first checks whether there is an Xvfb running on display :0 and only then sets the DISPLAY variable):

xdpyinfo -display :0 &>/dev/null && export DISPLAY=:0
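
To double-check that the headless setup actually works, you can also set the variable explicitly for a single run (assuming you start guard through Bundler):

DISPLAY=:0 bundle exec guard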

The downsides of Virtuozzo when used with mongrel

The scenario goes more or less like this. You have your Virtuozzo-powered VPS. You have your RAM limit. You have your mongrels behind nginx plus MySQL taking on average 60-70% of your available RAM (quite a sensible limit). Now imagine the server gets hammered (not necessarily your VPS). Load goes over 10 (or even over 50). Mongrels stop responding. The queue builds up (inside the mongrels) and the mongrels consume more and more memory. Load still over 10. Memory limit reached. One of the mongrels is killed by Virtuozzo (too bad if it was the only one). Load still high. At some point the other mongrels stop accepting new requests (queue limit reached?) and when the load goes down they are unable to process the built-up queue. Effect: your website is returning a 500 error code and you have to restart the mongrels manually (they are hung). I’ve been seeing similar behaviour too often lately…

Possible solutions: xen (does not kill your children), passenger (spawns new children whenever needed), haproxy (prolongs the life of your children), god (brings dead children back to life).

I’m going with passenger for now, thinking about moving to xen in the future.

Upgrading Ubuntu to 8.04 (Hardy Heron). Ugh.

Some minor problems:

  • /etc/default/locale has been deleted (wtf?). Needed to be recreated.
  • Both /etc/timezone and /etc/localtime have been deleted. Needed to recreate the links.
  • /etc/updatedb.conf has been deleted. Needed to be copied from another machine.

and one major one:

  • klogd now takes 5 minutes to start, which means I have to wait 5 minutes after each reboot to use the machine. Adding the -x switch in the init.d script solved the problem (see the snippet below). What was the root cause? No idea. There are only hints.
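
For reference, the change boils down to passing -x so that klogd skips resolving kernel symbol addresses at startup. Assuming your setup reads the daemon options from a KLOGD variable in /etc/default/klogd or in the init.d script itself (an assumption; the exact layout may differ), the edit looks like this:

nano /etc/default/klogd
  # -x stops klogd from looking up kernel symbol addresses on startup
  KLOGD="-x"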

Apparently there is some reasoning behind the “never upgrade your Linux” policy.

Mongrel_cluster not starting after hard reboot

Does the following error sound familiar?

** !!! PID file log/mongrel.pid already exists.  Mongrel could be running already.  Check your log/mongrel.log for errors.
** !!! Exiting with error.  You must stop mongrel and clear the .pid before I'll attempt a start.

It usually happens when the server crashes. After that you need to ssh into it, remove the mongrel pid files and start the cluster manually. No more.
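
For the record, the manual recovery boils down to something like this (paths are illustrative and depend on where your app and its pid files live):

rm /path/to/app/log/mongrel.*.pid
mongrel_cluster_ctl start -c /etc/mongrel_cluster

The change below makes this unnecessary.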

I assume you have mongrel_cluster set up properly, i.e. the project’s config file is in /etc/mongrel_cluster and the mongrel_cluster script has been copied from:
/usr/lib/ruby/gems/1.8/gems/mongrel_cluster-*/resources/
to the /etc/init.d directory. You need to edit the /etc/init.d/mongrel_cluster file:

Change these two bits:

start)
  # Create pid directory
  mkdir -p $PID_DIR
  chown $USER:$USER $PID_DIR

  mongrel_cluster_ctl start -c $CONF_DIR
  RETVAL=$?
;;

and

restart)
  mongrel_cluster_ctl restart -c $CONF_DIR
  RETVAL=$?
;;

to

start)
  # Create pid directory
  mkdir -p $PID_DIR
  chown $USER:$USER $PID_DIR

  mongrel_cluster_ctl start --clean -c $CONF_DIR
  RETVAL=$?
;;

and

restart)
  mongrel_cluster_ctl restart --clean -c $CONF_DIR
  RETVAL=$?
;;

respectively.

Adding the --clean option makes the mongrel_cluster_ctl script first check whether mongrel_rails processes are running and, if they are not, delete any leftover pid files before proceeding.

You must be using mongrel_cluster version 1.0.5+ for this to work as advertised (previous versions were buggy). To upgrade, do:

gem install mongrel_cluster
gem cleanup mongrel_cluster

Here’s the related mongrel_cluster changeset.

Ubuntu’s UUID schizophrenia

Actually it was more like I was losing my mind, not my Ubuntu…

But let’s start from the beginning… I have two identical 250GB hard disks, so I decided to create a RAID array out of them. Not a system (bootable) one, as I had too much trouble setting that up (I did set it up, but dist-upgrade broke it all too nicely; kernel panic, etc.). I set up a separate 5GB system partition on the first drive, leaving the rest for RAID. This left me with 5GB of free space to spare on the second drive. Smart as I was, I decided to clone the system partition from the first drive to the second one using dd, so I’d still be able to boot if either of the drives crashed. I called it semi-RAID-built-by-hand and, well, I was quite proud of it. All seemed fine as months passed (and remember that this was a server, so it almost never needed rebooting). But time passed and suddenly the new Ubuntu was out, the Feisty one, so I decided it was time to upgrade. As I had some minor troubles during the upgrade (obsolete packages, invalid config files I told it to keep, etc.), I was rebooting every few minutes. And this is where the fun comes in…

After a successful upgrade to 7.04 the screen greeted me with a 6.04 prompt. Hmm… strange. Let’s see what’s going on. Okay, so this upgrade actually did not go so well. No problem, let’s do it again. This time I did not reboot, but kept making other changes. At some point I had to reboot, though. Now I was scratching my head really hard. Some packages I knew I had uninstalled previously kept coming back. I was making changes to various config files only to see those changes not written to disk after rebooting. Like, WTF? Now I was rebooting like crazy… losing my mind more with every reboot. I was making directories like THIS-IS-FIRST-HARD-DISK-FOR-SURE only to see them disappear and reappear a couple of reboots later. I was almost crying with despair. I came up with the idea to compare /dev/sda1 with /dev/sdb1. Funny thing, they turned out to be the same. Who knew, maybe my RAID-by-hand had automatically turned into a real one?

I had dark thoughts. I was thinking about giving up on having two identical hard disks inside one PC and maybe about downgrading to Edgy, not even knowing whether that was possible. I was even thinking about giving up on those two 250GB disks. I was really desperate. I knew I needed a break.

Ten minutes and one glass of cold water later I was on a mission to find out what exactly was wrong with my Ubuntu. Or my PC. Or my hard disks. Or the world around me.

It wasn’t easy. The df command reported my system being on /dev/sda1. Mounting /dev/sdb1 did not help, as it showed me the same partition. But then came the bright idea to try and mount /dev/sda1, despite it already being mounted. To my surprise it turned out to be a completely different partition! The lost one! The one I missed so much. I was in heaven, so I started googling, because by that time I just knew it had something to do with those weird UUIDs. And I found out that I was not alone. I was so happy…

Now I know that my mistake was to make an exact clone of the system partition and have those two partitions (with the same UUIDs; yeah, unique ids my ass) available at the same time. No wonder my Ubuntu felt schizophrenic, but it still does not justify all of the weird behavior I was greeted with. Some error, some syslog entry, anything would have been helpful… is that too much to ask?

What I was left with after I’d figured it all out was this nice free disk space report (notice the double /dev/sda1 entry):

$ df
Filesystem  1K-blocks     Used  Available  Use% Mounted on
/dev/sda1     5162796  1650512    3250028   34% /
(..)
/dev/sda1     5162796  1558632    3341908   33% /mnt/disk-a

The root of the problem is that I base most of my core Linux knowledge on the RedHat of the 90s, when /dev/hda1 was sacred and meant exactly what it represented, namely the first partition of the first hard disk (presumably connected using the first cable and set as master). With UUIDs all this has changed. Apparently for the better, but it leaves some folks like me scratching their heads in disbelief.

Yes, Ubuntu is Linux for human beings. Apparently not for all…

PS: For future reference, remember to reset the UUID after duplicating a partition with dd. You do it like this:

tune2fs -U random /dev/sdb1
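
You can then verify that the two partitions report different UUIDs (run as root):

blkid /dev/sda1 /dev/sdb1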

Squid: WARNING! Your cache is running out of filedescriptors

So you have a LAN with 50+ users and you set up a nice Squid w3cache as a transparent proxy with 100GB of space reserved for the cache (HDDs are so cheap nowadays…). Weeks pass and suddenly you notice that something is messing up your web experience, as Firefox suddenly decides to run painfully slow. About 30 minutes are wasted on finding the culprit (changing your DNS servers, clearing the browser cache, etc.) until you decide to check the router and then Squid with its logs. And then you find something fishy:

2007/01/01 17:51:19| WARNING! Your cache is running out of filedescriptors
2007/01/01 17:51:35| WARNING! Your cache is running out of filedescriptors
2007/01/01 17:51:51| WARNING! Your cache is running out of filedescriptors
(...)

I won’t be explaining why this happens. Others have done it before. What I’m going to do is present you with a solution that does not require a complete Squid recompilation/reinstallation procedure.

RedHat/Fedora

/etc/init.d/squid stop

nano /etc/squid/squid.conf
  max_filedesc 4096

nano /etc/init.d/squid
  # add this just after the comments (before any script code)
  ulimit -HSn 4096

/etc/init.d/squid start

Debian

nano /etc/default/squid
  SQUID_MAXFD=4096

/etc/init.d/squid restart

Ubuntu

nano /etc/default/squid
  SQUID_MAXFD=4096

/etc/init.d/squid restart

And now watch the /var/log/squid/cache.log for a similar line:

2007/01/01 18:32:27 With 4096 file descriptors available

If it still says 1024 file descriptors available (or a similarly low value), you are out of luck (or you’ve just messed something up).
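
You can also check the limits the running Squid process actually got; this relies on /proc/<pid>/limits, which assumes a reasonably recent kernel:

# pidof -s returns a single pid even when squid has forked children
cat /proc/$(pidof -s squid)/limits | grep -i "open files"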

Installing Ruby on Rails on Ubuntu

This guide is valid for several Ubuntu releases (Edgy, Dapper, Breezy, and the like).

While there is a nice tutorial in the Ruby on Rails wiki, it’s by no means complete. According to it, you should only need to type apt-get install rails to have the newest Rails installed on Ubuntu. It installs both Ruby and Rails, but what about RubyGems? Sorry, not this time. There is also another caveat: although commands like rails test and ruby script/server work properly, ruby script/console does not. If you had the misfortune of experiencing the aforementioned behavior, then this tutorial is just for you.

Prerequisites:

nano /etc/apt/sources.list

Add the following at the end of the file (replace edgy with breezy if you are running Breezy, dapper for Dapper, etc.):

# All Ubuntu repositories
deb http://archive.ubuntu.com/ubuntu edgy main restricted universe multiverse

Update your apt sources:

apt-get update

Installation:

Install Ruby with the development libraries:

apt-get install ruby ri rdoc irb ri1.8 ruby1.8-dev libzlib-ruby zlib1g

Download and install Ruby Gems (no .deb package, unfortunately):

wget http://rubyforge.org/frs/download.php/17190/rubygems-0.9.2.tgz
tar xfvz rubygems-0.9.2.tgz
cd rubygems-0.9.2
ruby setup.rb
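
A quick sanity check that RubyGems landed on your PATH:

gem --version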

Update your RubyGems (also updates the gems cache):

gem update --system

If you get Could not find rubygems-update (> 0) in the repository or a similar error, you need to delete your RubyGems cache:

$ gem env gemdir
PATH_TO_DEFAULT_GEM_REPOSITORY
$ rm PATH_TO_DEFAULT_GEM_REPOSITORY/source_cache

and

rm $HOME/.gem/source_cache

In the next step, install the OpenSSL bindings for Ruby (needed to install signed gems). They are required if you get the error SSL is not installed on this system while installing signed gems like rake:

apt-get install libopenssl-ruby

And the last one:

gem install rails -y

And this is basically it. There are, however, depending on your needs, some…

Additional steps:

One of them is setting up Rails to connect to the MySQL database in a proper way. We will be using the MySQL C bindings, which, for one, support MySQL old-style passwords (the default on Ubuntu 5.04), but are also significantly faster (in the 2-3x range) than the native Ruby MySQL bindings. First, we need to install the gcc compiler (and libc6-dev if you don’t have it installed already). Strange as it may seem, it is not installed by default on a clean Ubuntu installation.

apt-get install gcc libc6-dev

MySQL development libraries are also required (mysql_config plus mysql/include):

apt-get install libmysqlclient14-dev

(for MySQL 5.0 you might be better off with libmysqlclient15-dev).

And now we can install C MySQL bindings:

gem install mysql

If you get "sh: make: not found" do:

apt-get install make

or if you have it already installed, add it to your path:

export PATH=/usr/bin:"${PATH}"
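
Once the mysql gem builds, pointing your Rails app at the database is just a matter of editing config/database.yml (all values below are illustrative):

nano config/database.yml
  development:
    adapter: mysql
    database: myapp_development
    username: root
    password:
    host: localhost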

And, of course, in the end install Mongrel:

gem install mongrel -y

And that’s it. Rails installation is complete. Complicated? Not really :) Happy coding!