Creating a self-signed, wildcard SSL certificate for Chrome 58+

Chrome 58+ requires the Subject Alternative Name (SAN) extension to be present in the SSL certificate for the domain name you want to secure. It is meant as a replacement for the Common Name field, which has some security holes (like being able to define a certificate for *.co.uk, which is not possible with SAN).

I’ll be using macOS and OpenSSL v1.1.1d installed via Homebrew.

Recent OpenSSL versions add the basicConstraints=critical,CA:TRUE X509v3 extension by default, which prevents such a certificate from working in Chrome 58+. We need to disable that first.

Edit /usr/local/etc/openssl@1.1/openssl.cnf (your path may vary) and comment out the following line:

[ req ]
# x509_extensions = v3_ca # The extensions to add to the self signed cert

And then you are off to generate the certificate. I’ll be using the *.example.net domain name here. Note that the -addext option requires OpenSSL 1.1.1 or newer.

/usr/local/Cellar/openssl@1.1/1.1.1d/bin/openssl req \
  -x509 \
  -newkey rsa:4096 \
  -sha256 \
  -days 7000 \
  -nodes \
  -out cert.pem \
  -keyout key.pem \
  -subj "/C=US/O=Org/CN=*.example.net" \
  -addext "basicConstraints=critical,CA:FALSE" \
  -addext "authorityKeyIdentifier=keyid,issuer" \
  -addext "keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment" \
  -addext "subjectAltName=DNS:example.net,DNS:*.example.net"

This will generate two files: key.pem, the private key (without a passphrase, thanks to -nodes), and cert.pem, the actual certificate.
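
A side note: if you ever need to make sure that a key and a certificate actually belong together, comparing their moduli is the usual quick check (the two hashes should match):

openssl rsa -noout -modulus -in key.pem | openssl md5
openssl x509 -noout -modulus -in cert.pem | openssl md5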

Verify that the certificate has the required X509v3 extensions (most importantly the SAN one):

$ openssl x509 -in cert.pem -noout -text

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            70:4c:28:...
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, O=Org, CN=*.example.net
        Validity
            Not Before: Oct  2 15:48:10 2019 GMT
            Not After : Dec  1 15:48:10 2038 GMT
        Subject: C=US, O=Org, CN=*.example.net
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (4096 bit)
                Modulus:
                    00:a7:b5:01...
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Authority Key Identifier:
                DirName:/C=US/O=Org/CN=*.example.net
                serial:70:4C:...

            X509v3 Key Usage:
                Digital Signature, Non Repudiation, Key Encipherment, Data Encipherment
            X509v3 Subject Alternative Name:
                DNS:example.net, DNS:*.example.net
    Signature Algorithm: sha256WithRSAEncryption
         59:1d:96:...

The last step is to import the certificate (cert.pem) into the keychain (I’m using the login keychain) and trust it.
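
If you prefer the terminal over Keychain Access, something like this should do it (a sketch; the keychain path differs between macOS versions):

security add-trusted-cert -r trustRoot \
  -k ~/Library/Keychains/login.keychain-db cert.pem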

So easy. So hard.

VI mode indicator in ZSH prompt

Here is my take on a VI mode indicator in ZSH’s prompt. This is useful only for people who use vi mode (bindkey -v) in ZSH.

vim_ins_mode="%{$fg[cyan]%}[INS]%{$reset_color%}"
vim_cmd_mode="%{$fg[green]%}[CMD]%{$reset_color%}"
vim_mode=$vim_ins_mode

function zle-keymap-select {
  vim_mode="${${KEYMAP/vicmd/${vim_cmd_mode}}/(main|viins)/${vim_ins_mode}}"
  zle reset-prompt
}
zle -N zle-keymap-select

function zle-line-finish {
  vim_mode=$vim_ins_mode
}
zle -N zle-line-finish

# Fix a bug: when you press C-c in CMD mode, the prompt would keep showing the
# CMD mode indicator, while in fact you would be back in INS mode.
# Fixed by catching SIGINT (C-c), setting vim_mode to INS and then
# re-propagating the SIGINT, so anything else that depends on it won't break
# Thanks Ron! (see comments)
function TRAPINT() {
  vim_mode=$vim_ins_mode
  return $(( 128 + $1 ))
} 

And then it’s just a matter of adding ${vim_mode} somewhere in your prompt, for example like this:

RPROMPT='${vim_mode}'
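
Note that for ${vim_mode} to be re-evaluated on every prompt redraw, prompt substitution has to be enabled (many setups have it on already):

setopt prompt_subst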

Other examples on the web use zle reset-prompt in zle-line-init, which has a very nasty side effect when using a multi-line prompt: on mode change (when going from ins to cmd mode) it deletes the last couple of lines. Using zle-line-finish works around that.

Also see my current ~/.zshrc, which includes those tweaks (and many others!).

ZSH vi mode with emacs keybindings

This is my attempt at bringing emacs-style keybindings to vi mode in ZSH:

# VI MODE KEYBINDINGS (ins mode)                                      
bindkey -M viins '^a'    beginning-of-line                            
bindkey -M viins '^e'    end-of-line                                  
bindkey -M viins '^k'    kill-line                                    
bindkey -M viins '^r'    history-incremental-pattern-search-backward  
bindkey -M viins '^s'    history-incremental-pattern-search-forward   
bindkey -M viins '^p'    up-line-or-history                           
bindkey -M viins '^n'    down-line-or-history                         
bindkey -M viins '^y'    yank                                         
bindkey -M viins '^w'    backward-kill-word                           
bindkey -M viins '^u'    backward-kill-line                           
bindkey -M viins '^h'    backward-delete-char                         
bindkey -M viins '^?'    backward-delete-char                         
bindkey -M viins '^_'    undo                                         
bindkey -M viins '^x^r'  redisplay                                    
bindkey -M viins '\eOH'  beginning-of-line # Home                     
bindkey -M viins '\eOF'  end-of-line       # End                      
bindkey -M viins '\e[2~' overwrite-mode    # Insert                   
bindkey -M viins '\ef'   forward-word      # Alt-f                    
bindkey -M viins '\eb'   backward-word     # Alt-b                    
bindkey -M viins '\ed'   kill-word         # Alt-d                    
                                                                      
                                                                      
# VI MODE KEYBINDINGS (cmd mode)                                      
bindkey -M vicmd '^a'    beginning-of-line                            
bindkey -M vicmd '^e'    end-of-line                                  
bindkey -M vicmd '^k'    kill-line                                    
bindkey -M vicmd '^r'    history-incremental-pattern-search-backward  
bindkey -M vicmd '^s'    history-incremental-pattern-search-forward   
bindkey -M vicmd '^p'    up-line-or-history                           
bindkey -M vicmd '^n'    down-line-or-history                         
bindkey -M vicmd '^y'    yank                                         
bindkey -M vicmd '^w'    backward-kill-word                           
bindkey -M vicmd '^u'    backward-kill-line                           
bindkey -M vicmd '/'     vi-history-search-forward                    
bindkey -M vicmd '?'     vi-history-search-backward                   
bindkey -M vicmd '^_'    undo                                         
bindkey -M vicmd '\ef'   forward-word                      # Alt-f    
bindkey -M vicmd '\eb'   backward-word                     # Alt-b    
bindkey -M vicmd '\ed'   kill-word                         # Alt-d    
bindkey -M vicmd '\e[5~' history-beginning-search-backward # PageUp   
bindkey -M vicmd '\e[6~' history-beginning-search-forward  # PageDown
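
To double-check that a binding took effect, you can always ask zsh to print it back:

$ bindkey -M viins '^a'
"^A" beginning-of-line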

You know, so that your muscle memory can rest in peace. Also see the commit adding the above emacs style keybindings to my dotfiles.

Run guard-jasmine-headless-webkit without X server

You write specs for your JavaScript, right? If not, you really should.

jasmine-headless-webkit really helps with that. guard-jasmine-headless-webkit makes it all even more enjoyable, although there’s one caveat – it’s not so easy to set it all up.

There is a great guide for that, but it lacks some important details on running guard-jasmine-headless-webkit without a graphical interface (X server).

Assuming you already have Xvfb installed, execute this command to run Xvfb in the background:

Xvfb :0 -screen 0 1024x768x24 > /dev/null 2>&1 &

And then you need to set the DISPLAY shell variable so that guard-jasmine-headless-webkit automatically connects to our virtual framebuffer. Here’s the excerpt from my .bash_profile (it first checks whether Xvfb is running on display :0 and only then sets the DISPLAY variable):

xdpyinfo -display :0 &>/dev/null && export DISPLAY=:0
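
If you want the whole thing to be hands-off, here’s a small sketch that starts Xvfb on demand (the display number and resolution are just the values I use above):

# start Xvfb on display :0 unless something is already listening there
if ! xdpyinfo -display :0 &>/dev/null; then
  Xvfb :0 -screen 0 1024x768x24 > /dev/null 2>&1 &
fi
export DISPLAY=:0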

The downsides of Virtuozzo when used with mongrel

The scenario goes more or less like this. You have your Virtuozzo powered VPS. You have your RAM limit. You have your mongrels behind nginx plus mysql taking on average 60-70% of your available RAM (quite a sensible limit). Now imagine the server gets hammered (not necessarily your VPS). Load goes over 10 (or even over 50). Mongrels stop responding. A queue builds up (inside the mongrels), mongrels consume more and more memory. Load still over 10. Memory limit reached. One of the mongrels is killed by Virtuozzo (too bad if it was the only one). Load still high. At some point the other mongrels stop accepting new requests (queue limit reached?) and when the load goes down they are unable to process the built-up queue. Effect: your website returns a 500 error and you have to restart the mongrels manually (they are hung). I’ve been seeing similar behaviour too often lately…

Possible solutions: xen (does not kill your children), passenger (spawns new children whenever needed), haproxy (prolongs the life of your children), god (brings dead children back to life).

I’m going with passenger for now, thinking about moving to xen in the future.

Upgrading Ubuntu to 8.04 (Hardy Heron). Ugh.

Some minor problems:

  • /etc/default/locale has been deleted (wtf?). Needed to be recreated (see the sketch after these lists).
  • Both /etc/timezone and /etc/localtime have been deleted. Needed to be recreated as well.
  • /etc/updatedb.conf has been deleted. Needed to be copied from another machine.

and one major one:

  • klogd now takes 5 minutes to start, which means I have to wait 5 minutes after each reboot to use the machine. Adding the -x switch in the init.d script solved the problem. What was the root cause? No idea. There are only hints.
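
For reference, a minimal sketch of recreating the deleted files (the locale and timezone values are just examples, substitute your own):

# /etc/default/locale
echo 'LANG="en_US.UTF-8"' | sudo tee /etc/default/locale

# /etc/timezone and the /etc/localtime link (Europe/Warsaw is an example)
echo "Europe/Warsaw" | sudo tee /etc/timezone
sudo ln -sf /usr/share/zoneinfo/Europe/Warsaw /etc/localtime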

Apparently there is some reasoning behind the “do not upgrade your Linux” policy.

Mongrel_cluster not starting after hard reboot

Does the following error sound familiar?

** !!! PID file log/mongrel.pid already exists.  Mongrel could be running already.  Check your log/mongrel.log for errors.
** !!! Exiting with error.  You must stop mongrel and clear the .pid before I'll attempt a start.

It usually happens when the server crashes. After that you need to ssh into it, remove the mongrel pid files and start the cluster manually. No more.
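
For reference, the manual dance goes more or less like this (the app path is just an example):

cd /var/www/myapp && rm log/mongrel.*.pid
mongrel_cluster_ctl start -c /etc/mongrel_cluster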

I assume you have mongrel_cluster set up properly, i.e. the project’s config file is in /etc/mongrel_cluster and the mongrel_cluster script has been copied from:
/usr/lib/ruby/gems/1.8/gems/mongrel_cluster-*/resources/
to the /etc/init.d directory. You need to edit the /etc/init.d/mongrel_cluster file.

Change these two bits:

start)
  # Create pid directory
  mkdir -p $PID_DIR
  chown $USER:$USER $PID_DIR

  mongrel_cluster_ctl start -c $CONF_DIR
  RETVAL=$?
;;

and

restart)
  mongrel_cluster_ctl restart -c $CONF_DIR
  RETVAL=$?
;;

to

start)
  # Create pid directory
  mkdir -p $PID_DIR
  chown $USER:$USER $PID_DIR

  mongrel_cluster_ctl start --clean -c $CONF_DIR
  RETVAL=$?
;;

and

restart)
  mongrel_cluster_ctl restart --clean -c $CONF_DIR
  RETVAL=$?
;;

respectively.

Adding the --clean option makes the mongrel_cluster_ctl script first check whether the mongrel_rails processes are actually running and, if not, check for leftover pid files and delete them before proceeding.

You must be using mongrel_cluster version 1.0.5+ for this to work as advertised (previous versions were buggy). To upgrade, do:

gem install mongrel_cluster
gem cleanup mongrel_cluster

Here’s the related mongrel_cluster changeset.

Intel Core 2 Duo power consumption

During the recent move of my development machine to the basement I conducted a test to find out the actual power consumption of my new Core 2 Duo powered server. Basically, it is a normal PC: Core 2 Duo E6300 1.86GHz, 2 x 512MB DDR2, 2 x 250GB SATA 7200rpm (RAID 1), an old PCI graphics card and a 350W power supply. All running the latest Ubuntu (currently 7.04 Feisty Fawn, server edition). Since it is a development machine, it’s idle most of the time (98% or even more), and this is the state I made my measurements in. So what are the results? Well, I was quite surprised how low the power consumption actually is. I took three measurements, which all indicated basically the same: about 77 watts. That works out to roughly 55 kWh per month (0.077 kW x 24 h x 30 days), so even taking into account temporary power usage spikes (when I’m actually using the machine…) it shouldn’t cost me more than $4 per month to keep it running 24/7. Isn’t that sweet? ;)

Ubuntu’s UUID schizophrenia

Actually it was more like I was losing my mind, not my Ubuntu…

But let’s start from the beginning… I have two identical 250GB hard disks, so I decided to create a RAID array out of them. Not a system (bootable) one, as I had too much trouble setting that up (I did set it up, but dist-upgrade broke it all too nicely; kernel panic, etc.). I set up a separate 5GB system partition on the first drive, leaving the rest for RAID. This left me with 5GB of free space to spare on the second drive. Smart as I was, I decided to clone the system partition from the first drive to the second one, using dd, so I’d still be able to boot if either of the drives crashed. I called it semi-RAID built-by-hand and, well, I was quite proud of it. All seemed fine as months passed (and remember, this was a server, and as such it almost never required a reboot). But time passed and suddenly there was the new Ubuntu out, the Feisty one, so I decided it was time to upgrade. As I had some minor troubles during the upgrade (obsolete packages, invalid config files that I ordered to keep, etc.), I was rebooting every few minutes. And this is where the fun comes in…

After a successful upgrade to 7.04 the screen greeted me with a 6.10 prompt. Hmm… strange. Let’s see what’s going on. Okay, so the upgrade actually did not go so well. No problem, let’s do it again. This time I did not reboot, but kept making other changes. At some point I had to reboot, though. Now I was scratching my head really hard. Some packages I knew I had uninstalled kept coming back. I was making changes to various config files only to see those changes not written to disk after rebooting. Like, WTF? Now I was rebooting like crazy… losing my mind more with every reboot. I was making directories like THIS-IS-FIRST-HARD-DISK-FOR-SURE only to see them disappear and reappear a couple of reboots later. I was almost crying with despair. Then I came up with the idea to compare /dev/sda1 with /dev/sdb1. Funny thing, they turned out to be the same. Who knew, maybe my RAID-by-hand had automatically turned into a real one?

I had dark thoughts. I was thinking about giving up on having two identical hard disks inside one PC, and maybe about downgrading to Edgy, not even knowing whether that was possible. I was even thinking about giving up on those two 250GB disks. I was really desperate. I knew I needed a break.

10 minutes and one glass of cold water later I was on a mission to find out what exactly was wrong with my Ubuntu. Or my PC. Or my hard disks. Or the world around me.

It wasn’t easy. The df command reported my system as being on /dev/sda1. Mounting /dev/sdb1 did not help, as it showed me the very same partition. But then came the bright idea to try and mount /dev/sda1, despite it being already mounted. To my surprise it turned out to be a completely different partition! The lost one! The one I missed so much. I was in heaven, so I started googling, because by that time I just knew it had something to do with those weird UUIDs. And I found out that I was not alone. I was so happy…

Now I know that my mistake was making an exact clone of the system partition and having both partitions (with the same UUIDs; yeah, unique ids my ass) available at the same time. No wonder my Ubuntu felt schizophrenic, but that still does not justify all of the weird behavior I was greeted with. Some error, some syslog entry, anything would have been helpful… is that too much to ask?

What I was left with after I figured it all out was this nice free disk space report (notice the double /dev/sda1 entry):

$ df
Filesystem  1K-blocks     Used  Available  Use% Mounted on
/dev/sda1     5162796  1650512    3250028   34% /
(..)
/dev/sda1     5162796  1558632    3341908   33% /mnt/disk-a

The root of the problem is that I base most of my core Linux knowledge on the RedHat of the 90s, when /dev/hda1 was sacred and meant exactly what it represented, namely the first partition of the first hard disk (presumably connected using the first cable and set as master). With UUIDs all this has changed. Apparently for the better, but it leaves some folks like me scratching their heads in disbelief.

Yes, Ubuntu is Linux for human beings. Apparently not for all…

PS: For future reference, remember to change the UUID after duplicating a partition using dd. You do it like this:

tune2fs -U random /dev/sdb1
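
To verify that the two partitions now report different UUIDs (blkid ships with any stock Ubuntu):

blkid /dev/sda1 /dev/sdb1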

Squid: WARNING! Your cache is running out of filedescriptors

So you have a LAN with 50+ users and you set up a nice Squid w3cache as a transparent proxy with 100GB of space reserved for the cache (hdds are so cheap nowadays…). Weeks pass and suddenly you notice that something is messing up your web experience, as Firefox suddenly decides to run painfully slow. About 30 minutes are wasted on finding the culprit (changing your DNS servers, clearing the browser cache, etc.) until you decide to check the router and then Squid with its logs. And there you find something fishy:

2007/01/01 17:51:19| WARNING! Your cache is running out of filedescriptors
2007/01/01 17:51:35| WARNING! Your cache is running out of filedescriptors
2007/01/01 17:51:51| WARNING! Your cache is running out of filedescriptors
(...)

I won’t be explaining why this happens. Others have done it before. What I’m going to do is present you with a solution that does not require a complete Squid recompilation/reinstallation.

RedHat/Fedora

/etc/init.d/squid stop

nano /etc/squid/squid.conf
  max_filedesc 4096

nano /etc/init.d/squid
  # add this just after the comments (before any script code)
  ulimit -HSn 4096

/etc/init.d/squid start

Debian

nano /etc/default/squid
  SQUID_MAXFD=4096

/etc/init.d/squid restart

Ubuntu

nano /etc/default/squid
  SQUID_MAXFD=4096

/etc/init.d/squid restart

And now watch the /var/log/squid/cache.log for a similar line:

2007/01/01 18:32:27 With 4096 file descriptors available

If it still says 1024 file descriptors available (or a similarly low value), you are out of luck (or you’ve just messed something up).
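
If you don’t want to wait for the log line, you can also ask the running Squid directly; this assumes squidclient is installed and the cache manager interface is reachable from localhost:

squidclient mgr:info | grep -i 'file descriptors'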