If you're seeing the following on login (ssh) to an Ubuntu machine:
Linux blofeld 2.6.35-24-generic #42-Ubuntu SMP Thu Dec 2 01:41:57 UTC 2010 i686 GNU/Linux
Welcome to Ubuntu!
* Documentation: https://help.ubuntu.com/
Instead of the more informative:
Linux blofeld 2.6.35-24-generic #42-Ubuntu SMP Thu Dec 2 01:41:57 UTC 2010 i686 GNU/Linux
Welcome to Ubuntu!
* Documentation: https://help.ubuntu.com/
System information as of Fri Jan 21 11:19:55 GMT 2011
System load: 0.61 Processes: 123
Usage of /: 40.8% of 72.65GB Users logged in: 1
Memory usage: 58% IP address for eth0: 10.0.0.29
Swap usage: 0%
Graph this data and manage this system at https://landscape.canonical.com/
Last login: Fri Jan 21 11:15:57 2011 from 10.0.0.84
And you happen to know your CPU load is indeed less than one (why one..?) then I have a fix:
sudo apt-get install update-notifier-common
Behold, it worked for me! (And now there's a bug for it to be fixed, too.)
If you have a DHCP server that hands out IP addresses in the subnet 169.254/16 then you could find yourself becoming disconnected.
From Wikipedia on the subject, that subnet is reserved for devices that need to auto-configure themselves with an IP address when no DHCP server responds. More specifically however, it is called a "Link-Local address" and routers are specifically not to forward packets from such interfaces.
Now, for whatever reason (well before my time with the company), our Windows 2003 server was configured to serve this subnet to VPN clients. Combine that with Apple's decision, as of Mac OS X 10.6.5, to honour the non-routing of packets from such addressed interfaces, and you have my MacBook Pro able to connect to our office VPN but getting absolutely nothing on that network to respond.
If (on a Mac) you check the console's system.log (or tail /var/log/ppp.log) you might see:
Nov 24 20:12:30 COMPUTER_NAME pppd: local IP address 169.254.190.134
Nov 24 20:12:30 COMPUTER_NAME pppd: remote IP address 169.254.86.242
That means there is already a route for 169.254.0.0/16 - Apple, I think, added one to prevent further such routes being added (which would be invalid, of course).
That's a pretty cast-iron sign that your DHCP server is issuing you an improper address. To fix it you should instead choose from the private address space, for example 192.168.0.0/16 or 10.0.0.0/8.
Anyway, we reconfigured the Windows 2003 server to allocate IPs from its normal LAN address space of 10.0.0.0/24, and my Mac now connects and routes traffic just fine.
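As a footnote, checking whether an address falls in that link-local range is trivial to script; a minimal sketch (the function name is my own):

```shell
# Minimal check for RFC 3927 link-local addresses (169.254.0.0/16),
# which routers must not forward.
is_link_local() {
  case "$1" in
    169.254.*) return 0 ;;
    *) return 1 ;;
  esac
}

# The address pppd logged in the example above qualifies:
is_link_local 169.254.190.134 && echo "link-local - expect no routing"
```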
It was a slightly disappointing day yesterday. I had managed to arrange for one of Amazon UK's sales representatives to call my employer for a chat about our potential use of their cloud technologies.
Background: we have an app perfect for expansion into the cloud. However, many of our customers require their data be held within England due to the Data Protection Act and related laws and regulations. The closest Amazon gets to meeting our needs is their Ireland datacenter.
So we gave our Amazon contact a brief background. His reply was threefold:
- No UK datacenter planned (disappointing bit)
- Amazon are represented in Government plans for IT advice surrounding the future use of cloud technologies and hope to see some documentation emerging next year
- Humans (our customers, our customers' customers) may not want their personal details outside the UK irrespective of legal protections offered
The second point is being partly driven by the CESG - a division of GCHQ. For us, our customers' data would need ranking on the Government's Business Impact Levels. Most data never reaches higher than level three, apparently. However, Amazon UK are working to ensure their Irish datacenter provides facilities capable of hosting data up to level three (from my memory of the conversation, you'll understand).
The third point is based on what we believe to be true: that Irish data protection laws are based on EU frameworks, as the UK ones apparently are. They should therefore be broadly similar and compatible with each other. So our customers should (given legal assurances) be happy to see their customers' data held in Ireland. Whether their customers will be happy with that, though, no one really knows.
This past August I gave my fiancée an ASUS laptop for her birthday. Only this past weekend, however, did we grab some blank DVDs to use as backups for the AI Recovery tool they ship. You know the one - it pops up each time you boot, asking if you want to burn the backups to disc yet.
So we fired it up, and it told us five discs would be needed. It burned the first two fine but the third failed. Unfortunately the message said only that an "exception" had occurred, without detailing possible causes.
At which point she handed the machine over to me. Oh good..!
Nothing wrong with the machine, I thought. Mind you, looking at the C: drive, she had only 800MB of 60GB left. How on earth had she filled it? She normally doesn't go beyond Hotmail and Facebook, and she has AVG installed, so hopefully it wasn't a virus.
I fired up the Add/Remove Programs utility and found a Microsoft Office trial, which I duly deleted, freeing up an insanely large amount of capacity (gigabytes). Could the backup tool have been running out of disk space?
Google time: lots of people reporting much the same problem, and suggesting that you merely capture the .iso files the utility produces and write them to DVD using a separate utility. Surely not..? Anyway, I found the location and discovered Disc 3 was in fact zero bytes, yet Disc 4 had around 1GB. There was no Disc 5. There were also folders, presumably for building the .iso images from. That's a lot of disk space to be used for building - 5 × 4.6GB plus temporary storage!
There being no other option, I quit the backup tool and was about to re-run it when I saw the free space on the C: drive leap up to 36GB - the backup tool had erased the images and temporary data. I re-ran the tool and watched the C: drive fill slowly, occasionally being asked for a new DVD. This time, Disc 3 had 4.6GB of data, as did Disc 4, and Disc 5 was now present with 1GB of data.
The DVDs finished. The moral of this story is that ASUS partitions your C: drive with too little spare room for the backup tool to work once the added "essentials" like the Office trial, plus Windows Updates (love those gigs...), accumulate over time. Unfortunately it's not always easy to shift data from C: to D:, where over 100GB of free space was to be found in our case. She didn't have much in My Documents except the backup tool's own temporary data.
Hopefully this will educate and prevent further trouble for others with ASUS notebooks. And ASUS, how about providing better error information?
Earlier this month I rolled out changes across our small fleet of servers that made ActiveMQ an essential element backing much of our software. Hopefully I can note the things that went well and the things that didn't go according to plan, at least in respect of the message queue software and our use of it.
Firstly, actually installing ActiveMQ is really not very difficult (we run Ubuntu GNU/Linux). However, some configuration tweaks are needed to ensure a restart occurs in a timely manner. Also, since the security of the data within is essential, all traffic between AMQ instances goes via SSL. This latter aspect proves a bit of a pain, given you need to set up mutual keystore trusts for each link. However, it does work.
An area we could not easily predict was memory use, and it's one that could really use some clarification in the AMQ documentation. There are three limits to be set: an in-RAM limit, a temporary-storage limit (overflow for in-RAM), and a persistence-storage limit. These seem to apply on top of any Producer Flow Control, which occurs per thread. It may look pretty easy to configure, but there are enough questions about behaviour to make me wonder what I've missed. One of our servers actually began issuing Java out-of-memory exceptions - something that is documented on AMQ's site and which we managed to adjust for.
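For reference, those three limits live in ActiveMQ's systemUsage section of activemq.xml; a sketch with purely illustrative values (note the alphabetical element ordering that 5.4 insists on):

```
<systemUsage>
  <systemUsage>
    <memoryUsage>
      <memoryUsage limit="64 mb"/>   <!-- in-RAM limit -->
    </memoryUsage>
    <storeUsage>
      <storeUsage limit="10 gb"/>    <!-- persistence storage limit -->
    </storeUsage>
    <tempUsage>
      <tempUsage limit="1 gb"/>      <!-- temporary (overflow) storage -->
    </tempUsage>
  </systemUsage>
</systemUsage>
```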
Another thing: when using the PHP PECL Stomp library to connect to localhost, it can for some reason fail to connect. The solution appears to be to specify 127.0.0.1 instead. The library's author noted the same thing as something to be investigated.
So launch night came along and, having made the necessary adjustments, I published our internal set of customer accounts to one of our servers. Publication itself took only a few seconds; consumption speed, however, was dire. Given I'd tested on a small subset beforehand without even noticing a delay, I was rather surprised. Either way, I was not about to subject our staff or customers to potential performance problems elsewhere, so I backed out. Subsequent testing proved the problem had nothing to do with AMQ at all, but with MySQL and ext4 performing far too many fsyncs. We adjusted MySQL and the consumption of accounts became orders of magnitude faster.
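I didn't record the exact MySQL change here, but the classic remedy for fsync-heavy InnoDB on ext4 is to relax flushing on every commit - a hypothetical my.cnf fragment:

```
[mysqld]
# Flush the InnoDB log roughly once per second instead of at every commit;
# trades up to a second of durability for far fewer fsyncs.
innodb_flush_log_at_trx_commit = 2
```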
A second roll-forward was scheduled; this one went more smoothly, although verifying that all systems were operating normally took far too long - again absolutely nothing to do with AMQ, more a matter of ensuring staff were aware of the new ways of operating our systems and the repercussions for our customers.
So now that the dust has settled how have things been? AMQ has - touch wood - not been a source of trouble. We're not exactly pushing tens of thousands of messages per second through this stuff but we're seeing the enqueue/dequeue counters flying fast enough to be coping with things nicely.
If I were to do things differently: well, we got hit with at least two major architectural changes that needed to be pushed out simultaneously. I can't tell you how painful that is to deal with.
What about AMQ use? Give the broker plenty of CPU and RAM. It will fly. And don't write a line of code before you read Enterprise Integration Patterns.
I spent the majority of yesterday in Norwich city centre at three appointments looking for outfits to hire for our wedding.
Essentially I need the groom (me), the best man, my father, the father of the bride, and two ushers to be kitted out. Without preconceptions about what I wanted, and without my bride-to-be (deliberately), I was joined by my parents and sister, and her fiancé - who happens to double as my best man.
The three appointments were at John Field Formal Hire, Moss Bros, and Greenwoods.
John Field Formal Hire
We arrived almost thirty minutes early and waited outside, as it was clear they were busy attending to another customer. The shop is situated in St. Benedicts, which isn't the most pleasant area, with a combination of music shops and pawnbrokers dominating the businesses.
Once inside we were met by the only staff member on duty - Mrs Liz Field. She was very friendly and extremely professional, getting to know the family and our circumstances firstly, then suggesting that we start with a basic traditional suit before experimenting.
While not everything was in stock for trying out, the majority of what we were thinking about certainly was, and Mrs Field went out of her way to ensure we tried every combination, given that a simple change in tie colour was having such a dramatic effect. She was most accommodating and allowed plenty of pictures to be taken. We ended our visit after roughly an hour and a half, with a quote of £495 for the six of us (groom goes free).
Moss Bros
There was no appointment necessary here, and upstairs we went to the hire shop, where we were met by a courteous gentleman most willing to offer his help. We explained we wanted to try out a traditional suit. Unlike our visit to John Field, I was initially handed only a jacket, then after asking was given a front-only sample waistcoat, then trousers. Finally a tie was supplied.
Moss Bros were unable to supply very much for trying on during the day itself due to stock constraints. A brochure was given to us explaining some of the different styles and colours, although the package costs themselves did not vary hugely (at least not on the ones we were interested in). The quote supplied was within a handful of pounds of John Field's.
We left within thirty minutes which was reasonable given we knew what we wanted from the assistance received at John Field.
Greenwoods
Opposite the rear entrance of Debenhams sits Greenwoods, and downstairs in the basement, rather pushed in, is where their suits for hire can be found. The manager presented herself and immediately took a few measurements. Shortly after we arrived another group turned up and was told that we would be attended to first, and that they should come back in fifteen minutes' time.
Once again, I was given trousers, shirt, tie, waistcoat and jacket to try on. More selection to try on here than at Moss Bros, although not more than at John Field. Still, the quote given here was considerably cheaper, at under £400 with cufflinks supplied free.
We have some additional questions for John Field that we remembered to ask of Moss Bros and Greenwoods, such as insurance, how far in advance we would receive the clothes, procedures should we be unsatisfied, and how soon the garments must be returned. However, the professionalism and time we received was extremely impressive - John Field is preferred at this stage.
Moss Bros was very quick and almost manufactured by comparison. If Tesco's did formal suits it might look something like this. Don't get me wrong, the sales assistant was extremely helpful and it was clear that he expected nothing to go wrong. It was just a little too production-line like.
Greenwoods - a little on the small side and the quotation just felt too cheap. However, they aren't ruled out yet.
And seriously, it was the least exciting upgrade procedure ever.
Backed up the database using phpMyAdmin, downloaded a copy of host's home directory using cPanel, and clicked the upgrade link inside of WordPress itself.
And it works fine.
If only all software upgrades were like that.
Or, "no, there's no .deb for it yet".
ActiveMQ is a software message queue from the Apache Software Foundation. If you're reading this you most likely know what it is, and probably have reasons to use it on an Ubuntu machine, but have found out that it does not exist in the perfect world that is the Ubuntu package archive.
And I did promise the solution, so here it is.
So, hop over to the download page and select the latest available release tarball and extract it on your server. This concludes the super-simple bit.
Ensure you have Java installed; I use the sun-java6-jre packages for this purpose on Ubuntu 9.10 and 10.04.
Next, you might want to add a dedicated user account which doesn't have much at all. Examine the adduser command options to disable login, disable the password, and skip creating a home directory - together they add up to a locked-down user account. Recursively change the ownership of the entire activemq directory firstly to root, then change the data directory to the activemq user you just added.
With sudo, move the entire activemq directory into /opt. This is not particularly standard for Ubuntu, but given the ActiveMQ scripts default to /opt/activemq and this package does not originate from a Debian package, it seems appropriate enough.
Next, symlink the init script as provided by ActiveMQ 5.4.0+:
$ sudo ln -sf /opt/activemq/bin/activemq /etc/init.d/
While here you might as well tell Ubuntu to start ActiveMQ on boot:
$ sudo update-rc.d activemq defaults
Defaults and Local Configuration
Now, let's build a default configuration file:
$ sudo /etc/init.d/activemq setup /etc/default/activemq
Now edit the newly generated /etc/default/activemq file. You're looking initially for ACTIVEMQ_USER="" - enter the name of your activemq user between the quotes. Further down, uncomment the lines:
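The snippet itself hasn't survived here; the lines in question are the ACTIVEMQ_SUNJMX_START settings, which look roughly like this (the port and file paths should match what the file's own comments suggest - the paths below assume this post's data/ location):

```
ACTIVEMQ_SUNJMX_START="-Dcom.sun.management.jmxremote.port=11099 -Dcom.sun.management.jmxremote.password.file=/opt/activemq/data/jmx.password -Dcom.sun.management.jmxremote.access.file=/opt/activemq/data/jmx.access -Dcom.sun.management.jmxremote.ssl=false"
```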
And right underneath that lot:
ACTIVEMQ_SUNJMX_CONTROL="--jmxurl service:jmx:rmi:///jndi/rmi://127.0.0.1:11099/jmxrmi --jmxuser controlRole --jmxpassword abcd1234"
# Specify the queue manager URL for using "browse" option of sysv initscript
Now, you must create the data/jmx.password and data/jmx.access files (use the sample data they provide in the comments immediately above the lines). Ensure that these jmx files are only readable by the activemq user account!
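As a sketch, the two files take the standard JMX role-file shape (the role and password here are just the samples, matching the ACTIVEMQ_SUNJMX_CONTROL line above):

```
# data/jmx.password: <role> <password>
controlRole abcd1234

# data/jmx.access: <role> <readonly|readwrite>
controlRole readwrite
```

A chmod 600 plus a chown to the activemq user satisfies the readability requirement; Java will refuse to start JMX if the password file is readable by others.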
Doing that lot enables the init script to connect to the locally running broker via JMX, a management interface. Without this configured correctly, you're looking at issuing a shutdown command and seeing a ton of Java errors, followed by a thirty-second timeout before the script finally issues a KILL on the pid. Not exactly elegant, nor convenient.
That's it. There really isn't anything more to it but I thought I'd better note it as I'd been struggling to understand why the out-of-the-box configuration led to such poor shutdowns on each of my boxes.
Message queues are simple yet fundamentally critical to distributing an application across multiple computers. If you're like us you may need to distribute some of these messages between your servers over a public Internet connection.
To prevent eavesdropping we can encrypt the traffic between the servers using industry-standard SSL. Specifically we use RSA.
Getting SSL connectivity up and running could be considered a bit of a dark art, however, and having spent a few hours on the topic I thought I'd document what I learned.
Firstly let me explain the topology I'm using - we have an office machine that we call the "hub" containing various administrative details. Next, we have the servers each of which does various jobs. We connect each server to the hub. This particular topology is the hub-spoke architecture.
Each machine (hub and servers) needs two files when running:
- A private keystore
- A shared keystore
A "keystore" is a database containing keys. A key is a digital fingerprint - an identity if you like. Each entry within the keystore can be referenced by an alias. The aliases you choose should for consistency's sake match the hostname of your machines which might make them "hub", "server-a", "server-b", etc.
Let's Get Started
Within your ActiveMQ installation you should have a conf directory. Within here you will be creating both keystores and a third file.
Firstly, on each machine, create a private keystore with a machine fingerprint. Let's take server-a as an example:
jamesg@server-a:/opt/activemq/conf$ sudo keytool -genkey -alias server-a -keyalg RSA -keystore server-a.ks
jamesg@server-b:/opt/activemq/conf$ sudo keytool -genkey -alias server-b -keyalg RSA -keystore server-b.ks
[ etc ]
You'll need to set a password so open up your favorite password management utility and create an entry for each machine's private keystore file. Enter the details needed on each machine one-by-one.
Once you have each machine with its own private keystore, you should export the public part of the key you just built into a certificate file:
jamesg@server-a:/opt/activemq/conf$ sudo keytool -export -alias server-a -keystore server-a.ks -file server-a_cert
jamesg@server-b:/opt/activemq/conf$ sudo keytool -export -alias server-b -keystore server-b.ks -file server-b_cert
[ etc ]
So now each machine has a keystore with a private signature, and a file with a public signature (the cert file).
Now, securely copy each server's public certificate file to the hub machine. On the hub you should end up with server-a_cert, server-b_cert, server-c_cert, etc. You need to import each file into a shared keystore file. You'll be prompted for a new password for this new keystore file on the hub:
jamesg@hub:/opt/activemq/conf$ sudo keytool -import -alias server-a -keystore shared.ks -file /home/jamesg/server-a_cert
jamesg@hub:/opt/activemq/conf$ sudo keytool -import -alias server-b -keystore shared.ks -file /home/jamesg/server-b_cert
jamesg@hub:/opt/activemq/conf$ sudo keytool -import -alias server-c -keystore shared.ks -file /home/jamesg/server-c_cert
[ etc ]
Each server (a, b, c, etc.) now needs its own shared keystore for inbound connections to be authenticated against. On each server, import its own public signature:
jamesg@server-a:/opt/activemq/conf$ sudo keytool -import -alias server-a -keystore shared.ks -file server-a_cert
jamesg@server-b:/opt/activemq/conf$ sudo keytool -import -alias server-b -keystore shared.ks -file server-b_cert
Remember, this is on each server - not the hub, whose shared.ks keystore already holds every server's signature.
At this point you should have two keystores on each server and on the hub. The servers only know about themselves so far; the hub knows about itself and the servers. To allow the hub to talk to the servers, we now need to add the hub's own public key to the shared keystore on each server.
To do this, on the hub machine, export your public key just as you did on the servers and add it to a shared key file on the hub. Next, securely copy your newly minted hub_cert file to each server and import it, just as you did with the server certificates, into each server's shared.ks keystore file. When you add each one, remember to alias it according to the hostname of the machine it comes from: when adding the hub's hub_cert to server-a's shared.ks, the alias should be "hub".
Back on to each server you need to edit the conf/activemq.xml file. Remember, as of ActiveMQ 5.4.0 the elements need to be listed in alphabetical order!
A child of <broker> needs to be added: sslContext. There are no attributes but there is a single child: sslContext (yes, again):
<sslContext>
    <sslContext keyStore="server-a.ks" keyStorePassword="server-a-password" trustStore="shared.ks" trustStorePassword="shared-password"/>
</sslContext>
Yes, there really is an sslContext child of an sslContext. Repeat on each server. Then on the hub:
<sslContext>
    <sslContext keyStore="hub.ks" keyStorePassword="hub-private-password" trustStore="shared.ks" trustStorePassword="shared-password"/>
</sslContext>
You now have the SSL keys shared and configured. Now we need to configure our transports so the servers actually connect to the hub. On each server add the following to the activemq.xml configuration (remember, alphabetical order!):
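The XML itself appears to have been lost from this post; what it describes is a networkConnector pointing at the hub, roughly like this (the name, hostname and port are placeholders):

```
<networkConnectors>
  <networkConnector name="to-hub" uri="static:(ssl://hub.example.com:61616)" duplex="true"/>
</networkConnectors>
```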
Notice above we have duplex="true". This enables two-way communication over the same connection. Obvious!
Finally, on the hub itself modify your activemq.xml:
<transportConnector name="openwire" uri="ssl://0.0.0.0:61616?needClientAuth=true"/>
If you wish, you can add the above on port 61617 or similar provided that chosen port is not used and you reflect the change of port on each connecting server.
Note that you should not adjust the transportConnectors on your servers unless clients that connect to them require something non-standard (SSL, Stomp, etc.) or you need to adjust the settings for security reasons. Remember, the IP 0.0.0.0 means "listen on every interface, public and private".
Naturally you must ensure you have opened your hub's firewall to allow inbound connections to this port. Restart the hub, then your servers. Fingers crossed!
If you're wondering what is happening, take a look within data/activemq.log. Sadly, the errors themselves don't prompt humans where to look to correct them, so a little Googling may be needed. Generally, check that the right signatures are listed in each server's and the hub's shared.ks, and that you have configured the right passwords. If you really don't see anything, check that ActiveMQ is actually still running - if there's a problem with the XML file it will quit without logging the fault! You can invoke it using java manually if you need to.
As to the keystores themselves, the good news is that keytool -list -keystore thefile.ks will show the contents when given the password. Compare the MD5 fingerprints of the aliases found within. If you've misnamed one, don't worry - just keytool -delete -alias badalias -keystore thefile.ks. It really is pretty simple once you get used to it.
There is no reason why server-a could not talk SSL to server-b. Ensure each one's public certificate is added to the other's shared keystore, and add the relevant networkConnector node to the XML file. Remember, the server with the networkConnector is considered by the connected-to server to be a standard client.
I previously blogged about getting ActiveMQ working from an init.d script on Ubuntu Linux. I supplied such a script after finding none in the distribution and those I had found on the Internet did not work for me.
As of version 5.4.0 the ActiveMQ project ships with not only a proper init script but also a way of building a default configuration file (a file to bootstrap the java loader and its configuration files).
To use it, simply copy or symlink the bin/activemq script into /etc/init.d/ and ensure it is executable.
Next, call it as /etc/init.d/activemq. By default it lists how it should be used. Copy the /etc/default/activemq file suggested and pass it in with the setup argument. Then edit the file and place your own activemq account username in at the provided blank space - as soon as it can, the init script will drop from root down to this non-privileged account.
Remember to have created the activemq account with --disabled-password --disabled-login --no-create-home for extra security. Also, wherever you place the activemq distribution (/opt/activemq is suggested), ensure the data subdirectory is owned by the activemq user.