Atlantic Linux Blog – Thoughts on running an Irish Linux business
http://atlanticlinux.ie/blog

Monitoring your infrastructure – Zabbix
http://atlanticlinux.ie/blog/monitoring-your-infrastructure-zabbix/
Tue, 14 Sep 2010 19:00:43 +0000

Hi there – I’m afraid I’ve neglected the blog for the last few months – it’s been a busy spring and summer. I’ll try to post more regular articles now that the evenings are closing in again!

As you grow your infrastructure, one of the growing pains you’ll encounter is how to keep an eye on how your systems are running. Sure, you can come in every morning and log in to each of your servers, maybe scan the logs and run a few commands like htop and dstat to verify that things are working ok. This approach doesn’t scale very well (it might work for 2 or 3 machines, but will be problematic with 50). So you need something to monitor your infrastructure – ideally something that will do all of the following,

  1. Monitor all of your systems and notify you if there is a “problem” on any system.
  2. Store historical data for some key performance parameters on the system (it is useful to understand what kind of loads your systems normally run and whether these loads are increasing over time).
  3. Provide 1 and 2 via an interface that is easy to configure and use.
  4. Provide a nice graphical display of this data, making it easy to scan for performance problems.
  5. Automatically perform actions on systems being monitored in response to certain events.

There are many open source and commercial applications for monitoring systems which meet the above requirements – see Wikipedia’s Comparison of network monitoring systems for a partial list of so-called Network Monitoring / Management Systems. HP OpenView is the 800 lb gorilla of commercial network/system management tools, although it seems to have morphed into a whole suite of management and monitoring tools now. In Linux circles, the traditional solution for this problem has been Nagios. It has a reputation for being stable and reliable and has a huge community of users. On the other hand (based on my experiences while evaluating a number of different tools), it is configured with a series of text files which take some getting to grips with, and a lot of functionality (like graphing and database storage) is provided through plugins (which themselves require installation and configuration). I found the default configuration to be ugly and a little unfriendly, and while there is a large community to help you, the core documentation is not great. There is a fork of Nagios, called Icinga, which set out to address some of those problems – I haven’t checked how it’s progressing (but a quick look at their website suggests they have made a few releases). Kris Buytaert has a nice presentation from 2008 about some of the main open source system monitoring tools (which still seems pretty relevant).

After evaluating a few different systems, I settled on Zabbix as one which seemed to meet most of my requirements. It is a GPL-licensed network management system. One of the main reasons I went with Zabbix is that it includes a very nice, fully functional web interface. The agents for Zabbix (the part of Zabbix that sits on the system being monitored) are included in most common distributions (and while the distributions don’t always include the most recent release of the Zabbix agent, newer releases of Zabbix work well with older releases of the agents). Also, Zabbix is backed by a commercial/support entity which continues to make regular releases, which is a good sign. For those with really large infrastructures, Zabbix also offers a nicely scalable architecture. I only plan on using it to monitor about 100 systems so this functionality isn’t particularly important to me yet.

While our chosen distributions (Ubuntu and Debian) include recent Zabbix releases, I opted to install the latest stable release by hand directly from Zabbix – as some of the most recent functionality and performance improvements were of interest to me. We configured Zabbix to work with our MySQL database but it should work with Postgres or Oracle equally well. It does put a reasonable load on your database but that can be tuned depending on how much data you want to store, for how long and so on.

I’ve been using Zabbix for about 18 months now in production mode. As of this morning, it tells me it is monitoring 112 servers and 7070 specific parameters from those servers. The servers are mainly Linux servers, although Zabbix does have support for monitoring Windows systems also and we do have one token Windows system (to make fun of). Zabbix also allows us to monitor system health outside of the operating system level if a server supports the Intelligent Platform Management Interface (IPMI). We’re using this to closely monitor the temperature, power and fan performance on one of our more critical systems (a 24TB NAS from Scalable Informatics). Finally, as well as monitoring OS and system health parameters, Zabbix includes Web monitoring functionality which allows you to monitor the availability and performance of web based services over time. Zabbix can periodically log into a web application and run through a series of typical steps that a customer would perform. We’ve found this really useful for monitoring the availability and behaviour of our web apps over time (we’re monitoring 20 different web applications with a bunch of different scenarios).
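As well as its built-in checks, the Zabbix agent can be extended with custom checks through the UserParameter directive in zabbix_agentd.conf. A minimal sketch (the item key and command below are hypothetical examples I’ve made up for illustration, not something from our actual configuration):

```
# /etc/zabbix/zabbix_agentd.conf
# Define a custom item; the Zabbix server polls it using the key
# "custom.tcp.established" once you add a matching item in the web interface.
UserParameter=custom.tcp.established,netstat -tn | grep -c ESTABLISHED
```

After restarting the agent, the new key behaves like any other monitored parameter – you can keep history for it, graph it and trigger alerts on it.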

As well as monitoring our systems and providing useful graphs to analyse performance over time, we are using Zabbix to send alerts when key services or systems become unavailable, or when error conditions occur, like disks filling up or systems becoming overloaded. At the moment we are only sending email alerts, but Zabbix also includes support for SMS and Jabber notifications, depending on what support arrangements your organisation has.

On the downside, Zabbix’s best feature (from my perspective) is also the source of a few of its biggest problems – the web interface makes it really easy to begin using Zabbix, but it does have limitations and can make configuring large numbers of systems a little tiresome (although Zabbix does include a templating system to apply a series of checks or tests to a group of similar systems). While Zabbix comes with excellent documentation, some things can take a while to figure out (the part of Zabbix for sending alerts can be confusing to configure). To be fair to the Zabbix team, they are receptive to bugs and suggestions and are continuously improving the interface and addressing these limitations.

At the end of the day, it doesn’t matter so much what software you are using to monitor your systems. What is important is that you have basic monitoring functionality in place. There are a number of very good free and commercial solutions available. While it can take time to put monitoring in place for everything in your infrastructure, even tracking the availability of your main production servers can reap huge benefits – and may allow you to rectify many problems before your customers (or indeed management) notice that a service has gone down. Personally, I’d recommend Zabbix – it has done a great job for us – but there are many great alternatives out there too. For those of you reading this and already using a monitoring system – what are you using and are you happy with it?

Linux as a Home Theatre PC (HTPC) – Installation
http://atlanticlinux.ie/blog/linux-as-a-home-theatre-pc-htpc-installation/
Tue, 23 Feb 2010 05:30:51 +0000

See Linux as a Home Theatre PC (HTPC) – Introduction for an introduction to using Linux as a HTPC. In this post, I detail the steps I used to actually install and configure the HTPC, and some minor gotchas that cropped up in relation to audio over HDMI.

  1. Downloaded Mythbuntu 9.10 64-bit edition from the Mythbuntu site.
  2. Installed Mythbuntu using the standard configuration settings (I may reinstall with a different partitioning scheme in the future but for now, at least, I just need a partition in which to dump various bits of media).
  3. Connected the MythTV box to my HDTV using a standard HDMI cable.
  4. Ensured I was using the NVidia Restricted Driver version 180 (and not 173), which is required for HDMI audio to work (there is also a newer version 190 driver but I haven’t verified that this works yet).
  5. During initial MythTV configuration, configured MythTV to use ALSA:hdmi for audio playback rather than the default. This is sufficient to have MythTV play video files loaded in /var/lib/mythtv/video correctly.
  6. While MythTV presents a nice interface, I would also like to be able to use the standard Ubuntu desktop from time to time (Mythbuntu installs an XFCE4 environment by default – you can also install GNOME or KDE if you wish, but for occasional use the standard environment works very well). To get HDMI audio working outside of MythTV (for example, when browsing), I added the following to /etc/asound.conf:
    pcm.hdmi_hw {
        type hw
        card 0     #  <-----  Put your card number here
        device 3   #  <-----  Put your device number here
    }

    pcm.hdmi_formatted {
        type plug
        slave {
            pcm hdmi_hw
            rate 48000
            channels 2
        }
    }

    pcm.hdmi_complete {
        type softvol
        slave.pcm hdmi_formatted
        control.name hdmi_volume
        control.card 0
    }

    pcm.!default hdmi_complete

    and then went to Applications/Multimedia/Mixer, clicked on Select Controls and enabled IEC958 2 and hdmi_volume. Back in the main mixer window, I selected the Switches tab and enabled IEC958 2. Sound over HDMI should now be working (thanks to http://ubuntu.ubuntuforums.org/showthread.php?p=8522729 for tips on this – configuring HDMI sound output can be a little tricky, for now at least).

  7. ALSA includes a useful utility called speaker-test which you can use to test your sound output.
    speaker-test -Dplug:hdmi -c2 -twav
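The card and device numbers in the asound.conf above come from ALSA’s device listing: `aplay -l` prints one line per playback device, and the HDMI entry gives you the two numbers to plug in. A sketch of extracting them (the sample line below is illustrative – your card names will differ):

```shell
# A sample `aplay -l` line for an NVIDIA HDMI output (illustrative only);
# the numbers after "card" and "device" are what go into /etc/asound.conf
sample='card 0: NVidia [HDA NVidia], device 3: NVIDIA HDMI [NVIDIA HDMI]'
echo "$sample" | sed -n 's/^card \([0-9]*\):.*device \([0-9]*\):.*/card=\1 device=\2/p'
# prints: card=0 device=3
```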

Of course this is an experimental system – so not everything works perfectly. In particular, the TV card is not currently picking up output from my UPC cable set-top box (the set-top box includes a standard TV aerial socket on the back which I’ve connected to the Hauppauge PVR-150). When I tested the system with Mythbuntu 9.04, I detected a signal from this and could view some television channels (the quality was mediocre but I didn’t attempt any tuning or tweaking) and could use the system as a PVR / DVR – one of MythTV’s key features. Since installing Mythbuntu 9.10, I haven’t detected a signal despite some efforts to configure it. I suspect a kernel driver issue but I have yet to work my way through the IVTV troubleshooting procedure, mainly because I’m not very interested in this functionality for the moment at least. It’s something I’ll investigate at some stage, although in the future I’ll probably be more interested in adding a DVB-T card to the system to avail of Ireland’s Digital Terrestrial Television.

In conclusion – things that are working well include,

  • Video playback – both in the MythTV frontend and from the XFCE desktop (using VLC or Totem).
  • Music playback.
  • HD playback, including using VDPAU. During playback of some HD samples, processor load on the system remained negligible, suggesting that the bulk of the decoding activity is happening on the 9400 rather than on the CPU.
Linux as a Home Theatre PC (HTPC) – Introduction
http://atlanticlinux.ie/blog/linux-as-a-home-theatre-pc-htpc-introduction/
Tue, 16 Feb 2010 10:26:26 +0000

Hi – apologies for the recent hiatus; hibernation is coming to an end and I should be posting more frequently again now (I’m also currently experimenting with both Twitter and Buzz).

I recently started looking into using a Linux box as a media centre or HTPC. In the past I’ve experimented with so-called “multimedia drives” as a solution for managing my collection of media recordings and archived media. The drive I was using was a LaCie Silverscreen and while it worked, it did have various limitations. In particular, it didn’t play HD media and it sometimes had audio sync issues playing back media that played without problems on the PC. I assume the sync issues were the product of either a lack of processing power in the Silverscreen or possibly a lack of codecs. Newer products from LaCie (or similar products from other companies, like Iomega’s ScreenPlay) have probably addressed some of these issues and there is no doubt that, if you’re a non-technical user, these multimedia drives are a good solution.

In the interests of learning more about how well Linux works as a media solution, I decided to go about building one with a view to installing MythTV on it and evaluating the suitability of a Linux box as a full-featured HTPC. The first step in this experiment was to identify suitable hardware for a HTPC – key requirements for me were,

  • Noise – as a machine sitting in your living room beside your TV, the HTPC needs to be quiet. This also suggests it should run cool: if it does, any fans in the system won’t need to run at a high speed and/or for long. Key considerations for noise are minimising the number of fans (preferring passive cooling options such as heatsinks where they suffice) and noise reduction features in the case (such as sound insulation and things like rubber/silicone grommets for mounting hard drives, fans and so on).
  • Performance – I want the system to be capable of playing back all possible types of media, including HD video. I’d also like to have the option of encoding new media on the fly while simultaneously watching something else. All of this means that a reasonably powerful processor and a reasonably high performing graphics card are necessary.
  • Size – as a further consideration, I’d prefer if the unit I build is reasonably small – ideally it shouldn’t be particularly visible in the living room beside or near the TV. For this one, I don’t plan to go to any heroic efforts so I’ll prefer a conventional case over anything amazingly small. As you reduce a PC in size, you start running into heat issues and you start having to use components that have been specifically designed for smaller units – which leads to increasing cost.
  • Cost – while not the main driver, I didn’t plan on spending excessively for any particular component of the system. Certainly, a good spec HTPC shouldn’t cost any more than a reasonable spec desktop PC.

After lots of research and review reading, I finally settled on the following spec (note that in February 2010, at least some of this hardware is well behind the curve – if you’re building a new box now you can probably find improved components for at least parts of this),

I chose the motherboard primarily for the integrated NVIDIA 9400 graphics chipset, which provides an HDMI output on the motherboard and also includes support for HD decoding in the chipset rather than requiring the main system processor (while there is some support for such decoding on chipsets from other vendors, the support for doing this in Linux seems to be particularly good with the 9400, using VDPAU). Other components were mainly chosen either because they have a good noise profile or because they give good “bang for buck”. The processor is probably overkill but was relatively cheap and, given this system is a testbed for various media experiments, I’d like to have enough processing power just in case. For a typical user, a lower spec processor would be more than sufficient. A similar comment applies to the memory: it is way more than I expect to use but, for the price, it didn’t make sense to purchase less. Note that the memory is standard, boring, “value” memory – I’ve experimented with high performance memory in the past and it required various tweaks, such as bumping the memory voltage in the BIOS and manually setting memory timings, before it performed optimally (or at all) – life is too short for this and the performance gains for a typical user aren’t really noticeable (but such memory usually features impressive go-faster stripes, if that’s your thing!).

As an aside, since I built this system, NVIDIA have released their ION graphics platform and Intel have released their low-power Atom processor range. These seem to provide the basis for a good HTPC-type system (as far as I know, the NVIDIA ION platform is built around a version of the 9400 chipset so it should have similar functionality to my motherboard’s chipset) but I have some concerns about how suitable the Atom processor would be for heavier duty tasks such as transcoding. As always, there is a trade-off between performance and overall size – smaller systems have less room to dissipate heat so they need to run cooler (usually meaning lower performance).

For my initial foray into the area, I decided to use Mythbuntu, an Ubuntu based Linux distribution which includes MythTV and is preconfigured to work well connected directly to a TV. Mythdora is a similar idea but based around the Fedora distribution. There is no reason not to install your favourite Linux distribution and install MythTV on top of it if you wish.
My initial experiments were carried out with Mythbuntu 9.04 earlier this year, but I thought I’d reinstall with Mythbuntu 9.10 on its recent release and document my experiences here, including detailing what works and what doesn’t. See my next posting for details of how things went.

Update: Puzlar sent me a tweet asking what kind of remote control I used with the system. The Antec Fusion case comes with its own infrared receiver and remote control. The Hauppauge PVR-150 also included its own IR receiver and remote control. Since the IR receiver in the Antec Fusion case doesn’t need any additional items to be plugged into the box, I opted to go with that. The MythTV wiki contains details of how to configure the IR receiver to work properly – when I installed Mythbuntu 9.04 it required some manual tweaking, but when I installed Mythbuntu 9.10 it worked out of the box, as far as I can remember. So the standard keys, like the arrows and the play/pause/rewind/forward buttons on the control, do what they should do in the MythTV frontend, and you can also move the mouse around the desktop using the control (which is a bit slow but ok if you just need to point and click on something occasionally). I also have a wireless mouse and keyboard (a Logitech S510, though it seems to have been discontinued in the meantime) for occasional surfing and tuning of the system. I recently tried out XBMC as an alternative interface for the HTPC and it supported all the remote control functionality too (perhaps more than MythTV does out of the box) – XBMC looks like a nice alternative to MythTV if PVR isn’t a requirement – I’ll be playing around with it some more.

One other note on the case – the Antec Fusion is a nice case, maybe a little bigger than I expected, but it does look more like a piece of HiFi kit than a PC so it blends in well beside the TV. While I thought an LCD panel on the HTPC would be useful, in retrospect I have no need or use for it and, if I were doing it again, I’d probably order a case without an LCD – perhaps something like the Antec NSK2480.

Java on Debian tip
http://atlanticlinux.ie/blog/java-on-debian-tip/
Tue, 10 Nov 2009 20:00:49 +0000

The default Java environment installed by Debian is the GNU Compiler for Java (GCJ). While GNU have made remarkable progress with it, anyone doing mainstream Java development will probably prefer to install the latest Sun Java environment on their system – even free software projects like Hadoop are primarily tested in that environment.

Most Linux distributions provide packages for Sun’s Java as well as GCJ and probably OpenJDK. When you install more than one package which provides similarly named commands, Debian’s Alternatives System comes into play – allowing you to select one command or another. When you install a different Java, you can of course manually update each Java command (java, javac, javadoc, jconsole, jmap, jps and so on) using the update-alternatives command, but if you want a quicker way, try the update-java-alternatives command from the java-common package. It will automatically update the paths to all provided Java commands in one go, once you tell it which installed Java environment you wish to use, for example,

update-java-alternatives -s java-6-sun
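Under the hood, the alternatives system is essentially a managed farm of symlinks (the real ones live under /etc/alternatives). A toy sketch of the mechanism, using throwaway paths under /tmp rather than Debian’s real machinery:

```shell
# Two pretend "java" implementations (stand-ins for GCJ and Sun Java)
mkdir -p /tmp/alt-demo
printf '#!/bin/sh\necho gcj\n' > /tmp/alt-demo/java-gcj
printf '#!/bin/sh\necho sun\n' > /tmp/alt-demo/java-sun
chmod +x /tmp/alt-demo/java-gcj /tmp/alt-demo/java-sun

# "Selecting" an alternative just repoints the symlink, which is the core
# of what update-alternatives / update-java-alternatives do for you
ln -sf /tmp/alt-demo/java-sun /tmp/alt-demo/java
/tmp/alt-demo/java   # prints: sun
```

The real tools add bookkeeping on top of this – priorities, automatic versus manual mode, and slave links so a whole family of commands switches together.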
Debian 5.0 (Lenny) install on Software RAID
http://atlanticlinux.ie/blog/debian-5-0-lenny-install-on-software-raid/
Mon, 28 Sep 2009 19:00:46 +0000

As mentioned in previous posts, I’m a big fan of Linux Software RAID. Most of the Ubuntu servers I install these days are configured with two disks in a RAID1 configuration. Contrary to recommendations you’ll find elsewhere, I put all partitions on RAID1, not just some (that includes swap, /boot and / – in fact I don’t normally create a separate /boot partition, leaving it on the same partition as /). If you’re using RAID1, I think you should get the advantage of it for all of your data, not just the really, really important stuff on a single RAIDed partition.

When installing Ubuntu (certainly recent releases, including 8.10 and 9.04), you can configure all of this through the standard installation process – creating your partitions first, flagging them for use in RAID, and then configuring software RAID and creating a number of software RAID volumes.

I was recently installing a Debian 5.0 server and wanted to go with a config similar to the following,

Physical devices        Size   Software RAID device   Filesystem    Description
/dev/sda1 + /dev/sdb1   6GB    /dev/md0               swap          Double the system physical memory
/dev/sda2 + /dev/sdb2   10GB   /dev/md1               ext3, /       You can split this into multiple partitions for /var, /home and so on
/dev/sda3 + /dev/sdb3   40GB   /dev/md2               ext3, /data   Used for critical application data on this server
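For reference, once installed, that layout corresponds to /etc/fstab entries along these lines (an illustrative sketch – the installer generates the real file, and your mount options may differ):

```
# /etc/fstab – sketch matching the layout above
/dev/md0    none     swap    sw          0  0
/dev/md1    /        ext3    defaults    0  1
/dev/md2    /data    ext3    defaults    0  2
```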

When I followed a standard install of Debian using the above configuration, the installation of GRUB failed with an error which seemed to be related to the use of Software RAID. Searching the web for possible solutions mostly turned up suggestions to create a non-RAIDed /boot partition, but since this setup works on Ubuntu I figured it should also work on Debian (from which Ubuntu is largely derived).

First, a little background to GRUB and Linux Software RAID. It seems that GRUB cannot read Linux software RAID devices (which it needs to do to start the boot process). What it can do is read standard Linux partitions. Given that Linux software RAID1 places a standard copy of a Linux partition on each RAID device, you can simply configure GRUB against the underlying Linux partition and, at the GRUB level, ignore the software RAID volume. This seems to be how the Ubuntu GRUB installer works. A GRUB configuration stanza like the following should thus work without problems,

title           Debian GNU/Linux, kernel 2.6.26-2-amd64
root            (hd0,1)
kernel          /boot/vmlinuz-2.6.26-2-amd64 root=/dev/md1 ro
initrd          /boot/initrd.img-2.6.26-2-amd64

When I tried a configuration like this on my first install of Debian on the new server, it failed with the aforementioned error. Comparing a similarly configured Ubuntu server with the newly installed Debian server, the only obvious difference I could see was that the partition table on the Ubuntu server uses the old msdos format, while the partition table on the Debian server seems to be in GPT format. I can’t find any documentation on when this change was made in Debian (or indeed whether it was something in my configuration that specifically triggered the use of GPT) but it seems like this was the source of the problems for GRUB.

To circumvent the creation of a GPT partition table on both disks, I restarted the Debian installer in Expert mode and installed the optional parted partitioning module when prompted. Before proceeding to the partitioning disks stage of the Debian installation, I moved to a second virtual console (Alt-F2) and started parted against each disk and ran the mklabel command to create a new partition table. When prompted for the partition table type, I input msdos.

I then returned to the Debian installer (Alt-F1) and continued the installation in the normal way – the partitioner picks up that the disks already have a partition table and uses that rather than recreating it.

This time, when it came to the GRUB bootloader installation step, it proceeded without any errors and I completed the installation of a fully RAIDed system.

BIOS flash upgrades on Linux
http://atlanticlinux.ie/blog/bios-flash-upgrades-on-linux/
Wed, 23 Sep 2009 19:01:36 +0000

To upgrade your system BIOS you normally need to run a piece of software from the system manufacturer which loads an updated copy of the BIOS into the flash memory chip on your system motherboard – a process known as flashing your BIOS. Most system manufacturers supply BIOS upgrades in a form that will run under DOS or, occasionally, Windows. It is rare to find a BIOS upgrade program that runs under Linux (I’d love to hear about one). Recognising that not all of their customers are necessarily running a 28 year old, 16-bit operating system, some system manufacturers supply their BIOS upgrades in the form of an image which you can burn to a CDROM and boot from (making the question of what OS you are running irrelevant).

I recently had to upgrade the BIOS on one of our Supermicro systems (an X7DVL-E system). Supermicro provide their BIOS upgrades as a ZIP file containing the actual BIOS image and a DOS flash program. They also seem to provide some software which you can run on Windows to create a BIOS flash floppy disk (for the younger readers in the audience, that’s another wonderful technology from the 80s – and I’m talking about the super-modern 3.5″ floppy there). I’m not singling out Supermicro for particular criticism here; a lot of the system manufacturers seem to work on the assumption that we’re still running PCs with Windows and a floppy drive (to be fair, if you have the optional IPMI management card installed, you can normally upload your firmware through that, but we don’t) – but for those of us running Linux servers, upgrading the BIOS can be a painful process.

There is a work-around for this problem. Thanks to the Linux boot-loader, GRUB, you can boot from a DOS disk image containing your BIOS upgrade program and run the program from within that booted image, without ever actually installing DOS or a floppy drive in your system. The following procedure worked well for me on an Ubuntu 9.04 system (with thanks to this OpenSUSE page and this Ubuntu forums posting for some assistance along the way) and the same approach should work on other distributions.

WARNING: Upgrading your system BIOS is an inherently risky process – with the primary risk being that if things go wrong you can brick your system. Things that can go wrong include flashing your system with a BIOS upgrade for a different system or the power getting interrupted while you are in the middle of a BIOS upgrade. In some cases, you may be able to reflash the BIOS using some emergency procedure but with most systems, you may be looking at a motherboard replacement. So proceed with caution and only upgrade your BIOS if you have a specific problem which the upgrade fixes.

  1. Download a bootable DOS disk image from the FreeDOS distribution site (FreeDOS is an excellent open source version of DOS. It is widely used by hobbyists and companies including Dell, HP and Seagate).
    wget http://www.fdos.org/bootdisks/autogen/FDOEM.144.gz
  2. Download your system manufacturers BIOS upgrade
    wget http://www.example.com/bios/version2.zip
  3. Place the downloaded BIOS upgrade program and files into the bootable DOS image (note that we change directory out of the mounted image before unmounting it, as umount will fail with a “device is busy” error if the current directory is still inside /mnt).
    gunzip FDOEM.144.gz
    sudo mount -o loop FDOEM.144 /mnt
    sudo mkdir /mnt/bios
    cd /mnt/bios
    sudo unzip <path to downloaded BIOS upgrade file>/version2.zip
    cd -
    sudo umount /mnt
  4. Add the bootable DOS image (with the bios upgrade software) to your Linux bootloader (this requires a file from the syslinux package),
    sudo aptitude install syslinux
    sudo mkdir /boot/dos
    sudo cp /usr/lib/syslinux/memdisk /boot/dos
    sudo cp FDOEM.144 /boot/dos
    sudo vi /boot/grub/menu.lst

    and add the following section to the end of the file

    title DOS BIOS upgrade
    kernel /boot/dos/memdisk
    initrd /boot/dos/FDOEM.144
  5. Reboot your system and choose the DOS BIOS upgrade boot option. If the boot is successful you should shortly be presented with the A:\> DOS prompt. At this point you can run the BIOS upgrade software, for example,
    A:\> CD BIOS
    A:\BIOS> FLASH V2BIOS.ROM
  6. Once the upgrade finishes, reboot and enjoy your upgraded system.
Package repositories for old Ubuntu releases
http://atlanticlinux.ie/blog/package-repositories-for-old-ubuntu-releases/
Wed, 16 Sep 2009 09:03:26 +0000

While you shouldn’t run old, unsupported releases of any Linux distribution (and Ubuntu is no exception), if you have to for some good reason (like a server running a proprietary system which you haven’t gotten working on a current release yet, and you have put adequate security measures around that system) then you may find http://old-releases.ubuntu.com useful.

To continue installing packages on your unsupported release of Ubuntu,

  1. Edit /etc/apt/sources.list
  2. Replace all occurrences of archive.ubuntu.com with old-releases.ubuntu.com
  3. aptitude update
  4. aptitude install <package name>
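Steps 1 and 2 can be done in one go with sed. A sketch, run here against a sample file rather than the real /etc/apt/sources.list (on a real system, point sed at /etc/apt/sources.list with sudo, and back the file up first):

```shell
# Sample sources.list (stand-in for the real /etc/apt/sources.list)
printf 'deb http://archive.ubuntu.com/ubuntu hardy main restricted\ndeb http://archive.ubuntu.com/ubuntu hardy-updates main restricted\n' > /tmp/sources.list.sample

# Replace every occurrence of archive.ubuntu.com in place
sed -i 's/archive\.ubuntu\.com/old-releases.ubuntu.com/g' /tmp/sources.list.sample
cat /tmp/sources.list.sample
```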

It is very much a band-aid, and shouldn’t be used to run old unsupported releases indefinitely (for the simple reason that you most likely have various unpatched security holes in that old release).

If you don’t like upgrading your distribution every few months, you should really be using an LTS release of Ubuntu (currently 8.04 with the next LTS expected to be 10.04 next year).

Ubuntu 9.04 Fake RAID problems
http://atlanticlinux.ie/blog/ubuntu-9-04-fake-raid-problems/
Tue, 15 Sep 2009 20:00:14 +0000

So we have RAID – “a technology that allowed computer users to achieve high levels of storage reliability from low-cost and less reliable PC-class disk-drive components, via the technique of arranging the devices into arrays for redundancy”, to quote the Wikipedia article.

In the beginning, manufacturers created dedicated hardware controllers to which disks were attached. These controllers include their own processor and memory and handle all the RAID functionality within the black box they present to the system (the good ones will even include a battery so that, if the power fails, any data stored in the RAID controller’s cache memory isn’t lost but can be written to the drives when the power comes back). As far as the system the controller is attached to is concerned, the RAID controller is one big disk. This is called hardware RAID.

As machines have become more powerful, most of them (certainly most desktop machines) sit idle most of the time, so it has become feasible to provide RAID at the operating system level. All mainstream operating systems offer some form of this software RAID, which performs exactly the same function as the hardware RAID controller above but uses the system’s processor and memory. There are advantages and disadvantages to both approaches, but both work reasonably well. Personally, I’m increasingly leaning towards software RAID on Linux: low-end hardware RAID controllers aren’t very reliable and tend to be slow from an I/O perspective, while most modern Linux servers have multiple processor cores sitting idle most of the time, which are perfectly suited to driving a RAID array.

In between these two comes something described as Firmware/driver-based RAID, HostRAID or Fake RAID. This is provided by cheap RAID controllers that do not implement all of the RAID functionality (normally they are standard disk controllers with some special firmware) and use the main system processor for most of the heavy lifting. They also rely on dedicated operating-system drivers to provide the RAID functionality – hence the name Fake RAID. I’m not a fan of Fake RAID controllers: apart from the fact that their manufacturers rarely make it clear that they are not fully functional RAID controllers, their reliance on elaborate driver software makes them less reliable than hardware RAID but more complex to maintain than true software RAID. They are reasonably well supported under Linux these days via the Device-Mapper Software RAID Tool (aka dmraid), but personally I prefer to use a Fake RAID controller as a standard SATA controller and, if I require RAID on such a system, implement it using Linux’s excellent software RAID support.

Until recently, people installing Ubuntu who did want to use their Fake RAID controller as a RAID controller ran into the problem that the installer didn’t include dmraid support. With Ubuntu 9.04 (Jaunty), the installer detects at least some Fake RAID controllers and asks whether to use the controller via dmraid or not. If you choose not to, you can use it as a normal SATA controller.

I ran into an interesting problem on a recent reinstall of Ubuntu 9.04 onto a Supermicro X7DVL system, which includes an Intel 631xESB/632xESB I/O controller supporting some sort of Fake RAID (Intel seems to call their Fake RAID Matrix Storage Technology). Given my stance on Fake RAID, I immediately disabled this in the BIOS by changing the controller to compatible mode (the datasheet suggests this should disable RAID). When installing Ubuntu, the installer still detected the Fake RAID volumes and offered to configure dmraid for me. I declined the offer, and the native (unRAIDed) SATA disks were presented to me, which I partitioned and formatted.

I thought nothing more of this until I rebooted after completing the installation. The system booted as far as GRUB before dumping the message

No block devices found

It took me a while to figure out what was going on. Google turned up lots of people who had problems with Ubuntu and dmraid, but generally they had the opposite problem – they wanted to use dmraid but the installer didn’t support it (like DMRAID on Ubuntu with SATA fakeraid, dmraid missing from livecd and Need dmraid to support fakeraid). Presumably most of these problems have been fixed by the inclusion of dmraid in Jaunty.

This was the clue for me – I finally figured out (with some help from bug 392510 I must admit) that even though I had declined to use dmraid during the install, the newly installed operating system still contained dmraid and was loading the dmraid kernel modules at boot-time. This resulted in the kernel seeing some dmraid volumes rather than the partitions I had created during the OS install.

Once I figured that out, fixing the problem was relatively straightforward,

  1. Reboot with the Ubuntu 9.04 install cd and select Rescue broken system.
  2. When the rescue boot has been configured, select Execute shell and chroot into the installed environment.
  3. aptitude purge dmraid (this removes the dmraid software and the dmraid kernel modules from the initramfs).
  4. Reboot and enjoy your new OS.

Two things that I found misleading here are,

  • I had declined to use dmraid during the Ubuntu install, but the installer still included the dmraid functionality in the installed system
  • I had disabled SATA RAID in the BIOS, but it was still visible to Ubuntu. I notice there is a newer version of the BIOS from Supermicro which may fix this problem, but since Supermicro don’t include a change log with their BIOS releases, it’s hard to tell without going to the trouble of actually installing the update.

I should probably log a bug against the dmraid package in Ubuntu (if I get around to it, it will appear against the dmraid package) – bug 392510 talks about supporting a nodmraid option to the kernel at boot time which would explicitly disable dmraid. I think this would be a good idea (Fedora apparently already does this).
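
For reference, if a nodmraid option were supported, wiring it in on Ubuntu 9.04 (which uses GRUB legacy) would look something like this in /boot/grub/menu.lst – a hypothetical sketch, since the option isn’t actually implemented in Jaunty:

```
## In /boot/grub/menu.lst, append nodmraid to the kopt directive
## (a commented line that update-grub reads), then run update-grub
## so each kernel entry picks up the new option:
# kopt=root=UUID=<your root filesystem UUID> ro nodmraid
```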

Update 1: Bug 311637 already addresses this problem so I’ve added a comment to this.

Update 2: Upgrading the Supermicro system to the latest BIOS and disabling the Fake RAID controller through the BIOS seems to fix this problem also.

Passing kernel module parameters in Ubuntu 8.10 http://atlanticlinux.ie/blog/passing-kernel-module-parameters-in-ubuntu-8-10/ http://atlanticlinux.ie/blog/passing-kernel-module-parameters-in-ubuntu-8-10/#comments Wed, 02 Sep 2009 19:00:51 +0000 http://atlanticlinux.ie/blog/?p=134 Sorry for the mouthful of a title but I wanted to use something that would show up for the kind of queries I was firing into Google yesterday in a vain attempt to solve my problem.

A little background first: I’m working with some SuperMicro Twin systems (basically, two system boards in a single 1U chassis sharing a power supply – not as compact as a blade but not bad) which include an nVidia MCP55V Pro Chipset Dual-port LAN / Ethernet Controller. On Ubuntu 8.10 at least, this uses the forcedeth driver (originally a cleanroom implementation of a driver which competed with a proprietary offering from Nvidia – it now seems to have superseded that driver).

I noticed while doing large network transfers to or from one of these machines that the load on the machine seemed to spike. Running dmesg showed a lot of these messages,

[1617484.523059] eth0: too many iterations (6) in nv_nic_irq.
[1617484.843113] eth0: too many iterations (6) in nv_nic_irq.
[1617484.869831] eth0: too many iterations (6) in nv_nic_irq.
[1617485.101377] eth0: too many iterations (6) in nv_nic_irq.
[1617485.855067] eth0: too many iterations (6) in nv_nic_irq.
[1617485.896692] eth0: too many iterations (6) in nv_nic_irq.
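
A quick way to gauge how often the driver is hitting the limit is to count the messages per interface – a small sketch based on the message format shown above:

```shell
# Tally "too many iterations" events per network interface
# from the kernel ring buffer.
dmesg | grep -o 'eth[0-9]*: too many iterations' | sort | uniq -c
```
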

Google returns lots of results for this message – some people seem to have experienced complete lockups while others noted slowdowns. The proposed solution is to pass the max_interrupt_work option to the forcedeth kernel module to increase the maximum events handled per interrupt. The driver seems to default to throughput mode, where each packet received or transmitted generates an interrupt (which would presumably ensure the fastest possible transfer of the data), but it can also be configured to operate in CPU mode (aka poll mode), where the interrupts are controlled by a timer (I’m assuming this makes for higher latency of transfers but smoother behaviour under high network loads – the documentation on this is a little thin). This behaviour is controlled by the optimization_mode option. You can investigate the options which can be passed to any kernel module by running,

modinfo <module name>

for example,

modinfo forcedeth

So, as an initial pass at tuning the behaviour on the server, I decided to pass the following options to the forcedeth driver.

max_interrupt_work=20 optimization_mode=1

The standard way on Ubuntu 8.10 to set kernel module parameters is to add a line to /etc/modprobe.d/options like the following

options forcedeth max_interrupt_work=20 optimization_mode=1

I tried this and then rebooted my system, but found it had no effect (I saw the same error messages after running a large network transfer, and the maximum iterations were still reported as 6 rather than the 20 I should have seen if my options had been applied).

After trying a few things and brainstorming with the good people on #ubuntu-uk, I figured out that after modifying /etc/modprobe.d/options you need to run the following command,

sudo update-initramfs -u

This presumably updates the initramfs image to include the specified options. It is possible this is only needed for modules that are loaded early on in the boot process (such as the network driver) – I haven’t had the need to modify other kernel modules recently (and this stuff seems to change subtly between distributions and even versions).

Following another reboot and a large network transfer, the error messages stopped appearing, suggesting that the options are now being applied (some kernel modules allow you to review the parameters currently in use by looking in /sys/module/<module name>/parameters, but forcedeth doesn’t seem to support this).
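
For modules that do expose their parameters, they can be read back from sysfs like this – a sketch, where MODDIR is a variable I’ve introduced so the loop can be pointed at any module’s parameters directory:

```shell
# Print each run-time parameter value a module exposes via sysfs.
MODDIR="${MODDIR:-/sys/module/forcedeth/parameters}"
if [ -d "$MODDIR" ]; then
    for p in "$MODDIR"/*; do
        printf '%s = %s\n' "$(basename "$p")" "$(cat "$p")"
    done
else
    echo "module does not expose parameters via sysfs"
fi
```
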

I figured the trick with update-initramfs was worth publishing for the next person who runs into this problem. I’d also love to hear from others using the forcedeth driver as to what options they find useful to tune the throughput.

Marketing 102 – Your business card http://atlanticlinux.ie/blog/marketing-102-your-business-card/ http://atlanticlinux.ie/blog/marketing-102-your-business-card/#respond Tue, 25 Aug 2009 19:00:53 +0000 http://www.atlanticlinux.ie/blog/?p=109 So, your business card. Should you bother? I mean, we’re all on the internet now, right? Yes, we mostly are, and depending on your audience the venerable business card may not be as important as it once was – but I think it still matters. At the very least, it puts all your contact details on a small, easy-to-carry piece of paper. At best, it’s a distinctive piece of marketing material which conveys to customers and potential customers that you are the right person to fix their problem (or to prevent it from ever happening).

If you’re going to traditional networking events and you’re not passing out business cards, you’re wasting your time – unless you have a really impressive pitch, amazing presence and a really easy-to-remember web address, no one is going to remember how to contact you the following day, never mind in a few weeks or months. (I know I’ve certainly dug through my collection of business cards on a few occasions, knowing I’d met someone who provided a useful service in the past but whose contact details escaped me.)

So we’ve established that you need a business card. Now, what to put on it? At a minimum, you want your name, title, address, email and phone number. After that, space permitting, the sky is the limit – most people will include their company logo (although traditional businesses like solicitors and doctors usually don’t), their website address, maybe their blog address, their Skype username, maybe their Twitter address and, depending on their audience, maybe one or more social networking site addresses (LinkedIn, Facebook, and so on). For a business audience, LinkedIn is probably the most useful – but if your audience is web 2.0 then you may want to consider one or more of the others. Remember, there is a fine line between too much info and too little. Personally I like a minimalist, uncluttered business card, but each to their own.

Once you’ve figured out what to put on your card, the next step is to design it. If you have the budget, you can of course get someone to design your card for you; assuming you’re an SME startup, though, I’m not sure that’s the best use of your marketing budget – it’s hard to make a mess of designing a business card. For inspiration, take a look at some business cards you’ve received from others; there are a few standard patterns. If you don’t have any (c’mon, get out there and get networking!), Wikipedia’s page on business cards will give you some examples. Depending on who you use to print your card, they may also provide some basic design tools and templates which you can use. As with all things marketing, you can tweak this over time, so don’t agonise over it too much.

The last question relating to design is what to put on the back of your card. Some people choose to leave this blank, and there is nothing wrong with that. Others put an image (maybe a bigger version of their logo) and either their company tagline or the website URL again. Of course the sky is the limit here and you can do all sorts of unique and visually striking things with your card. The problem is that some of these will dramatically increase your costs. So if you’re in the business of graphic design, it may make sense to spend here; otherwise something less expensive is probably the right choice. For Atlantic Linux, I decided to put a QR Code (a type of two-dimensional bar code that some mobile phones can read) containing the company website URL on the back – it’s a little bit different and complements our overall technology theme.
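
If you want to try the same trick, the qrencode command-line tool (packaged in most distributions) will generate a QR code image from a URL. This is a sketch – the output filename and module size are just examples:

```shell
# Encode the company URL as a PNG QR code; -s sets the pixel size of
# each module (dot), which helps the code stay sharp at print sizes.
qrencode -s 8 -o card-back-qr.png 'http://www.atlanticlinux.ie/'
```

The resulting PNG can then be imported into your DTP package like any other image.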

As a company that works extensively with open source software, I prefer to use open source tools where it makes business sense, and designing our business cards was no exception. We use the Scribus Open Source Desktop Publishing tool to create our cards. It is extremely powerful and allows fine-grained layout of our cards. It also allows us to export the final business card in a wide range of formats suitable for consumption by whatever company you decide to use to print your cards. As with any software tool, there is a learning curve, but overall I found Scribus to be reasonably intuitive and very powerful. A new release (1.3.5) is available now, so it is a good time to check it out. You can of course use a graphics package or a word processor (such as OpenOffice) and they will work fine – but subsequently editing the design or moving items around may prove more difficult (word processors aren’t really designed for fine-grained layout of various elements).

So you have your card designed and developed and you’re ready to print it. In Ireland at least, this is more expensive than it should be, especially for low volumes of cards. You can go to a traditional printer, but they’re normally geared up to printing a few hundred or thousand cards in a run and price accordingly. If you need that many cards and shop around for a few quotes, you’ll get an ok price. The problem is that, for a small business, unless you’re doing lots and lots of networking, you probably won’t use a few thousand cards in a year or two. In practice, you’ll want to change your card before then – either because your business has changed in some way (maybe you’ve moved to bigger offices!) or because you want to modify your design in some way.

I took to the Internet to see if there were any alternatives. I was looking for a company that would be willing to produce lower volumes of cards, accepting that the costs per card would be higher (we’re not talking hundreds of euros here but if you’re a startup, you should be aggressively focusing on all costs). I found two suitable companies, Smileprint and Moo.com. After looking at both sites, I eventually decided to go with Moo.com – they provide more options for the kind of card and layout you want and work out cheaper on smaller volumes of cards (I decided to only go with a very small amount of cards so I have the option of changing them more frequently).

The end result –

business card (front)


and

business card (back)


One other thing to note: for printing purposes, make sure you use a high DPI when creating your business card files; DTP packages usually offer better control of this than word processors (whose output is normally intended for screens, not printers). Also, printers usually prefer documents that use a CMYK colour model rather than RGB. I might revisit these in a future posting if there is interest (although Google should turn up lots of info on these, and your chosen printer will probably provide specific guidelines too).
