Atlantic Linux Blog – Thoughts on running an Irish Linux business
http://atlanticlinux.ie/blog

Setting UUIDs on new partitions
Wed, 13 May 2009

If you take a look at /etc/fstab on a recent Linux installation, you may notice it’s using lines like

UUID=663f1349-3d37-4633-af59-849eda89bae4 / ext3 defaults 0  1

instead of the more traditional

/dev/sda1       /               ext3    defaults 0  1

These universally unique identifiers (UUIDs) can be generated by the uuidgen command and can reasonably be considered unique among all UUIDs created on the local system, and among UUIDs created on other systems in the past and in the future (from the uuidgen(1) man page).

Why would you want to use a UUID rather than the far more readable /dev/sda1 (well, readable if you’re somewhat comfortable with Linux anyway)? The problem with traditional device names (like /dev/sda1) is that they are assigned by the Linux kernel at boot-time and depend on the order in which devices are found. So /dev/sda is the name the Linux kernel assigns to the first SCSI or SATA hard drive it finds, /dev/sdb is the name used for the second SCSI or SATA hard drive and so on.

This works very well until the Linux kernel detects your devices in a different order than usual. In practice, this is a relatively rare event, but it is certainly possible when moving between kernel versions or if you make changes to your system hardware (such as adding a second SATA controller). If your device order does change, your root device which used to be known as /dev/sda1 suddenly becomes /dev/sdb1 or similar. Normally this will cause a kernel panic at boot time as your Linux system attempts to mount a device which doesn’t exist or which doesn’t contain the expected filesystem.

One solution to this is the use of UUIDs. When you create a new filesystem, a UUID is generated and written to the filesystem’s superblock. The /etc/fstab then refers to this UUID which will remain constant regardless of what kernel you are running or how the drive is connected to your system (indeed, if you move the drive to an entirely different system, the UUID will still be valid).

If you only create partitions during the initial installation of your OS, you won’t have to deal with creating UUIDs for your partitions – the installer should take care of it automatically.

If you create any new partitions afterwards, though, you will need to assign UUIDs to those partitions manually if you wish to refer to them in your /etc/fstab by UUID (you can continue to use a mix of /dev/sda1-style entries and UUID-based entries in /etc/fstab if you wish). To assign new UUIDs you need to:

  1. Generate new UUIDs with uuidgen
    uuidgen
    f15f8aed-0073-4d2f-abec-aa5da4f72e8c
  2. Write this UUID to your new partition (WARNING: do not run these commands against an existing partition),
    for ext2, ext3 or ext4:

    sudo tune2fs -U f15f8aed-0073-4d2f-abec-aa5da4f72e8c /dev/sdc5

    for xfs:

    sudo xfs_admin -U f15f8aed-0073-4d2f-abec-aa5da4f72e8c /dev/sdc5

    for reiserfs:

    sudo reiserfstune -u f15f8aed-0073-4d2f-abec-aa5da4f72e8c /dev/sdc5

Update: When you create a new filesystem on a device, the mkfs command appears to allocate a UUID at that stage anyway. To view existing UUIDs (in order to use them in your fstab and so on), run the blkid command.
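For example, once blkid has reported a partition’s UUID, the matching fstab entry would look something like this (the mount point and filesystem type below are illustrative, reusing the example UUID from earlier):

```
UUID=f15f8aed-0073-4d2f-abec-aa5da4f72e8c  /data  ext3  defaults  0  2
```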

Repartitioning modern Linux systems without reboot
Fri, 17 Apr 2009

This one is for my own future reference as much as anything. Ever since the move to udev in Linux 2.6, I’ve found it necessary to do the very un-Linux-like thing of rebooting before the appropriate device appeared under /dev after repartitioning. This was only an occasional hassle but still, you shouldn’t need to reboot Linux for such a thing.

Thanks to Robert for his Google magic in turning up partprobe, part of the GNU Parted package. As the Debian man page for partprobe says:

partprobe is a program that informs the operating system kernel of
partition table changes, by requesting that the operating system
re-read the partition table.

Excellent! Parted is normally installed on Debian and Ubuntu by default anyway; if not, simply run aptitude install parted and you’ll have access to partprobe.

We were trying to add some additional swap to a running system; the full series of commands needed is as follows (I could have used parted to create the partitions, but the cfdisk tool has a nicer interface):

  1. sudo cfdisk /dev/sda (and create new partition of type FD, Linux RAID)
  2. sudo cfdisk /dev/sdb (and create new partition of type FD, Linux RAID)
  3. sudo partprobe
  4. sudo mdadm --create /dev/md3 -n 2 -x 0 -l 1 /dev/sda4 /dev/sdb4 (our swap devices are software RAID1 devices)
  5. sudo /etc/init.d/udev restart (this updates /dev/disk/by-uuid/ with the new RAID device)
  6. sudo mkswap /dev/md3
  7. sudo vi /etc/fstab (and add a new entry for /dev/md3 as a swap device)
  8. sudo swapon -a (to activate the swap device)
  9. sudo swapon -s (to verify it is working)
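As a final sanity check on step 9: swapon -s is essentially a pretty-printed view of /proc/swaps, so reading the file directly works too and needs no special privileges:

```shell
# The kernel's table of active swap devices; each active device
# appears as one line under the "Filename" header.
cat /proc/swaps
```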
Marketing 101 – Your business website
Thu, 16 Apr 2009

I’m probably the last guy in the world who should be blogging about sales and marketing – I’m a techie after all (and like most techies, for a long time, I thought if you did good technology the customers would follow without any persuasion required). But maybe some of what I say will resonate with other techies out there more than if it came from a sales and marketing guy. This post started out as Marketing 102 – Business cards, but as I wrote the introductory paragraph I started talking a little about the preceding step of preparing your business website and, well, here we are.

I’ll get back to the business cards in a later post including some recommendations for who to use to print a small volume of nicely finished cards and what you should put on the cards.

For a technology company (large or small), I figure your first step in marketing should be putting together a website for your business, possibly accompanied by a blog (if you have the time and energy to write regularly and you have something useful to say). At a minimum, your website should answer the following questions:

  • Who
  • What
  • Why

The who involves telling the customer a little about you and your background, as well as providing the obvious, such as contact details (email, phone and physical address), and maybe some details on your company’s mission.

The what involves telling the customer what you actually do. When you start on this, if you’re a techie, you’ll enter a brief fugue state where you start spewing technical terms and concepts that only other level 7 nerds will understand (hey, it’s OK, I’m one too, I understand). Once you get over this, step back and translate these terms into plain English that a (non-technical) customer can understand. So, while Atlantic Linux can deploy a large-scale event management framework utilising SNMP, IPMI and active and passive agents to quantify the availability of your enterprise infrastructure – in plain English we do remote system monitoring, or even just Linux systems support.

The why is the little bit about why customers should be talking to you instead of the company down the road for the services they require. This is similar to the what, but it’s more about the customer than about you. It can be summed up in three words:

Benefits, not features

So, rather than telling the customer about your 20 years of experience with Linux, tell them about how that 20 years of experience means that you’ve seen all the things that can go wrong in their systems and you know how to fix them. Rather than talking about how you’ve used 15 different Linux distributions on 10 different types of computer, tell the customer about how you have enough knowledge of Linux distributions to know which one will suit their needs (obviously, if you’re a software developer or a Windows consultant then you might want to talk about software development or Windows rather than Linux but you get the idea).

Putting a good website together is a long, painstaking process and will involve frequent rewrites and pruning (I reckon it’s a good sign if you find yourself taking stuff out rather than putting stuff in). We’re still working on ours, but I think we’re getting close now (and have been for the last year or so :).

Web 2.0 executive summary
Thu, 02 Apr 2009

Wordle, a neat applet from IBM, generated the following output from this blog’s RSS feed.

Update: Regenerated to address stattos’s concerns about the presence of banks at the centre of the old one. I think this one might make a nice t-shirt image.

[Image: atlanticlinux-wordle]

Google Chrome – first impressions
Tue, 02 Sep 2008

I guess most of you have heard about Google Chrome by now, courtesy of the interesting comic-book marketing device (allegedly accidentally published before it was ready, hhhmmm). Some of the features and design decisions mentioned in the comic made me curious enough to keep an eye out for its release this evening. OK, it doesn’t run on Linux (yet), but it is open source (Google seem to be using the BSD license for their code in Chrome) and contains some interesting features.

The intention with Google Chrome seems to be to keep the UI clean – first impressions are that they’ve succeeded in doing that. It seems much cleaner than either IE (which I find to be irritatingly non-intuitive) or Firefox (which, while it has a lot going on, has managed to display things pretty cleanly since 3.0).

Interestingly during initial start-up, it offered to import my Firefox settings, but I didn’t see any sign of an offer to import my Internet Explorer settings – not that I would have needed it but there seems to be a statement of intent here.

A quick tour of a few of the sites that I usually visit didn’t reveal any major problems. Chrome also enforces the same kind of warning about self-signed SSL certs that Firefox 3.0 introduced, but doesn’t present quite as intimidating a warning. Performance seems pretty good, but I couldn’t think of any particularly tortuous sites that I regularly visit, so I don’t know how well it will handle heavier sites. I do miss my Adblock Plus Firefox extension though – I didn’t have time to see whether there is anything equivalent in Chrome yet, or whether you can somehow get it to use Firefox extensions (mind you, considering Google’s core business, it probably won’t be going out of its way to help us filter ads). The new tab page / home page is interesting but I’m not sure how useful it will be in the long term. I may revisit the same old pages every day more than I realise, in which case it may turn out to be a handy launch-pad.

An hour of use isn’t going to show a great deal. I’ll probably give this a test drive for a week or so before I come to any solid conclusions. Unfortunately (or maybe fortunately) most of my day-to-day activities are carried out on Linux desktops / notebooks so I won’t get to fully battle test Chrome until they release the Linux port.

First impressions though, are that Google have an interesting new browser with some nice features and that both Microsoft and Mozilla have some interesting times ahead.

Kudos to Blacknight
Fri, 22 Aug 2008

I’ve been considering moving some of our hosted domains off of our office servers for some time. We’ve been hosting our own websites (http://www.aplpi.com, http://blog.aplpi.com, http://www.atlanticlinux.ie and a few others) and email since we first moved into our existing offices over 4 years ago. In those 4 years I think we’ve had maybe 3 or 4 days of outage in total, and most of that on weekends. Our email has been backed up by DynDNS’s excellent MailHop Backup MX service, which has kept any incoming email safe while our mail servers were down.

Our office servers are, of course, Linux boxes (running Debian GNU/Linux 4.0, Postfix and SpamAssassin for email, and Apache HTTP Server for our websites) and have proved remarkably stable. To be honest, we don’t receive an earth-shattering volume of web traffic, but thanks to spammers, our mailserver gets plenty of exercise. One day last week, we suffered a major spam attack which resulted in our mailservers processing over 20,000 mails in a 24-hour period. It did take SpamAssassin a while to process the spam backlog (it missed about 300 spams out of 18,000 or so – not bad going), but our mailserver happily chugged its way through the mail in about 8 hours. Not bad for a tiny Linux server sitting at the end of a plain old DSL line.

Despite all this, I’ve been considering moving to a hosted setup for a number of reasons,

  • Incoming spam, in particular, is a big consumer of the overall capacity of our office DSL line. It makes me wonder if it wouldn’t make more sense to let an ISP who is geared up to handle this kind of junk filter out most of it for us. This gives us more bandwidth to use productively.
  • We’re due to move offices in the next couple of months. Normally, I’d handle this over a weekend and have the infrastructure back up in the new location by the next business day, but it does involve working antisocial hours and you are dependent on all of your service providers having everything set up properly beforehand. I figure if we have the critical stuff hosted, we don’t need to worry about any upheaval during a move.
  • As a Linux consulting and support company, it’s important for us to eat our own dog food when it comes to our software and services – we’ve been doing that with these pieces of Linux infrastructure for quite a while now and have learned a lot. But the time we spend managing those can now be spent on newer services and software, if we offload these services to someone else.
  • Finally, I’m curious to see how well the big guys handle these services – it’s been a few years since I’ve used any hosting companies.

Never one to rush into anything, I figured we’d start by migrating one of our domains and see how it goes from there. I keep an eye on Blacknight Solutions – they’re an Irish ISP and give good support to various Linux and open source events around Ireland. Also, their MD writes a good blog and he seems to be a Stargate Atlantis fan so they have to be a good company to work with (I suspect I won’t be getting an honorary MBA from anyone for that kind of strategic reasoning – but I’m a firm believer in trusting your gut instincts on these things). Michele recently blogged about their new hosting plans and as it happened, the time is right for us to try one out. I purchased their Minimus hosting package during the week with a view to initially migrating our atlanticlinux.ie domain over to it. If that goes well, I’ll migrate the rest over the next few weeks.

My first impressions of Blacknight’s hosting platform are very positive – they have an intuitive web interface that lets you configure pretty much everything without resorting to their support. Not only can you configure the usual web and email, but they have also included a lovely application installer which lets you install everything from blogging software to shopping cart software.

I did some testing earlier in the week and ironed out a few migration kinks (the main one being that our existing WordPress blogging system needed a PHP timeout to be extended before I could successfully export my existing blog postings from it) and bit the bullet this evening to do the migration. From start to finish, the entire process took about an hour – and most of that was time spent testing and tweaking one or two small problems. Granted, the atlanticlinux.ie website, email system and blog are pretty basic and don’t contain a lot of users – but my god, it really couldn’t have been much simpler. Well done Blacknight!

The icing on the cake for me was the WordPress migration – it took all of 10 minutes to

  • Install WordPress on the new site via the Blacknight control panel.
  • Export the WordPress blog data from our existing office WordPress installation.
  • Import the WordPress blog data to our new hosted WordPress installation.
  • And start posting new blog entries like this one.

I’ve done a few WordPress installs in the past and it is a pretty straightforward app to install, but the Blacknight system really does take the hassle out of it.

I’m not generally one to endorse products or services on our blog – but I feel good services and products should be recognised and so far Blacknight have shone in their delivery – both in the product they have and the support they have offered. In my initial testing of their services, I must have logged about 20 support tickets – most of them were answered within minutes and all of the responses I received were intelligent and helpful.

I’ll be the first to publicly complain if I receive poor service in the future, but so far I’ve been amazed by the quality of service I’ve received, especially considering the price – and no, I’m not receiving any favours to endorse the service. I’m just really impressed with it so far.

Thanks Blacknight.

Semantic Web enabled Blog
Wed, 06 Dec 2006

I was at a presentation recently from the Digital Enterprise Research Institute (DERI) on some of their current work. We do a lot of work with Semantic Web technologies with our partner Profium. Profium’s products use Semantic Web technologies in certain niches, such as the news and media industries, where the benefits of Semantic Web in managing large amounts of metadata bring clear business advantages.

Outside of such niches, I’ve found it difficult to see where or how Semantic Web technology would be adopted by the mainstream. It was great to see that the folks at DERI have been busy working on just such applications. One of their current projects is the Semantically-Interlinked Online Communities (SIOC) project, which is developing tools that will ultimately allow the islands of information in blogs, forums and mailing lists to be accessed in whatever way a person wishes, rather than requiring a person to access each source of information individually. The SIOC project will also make it easier to link information in each of these different media, or indeed to mine the information stored in various locations and create your own virtual medium with a user interface of your own creation. I think the area of community software such as forums, blogs and mailing lists is eminently suitable for Semantic Web technologies – there are massive amounts of information in such islands around the Internet; unfortunately, at the moment it is very difficult to access this information and separate the signal from the noise.

To do my bit for the nascent semantic web I’ve installed SIOC Exporter for WordPress on this blog. This plugin allows any blog using WordPress to export SIOC metadata about the blog. Wahey, Applepie Solutions is on the Semantic Web!

For other bloggers and system administrators who are interested in this, it is a very straightforward WordPress plugin to install – just follow the INSTALL document that comes with the plugin files.

The DERI folks also had a poster session where they demonstrated other practical applications, including the Semantic Radar extension for Firefox. This nifty extension scans each page you open in your browser for Semantic Web metadata (RDF) and flags the presence of such data with a little icon in the status bar. At the moment it only handles a limited number of types of metadata (including SIOC, FOAF and DOAP), but this should expand over time. It can also ping the Semantic Web Ping Service, allowing others to learn about your metadata (and the pages it describes).

It’s good to finally see some mainstream developments in the Semantic Web world – hopefully this is only the beginning.

Analysing web traffic
Wed, 14 Jun 2006

Phew – I was gonna say it’s amazing how fast a month disappears, and then I noticed that it’s more like 2 months since I posted. Good to see Jim, Albert, Rob and William have kept the blog entries coming.

One of the things I’ve been meaning to do for some time is take a look at the statistics for our website and the blog to get an idea of what kind of traffic we’re seeing and maybe divine how to improve the site to attract more of the right kind of traffic (people interested in our services). I’m using the word divine intentionally, sometimes analysing webserver statistics feels a bit like reading tea-leaves (or tasseography to the tea-leaf reading crowd).

Being something of a data packrat, I do of course have a whole year of webserver access logs stored away and ready for analysis. I’ve been using analog on and off for about 10 years to analyse webstats. Back in the day, it was very fast and produced nice detailed reports which were easily customised. I fired that up on my 12 months of web data first and it performed as well as usual.
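For a small taste of what this kind of log crunching looks like underneath (analog does far, far more), a shell one-liner can already answer simple questions like hits per day. The log lines below are fabricated purely for illustration:

```shell
# Count requests per day from Apache combined-format access log lines:
# grab the timestamp field after '[' and keep only the date portion.
cat <<'EOF' | cut -d'[' -f2 | cut -d: -f1 | sort | uniq -c
127.0.0.1 - - [14/Jun/2006:10:00:01 +0100] "GET / HTTP/1.1" 200 512
127.0.0.1 - - [14/Jun/2006:10:05:42 +0100] "GET /blog/ HTTP/1.1" 200 2048
127.0.0.1 - - [15/Jun/2006:09:12:03 +0100] "GET / HTTP/1.1" 200 512
EOF
```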

When we were reviewing the statistics afterwards and trying to identify trends, Albert pointed out that it doesn’t provide any information about visitors, focusing rather on hits or pageviews. I guess I hadn’t been keeping an eye on the state of the art in web server statistics analysis and as he said it, I realised it was true. Some of the current crop of webserver statistic tools give you more detailed information on visitors rather than focusing solely on hits. This lets you glean information such as how long people are staying on your site, what route they are following through the site and where they exit from the site.

A quick trawl through Debian’s package list turned up another likely candidate for web log analysis in awstats, which sounded like it provides functionality similar to analog plus details of unique visitors (this comparison is quite detailed). This sounded like just the ticket, so I went ahead and installed it.

The usage model for awstats is a little different to that of analog. awstats expects to be run periodically (either as a cgi-bin or from the command line) and to analyse both cached data from previous runs and new data from the latest logfile. It took a bit more effort to configure awstats to first analyse all of my archived weblogs and then parse the most recent one (you can’t throw the whole lot into the config file and let awstats decide which way to read gzipped files versus normal files) – analog takes whatever you throw at it and does the right thing. To be fair to awstats, it does document how to do this, but the ramp-up time to generating the desired reports is a bit longer than with analog. When it did finish chugging through my data, it produced a pretty decent report.
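For reference, the periodic run on Debian is usually wired up via a fragment under /etc/cron.d. A minimal version might look like the following – the awstats.pl path, schedule and config name here are assumptions for illustration, not taken from this setup, so adjust for your own install:

```
# /etc/cron.d/awstats (illustrative): refresh cached stats every 6 hours
0 */6 * * * www-data /usr/lib/cgi-bin/awstats.pl -config=www.aplpi.com -update
```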

On balance, the output from awstats is an improvement over analog in that it provides some visitor statistics. On the other hand, the default analog report reads better to me and the awstats default is possibly a little too long (you feel you have all of this information in front of you but aren’t really sure how to digest it). So I guess I’d use awstats if I need the visitor information, but as a tool I’d still have a preference for analog (especially if it ever supports visitor information).

Google threw a spanner in the works a week ago when they finally sent me an invite for Google Analytics. I requested an invitation for Google Analytics a few months ago, but it sounds like they were initially swamped with demand and have only opened things up again lately. Google Analytics takes an entirely different approach to web statistics than tools like analog and awstats. It requires you to insert a small piece of JavaScript into any page you want to track (a technique called page tagging) – any time the page is loaded, the JavaScript passes details of the page visitor back to Google Analytics for analysis. The Wikipedia web analytics entry discusses the advantages of each approach. I guess the command-line Linux hacker in me is concerned that only visitors using a browser that supports JavaScript, and has it enabled, will show up in the statistics. In practice, I guess this is a small minority for most sites, but it’s a nagging concern all the same. The privacy advocate in me is a little concerned at how much data we’re shovelling to Google – pretty soon they’ll know everything about you! But hey, the Google Analytics interface and reports look neat and I’m a sucker for new Google software anyway 🙂

We’ve only been running it for 2 weeks, so the amount of data in the Google Analytics reports is still quite minimal, but I’m definitely impressed with the interface. It gives you lots of different views of the data and includes some nice toys like a display of where in the world visitors to your site are coming from (I don’t know how accurate this can be; I haven’t gone looking at how it’s done yet). It also beats awstats in terms of how much visitor-focused information it gives you, down to where visitors are entering your site, how long they are staying around and what page they are exiting from – which is really the information I should be thinking about when it comes to redesigning our website or adding new material.

I haven’t made up my mind yet as to which tool I’ll be using in the future. Google Analytics is easy to check once every few days and gives nice information at a glance. I’ll probably still run awstats every few months for the moment, and if analog starts supporting visitor patterns I’ll probably go back to using that.

Now to figure out exactly what I want to know about our website 🙂
