useful tools – Atlantic Linux Blog
http://atlanticlinux.ie/blog
Thoughts on running an Irish Linux business

Monitoring your infrastructure – Zabbix
Tue, 14 Sep 2010
http://atlanticlinux.ie/blog/monitoring-your-infrastructure-zabbix/

Hi there – I’m afraid I’ve neglected the blog for the last few months – it’s been a busy spring and summer. I’ll try to post articles more regularly now that the evenings are closing in again!

As you grow your infrastructure, one of the growing pains you’ll encounter is how to keep an eye on how your systems are running. Sure, you can come in every morning, log in to each of your servers, maybe scan the logs and run a few commands like htop and dstat to verify that things are working OK. This approach doesn’t scale very well (it might work for 2 or 3 machines, but will be problematic with 50). So you need something to monitor your infrastructure, ideally something that will do all of the following,

  1. Monitor all of your systems and notify you if there is a “problem” on any system.
  2. Store historical data for some key performance parameters on the system (it is useful to understand what kind of loads your systems normally run and whether these loads are increasing over time).
  3. Provide 1 and 2 via an interface that is easy to configure and use.
  4. Provide a nice graphical display of this data – for scanning for performance problems.
  5. Automatically perform actions on systems being monitored in response to certain events.

There are many open source and commercial applications for monitoring systems which meet the above requirements – see Wikipedia’s Comparison of network monitoring systems for a partial list of so-called Network Monitoring / Management Systems. HP OpenView is the 800 lb gorilla of commercial network/system management tools but seems to have morphed into a whole suite of management and monitoring tools now. In Linux circles, the traditional solution for this problem has been Nagios. It has a reputation for being stable and reliable and has a huge community of users. On the other hand (based on my experiences while evaluating a number of different tools), it is configured with a series of text files which take some getting to grips with, and a lot of functionality (like graphing and database storage) is provided through plugins (which themselves require installation and configuration). I found the default configuration to be ugly and a little unfriendly, and while there is a large community to help you, the core documentation is not great. There was a fork of Nagios, called Icinga, which set out to address some of those problems – I haven’t checked how it’s progressing (but a quick look at their website suggests they have made a few releases). Kris Buytaert has a nice presentation from 2008 about some of the main open source system monitoring tools (which still seems pretty relevant).

After evaluating a few different systems, I settled on Zabbix as one which seemed to meet most of my requirements.  It is a GPL licensed network management system. One of the main reasons I went with Zabbix is because it includes a very nice, fully functional web interface. The agents for Zabbix (the part of Zabbix that sits on the system being monitored) are included in most common distributions (and while the distributions don’t always include the most recent release of the Zabbix agent, newer releases of Zabbix work well with older releases of the agents). Also, Zabbix is backed by a commercial/support entity which continues to make regular releases, which is a good sign. For those with really large infrastructures, Zabbix also seems to include a nicely scalable architecture. I only plan on using it to monitor about 100 systems so this functionality isn’t particularly important to me yet.

While our chosen distributions (Ubuntu and Debian) include recent Zabbix releases, I opted to install the latest stable release by hand directly from Zabbix – as some of the most recent functionality and performance improvements were of interest to me. We configured Zabbix to work with our MySQL database but it should work with Postgres or Oracle equally well. It does put a reasonable load on your database but that can be tuned depending on how much data you want to store, for how long and so on.

I’ve been using Zabbix in production for about 18 months now. As of this morning, it tells me it is monitoring 112 servers and 7070 specific parameters from those servers. The servers are mainly Linux servers, although Zabbix also supports monitoring Windows systems and we do have one token Windows system (to make fun of). Zabbix also allows us to monitor system health outside of the operating system level if a server supports the Intelligent Platform Management Interface (IPMI). We’re using this to closely monitor the temperature, power and fan performance on one of our more critical systems (a 24TB NAS from Scalable Informatics). Finally, as well as monitoring OS and system health parameters, Zabbix includes web monitoring functionality which allows you to monitor the availability and performance of web based services over time. This functionality allows Zabbix to periodically log into a web application and run through a series of typical steps that a customer would perform. We’ve found this really useful for monitoring the availability and behaviour of our web apps over time (we’re monitoring 20 different web applications with a bunch of different scenarios).
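As an aside on how those parameters get collected: beyond its built-in checks, the Zabbix agent can be extended with custom items via UserParameter entries in its configuration file. A rough sketch (the key name and command here are made up for illustration, not something we actually monitor):

```
# /etc/zabbix/zabbix_agentd.conf
# Format: UserParameter=<key>,<command>
# custom.swap.free is an illustrative key name, not a built-in one.
UserParameter=custom.swap.free,awk '/^SwapFree:/ {print $2}' /proc/meminfo
```

The server then requests the item by its key, just like any built-in item.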

As well as monitoring our systems and providing useful graphs to analyse performance over time, we are using Zabbix to send alerts when key services or systems become unavailable or error conditions like disks filling up or systems becoming overloaded occur. At the moment we are only sending email alerts but Zabbix also includes support for SMS and Jabber notifications depending on what support arrangements your organisation has.

On the downside, Zabbix’s best feature (from my perspective) is also the source of a few of its biggest problems – the web interface makes it really easy to begin using Zabbix, but it does have limitations and can make configuring large numbers of systems a little tiresome (although Zabbix does include a templating system to apply a series of checks or tests to a group of similar systems). While Zabbix comes with excellent documentation, some things can take a while to figure out (the part of Zabbix for sending alerts can be confusing to configure). To be fair to the Zabbix team, they are receptive to bugs and suggestions and are continuously improving the interface and addressing these limitations.

At the end of the day, it doesn’t matter so much what software you are using to monitor your systems. What is important is that you have basic monitoring functionality in place. There are a number of very good free and commercial solutions available. While it can take time to put monitoring in place for everything in your infrastructure, even tracking the availability of your main production servers can reap huge benefits – and may allow you to rectify many problems before your customers (or indeed management) notice that a service has gone down. Personally, I’d recommend Zabbix – it has done a great job for us – but there are many great alternatives out there too. For those of you reading this and already using a monitoring system – what are you using and are you happy with it?

Parallel ssh
Tue, 18 Aug 2009
http://atlanticlinux.ie/blog/parallel-ssh/

I’m increasingly working on clusters of systems – be they traditional HPC clusters running some MPI based software or less traditional clusters running software such as Hadoop’s HDFS and MapReduce.

In both cases, the underlying operating systems are largely the same – pretty standard Linux systems running one of the main Linux distributions (Debian, Ubuntu, Red Hat Enterprise Linux, CentOS, SuSE Linux Enterprise Server or OpenSUSE).

There are various tools for creating standard system images and pushing those to each of the cluster nodes – and I use those (more in a future post), but often, you need to perform the same task on a bunch of the cluster nodes or maybe all of them. This task is best achieved by simply ssh’ing into each of the nodes and running some command (be it a status command such as uptime, or ps or a command to install a new piece of software).

Normally, one or more users on the cluster will have been configured to use password-less logins with ssh so a first-pass at running ssh commands on multiple systems would be to script the ssh calls from a management cluster node.  The following is an example script for checking the uptime on each node of our example cluster (which has nodes from cluster02 to cluster20, I’m assuming we’re running on cluster01).

#!/bin/bash
# Check the uptime on each node, cluster02..cluster20.
for addr in {2..20}
do
  num=$(printf "%02d" "$addr")
  echo -n "cluster${num}: " && ssh "cluster${num}" uptime
done

The script works; the downside is that you have to create a new script each time you have a new command to run, or a slightly different sequence of actions you want to perform (you could improve the above by passing the command to be run as an argument to the script, but even then the approach is limited).
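For what it’s worth, here is a sketch of that parameterised variant, using the same illustrative hostnames and assuming the same password-less ssh setup (the script name is made up):

```shell
#!/bin/bash
# run-on-cluster: run an arbitrary command on cluster02..cluster20.
# Usage: ./run-on-cluster uptime
# Assumes password-less ssh to each node, as above.

node_name() {
  # zero-pad the node number, e.g. 2 -> cluster02
  printf "cluster%02d" "$1"
}

if [ "$#" -gt 0 ]; then
  for addr in {2..20}; do
    host=$(node_name "$addr")
    echo -n "${host}: " && ssh "$host" "$@"
  done
fi
```

It still visits the nodes one at a time, serially, which is the limitation the parallel ssh tools address.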

What you really need at this stage is a parallel ssh, an ssh command which can be instructed to run the same command against multiple nodes. Ideally, the ssh command can merge the output from multiple systems if the output is the same – making it easier for the person running the parallel ssh command to understand which cluster nodes share the same status.

A quick search through Debian’s packages and a Google for parallel ssh turns up a few candidates.

This linux.com article reviews a number of these shells.

After looking at a few of these, I’ve settled on using pdsh. Each of the tools listed above use slightly different approaches – some provide multiple xterms in which to run commands – some provide a lot of flexibility in how the output is combined. What I like about pdsh is that it provides a pretty straightforward syntax to invoke commands and, most importantly for me, it cleanly merges the output from multiple hosts – allowing me to very quickly see the differences in a command’s output from different hosts.

You will need to configure your password-less ssh operation as normal. Once you have done that, on Debian or Ubuntu,  edit /etc/pdsh/rcmd_default and change the contents of this file to a single line containing the following,

ssh

(create the file if it doesn’t exist).

Now you can run a command, such as date (to verify if NTP is working correctly) on multiple hosts with the following,

pdsh -w cluster[01-05] date

This runs date on cluster01, cluster02, cluster03, cluster04 and cluster05 and returns the output. To consolidate the output from multiple nodes into a compact display format, pdsh comes with a second tool called dshbak, used as follows,

pdsh -w cluster[01-05] date | dshbak -c

Personally, I find this output most readable. On Debian and Ubuntu systems, to invoke dshbak by default, edit /usr/bin/pdsh (which is a shell script wrapper) and change the invocation line from

exec -a pdsh /usr/bin/pdsh.bin "$@"

to

exec -a pdsh /usr/bin/pdsh.bin "$@" | /usr/bin/dshbak -c

Now when you invoke pdsh, its output will be piped through dshbak by default.
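If you’d rather not edit the packaged wrapper (a package upgrade may overwrite your change), an alternative sketch is a small shell function in your ~/.bashrc – pdshc is a made-up name:

```shell
# pdshc: run pdsh and consolidate its output through dshbak.
# Requires pdsh and dshbak on the PATH.
pdshc() {
  pdsh "$@" | dshbak -c
}
```

Then invoke it as, for example, pdshc -w cluster[01-05] date.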

sudo via ssh
Wed, 12 Aug 2009
http://atlanticlinux.ie/blog/sudo-via-ssh/

By default, if you attempt to run sudo through ssh, when you respond to the password prompt from sudo, it will echo the password back on your console. To avoid this, provide the -t option to ssh, which forces the remote session to behave as if run through a normal tty (and thus masks the password), for example,

ssh -t foo.example.com sudo shutdown -r now
Repartitioning modern Linux systems without reboot
Fri, 17 Apr 2009
http://atlanticlinux.ie/blog/repartitioning-modern-linux-systems-without-reboot/

This one is for my own future reference as much as anything. Ever since the move to udev in Linux 2.6, I’ve found it necessary to do the very un-Linux-like thing of rebooting after changing a partition table before the appropriate device appeared under /dev. This was only an occasional hassle but still, you shouldn’t need to reboot Linux for such a thing.

Thanks to Robert for his Google magic in turning up partprobe, part of the GNU Parted package. As the Debian man page for partprobe says

partprobe is a program that informs the operating system kernel of partition table changes, by requesting that the operating system re-read the partition table.

Excellent! Parted is normally installed on Debian and Ubuntu by default anyway; if not, simply run aptitude install parted and you’ll have access to the excellent partprobe.

We were trying to add some additional swap to a running system; the full series of commands needed is as follows (I could have used parted to create the partitions, but the cfdisk tool has a nice interface),

  1. sudo cfdisk /dev/sda (and create new partition of type FD, Linux RAID)
  2. sudo cfdisk /dev/sdb (and create new partition of type FD, Linux RAID)
  3. sudo partprobe
  4. sudo mdadm --create /dev/md3 -n 2 -x 0 -l 1 /dev/sda4 /dev/sdb4 (our swap devices are software RAID1 devices)
  5. sudo /etc/init.d/udev restart (this updates /dev/disk/by-uuid/ with the new RAID device)
  6. sudo mkswap /dev/md3
  7. sudo vi /etc/fstab (and add a new entry for /dev/md3 as a swap device)
  8. sudo swapon -a (to activate the swap device)
  9. sudo swapon -s (to verify it is working)
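To double-check step 9 in a script rather than by eye, one rough approach is to read SwapTotal from /proc/meminfo; the helper name below is made up:

```shell
# swap_total_kb: print the SwapTotal figure (in kB) from a
# /proc/meminfo-style file. On a live system, pass /proc/meminfo
# and compare the value before and after swapon -a.
swap_total_kb() {
  awk '/^SwapTotal:/ {print $2}' "$1"
}
```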
Subversion sparse checkouts
Tue, 07 Apr 2009
http://atlanticlinux.ie/blog/subversion-sparse-checkouts/

I’ve been using Subversion for a few years now but, as with lots of technology I work with, I’ve learned enough about it to do the job I need to do but I’ve never dug into it exhaustively. It turns out a nice feature called sparse checkouts was introduced in Subversion 1.5. With Subversion, you can either create one repository for each project or use a single repository for multiple projects. I like using a single repository for multiple projects, but there are advantages and disadvantages to both approaches and it’s yet another source of religious debate and flamage, so I won’t suggest which would suit your needs best.

One of the disadvantages of using a single repository for multiple projects is that any time you want to check out part of your repository, you either have to do something like this,

svn checkout http://www.example.com/svn/myrepo

to check out the whole repository (and if it’s a big repository, and you’re on a slow connection, you get to watch the world wide wait in action) or something like this

svn checkout http://www.example.com/svn/myrepo/oneofmyprojects

to just check out a teensy part of your repository, which should happen faster than the former approach. The disadvantage of the second approach is that you end up with only part of the repository checked out, and if you want another part in the future, you’ll have to check that out separately like

svn checkout http://www.example.com/svn/myrepo/anotheroneofmyprojects

Pretty soon, you’ll have a directory full of separately checked out projects, each of which you have to individually svn update, svn commit and so on. Hey, it starts looking like you have one repository for each project. Ideally, what you want to be able to do is to check out your entire repository but only the bits you are interested in, while keeping the option open of checking out other parts in the future and managing them all as the one repository that they are. Sparse checkouts introduced this functionality.

With svn’s sparse directory support, you can do the following,

svn checkout --depth=immediates http://www.example.com/svn/myrepo

This checks out the myrepo repository, but only to a depth of 1, that is, all files and directories immediately under myrepo but not any further subdirectories and files. So a directory listing of your checked out repository might look like,

oneofmyprojects/
anotheroneofmyprojects/
README.txt

This gives you an overview of the myrepo hierarchy without pulling all the files. Furthermore, it is sticky – any subsequent svn update commands you run will honour the scope you set in the first checkout.

If you now want to flesh out parts of the tree, you can do the following

svn update --set-depth=infinity myrepo/oneofmyprojects

This updates the contents of myrepo/oneofmyprojects with all children (files and subdirectories), ensuring you have a full copy of that part of the repository. If you subsequently run an svn update in myrepo, the behaviour for oneofmyprojects continues to be sticky and will result in an update of all its files and subdirectories (while not checking out the children of any of the other myrepo top-level directories).

Unfortunately, you cannot check out a directory with depth=infinity and then update it to a reduced depth (the behaviour only works in the direction of increasing depth for now).

More detail is available at http://svnbook.red-bean.com/en/1.5/svn.advanced.sparsedirs.html

I took a quick look at TortoiseSVN (a very nice graphical Subversion client for Windows) and if you do an SVN checkout it has an option for Checkout Depth which I’m guessing provides the same functionality (but I haven’t tested it).

Converting Openoffice Calc spreadsheet to image
Thu, 02 Apr 2009
http://atlanticlinux.ie/blog/converting-openoffice-calc-spreadsheet-to-image/

Just a quick tip, if you have a spreadsheet or part of a spreadsheet in Openoffice Calc which you want to convert to an image (for use in a web page or something).

I was initially dismayed to find that Export and Save As in Openoffice calc only support other spreadsheet formats, XHTML and PDF (to be fair, the XHTML is pretty clean compared to that output by Microsoft Office but it wasn’t what I wanted in this case).

After some playing around, I figured out a pretty easy way of converting my spreadsheet to an image. I highlighted the table in Openoffice calc and clicked on Edit / Copy. Then I started the GNU Image Manipulation Program (GIMP) – I guess other graphics programs should work equally well – and clicked on File / Acquire / Paste as New and voilà – the spreadsheet appeared in a new GIMP window ready to be saved in whatever graphics format you wish.

(I did this on a Linux system, I’d be curious to know if the same works on Windows).

What a difference a Gig makes
Tue, 14 Oct 2008
http://atlanticlinux.ie/blog/what-a-difference-a-gig-makes/

We’re working on a project at the moment that involves deploying various Linux services for visualising oceanographic modelling data using tools such as Unidata’s THREDDS Data Server (TDS) and NOAA/PMEL’s Live Access Server (LAS). TDS is a web server for making scientific datasets available via various protocols, including plain old HTTP, OPeNDAP (which allows subsets of the original datasets to be accessed) and WCS. LAS is a web server which, using sources such as an OPeNDAP service from TDS, allows you to visualise scientific datasets, rendering the data overlaid onto world maps and allowing you to select the particular variables from the data which you are interested in. In our case, the datasets are generated by the Regional Ocean Modeling System (ROMS) and include variables such as sea temperature and salinity at various depths.

The data generated by the ROMS models we are looking at uses a curvilinear coordinate system – to the best of my understanding (and I’m a Linux guy, not an oceanographer, so my apologies if this is a poor explanation), since the data is modelling behaviour on a spherical surface (the Earth) it makes more sense to use a curvilinear coordinate system. Unfortunately, some of the visualisation tools, in particular LAS, prefer to work with data using a regular or rectilinear grid. Part of our workflow involves remapping the data from curvilinear to rectilinear using a tool called Ferret (also from NOAA). Ferret does a whole lot more than regridding (and is, in fact, used under the hood by LAS to generate a lot of its graphical output) but in our case, we’re interested mainly in its ability to regrid data from one gridding system to another. Ferret is an interesting tool/language – an example of the kind of script required for regridding is this one from the Ferret examples and tutorials page. Did I mention we’re not oceanographers? Thankfully, someone else prepared the regridding script; our job was to get it up and running as part of our workflow.

We’re nearly back to the origins of the title of this piece now, bear with me!

We’re using a VMware virtual server as a test system. Our initial deployment was a single processor system with 1 GB of memory. It seemed to run reasonably well with TDS and LAS – it was responsive and completed requests in a reasonable amount of time (purely subjective, but probably under 10 seconds if Jakob Nielsen’s paper is anything to go by). We then looked at regridding some of the customer’s own data using Ferret and were disappointed to find that an individual file took about 1 hour to regrid – we had about 20 files for testing purposes and in practice would need to regrid 50-100 files per day. I took a quick look at the performance of our system using the htop tool (like the traditional top tool found on all *ix systems but with various enhancements and very clear colour output). There are more detailed performance analysis tools (including Dag Wieers’ excellent dstat) but sometimes I find a good high-level summary more useful than a sea of numbers and performance statistics. Here’s a shot of the htop output during a Ferret regrid,

[Screenshot: high kernel load in htop]

What is interesting in this shot is that

  • All of the memory is used (and in fact, a lot of swap is also in use).
  • While running the Ferret regridding, a lot of the processor time is being spent on kernel activity (red) instead of normal user (green) activity.

High kernel (or system) usage of the processor is often indicative of a system that is tied up doing lots of I/O. If your system is supposed to be doing I/O (a fileserver or network server of some sort) then this is good. If your system is supposed to be performing an intensive numerical computation, such as here, we’d hope to see most of the processor being used for that compute intensive task, and a resulting high percentage of normal (green) processor usage. Given the above it seemed likely that the Ferret regridding process needed more memory in order to efficiently regrid the given files and that it was spending lots of time thrashing (moving data between swap and main memory due to a shortage of main memory).
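A quick way to confirm this kind of thrashing from the command line is to watch vmstat’s si and so (swap-in/swap-out) columns – fields 7 and 8 of its data lines – and see whether they stay non-zero. A sketch, with a made-up helper name:

```shell
# Capture several vmstat samples, then sum the si and so columns;
# a persistently non-zero total means pages are moving between RAM
# and swap, i.e. the workload does not fit in main memory.
swap_activity() {
  # skip vmstat's two header lines, then sum si ($7) + so ($8)
  awk 'NR > 2 { sum += $7 + $8 } END { print sum + 0 }' "$1"
}

# e.g.:  vmstat 5 6 > /tmp/vmstat.out && swap_activity /tmp/vmstat.out
```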

Since we’re working on a VMware server, we can easily tweak the settings of the virtual server and add some more processor and memory. We did just that after shutting down the Linux server. We restarted the server and Linux immediately recognised the additional memory and processor and started using that. We retried our Ferret regridding script and noticed something interesting. But first, here’s another shot of the htop output during a Ferret regrid with an additional gig of memory,

[Screenshot: htop showing mostly user processor time]

What is immediately obvious here is that the vast majority of the processor is busy with user activity – rather than kernel activity. This suggests that the processor is now being used for the Ferret regridding, rather than for I/O. This is only a snapshot and we do observe bursts of kernel processor activity still, but these mainly coincide with points in time when Ferret is writing output or reading input, which makes sense. We’re still using a lot of swap, which suggests there’s scope for further tweaking, but overall, this picture suggests we should be seeing an improvement in the Ferret script runtime.

Did we? That would be an affirmative. We saw the time to regrid one file drop from about 60 minutes to about 2 minutes. Yes, that’s not a typo, 2 minutes. By adding 1 GB of memory to our server, we reduced the overall runtime of the operation by 97%. That is a phenomenal achievement for such a small, cheap change to the system configuration (1GB of typical system memory costs about €50 these days).

What’s the moral of the story?

  1. Understand your application before you attempt tuning it.
  2. Never, ever tune your system or your application before you understand where the bottlenecks are.
  3. Hardware is cheap, consider throwing more hardware at a problem before attempting expensive performance tuning exercises.

(With apologies to María Méndez Grever and Stanley Adams for the title!)

Stress testing a PC revisited
Thu, 25 Sep 2008
http://atlanticlinux.ie/blog/stress-testing-a-pc-revisited/

I’m still using mostly the same tools for stress testing PCs as when I last wrote about this topic. memtest86+ in particular continues to be very useful. In practice, the instrumentation in most PCs still isn’t good enough to identify which DIMM is failing most of the time (mcelog sometimes makes a suggestion about which DIMM has failed and EDAC can also be helpful, but in my experience there is lots of hardware out there which doesn’t support these tools well). The easiest approach I’ve found to date is to take out one DIMM at a time and re-run memtest86+ – when the errors go away you’ve found your problematic DIMM; put it back in again and re-run to make sure you’ve identified the problem. If you keep getting the errors regardless of which DIMMs are installed, you may be looking at a problem with the memory controller (either on the processor or the motherboard, depending on which type of processor you are using) – if you have identical hardware, you should look at swapping the components into that for further testing.

Breakin is a tool recently announced on the Beowulf mailing list which looks like it has a lot of potential too, and I plan on adding it to my stress testing toolkit the next time I encounter something which looks like a possible hardware problem. What looks nice about Breakin is that it tests all of the usual suspects, including processor, memory and hard drives, and it includes support for temperature sensors, MCE logging and EDAC. This is attractive from the perspective of being able to fire it up, walk away and come back to check on progress 24 hours later.

Finally, we’ve found the Intel MPI Benchmarks (IMB, previously known as the Pallas MPI benchmark) to be pretty good at stress testing systems. Anyone conducting any kind of qualification or UAT on PC hardware, particularly hardware intended to be used in HPC applications, should definitely be including an IMB run as part of their tests.
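As a rough illustration, a typical run might look like the following (assuming a working MPI environment with the IMB-MPI1 binary built from the benchmark sources; the host file and rank count here are illustrative, not a recommendation):

```
# run the IMB-MPI1 benchmark suite across 16 ranks spread over the cluster
mpirun -np 16 -hostfile ./hosts ./IMB-MPI1
```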

Viruses and Malware on Windows
Tue, 09 Sep 2008
http://atlanticlinux.ie/blog/viruses-and-malware-on-windows/

Here I am writing about Windows – if I’m not careful, I’ll have to rename this blog to Thoughts on Windows. What’s the Linux angle here? I guess I’m the smug Linux user poking fun at Windows or something along those lines (but don’t leave just yet if you’re one of those smug Windows users, I’d be interested in your thoughts on the following).

Two unrelated events inspired this piece. I came across an interesting blog recently comparing the performance of various anti-virus products against a number of items of malware. I haven’t come across the guys behind it, InfraGard, before, but given their links to the FBI they seem to have some credibility, so I’m assuming their testing methodologies are reasonably reliable.

Three things struck me about that blog,

  • AVG does a pretty good job of protecting Windows systems from malware and viruses (I know I’m starting to sound like an AVG fan-boy between this and my previous references to it).
  • Some of the “leading” anti-virus programs / suites are pretty poor at protecting Windows systems (not to mention the fact that they interfere with the operation of your computer).
  • You can’t rely on any anti-virus software to fully protect your Windows system.

That’s about the point where I become the smug Linux user, up until the point where I remembered that I have to look after my share of Windows systems both in our offices and for friends and family. This brings me on to the second recent event which inspired this piece.  A friend running Windows Vista had recently started getting worrying messages about things called Trojan-Spy.Win32.KeyLogger.aa trying to send traffic from his PC and wanted to know if he should be worried. “Probably”, I said and took a look at his system.

In the past, my toolbox for a healthy Windows PC would include the aforementioned AVG and, if I had concerns about spyware, Spybot – Search & Destroy – another great Windows tool that is free for non-commercial use. Between those two tools, I could be pretty confident that a Windows machine was running clean of any malicious software. So I installed and ran both on my friend’s PC – multiple times! Spybot even suggested running immediately after start-up as Administrator so that it could ferret out as much dodgy malware as possible. A few hours later, we were still being entertained by messages from Windows about our good friend Trojan-Spy.Win32.KeyLogger.aa (and maybe some others) which hadn’t even been detected by AVG or Spybot, never mind removed by them.

Some research on the interweb turned up posts and comments from various people who had encountered this particular trojan, and by all accounts it’s a tough one to remove. I was on the verge of suggesting an OS re-install (taking inspiration from Aliens – sometimes nuking the system from orbit is the only way to be sure), possibly in tandem with a Linux install to forever banish such nasties, when I came across some references to another tool called Superantispyware which some recommended as the antidote to Trojan-Spy.Win32.KeyLogger.aa. With a name like that, it had to be good at dealing with spyware, right? I figured it was worth a shot before we tried something more drastic, particularly since there is a free-for-non-commercial-use version available. One download and install later, it kicked off and immediately warned us about some spyware it had found (either our friend the KeyLogger or another, as yet unknown, piece of spyware). After half an hour or so, it had finished a scan and proceeded to remove or quarantine all of the various pieces of spyware it had turned up. We booted the system once more, re-ran AVG and Spybot S&D and didn’t get any more warnings about Trojan-Spy.Win32.KeyLogger.aa trying to send data off the system. My friend was happy enough that the system was clean. Me? I’d probably still go and re-install the OS before putting my credit card details near the computer again (to be sure, to be sure) but the odds are it is clean – for which we probably have Superantispyware to thank.

So, what are our conclusions?

  • (With my smug Linux hat on once more) – consider installing and running Linux for your home desktop – a distribution such as the latest Ubuntu will provide all the software you need for typical day to day surfing, emailing and word-processing and won’t leave you open to half of this stuff (you’ll still be susceptible to phishing attacks and cross-site scripting attacks but you’ll be automatically eliminating a whole world of viruses, keyloggers and trojans which won’t ever run on a Linux system).
  • If you must run Windows, make sure you install some decent software to protect you – start with AVG, Spybot S&D (and maybe Superantispyware) – or leave a comment to tell us about other useful ones.
  • If you're running Windows, do not use the Administrator account for your day-to-day activities, and don't set up an alternative account with administrator privileges either – that kinda defeats the purpose. I know it's a pain in the ass when you want to install some new software, but trust me, it'll be a bigger pain in the ass when someone starts buying things from iTunes with your credit card.
  • Don't click on things that you don't understand, and don't install stuff from random web-pages, even if they do tell you it's for your security. C'mon – if some random stranger came to your door and told you he needed to "install something" in your bedroom "for your security", you'd slam the door in his face before calling the police. Why would you react differently to a stranger on the internet?
  • Finally, the bad news is that the email you just received claiming to contain a red-hot picture of Britney or Christina in a compromising position … well, it probably doesn't (I know, if some international criminal ring is going to take over your computer for nefarious purposes you'd think they'd at least give you a naughty picture to take your mind off things, but I'm afraid they generally don't play fair), so don't click on the attached zip-file.
Google Chrome – first impressions http://atlanticlinux.ie/blog/google-chrome-first-impressions/ Tue, 02 Sep 2008

I guess most of you have heard about Google Chrome by now, courtesy of the interesting comic-book marketing device (allegedly accidentally published before it was ready, hhhmmm). Some of the features and design decisions mentioned in the comic made me curious enough to keep an eye out for its release this evening. OK, it doesn't run on Linux (yet), but it is open source (Google seem to be using the BSD license for their code in Chrome) and it contains some interesting features.

The intention with Google Chrome seems to be to keep the UI clean – first impressions are that they've succeeded in doing that. It seems much cleaner than either IE (which I find to be irritatingly non-intuitive) or Firefox (which, despite having a lot going on, manages to display things pretty cleanly since 3.0).

Interestingly, during initial start-up it offered to import my Firefox settings, but I didn't see any sign of an offer to import my Internet Explorer settings – not that I would have needed it, but there seems to be a statement of intent here.

A quick tour of a few of the sites that I usually visit didn't reveal any major problems. Chrome also enforces the same kind of warning about self-signed SSL certs that Firefox 3.0 introduced, but doesn't present quite as intimidating a warning. Performance seems pretty good, but I couldn't think of any particularly tortuous sites that I regularly visit, so I don't know how well it will handle heavier sites. I do miss my Adblock Plus Firefox extension though – I didn't have time to see whether there is anything equivalent in Chrome yet, or whether you can somehow get it to use Firefox extensions (mind you, considering Google's core business, it probably won't be going out of its way to help us filter ads). The new-tab page / home page is interesting, but I'm not sure how useful it will be in the long term. I may revisit the same old pages every day more than I realise, in which case it may turn out to be a handy launch-pad.
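As an aside, if you want to see for yourself how each browser presents these warnings, it's easy to generate the kind of self-signed certificate that triggers them. Here's a minimal sketch using the standard openssl command-line tool (the filenames and the `localhost` common name are just placeholders):

```shell
# Create a self-signed certificate and key -- no trusted CA has vouched
# for it, which is exactly what the browser warnings are about.
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
    -days 365 -nodes -subj "/CN=localhost"

# For a self-signed cert the subject and issuer are the same entity.
openssl x509 -in cert.pem -noout -subject -issuer
```

Point a local web server at the resulting cert and key, visit it over HTTPS in each browser, and you can compare the warning pages side by side.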

An hour of use isn’t going to show a great deal. I’ll probably give this a test drive for a week or so before I come to any solid conclusions. Unfortunately (or maybe fortunately) most of my day-to-day activities are carried out on Linux desktops / notebooks so I won’t get to fully battle test Chrome until they release the Linux port.

First impressions though, are that Google have an interesting new browser with some nice features and that both Microsoft and Mozilla have some interesting times ahead.
