Phil Perry

June 28th, 2009

Over the past six months or so we've been working really hard on ELRepo - a community Enterprise Linux repository for hardware drivers.

One of the main strengths of Enterprise Linux is its stability and long-term support. However, this can also lead to a lack of hardware support where the upstream vendor is sometimes slow to backport the latest advancements. Linux in general has made tremendous progress in the last few years in terms of hardware support, to the point where virtually every device conceivable is now supported in the mainline kernel. However, given the nature of an Enterprise release, such advancements can sometimes be slow to filter down the line, and this is an area previously lacking any real coordinated effort. If smaller organisations and end users are also to benefit from Enterprise Linux (RHEL) and its rebuilds (CentOS, Scientific Linux), then it's important we ensure that these products support the latest modern hardware without destabilizing their kernel.

One of the great features of the Linux kernel is its modular nature. This makes it possible to backport virtually any driver as a kernel module (kmod) into the current kernel. For example, if your network card, sound chipset or webcam isn't supported, you can simply load an updated driver into your current kernel, thus retaining the stability of your Enterprise product. ELRepo has built an enviable collection of drivers for Enterprise Linux including filesystem, graphics, hardware monitoring, network, sound and webcam drivers. Some are backported directly from upstream projects (e.g., ALSA, Video4Linux), whilst others are backported from the mainline kernel (e.g., coretemp, it87) or direct from the vendor (e.g., Intel and Realtek NIC drivers). All are packaged as kABI-tracking kmods so they don't need to be rebuilt against each new kernel.

ELRepo is designed to be compatible with Enterprise Linux (RHEL) and its rebuilds (CentOS, Scientific Linux) and not to conflict with other 3rd party repositories (e.g., RPMForge).

lm_sensors and coretemp on CentOS 5.3

March 11th, 2009

In a previous entry I talked about the importance of reading release notes. One of the things I wanted to test in the CentOS 5.3 QA was the rebase of lm_sensors to 2.10.7. This is relevant to my coretemp kmod package, which requires lm_sensors >= 2.10.2; CentOS 5.2 shipped with 2.10.0, thus requiring an update to work. So I was kind of hoping that with the rebase of lm_sensors to 2.10.7 in CentOS 5.3, kmod-coretemp would Just Work.

So let's look at what the upstream release notes have to say, and I quote:

"lm_sensors needs the kernel module coretemp.ko to sense temperatures of certain Intel processors including the Core 2 Duo processor family and Core 2 Solo processor family. Although drivers to support the temperature sensors of these processors are included in Red Hat Enterprise Linux 5, the supporting kernel module is not included. This means that while the "sensors-detect" command will report that the "Intel Core family thermal sensor" "detects correctly", the "sensors" command will report "No sensors found!" The drivers for the temperature sensors of these Intel processors are being removed until a future kernel update includes the coretemp.ko module (which will also require an update of lm_sensors). With these drivers removed, "sensors-detect" will no longer falsely appear to detect the temperature sensors of these processors."

OK, that doesn't sound too good. The release notes suggest the lm_sensors driver for coretemp has been removed from the upstream package.

However, it's actually not that bad. Examination of the upstream source reveals they haven't actually removed the driver from lm_sensors, only the ability for sensors-detect to detect the presence of the Intel Core thermal sensor. What does this mean? Well, it means that if you already have coretemp set up and working under 5.2 then updating lm_sensors in CentOS 5.3 will be fine and all will continue to work - nothing breaks. On the other hand, if you try to install and configure the coretemp sensor for the first time on CentOS 5.3, sensors-detect will fail to detect the Intel Core thermal sensor. To this end, I've patched lm_sensors to reinstate detection of the Intel Core thermal sensor. Updated packages can be downloaded here.

Now when we run sensors-detect we see the correct detection of the Intel Core family thermal sensor:

[root@Quad]# sensors-detect

Some south bridges, CPUs or memory controllers may also contain
embedded sensors. Do you want to scan for them? (YES/no): yes
Silicon Integrated Systems SIS5595...                     No
VIA VT82C686 Integrated Sensors...                        No
VIA VT8231 Integrated Sensors...                          No
AMD K8 thermal sensors...                                 No
AMD K10 thermal sensors...                                No
Intel Core family thermal sensor...                       Success!
    (driver `coretemp')
Intel AMB FB-DIMM thermal sensor...                       No

and now life is again good...

[phil@Quad ~]$ rpm -q lm_sensors

[phil@Quad ~]$ sensors
Adapter: ISA adapter
Core 0:  +25°C (high = +100°C)

Adapter: ISA adapter
Core 1:  +24°C (high = +100°C)

Adapter: ISA adapter
Core 2:  +21°C (high = +100°C)

Adapter: ISA adapter
Core 3:  +21°C (high = +100°C)

I guess the next thing on the ToDo list is to set up a repo and get the packages signed, given that there's little hope of getting these packages into an official (or 3rd party) repo because of the dependency on a patched lm_sensors.

Safer remote updates with screen

January 26th, 2009

With CentOS 5.3 due out shortly, many users will be performing the update. For those without physical access to their servers, updates are something that must be performed remotely, normally over an ssh login. However, for an update as large as a point release, where upwards of a couple hundred packages may need updating, this presents some potential dangers. What if your ssh session fails mid update? The exact answer will depend on the point in the update process at which the session failed, but the general gist is that you'll most likely end up with a pretty broken system that needs some major repair work to complete the update.

So here are some tips to ease the potential pain:

  • Read the release notes before updating
  • Use a screen session so you can reattach the session if a remote ssh login should fail
  • Split the update into smaller chunks to minimize damage (and repair time) should anything unforeseen go wrong

First up, read the release notes. There are a few highlighted gotchas that may or may not apply to you.

If you're updating remotely, perform the update from within a screen session so that if the remote ssh session fails you can log back in and reattach the session without losing anything. For those not familiar with screen, simply start 'screen' after logging in, su to root and 'yum update' as normal. Should the ssh connection fail, just log back in again and reattach the screen session with 'screen -d -r'.

[phil@Quad ~]# ssh
[phil ~]# screen
[phil ~]# su -
[root ~]# yum update
#### connection lost! ####
[phil@Quad ~]# ssh
[phil ~]# screen -r -d
#### phew - she's still alive ####
[root ~]# exit
[phil ~]# exit

If you read the release notes in the first part, you will have noticed an issue with glibc that requires this package to be updated first. The release notes suggest:

[root@Quad ~]# yum update glibc
[root@Quad ~]# yum update

However, you can take this concept a stage further and split updates into smaller chunks thus minimizing the potential for damage (and repair time) should anything unforeseen go wrong. For example, we may like to split updates into logical groups such as the kernel, package management components, compiler tools, etc. How you split your updates will depend on what you have installed, but it may look something like this:

[root@Quad ~]# yum update glibc
[root@Quad ~]# yum update kernel\*
[root@Quad ~]# yum update yum\* rpm\* python\*
[root@Quad ~]# yum update gcc\* make\* automake\*
[root@Quad ~]# yum update open\*
[root@Quad ~]# yum update xorg\* gnome\* kde\* arts\*
[root@Quad ~]# yum update lib\*
[root@Quad ~]# yum update

and finally finishing with a 'yum update' to pull in the remaining updates.
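As an aside, the backslash in patterns like kernel\* simply stops the shell from expanding the glob against files in the current directory; yum receives the pattern literally and matches it against package names itself, much as the shell would. A quick sketch of that matching logic (the package names are just illustrative):

```shell
# yum matches an unexpanded glob such as kernel* against package
# names; the shell's own case pattern matching behaves the same way
for pkg in kernel kernel-devel kernel-headers glibc; do
    case "$pkg" in
        kernel*) echo "$pkg would be updated" ;;
        *)       echo "$pkg would be left alone" ;;
    esac
done
```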

Security through obscurity and SSH

January 16th, 2009

Security through obscurity is generally considered to be a bad thing, especially when it relates to bugs in software or encryption methodologies. It is not about making a product or service more secure, but rather hiding some detail or vulnerability in the hope that no one finds it.

However, we can sometimes use the principle to good effect so long as we understand the limitations. Take securing the commonly used SSH service, for example. We often recommend running the SSH service on a non-standard port as a way to prevent the vast majority of brute-force attempts to crack system accounts. Anyone who operates an SSH server on the standard tcp port 22 will be very familiar with the many logged attempts to gain access and simply moving SSH to a non-standard port will eliminate the vast majority of these attempts. Why does this work? Well, the bad guys will typically scan a network segment for IP addresses running SSH services on port 22 and then run automated tools against identified hosts to attempt to gain access using commonly used username/password combinations. How successful are they? If they weren't getting some success then they wouldn't still be doing it. If you think your SSH server won't get detected or noticed amongst the millions of others out there then you are wrong. The average IP address will get probed on port 22 around 3-5 times per day. Interestingly, recent studies estimate that any reasonably sized botnet is capable of scanning the entire IPv4 address space for a single port in a single day so the bad guys have the capability to locate every single SSH server on the Internet running on port 22 in a day. That's scary!

By moving SSH to a non-standard port we are hiding our needle in the proverbial haystack of 65,535 available ports. But it's important to understand that it's still security through obscurity as we haven't improved the inherent security of the service, just made it harder to find. The popular nmap network scanning tool will by default scan about 1600 of those ports. However, if a determined attacker or one launching a targeted attack decides to probe your IP address they will still find the hidden SSH service if they happen to knock on the right port:

[root@Quad ~]# nmap -sS -sV -p 2222 -P0

Starting Nmap 4.11 ( ) at 2009-01-15 23:59 GMT
Interesting ports on (
2222/tcp open ssh OpenSSH 4.3 (protocol 2.0)

Nmap finished: 1 IP address (1 host up) scanned in 0.095 seconds

So picking a non-standard port a little less obvious than port 2222 is probably a good idea!
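For completeness, moving the real SSH service is a one-line change to the daemon's configuration, followed by a restart of sshd. The port number below is purely an illustration - pick your own obscure value:

```
# /etc/ssh/sshd_config - move the real service off port 22
# (34567 is an arbitrary example; choose your own)
Port 34567
```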

Taking it a stage further

However, we can take the concept a stage further. Besides hiding our SSH service on a non-standard port we can also run a second instance of the service on the standard port. Why would we possibly want to do that, I hear you ask? Well, the second instance running on the standard port can be configured as a dummy service on which it is impossible to log in. The principle here is that we show attackers a door, the wrong door, in the hope that they will then hammer away at it rather than go looking for the real door. An example of how to run a second instance of SSH is detailed here. Once the second instance is set up it should be deliberately (mis-)configured to not allow logins. The simplest way to achieve this is probably with the AllowUsers directive by specifying a non-existent user (you MUST specify something - a blank AllowUsers will allow logins from ALL users):

# /etc/ssh/sshd_dummy_config

Port 22
PermitRootLogin no
AllowUsers none

Here we configure the dummy SSH daemon to run on the standard port 22, to disallow root logins (always a good idea) and to only allow logins from the user 'none', which (presumably) doesn't exist on the system. Now anyone attempting to gain access will be invited to enter their password, none the wiser that logging in is impossible because no valid username/password combination exists.

Hopefully I have shown how, with a little creative thinking, we can use security through obscurity to our advantage and trick attackers into hammering away on the wrong door. Show the attacker what they are expecting to see whilst hiding away the real entry point. In addition, we can also use the dummy service to log failed attempts and gain valuable insight into those trying to access our systems. But of course it's still security through obscurity.

MD5 collision vulnerability could lead to SSL certificate forgery

December 30th, 2008

Today researchers at the CCC congress in Berlin announced weaknesses in the MD5 hash algorithm used by some SSL certificates. Using these weaknesses, an attacker could obtain fraudulent SSL certificates for websites they don’t legitimately control.

SSL certificates are commonly used for the HTTPS protocol and serve two main purposes:

  • to confirm the authenticity of the website being visited
  • to encrypt communications between the website and user

So, for example, when you visit your online bank the certificate used by your bank firstly confirms to you that it really is your bank that you're communicating with and not some phishing site pretending to be your bank, and secondly the communications between the bank and you are encrypted so bad guys can't intercept your transactions and discover your details.

It's all about trust

SSL certificates are widely used on the web for anything requiring trust or encryption. SSL certificates are signed by a trusted certificate authority (CA) known as a root CA. Root CA certificates are bundled with your browser so that any certificate signed by a root CA is automatically trusted by your browser. We place our trust in these bodies (for example, VeriSign, Thawte, etc.) to make sure they only sign certificates for people once they have confirmed that they are indeed the rightful owners of that site. For example, you or I couldn't simply obtain a certificate for someone else's site, set up a fake website and go phishing. Or at least we couldn't if users noticed that the site wasn't using the secure HTTPS protocol or had an invalid certificate.

The root of the problem

The root of the problem is that many root CAs still use an old, insecure hashing algorithm (MD5) to sign certificates. In today's announcement, researchers disclosed that they had been able to exploit weaknesses in the MD5 hashing algorithm such that they could produce fake root CA certificates, thus allowing them to sign their own fake certificates for any site that would automatically be trusted by all web browsers.

This has huge implications because users can potentially no longer trust certificates. I say potentially, because of a number of important mitigating factors. Firstly, there is no evidence of this being exploited in the wild, so we can conclude it is still a new discovery. Secondly, the researchers disclosed the weakness responsibly - they have yet to reveal full technical details as to how to exploit it. Thirdly, the researchers did not simply stumble across the weakness, but rather are a group of highly skilled mathematicians, cryptographic experts and programmers. It is unlikely that many will have the technical expertise to figure out the details even after this disclosure. Lastly, significant computing resources were required to compute the MD5 hash collision - it took the researchers 3 days of computing time on a 200-machine cluster. The last point is somewhat moot as bot masters often have thousands of PCs under their control, so once technical details are in the public domain the threat becomes very real very quickly.

The implication of this weakness is that users can no longer be 100% sure of the authenticity of the website being visited. Communications will still be encrypted; you just may not know who you are communicating with. Combined with the major DNS flaw discovered by Dan Kaminsky (see 22nd July, 2008 entry), phishers now have everything they need to produce phishing sites that are virtually undetectable even to the most tech-savvy, vigilant users. Furthermore, SSL certificates are not just used for the HTTPS protocol, so other protocols using SSL/TLS are also vulnerable to this weakness. For example, SSL-based VPNs and SSL/TLS-encrypted SASL and/or POP3S email are also vulnerable (SSH is not affected).


This is not an easy threat to mitigate. Vendors such as Microsoft and Mozilla have been quick to point out that this is not a bug or vulnerability in their software, nor is it a vulnerability in the protocols that use SSL. Rather, it is a weakness in the way root CAs are signing certificates. A more secure hashing algorithm (SHA1) is available and root CAs are urged to stop using MD5 hashing and switch to SHA1 immediately. In the meantime there is little the end user can do other than stay vigilant and insist that any certificates you purchase for your own sites are not signed with MD5 hashing. Even so, that doesn't mitigate the fact that anyone could still fake a certificate for your site (or any other) regardless of the hashing algorithm used on the genuine certificate.
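As a practical check, openssl can show you which algorithm was used to sign any certificate you hold. The sketch below generates a throwaway self-signed certificate (the subject name is a dummy value) and then inspects it; the same x509 inspection works on a certificate issued by a CA, where an md5WithRSAEncryption line would be the red flag:

```shell
# create a throwaway self-signed certificate, then print the hash
# algorithm that was used to sign it
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example.test" \
        -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem -days 1 2>/dev/null
openssl x509 -in /tmp/demo-cert.pem -noout -text | grep 'Signature Algorithm'
```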

Choosing a webcam for CentOS

December 1st, 2008
QuickCam Pro 9000

Finding a webcam that will work under Linux has always been a bit of a lottery. Sure, things are slowly improving, and drivers for many webcams are finally starting to appear in the mainline kernel, but on an Enterprise-class distro such as CentOS like we use at Pendre, support for webcams isn't top of the priority list - after all, CentOS is primarily a server OS and not that many servers actually need a webcam.

So after doing plenty of research, I finally settled on a Logitech QuickCam Pro 9000. This is a high-end webcam with integrated microphone boasting HD resolution from its 2-megapixel sensor, and a high quality Carl Zeiss lens with autofocus. The Logitech QuickCam Pro 9000 can be found for around £45 ($80-100US).

The QuickCam Pro 9000 uses the UVC driver which has recently been introduced into the mainline kernel. Unfortunately this has yet to be backported into CentOS 5, so I built the driver from source. This was relatively easy - just grab the latest tarball, extract, build and install (make && make install). Once built, the driver can be inserted into the kernel by running modprobe uvcvideo.

I tested the camera out using Skype and although the higher resolutions available with this webcam under Windows are not available to Linux users, the camera produces a very crisp clear picture and is clearly superior to that produced by most entry-level and midrange webcams. Sound is equally impressive from the built in microphone. Overall, highly recommended and you really do get what you pay for.

WPA Wi-Fi encryption cracked

November 6th, 2008

WPA Wi-Fi encryption, the security that prevents unauthorised users from accessing your wireless networks, has been cracked. Researchers report that the TKIP keystream may be decrypted in as little as 12-15 minutes:

More information here

So with both the older WEP and now WPA security compromised, the only way to currently secure a wireless network is with the newer WPA2 encryption protocol. However, much older hardware doesn't support this newer protocol, so check your hardware for support, switch to WPA2 if available, or upgrade/replace your hardware.

For those that don't take the potential threat seriously, allow me to put it in context by reminding you of T.K.Maxx, who were in the process of upgrading the WEP encryption to WPA on the wireless cash registers used to transmit customer credit card transactions when they were hacked, subsequently losing hundreds of millions of customers' credit card details in an incident that still remains one of the most widely publicised security breaches ever. I'm sure T.K.Maxx have learnt their lesson, so let's make sure we learn from their mistake too rather than repeating it.

Firefox security update

September 24th, 2008

Mozilla have released a critical security update for version 3 (3.0.2) of their popular Firefox web browser.

Users are advised to update immediately.

Building kernel modules: Part 3

July 24th, 2008

Previously we have seen how we can build kernel modules for CentOS (or RHEL) out of tree (Part 1; May 30th, 2008) or as DKMS modules (Part 2; July 18th, 2008). In this final part we will look at building kmod kernel modules packaged in RPM format and again we will use the coretemp module as an example.

The first thing we need to do is make sure we have all the kernel-devel packages installed for the current kernel, including any xen and PAE variants. Next create a working directory containing the project source code and Makefile and create the Kbuild file by copying the Makefile. Finally, create a bzip2 compressed tarball of the source directory and copy the compressed tarball to the /SOURCES directory within your build environment.

[buildsys@Quad]$ rpm -qa kernel\* | sort
[buildsys@Quad]$ mkdir /usr/src/coretemp-1.0kmod/
[buildsys@Quad]$ cp coretemp.c Makefile /usr/src/coretemp-1.0kmod/
[buildsys@Quad]$ cd /usr/src/coretemp-1.0kmod
[buildsys@Quad]$ cp Makefile Kbuild
[buildsys@Quad]$ cd ..
[buildsys@Quad]$ tar -cf coretemp-1.0kmod.tar coretemp-1.0kmod/
[buildsys@Quad]$ bzip2 coretemp-1.0kmod.tar
[buildsys@Quad]$ cp coretemp-1.0kmod.tar.bz2 /usr/src/buildsys/SOURCES/

Now we make a copy of the kmodtool script (located at /usr/lib/rpm/redhat/kmodtool) called kmodtool-coretemp in the SOURCES directory and edit it to include any dependencies for our kmod RPM package (thanks to Johnny Hughes for the tip). Creating a custom kmodtool script isn't strictly needed for building kmod packages if you don't need to specify any dependencies, but I was unable to find a convenient way to pass package dependencies from the SPEC file to the kmodtool build script. Below is an excerpt of my kmodtool-coretemp script showing an additional dependency for lm_sensors >= 2.10.2, which may be obtained from ATrpms.

Requires(post): /sbin/depmod
Requires(postun): /sbin/depmod
Requires: lm_sensors >= 2.10.2

Finally we make a SPEC file for the project named coretemp-kmod.spec file in the /SPECS directory of our build environment:

# Sources:
Source0: coretemp-1.0kmod.tar.bz2
Source10: kmodtool-coretemp

# If kversion isn't defined on the rpmbuild line, build against the current kernel.
%{!?kversion: %define kversion %(uname -r)}

%define kmod_name coretemp
%define kmodtool sh %{SOURCE10}
%define kverrel %(%{kmodtool} verrel %{?kversion} 2>/dev/null)
%define kmodrel %(echo %{kverrel} | sed 's/2.6.18-//')

%define basevar ""
%ifarch i686
%define paevar PAE
%endif
%ifarch i686 x86_64
%define xenvar xen
%endif

%{!?kvariants: %define kvariants %{?basevar} %{?xenvar} %{?paevar}}

Name: %{kmod_name}-kmod
Version: 1.0
Release: 4.%{kmodrel}
Summary: Coretemp 1.0 CentOS-5 module
License: GPL v2
Group: System Environment/Kernel

BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
ExclusiveArch: i686 x86_64

%description
This package provides the coretemp kernel module for monitoring the core temperature
of Intel Core 2 Duo and Quad Core processors with the CentOS 5.2 series kernels.

# Magic hidden here:
%define kmp_version %{version}
%define kmp_release %{release}
%{expand:%(%{kmodtool} rpmtemplate_kmp %{kmod_name} %{kverrel} %{kvariants} 2>/dev/null)}

%prep
%setup -q -c -T -a 0
for kvariant in %{kvariants} ; do
    cp -a coretemp-1.0kmod _kmod_build_$kvariant
done

%build
for kvariant in %{kvariants} ; do
    ksrc=%{_usrsrc}/kernels/%{kverrel}${kvariant:+-$kvariant}-%{_target_cpu}
    pushd _kmod_build_$kvariant
    make -C "${ksrc}" modules M=$PWD
    popd
done

%install
export INSTALL_MOD_PATH=$RPM_BUILD_ROOT
export INSTALL_MOD_DIR=extra/%{kmod_name}
for kvariant in %{kvariants} ; do
    ksrc=%{_usrsrc}/kernels/%{kverrel}${kvariant:+-$kvariant}-%{_target_cpu}
    pushd _kmod_build_$kvariant
    make -C "${ksrc}" modules_install M=$PWD
    popd
done


%changelog
* Tue Jul 15 2008 Alan J Bartlett
- Fixed bugs in spec file. 1.0-

* Sat Jul 12 2008 Philip J Perry
- Fixed dependencies. 1.0-

* Tue Jul 08 2008 Philip J Perry
- Fixed bug in spec file. 1.0-

* Tue Jul 08 2008 Philip J Perry
- Initial RPM build. 1.0-
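One small piece worth unpicking: the %kmodrel macro near the top of the spec simply strips the distribution kernel prefix from the full version-release string, which is easy to check at a shell prompt (the version-release string below is just an example CentOS 5 kernel):

```shell
# the sed expression behind %kmodrel removes the "2.6.18-" prefix
# from a CentOS 5 kernel version-release string
echo "2.6.18-92.1.6.el5" | sed 's/2.6.18-//'
# prints 92.1.6.el5
```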

Now we're ready to build our kmod packages using rpmbuild:

[buildsys@Quad SPECS]$ rpmbuild -bb --target=`uname -m` coretemp-kmod.spec
[buildsys@Quad SPECS]$

which will produce a set of kmod-coretemp RPMs in your /RPMS directory under the appropriate architecture. Additionally, our SPEC file allows us to specify from the command line at build time which kernel version to build against (e.g., --define 'kversion 2.6.18-92.el5') or which kernel-variant kmod packages to build (e.g., --define 'kvariants ""' would only build the base kernel package).

kmod coretemp packages for CentOS 5 may be downloaded below and installed with rpm.

Alan has kindly updated the CentOS Wiki page on building kernel modules and I'd like to thank both Alan Bartlett and Akemi Yagi for helpful discussions, testing and encouragement whilst preparing this series of articles.

Thunderbird released

July 24th, 2008

Mozilla have just released a new version of their Thunderbird email client that fixes a number of security issues:

  • MFSA 2008-34 Remote code execution by overflowing CSS reference counter
  • MFSA 2008-33 Crash and remote code execution in block reflow
  • MFSA 2008-31 Peer-trusted certs can use alt names to spoof
  • MFSA 2008-29 Faulty .properties file results in uninitialized memory being used
  • MFSA 2008-26 Buffer length checks in MIME processing
  • MFSA 2008-25 Arbitrary code execution in mozIJSSubScriptLoader.loadSubScript()
  • MFSA 2008-24 Chrome script loading from fastload file
  • MFSA 2008-21 Crashes with evidence of memory corruption (rv:

Users are advised to upgrade immediately.

Major DNS security flaw

July 22nd, 2008

In a major collaborative effort, over 80 vendors simultaneously released patches to their DNS software to address a critical vulnerability. DNS, the Domain Name System, forms the basis of how today's Internet works by translating domain names into IP addresses and vice versa. Without DNS you wouldn't be able to type a domain name into a web browser and reach that site.

The current flaw in DNS potentially allows hackers to poison the DNS system and redirect users to malicious sites rather than the site they intended to visit. The researcher that discovered the flaw, Dan Kaminsky, had attempted to keep technical details of the vulnerability secret until next month in order to give system administrators time to patch their servers against the flaw. However, details of the vulnerability were revealed yesterday before many systems have been patched.

In a worst case scenario a major ISP's DNS servers could be subverted, redirecting a major site such as Google to a malicious site designed to infect visitors' PCs with malware. Such a scenario could result in hundreds of thousands of computers being infected in a very short period of time. With such rewards on offer you can bet the bad guys will be all over this in a flash.

What should you do?

Users are strongly advised to test their DNS servers now to see if they're vulnerable. Dan Kaminsky has a "Check My DNS" applet available on his site here.

If your DNS servers are vulnerable you should contact your ISP (or whoever provides your DNS) and inform them, plus ask them when they intend to patch their servers.

If your DNS servers are vulnerable then you can use the freely available DNS servers provided by OpenDNS until your normal servers can be patched. Windows users should go to Control Panel > Network Connections, right click on the connection and select "Properties". Then select Internet Protocol (TCP/IP) and click Properties. Select "Use the following DNS server addresses" and enter the two OpenDNS server IP addresses as the Preferred and Alternate DNS servers. Linux users should edit the nameserver values in /etc/resolv.conf.
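For the Linux case, the change is a straightforward edit. The addresses below are placeholders for whichever alternative DNS servers you choose:

```
# /etc/resolv.conf - replace the existing entries with your chosen
# alternative DNS servers (placeholder addresses shown)
nameserver <primary-dns-ip>
nameserver <secondary-dns-ip>
```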

Additionally, if you use a home router to automatically assign network settings then you should also update the DNS server settings in your router.

Users are then advised to retest to ensure their DNS servers are no longer vulnerable.

Building kernel modules: Part 2

July 18th, 2008

In an earlier article (Part 1; May 30th, 2008) I described how to build kernel modules for CentOS (or RHEL) out of tree and used the coretemp module as an example. My good friend and colleague, Alan Bartlett, had persuaded me to investigate packaging kernel modules, so herein we are going to look at how to package kernel modules in RPM format using the coretemp module as an example. I'd like to thank Alan for his technical assistance and encouragement without which this article wouldn't have been written.

There are two methods we can employ to build kernel module RPMs - DKMS and kmod. In this article we will look at Dynamic Kernel Module Support (DKMS) and in part 3 we will use kmod.

DKMS must be installed from RPMForge together with the necessary development packages.

We start by creating the necessary directory structure and copying the module source code and Makefile to that directory:

[root@Quad]# mkdir /usr/src/coretemp-1.00/
[root@Quad]# cp coretemp.c Makefile /usr/src/coretemp-1.00/

Next create a dkms.conf file for the project and save it to /usr/src/coretemp-1.00/.
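The original listing hasn't survived here, but a minimal dkms.conf for this module might look something like the following sketch. The module name and version match the dkms commands below; the destination path is an assumption based on where coretemp lives in the mainline kernel tree:

```
# /usr/src/coretemp-1.00/dkms.conf - minimal sketch
PACKAGE_NAME="coretemp"
PACKAGE_VERSION="1.00"
BUILT_MODULE_NAME[0]="coretemp"
DEST_MODULE_LOCATION[0]="/kernel/drivers/hwmon/"
AUTOINSTALL="yes"
```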


Now we are ready to invoke DKMS to add the coretemp module to the DKMS tree and build our RPM package:

[root@Quad]# dkms add -m coretemp -v 1.00

Creating symlink /var/lib/dkms/coretemp/1.00/source ->

DKMS: add Completed.
[root@Quad]# dkms mkrpm --source-only -m coretemp -v 1.00

Using /etc/dkms/template-dkms-mkrpm.spec

Wrote: /var/lib/dkms/coretemp/1.00/rpm/coretemp-1.00-1dkms.src.rpm
Wrote: /var/lib/dkms/coretemp/1.00/rpm/coretemp-1.00-1dkms.noarch.rpm

DKMS: mkrpm Completed.

Finally copy the created RPMs to somewhere safe and remove the coretemp module from the DKMS tree:

[root@Quad]# cp /var/lib/dkms/coretemp/1.00/rpm/coretemp*.rpm /usr/src/
[root@Quad]# dkms remove -m coretemp -v 1.00 --all

Deleting module version: 1.00
completely from the DKMS tree.
[root@Quad ~]#

Now we can install our newly created coretemp RPM as normal:

[root@Quad]# cd /usr/src/
[root@Quad]# rpm -Uvh coretemp-1.00-1dkms.noarch.rpm
Preparing...        ########################################### [100%]

In part 3 I will describe how to build coretemp as a kmod kernel module RPM package for CentOS (or RHEL) and will make the RPMs available for download.

Firefox security updates

July 17th, 2008

Mozilla have released security updates to both version 2 and version 3 (3.0.1) of their popular Firefox web browser.

Firefox version 2 will be supported until December 2008 so users are encouraged to upgrade to version 3 if they have not already done so.

A rant about software updates

June 27th, 2008

Adobe have recently released a security bulletin for Acrobat (and Acrobat Reader) affecting the latest versions of both 7 and 8.

Nothing new there, so why the rant?

Well, firstly the security update is not picked up through the application's built-in update feature (Help menu > Check for Updates), so unless you happen to regularly follow security blogs, how are end users supposed to know it's been released? Secondly, the download is in the form of a Microsoft .msi file, and how many end users would know what to do with one of these? Finally, after successfully updating a system there doesn't appear to be any easily visible record that the patch has been applied, making it difficult for system administrators to track which systems have been updated and which haven't.

With the current trend of exploiting vulnerabilities in 3rd party applications (Adobe Acrobat) and browser plugins (Adobe Flash), software vendors need to do more to ensure they deliver software updates in a timely and coherent fashion. In short, they need to do more to ensure end user systems stay fully patched.

End users, and home users in particular, don't always have system administrators to look after their machines and ensure that all the relevant security patches have been applied. The Bad Guys know this and take advantage by targeting these applications and browser plugins. All it takes is one email with a link to a malicious PDF file hosted on some website: click on it and you're infected.

Delivering updates

One way to address the problem of delivering software updates might be for Microsoft to open up their Microsoft Update site to 3rd party software vendors, allowing them to deliver their updates automatically through that system and thus ensuring that all users receive them. A "one-stop shop" for updates is really what's needed, as it's no longer reasonable to expect users to go trawling the web in search of the security updates they need. Software with an automatic update feature built in was a step in the right direction, but as Adobe have just shown, it's not infallible and only works when vendors deliver all updates through that mechanism.

In the meantime, services such as Secunia's Online Software Inspector provide a valuable service to end users for ensuring that their systems are kept up to date.

F-Secure Rescue CD 3.00 released

June 22nd, 2008

F-Secure have just released version 3 of their Rescue CD. The Rescue CD is a Linux-based LiveCD ISO image, built on Knoppix 5.3.1, that you can use to boot an infected system, scan all the hard drives and automatically rename any infected files with a .virus extension.

One of the main advantages of such tools is that booting from a LiveCD allows the efficient detection and removal of rootkit-protected malware, whereas when the infected system is booted normally such malware may be undetectable to anti-virus products.

The new version supports online virus definition updates if a network connection is present (requires dhcp), or manual updating from a USB drive. Windows FAT and NTFS partitions are supported thanks to an updated NTFS driver (NTFS-3G 1.2506) and the ability to detect MBR viruses is also supported.

This is a useful addition to any system administrator's toolbox and comes highly recommended.

Mozilla releases Firefox 3

June 17th, 2008

Today Mozilla released an all-new version 3 of their open source Firefox web browser. After nearly 3 years in development, the new browser version contains many new features and improvements to speed, security and memory usage.

Mozilla are also aiming to break the Guinness World Record for the most software downloaded in 24 hours and are hoping for 5 million downloads on the day of release. In comparison, version 2 of Firefox managed 1.6 million downloads on the day it was originally released, 24 October, 2006.

Firefox 3 was made available for general release at 6pm (BST), and in the first couple of hours after release Mozilla's download site was either very slow or unreachable, most likely due to extraordinarily heavy demand - a good sign that they may reach their target number of downloads. Firefox 3 is available for all popular operating systems, including Linux, Mac OS, and Windows 2000, XP and Vista.

Building kernel modules: Part 1

May 30th, 2008

There are a number of reasons why you may wish to build Linux kernel modules. For example, you may need a driver that isn't present in the current kernel, or you may want to apply a patch to an existing driver that adds functionality or fixes a bug. There are a number of ways you can tackle such issues:

  • Build a new kernel based on the upstream vanilla sources that contain the required driver version
  • Rebuild your current distro kernel with a patch applied

Both of these methods require you to build a complete custom kernel which you then become responsible for maintaining. However, many drivers will build out of tree and may be loaded into your currently running kernel. This approach has a number of advantages: you don't need to compile a complete kernel, you can continue to run your stable distro kernel, and you don't have to worry about maintaining your own custom kernel.

Building modules out of tree

As an example, I'm going to build the coretemp module for CentOS 5, used for monitoring the core temperature from inside Intel Core series processors, as this driver didn't make it into the 2.6.18 series kernel used in CentOS 5.

First we need to create a project directory and src directory in that:

[phil@Quad ~]$ mkdir coretemp
[phil@Quad ~]$ cd coretemp/
[phil@Quad coretemp]$ mkdir src
[phil@Quad coretemp]$ ls -l
total 4
drwxrwxr-x 2 phil phil 4096 May 30 18:58 src
[phil@Quad coretemp]$

and place your module source file (coretemp.c) in ~/coretemp/src/

Next we create a top level Makefile in the project directory ~/coretemp/ (remember that the command under each target must be indented with a tab):

all: clean modules install

modules:
	$(MAKE) -C src/ modules

clean:
	$(MAKE) -C src/ clean

install:
	$(MAKE) -C src/ install

and a Makefile in ~/coretemp/src/

KVER   := $(shell uname -r)
KDIR   := /lib/modules/$(KVER)/build
KMISC  := /lib/modules/$(KVER)/extra/
KEXT   := $(shell echo $(KVER) | sed -ne 's/^2\.[567]\..*/k/p')o
KFLAG  := 2$(shell echo $(KVER) | sed -ne 's/^2\.[4]\..*/4/p')x

obj-m += coretemp.o

modules:
	$(MAKE) -C $(KDIR) SUBDIRS=$(PWD)/src modules

clean:
	rm -rf *.o *.ko *~ .dep* .*.d .*.cmd *.mod.c *.a *.s .*.flags .tmp_versions Module.symvers

install:
	install -m 644 -c coretemp.$(KEXT) $(KMISC)

Since coretemp isn't a module provided in the CentOS 5 kernel, it makes sense to install our newly built module to /lib/modules/$(uname -r)/extra/.

So now we can build our new kernel module (you can build the module as a regular user, but make install will need root privileges):

[root@Quad coretemp]# make
make -C src/ clean
make[1]: Entering directory `/home/phil/coretemp/src'
rm -rf *.o *.ko *~ .dep* .*.d .*.cmd *.mod.c *.a *.s .*.flags .tmp_versions Module.symvers
make[1]: Leaving directory `/home/phil/coretemp/src'
make -C src/ modules
make[1]: Entering directory `/home/phil/coretemp/src'
make -C /lib/modules/2.6.18-92.el5/build SUBDIRS=/home/phil/coretemp/src modules
make[2]: Entering directory `/usr/src/kernels/2.6.18-92.el5-x86_64'
CC [M] /home/phil/coretemp/src/coretemp.o
Building modules, stage 2.
CC /home/phil/coretemp/src/coretemp.mod.o
LD [M] /home/phil/coretemp/src/coretemp.ko
make[2]: Leaving directory `/usr/src/kernels/2.6.18-92.el5-x86_64'
make[1]: Leaving directory `/home/phil/coretemp/src'
make -C src/ install
make[1]: Entering directory `/home/phil/coretemp/src'
install -m 644 -c coretemp.ko /lib/modules/2.6.18-92.el5/extra/
make[1]: Leaving directory `/home/phil/coretemp/src'
[root@Quad coretemp]#

and finally run depmod, load the module with modprobe and check it's loaded:

[root@Quad coretemp]# depmod -a
[root@Quad coretemp]# modprobe coretemp
[root@Quad coretemp]# lsmod | grep coretemp
coretemp 41344 0
hwmon 36553 2 coretemp,it87
[root@Quad coretemp]#
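If you want coretemp loaded automatically at each boot, one way on CentOS/RHEL 5 is via /etc/rc.modules, which rc.sysinit runs at boot time if it exists and is executable. A minimal sketch (run as root):

```shell
# Append a modprobe line to /etc/rc.modules so coretemp loads at boot,
# then make the file executable (rc.sysinit only runs it if executable)
cat >> /etc/rc.modules <<'EOF'
modprobe coretemp
EOF
chmod +x /etc/rc.modules
```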

So let's have a look at coretemp in action, shown here monitoring all 4 cores of an Intel Q6600 quad core processor:

[root@Quad coretemp]# sensors

Adapter: ISA adapter
Core 0: +38°C (high = +100°C)

Adapter: ISA adapter
Core 1: +38°C (high = +100°C)

Adapter: ISA adapter
Core 2: +36°C (high = +100°C)

Adapter: ISA adapter
Core 3: +36°C (high = +100°C)

[root@Quad coretemp]#

In part 2 I'll describe how to build dkms-enabled kernel modules for CentOS/RHEL and package these in RPM format for distribution.

OpenSSL: Predictable PRNG in debian-based systems

May 13th, 2008

A critical bug has been found in the Pseudo Random Number Generator (PRNG) used by OpenSSL on Debian Linux and its derivatives (e.g. Ubuntu) that renders random numbers predictable. The bug was introduced into Debian unstable on September 17th, 2006 and has since propagated into the stable branch. All keys generated on affected systems should be considered compromised.

I don't use debian (or Ubuntu), so how does this affect me?

The impact of this vulnerability is far-reaching and goes way beyond Debian-based systems. The vulnerability directly affects all Debian-based systems using OpenSSL-generated keys (OpenSSH, OpenVPN, website and mail server authentication by SSL/TLS, etc) and potentially affects anyone using SSL-based security or encryption.


If you have a Linux server, then you almost certainly have SSH configured by default for remote access. If you use public/private key authentication and users have generated keys on affected systems, then brute-forcing those keys to gain remote access is trivial, and scripts to do so are already available in the wild. The same applies to OpenVPN. The important factor here is the system the keys were generated on, not whether your server is running Debian or another affected Linux distribution.

Additionally, all DSA keys that were ever used on a vulnerable Debian-based system for signing or authentication should also be considered compromised due to a known attack on DSA keys.

System administrators are advised to conduct a full audit of all key pairs and regenerate new keys where appropriate.
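On Debian and Ubuntu, the updated openssh packages ship an ssh-vulnkey tool that tests keys against the blacklist of known-weak keys. On other systems, the safe course is simply to regenerate any key pair that may have been created on an affected machine. A minimal sketch, with an illustrative file name (ssh-keygen will prompt for a passphrase):

```shell
# Generate a replacement RSA key pair; id_rsa_new is an example name
ssh-keygen -t rsa -f ~/.ssh/id_rsa_new
# Install the new public key on each server, then remove the old public
# key from ~/.ssh/authorized_keys on those servers, e.g.:
# ssh-copy-id -i ~/.ssh/id_rsa_new.pub user@server
```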

SSL-encrypted websites and mail servers

Users of Debian-based systems running SSL-encrypted websites or mail servers with SSL/TLS certificates generated on affected systems should generate new certificates immediately. Additionally, if your certificates were signed by a Root Certificate Authority (e.g. Thawte, VeriSign), you will need to generate new certificates and have them re-signed.
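Regenerating a server key and certificate signing request with OpenSSL looks roughly like the following. The file names, key size and subject are illustrative; adjust them to your own requirements:

```shell
# Generate a fresh private key on a system with a sound PRNG
openssl genrsa -out server.key 2048
# Create a certificate signing request (CSR) for the new key
openssl req -new -key server.key -subj "/CN=www.example.com" -out server.csr
# Send server.csr to your CA for signing; or, for a self-signed certificate:
# openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
```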

Because the public key is, by definition, publicly known, and the pool of keys an affected system could generate is small, it is trivial to recover the matching private key. It then becomes trivial to launch man-in-the-middle style attacks against encrypted or secured transactions such as credit card or other banking transactions, or to obtain users' login credentials. The potential for exploitation is huge and has the potential to affect all users of the Internet.

If you are at all unsure about how to generate new keys or certificates, please do not hesitate to contact us and we will be happy to assist in generating new OpenSSL keys or certificates for you.

AVG Anti-Virus Free version 8 released

April 28th, 2008

Last week Grisoft, the makers of the popular AVG Anti-Virus program, released version 8 of their free product.

Users are recommended to upgrade, and the new version can be downloaded here

As Grisoft state, "AVG Anti-Virus Free Edition is only available for single computer use for home and non commercial use."

The six dumbest ideas in computer security

April 25th, 2008

Sometimes those of us working in the Information Security (InfoSec) business feel like it's nothing but doom and gloom, and for the most part, that's probably not an unrealistic synopsis. So here's a little lighthearted weekend reading:

The six dumbest ideas in computer security

"Let me introduce you to the six dumbest ideas in computer security. What are they? They're the anti-good ideas. They're the braindamage that makes your $100,000 ASIC-based turbo-stateful packet-mulching firewall transparent to hackers. Where do anti-good ideas come from? They come from misguided attempts to do the impossible - which is another way of saying "trying to ignore reality." Frequently those misguided attempts are sincere efforts by well-meaning people or companies who just don't fully understand the situation, but other times it's just a bunch of savvy entrepreneurs with a well-marketed piece of junk they're selling to make a fast buck. In either case, these dumb ideas are the fundamental reason(s) why all that money you spend on information security is going to be wasted, unless you somehow manage to avoid them.

For your convenience, I've listed the dumb ideas in descending order from the most-frequently-seen. If you can avoid falling into the trap of the first three, you're among the few true computer security elite."

Fed up with spam?

March 7th, 2008

A couple of weeks back we promised that we would publish an article on spam telling you everything you need to know to take back control of your inbox.

"More and more frequently I get asked by clients what they can do about the amount of spam they are receiving. To know the answer, we need to understand a little about what spam is, why we get it and things we can do to minimise or irradicate it."

"Here at Pendre we have developed a custom spam filtering system on our email servers that blocks, on average, 99.6% of spam before it ever reaches you. This is the ideal situation as you no longer have to waste time and bandwidth downloading the spam or spend time checking for messages that have been incorrectly marked as spam. And if you are concerned about false positives, i.e. us falsely blocking messages that you do want to receive, you can add senders email addresses (or domains) to our whitelists so they will never be spam filtered and will guarantee delivery every time. So if your current email provider isn't providing a satisfactory solution, feel free talk to us about moving your email onto our servers and benefit from our highly efficient spam filtering."

Well, we've kept our promise. Read the full article here.

Another major spam run of storm worm

March 3rd, 2008

Today we saw yet another major spam run of the Storm worm. Messages typically referred to a funny postcard (or ecard) and provided a link of the form http://ip_address within the message body. Clicking on the link would take a user to a website that would infect their computer with the Storm worm virus. Users are reminded never to click on links in emails.

Our email servers at Pendre successfully filtered these spam messages based on body checks for raw dotted-quad IP addresses, a technique regularly used by the Storm worm and other such spam. As such, users of Pendre email servers were not at risk. Anti-virus detection of these new Storm worm variants is now reasonably good.
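The idea behind such a body check can be sketched with a simple regular expression. This is illustrative only (our production filters are considerably more involved): flag any message body containing a link to a raw dotted-quad IP address, the pattern used by this Storm worm run.

```shell
# Match http(s) URLs whose host is a raw dotted-quad IP address
pattern='https?://([0-9]{1,3}\.){3}[0-9]{1,3}'
if echo 'Funny postcard for you at' | grep -Eq "$pattern"; then
    echo "suspect: raw IP link in message body"
fi
# prints: suspect: raw IP link in message body
```

Legitimate mail almost never links to bare IP addresses, which is why this simple check catches such runs with very few false positives.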

vmsplice local privilege escalation in Linux kernel

February 11th, 2008

There is currently a local privilege escalation flaw in the Linux kernel, affecting all kernel versions from 2.6.17 up to and including the current release.

Exploit code for this vulnerability is available in the wild:

[phil@centos test]$ ls
exploit exploit.c
[phil@centos test]$ uname -a
Linux centos 2.6.18-53.1.6.el5 #1 SMP Wed Jan 23 11:30:20 EST 2008 i686 athlon i386 GNU/Linux
[phil@centos test]$ whoami
phil
[phil@centos test]$ ./exploit
Linux vmsplice Local Root Exploit
By qaaz
[+] mmap: 0x0 .. 0x1000
[+] page: 0x0
[+] page: 0x20
[+] mmap: 0x4000 .. 0x5000
[+] page: 0x4000
[+] page: 0x4020
[+] mmap: 0x1000 .. 0x2000
[+] page: 0x1000
[+] mmap: 0xb7f0c000 .. 0xb7f3e000
[+] root
[root@centos test]# whoami
root
[root@centos test]#

This vulnerability is not remotely exploitable, so an attacker would first have to gain local access; however, it could present significant risks for system administrators in shared environments with multiple system accounts.

Patches should be available shortly from distribution vendors and system administrators are advised to upgrade their kernel as soon as these become available.
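For CentOS/RHEL users, applying the fix once updated packages are released amounts to the following (assuming a stock system using yum):

```shell
# Note the currently running (vulnerable) kernel version
uname -r
# Install the fixed kernel package when available
yum update kernel
# Reboot into the new kernel, then confirm the version with uname -r again
```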