Red Hat Enterprise Linux 5
By Tom Henderson and Rand Dvorak, Network World | Network World US | Published: 01:00, 28 March 2007
With the recent release of Red Hat Enterprise Linux 5.0, Red Hat is both following and bucking server operating-system trends we’ve witnessed in past tests of Novell’s SuSE Linux and Microsoft’s Longhorn beta code.
Following suit with Novell’s SLES 10 and Microsoft’s Longhorn, RHEL5 sports user session controls with its enhanced Security Enhanced Linux (SELinux) implementation and Xen-based server virtualisation technology. With this release, administrators can couple these two technologies to provide multiple server operating instances with secured sessions running underneath, for a kind of one-two punch of reliability and user session isolation from root-access issues.
In the bucking-trends column, while Microsoft’s current tack is to make a different version for nearly every kind of server imaginable (multiplied by OEM server options), Red Hat with this release cuts the number of versions down to two major categories, server and client, with a further delineation for 32- and 64-bit CPU genres. There are no separate versions for a Storage Server, Certificate Manager Server, Small Business Server or Left-Handed Freckled Linux.
Also, while Red Hat has made a few minor GUI enhancements, RHEL5 doesn’t offer much in the way of eye-candy adjustments to its interface like Microsoft and Apple consistently do with their operating-system upgrades.
With RHEL5, SELinux access controls are in place immediately unless you opt out of them at installation. While SELinux first appeared in RHEL4, we found that Red Hat has made setting user access-control policies much easier in this release: its SELinux Management Tool can set user policies as well as policies for specific applications, by module. Some administration tasks, however, such as changing policy characteristics for groups of applications, still require manipulation that RHEL5’s SELinux instrumentation tools don’t handle out of the box. Red Hat, to its credit, supplies a very good SELinux Troubleshooter application.
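For illustration, the policy changes the Management Tool drives through its GUI map onto SELinux’s command-line utilities; a minimal sketch, with the login name and boolean chosen as examples:

```shell
# Map a Linux login to a confined SELinux user (the login "jsmith" is an example)
semanage login -a -s user_u jsmith
semanage login -l                      # list login-to-SELinux-user mappings

# Toggle a per-application policy boolean persistently (Apache home-directory access)
setsebool -P httpd_enable_homedirs off
getsebool httpd_enable_homedirs        # confirm the new setting
```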
SELinux Troubleshooter, which scours system logs for problems spawned by users or misbehaving applications, is quite articulate about the types of misbehaviour it finds. However, the application doesn’t look for the logs it needs in any default location, and once you point it in the right direction, it doesn’t remember that location for subsequent sessions.
Because logs can record the same error repeatedly, a handy Troubleshooter filter separates discrete error instances from the full list. What’s missing, though, is an automatic alarm mechanism that could send messages to syslog, or straight to an administrator, when applications, user sessions or guest operating-system instances start to misbehave.
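Until such an alarm arrives, the underlying audit records the Troubleshooter reads can be mined directly; a sketch using the audit toolchain that ships alongside SELinux (the module name "mylocal" is our own):

```shell
# AVC denial messages are the raw material the Troubleshooter summarises
ausearch -m avc -ts recent

# Condense the denials into a loadable local policy module, then install it
ausearch -m avc | audit2allow -M mylocal
semodule -i mylocal.pp
```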
And while SELinux is set up during installation, we found that few constraints are placed on root and user passwords for the operating system at large. It’s a conundrum: SELinux’s protection of user sessions has evolved, yet passwords, unless purposely hardened, remain subject to dictionary attacks.
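Hardening passwords by hand means editing the PAM stack; a sketch of the sort of /etc/pam.d/system-auth entries involved (the specific length and credit values are examples, not Red Hat defaults):

```
# Require 10+ characters with at least one digit, one upper-case letter
# and one non-alphanumeric character
password  requisite   pam_cracklib.so try_first_pass retry=3 minlen=10 dcredit=-1 ucredit=-1 ocredit=-1
password  sufficient  pam_unix.so md5 shadow try_first_pass use_authtok remember=5
```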
Xen is now in
Red Hat has made extensive use of the Xen server virtualisation technique for the first time in this release. What’s different, we found, is that Red Hat’s Xen implementation is far more evolved than what we found in SUSE 10, although it does lack comprehensive instrumentation.
We could easily get the Xen hypervisor up and running, and then quickly build a modified hosting kernel (called a "Xenified" kernel). Guest operating sessions could be built quickly on top of these instances and then monitored through Virt-Manager, the basic open source Xen tool Red Hat includes in this release. The weakness in how RHEL5 has used Xen is that it isn’t sewn together administratively; it begs for a unified Xen-management application rather than a non-intuitive sequence of the standard open source tools.
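That sequence looks roughly like this; a sketch, with the guest name, disk image path, memory size and install URL all chosen as examples:

```shell
# Create a paravirtualised guest under the Xenified kernel
virt-install --name guest1 --ram 512 --paravirt --nographics \
    --file /var/lib/xen/images/guest1.img --file-size 6 \
    --location http://mirror.example.com/rhel5/os/

# Day-to-day domain management falls to the xm tool
xm list            # shows dom0 plus any running domU guests
xm console guest1  # attach to the guest's console
xm shutdown guest1
```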
When we tested the RHEL5 native kernel for performance against SUSE 10 (using the defaults chosen at install time, as we normally do), we found little notable difference in our LMBench results.
Performance is also enhanced by RHEL5’s ability to use multi-core CPUs and to tap several of them in the same machine. In our tests, RHEL5 easily detected the twin Athlon CPUs of our Polywell 2200S, and made equally short work of our HP 585 with its four dual-core Athlon 64 CPUs. There were no other detection errors except for odd graphics-card geometry problems on a Dell PowerEdge P280 and our generic HP DL140 servers, and these were trivial.
Introducing Xen’s hypervisor to the Xenified RHEL5 kernel added only nominal latency over the native RHEL5 results: the ‘insertion loss’ of the hypervisor layer and Xenified kernel barely affects performance. Adding guest operating systems to the Xenified kernel, however, dragged performance down. We spawned two guest ‘domU’ instances (as they’re called in Xen parlance) and ran LMBench3 concurrently in each; performance dropped linearly with the added instances.
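The micro-benchmarks behind those numbers come straight from the LMBench3 suite; a representative sample of the kind we ran in each native, Xenified and domU configuration:

```shell
lat_syscall null     # null system-call latency
lat_ctx -s 0 2 4 8   # context-switch latency across 2, 4 and 8 processes
bw_mem 64m rd        # memory read bandwidth over a 64MB buffer
```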
Drivers and eye-candy
Red Hat has rebuilt its driver model in a quest to give hardware vendors a more consistent code-building exercise. One of the payoffs is an open source iSCSI driver that lets operating-system instances use virtualised storage by linking to external storage facilities, such as iSCSI-hosted SAN assets. External iSCSI storage devices can be reached from the operating system as ‘SAN,’ thereby reducing the server hardware footprint. The driver also gives system designers flexible storage options should they need to service virtualised OS instances in a more orderly way.
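From the command line, attaching such storage runs through the open source iSCSI initiator tools; a sketch, with the portal address and target IQN invented for the example:

```shell
# Discover targets exported by an iSCSI portal
iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260

# Log in to a discovered target; its LUNs then appear as local SCSI disks
iscsiadm -m node -T iqn.2007-03.com.example:storage.lun1 \
    -p 192.168.1.50:3260 --login
```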
Less evolved in RHEL5, in both server and client versions, are the kinds of flashy GUI enhancements Microsoft and Apple have been putting in their operating systems. There’s really no new lipstick on the Red Hat pig. It’s still using GNOME (now version 2.16), though support for high-performance graphics cards has been added, as have the AIGLX libraries, which mimic some of the visual effects of competitive GUIs: slick minimisation, transparency/translucency, fading and window-manipulation tricks.
In terms of other notable changes, install-time support options have increased, and RHEL5 includes more sophisticated and comprehensive IPv6 support in the areas of detection and firewall manipulation.
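The IPv6 firewall side mirrors the familiar IPv4 tooling; a sketch of ip6tables rules (the policy shown is an example of ours, not a shipped default):

```shell
ip6tables -A INPUT -p tcp --dport 22 -j ACCEPT   # allow inbound SSH over IPv6
ip6tables -A INPUT -p ipv6-icmp -j ACCEPT        # ICMPv6 is essential plumbing
ip6tables -P INPUT DROP                          # default-deny everything else
service ip6tables save                           # persist across reboots
```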
Sex appeal doesn’t seem to be the focus of this release. Instead, Red Hat makes a strong competitive statement with well-executed virtualisation and user-session controls in RHEL5. The large number of components inside this operating system still begs to be sewn together more comprehensively with better administrative tools, but the fundamentals are definitely there.
How we tested
We tested final, gold code for Red Hat’s RHEL5 in a switched Gigabit Ethernet IPv4/IPv6 network comprising both D-Link and HP switches.
We tested the code on numerous servers, including an HP 585 (outfitted with four dual-core AMD Athlon CPUs, 12GB of DRAM and an HP SCSI array), an HP DL140 (sporting dual 32-bit Intel Xeon CPUs and 4GB of DRAM), a Polywell 2200S server (equipped with two single-core AMD Athlon CPUs and 4GB of DRAM) and a Dell P280 (with a single Intel Celeron CPU, 4GB of DRAM, a 500GB SCSI drive and a Fibre Channel card).
We successfully tested connectivity via NFS4, LDAP and Samba, connecting with Windows 2003 Enterprise Server Edition, Apple MacOS 10.4.7 Server edition, Novell SUSE Linux 10, as well as Windows Vista Ultimate/XP SP2, MacOS 10.4.7 client and NetBSD 3.
We tested Xen efficiency using LMBench3 on the same hardware (the Polywell 2200S described above), first using a native SMP kernel, then a hypervised, Xenified kernel, and then two domU kernel guest instances. We saw linear performance degradation across the results.