
Monthly Archives: November 2005

Here is the promised hard drive installation package for Thinstation. The scripts should be fairly self-explanatory, but here’s a quick rundown of what they do:

On boot, a script (/etc/init.d/should_prepare_hd) checks whether the first hard drive is partitioned the way we want it. If it isn’t, the script places an “install to hard drive” icon on the desktop.
The install script zeros out the first 30MB or so of the disk, then partitions it, copies in an MBR, and installs Etherboot, booted via Syslinux, onto the first partition. It then creates swap on the second partition and activates it.
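To give a feel for the sequence, here’s a minimal sketch — not the actual package script. The device name (/dev/hda), partition sizes, and image/MBR paths are all assumptions:

    #!/bin/sh
    # Sketch of the install-to-hard-drive steps (names and sizes assumed)
    DISK=/dev/hda

    # Wipe any stale partition table and boot code at the start of the disk
    dd if=/dev/zero of=$DISK bs=1M count=30

    # Partition: a small bootable FAT16 partition, then a swap partition
    sfdisk -uM $DISK <<EOF
    ,16,6,*
    ,64,82
    EOF

    # Install a generic MBR, then set up the boot partition with Syslinux
    dd if=/lib/mbr.bin of=$DISK bs=446 count=1
    mkfs.vfat ${DISK}1
    syslinux ${DISK}1

    # Copy in the Etherboot image and tell Syslinux to boot it
    mount -t vfat ${DISK}1 /mnt
    cp /lib/etherboot.zlilo /mnt/eb.0
    echo "DEFAULT eb.0" > /mnt/syslinux.cfg
    umount /mnt

    # Create and activate swap on the second partition
    mkswap ${DISK}2
    swapon ${DISK}2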

To use this package, just extract it into the Thinstation root, like any other Thinstation package. To include it in the build, add a “package hd-install” line to your build.conf. Unfortunately, I haven’t figured out a way for the package to declare that it needs the vfat module, so you’ll need to uncomment the “module vfat” line in build.conf as well.
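In other words, the relevant lines in build.conf end up looking like this:

    # hard drive installation support
    package hd-install
    # needed by hd-install for the FAT boot partition
    module vfat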

A couple of notes:
* The partition comparison is made more convoluted by a bug in the busybox version of sed that makes it impossible to escape backslash characters. I’ve worked around it (see the sketch after these notes), but it’s annoying.
* Theoretically, the script should be able to support SCSI disks, but my version of sfdisk can’t see them. I’ll need to research whether this is a limitation of sfdisk or Thinstation (hopefully the former).
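For the curious, the comparison works roughly like the sketch below. This is a simplified illustration, not the actual script — the file names and icon path are made up — but it shows one way to sidestep sed (and its backslash bug) entirely by letting tr do the normalization:

    #!/bin/sh
    # Sketch of /etc/init.d/should_prepare_hd (paths are assumptions)
    # Dump the live partition table and strip whitespace with tr,
    # so busybox sed never enters the picture.
    current=$(sfdisk -d /dev/hda | tr -d ' \t')
    expected=$(tr -d ' \t' < /etc/hd-install/expected.ptable)

    if [ "$current" != "$expected" ]; then
        # Disk not prepared yet: offer the install icon on the desktop
        cp /etc/hd-install/install-hd.desktop /home/user/Desktop/
    fi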

One improvement I hope to make at some point is the ability to install another bootable image (e.g. Thinstation itself) instead of Etherboot, possibly pulling it down from a TFTP server.
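A first cut could use busybox’s tftp client; the server address and file names below are placeholders:

    # Fetch an alternative boot image from the TFTP server (names assumed)
    tftp -g -r thinstation.nbi -l /tmp/thinstation.nbi 192.168.0.1
    # ...then put it on the boot partition in place of the Etherboot image
    mount -t vfat /dev/hda1 /mnt
    cp /tmp/thinstation.nbi /mnt/
    umount /mnt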

The question I’ve been tackling recently is how to reliably boot a bunch of old computers into a version of Linux so they can be used as thin clients. This is easy to do poorly, and much more difficult to do well.

The easy solution is to boot a version of the thin client OS (Thinstation) from CD. Thinstation’s build process produces a bootable .iso automatically; all you need to do is burn it to CD, stick the CD in the client machine, and you’re ready to go. The problem with this approach is that whenever you want to roll out a new version of Thinstation, you need to burn new CDs and manually replace the old ones in each machine. That would be tolerable with 5 computers, but it becomes a nightmare with 15 or more, especially when the computers are in hard-to-reach places. To make matters worse, many old machines cannot boot from CD, or have broken CD-ROM drives.

Because of the versioning issue, I decided to go with network booting (via PXE), which pulls the boot image from the server each time the client boots. The problem here is that while almost all new desktops can network boot, few old desktops can. Fortunately, there is the Etherboot project. Etherboot produces boot ROMs that can be programmed into a network card’s EEPROM, allowing the card to network boot via PXE. The real boon for us is that these images can also be installed to a hard drive and booted from there, or used to generate a bootable floppy or CD.
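As a concrete example, Etherboot’s build produces per-driver images; writing the floppy version of one is a single dd (the driver name and image path here are assumed):

    # Write the floppy-bootable Etherboot image for, say, an eepro100 NIC
    dd if=bin/eepro100.zdsk of=/dev/fd0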

The approach we’ve chosen is this: we add to Thinstation the ability to install an Etherboot PXE loader to the hard drive. When we want to set up a new thin client, we put both a bootable floppy and a bootable CD into the computer and turn it on. The chances are high that the BIOS will be set to boot from either the floppy or the CD before the hard drive. These bootable disks are simple Etherboot loaders that pull down the Thinstation image and boot it. Once Thinstation is booted and we know everything is working correctly, we run the script (via an icon on the desktop) to install the PXE loader to the hard drive. The next time Thinstation loads, it sees that the hard drive has already been set up, and hides the install icon.

There are a few more complications with notebooks, but I’ll try to post solutions to those (as well as the Thinstation package) soon.

…is the coolest thing.

I maintain a network that consists of a SuSE Linux terminal server and about 15 thin clients. The clients connect to the server via NoMachine’s NX terminal server technology. We are in the process of moving from booting the thin clients off of CDs towards network booting. The only difficult thing about this transition has been setting up a staging/testing environment. Testing essentially requires its own network, complete with a DHCP/TFTP server. Since we’re a small shop, we can’t afford that kind of investment.

Enter VMware. VMware Workstation is virtualization software, which means it lets you run “guest” operating systems on your computer (the “host”). Each guest thinks it is running on a normal PC.

I’ve used Workstation for a couple of years now to satisfy my love of playing with operating systems, but had never before looked into its virtual networking capabilities. They’re impressive, as I’ve discovered. You set up Teams, each basically a group of virtual machines connected together by virtual networks you define. For each network, you can set the link speed and packet drop percentage. (Very useful for testing flaky wireless connections!)

I set up my Team to consist of one server and multiple thin clients. The server has two virtual network adapters, one connected to the “real” outside network and one to a virtual network. On the “real” side, it looks to the rest of the network like just another computer: it gets an IP address, has an internet connection, and so on. SuSE is configured to respond to DHCP and TFTP requests only on the other adapter, so it doesn’t interfere with the existing, “real” DHCP server. All the clients are connected only to the virtual network, where they are configured to network boot from the SuSE server.
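The dhcpd side of that is small. Here’s a sketch with placeholder addresses and boot file; on SuSE, the DHCPD_INTERFACE setting in /etc/sysconfig/dhcpd pins the daemon to the internal adapter:

    # /etc/dhcpd.conf (addresses and boot file are placeholders)
    subnet 192.168.100.0 netmask 255.255.255.0 {
        range 192.168.100.50 192.168.100.150;
        next-server 192.168.100.1;     # the SuSE box doubles as TFTP server
        filename "thinstation.nbi";    # image the clients boot
    }

    # /etc/sysconfig/dhcpd
    DHCPD_INTERFACE="eth1"             # listen only on the virtual network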

You can also power on the whole Team at once. In my case, it starts the server, waits 100 seconds to allow the server time to boot, then powers on the thin clients, one every twenty seconds. Here is a screenshot showing a boot sequence in progress.

All this took maybe twenty minutes to set up, plus OS install on the server. All of it runs on a single machine, my workstation. I love virtualization.