25 years of Linux

Linux recently celebrated its 25th anniversary. I can’t remain silent about it, because Linux has played an important role in my life.

In the 1980s, Richard Stallman started a heroic and successful effort to create a free operating system in response to the unfortunate dominance of proprietary operating systems. But in the 1990s, his GNU project was still missing a very important part – the operating system kernel. This gap was filled by Linus Torvalds when he started working on a new operating system kernel called Linux and decided to publish it under the GNU General Public License.

I’m not sure exactly why Linux caught on and became so popular. Perhaps it was the right thing coming at the right moment. When I first heard about Linux, IIRC around 1993, it looked like an amazing alternative to a world dominated by the unbelievably inferior Microsoft systems, complemented by proprietary Unix systems that were anything but suitable operating systems for a student’s PC. The only real alternative to DOS and its graphical add-on called Windows was the BSD family of systems. But Linux, perhaps due to being a young system, was less demanding on hardware, and I could run it, including the X Window System, on my PC equipped with 4 MB RAM. I abandoned DOS/Windows and soon switched to using GNU/Linux exclusively.

It was a lucky choice, and it set the direction of my professional career. I didn’t bother picking the best things from different worlds and using different operating systems for different purposes. Instead, I focused on the right thing and on solving the numerous problems I faced when I started using GNU/Linux. I learned that freedom has its price, but when one is ready to pay it without looking aside, it brings great returns.

I’ve been a GNU/Linux user for more than half of my life. The proprietary vendors who tried to lock users inside their proprietary systems, much as they try to do today with e.g. smartphones and IoT devices (even though these often run on top of the Linux kernel!), have failed to push the advancing free operating system into irrelevance. I can still use GNU/Linux, and I can make my living by developing free software on Linux.

OS X upgrade

I upgraded OS X on my work computer to a new version. It required two system restarts and made the computer (with an SSD drive!) unusable for about half an hour. So much for the so-called “world’s most advanced desktop operating system”.

20 years of Debian

Debian celebrated its 20th anniversary last week. IIRC, I installed Debian for the first time in 1995, version 0.93R5. My very first GNU/Linux distribution was SLS (who remembers today what that was?). I soon switched to Slackware, and after some time I decided it might be a good idea to try something else once again. Looking at our faculty FTP server, I found a distribution called Debian, with an interesting development model and with package dependencies (missing dependencies were quite a problem with Slackware at that time, and Debian was the only distribution solving it). About a year later I became one of the Debian developers.

I’ve never switched to another distribution since I installed Debian for the first time, and I’ve remained a Debian user (and developer) to this day. Although I tried to install and use other Linux distributions over the years, I wasn’t fully satisfied with any of them. Even at times when I wasn’t particularly happy with Debian, my research led only to the conclusion that there was no real alternative.

Although Debian has never been the most popular GNU/Linux distribution, it has still been very successful all these years, with a solid user base and solid foundations. What makes it so successful? I can see three basic reasons.

First, Debian’s top priority has always been software freedom, since its very beginning. Debian’s insistence on being a truly free software distribution makes its development and use unrestricted, independent and cooperative.

Second, Debian has always been a non-commercial distribution directed only by its developers. Its “users for users” development approach keeps it free from big commercial pressures, bureaucratic constraints and similar destructive problems. The development is based on democratic decision processes and efforts to reach consensus rather than on simple top-down decisions dictating what to do. This is not always easy or quiet, but it contributes to long-term success. Debian has also been successful in self-organizing and managing its growth, which isn’t easy for a group of more than a thousand more or less regular volunteer contributors. Commercial activities are left to Debian derivatives, which is a good thing.

Third, Debian prefers stability over feature creep or release rush. I can confirm this is sometimes annoying, but in the end it makes the life of users much easier. There is a choice between stable releases and the continuously updated testing, unstable and experimental branches. Many other distributions, despite being smaller, are unable to provide an equivalent of Debian stable (whether they offer something named that way or not), sometimes being even less stable than Debian unstable.

In my view, Debian excellently demonstrates the strength of free software. It’s a unique phenomenon: the only large, stable, independent and completely free software distribution with sustainable development. It perfectly complements great and successful free software projects such as Linux, GNU and others. It also serves as the basis of many other GNU/Linux distributions of various kinds derived from it. Let’s wish Debian all the best for the years to come.

Using Mac OS X

I now use a proprietary operating system even on a desktop computer: I’m forced to use Mac OS X at my job. Apple says OS X Mountain Lion is an easy-to-use and incredibly powerful system with features I’ll love. In my experience as a user, the system is primitive, inflexible and chaotic. It’s very hard to find a single feature that would make me excited, and I feel relief each time something works at least as expected. OS X is similar to other proprietary systems in that regard. It’s just much more overhyped.

But my point here is not to rant about Apple software. My point is that after I became somewhat familiar with OS X, I lost (although maybe just temporarily) my habit of complaining about GNU/Linux. Things are often a bit hard there, but then I can usually say to myself “OS X can’t do that at all”. I’m not very familiar with Windows systems, but I guess they are not much better than OS X. So if you are in despair because your favorite free software system tends to be buggy, deficient and confused, I suggest using OS X or Windows exclusively for a week as a cure.

My lesson is that we shouldn’t underestimate what free software operating systems have achieved and how far ahead of proprietary systems they are. Software freedom is not primarily about advanced features or quality of software. But we can see that freedom and collaboration can achieve a lot despite all the well-known problems.

An interesting question related to OS X is the practical difference between the copyleft licensing of GNU/Linux systems and the relaxed licensing of *BSD systems. Is it good or bad (or does it matter at all) that Apple could derive its new operating system from a free Unix system? On one hand, the Unix roots make survival on OS X easier (e.g. it’s fine to have ls available) and maybe Apple has contributed something to FreeBSD (I don’t know); on the other hand, *BSD licensing helps the making of proprietary software. My view is that copyleft licensing is one of the significant reasons to prefer GNU/Linux systems over *BSD systems.

ZFS user

I decided it was a good idea to replace my complicated hard drive setup utilizing parted + mdadm + vg* + lv* + mkfs + fsck + fstab + whatever else with just zpool + zfs. Run zpool once and then create and manage file systems without artificial constraints, without unnecessary administration overhead, and with added benefits such as checksums, snapshots or file system sharing across different operating systems.
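
A minimal sketch of what this buys (the pool name and device paths are just examples):

    # create a mirrored pool from two whole disks; this is the one-time step
    zpool create tank mirror /dev/sdb /dev/sdc
    # from now on, each file system is a single command, no partitions, no fstab
    zfs create tank/home
    zfs set compression=on tank/home
    # and snapshots come for free
    zfs snapshot tank/home@before-upgrade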

I’ve been running ZFS on Linux for more than a month now, and I appreciate its end-user simplicity a lot. It’s not without problems though.

The first problem is the licensing, set by a proprietary software company, which makes the free ZFS license incompatible with the GPL. So ZFS can’t be integrated into Linux and has to be installed separately as additional modules. This is not that much of a problem for the end user, except that it complicates using ZFS as a root file system. For simplicity and safety I use small separate non-ZFS drives for booting and for the root file systems. Otherwise, ZFS on Linux releases build Debian packages smoothly, and they can be installed without any problems.

The second problem is operating system stability. I seriously run ZFS on two quite different GNU/Linux machines. zfs send reliably crashes both of them. While this is the only problem on one of the machines, the other machine has suffered from occasional freezes since I started to use ZFS on it. I can’t say the problem is in ZFS, as I can’t get any information from a frozen machine, but ZFS is the primary suspect.
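
For the record, the crashing operation is nothing exotic; it’s plain snapshot replication along these lines (the pool, file system and host names are made up):

    # take a snapshot and replicate it to another machine
    zfs snapshot tank/data@backup
    zfs send tank/data@backup | ssh backuphost zfs receive backup/data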

EDIT: Well, the computer has just frozen on the BIOS boot screen after a manual reboot. So ZFS can’t be the direct cause of the freezes. Perhaps my hardware doesn’t like my new hard drive configuration.

System reinstallation

On one of my machines, Linux Containers became unable to start after the host system boot. They started fine during boot, but any later attempt to start any of them failed with the weird message Invalid argument - pivot_root syscall failed. I couldn’t get help on that, and obvious actions like trying to stop some daemons or reading the Linux sources didn’t help.
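
To illustrate (the container name is hypothetical):

    # fine when invoked by the init scripts during boot,
    # but any later manual attempt failed:
    lxc-start -n mycontainer
    # lxc-start: Invalid argument - pivot_root syscall failed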

I had no interest in debugging the boot process on the machine, so when I was changing my hard drive configuration (for a completely different reason) I simply reinstalled the host system. And the problem was gone. This was the first time in my 19 years of using GNU/Linux systems that I had to reinstall a system because something stopped working.

I’m disappointed with Linux Containers. Although they basically work, they still lack some basic features like finer permissions, direct execution of commands in a running container, or file deduplication. And I can see little progress since I started using them. There are new problems instead. Why is such a basic virtualization feature so poorly supported in Linux? Is my system reinstallation another sign of deeper problems in free software development?

Linux drivers

I bought a new input device, a Wacom Bamboo Pen & Touch tablet. I was careful enough to buy an older model and to check that the device is supported on Linux. Based on my previous experiences, I also tested the tablet on a Windows computer to make sure it actually works and I can handle it, so that any faults on Linux would be caused solely by the drivers and not by hardware or user defects. Installation of the Windows driver was quick and easy, and the device worked well immediately.

Then I tried to get the device running on Linux. I installed a newer 2.6 Linux version, reported to support the device, and the newest released version of the X.Org Wacom drivers. I added the appropriate sections to my /etc/X11/xorg.conf. So after some hours of googling and updating the system, I had an environment that should have been ready for testing the tablet.
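
The xorg.conf additions looked roughly like this (the device path and identifiers are from memory and may differ):

    Section "InputDevice"
        Identifier "stylus"
        Driver     "wacom"
        Option     "Device" "/dev/input/wacom"
        Option     "Type"   "stylus"
    EndSection

    Section "InputDevice"
        Identifier "touch"
        Driver     "wacom"
        Option     "Device" "/dev/input/wacom"
        Option     "Type"   "touch"
    EndSection

    # plus matching InputDevice references in the ServerLayout section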

I restarted the X server and ended up with a frozen black screen, having to use the reset button on my computer. This repeated several times until I discovered that the Wacom driver conflicts in some way with the Wizardpen driver, crashing the X server without recovering the keyboard and console. So I commented out the Wizardpen input device in xorg.conf and could then start X without having to reset my computer anymore.

The pen worked well, but the touch function didn’t, and the tablet buttons didn’t work at all. I found out that although the given model number of the tablet should be supported in newer 2.6 kernels, there are actually several variants of the model, and the newer ones are not supported in those kernels. So I installed the newest version of the tablet kernel drivers for Linux 2.6.
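
Installing the out-of-tree kernel drivers means the usual build-from-source routine, roughly (the tarball name and version are illustrative):

    # build and install the out-of-tree wacom kernel module
    tar xjf input-wacom-0.x.y.tar.bz2
    cd input-wacom-0.x.y
    ./configure && make && sudo make install
    # swap the in-kernel module for the freshly built one
    sudo rmmod wacom && sudo modprobe wacom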

The touch function started to work in a different, but still unusable, way. After some exploration I found out that while the tablet kernel drivers support several versions of 2.6 kernels, not all fixes are backported to all the supported versions. My Linux was too old, so I had two options (not counting giving up on the touch function): either backport the changes myself or upgrade my system to a development version. I decided to upgrade, and the touch function finally started to work with a recent 3.1 kernel, after I was forced to abandon my stable OS installation (I couldn’t upgrade just the kernel, because the OpenAFS modules had to be recompiled for it, which is possible only with a new gcc, which depends on a new libc, etc.).

Nevertheless, gesture recognition still didn’t work as expected. I had to install the latest development version of the X.Org Wacom drivers to fix that.

The tablet buttons still didn’t work. After some guesswork I got them working by changing my X.Org configuration; a wrong device was used in the sample configuration copied from the wiki. After another round of googling, I finished the device installation by tweaking the unsuitable acceleration settings, adding tablet rotation support to my screen rotation script, and remapping the buttons. After many hours of googling, reading, editing, compiling, resetting and experimenting, the tablet still doesn’t work as well as it did on Windows after a five-minute installation, because of the limited set of supported gestures, but at least it can now do everything my mouse can.
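
The runtime tweaks can be expressed as xsetwacom calls; something along these lines (the device names are examples, the real ones come from xsetwacom list):

    # rotate the tablet together with the screen
    xsetwacom set "Wacom Bamboo 2FG 4x5 Pen stylus" Rotate half
    # remap the first pad button to a middle mouse click
    xsetwacom set "Wacom Bamboo 2FG 4x5 Pad pad" Button 1 2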

Lessons learned:

  • Hardware vendors still mostly ignore Linux users.
  • It’s best to avoid buying new hardware unless it is really needed. Spend the saved money on something else (e.g. a donation to the Free Software Foundation, to the developers of your favorite free software product, or to any common charity you like).
  • Don’t buy a computer without a reset button if you intend to run Linux on it.
  • The fact that the mere presence of two particular drivers in X.Org can crash the whole X server and make the user lose control over the computer indicates there is something very wrong with software development practices (well, it’s nothing new, but this proves it’s all not “just fine”).
  • Linux doesn’t care about making driver backports easy. Don’t buy new hardware unless you are ready to upgrade everything. Even then you may be forced to apply patches, compile, install, debug, read the source code and make fixes yourself.
  • Using kernel modules not included in the official Linux kernel is troublesome all the time. I’d abandon OpenAFS for this reason alone if Linux provided any reasonable network file system.
  • There is no such thing as reliable driver information. Be ready for a lot of googling, judging and reading the source code yourself.
  • I’d be helpless without access to the source code of various components.

In summary, device drivers demonstrate big problems. Hardware vendors are generally uncooperative; there are insufficient resources for reverse engineering, development and documentation of free drivers; and software development practices are generally poor. Even if a device becomes reasonably supported, its remaining driver problems are unlikely to ever get fixed once the device vanishes from the market.

I consider device drivers to be big blockers of the development and adoption of new advanced operating systems. I could tolerate various problems of experimental operating systems, but it’s hard to attract any serious interest from new users and developers when the hardware doesn’t work or suffers from big performance problems.

Is there a solution? I can’t see any. Perhaps the only hope is coordinated campaigns targeted at hardware vendors and the arrival of new, clever and enthusiastic Linux kernel developers.

Linux Containers

Until recently I was a relatively satisfied Linux VServer user. But when I upgraded to Debian 6 about half a year ago, OpenAFS aklog stopped working inside the guests. There were always problems with running OpenAFS clients inside VServer guests, but this time I couldn’t find any workaround. I had to solve the problem, and I had to act quickly.

As VServer didn’t look usable anymore, I decided to move to Linux Containers. It’s a similar virtualization approach, implemented in a different way. I can’t provide any expert comparison of the two solutions, as I’m just a home user. Here are my simple observations.

Linux VServer is generally the more mature product (no wonder, it has been around for more years). It provides better management tools, including things like vserver-stat (summary information about running guests and the resources they consume), vserver stop (safe stopping of a guest), vserver enter (a way to enter a guest directly from the host) and vapt-get (batch invocation of apt-get over all running guests). It defines a finer set of capabilities, e.g. you don’t have to grant the big CAP_SYS_ADMIN permission just to be able to use FUSE. And it offers hashify with its copy-on-write feature to save main memory, memory cache and disk space.
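
For illustration, day-to-day VServer management looks like this (the guest name is an example):

    vserver-stat                # summary of running guests and their resources
    vserver mail stop           # safely stop the "mail" guest
    vserver mail enter          # get a shell inside the guest from the host
    vapt-get --all -- upgrade   # run apt-get upgrade in all running guests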

Linux Containers allow me to run things in the containers that I never managed to get running inside VServer guests (the new OpenAFS aklog, OpenVPN). They provide more sophisticated device isolation (mknod /dev/null is possible without permitting too much) and network isolation (each container can have its own routing and filtering rules). Configuration is easier. And they are included in the official kernel source. On the other hand, they lack some important features provided by Linux VServer, and I have experienced several more or less annoying problems (though none of them prevents me from using Linux Containers completely). The implementation may be improving rapidly, so things may be better by now.
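
A fragment of a container configuration showing both kinds of isolation might look like this (the path, names and addresses are examples):

    # /var/lib/lxc/mail/config
    lxc.utsname = mail
    # deny all devices by default, then allow just the needed ones
    lxc.cgroup.devices.deny = a
    lxc.cgroup.devices.allow = c 1:3 rwm    # /dev/null
    lxc.cgroup.devices.allow = c 1:8 rwm    # /dev/random
    lxc.cgroup.devices.allow = c 5:2 rwm    # /dev/ptmx
    # a veth pair gives the container its own network stack
    lxc.network.type = veth
    lxc.network.link = br0
    lxc.network.ipv4 = 192.168.1.10/24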

Both projects lack good documentation.

There is a lesson in Linux VServer and Linux Containers. AFAIK, Linux developers originally rejected the Linux VServer idea as unnecessary, for several years. Apparently they changed their minds in the meantime, and Linux Containers are here. The result of the lost years is that we still don’t have a complete and well-working solution. Well, we know that progress is sometimes constrained by our mental barriers.

Smart phones one year later

When I was looking for a new mobile phone more than a year ago and considered smartphones, I found there was no working smartphone equipped with a truly free operating system providing a rich set of applications and a nice development environment. It seems at least two important changes have happened in this area during the last year.

I’ve found that the Neo FreeRunner is still not dead. And looking at its latest news, I can see a great change: Debian is now taken seriously as the primary operating system for the device. This means the phone can be equipped with a reliable operating system, I should be able to install my favorite applications on the phone, there should be a working development environment, and it would make sense to contribute to the development of the phone software environment (nothing of that applied to the former OpenMoko operating system). For this reason, if I were buying a smartphone today, I would buy the FreeRunner without much doubt. So after fixing the wrong operating system strategy, the question now is what will happen with the hardware. We’ll see.

As for big vendors, Nokia is still interested in shipping operating systems based on the GNU/Linux platform in their phones and possibly future tablets. They’ve abandoned Maemo, which is no pity. When I last looked at the Maemo web pages, I got the impression that Maemo was a semi-free operating system providing only a very limited set of supported packages and not worth contributing to. Why not simply use Android under such conditions? The only real advantage of Maemo might be that with significant effort and patience one could in theory port one’s favorite applications and all their dependencies to the system; there is no such option with Android. It’s better to avoid such systems.

Now Nokia has joined efforts with Intel on the development of the MeeGo operating system. Will MeeGo fix the problems of Maemo? They promise MeeGo itself will be completely open source, which is good. But will it provide all the important applications such as Emacs, KStars, Scid with Stockfish, etc.? I doubt it; it’s no easy thing to develop a complete operating system distribution, and it’s even harder under a corporate umbrella, where the distribution will likely be rather closed and driven mostly by marketing requirements. And if all the marketing wants is to provide just another Android and iPhone competitor, then the question is again: why not use Android straight away? If Nokia would like to be different (would it?), they could work on making Debian easily installable and runnable on their phones and tablets. Those devices could then be real killers in a certain market segment. We’ll see.