I’ve recently experienced root file system loss. It happened after my keyboard stopped working due to a hardware problem and I turned the computer off with the power button. Apparently some disk activity was still in progress, and while most of the data survived, the file system as a whole was seriously damaged and the system was unusable even after fsck’s repairs. Well, I should probably use the reset button next time and turn the power off only from the bootloader menu.
I had to reinstall the whole computer. Since Debian 5.0 had been released recently, it was a good opportunity to test its installation.
The most interesting experience was discovering how unstable software is today. I often run development versions of the operating system, so I tend to be tolerant of various small problems. But when I install a released system from scratch, I expect a completely smooth process. It wasn’t so, and Debian 5.0 disappointed me. I don’t think other distributions do better, but Debian used to have higher standards (and I would expect them to be met after seven months of freeze). While the installation process itself went completely fine, not everything on the installed system was ready to work well without fixes.
There are two kinds of problems: Debian problems and upstream problems. I experienced one Debian problem: apt-proxy didn’t work in the default setup. This seems to be a known bug (!), and fixing it required a small change to one of the source files. Apparently there was a lack of coordination between the archive maintainers, the installer maintainers, and the apt-proxy maintainers. Considering the complexity of current distributions, one can understand that it’s difficult to make them completely flawless. But there seem to be problems with testing and with paying appropriate attention to reported bugs.
Upstream problems are more frequent and more difficult to deal with. The current trend seems to be to fix bugs by releasing new upstream versions without paying proper attention to regressions. Similarly, it’s much easier for a distribution package maintainer to package a new upstream version than to backport the individual bug fixes. The result is that old bugs get fixed but new ones are introduced. Combine this with the facts that reported bugs are often hard to reproduce and diagnose and that we are all busy, busy, busy today, and there is no obvious way to handle the general problem of software instability. Even when bugs are known, there isn’t enough manpower to fix them, and they tend to get ignored.
So should we simply learn to live with the fact that software features get ever more complex while stability keeps declining? Or is there a feasible way to manage the growing complexity of software?
I think there are things worth trying. More and better interaction between software developers and users is needed. We need better ways to prevent regressions, to reduce the number of hidden bugs, and to give users easy means of determining the causes of problems. Instead of rushing toward new features, we should focus more on making existing functionality actually work. This is what I, as a user, demand from software.
Unfortunately the real world won’t allow me to step into this area anytime soon. But if I ever have the opportunity to work more on Debian, I’ll probably start by joining the Debian QA efforts.