Monday, December 31, 2007

Access ext2/ext3 Disk from Windows

I've formatted my old 60GB USB external backup disk as a single ext3 partition. I figured it would be ideal for offloading all my media files (videos, music, e-books and other documents). But then I realized that I'd need to move stuff to and from this disk and my wife's Windows PC. So I started searching for a way to do it. I ended up trying out two ext2 file-system drivers for Windows: ext2ifs and ext2fsd.

Both are free, and both seem to handle the basics well - reading and writing files to and from the disk. Both require a rather cumbersome mounting procedure where a special utility is used to assign a drive letter to the ext2/ext3 disk (but Linux users like me should not complain). Both fully support ext2 (which also covers ext3 partitions, but without journaling).

I tried ext2ifs first, and it worked nicely - I used it to move a 4GB DVD image from one computer to the other (perfectly legal, I assure you). It can't, however, handle UTF-8 encoded file names. This limitation is clearly stated in the ext2ifs FAQ, but I somehow missed it.

The ext2fsd changelog, on the other hand, indicates that it correctly handles UTF-8 and other file name encodings. And indeed it does. But unlike with ext2ifs, I could not "safely remove" the USB disk even after I unmounted the drive and exited the ext2 volume manager.

I'll stick with ext2fsd for the time being, mainly because it solves the UTF-8 problem, but also because it seems to be a rather active project with a growing set of features, while ext2ifs seems to have been frozen for over a year now.

[27 Jan. 2008] UPDATE: ext2fsd USB issues have been fixed since version 0.42, and even better - it can be configured to auto-mount USB disks. Sweet!

Tuesday, December 25, 2007

HPLIP Upgrade or Yet Another Printing Problem

The long overdue upgrade of HPLIP (HP Linux Printing and Imaging) to version 2.7.10 was supposed to make me happier - it was supposed to fix the fax (try saying it fast).

It didn't.

The first problem I encountered after the upgrade was that hp-toolbox insisted that it couldn't communicate with my printer (an HP OfficeJet 5510 all-in-one). It took some futzing around to discover that the printer was accessible if I ran the program as root with gksu -u root hp-toolbox. Annoying.

By this time I managed to remove the fax printing queue from CUPS. Never mind, I told myself - just run hp-setup, follow the wizard's instructions and it'll be good to go. But here I got another surprise - hp-setup created a printer queue, but failed to create a fax queue, complaining that it could not find the HPLIP fax PPD file.

I futzed around a bit more, trying to manually add a fax queue from within the CUPS web interface. But hp-sendfax reported an error when I tried to use it (I later realized that I selected the printer PPD file instead of the fax PPD file - simply because there was no fax PPD file to select).

I finally decided to go over the hplip package bug page on the Debian Bug Tracking System. I then realized that I was actually hit by two bugs:

Bug #452454: to use the printer the user must be a member of the scanner group.
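
The fix for the first bug, accordingly, is to add the user to that group (run as root; the user must log out and back in for the new group membership to take effect):

adduser <username> scanner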

Bug #454341: the path to the PPD files is wrong in /etc/hp/hplip.conf, and should be set like this:

ppd=/usr/share/ppd/hpijs/HP/

The fax works now. Dang.

Friday, December 21, 2007

Small Fonts in Icedove

I'm using Icedove 2.0.0.6 from the unstable package repository as my email client. A recent update caused it to show small, difficult-to-read fonts in both menus and messages. It took an hour of mucking around with font sizes, pondering the contents of .mozilla-thunderbird and googling for relevant search phrases, before I decided to go over the list of Icedove bugs.

And sure enough, the problem is not only known, but a workaround exists, too:
  1. in Icedove select Edit->Preferences
  2. select the Advanced tab
  3. press "Config Editor"
  4. enter "dpi", press enter
  5. modify the value of the key layout.css.dpi from -1 to 0
  6. restart Icedove
My eyes feel better now.

Wednesday, December 19, 2007

DHCP Server @ Home

My wife's Windows PC is connected to my Debian box with a crossover Ethernet cable. It was configured to use a fixed IP address (10.0.0.4), and my box (10.0.0.2) serves as both its gateway and its DNS server. This setup works fine, but I decided, in the interest of flexibility and control, to attempt to install a DHCP server on my box.

So here goes:
  1. install the DHCP server:
    apt-get install dhcp3-server
  2. edit /etc/dhcp3/dhcpd.conf and add the following at its bottom:

    host windows-pc {
      hardware ethernet 00:16:36:8E:92:3B;
      fixed-address windows-pc.home;
    }

    subnet 10.0.0.0 netmask 255.255.255.0 {
      option domain-name "home";
      option domain-name-servers machine-cycle.home;
      option routers machine-cycle.home;
      default-lease-time 28800;
      max-lease-time 28800;

      # Unknown clients get this pool.
      pool {
        max-lease-time 300;
        range 10.0.0.200 10.0.0.253;
        allow unknown-clients;
      }

      # Known clients get this pool.
      pool {
        range 10.0.0.5 10.0.0.199;
        deny unknown-clients;
      }
    }

    subnet 172.27.208.0 netmask 255.255.240.0 {
    }


    The first stanza (host) assigns an IP address (or a host name) to the specified MAC address. The second stanza (subnet) defines the properties common to all computers on the home network (at the moment it's just my wife's laptop). Note the use of address pools (this stanza was copied almost verbatim from the man page for dhcpd.conf). The last stanza defines a subnet associated with my cable modem with no properties or hosts - this allows the DHCP server to ignore requests originating from the cable modem network interface.

  3. (Re)start the DHCP server:
    /etc/init.d/dhcp3-server start
  4. Configure the firewall (if you're using one) to allow DHCP traffic. For Shorewall simply add the following lines to the /etc/shorewall/rules file:
    #       dhcpd
    ACCEPT loc $FW udp 67 68
    ACCEPT $FW loc udp 68 67
  5. Restart the firewall:
    /etc/init.d/shorewall restart

  6. On the Windows machine disable the relevant network interface, and reconfigure it (right-click, properties, etc.) to get its IP address via DHCP, and likewise for the DNS server.
  7. Configure the firewall on the Windows machine to allow traffic on the 10.0.0.x subnet (in ZoneAlarm this means that this subnet should be added to the trusted zone).
  8. Enable the network interface and verify that it acquires the correct IP address, gateway and DNS.
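
If something goes wrong, both ends can be inspected - the server records its leases in a text file (path per the Debian dhcp3-server package), and the Windows client can be queried and nudged from a command prompt:

cat /var/lib/dhcp3/dhcpd.leases

and on the Windows PC:

ipconfig /all
ipconfig /renew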

Saturday, December 8, 2007

Mixing "testing" and "unstable"

I have several (14) packages installed from "unstable" (aka "sid"), and several (3) locally installed packages. The other packages (2192) come from the "testing" repository.

Setting up a mixed system is rather easy - I followed the instructions at the APT HOWTO, and you may also want to read the nice HOWTO and comments over at the Debian User Forums.
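
The gist of those instructions is a single setting in /etc/apt/apt.conf that tells APT which release to prefer:

APT::Default-Release "testing";

With that in place, individual packages can be pulled from "unstable" explicitly:

apt-get -t unstable install <package>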

I use the following one-liner to list the packages from "unstable":
apt-show-versions | cut -d ' ' -f 1 | awk 'BEGIN{FS="/"}{if($2=="unstable")print$1}' | sort
(replace "unstable" with "testing" to list the packages from "testing").

And here's how to list the locally installed packages:
apt-show-versions | cut -d ' ' -f 1 | awk 'BEGIN{FS="/"}{if(NF==1)print$1}' | sort
Append "|wc -l" to these commands in order to count the packages instead of just list them.

Thursday, November 29, 2007

Bacula to the Rescue (again)

It happened again - disastrous data corruption. Last time it involved my email client (icedove) on my own Linux box. This time it was my wife's email client (Outlook Express) on her Windows machine.

It started when OE asked me, upon being closed, if I'd like it to compact its folders in order to save disk space. This kind of maintenance is recommended by Micro$oft in order to prevent corruption of the message storage files. Compacting is also known to cause corruption...

And guess what? it did. My wife's Inbox, very much like her desk, is filled with stuff - more than 8000 messages. After compacting was done, only 6000 messages were left - 9 months' worth of emails were lost.

If this ever happens to you and you're running Windows XP or later, then you may be able to restore the original dbx storage files - OE places a copy of the original files in the recycle bin before compacting (but with a .bak extension instead of .dbx).

I didn't know this at the time, so I went ahead and restored those email messages from the Bacula backup. As I described a while ago, I have an elaborate backup procedure that backs up individual email messages instead of the gigantic dbx files. After restoring the messages to a temporary folder on my wife's PC, I simply selected the missing messages (eml files) in Windows Explorer, and then dragged and dropped them into the Inbox folder in OE.

It took a few minutes, but OE (and I) survived this ordeal.

Sunday, November 25, 2007

Are We Running on AC Power?

One of the nice things about using a laptop as a desktop substitute is its battery. This proved important lately as we experienced several successive power failures. We finally managed to find the culprit - a faulty electrical appliance.

In the meantime I needed a way to disable backups when my laptop was running on batteries, because the external USB backup disk runs on mains power. This is easy enough using the script on_ac_power (part of the powermgmt-base package) - define the following function

check_ac_power ()
{
    # must be on AC power
    echo -n "Checking AC Power ... "
    if on_ac_power > /dev/null ; then
        echo "ON"
    else
        echo "OFF"
        return 1
    fi
}

and run it at the beginning of the script that's executed by Bacula with the RunBeforeJob directive:

check_ac_power || exit $?

The exit $? bit propagates the non-zero status returned by check_ac_power, in case it fails, back to Bacula.

[14 Jul. 2008] UPDATE: the on_ac_power script stopped working after a kernel upgrade... (I've also revised and fixed the contents of this entry).

Wednesday, November 21, 2007

Format and Label a FAT32 External Disk

It took me a while to accept the fact that my backup disk is simply too small. It proved too small even after carefully selecting the files and directories to back up, and calculating file and job retention periods and backup rates to match my 60GB disk. It only takes a few days of leaving large files lying around to fill up the backup disk - it has no slack.

I felt it was time to indulge myself and buy some hardware - I got a Western-Digital 250GB Elements USB external disk. It comes with no software at all, FAT32 formatted, and is readily recognized by my box.

I decided to format it after checking the disk with fsck.vfat (part of the dosfstools package) - it complained about differences between the FATs on the disk, and about hidden sectors. The interesting bit of trivia regarding FAT32 is that you can't format a FAT32 volume larger than 32GB - get this - under Windows! Under Linux you just run the following:

mkfs.vfat -F 32 -n volume_name /dev/sda1

(you should, obviously, replace /dev/sda1 with the correct device path). The generated file system (233GB in size) is perfectly usable on both Windows and Linux.

As it happens, I did not specify a volume label, and went on to copy the backup files from the old disk to the new one. I (and my Linux box) only realized at the next reboot, two days later, that the volume label was now empty. So I needed a way to label the disk without formatting it. This can be done with mlabel (part of the mtools package) like this:

mlabel c:volume_label

But you must first edit /etc/mtools.conf to make sure that the drive letter c: maps to the correct Linux device path - in my case it was just a matter of un-commenting the following line:

drive c: file="/dev/sda1"

I don't know why this mapping is necessary, but that's how these tools work.

Slack is good.

[7 Feb. 2008] UPDATE: labeling a disk updates just one FAT, causing fsck.vfat to complain about differences between the FATs on the labeled disk - this seems to be perfectly harmless.

Monday, November 5, 2007

Display IMAP Quota in Icedove 2

For Icedove 1.5 you should install the Display Quota add-on. It works with Icedove 2 (currently available from the unstable repository), but isn't really needed.

Icedove 2 shows the quota in the status area when it goes above some threshold (default is 75%). To make the quota always visible follow this procedure:
  1. Select the menu "Edit -> Preferences"
  2. Select the "Advanced" tab
  3. Click the "Config Editor..." button: a window titled "about:config" should appear
  4. In the filter text box enter "quota"
  5. Modify the value of mail.quota.mainwindow_threshold.show from 75 to 0
  6. Restart Icedove

Saturday, November 3, 2007

Script for Removing GNOME Panel Applets

I was playing around with the GNOME Swallow applet. It's a nice little toy that can "swallow" non-applet applications into the GNOME panel. I used it to convince wmforkplop and wmhdplop to show up in the bottom panel, and then decided it would be nicer to have them on the top panel.

I didn't realize it at the time, but it turns out that hitting "cancel" on the swallow applet configuration dialog box doesn't remove the applet from the panel - it only leaves it un-configured. The only way to actually remove it (when it's not configured) is by directly editing the list of panel applets that's kept in the GNOME configuration database. This can be done manually using gconf-editor, but following this guide I came up with a script to get rid of all the swallow applets on the bottom panel:

#! /bin/sh
# strip the brackets and commas from the gconf list of applet ids
applets=`gconftool-2 --get /apps/panel/general/applet_id_list | sed -r -e 's/[][]//g' -e 's/,/ /g'`
new_applets=""
for applet in $applets ; do
    add=$applet
    bonobo_iid=`gconftool-2 --get /apps/panel/applets/${applet}/bonobo_iid`
    # drop swallow applets that live on the bottom panel
    if [ "${bonobo_iid}" = "OAFIID:GNOME_Swallow" ]; then
        echo $applet
        panel=`gconftool-2 --get /apps/panel/applets/${applet}/toplevel_id`
        if [ "${panel}" = "bottom_panel_screen0" ]; then
            add=""
        fi
    fi
    new_applets=${new_applets}","${add}
done
# reassemble the list, squeezing out extra commas
new_applets="["`echo ${new_applets} | sed -r -e 's/,+/,/g' -e 's/^,//' -e 's/,$//'`"]"
gconftool-2 --set -t list --list-type=string /apps/panel/general/applet_id_list ${new_applets}

This probably took longer than it should have, but at least I learned something.

Saturday, October 20, 2007

One Liner: Put a Gallery2 Website in Maintenance Mode

Use the following to put a Gallery2 website in "maintenance mode" (e.g. during nightly backups):

/usr/bin/replace "\$gallery->setConfig('mode.maintenance', false);" "\$gallery->setConfig('mode.maintenance', true);" -- <gallery2-root>/gallery2/config.php

where <gallery2-root> stands for the root directory of your Gallery2 website. The (rather handy) utility replace is part of the MySQL server package.

Use a similar one-liner to get out of maintenance mode, with false/true replaced by true/false, respectively.
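
If the MySQL server (and with it replace) isn't installed, a plain sed one-liner should do the same job - a sketch; note the escaped $ signs that keep the shell from expanding them:

sed -i "s/\$gallery->setConfig('mode.maintenance', false);/\$gallery->setConfig('mode.maintenance', true);/" <gallery2-root>/gallery2/config.php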

Wednesday, October 17, 2007

Surprise

I get the following message in Vim 7.1.56 when typing Ctrl-x Ctrl-c:

Type :quit<enter> to exit Vim

Vim? Being helpful? to an obvious emacs user?

Monday, October 15, 2007

One Liner: Recursively Delete Empty Directories

Here's how to delete all empty directories recursively under some directory:

find <parent-dir> -depth -type d -empty -exec rmdir -v {} \;

This is one step away from deleting all your data, so please be careful.
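
A prudent first step is a dry run that merely prints the directories that would get removed:

find <parent-dir> -depth -type d -empty -print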

Saturday, October 13, 2007

The Case of the First Page Only Printer

Chapter 1: I'm hit

In the past week or so I've been hit with a strange printing problem: my hp officejet 5510 all-in-one only prints the first page of multi-page print jobs. There are no error indications, it just stops after the first page.

It all started when I tried printing a PDF document from within Acrobat Reader. My initial guess was that there was something wrong with the document (I downloaded it from some website, so it seemed plausible).

Chapter 2: I Test therefore I Am

I tried printing another document that I'd printed before, and got the same result - just the first page.

I tried printing to a virtual PDF printer, and this time I got all pages printed.

I tried printing to file from Acrobat Reader (with the officejet as the target printer) - the PostScript file looked just fine - all pages were there. Weird.

I tried printing the PostScript file at the console with lpr - and was a bit surprised when I got the first page only.

At this point I was pissed. But also worried - I rarely use the printer, but my wife prints a lot, as part of her work. She prints from her Windows laptop, and I had to make sure that it still worked. So I copied the PDF file to her machine and tried printing it from Acrobat Reader. It printed OK - all pages came out.

Eeek!

I can print from a remote Windows machine but not directly from my own Debian box? this was an insult!

Chapter 3: From Insult to Workaround

The officejet is accessed from my wife's PC via IPP (Internet Printing Protocol) - so I came up with an idea: I'll setup an IPP printer in CUPS, that prints to the URL of the direct printer. This can be easily accomplished from the CUPS web interface at http://localhost:631:
  1. select the "Printers" tab
  2. copy the link to the directly connected printer
  3. select the "Administration" tab
  4. click "Add Printer"
  5. fill in the printer's name, location and description, press "Continue"
  6. select the device "Internet Printing Protocol (http)"
  7. paste the link copied at step 2, press "Continue"
  8. fill in printer make/model, press "Add Printer"
  9. print a test page
Except for step 2, this is the way to add a network connected printer to CUPS.

OK, so now I tried printing to this new printer - it's really funny, since GNOME shows two printer icons in the tray area! Oh, and guess what? it printed all pages!

This was very puzzling. I could've let it go at this point, since I found a workaround, but I just couldn't. It worked before and it broke down - I wanted a fix. I had no idea what went wrong. The system guys at work were also puzzled - not a good sign - I was alone with this problem.

Chapter 4: Debugging CUPS

But now I had a way to compare a working print job against a non-working print job:
  1. set LogLevel to debug in /etc/cups/cupsd.conf
  2. restart CUPS:
    /etc/init.d/cupsys restart
  3. print the same file twice: once to the directly connected printer, and once to the IPP printer
  4. compare the diagnostic messages in the CUPS error log file /var/log/cups/error_log, that are emitted while printing each job
I sifted out the relevant debug messages from the log file by searching for messages that contain the string "Job nnn" (where nnn is the job number of the relevant print job), and removing the time stamp and job number (the first few characters of each line):

grep "Job 399" /var/log/cups/error_log | cut -c 42- > direct-log.txt
grep "Job 400" /var/log/cups/error_log | cut -c 42- > ipp-log.txt

I then compared the files in emacs (ediff-buffers). There were differences alright, too many of them actually. I finally decided that the interesting part was where direct-log.txt showed

Skipping page 2...
Skipping page 3...
Wrote 1 pages...

and ipp-log.txt showed bits of PostScript ending with the message

Wrote 3 pages...

Chapter 5: The Fix

So CUPS was telling me that it is skipping pages for the directly connected printer. But why does it do that? there's only one explanation (unless I stumbled upon a bug, which didn't seem likely) - because it's told to do that! Somewhere there was an option set to print just the first page. But where?

I started reading the CUPS manual at http://localhost:631/help and found the manual page for lpoptions which pointed me to the file ~/.cups/lpoptions which holds user default printing options. And sure enough, one of the options set for my printer in this file was "page-ranges=1" - print just the first page...

I removed that option and got the printer working again. Bliss.
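
In retrospect, instead of editing the file by hand, lpoptions itself should be able to remove the offending option (untested - per its manual page; <printer-name> stands for the queue name):

lpoptions -p <printer-name> -r page-ranges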

Epilogue: The Culprit

At this point I recalled that the page ranges option did indeed show up in the debug log messages, just for the directly connected printer. There were other unique options there like "wrap=true" - which pointed me to the culprit: gtklp.

I played around with gtklp while trying to print a certain plain text document, with wrapping of long lines enabled. After a bit of testing I realized that gtklp saves printing options into ~/.cups/lpoptions, which affects printing from any other application.

It turns out I'm not the only one ever hit by this bit of odd behavior - see Debian bug #386592.

Anyway, the only appropriate action at this point seemed to be this:

apt-get remove --purge gtklp

Monday, October 8, 2007

Syntax Highlighting Pager

I use a pager (less) to view files at the console all the time. One feature that's missing from the pager programs I'm aware of (less, more) is syntax highlighting. I find myself opening shell scripts and programs in an editor in order to view them properly, with all keywords, strings and numbers color coded and easy to spot. This is cumbersome.

I had an idea: add syntax highlighting to less! A quick internet search after the idea hit me revealed that vim comes equipped with a script that does exactly that by using vim as the pager.

The script is not as feature-complete as less, but it's cool - I have the following snippet of code in my .bashrc to define the character - (dash) as a syntax highlighting pager:

# syntax highlighting pager
- () {
    # pass "$@" (not "$*") so multiple files and names with spaces survive
    /usr/share/vim/vim71/macros/less.sh "$@"
}

To use it, just type the following at the console:
- some_code_file
(it can also accept text from a pipe).


Tuesday, September 25, 2007

Fun with Debian Source Packages II

In my last post I promised to present a use case for a Debian source package that doesn't have to do with debugging or patching a bad binary package. So here goes.

As part of my preparations for a rainy day - namely, the day on which my PC will suddenly die on me - I wanted to make sure that I could access the files on my USB backup disk, without Bacula. The files are stored in a set of large compressed archive files, so it may seem hopeless, but luckily Bacula comes with a file extraction utility bextract. The following command can extract the files from a given storage device, using a bootstrap file and the configuration file of the Bacula storage daemon:

bextract -b <bsr-file> -c <bacula-sd.conf> FileStorage <destination-directory>

So all I need to do is run a script at the end of each backup job that copies the bootstrap file that Bacula generates to the backup disk. Furthermore, I need to make sure that up-to-date versions of both the storage daemon configuration file and the bextract executable are also stored on the backup disk.

But the bextract executable depends on a number of external shared libraries, as can be determined with ldd:

# ldd /usr/sbin/bextract
linux-gate.so.1 => (0xffffe000)
libacl.so.1 => /lib/libacl.so.1 (0xb7f67000)
libz.so.1 => /usr/lib/libz.so.1 (0xb7f52000)
libpython2.4.so.1.0 => /usr/lib/libpython2.4.so.1.0 (0xb7e41000)
libutil.so.1 => /lib/i686/cmov/libutil.so.1 (0xb7e3d000)
librt.so.1 => /lib/i686/cmov/librt.so.1 (0xb7e34000)
libpthread.so.0 => /lib/i686/cmov/libpthread.so.0 (0xb7e1d000)
libdl.so.2 => /lib/i686/cmov/libdl.so.2 (0xb7e18000)
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0xb7d2d000)
libm.so.6 => /lib/i686/cmov/libm.so.6 (0xb7d08000)
libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb7cfd000)
libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb7bb5000)
libattr.so.1 => /lib/libattr.so.1 (0xb7bb1000)
/lib/ld-linux.so.2 (0xb7f85000)


In order to avoid these dependencies, bextract has to be statically linked. The need for static linking has been anticipated by Bacula's authors and is supported as an option in the upstream build procedure.

But it's not enabled in the regular Debian build procedure, and getting this done entails some modification of the source package...

Looking at the package's debian/rules file I realized that it actually runs the upstream package's configure script with a set of options. I executed ./configure --help and got a list of all the available options - the relevant option in this case is --enable-static-tools.

So, in order to enable static linking of the Bacula tools, I added the option --enable-static-tools to the configuration options provided to the configure script (by modifying CONF_ALL in debian/rules). Instead of building the whole package with dpkg-buildpackage I launched the build of the sqlite3 version of the storage daemon (which is the version I have installed), like this:

fakeroot -- debian/rules build-stamp-sqlite3

This generated a statically linked bextract at debian/tmp-build-sqlite3/src/stored/bextract, as can be verified with ldd:

# ldd debian/tmp-build-sqlite3/src/stored/bextract
not a dynamic executable


While useful at times, mutilation of source packages has some obvious downsides:
  • it may require quite a bit of experimentation before you get it right
  • you're basically on your own, pretty much the same situation that you face when downloading any non-Debian source package
  • maintenance can be quite a nightmare because source packages are not tracked by the Debian package management tools
Have fun.

Fun with Debian Source Packages I

Most of the time there's no need to build packages from source. But sometimes it can be useful. I did this on several occasions, for several reasons:
  • manually apply a patch from the Debian bug tracking system, so as to fix a bug
  • attempt to fix/debug a problem (this requires programming skills, and motivation - I happen to have both)
  • build binaries with different options than those used by the package maintainer
Why would anyone do the latter? Well, I'll have more to say about that next time.

For now I'll just spell out the procedure for building a typical Debian package from source (adapted from the APT HOWTO):
  • one-time step: add a deb-src line to /etc/apt/sources.list, like this:
    deb-src http://http.us.debian.org/debian/ testing main contrib non-free
  • run apt-get update
  • install build dependencies for the package you want to build (I'll use bacula-sd as an example):
    apt-get build-dep bacula-sd
  • cd to a temporary directory
  • get the source package:
    apt-get source bacula-sd
  • cd to the newly created source directory:
    cd bacula-2.2.0/
  • optional: make modifications to the source code
  • build the package:
    dpkg-buildpackage -rfakeroot -uc -b
  • optional: install the package:
    dpkg -i ../bacula-sd_2.2.0-1_i386.deb
    note that all the bacula packages are built from the same source package, so that the parent directory actually contains a bunch of binary packages.
The source directory is typically quite similar to the upstream source package, except that it contains a sub-directory named debian, where Debian specific files reside. Among these is the rules file which can be used to manually perform the various steps in the build process.

More fun to come.

Sunday, September 23, 2007

Printing Plain Text

Last time I wanted to print a document I got bitten quite hard. This time I was ready. Or so I thought. I mean, all I wanted to do was print a plain text document...

Way back when, during my time at the university, we used to print stuff with lpr, so I tried it out, and realized that it doesn't wrap lines that are longer than the paper width. The document at hand contained newline characters only after each paragraph - it looked OK on screen, but horrible when printed out.

Well, I tried printing from within gedit (the default GNOME editor). It performs text wrapping on long lines at word boundaries, which is nice. But:
  • it prints the file name and page numbers on every page - and I couldn't find an obvious way to turn this feature off
  • there doesn't seem to be a simple way to insert a page break
  • it does not interpret or honor form feeds (^L characters) that are often used in plain text documents as page breaks
Next I tried gtklp: it can be made to wrap long lines, but not at word boundaries, so words get cut up between lines.

How about emacs? open the file, mark the whole document by hitting Ctrl-x h, and then hit Meta-x (Alt-x on normal PC keyboards) and type fill-region - this does line wrapping at word boundaries, which makes the document printable with gtklp, or directly from emacs.

Except that region filling in emacs does some unexpected things - like indenting a whole paragraph if the first word is indented (I wanted just the first line to be indented). I could fix it by hand, but that didn't seem like the Right Thing To Do™. There's probably a way to change this behavior, but I didn't bother looking for it.

I vaguely recalled using a2ps to convert plain text documents to PostScript, so I installed the a2ps package, read the manual page, and realized it didn't do line wrapping at word boundaries. Sigh.

So I searched for "word wrap a2ps", in the hope that I was missing something, and hit a blog entry which pointed to enscript as the right tool for the job.

The following command line does exactly what I want:

enscript --header='||Page $% of $=' --margin=72:72:72:72 -1 --word-wrap --media=A4 file.txt

(one inch margins on all sides, 1 up, word wrap, A4 page size, right aligned header showing page info)

Sheesh...

Sunday, September 2, 2007

Bacula to the Rescue

It finally happened - disastrous data corruption.

My mail client of choice - Icedove (the debianized Thunderbird) - started crashing for no apparent reason. It took me a few crashes to figure out that one of my email accounts was probably corrupted somehow - Icedove would crash whenever I tried to get messages for it. It even crashed when I tried to simply select its Inbox folder (I wanted to move the messages to a different folder and then delete and recreate the account).

This was definitely the right time to try bacula for real:
  1. Select "Edit->Account Settings..." menu item in Icedove's menu
  2. Find the bad account, record the path of the "Local directory" under "Server Settings"
  3. Exit Icedove
  4. Run bconsole - the bacula command console - in a terminal
  5. Enter the command restore and follow instructions to select the appropriate backup job that should be restored (I picked the latest)
  6. Eventually the command console will enter file selection mode, where you can mark files and directories to restore (hit ? and <Enter> to get a list of available commands). Select the files under the directory recorded at step 2 above.
  7. Run the restore job. Files are restored to a restore directory, so that there's no risk of overwrite.
  8. Replace the content of the bad mail directory with the restored files using

    cp -a <restore directory>/* <bad mail directory>

It's really that easy.

Thursday, August 30, 2007

Mounting a Windows Shared Folder

Over the past year or so I've used several methods to transfer files between my wife's laptop (running Win XP Home) and my Debian box. I have several requirements of any such method:
  1. my wife's laptop is not always connected
  2. two way file transfers
  3. non-English characters in file names
  4. no crashes or stalls, no transfer errors
  5. usable in a script
  6. bulk file transfers
  7. large files
  8. one-time or automatic setup
While these seem rather obvious, it took quite a while before I converged on the right approach.

The first step is to share a folder on the Windows PC.

The next step is to connect to that shared folder from the Linux PC:

I started out using the "Places->Connect to Server..." menu item on the Gnome panel. It's really easy: select "Windows share" in the Service type drop-down menu, and type in the relevant connection information (server, share, folder, etc.). This worked rather well as long as I was using nautilus for my file transfers. I couldn't figure out at the time how to access the remote files via a script with regular shell commands (e.g. cp, rm, mv).

I just recently learned that I was actually using Gnome VFS, and that files may be copied at the command line with the gnomevfs-copy utility, using the same file URIs that nautilus uses (they start with smb://).

Still, I wanted something that's not tied to Gnome, since I have plans to replace it with something else (I'll have more to say about that in the near future).

For a short while I used scp (secure copy) and sshfs (ssh user-space file system), but this method has several drawbacks: for starters, I needed to set up an SSH server on my wife's laptop (available for free as part of Cygwin). It isn't straightforward.

There are other problems:
  • I can't access my wife's documents folder when I connect with my own username, even though it is shared
  • Filenames must be in English (I couldn't figure out how to configure this)
  • sshfs tends to stall in mid-transfer on my setup; I didn't investigate why.
The next attempt was to mount the shared folders manually using smbfs (which is the method used at my workplace). I added the following line to /etc/fstab:

//10.0.0.4/C /mnt/windows/C smbfs uid=<username>,gid=<username>,username=guest,guest,codepage=<codepage>,iocharset=utf8 0 0

Notes:
  • my wife's machine has the local address 10.0.0.4
  • it has the whole C drive shared
  • I created the directory /mnt/windows/C to be used as the mount point
  • you should replace the angle-bracketed placeholders with your own values
  • the shared folder is treated here as if it is always available - I tried to add the noauto option but then the codepage and iocharset settings were ignored (probably due to a bug in smbmount).
Last week I got fed up with this and searched Google for smbfs - the first link I got pointed me to CIFS VFS - Advanced Common Internet File System for Linux. A few minutes later I tried the following line in /etc/fstab:

//10.0.0.4/C /mnt/windows/C cifs noauto,noexec,nosuid,nodev,uid=<username>,gid=<username>,username=guest,guest,iocharset=utf8 0 0

And it worked just fine - it meets all of my requirements!
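
Note that with noauto the share isn't mounted at boot - it's mounted (and unmounted) on demand, as root, unless the user mount option is added as well:

mount /mnt/windows/C
umount /mnt/windows/C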

One last note: an issue that seems to be a FAQ is how to mount a folder like "My Documents" that's shared on the Windows machine - the problem is that the space messes up /etc/fstab. The solution is to use the octal code for the space character, \040, as follows:

//10.0.0.4/My\040Documents /mnt/windows/My\040Documents cifs noauto,noexec,nosuid,nodev,uid=<username>,gid=<username>,username=guest,guest,iocharset=utf8 0 0

Happy sharing!

Thursday, August 23, 2007

Mount Gigabyte - Update

As part of my daily system update, I've upgraded udev to version 0.114-2. I usually take the time to go over the change logs of updated packages, but I didn't do it this time. I probably should have.

I discovered that the disk id format has changed, i.e. the entries in /dev/disk/by-id have changed. This meant that my /etc/fstab settings for my external USB disk stopped working. The device name that was used to refer to the disk did not exist anymore.

My first reaction was to use a different method of referencing that drive (e.g. /dev/disk/by-uuid), but I figured I really should handle it the udev way:

  1. use the following to detect the disk serial number (the disk is attached as /dev/sda):

    udevinfo -a -p /sys/block/sda | grep serial

  2. add a specific udev rule in /etc/udev/local.rules:

    KERNEL=="sd?1", ATTRS{serial}=="DEF1078555F6", SYMLINK+="gigapod"

    (this adds a symbolic link /dev/gigapod that points to the disk's first partition).
  3. create a symbolic link in /etc/udev/rules.d/

    ln -sf ../local.rules z99_local.rules

  4. modify the mount point specification in /etc/fstab as follows

    /dev/gigapod /mnt/gigapod vfat users,rw,noexec,nosuid,nodev,shortname=mixed,uid=bacula,gid=bacula,umask=000,iocharset=utf8,noauto 0 0

  5. unmount and disconnect the USB disk
  6. restart udev:

    /etc/init.d/udev restart

  7. connect the USB disk
This way the disk is correctly mounted during the boot process, and there's no need to mount the disk in /etc/init.d/bootmisc.sh. Furthermore, Gnome seems to like it better this way (it was creating two disk icons on the desktop, one named gigapod and the other named backup60g which is the disk label - I now get only the latter).

Total control. Brrr.

Wednesday, July 18, 2007

Selecting a Default Kernel to Boot

I'm experiencing some network problems with the new 2.6.21 kernel - I can't really figure them out right now. It's quite unusable at the moment.

Luckily, I kept the previous kernel (2.6.18) installed, so I decided to revert to it until this issue is resolved. By revert I mean that I wanted it to be the default kernel that GRUB selects to boot.

It was late at night, so it took me two reboots before I actually looked at /boot/grub/menu.lst and saw the line specifying the default menu entry:

default 0

I guessed that menu entries are counted from 0 and up, so I modified the 0 to 2 (corresponding to the third menu entry, which is my old 2.6.18 kernel), and rebooted the machine.

I wonder how much futzing around it would've taken me before I actually read the manual. Or even read the comments in menu.lst (!).

Last time I acted this way was when I decided, late at night, to enlarge an NTFS partition, without a backup. I lost all my data. In the recovery process I managed to get my computer's motherboard fried. It's a long and sad story, which I'm not inclined to share.

Luckily, this time it just worked, so I had no need to search for the documentation ...

Wednesday, July 11, 2007

Windows Update, ZoneAlarm and VNC - an Unholy Trinity

I use VNC over an SSH tunnel to access my wife's computer from work. This has allowed me to troubleshoot her computer whenever the need arose ("I've lost an important file...", "I can't print...", etc.), making both my wife and me happy.

So, using the same trick, I tried running Windows Update on my wife's machine. During the installation process ZoneAlarm popped up one of its oddly shaped dialog boxes, letting me know that a certain program was attempting to connect to the Internet, and asked whether I wanted to allow that to happen. Since that program was obviously launched by the Windows Update installer, I assumed it was as harmless as any Microsoft application can possibly be, pointed my mouse towards the appropriate button and attempted to "press" it by clicking the left mouse button.

To my surprise, nothing happened. The mouse and keyboard seemed to have no effect on any ZoneAlarm dialog box, including the pop-up context menu that appears when right-clicking the ZoneAlarm tray icon (I wanted to shut it down...).

It took a quick search to figure out that I had to disable the "Protect the ZoneAlarm client" option in the ZoneAlarm preferences tab, in order for it to play nicely with VNC. But this is impossible to do from within VNC...

I killed the Windows Update process, waited patiently for my wife to return home and then guided her to click for me in the right place, while I was watching her moves through VNC.

Later on I tried running Windows Update again, and this time it worked without a hitch. So boring. Just the way I like it.

Tuesday, May 29, 2007

Anti Virus Woes

My wife's laptop came bundled with Norton Internet Security 2006 and its update subscription expired recently. My options at this point seemed clear enough:
  1. do nothing
  2. purchase a new subscription
  3. uninstall
  4. uninstall and replace with a different product
    1. free
    2. non-free
The logic is also quite simple:
  • an updated security tool is better than an obsolete one
  • any security tool is a resource hog - but some are less hungry than Norton's
  • some security tools are better than Norton's
  • most non-free security tools are better than the free security tools
  • it's a laptop, and it's also my wife's laptop, and sh*t does happen

The performance claims above are based on independent benchmarks such as those conducted by AV-Comparatives.

I finally chose ESET's NOD32 Anti-Virus over Kaspersky Anti-Virus.

I uninstalled the Norton suite and then installed the NOD32 trial version together with the ZoneAlarm Free firewall (it's becoming increasingly difficult to track down this free utility on the ZoneAlarm website - but you can get it directly from download.com).

I was quite happy with this setup - the laptop wasn't as slow as with the Norton tools, and it seemed to work without a hitch. It was only after the 30-day trial period ended that I realized I couldn't purchase a license online from ESET's website. In that respect Norton is much easier - the registration process is simple, painless, and actually works (it makes sense - they do want my money).

I had to purchase a real boxed CD from a local dealer, uninstall the trial version and then install from the CD. What a drag.

I guess I'll have to go through this next year too.


Tuesday, May 15, 2007

Memory Upgrade

I've recently increased my laptop's memory from 256 MB to 512 MB. I've been meaning to do this for quite a while now, but I was worried about compatibility, and took my time checking all the options. Plus, cost was an important issue.

HP/Compaq dictates that only specific memory modules (e.g. the 256MB module P/N 285523-001) may be installed in my laptop (it costs around $200 and requires the return of a defective part).

I went to the Kingston and Crucial (Micron) websites, selected the proper laptop model, and got a list of memory modules that are guaranteed to be compatible with my laptop. These modules seem similar to generic modules of the same specification (DDR SODIMM, PC2100, CL2.5, 266MHz) but are much more expensive (compare, for example, Kingston's KTC-P2800/256 at $42 with their own KVR266X64SC25/256 at $27).

I finally opted for a used generic memory module (at $15). I got it from a friend of one of the sys-admins at work, with a promise that if it didn't work I could give it back to him. So how does one make sure that a memory module is functional? - well, it's quite easy:
  1. install memtest86+ like this:
    apt-get install memtest86+
    (this actually installs a new "kernel" that's dedicated to memory testing)
  2. shutdown the PC
  3. install the memory module (and yup - firmly push that sucker into its slot)
  4. turn on the PC - enter BIOS
  5. make sure the memory module is detected correctly (note that some memory may be used by the graphics accelerator, so that the reported memory size may be a bit smaller than expected)
  6. exit BIOS, and select memtest86+ from the GRUB menu
  7. memtest86+ starts testing memory automatically
  8. wait for it to complete at least one full test with no errors (takes about 40 minutes on my laptop with 512 MB)
  9. exit and commence with normal boot
Guess what? breaking the trend of my previous posts here, it actually worked like a charm!

Thursday, May 3, 2007

Upgrading from "etch" to "lenny": Upgrading Bacula

As I said previously, the upgrade process to Debian "lenny" was painless enough, except for some problems with Bacula. I read the Bacula 2.0 release notes before upgrading, and I suspected problems in three areas:
  1. Storage device configuration: I use an external hard disk, connected via USB - and version 2.0 includes some improvements regarding such devices.
  2. Database: the database format has changed (I use the sqlite3 package), and it's necessary to convert the database to the new format, using a migration script.
  3. Scripts: I've configured several scripts to be run by the Bacula Director Daemon and the Bacula File Daemon, that perform some chores before and after the backup process. The scripting facility has been significantly overhauled in Bacula 2.0. The changes include modifications to the configuration file syntax, but the previous syntax is still available (e.g. the RunBeforeJob directive is implemented as a shortcut for a predefined RunScript block).
I half hoped that everything would just work, but I did not kid myself.
And sure enough - sh*t happened:
  1. I had no problem with the storage device - its mounting is handled by an external script.
  2. The database conversion script was run automatically during the upgrade process, and did something really bad to my database - it contained no data after the upgrade!
    I intended to clear it anyway, because I wanted to split the backup pool in two - one pool for full backups and one pool for incremental/differential backups. But if this hadn't been my intent, I would've been left with a real problem.
    I did not investigate this any further. YMMV.
  3. The improved scripting facility caused an interesting problem:
    Background: the Windows backup job inherited its properties from a default job common to all backup jobs. One of these properties is a ClientRunBeforeJob directive that cannot be used on a Windows machine, so the Windows backup job overrides it with its own ClientRunBeforeJob directive.
    Problem: the new RunScript facility allows several scripts to be specified, each with its own set of properties, so the ClientRunBeforeJob directive in the Windows backup job specification did not override the default job's directive, but rather added another script (see the sketch of the new syntax after this list). It so happens that this script was run first (on the Windows machine) and then the File Daemon tried to run the default job's client script - this caused an error and the backup process died.
    Solution: I split the default job - one default job for Linux and one for Windows.
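
For reference, the ClientRunBeforeJob directive is now shorthand for a full RunScript block, which looks roughly like this (a sketch based on the Bacula 2.0 release notes - check the manual for the exact defaults; the command path is a placeholder):

RunScript {
  RunsWhen = Before
  RunsOnClient = Yes
  FailJobOnError = Yes
  Command = "/path/to/script"
}
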
I kept the old backup for a week or so before I decided that the new setup works. It's not as if I had any option (I had no intention of going back to version 1.38), but it felt somehow more appropriate to wait.

Thursday, April 26, 2007

Backup/Extract/Convert Outlook Express Messages

[18 Jun. 2008] UPDATE: I now use UnDBX to facilitate fast incremental backups of DBX files.

My wife's email client of choice is Outlook Express. That, in itself, is OK with me. Except that she doesn't delete messages. Never. She just can't be bothered. Her OE storage folder currently takes up 1.4GB of disk space.

The real problem is backup. All the messages in each OE message folder are stored in a single monolithic dbx file. This means that during the daily incremental backup, Bacula encounters very large files that were modified and need to be backed up. A daily incremental backup of over 1GB is unacceptable, since my storage medium is a 60GB hard disk that needs to hold the full and incremental backups of both our computers.

The solution I came up with was pretty simple: extract all the email messages from the dbx files to a different folder, so that Bacula only needs to back up new email messages during an incremental backup. Little did I know how difficult it would be to set up such a scheme.

It took me quite a while to find a command line tool that can extract eml files from dbx files, and is free. Searching Google for "extract eml dbx" or "convert eml dbx" brings up a lot of links to shareware tools, and most of these cannot be used from a script.

I tried using tools like xdelta to build and backup binary delta files, but this proved to be problematic - all the tools I tried required too much memory, and took a lot of time to run, to the point of being impractical.

I even started toying with the idea of writing such a tool. Finally, during my search for the OE dbx file format specification, I found DbxConv - a nice little utility that does exactly what I wanted it to do.

The complete solution is a bit more complex than just running DbxConv. Before every backup job, the Bacula Director Daemon instructs the Bacula File Daemon on my wife's computer, to run a VB script (available here), as specified in /etc/bacula/bacula-dir.conf:

Job {
...
ClientRunBeforeJob = "c:/windows/system32/cscript.exe c:/backup/tools/run-before-job.vbs %n"
...
}

This script attempts to shutdown Outlook Express, calls DbxConv to extract eml messages from the dbx files to a scratchpad folder, and then uses cygwin's rsync utility to synchronize the content of the scratchpad folder with an eml storage folder that is marked for backup in the Bacula Director's configuration. The scratchpad folder is then erased, and the backup process continues.
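
The rsync step at the heart of the script boils down to something like this (hypothetical paths - the actual script also does some error handling):

rsync -a --delete /cygdrive/c/backup/scratchpad/ /cygdrive/c/backup/eml/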

This process requires extra free disk space of twice the size of the OE storage folder, but that is a small price to pay, compared to the daily savings in backup disk space.


Sunday, April 15, 2007

Upgrading from "etch" to "lenny"

[25 Feb. 2009] UPDATE: this is an old post about upgrading Debian/testing. If you're considering upgrading Debian/stable, please read the official upgrade instructions.

Well, Debian "etch" is now officially Stable. Turning stable meant one thing: a major upgrade.

I like the fact that the software I use gets routinely updated. Going stable means that "etch" only gets security updates from now on. So, on to "lenny" - previously Unstable and now officially Testing.

The only thing required to perform the transformation is to edit /etc/apt/sources.list and make sure that any reference to etch is replaced with testing. After that it's just a matter of pressing Ctrl-Alt-F1, logging in as root, and running

apt-get update
apt-get dist-upgrade

and then reboot.
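
As for the sources.list edit, a one-shot substitution like the following should work (do review the result - etch may also appear in comments or unrelated lines):

sed -i 's/\betch\b/testing/g' /etc/apt/sources.list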

Sounds easy enough ...

Well, not quite. But it wasn't too painful either. I encountered three issues:
  1. the djvulibre-plugin had a broken dependency - running
    apt-get -f dist-upgrade
    (as instructed by apt-get itself) resolved this issue.
  2. bacula was updated from version 1.38 to 2.0 - I intend to share my experiences regarding this issue in an upcoming post.
  3. gallery2 was updated from version 2.1 to 2.2 - I had to visit all three gallery sites that I maintain and let the upgrade wizard step me through the upgrade process.
Needless to say (but I'll do it anyway) I made sure that a valid backup was ready, just in case, before I started this.

The whole process (not counting bacula maintenance) took about an hour.
And now back to work...

[04 Nov. 2008] UPDATE: I've recently spotted the Lenny upgrade-advisor. It's a modular tool that's meant to perform some sanity checks on your system before upgrading (README). I haven't tried it myself - I'm just passing the word...

Wednesday, March 14, 2007

Printing Acrobatics

It took a whole evening of my life to get Acrobat Reader 7 to print on my box. I installed it from the Debian Multimedia Packages Repository soon after installing Debian, and only recently needed to print a document.

The printing dialog box showed the correct printer, but nothing happened when I tried to print. I noticed that acroread uses the lpr command to print, so I opened a terminal and tried to print a text document with lpr. No luck. I used lpq to look at the print job queue and verified that the print job existed. The CUPS job queue, however, was empty.

I managed to print by installing gtklp and using that as the printing command in acroread. But acroread keeps reverting to lpr after being restarted. I still haven't figured out how to fix that - and it's pretty annoying.

But I don't need to fix it, because I did manage to get lpr working. I stumbled upon debiantutorials.com, followed their instructions, and installed cupsys-bsd. This replaced lpr with a CUPS aware version, which made me happy again.
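
For the record, that boils down to:

apt-get install cupsys-bsd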

<rant>
Being clueless can be very time consuming when it comes to using Linux. Making sense of it on your own often takes a lot of time and effort. It may be worth it, experience-wise, but I do pretend to have a life, and episodes like this one just burst my bubble.
</rant>

As for the document - I had no time to read it...

Friday, March 9, 2007

The Case of the Slow Scanner

The problem: scanner access was very slow. All the SANE front-ends that I tried got stuck for more than a minute while "scanning for devices".

Troubleshooting: my initial suspicion was that the sane configuration at /etc/sane.d/ was somehow messed up. But this was a dead end. Using strace I found out that hpssd (the HPLIP services and status daemon) was the culprit: it was stuck waiting on a socket. But what was it waiting for?
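
For anyone trying to reproduce this kind of diagnosis, attaching strace to the running daemon is enough to see where it blocks (pgrep -f is used since the daemon runs under an interpreter, so pidof won't find it by name; this assumes a single match):

strace -f -p $(pgrep -f hpssd)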

I followed the troubleshooting procedure at the HPLIP site and started hpssd in debug mode. I then realized that hpssd was waiting for hpiod (the input/output daemon?) to find devices connected to the parallel port, probably timing out, and only then querying for USB devices (I pieced this theory together from a bunch of cryptic debug messages that hpssd spewed out while in debug mode). So, what next?

Solution: Searching through Google, I couldn't find any reference to a problem similar to mine. I did, however, encounter several hits that linked the parallel port mode with timeouts. So I set the parallel port mode to EPP via the BIOS - the timeouts were gone and scanner access became snappy.

The funny thing is that if anyone had told me that I needed to change the parallel port mode in order to fix a problem with a USB scanner, I'd have ignored the advice and denied that person access to my computer...

Thursday, March 8, 2007

Mount Gigabyte

I'm using an external USB disk as the backup storage device for bacula. As usual, setting it up seemed easy enough at first, but got complicated later: after all, all I needed to do was connect the drive, and let hotplugging magic take over...

This didn't quite work - the file system on the disk is FAT32 and auto-mounting it meant that the user bacula was not permitted to access the disk. I needed to manually specify an entry for the disk in /etc/fstab as follows:

/dev/sda1 /mnt/gigapod vfat users,rw,noexec,nosuid,nodev,shortname=mixed,uid=bacula,gid=bacula,umask=000 0 0

This fixed the permissions issue, but caused two other problems. The most obvious problem was that the disk failed to mount during startup. Adding the noauto option to the mount options, and the following lines to do_start in /etc/init.d/bootmisc.sh, fixed it by postponing the disk mounting until very close to the end of the boot sequence:

# mount backup storage device and restart backup storage service

mount /mnt/gigapod
/etc/init.d/bacula-sd restart

The second issue that cropped up was that when other USB storage devices were connected, I would sometimes end up with /dev/sda1 pointing to one of the other storage devices. This was fixed by referring to the persistent device node name (courtesy of udev):

/dev/disk/by-id/usb-ExcelSto_r_Technology_J36_DEF1078555F6-part1 /mnt/gigapod vfat users,rw,noexec,nosuid,nodev,shortname=mixed,uid=bacula,gid=bacula,umask=000,noauto 0 0

It looks innocent enough, but it took me quite a while to get this working, which makes it worth documenting.
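
The persistent name itself can be found by simply listing the by-id directory while the disk is connected:

ls -l /dev/disk/by-id/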

[23 Aug. 2007] Update: I now use udev rules to do this.

Sunday, March 4, 2007

Upstairs, Downstairs

Navigating the console command history can be tedious at times. One simple configuration I find very useful is binding the up/down keys to the backward/forward search functions of libreadline, by adding the following lines to ~/.inputrc:

$if mode=emacs

"\eOB": history-search-forward
"\e[B": history-search-forward
"\eOA": history-search-backward
"\e[A": history-search-backward
$endif

With this, just typing the first few characters of a command and hitting the up arrow key will bring up the most recent command that starts with the already-typed text. Hitting up/down then allows one to pick the right command from the history (hit ENTER to execute the line, or any other key to modify it).

This seems to work with any console based program that uses libreadline for keyboard input.

Wednesday, February 28, 2007

Scanner Darkley

Getting remote scanning to work seemed easy enough at first...
  1. Install the sane network daemon (and other stuff):
    # apt-get install sane-utils
  2. Follow the instructions outlined in the man page, namely:
    • add a line for saned in /etc/inetd.conf
      sane-port stream tcp nowait saned.saned /usr/sbin/saned /usr/sbin/saned
    • specify a list of allowed clients in /etc/sane.d/saned.conf
    • restart inetd with
      # /etc/init.d/openbsd-inetd restart
  3. Install SaneTwain on my wife's laptop and configure it to connect to my laptop
  4. Add a line to /etc/shorewall/rules to accept connections on the saned control port 6566, and restart the firewall with
    # /etc/init.d/shorewall restart

But it didn't work. Specifically, SaneTwain was able to query the type of the scanner and its parameters, but failed to acquire a preview.

It turns out that the scanned data isn't transferred through the saned control port, but rather through a different, dynamically set port. This is actually mentioned in the man page under the restrictions section, and their suggestion is
"If you must use a packet filter, make sure that all ports > 1024 are open on the server for connections from the client"
which seems like a bad idea.

The interim solution to the problem, until proper saned connection tracking is available, is outlined on the Gentoo-Wiki, and here is how I implemented it with shorewall:
  1. create an empty file /etc/shorewall/action.SaneConntrack
  2. create a file /etc/shorewall/SaneConntrack

    # track SANE control connections
    run_iptables -A $CHAIN -m recent --update --seconds 600 --name SANE
    # related traffic (ACK, FIN, DNS UDP responses etc.)
    run_iptables -A $CHAIN -m state --state ESTABLISHED,RELATED -j ACCEPT
    # SANE server uses a dynamic data port above 1024
    run_iptables -A $CHAIN -p tcp -m tcp --dport 6566 --syn -m recent --set --rsource --name SANE -j ACCEPT
    run_iptables -A $CHAIN -p tcp -m tcp --dport 1024: --syn -m recent --rcheck --rsource --seconds 3 --name SANE -j ACCEPT

  3. Add a line to /etc/shorewall/actions (create the file if it does not exist):
    SaneConntrack
  4. Add the following lines to /etc/shorewall/rules
    # saned
    SaneConntrack loc $FW tcp 6566
    SaneConntrack loc $FW tcp 1024:
  5. Restart shorewall.
Simple, right?

Hello World!

Welcome to my on-line system administration diary.

The system at hand consists of the following components:
  • My laptop: a Compaq Presario 900 (with a non-original 80GB hard disk) running Debian GNU/Linux "Etch"
  • My wife's laptop: an HP Pavilion dv6000 running Windows XP Home
  • an ethernet crossover cable connecting the two laptops
  • a Thomson DCM245 cable modem used to connect my laptop to the Internet
  • an HP OfficeJet 5510 printer/copier/scanner/fax machine connected to my laptop
  • a Gigapod III external HDD case containing a 60GB IDE hard disk, connected to my laptop via a PCMCIA USB 2.0 adapter.
My wife uses her laptop primarily for writing Word documents, and surfing the Web.

My laptop is meant to host our family's on-line photo gallery website and serve as a gateway to the Internet and to the multi-function printer. It also hosts a backup system (Bacula) and a firewall (shorewall). I also use it as my personal desktop (mostly for surfing the Web).

Hope you'll find some of the stuff here useful.