Friday, December 26, 2008

Listing Storage Devices That Contain A File System

Here's a script that uses HAL to find all the storage devices attached to a given machine that contain a file system (optical discs excluded), whether or not they are mounted:

#! /bin/bash
hal-find-by-property --key volume.fsusage --string filesystem |
while read udi ; do
    # ignore optical discs
    if [[ "$(hal-get-property --udi $udi --key volume.is_disc)" == "false" ]]; then
        dev=$(hal-get-property --udi $udi --key block.device)
        fs=$(hal-get-property --udi $udi --key volume.fstype)
        echo $dev": "$fs
    fi
done

This was my answer to a question on stackoverflow.com, during the few days that I thought that website was worth the time. I got over it.

(see a previous post of mine for another example of what can be done with HAL)

Friday, December 19, 2008

Override (Supersede) DHCP Network Interface Configuration

Suppose you find out that the MTU reported by the DHCP server to the DHCP client on your box is, for some reason, incorrect, and you end up with a misconfigured network interface.

There's a workaround until the problem is fixed on the server side - you can override that (and any other) value with a supersede statement in /etc/dhcp3/dhclient.conf, like this:

interface "eth1" {
    supersede interface-mtu 1500;
}

(reference: the dhclient.conf man page)
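The superseded value only takes effect when dhclient requests or renews a lease. The simplest way to force that is to bounce the interface - a quick sketch, assuming the interface is managed by ifupdown as on a stock Debian box:

ifdown eth1 && ifup eth1
# verify the new MTU
ip link show eth1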

Friday, December 12, 2008

Cropping Pages in Scanned PDF Files

Here's a script that takes a PDF file containing a scanned document whose pages are surrounded by an annoying black margin, extracts all the page images from it, crops every page to a (common) desired geometry and then joins them back into a single PDF:

#! /bin/bash
# usage:
#   pdf-crop.sh path/to/file.pdf geometry
# see 'man convert' for geometry syntax (example: 100%x90%+750)
mkdir -p "/tmp/$1"
echo "Extracting images..."
pdfimages -j "$1" "/tmp/$1/image"
echo "Cropping images..."
list=$(
    find "/tmp/$1/" -name "image-*.pbm" -o -name "image-*.ppm" -o -name "image-*.jpg" | sort |
    while read file ; do
        pdffile="${file}.pdf"
        printf "\"%s\" " "${pdffile}"
        convert -crop "$2" "$file" "$pdffile"
    done
)
echo "Joining images..."
eval "pdfjoin --outfile \"${1/%.pdf/.cropped.pdf}\" ${list}"

The script depends on pdfimages, convert and pdfjoin:
aptitude install xpdf-utils imagemagick pdfjam

And just in case you're wondering - the script started out simple, but there are spaces in the name of the PDF file that I used for testing, which turned out to be rather tricky to handle.
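For illustration, a typical invocation might look like this (the file name and geometry are made up) - the result is written next to the original as My Scanned Doc.cropped.pdf:

# keep a region 95% wide and 90% high, starting 750 pixels from the left edge
./pdf-crop.sh "My Scanned Doc.pdf" "95%x90%+750+0"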

Friday, December 5, 2008

Running A Script Upon External Disk Removal

A while ago I posted here about how I use a udev rule to trigger a backup of an external USB disk when it is connected to my box.

The udev rule matches both the kernel device name with a wildcard (because it's assigned dynamically) and the device's serial number (which is supposed to be a unique device attribute), and then installs an easy-to-remember symbolic link to the device and runs the backup script:

KERNEL=="sd?1", ATTRS{serial}=="300000064029", ACTION=="add", SYMLINK+="aluminum", RUN+="/path/to/script"

I now need to run another script when this external USB disk is disconnected.

At first it looked easy enough to accomplish: copy and paste the rule above, replace ACTION=="add" with ACTION=="remove", remove the SYMLINK bit and modify the path to the script.

I was surprised to find out that the script was never called - the remove event did not seem to fire. It took a few anxious minutes, with several physical connects and disconnects of the external disk, before I figured it out.

It seems that when the disk is removed, the conditional ATTRS{serial}==... is always false - presumably because the device attribute called serial is gone and can't be matched against. The correct (read: working) approach is to match against the symbolic link, like this:

SYMLINK=="aluminum", ACTION=="remove", RUN+="/path/to/post/removal/script"

I bet Linus Torvalds drives a car with a manual gearbox.
I guess Bill Gates has a chauffeur.
And Steve Jobs... well, he simply teleports.

Friday, November 28, 2008

Using Putty for Surfing the Web

I found this guide to be rather useful:
http://www.buzzsurf.com/surfatwork/

All I have to add is the following command line, which I've added to the startup menu:

"C:\PUTTY\PUTTY.EXE" -N -D 8080 -load profile

Surf away!

Friday, November 21, 2008

Sharing a CUPS-PDF Printer Over IPP

Generating PDF documents is easy enough on Window$: install PDFCreator and print from any application to the newly created PDF printer.

The same is possible on Linux using a CUPS-PDF printer. It's even easier if the source document is an OpenOffice.org document, because you can directly export it to PDF. Not to mention a lot of other applications that let you create PDF documents directly, or convert from most formats to PDF.

I recently needed to generate PDF documents on my VirtualBox hosted Window$ XP virtual PC. But instead of installing PDFCreator, it seemed to make more sense to "simply" print to the existing CUPS-PDF printer - it's just a matter of installing a new IPP printer at the Window$ side...

Start at the Linux box:
  1. point your browser to the local CUPS administration web interface: http://localhost:631
  2. click the Printers tab, scroll down until you find the PDF printer - write down its URL
And now for the main event, at a real or virtual Window$ XP box:
  1. open the Printers and Faxes folder
  2. from the menu select File->Add Printer
  3. click Next until the wizard asks you to choose between a local or network printer, select the network printer and click Next
  4. specify the printer with the URL recorded earlier, but replace localhost with the Linux box IP address, e.g.
    http://10.0.0.1:631/printers/PDF-Printer
    and click Next
  5. select a printer driver - in our case any Color PostScript printer should be OK
  6. a few more mouse clicks and we're done
At this point the newly created PDF printer can be used to generate PDF documents by printing to it from any application.

The documents will be created, by default, in the PDF directory under your home directory on the Linux box. If you're printing from a Window$ user account with a username that does not match any user account on the Linux box, then the generated PDF files will land in /var/spool/cups-pdf/ANONYMOUS/ (the paths can be configured by editing /etc/cups/cups-pdf.conf).
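For reference, these are the two knobs in /etc/cups/cups-pdf.conf that control the output locations (directive names as found in the stock cups-pdf configuration - your version may differ):

# output directory for known users (${HOME} expands to the user's home directory)
Out ${HOME}/PDF
# output directory for jobs submitted by unknown users
AnonDirName /var/spool/cups-pdf/ANONYMOUS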

The problem with the procedure above is that the printer driver selected for the new printer does not match the capabilities and limitations of the CUPS-PDF printer. The Right Thing™ to do is to install a PostScript printer driver with the correct CUPS-PDF PPD file:
  1. copy the file CUPS-PDF.ppd from the Linux box (find it at /usr/share/ppd/cups-pdf/CUPS-PDF.ppd) to a temporary folder on the Window$ box
  2. download the Adobe PostScript Universal Printer Driver for Windows Installer
  3. launch the installer - a printer installation wizard will come up
  4. tell it that the printer to install is a Local Printer, connected to LPT1: or any other local port
  5. when prompted to select a printer model, click Browse and search for the PPD file
  6. you should now be able to select "Generic CUPS-PDF Printer" and continue
The new local printer is a fake one, that's used only to get the driver installed. The next step is to replace the IPP printer's driver with this new driver:
  1. open the Printers and Faxes folder
  2. right-click the IPP PDF printer icon, select "Properties" from the pop-up menu
  3. select the Advanced tab
  4. select "AdobePS Generic CUPS-PDF Printer" from the Driver drop-down selection box
  5. click OK
  6. delete the fake local CUPS-PDF printer

I hope you realize by now that it's much simpler to install PDFCreator and be done with it, instead of all this futzing around with PPD files and all those printer installations and driver replacements.

Bottom line: sharing a CUPS-PDF printer is perfectly feasible, yet, at the same time, quite pointless.

Friday, November 14, 2008

All Lines Are Busy!

I've exported an OpenOffice.org document (with .odt extension) to Microsoft Word format (.doc). I had a simple plan: to edit the exported document in MS Word on my VirtualBox hosted Window$ XP virtual PC. Nothing fancy, really.

There was one snag that had to be fixed: fonts. The original document uses Bitstream Vera Sans TrueType fonts, which aren't installed on the Window$ PC. I could've switched to Tahoma, which looks similar enough, but, being a perfectionist (read: obsessive-compulsive), I decided to install the fonts.

Should be easy, right? In principle, that's true:
  1. download the fonts and extract them to a temporary folder
  2. double click the Fonts control panel applet
  3. from the menu, select File->Install New Font...
  4. when prompted, find the temporary folder from step 1, and select all the fonts
  5. press OK and wait for the fonts installation to complete
But at the last step I got the following error message, no matter what I tried:
The font folder is busy and cannot install the selected fonts at this time. You may retry now or cancel and retry later.
Annoying as hell. What gives?

A quick search got me to a certain blog entry, which pointed me to Microsoft's TweakUI tool, available as a separate download on the PowerToys package page (search the downloads side bar at the right):
  1. install Tweak UI
  2. launch it
  3. use the mouse to select the "Repair" branch from the left side selection tree
  4. select "Repair Font Folder" from the drop-down selection box
  5. hit "Repair Now"
Good as new.

Sunday, November 2, 2008

One Liner: Determine Length of Video Clip in Seconds or Frames

Using mplayer you can extract all sorts of interesting details about your video clip:
mplayer -identify -frames 0 video.avi

which, among other stuff, pukes out lines looking like this:

...
ID_FILENAME=video.avi
ID_DEMUXER=avi
ID_VIDEO_FORMAT=MP42
ID_VIDEO_BITRATE=1840648
ID_VIDEO_WIDTH=854
ID_VIDEO_HEIGHT=480
ID_VIDEO_FPS=24.000
ID_VIDEO_ASPECT=0.0000
ID_AUDIO_FORMAT=85
ID_AUDIO_BITRATE=245736
ID_AUDIO_RATE=0
ID_AUDIO_NCH=0
ID_LENGTH=596.46
ID_SEEKABLE=1
...
ID_VIDEO_CODEC=ffmp42
...
ID_AUDIO_BITRATE=160000
ID_AUDIO_RATE=48000
ID_AUDIO_NCH=2
...
ID_AUDIO_CODEC=mp3
...

In our case the important line is the one starting with ID_LENGTH, so the complete one-liner is:
$ mplayer -identify -frames 0 video.avi 2>&1 | grep ID_LENGTH | sed s/ID_LENGTH=//
596.46

To extract the number of frames, tell mplayer to pretend the frame rate is one frame per second, so that the reported length in seconds equals the number of frames (I stumbled upon this trick at Yahoo Answers):
$ mplayer -identify -fps 1 -frames 0 video.avi 2>&1 | grep ID_LENGTH | sed s/ID_LENGTH=//
14315.00
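As a sanity check, the two numbers agree with the reported frame rate - the length in seconds times the frames per second gives the frame count (values taken from the output above):

$ echo "596.46 * 24" | bc
14315.04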

Sunday, October 26, 2008

One Liner: Extract MP3 Sound Track from an AVI Video Clip

Easy, using ffmpeg:

ffmpeg -i video-clip.avi -acodec copy sound-track.mp3

(as usual here: replace stuff in red with your own)

If the original sound track is not encoded as an MP3 stream, then drop the -acodec copy bit. In this case you may also want to specify the bit rate (the default is 64 kbit/s):

ffmpeg -i video-clip.avi -ab 96k sound-track.mp3
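If you're not sure how the original sound track is encoded, running ffmpeg with just the input file prints the stream information (it then complains about a missing output file, which is fine for this purpose):

ffmpeg -i video-clip.avi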

Friday, October 17, 2008

Reinventing a Wheel: HAL and CD/DVD Playback

My wife called me up at work:
- "I'm on a tight schedule, I need to get some work done and the kid is making me crazy; can she watch a DVD on your computer?"
- "Yeah sure, just insert the disc into the external drive, and I'll start the movie for you..."

So we're not beyond brainwashing our offspring, but that's not the point. You're probably wondering why my wife would make such a call - can't she do that herself? isn't it just a matter of inserting the disc and letting the operating system do its thing? and why use a computer instead of a TV/DVD set in the first place?

I'll start with the second question: our only TV is in another room, and our daughter preferred staying around her mom. As for the first question: normally you'd be right, if my computer were running Window$, or Linux with a Desktop Manager like GNOME or KDE - but it's not. I'm using awesome as my Window Manager, with no Desktop Manager, so I have to reinvent a few wheels sometimes.

So I went ahead, connected to my home computer via ssh, with a clear concept of the future: I'll just launch a media player from the command line to play the DVD, and ask my wife to take over and select the right DVD menu option. It took me several long minutes, during which both wife and daughter became impatient, before I was forced to admit failure. I did promise them to sort it out later. My wife was less than happy, and was rather verbal about it.

I ended up writing a script that uses HAL utilities to detect the type of media in the optical drive in order to launch the relevant playback application. The optical disc may be a video DVD, audio CD, or a data disc containing multimedia files in its root directory. The script also disables the GNOME screensaver during playback, and re-enables it when done.

It's not automatic, i.e. one has to manually run the script in order to start playback. I did, however, add a key binding to awesome so that my wife can launch the script herself, if she so wishes, by pressing Mod4+Shift+d (Mod4 refers, by default, to the Windows-logo key on normal PC keyboards). I'm using version 2.3 of awesome, so this translates to the following lines in ~/.awesomerc:

key {
    modkey = {"Mod4", "Shift"}
    key = "d"
    command = "spawn"
    arg = "exec ~/bin/dvd.sh"
}

And here's the script ~/bin/dvd.sh itself:

#! /bin/bash

gconftool-2 --set -t boolean /apps/gnome-screensaver/idle_activation_enabled false

device="${1:-/dev/scd0}"
udi=$(hal-find-by-property --key block.device --string $device |
    while read u ; do
        [[ "$(hal-get-property --udi $u --key block.is_volume)" == "true" ]] &&
        [[ "$(hal-get-property --udi $u --key volume.is_disc)" == "true" ]] &&
        [[ "$(hal-get-property --udi $u --key volume.disc.is_blank)" == "false" ]] &&
        echo $u
    done)

if [[ "$udi" != "" ]]; then
if [[ "$(hal-get-property --udi $udi --key volume.disc.has_audio)" == "true" ]]; then
DISPLAY=:0 sound-juicer --device $device --play
elif [[ "$(hal-get-property --udi $udi --key volume.disc.is_videodvd)" == "true" ]]; then
DISPLAY=:0 xine -f dvd:///$device
elif [[ "$(hal-get-property --udi $udi --key volume.disc.has_data)" == "true" ]]; then
if [[ "$(hal-get-property --udi $udi --key volume.is_mounted)" == "false" ]]; then
pmount $device
fi
DISPLAY=:0 xine -f "$(hal-get-property --udi $udi --key volume.mount_point)"
pumount $device
fi
fi

gconftool-2 --set -t boolean /apps/gnome-screensaver/idle_activation_enabled true

Notes:
  1. The script accepts, as a command line argument, an optional device path to use instead of the default /dev/scd0 (the path to my external LG DVD re-writer).

  2. The prefix DISPLAY=:0 is used to ensure that playback starts on the default display on my home machine.

  3. xine is used for video playback, but that's a matter of personal taste.

  4. pmount is used to mount removable storage devices, but mount is also fine, as long as /etc/fstab has a correct entry for the drive in question, e.g.:

    /dev/scd0 /media/cdrom1 udf,iso9660 user,noauto 0 0

  5. It's necessary to unmount a data disc after playback, so that it can be manually ejected from the drive.


All together now: "Daisy, Daisy..."

[30 Oct 2008] UPDATE: fixed script to ignore blank media in drive.

Friday, September 26, 2008

"Time for Space Wiggle"

I've taken, by mistake, two photos of the same subject, at two slightly different angles.

As I was about to delete one of the photos I suddenly recalled a website I once stumbled upon - "Stereo Images - Time for Space Wiggle" (probably not safe for work - the site contains nudity) - which demonstrates how you can get a 3D effect by "wiggling" two images of the same subject, taken at slightly different angles...

So, without further ado, I give you Carnotaurus Sastrei:

[animated wiggle GIF of the two photos: "bad breath"]


I'm no GIMP guru, but it wasn't too difficult to create this:
  1. open the first image in GIMP
  2. open the second image as a new layer (File->Open As Layers)
  3. scale the image to a reasonable size
  4. save as a GIF image, and when asked select the following: save layers as animation, loop forever, 10 milliseconds delay between frames
Boo!
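The same animation can also be produced from the command line with ImageMagick's convert - a rough equivalent of the GIMP recipe above (file names are made up; the delay is in hundredths of a second, and -loop 0 means loop forever):

convert -delay 1 -loop 0 photo-left.jpg photo-right.jpg wiggle.gif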

Friday, September 19, 2008

Self Hosted: WordPress (multi-site)

Here's how to self-host your WordPress-based blog (oh, and please replace the stuff in red with your own stuff):
  1. install MySQL (e.g. by following the instructions in my previous Gallery2 self-hosting post)
  2. install WordPress
    aptitude install wordpress
  3. read /usr/share/doc/wordpress/README.Debian
  4. create /etc/apache2/sites-available/wordpress.example.com with the following contents:

    <virtualhost *:80>
    ServerName wordpress.example.com
    ServerAdmin webmaster@example.com
    UseCanonicalName Off
    DocumentRoot /var/www/wordpress.example.com
    Options All
    # Store uploads in /var/www/wp-uploads/wordpress.example.com
    RewriteEngine On
    RewriteRule ^/wp-uploads/(.*)$ /var/www/wp-uploads/%{HTTP_HOST}/$1
    ErrorLog /var/log/apache2/error.log
    LogLevel warn
    CustomLog /var/log/apache2/access.log vhost_combined
    </virtualhost>
  5. create a link:
    ln -s /usr/share/wordpress /var/www/wordpress.example.com
  6. enable the new website:
    a2ensite wordpress.example.com
    /etc/init.d/apache2 reload
  7. setup the database with the following magic:
    bash /usr/share/doc/wordpress/examples/setup-mysql -n wordpress wordpress.example.com

  8. visit http://wordpress.example.com (your new blog!) and follow yet more instructions...
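One detail that's easy to miss: the rewrite rule in step 4 assumes that the per-host uploads directory exists and is writable by the web server. A minimal sketch, with paths matching the configuration above:

mkdir -p /var/www/wp-uploads/wordpress.example.com
chown www-data:www-data /var/www/wp-uploads/wordpress.example.com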

The Blogosphere awaits!

Friday, September 12, 2008

Self Hosted: Gallery2 (multi-site)

Please read the Gallery2 documentation, and follow the installation instructions. I did.

But these instructions are for a single Gallery2 site. I'm running three different Gallery2 sites, with different domain names (courtesy of No-IP.com) all from the same machine, all with an almost identical Apache2 configuration file.

Installing Gallery2 on Debian is rather straightforward, albeit somewhat tedious (please replace the text in red with your own stuff):
  1. run the following to setup the MySQL server:

    aptitude install mysql-server
    mysqladmin -u root password "password"
    skip this step if you already have a database server installed - and please read about securing the initial MySQL accounts.

  2. create a database for each Gallery2 site, and grant full privileges for this database to a specific user (provide this username, later on, to the web-based Gallery2 installer):

    mysqladmin -uroot -p create gallery2photos
    mysql -uroot -p -e"GRANT ALL ON gallery2photos.* TO username@localhost IDENTIFIED BY 'password'"

  3. run (once)
    aptitude install gallery2

  4. create a configuration file for your site, e.g. /etc/apache2/sites-available/photos

    <VirtualHost *:80>
    ServerName gallery2.example.com
    ServerAdmin webmaster@example.com
    <IfModule mod_rewrite.c>
    RewriteLog /var/log/apache2/rewrite.log
    RewriteEngine On
    RewriteRule ^/$ gallery2 [R]
    </IfModule>
    DocumentRoot /var/www/photos
    <Directory />
    Options FollowSymLinks
    AllowOverride None
    </Directory>
    Alias /admin/gallery2 /usr/share/gallery2
    <Directory /admin/gallery2>
    Options FollowSymLinks
    AllowOverride Limit Options FileInfo
    </Directory>
    <Directory /gallery2>
    Options FollowSymLinks
    AllowOverride Limit Options FileInfo
    </Directory>
    ErrorLog /var/log/apache2/error.log
    LogLevel warn
    CustomLog /var/log/apache2/access.log vhost_combined
    </VirtualHost>

  5. enable mod_rewrite:
    a2enmod rewrite
    the idea is that access to http://gallery2.example.com will automatically be diverted to http://gallery2.example.com/gallery2 (this is a dedicated photo album website)

  6. create a directory for your Gallery2 website:
    mkdir -p /home/username/www/photos/gallery2

  7. create a symlink /var/www/photos to the root directory of your Gallery2 website:
    ln -s /home/username/www/photos /var/www/photos

  8. set the directory ownership:
    chown -R www-data:www-data /home/username/www/photos

  9. create a directory to host the Gallery2 data and images (e.g. /home/username/g2data/photos - you'll be prompted to do this as part of the web-based Gallery2 installation procedure)

  10. enable the new site and restart the webserver:

    a2ensite photos
    /etc/init.d/apache2 restart

  11. open the following link in a browser: http://gallery2.example.com/admin/gallery2 and follow the web-based installer instructions... (you should also read the multi-site installation instructions)

Tip: when adding photos that are already on your computer to Gallery2, use the "From Local Server" tab in the "Add Items" web page, and tick the option "Use symlink" for each photo that you upload. This keeps a single copy of each image on your computer, and separates the photos from the Gallery2 files.

Friday, September 5, 2008

Awesome!

Some time ago I said that I intend to move from GNOME to an alternative, keyboard-centric tiling WM (Window Manager). Any such change requires some getting used to. Any such WM has its own philosophy regarding both its users and what it means to manage windows. Each of these WMs has its own set of default key bindings and different methods of configuration. That's the problem with freedom: one has to choose.

In my case I find that I'm most comfortable when every application window that I use is maximized to span most of the screen area. Sometimes I split the screen between two application windows, usually horizontally. I also want (but I don't need) a simple status bar at the top of the screen, showing (at least) the name of the active window, some info (date/clock, load average) and maybe tray icons. Oh, and I want to still be able to run applications in floating-windows mode just like any other (read: normal) WM (the GIMP is probably the ultimate test here).

I attempted to get this behavior in GNOME using Devil's Pie - a cool little WM-like utility that lets you control window behavior using lisp-like scripts. For example, I made urxvt always open up maximized, without any window decorations, using the following Devil's Pie script ~/.devilspie/urxvt.ds :

(if (is (window_class) "URxvt")
    (begin
      (maximize)
      (undecorate)
      (focus)))

But don't be misled by the appearance of this .ds script: this is not a real programming language (no loops, no functions, no variables). Furthermore, Devil's Pie does not manage windows - it only serves as a startup hook that's applied to windows upon being first displayed. This just wasn't good enough for me.

This is actually my second round of WM evaluation. Last time was almost 3 years ago, at work - I was in a process of reinventing my work environment. Heck, I even tried using vim for two weeks (definitely not for me, thank you very much :q!). Eventually, I narrowed down my options to larswm, ion3, ratpoison and wmii. I ended up running an early release of ion3, heavily customized to do my bidding, together with a custom keyboard setup.

This time around I was shopping for a WM for my old laptop at home. I wasn't going to try ion3 - it's not in Debian testing, and may never be, due to its moronic non-free license. It was the most polished and stable WM of its kind, at the time. But there are better options these days. Plus, I was looking for a setup that would work almost out of the box, with little or no customization. I just couldn't stand the frustration of re-customizing my setup after each major release of the WM (I solved the problem at work by not upgrading ion3, can you believe?).

So I started using ratpoison at home, even though I knew it does not support floating windows, because, apart from that, it seemed to fit my usage patterns like a glove. I hit a dead-end though when I tried running some Window$ applications using Wine - there were some serious issues there (e.g. I could not select multiple files in the Window$ standard file-open dialog box). And, as much as I pretended not to, I really did need the WM to support floating windows. I simply had to switch over to a different WM.

I tried stumpwm, which is written in Common Lisp, by the same guy who wrote ratpoison, based on the same concepts. So cool. So flexible. And oh so very slow. It takes almost two minutes (!) for it to load on my old laptop. Not cool. Ah, and it does not support floating windows.

This meant one thing: I had to leave the comfort of static tiling WMs and go dynamic. With a static tiling WM, like ion3, ratpoison and stumpwm, you can arrange window layouts in advance, and place and move windows from one part of the layout to the other. A dynamic tiling WM arranges windows on its own, based on one or more layout algorithms.

I considered wmii: I tried it several times in the past, and was impressed by the underlying architecture and concepts (window tagging, floating layer, etc.). I was, however, repeatedly put off by its instability. I believe I sampled v2.5, v3.2 and v3.5, but I have no records to prove this. To be fair, it's been a while, so maybe things are better now. I decided to skip it anyway.

I tried dwm when it first came out. It was written by wmii's original author, after he got fed up with its complexity. It was very easy to pick up, very minimal, fast - quite cool. Well, not really - dwm can only be configured by editing its C source code and recompiling it. While it can be argued that this isn't too different than writing Lua scripts for ion3, it still feels wrong. Add to that the author's little fetish: one of his major goals is to pack the WM into as few lines of source code as he possibly can. I never bothered to follow its development. Next.

How about XMonad? it's written and configured in Haskell, which I don't grok. For a while, after it came out, it looked like The Great White Hope of tiling WMs: a lively, fun project, with a lot of features planned for the future, no ties to existing code, with some great coders hacking it, an ever growing number of contributed extensions, and a core code-base that is provably correct. It also seemed like a good opportunity for me to get into Haskell - I've been hearing about it all over the 'Net, and I wanted to see what the fuss is about.

It must be said that XMonad requires very little Haskell in order to get up and running, but in my mind using this WM meant getting knee-deep into Haskell. But between a day job, a growing family and this blog, it soon started to look daunting - I decided I should probably see what else is out there, dismiss the other WMs, go back to XMonad, and then put effort into it. I figured that by then it would mature and that its support for floating windows would improve.

Enter awesome - it started as a fork of dwm, but soon became a completely different WM. It has built-in, proper support for floating windows, built-in status bar, a simple configuration file (the upcoming version 3 has switched to Lua scripts), wmii-style window tagging system, built-in support for Xinerama (which I don't use) and a lot more. While the authors of most of the other tiling WMs attempt to achieve some form of technical purity, the stated goal of awesome is for it to simply be an awesome WM.

And it's on the right path.

During the first few days of using it (version 2.2) I would hide the status bar and work in max-mode - the effect was very similar to my ratpoison experience. I then started using the status bar, and configured some rules for window tagging and keyboard shortcuts, to support my usage patterns.

It's the only WM of the pack that supports anti-aliased fonts, via the Pango text layout and rendering library (since version 2.3). Pango also provides bi-directional text rendering - so that window titles are correctly rendered, even if Hebrew text is included.

Version 3 will include some major changes - Lua for extension scripting, status bar enhancements (tray icons!), tabbed windows and a lot of internal changes that suggest that awesome will be even more awesome than it already is.

So awesome is my WM of choice for the foreseeable future. As for Haskell - well, it seems that I need a different excuse for learning it.

Thursday, August 21, 2008

Self Hosted: Foxmarks Bookmarks Server (WebDAV)

A while ago I stumbled upon Google Browser Sync and got instantly hooked - I don't mind selling my soul to Google, it's a small price to pay for the comfort of having all my bookmarks, passwords and browsing history get automatically synchronized between all the computers I happen to use.

Privacy is for the weak.

And then something terrible happened: the service was discontinued. Did I learn my lesson? heck no. It had already become part of my routine, to bookmark some website at work, so as to check it out in detail later, at home (and vice versa).

Like, I imagine, many other disenchanted GBS users, I started looking into other synchronization solutions. The only other service that is meant to provide the same, and more, features is Mozilla Weave. But it's still in early beta, and as I write this they don't accept new users into their system. Which leaves us sync-hungry dolts with bookmarks-only synchronization services.

I've picked Foxmarks, because they sync the built-in browser bookmarks, and not some separate online list of links. Plus, the bookmarks can be stored on my own (WebDAV) server!

The Foxmarks support wiki points to an article that explains how to setup WebDAV on Apache2. I basically followed their instructions, but that HOWTO is a bit outdated - it took some futzing around before I got it right. I recommend reading the mod_dav section in the Apache2 Manual, and a more recent HOWTO at howtoforge.com.

So here's how I did it:

  1. enable modules:
    a2enmod dav_fs 
    a2enmod auth_digest
    
  2. create the directory for bookmarks storage:
    mkdir -p /var/www/foxmarks/.webdav

  3. generate password (zungbang is the username - use your own!):
    htdigest -c /var/www/foxmarks/.webdav/.digest-password foxmarks-webdav zungbang

  4. make sure all files are owned by the www-data user:
    chown -R www-data:www-data /var/www/foxmarks

  5. create /etc/apache2/sites-available/foxmarks:
    <VirtualHost *:80>
    # fake! fake! fake!
    ServerName foxmarks.example.com
    ServerAdmin webmaster@example.com
    DocumentRoot /var/www/foxmarks
    <Directory />
    Options FollowSymLinks
    AllowOverride None
    order allow,deny
    Allow from all
    </Directory>
    Alias /webdav /var/www/foxmarks/.webdav
    <Location /webdav>
    DAV On
    AuthType Digest
    AuthName "foxmarks-webdav"
    AuthDigestProvider file
    AuthUserFile /var/www/foxmarks/.webdav/.digest-password
    Require valid-user
    </Location>
    ErrorLog /var/log/apache2/error.log
    LogLevel warn
    CustomLog /var/log/apache2/access.log vhost_combined
    </VirtualHost>
    


  6. enable the new site and restart the webserver:
    a2ensite foxmarks
    /etc/init.d/apache2 restart

  7. test the setup with cadaver
    aptitude install cadaver
    cadaver http://foxmarks.example.com/webdav
    
    you should be prompted for a username and password and successfully log in - use help to list available commands.

  8. install the Foxmarks add-on, and configure it to access your own server at: http://foxmarks.example.com/webdav/foxmarks.json


It took several browser restarts before the Foxmarks browser add-on agreed to connect to my server for the first time (this happened on three different machines). Other than this initial mess, Foxmarks now works like a charm, correctly synchronizing my bookmarks. By the time I had this working though, an enterprising fellow found a way to hack Mozilla Weave into using a private server for storage. It too uses WebDAV, so most of the stuff above still applies. I guess I'll switch to Mozilla Weave eventually, but it'll take a while.

 [01 Sep 2008] UPDATE: "works like a charm" was a bit of an exaggeration... a few minutes ago I found out that the bookmarks store file foxmarks.json simply disappeared - I don't know how or why, but it was definitely gone. It wasn't a big deal though - I just restored the file from the nightly backup. I could've probably also attempted to "force overwrite of server bookmarks" by hitting the Upload button at the advanced tab in the Foxmarks settings dialog box, but I didn't.

 [13 Sep 2008] UPDATE #2: It happened again. Well, not quite - this time the file foxmarks.json was still there, yet it was truncated at about half its size, and the Foxmarks add-on refused to synchronize (I got a tiny red question mark right next to the Foxmarks icon in the status bar). I tried the Upload button and it worked. I'm not amused.

 [16 Oct 2008] UPDATE #3: as of version 2.5.0 Foxmarks provides password synchronization - I've been using it for the past three days and it seems to be working nicely. I'm not sure about using Mozilla Weave anymore, but who knows...

 [17 Apr 2009] UPDATE #4: I've recently updated Foxmarks to Xmarks version 3.0.2. The option to use my own server was automatically disabled, so I had to re-enable it. But for some reason I couldn't get it to sync - I only got an unspecified error in the log. I was finally able to get it working by forcing Xmarks to download the bookmarks and passwords from the server (press the Download button in the Manual Overwrite section of the Advanced tab in the Xmarks settings dialog box).

I also disabled all the options under the Discovery tab - I don't need it, and I guess it requires an Xmarks account anyway.

Mozilla Weave Beta is now open again to new users but it requires Firefox v3.5, and I'm using Iceweasel 3.0.2 (Debian/Squeeze), so it's not an option yet.

Monday, August 18, 2008

gnome-screensaver Doesn't Lock the Screen

I've configured the GNOME screen saver to lock the screen on my live-HDD after some period of no activity - it seems to make sense on a mobile platform, to prevent a passerby from messing about with my system. All was well until a recent upgrade (I can almost hear you: "when will this guy get the point?").

The screen saver comes up after a while, but it doesn't lock the screen. The password dialog box doesn't come up when I touch the mouse or keyboard, and I simply get my desktop.

My first guess: I turned off locking by mistake. Easy to check (just launch gnome-screensaver-preferences). Nope. The screen saver is definitely set to lock the screen.

My next guess: it's a bug. After a quick look at the gnome-screensaver bug page on the Debian BTS, I found bug #481119. While the reported problem isn't quite similar to my own, the bug submitter provided a workaround that seemed worth a try.

I opened up gconf-editor, found the key /apps/gnome-screensaver/lock_dialog_theme and modified its value from default to an empty string. A longshot. I know. But it works. I would never have guessed.

I'll go wait for the screen to lock now. Bye.

[Aug. 19 2008] UPDATE: I posted too early. It sometimes works and sometimes doesn't, and I can't quite put my finger on it. In the meanwhile I've reverted the value of /apps/gnome-screensaver/lock_dialog_theme to its previous default value default.

Thursday, August 14, 2008

Self Hosted: Getting Started

This is the first post in a series of posts I'm planning about self-hosting: running your own web server.

Self hosting requires a computer and an Internet connection that's on most of the time, some software (webserver and probably a database and other stuff), an account with one of the dynamic address service providers (such as No-IP and DynDNS), and last but not least: lots of free time.

In each of the following posts in this series, I'll present a specific type of website: a plain static website, Foxmarks bookmarks server (WebDAV), Gallery2 (multi-site), WordPress, Gitweb, and a CUPS proxy site.

I'm no webmaster - my experience in web hosting is limited to what I've done in the past year and a half, which isn't much. Nevertheless, it's work that begs to be documented. Please don't be shy: corrections and suggestions are welcome!

I'll start with a plain static website, so as to demonstrate the steps needed to install it. I'm using Apache2 as my webserver - the default on a Debian system (if it's not installed, just type aptitude install apache2 as root).
  1. create a directory (or a symlink to a directory): /var/www/plain
  2. create a file /var/www/plain/index.html:
    <html><body>Hello World!</body></html>
  3. create a new configuration file /etc/apache2/sites-available/plain with the following contents:
    <VirtualHost *:80>
    ServerName example.no-ip.com
    ServerAdmin webmaster@example.com
    DocumentRoot /var/www/plain
    <Directory />
    Options FollowSymLinks
    AllowOverride None
    </Directory>
    ErrorLog /var/log/apache2/error.log
    LogLevel warn
    CustomLog /var/log/apache2/access.log vhost_combined
    </VirtualHost>
    (I've marked in red a fictitious No-IP domain and webmaster e-mail address - please replace with your own stuff).

  4. enable the new website:
    a2ensite plain
  5. reload the webserver:
    /etc/init.d/apache2 reload
A few notes are in order:
  1. the new site should be accessible at http://example.no-ip.com (fake! fake! fake!)
  2. the new site can be disabled with the following sequence:
    a2dissite plain
    /etc/init.d/apache2 reload
  3. Virtual Hosts: other similar sites with different server names can be similarly installed by modifying the ServerName property. The webserver will serve web pages according to the server name used in the URL being accessed - all from the same IP address.

  4. if you're getting the following message every time the server is reloaded or restarted:
    Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
    just add the following line to /etc/apache2/httpd.conf
    ServerName 127.0.0.1
  5. SECURITY: better safe than sorry...
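Before the dynamic DNS name resolves (or from the server itself), you can still check that the virtual host answers - a quick sanity check using curl's Host header, assuming curl is installed:

curl -s -H "Host: example.no-ip.com" http://localhost/
# expected: <html><body>Hello World!</body></html>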

It's your serve(r) now.

Thursday, August 7, 2008

Thinking Outside the (Virtual) Box (3)

It was quite inevitable.

Following my previous escapades with WinXP and VirtualBox, I simply had to get my live-HDD running under VirtualBox...

The first step was to create a disk image that actually points to the raw disk (see section 9.9 in the VirtualBox User Manual):
VBoxManage internalcommands createrawvmdk -filename live-hdd.vmdk -rawdisk /dev/sda -register
The next step was to create a virtual PC in VirtualBox that's based on this image. And that was basically it. There's really nothing more to it.

Well, I've also installed the VirtualBox Guest Additions on the live HDD, and that wasn't as neat.

The X server on my live HDD is configured to auto-detect the video adapter, and it works just fine, allowing easy resizing of the virtual PC display. The mouse, however, is not auto-detected. I've added the following bit of shell-script at the end of do_start in /etc/init.d/bootmisc.sh, to fix this (note the use of lspci to figure out if this is a real or virtual session):
# Setup display
rm -f /etc/X11/xorg.conf.200*
# are we running inside VirtualBox ?
if [ -z "$(lspci -d 80ee:beef)" ]; then
    # real
    dpkg-reconfigure -fnoninteractive xserver-xorg
else
    # virtual
    /usr/share/virtualbox/x11config.pl
fi
The other issue is accessing the VirtualBox shared folder. It's done like this (as root):
modprobe vboxvfs
mount -t vboxsf -o uid=zungbang,gid=zungbang,rw /vboxsvr/tmp /mnt/vbox
(replace the colored parts with your own stuff).

It seems that mount.vboxsf doesn't grok the noauto flag, so there's no way to add entries for shared folders in /etc/fstab, if, like me, you need these to not be mounted at startup.

I'm virtually happy now.

Thursday, July 31, 2008

Thinking Outside the (Virtual) Box (2)

Previously I described how I installed WinXP Pro on a VirtualBox virtual PC, as a last ditch attempt to get MS Office running on my box. It turned out that VirtualBox is amazingly fast, even on my old laptop, so I decided to go ahead and install MS Office. But first there were several lesser chores to complete: Window$ Update and installing a printer.

Windows Update was easy enough - but tedious: my installation media is from 2002 (SP1). Several hundred megabytes and a few virtual reboots later, and the upgrade process was complete. Installing the printer, however, was less of a picnic.

Our HP Officejet 5510 all-in-one is connected via USB to my laptop, driven by HPLIP and managed by CUPS. My plan was to setup an IPP printer on the virtual Windows machine, similar to the setup on my wife's real laptop.

HP ships an installation CD with its printers with a shit-load of useless software, which takes forever to install. It was only after installing and using HPLIP that I realized how shitty it really is on Windows. What's really strange about this is that HP provides the software for both Windows and Linux. Go figure.

Anyhow, my point is that the CD isn't required: HP provides a "corporate" version of the printing drivers, a lean 34MB (!?) package, that you can download from their website, in case you need to install the printer in a "corporate environment" (read: when the printer is not directly connected to your computer).

I did not anticipate any problem here:
  1. download the corporate driver package (it's a self extracting archive),
  2. launch it, (it should happily extract itself to C:\temp)
  3. find the freshly extracted setup.exe and launch it,
  4. connect a printer directly to the USB port, and let plug-n-play do its stuff.

This very same procedure worked nicely on my wife's laptop. But not on the virtual PC. That setup.exe just died several seconds after starting - no error message, no BSOD, it just died. WTF?

What do you do when you have no decent logging, no source code and no strace to help you start figuring out what's wrong? easy: you guess. Oh, and you're very likely to guess wrong and end up doing some damage before hitting the right solution, if at all.

Guess #1: a corrupted download. It took a while to verify - my link to the HP website was damn slow at the time - but both md5sum and sha1sum insisted that I got all the bits and in the right order.

Guess #2: just before it died, the installer seemed to setup a recovery point. Maybe that's the problem? I disabled the system restore feature (and, in the process, lost any previously created recovery point) and tried again. Same result.

Guess #3: it suddenly dawned on me - the installer dies because it can't find any USB controller. The Open-Source Edition of VirtualBox does not support USB. This seemed like a plausible explanation, yet I had no supporting evidence. I also had my doubts: this was the corporate driver package, which should support a network-printer configuration.

And if this theory was right, what do I do now? it looked hopeless. But I had an idea: maybe there's a software only virtual USB controller out there, that can be used to fool the installer into thinking that there's USB on my virtual PC?

As improbable as this may seem, such a beast does exist - it's called the Device Simulation Framework - and is written by none other than Microsoft itself. It's also freely distributed as part of the WDK - the Windows Driver Kit. Getting the WDK ISO image is a bit of a chore, but it can be done. Eventually.

Once downloaded, you only need to mount the ISO image into a virtual optical drive, and follow the DSF installation instructions:
  1. double click dsfx86runtime.msi in the \dsf directory,
  2. create a virtual USB controller:
    • open a Command Prompt window,
    • navigate to the \Program Files\dsf\softehci folder,
    • run softehcicfg /install


This time around the installer did not die. I guessed right. Whew.

My next problem was to convince the plug-n-play machinery that the printer drivers should actually be installed, so that I could specify it as the IPP printer's driver. With my wife's PC it was easy: I simply hooked up the printer directly to one of its USB ports, and a new printer icon was automagicly installed. But, as we already know, VirtualBox OSE does not have USB...

I tried double clicking some of the .inf files that were installed, and I tried running some of the setup executables that were also installed. I shouldn't have done that - but it was late at night, and I wasn't thinking straight. Luckily nothing happened. Which was also unfortunate.

It took another guess to get this done: I shared the USB printer on my wife's laptop (which isn't really connected to anything - I kept it installed just in case), and added a printer on the virtual PC that points to this shared printer. The drivers were then automatically installed, and I could then create yet another printer to point to the real printer via its CUPS URL.

For some reason I can't print a test page from the printer's property pages, but otherwise printing works just fine.

The rest of the story is rather boring: I installed MS Office, and then ran Microsoft Update several times, until no updates were left to install.

Finally, I installed the VirtualBox Guest Additions. The Guest Additions provide shared folders, auto screen resize, auto input focus for keyboard and mouse, and the cool-yet-useless seamless mode.

Next time: running a live-HDD under VirtualBox.

Thursday, July 24, 2008

Tracing a Daemon

I've recently needed to run a daemon under strace, in an attempt to figure out what was making it fail.

Tracing a running process is simple enough - just attach strace to it:
strace -p <pid>
(replace <pid> with the process id). But in this case I wanted to start the daemon under strace, which is a bit tricky.

A daemon is typically (always?) started from an init script located in /etc/init.d/ via a program called start-stop-daemon, e.g.
start-stop-daemon --start --exec $DAEMON -- $ARGS

where $DAEMON is the executable being launched and $ARGS are its command line arguments (optional).

The idea is to replace that line in the init script with something else that launches an strace-d daemon, that can later be stopped with start-stop-daemon. I've tried various combinations of start-stop-daemon, strace and $DAEMON, before I hit the following incantation:
start-stop-daemon --oknodo --start --exec $DAEMON --background --startas /usr/bin/strace -- -f -o /tmp/$NAME.strace $DAEMON $ARGS

This will trace the executable and any of its forked child processes (-f) to a file named /tmp/$NAME.strace.

Note that a daemon may be started in several places in the init script ("start" and "restart").
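For a concrete (entirely hypothetical) example, the "start" stanza for a daemon called mydaemond might end up looking like this - NAME, DAEMON and ARGS are illustrative placeholders, not values from any real init script:

# hypothetical init-script fragment
NAME=mydaemond
DAEMON=/usr/sbin/mydaemond
ARGS="--config /etc/mydaemond.conf"

# original line:
#   start-stop-daemon --start --exec $DAEMON -- $ARGS
# traced replacement:
start-stop-daemon --oknodo --start --exec $DAEMON --background \
    --startas /usr/bin/strace -- -f -o /tmp/$NAME.strace $DAEMON $ARGS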

Monday, July 14, 2008

Are We Running on AC Power ???

I once mentioned that I use the script on_ac_power from the powermgmt-base package to determine whether my laptop is running on AC power.

Well, following a kernel upgrade to version 2.6.25, that script stopped working (see bug #473629). Here's a (debian-specific-works-on-my-machine) replacement script, based on the output of acpi:


#! /bin/bash
# this is a drop-in replacement for /usr/bin/on_ac_power
check_ac_power()
{
    local ret=255
    if [ -x /usr/bin/acpi ]; then
        status=$(/usr/bin/acpi -aB | cut -d' ' -f 6)
        case "$status" in
            on-line)
                ret=0
                ;;
            off-line)
                ret=1
                ;;
            *)
                ;;
        esac
    fi
    return $ret
}

check_ac_power

Note that I'm assuming that the system has a single AC power supply.
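The replacement keeps the on_ac_power exit-code convention (0 means on AC power, 1 means not on AC power, 255 means the state can't be determined), so existing callers keep working, e.g.:

if on_ac_power; then
    echo "on AC power - OK to run the backup"
else
    echo "on battery (or unknown) - skipping"
fi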

Saturday, July 12, 2008

Thinking Outside the (Virtual) Box (1)

I managed to drop a heavy book on my wife's laptop, and now its Z key is busted. I hooked a spare USB keyboard to the laptop, so that my wife can continue using it. But now it's more "desktop" than "laptop".

With its DVD drive already busted, it seemed prudent that we should send the laptop to be fixed, as long as it's still covered by warranty. The only problem with this plan is that my wife needs the laptop for work, and the turn-around can be as long as 7 work days (excluding UPS shipping to and from the lab).

We considered our options:
  1. copy the documents she's currently working on to a USB flash drive, and continue working on a computer at her malware infested workplace,
  2. move her stuff to my Debian box, and work there.

The only problem with the latter option is MS Office - how do I get that to run on my box?

OpenOffice.org. Please don't talk to me about OpenOffice.org: this piece of crapware simply doesn't cut it even when it comes to mildly complex documents that were generated with the rival moneyware crapware. The compatibility issues become painfully apparent with bi-directional text (specifically Hebrew and English). And all of my wife's colleagues use MS Office, so there's no real choice here.

Wine. I tried installing MS Word 2003 under Wine. The installer wizard is localized, so all the messages appear in Hebrew, but aligned to the left instead of to the right. I ignored that and just pressed the Next button until the installation was completed. I then launched MS Word with Wine. I got an error dialog box saying that there was an installation problem. I promptly dismissed it, and MS Word came up, and seemed to be functioning. Except for one little issue: Hebrew text is reversed! How useless.

I spent some time searching the 'Net for relevant tips. It turns out that Wine does not support BiDi text rendering. It used to have BiDi support, but it was dropped due to various technical reasons. There are several open bugs about BiDi support in the Wine bug tracking system, but there's no real effort to fix them. The BiDi support maintainer does not have time to do the necessary work. Actually, it seems that he's looking for someone to sponsor (read: pay) him. And nobody seems to care. Bottom line: it's a No-Go.

Virtual PC. I didn't consider setting up a virtual PC, because I assumed it would be nearly impossible to interact with an emulated Window$ machine running on top of my already slow Debian box. But I had nothing to lose (except my time...), so I tried installing VirtualBox OSE.

It wasn't easy: together with VirtualBox, aptitude decided to auto-install a 486 Linux kernel image, presumably because VirtualBox requires a kernel module to be installed on the host machine, and the default flavor is probably 486. I dunno. I removed that module and the 486 kernel image and selected to install the 686 specific module.

Time to launch it:

$ virtualbox
WARNING: You are not a member of the vboxusers group. Please add yourself
to this group before starting VirtualBox.

You will not be able to start VMs until this problem is fixed.

Why isn't this done automatically? nevermind:

$ adduser zungbang vboxusers

Ahh, I have to login again... OK, done...

$ virtualbox
WARNING: The character device /dev/vboxdrv does not exist.
Please install the virtualbox-ose-modules package for your kernel and
load the module named vboxdrv into your system.

You will not be able to start VMs until this problem is fixed.

WTF? didn't I spend a few minutes doing just that?

$ su -
# modprobe vboxdrv
^D
$ virtualbox

Nirvana. Time to setup a virtual PC. It's rather easy actually. VirtualBox is nice in that way.

But when I started the virtual PC VirtualBox told me that the kernel module version does not match its own. WTF? Didn't I ... ahh, I get the drift - time to visit the Debian BTS. Not surprisingly, it's a known issue, and the solution is to install the source code for the host module, compile and install it:

rmmod vboxdrv
aptitude purge virtualbox-ose-modules-2.6.24-1-686
aptitude install virtualbox-ose-source
module-assistant prepare virtualbox-ose
module-assistant auto-install virtualbox-ose
modprobe vboxdrv
echo vboxdrv >> /etc/modules

Finally, I used forbid-version (F) in aptitude to prevent the module package from being "upgraded" from version 1.6.2 (source code module version) to 1.5.6 (the version of the binary package).

This time around the virtual PC came up just fine, booting from my Windows XP Professional installation CD. Installation took about an hour to complete, and was uneventful. The virtual PC crashed during the final reboot, but it booted fine after a manual restart.

At this point I was pleasantly surprised: VirtualBox is damn fast. I have no idea how it's done. Maybe it's just my sleepless brain that's playing tricks on me. But even before I installed the Guest Additions, WinXP seemed to run at least as fast on this virtual PC, as it ran on the host PC when it was natively installed.

Amazing.

To be continued.

[15 Dec. 2008] UPDATE: As you may have noticed, I added the vboxdrv kernel module to /etc/modules, in order to load it when my computer starts up. A recent post on the debian-user mailing list pointed me to a better way of doing this: open (as root) /etc/default/virtualbox-ose for editing, and edit it so that it contains the following line:
LOAD_VBOXDRV_MODULE=1

Friday, July 11, 2008

ZoneAlarm Hotfix

A recent Windows Update has killed Internet access from my wife's laptop. The connection was restored as soon as I disabled the ZoneAlarm Firewall.

A hotfix for this issue is already available.

Thursday, July 10, 2008

Mount may Fail for UUID Entries in /etc/fstab

I was just hit by a nasty bug (#487758, #487783) in blkid that may cause mount to fail for UUID-style entries in /etc/fstab, e.g.

UUID=de018d5f-4dbc-4ed6-9724-4d5c793658aa /boot ext3 defaults 0 2

Yes! this is the boot partition on my live-HDD, which must be specified in UUID style, because the device path (/dev/sd*) is dynamically determined, depending on the current system configuration and boot sequence.

The workaround, until the fixed version of e2fsprogs trickles down to Testing, is to specify UUID-style entries like this:

/dev/disk/by-uuid/de018d5f-4dbc-4ed6-9724-4d5c793658aa /boot ext3 defaults 0 2
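Either way, you can list the mapping between devices and UUIDs with blkid, or by inspecting the very symlinks that the workaround relies on:

blkid
ls -l /dev/disk/by-uuid/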

Thursday, July 3, 2008

One Liner: Disable/Enable GNOME Screensaver

Here's how to disable the GNOME screensaver from the console (or script):

gconftool-2 --set -t boolean /apps/gnome-screensaver/idle_activation_enabled false

Replace false with true in order to enable it again.
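This comes in handy for wrapping long-running commands so the screensaver stays out of the way - a minimal sketch (save it as, say, no-screensaver.sh and pass it the command to run):

#! /bin/bash
# disable the GNOME screensaver, run the given command, then re-enable it
gconftool-2 --set -t boolean /apps/gnome-screensaver/idle_activation_enabled false
"$@"
gconftool-2 --set -t boolean /apps/gnome-screensaver/idle_activation_enabled true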

Saturday, June 28, 2008

Selecting the Emacs Spell Checker Default Dictionary

While writing the upcoming blog post I noticed that Emacs tells me it's using the British dictionary while spell checking my text (via M-x ispell-buffer, or in flyspell-mode). I have no idea how it got to be like that, but I'm more comfortable with American spelling.

In any case, after verifying that I had the iamerican package installed (the American dictionary for ispell) I ran
dpkg-reconfigure dictionaries-common
selected the American English dictionary, and then restarted Emacs. Dandy.
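dpkg-reconfigure changes the system-wide default; if you'd rather set it per user, a hedged alternative (assuming ~/.emacs is your init file and the standard ispell-dictionary variable) is to add one line to your Emacs configuration:

echo '(setq ispell-dictionary "american")' >> ~/.emacs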

Tuesday, June 17, 2008

UnDBX: Extract E-Mail Messages from Outlook Express DBX files

If you can't beat them, join them.

Here's my tiny, hopefully half decent, contribution to the FOSS universe: UnDBX, a command-line utility that I've developed to extract e-mail messages from Outlook Express DBX files.

There are many such utilities around, so why write another one? because I had to.

As I described on this blog some time ago, I used to backup my wife's mailboxes with a combination of DbxConv and rsync, launched from a VB script run by the Bacula file daemon (phew!). This allowed me to backup a few megabytes of data a day (i.e. just the new messages), instead of several gigabytes (i.e. a bunch of very large monolithic DBX files). The objective was to save precious disk space on my backup device (an external USB hard disk). The price was a complicated backup scheme, wasted disk space on my wife's PC, and long backup jobs (more than 3 hours every night!).

This backup scheme failed mysteriously several times. Debugging it is a real pain, simply because it takes so much time to complete a backup job. I finally decided, almost three months ago, to stop using it and directly back up the gigantic DBX files, until I can come up with a better solution.

My original intent was to add an incremental extraction option to DbxConv, so that it would only extract to disk e-mail messages that haven't been extracted yet. That would make the extraction process much shorter, and also save disk space because a scratch folder is not needed anymore. As I browsed through the DbxConv source code I realized that I can't modify it, because it uses MFC, and MFC is not available in MinGW, which is the toolchain I have available in Debian.

The solution? UnDBX - the DBX extraction tool.

I ported the DbxConv DBX parsing code from C++ with MFC to plain C, and wrote a main function that extracts messages from all the DBX files in a specified folder, to a sub-folder of a given output folder. The first round works very much like DbxConv - all messages are extracted to disk as EML files. Subsequent runs only extract new messages to disk, and also delete EML files on the disk that do not correspond to messages in the DBX files (i.e. deleted messages).

Unlike DbxConv, UnDBX cannot convert DBX files to MBOX files - its sole purpose is to facilitate fast incremental backup of DBX files.

Backup jobs are now down to 8 minutes! That's with 14 DBX files, over 35000 messages, and 3.5GB of data - a nightmare to back up in full every night. I hope some of you will find it useful too. Enjoy.

Saturday, June 14, 2008

Pipe Dreams (or: VBScript, Spawned Processes and StdOut/StdErr Capture)

I mentioned before that I'm writing a small console utility for Window$ that reads and writes a lot of files. It works nicely on my Debian machine, both when compiled natively and when cross compiled for Window$ and run with Wine. It even works when run in a console window on my wife's PC.

So far so good, but I intend to spawn the program from within a VB script, run by the Windows Script Host. So I wrote a little script that (I thought) does exactly that. Here's a script that runs a command and captures its output:

' do.vbs - run a command and echo its output
' usage:
' cscript do.vbs "command arguments ..."
Set WshShell = CreateObject("WScript.Shell")
If Wscript.Arguments.Count = 1 Then
runCommand Wscript.Arguments.Item(0)
Else
Wscript.Echo "Please supply command to run, enclosed in double quotes."
End If
Set WshShell = Nothing

Sub runCommand(strCommand)
Set objScriptExec = WshShell.Exec(strCommand)
strStdOut = objScriptExec.StdOut.ReadAll
WScript.Echo strStdOut
Set objScriptExec = Nothing
End Sub

This works nicely with commands like "dir C:" or "ipconfig /all", or any other program that only outputs text to the standard output stream (StdOut). Trouble starts when the program in question also outputs text to the standard error stream (StdErr) - a common practice among console utilities, mine included.

Such programs simply hang.

How lame.

Yes, even if you try to capture StdErr with StdErr.ReadAll.

Well, it seems that only one stream can be reliably captured like this. It's essentially a deadlock: ReadAll blocks on one stream, while the spawned program blocks as soon as the other stream's pipe buffer fills up, because nothing is reading it. You can get it to work for some programs that write little enough to the other stream (as in this Micro$oft knowledge base article), but in general it's hopeless.

Here's the best workaround I could come up with for this (tested on WinXP Home edition, YMMV):

Sub runCommand(strCommand)
Set objScriptExec = WshShell.Exec("cmd /c " & strCommand & " 2>NUL")
strStdOut = objScriptExec.StdOut.ReadAll
WScript.Echo strStdOut
Set objScriptExec = Nothing
End Sub

This completely discards the contents of StdErr. Alternatively, you may want to replace NUL with a path to a file, so that StdErr will be redirected to that file.

So very lame.

[29 Oct 2008] UPDATE: a kind anonymous soul posted a comment, providing a better workaround:

Sub runCommand(strCommand)
Set objScriptExec = WshShell.Exec("cmd /c " & strCommand & " 2>&1")
strStdOut = objScriptExec.StdOut.ReadAll
WScript.Echo strStdOut
Set objScriptExec = Nothing
End Sub

which not only prevents the script from hanging, but also allows it to collect messages from both StdOut and StdErr. Thanks!

Wednesday, June 11, 2008

Iceweasel, Plugins and Add-ons! Oh My!

I clicked on a link to a PDF file and nothing happened. I'm used to it - the combination of my slow machine, Acrobat Reader and the World-Wide-Wait is enough to grind my broadband Internet connection to a halt. Except that this time, nothing happened at all.

Well, actually how could I be sure? the halting problem is one of the more practically significant theorems that I'm aware of. Anyway, I clicked several times with no response; I middle-clicked to open the link in a different tab, and still nothing happened. It worked a few days ago. What gives?

Maybe it's a problem with the Acrobat Reader Iceweasel plugin? I opened about:plugins but the plugin seemed to be installed correctly. I reinstalled it anyway:
aptitude reinstall mozilla-acroread
No go. No surprise.

What next? - I usually launch Iceweasel with a shortcut key, so I tried launching it from a terminal window, in the hope that some diagnostic message would show up there. Nah. Get real. Why should I be so lucky?

I clicked again on the link and suddenly noticed that the downloads statusbar appeared - I was wrong, something does happen when I click on a link to a PDF file: it gets downloaded automatically to some directory. This was weird on two counts: first, it was downloaded instead of being opened by the Acrobat Reader plugin; second, it was downloaded automatically to some directory even though I had set an option in the download preferences to make Iceweasel always ask me for a target directory.

Maybe it's a problem with the downloads statusbar add-on? I disabled it:
  1. select menu Tools -> Add-ons
  2. select the Extensions tab
  3. find the add-on to disable and press the Disable button
  4. quit and run Iceweasel again
it still did not work.

So maybe it's a problem with file types? let's check:
  1. select menu item Edit -> Preferences
  2. select the Content tab
  3. click the button "Manage..." in the File Types section
  4. verify that an action is registered for the PDF file type
It looked OK.

I tried opening the same link in a different tab using the context menu (right-click) and, surprisingly enough, it worked. I tried opening the link from the history panel (Ctrl-H), and again, it worked!

So it wasn't a file type problem after all. But what was it?

I decided to check where the PDF was downloaded to, and was surprised to find it in ~/iMacros/Downloads. Aha!

I installed the iMacros add-on several days ago because I use keyboard macros in emacs a lot and browser automation via macros sounded like a good idea at the time. I tried it once, was impressed by its potential, but realized that I didn't really need browser automation. I'm fickle minded. Sue me. I decided to leave it installed just in case, and then forgot all about it. It just so happens that this version seems to be buggy.

I disabled iMacros, restarted Iceweasel, and mouse clicks on PDF links started working again.

Joy.

Sunday, June 8, 2008

X.Org 7.3: The Good, The Bad and The Ugly (3)

It was Good, it was Bad, and finally it went horribly Ugly. The so-called "User Experience", that is.

The story so far: after upgrading X.Org to version 7.3, my laptop would completely lock up at startup, upon switching from console display to graphical display. After some futzing around I isolated the problem to the ATI display driver.

What now? my options seemed clear:
  1. downgrade the driver (and, due to dependencies, all of X.Org) to the previous, working, version, file a bug report, and then wait for a fix...
  2. apply one of the workarounds that I found, file a bug report, and then wait for a fix...
Being me I started exploring another option: try to debug and fix it myself, file a bug report containing a patch that fixes the problem, and then wait for it to be included upstream...

I went over to the ATI driver page on the Debian PTS, and found out that the package source code repository is managed with Git. This was great news.

In brief, Git provides a tool called git bisect that (in theory) allows anyone (including non-programmers - again, in theory) to find the cause of a software bug by isolating a single bad commit (i.e. a single batch of source code modifications) that is causing it. But there's no guarantee that the problem is caused by a single commit. I decided to play the optimist (for a change) and dived in - head first.

First things first: install Git, like this
aptitude install git-core gitk
If you're running a firewall, you'd better open port 9418 for outgoing TCP connections. I use shorewall:
  1. add the following line to /etc/shorewall/rules:
    ACCEPT      $FW      net        tcp     9418
  2. restart the firewall
    invoke-rc.d shorewall restart
Next, clone the source code repository:
git clone git://git.debian.org/git/pkg-xorg/driver/xserver-xorg-video-ati
Now figure out how to build, install and test it, which, in this case, is as simple as:
cd xserver-xorg-video-ati; dpkg-buildpackage -rfakeroot -b -tc -uc
dpkg -i ../xserver-xorg-video-ati_6.8.0-1_i386.deb
... and then reboot.

This is where the fun starts. You start out by telling Git that a bisection process has started and marking the current version as bad:
cd xserver-xorg-video-ati
git bisect start
git bisect bad
We now need to mark the previous version as good:
git checkout -f xserver-xorg-video-ati-1_6.6.3-4; git clean -d -f
git bisect good
Git responds by selecting a commit halfway between the bad and good commits:
Bisecting: 426 revisions left to test after this
[2f87bff293a343b40c1be096933a5ae126632468] RADEON: Fix subtle change in crtc reg init
At this point we need to build this halfway snapshot, test it and tell Git if it works or not with git bisect good or git bisect bad, respectively.

So much for theory. I couldn't build the halfway snapshot that I got! the problem was rather odd - there was no debian sub-directory. I figured out what happened by using gitk to inspect the commit history in the repository.

It turns out that the Debian package Git repository contains both downstream-specific files (i.e. the debian directory and its contents) and the upstream source code. Occasionally, when a new version of the driver's package is being prepared, upstream commits are pulled into the downstream repository and merged. The debian directory is missing from the upstream repository (and this is as it should be), so whenever Git bisects the downstream repository it is most likely to land on a commit whose tree lacks this directory.

My solution to this was to have two clones of the package Git repository - one of them was used only for bisection, and the other for actual package building and testing. After each bisection step I pulled from the first repository into the second repository, which was reset beforehand to the previous working version. This way I got a tree that included both the debian directory and the commits up to the current bisection point (a rough sketch of one iteration follows below).
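
Something along these lines - the .build directory name is mine, and the tag is the last known-good packaging version mentioned above:

# one-time setup: a second clone, used only for building and testing
git clone xserver-xorg-video-ati xserver-xorg-video-ati.build

# each bisection iteration:
cd xserver-xorg-video-ati.build
# reset the build tree back to the last known-good packaging version (so debian/ is present)
git reset --hard xserver-xorg-video-ati-1_6.6.3-4
git clean -d -f
# pull in everything up to the current bisection point from the bisect clone
git pull ../xserver-xorg-video-ati HEAD
# build, install, reboot, test, then report back with 'git bisect good' or 'git bisect bad'
dpkg-buildpackage -rfakeroot -b -tc -uc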

It took around 13 iterations (read: around 13 reboots) before I hit the jackpot (did I mention that this is the Ugly part of my story?).

Eventually Git informed me that
80eee856938756e1222526b6c39cee8b5252b409 is first bad commit
RADEON: fix console restore on netbsd
This looked very relevant, but after inspecting the source code I was stumped: it was obvious that some hardware registers/modes were being saved/restored, but to what end? and what did this "fix" actually fix? and more importantly: what did this NetBSD related fix break on my box?

The only fix I could come up with was to revert the effect of this modification - but only under Linux. And what do you know? it solved my problem! I incorporated my fix into the current version, and it started to work fine (in case you're keeping count: two more reboots).

I reported the bug on the Debian BTS, complete with a patch (see bug #480312). My patch was eventually committed into the upstream Git repository a few days later.

A happy end?

I later spent some time browsing through more of the code, and my fix seemed to be at home: the driver's code contains quite a few code fragments that are either enabled or disabled, depending on both hardware type and target platform. It's quite obvious that the upstream author(s) of the driver need all the help they can get - the task they took upon themselves isn't easy.

I have a strong suspicion that it will break again - I just hope that I'll upgrade my hardware by then...

Sunday, May 25, 2008

X.Org 7.3: The Good, The Bad and The Ugly (2)

I first upgraded X.Org on my live HDD and it was Good. The proverbial poop hit the fan when I happily tried doing the same on my laptop at home. I backed up /etc/X11/xorg.conf and re-configured X in order to force hardware auto detection:
dpkg-reconfigure xserver-xorg
I then logged out, in order to restart the GNOME display manager (GDM) and ... my computer hard-locked. No display, Caps Lock LED keeps blinking, no disk activity, no ping. Dead as a Dodo.

I turned the computer off, turned it on again and waited for GDM to come up. Same result - my laptop ended up quietly blinking its Caps Lock LED.

By now you've no doubt realized that this is the Bad part of my story. After the initial shock, I realized that my plan to go to sleep early that night was going to stay just that - a plan.

I first needed to get the laptop to boot at all. I cycled the laptop power and waited for the GRUB menu to come up; I then used the arrow keys to select the "Single User" kernel configuration, and hit <Enter>. The boot sequence ended with a password prompt for the root user; I typed it in and then realized that I had no idea what to do next.

I looked at ~/.xsession-errors, but it seemed to belong to an earlier (working) X session. I browsed /var/log/ for a relevant log file and found /var/log/Xorg.0.log - I hoped to see an error or warning message close to the end of the log file that would point me to the cause of the problem. No luck: the log file ended abruptly at some point, with no obvious problem indication.

I then did something irrational:
invoke-rc.d gdm start
and to my surprise GDM came up, graphical display and all. WTF?!

I didn't know enough about the startup process to figure out the difference between "Single User" mode and the normal startup sequence. The only obvious difference was visual: a long while ago I added vga=791 to the default kernel command line in /boot/grub/menu.lst - this made the virtual terminals come up in 1024x768 resolution. I did not touch the single user command line, so it came up in the default low resolution (640x480 ?).

So I removed that extra command line parameter, ran update-grub, rebooted, and it "fixed" the problem: apart from the low resolution in the virtual terminal, X came up OK. A happy end?
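
In case you're wondering what that edit amounts to: with GRUB legacy the parameter lives on the "automagic" kernel options line in /boot/grub/menu.lst, so removing (or adding) it is just a matter of editing that line and regenerating the menu - roughly like this, where the root device is only an example:

# kopt=root=/dev/hda1 ro vga=791      (before)
# kopt=root=/dev/hda1 ro              (after)

followed by:

update-grub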

I guess that most users, at this point, would just settle for a low resolution console, and would go on with their lives, chalking this one up as just another Linux hardware incompatibility issue. I guess I would've done the same, if I hadn't known for certain that this was a regression - it worked before, so there's no reason for it to be broken now! I wanted my high resolution console.

After browsing the bugs filed against the ATI display driver at the Debian Bug Tracking System I realized that I was pretty much on my own - my hardware is simply too old, and my setup (Debian GNU/Linux "testing" on a Compaq Presario 900 laptop, with an on-board ATI Radeon Mobility IGP320M U1 video adapter) is probably rather unique.

I was mentally ready to compromise. It seemed very likely that the problem was driver-related, so I tried using the VESA display driver instead of the ATI display driver:
  1. backup /etc/X11/xorg.conf
  2. open the file for editing
  3. find the line that starts with Driver in the section named Device
  4. modify the string on the Driver line from whatever it is to "vesa" (see the sketch right after this list)
  5. save the file
  6. restart X (e.g. by logging out and then hitting <Ctrl>-<Alt>-<Backspace>)
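
The relevant fragment of xorg.conf ends up looking roughly like this (the Identifier string is whatever your file already contains - the one below is just a placeholder):

Section "Device"
        Identifier  "Configured Video Device"
        Driver      "vesa"
EndSection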
I don't use any spiffy 3D stuff, so I figured I could live with a generic SVGA driver instead of the hardware specific driver. I reinstated the vga=791 kernel command line option, rebooted my box and it all seemed to work OK.

That is, until I tried playing a video file with mplayer - I hit 'f' to go fullscreen, and to my dismay the image was not stretched to fill the screen - instead it was centered, still at the same size, surrounded by a black frame that spanned the rest of the screen area. Apparently, hardware acceleration is used not just for spiffy 3D, but also for image scaling.
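
A quick way to confirm this is xvinfo (from the x11-utils package), which lists the X-Video (Xv) adaptors exposed by the current driver - with the vesa driver it typically reports that there are none, which is why mplayer can't scale the picture in hardware:

xvinfo | grep -i adaptor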

Video playback is more important to me than high resolution display in virtual terminals. But I just couldn't let it go. And this is where my story gets Ugly.

To be continued...