Friday, December 25, 2009

Backup Revisited: Disk Full

The nightly Bacula backup failed a few nights ago. The reason was simple - the external backup disk was full.

A full backup weighs close to 30GB (and it's steadily growing). A differential/incremental backup weighs about 500MB, on average - roughly 15GB over a month of nightly runs. So, with a file retention period of 4 months, the storage space needed for backup is about 4×(30+30×0.5)=180GB.

I've configured Bacula's maximum volume size to 4GB (do read the fine manual). This means that it'll divide the backup archive into chunks of no more than 4GB in size. This allows Bacula to recycle volumes when their contents are not needed anymore, i.e. when everything in them is older than the retention period.

I've also configured Bacula to use separate pools of volumes for the monthly full backup jobs and for the nightly incremental/differential backup jobs. It seemed like a good idea at the time. It wasn't.

Bacula does not recycle volumes before it actually needs them. This means that I ended up with leftover volumes on disk that were no longer needed and would only be recycled on the next backup. And since I separated the volumes into two pools per client, the full-backup leftover volumes remained on disk for a month, were then recycled, and replaced by other, more recent, leftover volumes. The overhead is about one volume per full backup, and for two clients it amounts to 8GB.

Furthermore, I use the same disk to store the VirtualBox disk image of my virtual WinXP PC. That's about 15GB.

The disk capacity is 230GB, but 1 percent of this disk is used by the OS - that's 2.3GB down the drain.

That leaves me with close to 25GB of slack. Which doesn't seem too bad, but it's actually pretty bad. The problem is that Bacula, by default, will perform a full backup whenever it detects that the fileset, i.e. the list of files/directories that's included/excluded in each backup job, has been modified. And, as you can imagine, I did just that, at least once, during the past few months.
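Putting the numbers above together - 180GB of retention, 8GB of leftover volumes, the 15GB VirtualBox image and the 2.3GB OS share - the slack figure checks out as a one-liner:

```shell
# disk budget: capacity minus retention, leftover volumes, VM image and OS share
awk 'BEGIN { printf "%.1fGB\n", 230 - 180 - 8 - 15 - 2.3 }'
# → 24.7GB
```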

I've had to reconfigure Bacula as follows:
  1. use a single backup pool per client (I could merge both into a single pool, but it seems to me that keeping client volumes separate is a more robust approach) - this should reduce the recycling overhead, because I expect volumes to be recycled more often now
  2. reduce the volume size to 700MB, in an attempt to lower the leftover overhead even more, by lowering the chance that a volume contains files from different backup jobs (another, more accurate, approach is to set the Maximum Volume Jobs to 1)
  3. reduce the retention period to 2 months (actually, I've never had to restore files older than a week or so, but... better safe than sorry)
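For illustration, the reconfigured per-client pool might look roughly like this in bacula-dir.conf (the pool name is made up - check the Pool resource documentation before copying anything):

```conf
# hypothetical per-client pool: small volumes, shorter retention
Pool {
  Name = laptop-pool
  Pool Type = Backup
  Recycle = yes                 # reuse volumes whose contents have expired
  AutoPrune = yes               # prune expired volumes automatically
  Volume Retention = 2 months
  Maximum Volume Bytes = 700MB  # small volumes -> less leftover overhead
  # Maximum Volume Jobs = 1     # the more accurate alternative mentioned above
}
```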

I stopped the Bacula director daemon
invoke-rc.d bacula-director stop
erased all the backup volumes, reset the Bacula database, aka the catalog (yes, I'm using the SQLite3 backend), started the director daemon, and then used bconsole to manually launch backup jobs for both my wife's PC and my own.

Planning ahead is a good idea. It's only that I realized this fact too late.

Friday, December 18, 2009

Recovering From a Bad Kernel Upgrade on QEMU PowerPC

A recent comment prompted me to launch my QEMU hosted PowerPC virtual machine. Once launched, I couldn't just shut it down, I had an unrelenting urge to upgrade it.

I launched aptitude as root, hit u to update the packages list. Later, I hit SHIFT-u to mark all upgradable packages for installation, hit g, reviewed the list of actions that aptitude was about to perform, and hit g again, in order to actually start the upgrade.

That's a routine ritual that I perform almost everyday on my Debian/Testing laptop, usually with no ill effects, neither to my home computer, nor to my questionable sanity. It pretty much just works.

Now, my PowerPC VM runs Debian/Stable - I expected no problem at all. Nevertheless, sh*t did happen.

The Kernel was also upgraded in the process, which triggered an update of the initrd image. Nothing unusual.

I rebooted the VM and eventually got the login prompt. I typed root as the username, and my password at the subsequent password prompt, and got "Login incorrect".

OK, so I mistyped the password, nothing unusual here. I tried it again. But then something quite unusual happened: when I typed root I got this string echoed back at me: ^]r^]o^]o^]t, and I couldn't login.

I switched to the next console with ALT-right arrow and tried it again, with exactly the same results. I couldn't login.

Now, since there's no SSH daemon running on this VM, I had no other way of logging in.

I couldn't find a relevant bug report on the Debian BTS. I hate it when this happens.

The solution to my problem was obvious: downgrade the Kernel. But how? After all, I had to login first.

Normally, on an x86 PC or VM running Debian, with GRUB as the boot-loader, you still have the option to boot into the previous Kernel, until it is explicitly uninstalled. With a PowerPC VM running Quik as the boot-loader, there's no such option. What a PITA.

There's always the option to discard the disk image, and start over with a fresh install - but I did customize the disk image enough to make it rather painful.

I decided to attempt to boot the VM from an old image of the Debian PowerPC installation CD that was lying around on my hard disk, and see if it gets me anywhere:
qemu-system-ppc debian_lenny_powerpc_small.qcow -cdrom debian-501-powerpcinst.iso -boot d
I tried it twice before I figured out what I had to do in order to mount the disk image and attempt to repair it:
  1. hit ENTER at the boot prompt to start the Debian installer
  2. select the defaults, continue until you reach the hardware detection stage, and wait
  3. when prompted to configure the host name, go back until you get a menu with an option to detect disks, select it and wait
  4. go back until you get the menu with an option to open a shell - select it
  5. type the following at the shell prompt:
    mount /dev/hda2 /media
    cd /media
The /dev/hda2 partition happens to be the Linux boot partition on this VM. I inspected its contents and found that while there was only a single Kernel image there, there were two initrd images - both with the same name, but one of them with a .bak file extension.

This was, apparently, a backup copy of the old initrd image, made during the previous upgrade. I had a hunch that the old initrd image might still work with the new Kernel, because it had the same version number. I hoped that my problem was with the initrd image, rather than with the new Kernel itself.

I swapped the images
mv initrd.img-2.6.26-2-powerpc.bak initrd.img-2.6.26-2-powerpc.good
mv initrd.img-2.6.26-2-powerpc initrd.img-2.6.26-2-powerpc.bak
mv initrd.img-2.6.26-2-powerpc.good initrd.img-2.6.26-2-powerpc
cd /
umount /media
shut down the VM and started it again without the install CD.

It actually worked: I managed to login!


The next thing to do was to downgrade the Kernel:
  1. launch aptitude
  2. select to install the older Kernel version (luckily, it was still available in the packages list)
  3. install it
  4. select the newest Kernel version (the one that caused all this grief)
  5. forbid it by pressing SHIFT-f, so that next time I don't upgrade to this specific version by mistake
I rebooted the VM and found that I could still login.

I guess I should've investigated this further and submitted a proper bug report, but running QEMU on my slow box is such a bucket of pain, that I'd rather avoid any more of it.

Friday, December 11, 2009

Anonymous Browser Uploads to Amazon S3

I've joined the Cloud.

I've signed up for the Amazon Simple Storage Service (aka S3). It costs nothing when unused, and almost nothing when used.

My original motivation for signing up was the potential for off-site backup. You know, just in case. The worst case.

But cheap remote storage isn't enough - what I hadn't considered at all, when I signed up, was bandwidth. The upload bandwidth that my ISP provides me, for the price I'm willing to pay, is a measly 512Kbit/s. Consider uploading a 35GB snapshot via this narrow straw of a connection. I'll let you do the math. Bottom line is that it seems I won't be using S3 for backup.
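To save you the math (taking the uplink as 512Kbit/s, which matches the narrow-straw description):

```shell
# time to upload 35GB at 512Kbit/s (ignoring protocol overhead and stalls)
awk 'BEGIN { secs = 35 * 1024^3 * 8 / 512000; printf "%.1f days\n", secs / 86400 }'
# → 6.8 days
```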

But now that I've already signed up for the service, I started looking for other ways of using it: file sharing (of the legal kind) of files that are too large to send/receive as e-mail attachments.

After some digging I found S3Fox Organizer, which provides easy access to S3 from within Firefox. It allowed me to create buckets and folders, and then upload files, and then generate time-limited URLs that I could distribute to friends and family members, in order to allow them to download these files.

It works, but it's rather cumbersome when compared to Picasa, YouTube, SkyDrive, etc. And, while cheap, it ain't free.

And it's unidirectional - I could only send files.

Receiving files into my S3 account seemed to require web development karma that I don't possess. Luckily, after some more digging, I found a relevant article at the AWS Developer Community website: Browser Uploads to S3 using HTML POST Forms. The accompanying thread of reader comments is even more useful than the article itself, since it provides a ready-made PHP script for generating a working, albeit rather spartan, browser upload interface:
  1. prerequisites:
    1. AWS S3 account
    2. create a storage bucket and an upload folder under that bucket (I did this with S3Fox)
    3. PHP enabled web server (see this howto for example) that will host the upload script (I host my server at my home computer)
  2. download getMIMEtype.js and place it at the document root directory
  3. place the following PHP script at the document root directory as s3upload.php
  4. edit the script and plug in your own AWS access key, AWS secret key, upload bucket name, upload folder name, and maximum file size (currently set at 50MB)
  5. share a link to this script with anyone you want to get files from

And here's the script itself:

<?php
// Send a file to the Amazon S3 service with PHP
// Taken, except for some fixes, from a reader comment,
// which refers to the article "Browser Uploads to S3 using HTML POST Forms".
// Puts up a page which allows the user to select a file and send it directly to S3,
// and calls this same page with the results when completed.

// Change the following to correspond to your system:
$AWS_ACCESS_KEY = 'your-aws-access-key';
$AWS_SECRET_KEY = 'your-aws-secret-key';
$S3_BUCKET = 'my-upload-bucket';
$S3_FOLDER = 'uploads/'; // folder within bucket
$MAX_FILE_SIZE = 50 * 1048576; // 50MB size limit
$SUCCESS_REDIRECT = 'http://' . $_SERVER['SERVER_NAME'] .
    ($_SERVER['SERVER_PORT'] == '' ? '' : ':' . $_SERVER['SERVER_PORT']) .
    '/s3upload.php?ok'; // s3upload.php is the script's URL from the server root

// create document header
echo '<html><head>
    <title>S3 POST Form</title>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
    <script type="text/javascript" src="./getMIMEtype.js"></script>
    <script type="text/javascript">
    function setType(){
        document.getElementById("Content-Type").value = getMIMEtype(document.getElementById("file").value);
    }
    </script>
</head><body>';


// process result from transfer, if query string present
$query = $_SERVER['QUERY_STRING'];
$qBucket = '';
$qKey = '';
$res = explode('&', $query);
foreach ($res as $ss) {
    //echo 'ss: ' . $ss . '<BR/>';
    if (substr($ss,0,7) == 'bucket=') $qBucket = urldecode(substr($ss,7));
    if (substr($ss,0,4) == 'key=') $qKey = urldecode(substr($ss,4));
}
if ($qBucket != '') {
    // show transfer results
    echo 'File transferred successfully!<BR/><BR/>';
    $expires = time() + 1*24*60*60; // private link expires in one day
    $resource = $qBucket . "/" . urlencode($qKey);
    $stringToSign = "GET\n\n\n$expires\n/$resource";
    //echo "stringToSign: $stringToSign<BR/><BR/>";
    $signature = urlencode(base64_encode(hash_hmac("sha1", $stringToSign, $AWS_SECRET_KEY, TRUE/*raw_output*/)));
    //echo "signature: $signature<BR/><BR/>";
    $queryStringPrivate = "<a href='https://s3.amazonaws.com/$resource?AWSAccessKeyId=$AWS_ACCESS_KEY&Expires=$expires&Signature=$signature'>$qBucket/$qKey</a>";
    $queryStringPublic = "<a href='https://s3.amazonaws.com/$qBucket/$qKey'>$qBucket/$qKey</a>";

    echo "URL (private read): $queryStringPrivate<BR/><BR/>";
    echo "URL (public read) : $queryStringPublic<BR/><BR/>";
}

// setup transfer form
$expTime = time() + (1 * 60 * 60); // now plus one hour (1 hour; 60 mins; 60 secs)
$expTimeStr = gmdate('Y-m-d\TH:i:s\Z', $expTime);
//echo 'expTimeStr: '. $expTimeStr ."<BR/>";

// create policy document
$policyDoc = '
{"expiration": "' . $expTimeStr . '",
  "conditions": [
    {"bucket": "' . $S3_BUCKET . '"},
    ["starts-with", "$key", "' . $S3_FOLDER . '"],
    {"acl": "private"},
    {"success_action_redirect": "' . $SUCCESS_REDIRECT . '"},
    ["starts-with", "$Content-Type", ""],
    ["starts-with", "$Content-Disposition", ""],
    ["content-length-range", 0, ' . $MAX_FILE_SIZE . ']
  ]
}';
//echo "policyDoc: " . $policyDoc . '<BR/>';
// remove CRLFs from policy document
$policyDoc = implode(explode("\r", $policyDoc));
$policyDoc = implode(explode("\n", $policyDoc));
$policyDoc64 = base64_encode($policyDoc); // encode to base 64
// create policy document signature
$sigPolicyDoc = base64_encode(hash_hmac("sha1", $policyDoc64, $AWS_SECRET_KEY, TRUE/*raw_output*/));

// create file transfer form
echo '
<form action="https://' . $S3_BUCKET . '.s3.amazonaws.com/" method="post" enctype="multipart/form-data">
    <input type="hidden" name="key" value="' . $S3_FOLDER . '${filename}">
    <input type="hidden" name="AWSAccessKeyId" value="' . $AWS_ACCESS_KEY . '">
    <input type="hidden" name="acl" value="private">
    <input type="hidden" name="success_action_redirect" value="' . $SUCCESS_REDIRECT . '">
    <input type="hidden" name="policy" value="' . $policyDoc64 . '">
    <input type="hidden" name="signature" value="' . $sigPolicyDoc . '">
    <input type="hidden" name="Content-Disposition" value="attachment; filename=${filename}">
    <input type="hidden" name="Content-Type" id="Content-Type" value="">

    File to upload to S3:
    <input name="file" id="file" type="file">
    <input type="submit" value="Upload File to S3" onClick="setType()">
</form>';

// create document footer
echo '</body></html>';
?>

Friday, December 4, 2009

Auto Restart of Daemons after Upgrades

I use checkrestart to check which processes need to be restarted after an upgrade. This handy script is part of the debian-goodies package.

checkrestart not only lists processes that need to be restarted, it also attempts to deduce which service/daemon each process belongs to, and lists the associated init scripts that have to be restarted. It's not perfect, but most of the time it gets the right results.

For a long while I followed a manual routine:
  1. upgrade packages
  2. manually run checkrestart (as root)
  3. restart the suggested init scripts, e.g.
    /etc/init.d/shorewall restart
  4. repeat from step 2 until checkrestart reports
    Found 0 processes using old versions of upgraded files

Some things to keep in mind:
  1. restarting GDM means that you lose your current X session - pay attention!
  2. but, restarting your current X session is probably the cleanest way to remove desktop related processes running old files (e.g. tray icons)
  3. if sshd needs to be restarted, you'll also have to disconnect all current ssh sessions
  4. with some processes (e.g. perl and python) it's necessary to look at the command line in order to figure out what needs to be restarted:
    ps -p <process-id> -o pid= -o cmd=
  5. some processes (e.g. console-kit-daemon) require restarting dbus (see Debian bug #527846 - and it seems that it's advisable to also restart GDM afterwards)
  6. some upgrades (kernel, GRUB, libc) probably require a reboot
  7. you may still end up having to manually close some applications (e.g. mutt, emacs) or kill some stubborn processes, depending on the specific packages that were upgraded

I've automated some of this with the following script:
#! /bin/bash
# restart whatever checkrestart suggests, except GDM
/usr/sbin/checkrestart | grep -e "/etc/init\.d/[^\ ]* restart" | grep -v "/etc/init.d/gdm restart" |
while read cmd; do
    echo "${cmd}"
    eval ${cmd}
done
# report any processes that still use old files, along with their command lines
/usr/sbin/checkrestart |
/usr/bin/awk '{
    if ( NF == 2 && $1 ~ /^[0-9]+/ )
        system("ps -p "$1" -o pid= -o cmd=");
    else
        print $0
}'
which runs automatically after installing/upgrading packages. This is accomplished by adding the following line to /etc/apt/apt.conf.d/99local (create this file if necessary):
DPkg::Post-Invoke { "if [ -x /usr/sbin/checkrestart ] && [ -x /path/to/checkrestart/script ]; then /path/to/checkrestart/script; fi;" };
Note that I specifically avoid restarting GDM automatically (you may want to add a similar check for your own favorite display manager).

Friday, November 27, 2009

Déjà vu: HPLIP Upgrade or Yet Another Printer Problem

I use hp-timedate to synchronize the clock on my Officejet 5510 All In One Printer with my PC clock.

Here's an error message that I found embedded in the output of hp-timedate a few days ago:
error: Unable to communicate with device (code=12): hpfax:/usb/officejet_5500_series?serial=MY3C1D11KS96
error: Unable to open device. Exiting.
I checked the USB cable connection and it looked OK. I listed the USB devices with lsusb and it seemed OK:
Bus 001 Device 004: ID 03f0:3a11 Hewlett-Packard OfficeJet 5500 series
I launched hp-toolbox and was surprised to discover that both the printer and fax icons appeared with a little red X-mark decoration. And sure enough, hp-toolbox insisted that both were unavailable.

I cycled the printer power, but to no avail. I considered rebooting my box, but then I had an idea - or, rather, I finally realized that I faced a very similar situation in the past. It all seemed very familiar. I tried running hp-timedate as root, and what-do-you-know - it worked.

So, this was a permissions issue. Again. I opened /usr/share/doc/hplip/NEWS.Debian.gz and found the following:
Access to the full functionality of hplip; ink check, toolbox,
printing and scanning is now provided for members of the 'lp'
group. The use of the scanner group is depreciated.
So I added myself to the lp group from the GNOME Control Center, using the "Users and Groups" task, logged out and logged in again, and then verified that I could finally manage the printer from my own account.

Unlike the previous time, the permissions issue was documented alright. Not that it helped, but it's progress nonetheless. I guess.

Friday, November 20, 2009

Reinventing a Wheel: Autostarting Applications

Once reinvented, a wheel is bound to be reinvented again.

I've already described how I use the ~/.xprofile script to launch applications when X starts. One issue with this method is that the applications are launched before the Window Manager is launched, which can and does cause all sorts of weird problems. Some applications need to be started from within the Window Manager.

GNOME, and, presumably, other desktop managers, conform with the Desktop Application Autostart Specification. This means, among other things, that you can autostart applications by placing a file with a .desktop extension under ~/.config/autostart for each of these applications. The contents of the .desktop file has to conform with the Desktop Entry Specification.

All that technical Mumbo Jumbo is usually well hidden under the hood. A normal GNOME user is expected to configure auto-startup applications from the "Startup Applications" control center task. It's easy, clean and sane.

But I use an alternative Window Manager - awesome. It lets me autostart applications in a myriad of ways from the ~/.config/awesome/rc.lua configuration script. None of these conforms with the freedesktop standards, which means that whenever I install/remove a package with an associated autostart application (e.g. tray icon), I need to manually update my startup script.

Well, not anymore. Here's how I handle autostart applications in my rc.lua:
os.execute('grep -ie \'^exec=\' '..
'/etc/xdg/autostart/*.desktop '..
'$HOME/.config/autostart/*.desktop '..
'$HOME/.config/awesome/autostart/*.desktop '..
'| sed -e \'s@.*autostart/@@g\' -e\'s@Exec=@@g\' '..
'| awk -F: \'{e[$1]=$2}END{for(d in e) system(e[d]"&")}\'')
This hack autostarts the applications specified by the .desktop files in ~/.config/autostart and /etc/xdg/autostart, with the former taking precedence over the latter.

Being a hack, it only considers the Exec key in the .desktop file. It does not take into account the working directory set in the Path key and/or whether to start the application in a terminal (as specified by the Terminal key). There are other deficiencies too, but these are the ones that I might be inclined to address if the need arises.

As you may have noted from the code, I added a new autostart directory under ~/.config/awesome, which has the highest priority. It houses .desktop files that correspond to autostart applications, that I only want to start under awesome. I also use it to disable some of the GNOME autostart applications by copying their corresponding .desktop file to this directory, and replacing the command specified by the Exec key within the file, with the command /bin/true.
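For example, a hypothetical override that neutralizes one of GNOME's autostart entries might look like this (the entry name is made up; the filename must match the one under /etc/xdg/autostart):

```ini
[Desktop Entry]
Type=Application
Name=Update Notifier
Exec=/bin/true
```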

Saturday, November 14, 2009

Cellular Phone Backup Script

As promised, here it is (commentary follows):
#! /bin/bash

backup="$HOME/backup/phones" # backup destination directory (adjust to taste)
photos_root="..."            # path of the photos folder on the phone

delay=2 # time (in seconds) to wait between each phone access

section=0
grep port= $HOME/.gammurc | cut -d= -f2 |
while read phone; do
    name="$(grep -m $(( $section + 1 )) name= $HOME/.gammurc | tail -1 | sed s/name=//)"
    echo $name

    rm -rf /tmp/$phone
    mkdir -p /tmp/$phone
    mkdir -p "$backup/$phone"
    cd /tmp/$phone

    # backup photos
    gammu -s $section getfolderlisting $photos_root | grep "$photos_root" | sort |
    while read filerecord; do
        phone_path=$(echo $filerecord | cut -d\; -f1)
        file=$(echo $filerecord | cut -d\; -f3 | sed s/\"//g)
        timestamp=$(date -d "$(echo $filerecord | cut -d\; -f4 | sed s/\"//g)" +'%s')
        size=$(echo $filerecord | cut -d\; -f5)
        echo $file
        local_path="$backup/$phone/$file" # where the local copy of this file lives

        # copy file from phone if not found on disk or has different size and/or timestamp
        if [[ ! ( -e $local_path && \
            "$timestamp $size" == "$(stat --format='%Y %s' $local_path)" ) ]]; then
            sleep $delay
            gammu -s $section getfiles "$phone_path" 2>/dev/null
        fi
    done

    # backup phone settings
    sleep $(( $delay * 2 ))
    # the first no is to save backup in ascii not unicode
    # the second no is to disable broken backup of contacts in SIM (gammu bug?)
    gammu -s $section --backup settings.backup 2>/dev/null <<EOF
no
no
EOF

    # # backup sms
    # # disabled - seems to hang waiting for phone
    # sleep $delay
    # gammu -s $section --backupsms sms.backup

    cd - > /dev/null
    rsync -avz /tmp/$phone $backup
    let section++
done
My initial plan was to back up photos, contacts, settings and SMS contents, but I've hit several problems, which made the script this ugly:
  1. Gammu fails to copy contacts from the SIM card on our phones (Nokia 2600) - I decided to only back up the contacts from the phone memory, which meant that I had to synchronize the contacts list in the phone memory with the one on the SIM card
  2. Gammu hangs while trying to retrieve SMS contents from my wife's phone - I decided not to back up SMS contents
  3. photo retrieval is slow - so I complicated my script to make sure that I only retrieve new photos
  4. photo retrieval is prone to failure - waiting a second or two before each transfer seems to make communications much more robust

Phone contents are copied over to a backup directory whose name is determined by the port entry in each phone's settings section inside ~/.gammurc - which, in my case, is the Bluetooth address of the phone. You may want to use the name entry instead.

And, in case it isn't obvious, you must have a settings section for each phone being backed up, inside ~/.gammurc. The easiest way to do this is to let the Wammu phone setup wizard guide you.
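For reference, a minimal ~/.gammurc with two such sections might look roughly like this (the addresses and connection type are made up - Wammu fills in the real values):

```ini
[gammu]
port = 00:11:22:33:44:55
connection = bluephonet
name = my-phone

[gammu1]
port = 66:77:88:99:aa:bb
connection = bluephonet
name = wifes-phone
```

The section suffix (none, 1, 2, ...) is what the script's section counter selects with gammu's -s option.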

Gammu comes with a Python library, which I might use someday for rewriting this script in Python. The main benefit would be that communications with each phone would only need to be initiated once, potentially making the whole process both faster and more reliable.

Friday, November 6, 2009

On Being S.M.A.R.T

After reading "Watching a hard drive die", "Checking Hard Disk Sanity With Smartmontools" and the Wikipedia article about S.M.A.R.T, I decided to install smartmontools and test my hard drives for problems:
aptitude install smartmontools

The first hard drive that I diagnosed with smartctl was my laptop's primary hard disk:
smartctl -a /dev/hda
This showed a few unsettling results, but luckily no reallocation errors or other critical errors.

I got similar results for my external Western Digital Elements hard disk. That was good, because that's my backup disk. Phew.

My other external hard disk is an old Western Digital hard disk that's in a USB connected disk enclosure. I tried diagnosing it and got the following error:
root@machine-cycle:~# smartctl -a /dev/sda
smartctl 5.39 2009-10-10 r2955 [i686-pc-linux-gnu] (local build)
Copyright (C) 2002-9 by Bruce Allen,

/dev/sda: Unsupported USB bridge [0x04b4:0x6830 (0x001)]
Smartctl: please specify device type with the -d option.

Use smartctl -h to get a usage summary
I did as I was told (i.e. read the usage summary) and then tried the following:
smartctl -a -d usbcypress /dev/sda
Cypress happens to be the manufacturer of this enclosure's USB to IDE bridge, but smartctl doesn't seem to recognize it without my help.

Well, now I got a report from smartctl but it showed that one DMA CRC error was logged.

I ran tests on all hard disks with smartctl -t short ... for each device and they were all completed successfully. Phew. /Me wiping cold sweat off brow/

Next thing to do was to enable smartd to monitor all my hard disks:
  1. edit /etc/default/smartmontools and make sure you have the following line in it:
    start_smartd=yes
  2. start the daemon:
    invoke-rc.d smartmontools start
On my system this doesn't work as is, and I had to edit the daemon configuration file /etc/smartd.conf, based on the examples in the comments and the manual page:
  1. comment out the line that starts with DEVICESCAN (i.e. prepend it with a sharp sign #)
  2. add lines per each hard disk to be tested:
    # primary disk                                                                                                                               
    /dev/hda -a -o on -S on -s (S/../.././05|L/../../7/04) -m root -M exec /usr/share/smartmontools/smartd-runner
    # /dev/gigapod (multimedia)
    /dev/disk/by-path/pci-0000:02:00.2-usb-0:1.4:1.0-scsi-0:0:0:0 -a -d usbcypress -d removable -o on -S on -s (S/../.././05|L/../../7/04) -m root -M exec /usr/share/smartmontools/smartd-runner
    # /dev/elements (backup)
    /dev/disk/by-path/pci-0000:02:00.2-usb-0:1.1:1.0-scsi-0:0:0:0 -a -d sat -d removable -o on -S on -s (S/../.././05|L/../../7/04) -m root -M exec /usr/share/smartmontools/smartd-runner
    Note that this schedules short self tests to run each morning at 5AM and long self tests to run on Sunday mornings at 4AM.

    Also note that I use the /dev/disk/by-path links to the external disk block device, in order not to be hit by udev's tendency to reorder device names.
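The cryptic -s argument is a regular expression over the date fields T/MM/DD/d/HH (test type, month, day of month, day of week, hour); decoded, the schedule used above reads:

```conf
S/../.././05   # Short test: any month, any day, any weekday, at 05:00
L/../../7/04   # Long test: any month, any day, weekday 7 (Sunday), at 04:00
```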

Testing, 1, 2, 3!

Friday, October 30, 2009

One Liner: Synchronize Digital Camera Clock

The clock on my Canon Powershot A620 is yet another clock that I want to synchronize with my PC.

Hook the camera over USB to the PC and run:
gphoto2 --set-config /main/settings/synctime=1
Use the following to read the camera clock and verify that it's synchronized:
gphoto2 --get-config /main/settings/time
(requires gphoto2)

Friday, October 23, 2009

Wine and Missing MFC42.DLL

Every once in a while I need to run a Window$ application. If the application at hand is a standalone application that does not require installation, I'll usually attempt to run it first with Wine, instead of launching a full blown WinXP virtual machine.

Using Wine is a no brainer:
wine /path/to/application.exe <command-line-arguments>
and if the file happens to have executable permissions (chmod +x ...) then it's even easier - just launch it like any other script or binary executable, by typing
/path/to/application.exe <command-line-arguments>
Last time I tried it I hit a problem:
err:module:import_dll Library MFC42.DLL (which is needed by L"Z:\\path\\to\\application.exe") not found
err:module:LdrInitializeThunk Main exe initialization for L"Z:\\path\\to\\application.exe" failed, status c0000135
This means that a required DLL is missing - in this case it's MFC42.DLL. This specific DLL is needed for (older) GUI applications that use MFC, and it isn't part of Wine.

Whatever you do if this happens to you - don't try getting this DLL from any of the websites that Google will list when you search for it. Google marks quite a few of these sites as sites that can harm your computer. You have been warned.

Window$ users can get MFC42.DLL and other DLLs by installing the Microsoft Visual C++ Redistributable Package.

The recommended way of doing this under Wine is to follow the instructions on the Wine wiki:
  1. download winetricks:
  2. make it executable:
    chmod +x winetricks
    (optional: place the file in a system directory such as /usr/local/bin)
  3. install cabextract:
    aptitude install cabextract
    (actually, I'm not sure it's necessary for fixing the MFC problem, but it's definitely recommended for fixing other Wine problems)
  4. run
    winetricks mfc42
The winetricks script has lots of other options for fixing a host of issues and installing a rather long list of third party packages that are not part of Wine.

Bottoms Up!

[25 Feb 2012] UPDATE: winetricks has been packaged in Debian/testing for quite a while - so I recommend that you don't install it manually as per steps 1 thru 3 above, but rather use one of the package managers to do it for you:
aptitude install winetricks

Friday, October 16, 2009

It Works. Again. (or: ZIP Archives and non-English Filenames)

The wait is over. My wife's laptop came back from the lab. They've replaced the motherboard. Again.

My wife has commandeered my laptop during the past few weeks, trying to get some work done on a VirtualBox hosted WinXP machine, with her My Documents folder pointing to a VirtualBox shared folder in my home directory.

She wasn't happy: I think that the virtual machine is surprisingly fast; she thinks that it's dead slow.

Anyway, the bottom line is that she did manage to modify a few documents and created several new ones. So all that remained to be done, before reverting my laptop to its Debian self, was to synchronize between the My Documents folder on the fixed laptop and the documents directory on my laptop's hard drive.

I used the following to create a ZIP archive containing only the files that my wife modified recently:
cd ~/docs/wife/                                              
find -newermt "Sep 23 2009" | grep -v Thumbs.db | grep -v Desktop.ini | zip -@ /tmp/
My intention was to copy this archive over to my wife's laptop and extract its contents to her My Documents folder.

Unfortunately, the Unicode encoded file names in the archive showed up as garbage when the archive was opened on the Window$ box. So I re-archived the files with 7zip (Debian package p7zip-full), which seems to handle non-English file names in a more sensible manner:
mkdir /tmp/wife-docs
cd /tmp/wife-docs
unzip ../
7z a ../wife-docs.7z .
Next time (hopefully never - but I don't kid myself) I'll probably mount my wife's documents folder over CIFS and then use rsync to synchronize the files. It's supposed to be The Right Way™ to do this.

The warranty on my wife's laptop expires in a month. Wish us luck.

Friday, October 9, 2009

Synchronize HP Printer/Fax Clock

After the stellar success of my cellular phone clock synchronization script, I had an idea: why not synchronize the printer clock too?

We have an HP OfficeJet 5510 printer/fax/scanner/copier combo, which is hooked directly to my laptop over USB. Its front panel clock tends to drift quite a bit, and I usually forget to switch it from/to Daylight savings time.

It took about two minutes to find hp-timedate, and here's the corresponding cron job specification:
 10 4    *   *   *   /usr/bin/hp-timedate
Now, how do I interface my laptop with the microwave oven?

Friday, October 2, 2009

Synchronizing Cellular Phone Date and Time with Gammu

After becoming the proud owner of a Bluetooth to USB adaptor, I started looking around for a way to automate the backup process of our cellular phones.

I've installed Wammu and was pleasantly surprised by how easy it was to get it up and running. But as soon as I had it configured to talk with both our phones, I realized that what I really wanted was to use Gammu, its command line alter ego.

My plan is to run a backup script every night that will copy contents and settings from the phones to my laptop over Bluetooth. I'll post my backup script as soon as it's up and running for a few days.

In the meantime, let me present another script that I run as a scheduled task every night. Its job is to synchronize the date and time on both our phones to the laptop clock. This has several benefits:
  1. the phone clocks are more accurate (because the laptop clock is updated by NTP)
  2. the phone clocks switch automatically to and from Daylight savings time
  3. the phone clocks become synchronized to each other
And here it is:

#! /bin/bash
section=0
grep port= ~/.gammurc | cut -d= -f2 |
while read phone ; do
    name="$(grep -m $(( section + 1 )) name= ~/.gammurc | tail -1 | sed s/name=//)"
    echo "$name"
    gammu -s $section setdatetime
    let section++
done
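
The script assumes a ~/.gammurc with one section per phone, each containing port= and name= lines (written without spaces around the =, since that's what the script greps for). Something along these lines - the device addresses, names and connection type below are made up for illustration:

```ini
; ~/.gammurc - one section per phone; section 0 is [gammu], then [gammu1], ...
[gammu]
port=AA:BB:CC:DD:EE:01
connection=bluephonet
name=my-phone

[gammu1]
port=AA:BB:CC:DD:EE:02
connection=bluephonet
name=wife-phone
```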

I run the script as a Cron job at 4:05 AM. The idea is to synchronize clocks after 2:00AM, which is when the switch to/from Daylight savings time occurs where I live:
  1. launch crontab:
    crontab -e
    this should launch a text editor (you can configure which one by modifying the EDITOR environment variable)
  2. add the following line
      5 4    *   *   *   /full/path/to/
  3. save the file and exit the editor

Tuesday, September 29, 2009

It Hit the Fan. Again.

My wife's laptop is busted.


This time it just died. No lights, no sounds, no nothing. It's still covered by extended warranty. Which expires next month. Timing is everything.

Downtime for my wife was less than 20 minutes, and that's only because I wanted to finish going through my RSS feeds for the day before handing my laptop over to her.

Thank you Bacula.

Thank you VirtualBox.


Customer service was horrible this time, but they finally obliged and the laptop was sent to the lab. I'm pissed as hell.

If you're looking for a used laptop, DO NOT buy this model: HP Pavilion dv6000. It's a piece of crap. And quite heavy at that (2.7 kg).

If you're buying a new laptop - DO invest in an extended warranty. You'll thank me later.

And now we wait.


Friday, September 25, 2009

Boot PC from Floppy Image w/ GRUB2 and MEMDISK

[The stuff below is outdated - see the updated howto]

I've already described how I installed GRUB2 on my Live HDD, under the false impression that it would allow me to chainload and boot from a WinPE ISO image.

While I searched for more info about this subject I hit upon MEMDISK, which is part of SYSLINUX - a boot loader for the Linux operating system which operates off an MS-DOS/Windows FAT file-system. MEMDISK is an auxiliary module of SYSLINUX that simulates a disk whose contents resides in a disk image file. This allows the bootloader to boot from that disk image.

MEMDISK can be used to boot from floppy disk images and (I haven't tried this myself) from hard disk images. It seems that the next major release (v4.0) will allow it to actually boot from ISO images, but at the moment Debian has v3.82 available.

Some popular utilities (like MHDD) are distributed as floppy disk images, and it seemed like a useful tool to have. It took some time to put the pieces of the puzzle together, but I finally managed to get it working on my Live HDD:
  1. install syslinux:
    aptitude install syslinux
  2. copy (as root) /usr/lib/syslinux/memdisk to /boot
  3. create a directory /boot/images to hold all the floppy disk images
  4. get some floppy disk images, e.g. like this:
    • download SystemRescueCD
    • loop mount the downloaded ISO image:
      mount -o loop systemrescuecd-x86-1.3.0.iso /mnt/iso
    • copy any disk images that you need from /mnt/iso/bootdisk/*.img (which include MHDD and a few others) to /boot/images
  5. replace the contents of /etc/grub.d/40_custom with the script below
  6. run update-grub - if all goes well, it should create an entry in the GRUB2 boot menu for each floppy image that you copied over to /boot/images
Note that on some machines, and with some floppy images, the MEMDISK hack won't work (hint: you may want to replace bigraw in the script with another command line option). YMMV.

And here's the script:

#!/bin/sh
set -e

. /usr/lib/grub/grub-mkconfig_lib

IMAGES=/boot/images

if test -e /boot/memdisk ; then
  MEMDISKPATH=$( make_system_path_relative_to_its_root "/boot/memdisk" )
  echo "Found memdisk: $MEMDISKPATH" >&2
  find $IMAGES -name "*.img" | sort |
  while read image ; do
    IMAGEPATH=$( make_system_path_relative_to_its_root "$image" )
    echo "Found floppy image: $IMAGEPATH" >&2
    cat << EOF
menuentry "Bootable floppy: $(basename $IMAGEPATH | sed s/.img//)" {
EOF
    prepare_grub_to_access_device ${GRUB_DEVICE_BOOT} | sed -e "s/^/\t/"
    cat << EOF
	linux16 $MEMDISKPATH bigraw
	initrd16 $IMAGEPATH
}
EOF
  done
fi

Friday, September 18, 2009

Movie Trailers on

Watching movie trailers is one of my favorite pastimes when I'm online. I usually point Iceweasel to, and enjoy a few minutes of condensed cinematic bliss.

It was one of my first good impressions of Linux, when I found out that I could watch movie trailers in fullscreen with mplayerplug-in (the mplayer plug-in for Mozilla) - a feature that, at the time, was blatantly missing from the QuickTime browser plug-in on Window$.

A few months ago something has changed at the trailers site and mplayerplug-in started crashing Iceweasel whenever I tried to watch any movie trailer (see Debian bug #527293). I searched for a fix and learned that mplayerplug-in has been superseded by gecko-mediaplayer (from the same author). So, after verifying that it worked on, I made the switch and never looked back.

A few weeks ago gecko-mediaplayer started acting up too, due to another change by Apple. This time the browser would not crash but I could no longer watch video clips either (see Issue #34 on the gecko-mediaplayer issue tracker).

It seems that the trailers site requires that the media player, which is used to play the video clips, identify itself as QuickTime. A fix for this issue has already been implemented upstream, but it'll take some time before it trickles downstream to Debian.

In the meantime, I use the User Agent Switcher Firefox add-on to masquerade both browser and plug-in as QuickTime, when visiting
  1. select Tools->Default User Agent->Edit User Agents...
  2. select New->New User Agent...
  3. type "QuickTime (" in the Description edit box
  4. copy the following string to the User Agent edit box:
    QuickTime/7.6.2 (qtver=7.6.2;os=Windows NT 5.1Service Pack 3)
    (got this from a message on the mplayerplug-in mailing list)
  5. remove any text from all the other edit boxes
  6. press OK until you get back to the browser
Just remember to switch to the QuickTime user agent before you click on video clip links.

Saturday, September 12, 2009

Bluetooth Dongle Trouble

Cellular Phones and Kids

Cellular phones and kids under 3 don't mix. My wife and I have had each of our phones either fixed or replaced after being thrown without being caught, chewed upon and drowned in the toilet.

In at least two of these happy occasions the contents on the phone, namely photos, address book, calendar and messages, were wiped clean by the oh-so-competent cellular phone service provider. Well, I did sign a paper allowing these guys to do just that, either deliberately or by mistake.

Every time I hand over a phone to be serviced, I get the same response when I ask for a backup: "I can't backup your phone without USB". True enough, both our phones (low end Nokia 2600c) have no USB interface and can only be interfaced with over Bluetooth. But why should this make backups impossible? It shouldn't.

Our First Dongle: Window$

I was at the local Office Depot a few days ago, and while waiting in line to pay, I noticed that they had several Bluetooth to USB adapters on display. I made a decision that I lived to regret, and purchased one of these dongles.

I got the one that was mid-priced, simply because it had the Tux logo on it, alongside the Window$ and Mac logos. "Any vendor who claims compatibility with Linux is likely to be technically superior to its competitors," I thought to myself, "their products are likely to be better engineered and better tested."

The dongle is marketed under a local brand name, so I have no idea what make it really is, otherwise I would strongly suggest you stay away from it. It came with an installation CD, and a leaflet that instructed me to install the software prior to connecting the dongle.

Being a sucker for manuals, I did as I was told and installed the software on my wife's Window$ XP laptop. The installation went along nicely, and then I was asked to connect the dongle. I did just that, and after a bunch of ballooned notifications appeared and disappeared near the system tray area, a little Bluetooth icon appeared there, and Window$ assured me, with yet another balloon, that the device was ready to use. Goody.

The dongle software, however, rewarded me with an error message:
Bluetooth Software license file not found.

I hit the OK button and a standard file selection dialog appeared, allowing me to search for and open a file named license.dat. I found it on the installation CD, selected it and hit OK. The error message appeared again, and the process repeated itself, only that this time I hit Cancel when asked to find the file, disconnected the dongle in disgust, and uninstalled the software.

I sat there, weighing my options, and decided to attempt to return the dongle and purchase a different one.

Our First Dongle: Linux

But before returning the dongle, I decided I'd try to connect it to my Debian GNU/Linux laptop. I did not expect much.

At first I used lsusb to list the USB devices connected, and the dongle was there as Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode).

So far, so good.

I searched for packages whose names contain the word "bluetooth":
aptitude search bluetooth
and decided to install two packages:
aptitude install bluetooth gnome-bluetooth
Afterwards, I launched the GNOME control center and clicked the newly installed Bluetooth icon. This opened the Bluetooth Preferences, I clicked "Setup new device...", and a wizard appeared, prompting me to press "Forward" in order to scan for Bluetooth enabled devices. It didn't find any device, and I was somewhat disappointed, but not too surprised.

Well, after I enabled Bluetooth connectivity on the phone (ARGHH!), it was discovered by the wizard just fine.

The next step was to pair the phone with the computer: I was shown a set of digits and was asked to type them in on the phone.

After some more tinkering I found that with the gnome-bluetooth package came bluetooth-applet which showed a nice little Bluetooth tray icon after being launched. I right-clicked the icon and a menu appeared. I clicked "Browse files on device...", which launched the Nautilus file manager. I navigated through the directories, found the photos I made with the phone, and was able to copy them over to a directory on my hard disk.


I never expected to have a piece of hardware run (perfectly) only on Linux and be completely broken on Window$.

Our Second Dongle: Window$

The guy at Office Depot was very nice. He let me replace the dongle with a different one. I picked a more expensive dongle (made by Dynamode), paid the difference and went home.

There was no leaflet this time. The installation instructions were printed on the back of the package: load the enclosed CD-ROM into the computer's optical drive, install the software and connect the dongle.

I tried doing this on my wife's PC. It didn't work. There was no Bluetooth tray icon, like I had with the first dongle, and I could not start the Bluetooth software (Bluesoleil) after it was supposedly installed. I clicked and double clicked all the icons that seemed relevant, but nothing happened.

It took almost half an hour of futzing around before I decided to restart the computer. And guess what? It helped. In hindsight, it did seem odd that neither the instructions on the back of the package nor the Bluesoleil installer even hinted that a restart is advisable.

I was then able to use the overly animated Bluesoleil user interface in order to pair my phone to the computer, and browse the files on the phone.

Thank you very much.

Our Second Dongle: Linux

I had to try it on my box, and, basically, it just worked.

I did find out that frequent removals and insertions of the dongle sometimes required me to restart Bluetooth support:
invoke-rc.d bluetooth restart
But other than that, it just worked.

I also found out about hcitool - a command line utility that's part of BlueZ (the official Linux Bluetooth stack, that's installed automatically when you install the bluetooth package). It can be used, for example, to scan for Bluetooth enabled devices:
hcitool scan

Bad Dongle

All seemed fine, until Bluesoleil started crashing.

At first I thought this had to do with my wife's laptop resuming from hibernation, but I did not have the energy to investigate it any further.

Instead, I downloaded a software update for Bluesoleil, in the hope that this would fix the issue. But the setup program that I downloaded managed to remove the version installed on the box, and nothing more. So lame.

Oh, and the plastic casing of the dongle fell apart after two days of use.


Our Third Dongle

The guy at Office Depot was very nice. Again.

This time I picked a no-name, made-in-China, thumbnail-sized dongle - the smallest and cheapest yet. It came with no software, and Window$ would not recognize it, even after a reboot. I pretty much expected this to happen.

I then removed any trace of Bluetooth device drivers from the computer, by following the instructions in the article "Removing unused device drivers from Windows XP machines" (open the device manager with the environment variable devmgr_show_nonpresent_devices set to 1), restarted the machine and re-plugged the dongle.

Window$ managed to find and install the drivers for the dongle with no external assistance and no extra software. I right-clicked the Bluetooth tray icon, and was able to scan for and find my phone, and then pair it with the computer - very similar to my Linux experience.

I have a hunch that either of the previously mentioned dongles would've worked just fine without installing the software that was bundled with them.

More Fun with Dongles

If you have a Nokia phone, I recommend that you install Nokia's PC Suite - it's a pretty cool, free of charge, integrated suite of applications that lets you control and manage your phone from your PC.

Linux has Gnokii and Gammu for doing pretty much the same tasks, from the command line or your own scripts/programs. There are also graphical frontends (XGnokii and Wammu, respectively), which aren't as polished as Nokia's PC Suite, but they do provide support for non-Nokia devices.

There's all sorts of neat stuff that one can do with Bluetooth:
  1. access the Net via the cellular phone
  2. setup a personal-area-network - wireless networking
  3. connect a headset as an audio input/output device
  4. send SMS from the computer
  5. automatically lock/unlock the desktop when the cellular phone is far from/close to the computer
  6. ...and more
But I haven't tried any of these things. Yet.

Hopefully the (blue)toothache is over now.

[14 Sep 2009] UPDATE: The bluetooth wizard stopped working after I upgraded some packages. It's a known bug and a patch is already available (see Debian bug #545549).

Let's just say that I'm not as ecstatic about the state of Bluetooth on Linux as I was two days ago. What a pain.

Saturday, September 5, 2009

Wasting Time with Git on Windows

It's quite easy actually: use Git to clone a certain repository to a Window$ machine and then attempt to build it.

I did just that with an autoconf based project. The machine at hand has Cygwin installed on it, so all I needed to do, in theory, was
git clone <git://address/of/repository.git>
cd <repository>
./configure && make
and have a binary to test.

I used PortableGit from msysgit, which seemed like a good idea at the time (instead of installing Cygwin-based Git).

Git clone worked like a charm. Luckily, that repository contained no symlinks and/or files whose names differ only in case (e.g. File.txt and file.txt).

But I couldn't run ./configure - I got a bunch of weird syntax error messages, which made no sense. The script looked fine (well, apart from being ridiculously long...).

It took an embarrassing half an hour before I figured it out, and even that was by chance. I opened the configure script for editing with emacs and noticed the text (DOS) at the status bar. Oh, so this is a Newline problem!

I tried
dos2unix configure
and I now got syntax errors in other files. Definitely a Newline problem.

I cloned the same project to my Debian box, and it looked OK - none of the scripts seemed to have CRLF newlines. So it had to be Git auto-converting those text files between platforms.

A quick Net search later and I had a fix:
git config --global core.autocrlf false
I cloned the repository again, and this time everything worked nicely.
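If you'd rather not flip the conversion off globally, the same setting can also be applied per repository after cloning. A quick sketch (the scratch path is arbitrary):

```shell
# create a scratch repository just to demonstrate the per-repo setting
git init -q /tmp/autocrlf-demo
cd /tmp/autocrlf-demo

# disable newline conversion for this repository only
git config core.autocrlf false
git config --get core.autocrlf    # prints: false
```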

To be fair, I've been hit by Window$ specific problems with other SCMs, both free and commercial, with or without implied warranty, whenever I needed interoperability of some sort between Window$ and Linux.

If anything, I should've seen it coming.

Friday, August 28, 2009

Installing GRUB2

On my last post I described how I hit a problem while trying to rescue files from a bricked Window$ Vista laptop, using my Live HDD (a complete Debian system that I boot from a USB hard disk). My problem: how does one run chkdsk on a corrupted NTFS partition, without a Window$ installation disk?

As I mentioned last time, if you have it installed, you may be able to start the Recovery Console and check the disk. But chances are that either you don't have it installed (Window$ XP) or that it too fails to start.

While searching for a stand alone disk checking utility I recalled that I've once used BartPE to attempt to rescue files from an old desktop PC that I've managed to fry. BartPE was a real wonder at the time: a Window$ Live CD (also known officially as a Windows Preinstallation Environment). You must have a legitimate copy of Windows XP in order to create the Live CD, but other than that it just works. And it includes chkdsk from the original Windows install disk.

(Later on, as I searched some more, I found WinBuilder and LiveXP, which are more versatile and rich in features, but I'll leave that to a future post).

Anyway, I also hit a blog post on how to boot an ISO image via GRUB2. This looked promising: all I had to do was create a Windows PE ISO image, place it on my Live HDD boot partition, and boot into it using GRUB2.

Let me tell you upfront: it doesn't work this way - I misinterpreted that article, and only much later realized that while GRUB2 can loop-mount an ISO image, this doesn't mean that it can boot it. The blog post about why GRUB2 cannot actually boot CDROM images clarifies what can and cannot be done with GRUB2.

So, misguided as I was, I went on to upgrade GRUB on my Live HDD, and replace it with GRUB2. Basically, all that needs to be done is run
aptitude install grub-pc
This will add GRUB2 as the first boot option in the GRUB boot menu ("Chainload into GRUB 2"), but will not replace GRUB on the MBR, so that you may revert to GRUB if you encounter any problem.

Which is pretty considerate.

Once you're happy with it, you should run upgrade-from-grub-legacy in order to finish the transition.

As easy as pie. Really. Unless it bricks your machine. Luckily, it didn't brick mine, which I find somewhat surprising...

I've been hit by some quirks though, most annoying of which is the fact that currently memtest86+ cannot be launched from GRUB2, unless you apply a patch to /etc/grub.d/20_memtest86+ and run update-grub (see Debian bug #540572).

After I verified that GRUB2 worked as advertised, it took a few failed attempts to boot a BartPE ISO image from GRUB2, before I did some more research and realized that I hit a dead end.

What a disappointment.

All in all, I find that there's no real reason to upgrade to GRUB2, unless you really need it. Sure, it's sexier than legacy GRUB (I liked the default Debian graphical background), its configuration is scriptable, it can loop-mount ISO images, it can boot from an Ext4 partition, and much more. But, if your machine works fine with legacy GRUB, I'd suggest you wait for it to be officially obsoleted before upgrading to GRUB 2.

[24 Sep 2009] UPDATE: It didn't take long - the package grub-pc is now the official upgrade path for grub, so I've upgraded to GRUB2 on my laptop too. Works just fine. Just the way I like it.

Sunday, August 23, 2009

Some Things To Try When Windows Fails To Start

We went to visit my wife's aunt. While my wife and her aunt conducted civilized conversation over tea and biscuits, I was cajoled by the aunt's daughter into taking a look at her laptop. "It's broken" I was told, "and surely you can fix it!". Flattery goes a long way with me.

I was handed a brand new Dell laptop that was literally broken after the teenage daughter dropped it. The plastic cover, close to the bottom right corner of the screen, was cracked. The other problem was that, after turning on the laptop, it remained stuck at the Window$ Vista boot screen - the one with a green progress bar going back and forth, indefinitely.

I forcibly shut down the laptop with the power button, turned it on again, and tried hitting F8 during the boot process in order to enter safe mode. I wasn't quick enough, so I had to do it again.

Safe mode usually did the trick for me on XP machines, but it didn't help this time.

The boot process hung while loading crcdisk.sys. I now know that this is a pretty common complaint, but it seems to be caused by several unrelated problems. At the time, however, I could only guess that it's a disk access problem of some kind. Not that it helped - the box was bricked.

I wasn't aware of the fact that I could've tried hitting Alt-F10 during the Vista boot process in order to get to a recovery console. That would've at least allowed me to run chkdsk, or something.

I wasn't aware of Startup Repair.

I'm a Vista newbie.

My next step was to boot the laptop using my live HDD which I usually carry with me. It took two attempts to find how to convince the laptop's BIOS to boot from the USB disk, but I finally succeeded. The plan was to access the internal hard disk and either copy important stuff from it to the live HDD, or burn said data to a writable DVD.

The plan was foiled by the fact that I could not mount the NTFS partition - I was notified that I had to run chkdsk first...

At this point I had to admit failure and suggested to my disappointed audience that the laptop be taken to the repair shop.

When we got back home I was restless. I just had to find a way of making that simple plan of mine work in the future. This got me on a roller-coaster ride of activity: I found myself installing GRUB2, setting up a Window$ live CD, slipstreaming an XP install CD and more. I plan to summarize my efforts in some of the upcoming posts.

In the meanwhile you may want to consider installing the Window$ recovery console on an XP box that you care about, by running the following from the Window$ install CD:
d:\i386\winnt32.exe /cmdcons
I did this on my wife's laptop, and it made me feel all warm and fuzzy inside.

Friday, August 7, 2009

Sharing a Directory with a Windows PC

It's rather easy (see, for example, this thread at the Ubuntu forums):
  1. install samba:
    aptitude install samba
  2. open /etc/samba/smb.conf for editing
  3. add a stanza similar to the following (the share name in brackets determines the last part of the Window$ network path):
    [files]
    comment = Shared Files
    path = /path/to/shared/files
    browseable = yes
    read only = yes
    valid users = user
  4. save the file
  5. run testparm to verify that the new configuration is valid
  6. add the specified user to samba like this (you'll be prompted for a password):
    smbpasswd -a user
    (note that the default settings require that this user be a valid Linux user on the machine where the samba daemon is running)
  7. restart the server:
    invoke-rc.d samba restart
  8. you should now be able to access this directory from a Window$ machine as \\computer-name-or-ip\files after providing the specified user's user name and samba password

Happy sharing.

[09 Aug 2009] UPDATE: if you're running a firewall (and you probably should) you'll need to configure it to accept incoming/outgoing SMB traffic.

I use shorewall, and I had to add these lines to /etc/shorewall/rules and then restart the firewall:
SMB/ACCEPT  $FW      loc
SMB/ACCEPT  loc      $FW

Note that this rule opens a lot of ports - you should only allow SMB traffic between hosts you fully trust.

Friday, July 31, 2009

Fixing Alt Arrow Key Bindings in tcsh

It took a while, but the IT department at work finally got around to upgrading the OS on my workstation to Ubuntu 8.04 LTS. It's a bit outdated when compared to my home setup (Debian/Squeeze), but familiarity goes a long way - it feels better. There were some kinks that had to be ironed out, but for the most part it was a pretty smooth transition.

One problem that irritated me was shell key bindings. I'm used to moving the cursor a word at a time with <ALT> <LEFT> / <RIGHT> - and was horrified to find out that these key bindings had stopped working.

I contemplated the differences between my home setup and my work setup, and figured that it had to be the shell. You see, we use tcsh as the default shell at work, instead of bash - I have no idea why, but it's the standard here.

I tried the alt-arrow keys in bash and found that they work nicely. bash uses readline, so its key bindings are set in ~/.inputrc. tcsh, on the other hand, seems to be handling key bindings on its own.

With my hunch verified, all that needed to be done was add the missing key binding to my ~/.tcshrc. I already had the following:
bindkey -k up history-search-backward
bindkey -k down history-search-forward

so I guessed I could use alt-right as a key name, like this:
bindkey -k alt-right forward-word

but this only earned me an error message.

I did the sensible thing and asked a coworker, who had Ubuntu installed on his workstation before me, and he told me that he had the same problem and that there was a fix - "just add the following magic to your ~/.tcshrc":
bindkey '\e[1;3C' forward-word 
bindkey '\e[1;3D' backward-word

I did as I was told, and it didn't work. Well, not quite - it did fix the key binding under konsole, which is the terminal emulator that most of my coworkers use, but not under rxvt-unicode - the terminal emulator that I use.

So I did the other sensible thing, and searched the Net for a solution. I hit a message on the screen mailing list, which got me on the right track.

Basically all I had to do was run cat at the console, hit alt-arrow keys, record the strings that are echoed back, and use these as the key combinations to bind. Here's what I got:
bindkey '^[^[[C' forward-word
bindkey '^[^[[D' backward-word
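
To capture the sequences for your own terminal emulator, run cat with no arguments and press the key combination - the raw escape sequence is echoed back (^[ stands for the ESC character). On my machine the session looked roughly like this (your sequences may well differ):

```
$ cat
^[[1;3C        <- Alt-Right under konsole
^[^[[C         <- Alt-Right under rxvt-unicode
```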

Thank you Lazyweb.

Friday, July 24, 2009

Reordering Accounts in Thunderbird/Icedove

I have far too many email accounts. I use Icedove (the non-branded version of Thunderbird) as my mail client and am quite happy with it.

Lately, however, the email server at work was replaced and I had to setup a new email account in Icedove. Being the last to be added, the new account's folders naturally showed up last in Icedove's folders pane.

I wanted to move the new account upwards to a more visible position, so I attempted to drag and drop the new account's top level folder with the mouse.

Guess what? It doesn't work and nothing happens. What a drag. What a drop.

There seem to be two ways of doing this:
  1. the sane, yet obscure method: install the Folderpane Tools add-on and modify the ordering of the folders via its Preferences dialog.
    I only found out about this after I used the next method...
  2. Follow this procedure:
    1. close Icedove
    2. consider backing up your ~/.mozilla-thunderbird directory before going any further...
    3. open the file ~/.mozilla-thunderbird/20bir36j.default/prefs.js in a text editor (e.g. gedit) - replace 20bir36j.default with your own profile directory name
    4. search for the line that lists all the account names as a single comma-separated string;
      it corresponds to the current ordering of accounts, with two exceptions:
      • account1 is associated with local folders (which appear last)
      • the first account that's displayed is the default account, regardless of its position in this list
    5. reorder the accounts to your liking, e.g. "account1,account4,account2,account3"
    6. save the file
    7. launch Icedove, verify that the ordering of accounts is correct
    8. Enjoy the resulting brain damage.
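
For reference, if memory serves, the pref in question is named mail.accountmanager.accounts - do verify the exact name against your own prefs.js before editing. The line looks something like this (the account IDs here are examples):

```javascript
// prefs.js - pref name from my own setup; account IDs are illustrative
user_pref("mail.accountmanager.accounts", "account1,account4,account2,account3");
```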