Friday, 24 February 2012

Apache mod_rewrite & CodeIgniter


This article isn’t really about CodeIgniter – I’m getting to grips with that at the moment, so I might write more about it in the future. It’s about Apache’s mod_rewrite module, and about getting it to work usefully on a dev server with the way CodeIgniter (and other PHP) projects are laid out.

What I wanted was a single server (i.e. one virtual host) with space for several different projects, or branches of a project. In my opinion, the easiest way to access each project is just to use http://server/project/ in the browser (there are other ways – notably virtual hosts – but they usually require configuration for each new project and / or each new dev machine). With simple websites, it’s fine to put each project in a sub-folder and access them as suggested. However, that ignores a recommendation for CI projects – one that I think should be followed on any web project – which is to move code that does not need to be publicly accessible outside of the browsable section of your file system (in this case, CI’s “system” and “application” folders should sit outside “webroot”, or whatever you want to call it).

My goal was to have the dev server set up so projects could be moved between it and a production server without modification, and to have each project wholly contained in its own folder. That means each project needs its own “webroot” and its own space outside “webroot”. I therefore want every request to //server/project/index.php to be rewritten to //server/project/webroot/index.php (and similarly for other files in other folders below webroot): in essence, “webroot” needs to be injected after every project name. Files and folders other than “webroot” in the project folder then become inaccessible to the browser. That isn’t just a matter of convenience for the developer: it means that no resources can be accidentally accessed outside the correct area of the web server’s file system, and all relative links (stylesheets, images, etc.) must be properly located.

The first thing I learned is to put the rules directly in the <VirtualHost> section and not in a <Directory> section wherever possible. There are two reasons for this. Firstly, it’s more efficient: Apache deals with rewriting much faster when it isn’t done on a per-directory basis. That is (at least in part) because of the second reason, which is that <Directory> entries (and .htaccess files in particular directories, which are equivalent) can be parsed multiple times as the request is processed. This can cause major headaches for the unwary, because there’s nothing to stop Apache deciding it needs to run through the rules again (in fact, it always seems to do so if the URL has been rewritten), and rather than starting with the original URL, you get the modified one. You can therefore end up in an infinite loop if you, say, simply append something to the end of whatever URL comes in.

Unfortunately, the difference in behaviour between rules located in different sections of the config file is not limited to multiple passes. The other thing that changes is the content of some of the variables you can make use of in the rules. For this reason, it’s important to check (and potentially modify) any rules you see suggested, unless you’re sure they were designed to go in the same place you want to put them.

I ended up with the following, the second part of which adds index.php after project names (if not present), whilst retaining the rest of the URL as parameters. It’s based on examples in the CodeIgniter documentation:
# Inject 'webroot/' if request starts with a valid folder
# and '/webroot' is not already 2nd folder
RewriteCond %{DOCUMENT_ROOT}$1 -d
RewriteCond $2 !/webroot
RewriteRule ^(/[^/]+)(/?[^/]*)(.*) $1/webroot$2$3

# Rewrite any */webroot/* file request to index.php
# Don't rewrite if file exists OR it's already
# index.php (even if 404)
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !/index\.php$
RewriteRule ^(/[^/]+/webroot)/?(.*)$ $1/index.php/$2
I've used %{REQUEST_FILENAME} in the conditions for the second rule. Although several other variables have similar content, be careful which you choose in situations like this: not only do the values of some of them change depending on where the rules sit within the Apache config files, but I found that some of them had their contents rewritten by earlier rules and some did not (and I found no reference to this in the mod_rewrite documentation).
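
When rules misbehave like this, mod_rewrite's own logging is the quickest way to see which pattern matched and what each variable contained at every step. On the Apache 2.2 series (current as I write) it looks something like the following – the log path is illustrative, and note that in Apache 2.4 these directives were replaced by "LogLevel rewrite:traceN":

```apache
# Enable mod_rewrite's debug log (Apache 2.2 and earlier).
# Put these in the same <VirtualHost> as the rules.
RewriteLog /var/log/apache2/rewrite.log
RewriteLogLevel 3   # 0 disables; higher levels log more detail
```

Remember to set the level back to 0 when you've finished: high log levels slow every request down considerably.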

Tuesday, 10 January 2012

Linux Fileserver and ClamFS

I recently needed to provide a file server for a client that would work with Windows and OS X clients. For reasons of cost and maintenance, we decided to use Ubuntu LTS Server. We also wanted anti-virus scanning, as customer files are introduced to this server regularly. I decided to use the popular, open-source ClamAV engine, with ClamFS providing the on-access scanning. I want to talk briefly about ClamFS in general, because there isn't much comment on it that I can find, and then about a specific problem I had, because the solution is not necessarily obvious and uses an interesting feature of samba.

ClamFS seems to be the most straightforward way to provide on-access scanning with ClamAV. It's a FUSE-based daemon that mirrors one part of the file system to a mount point elsewhere, providing on-access protection for reads and writes to the mirrored version. I discovered the following about it:

  1. The version I installed from the Ubuntu repository doesn't include an init.d script – adding a line to rc.local seems to be the preferred method of boot time initiation. You can, of course, write your own init.d script
  2. The config file is written in XML, rather than the familiar format used by pretty much every other Unix-based config file – a format that is more readable and more easily editable (certainly on a GUI-less server). You need to include the config filename when starting ClamFS
  3. There is apparently no way to stop the process other than using kill and then manually unmounting the FUSE mount associated with it
  4. Lack of permissions caused a bit of difficulty – the ClamAV user might need some additional permissions before your users can read and write protected files
  5. There is little documentation; a tutorial taking new users through the steps of installation and configuration would make its use clearer
  6. Once set up, it seems to work fine: I've had no problems with it.

My configuration is as follows: Truecrypt volumes (which are normal files, stored at a point we'll call location A) are mounted at another point in the filesystem (location B) and ClamFS mounts a copy of B to a third point (location C). Location C is then used for the samba share path.
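
In command form, the chain looks something like this (all paths are placeholders, and the exact TrueCrypt invocation will depend on your volumes; ClamFS takes its XML config file as its only argument, with the source and mount directories defined inside that file):

```shell
# A -> B: mount the TrueCrypt volume file at location B
truecrypt /srv/volumes/files.tc /mnt/plain/files

# B -> C: ClamFS mirrors B to C, as specified in its XML config
clamfs /etc/clamfs/files.xml

# smb.conf then points the share's "path" at location C
```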

I wondered if having ClamFS start at boot time and mounting a copy of B elsewhere would prevent TC (which doesn't start at boot time) mounting a volume to B later on, but it turns out mounting volumes "underneath" an existing ClamFS mount works fine.

I had another problem, though. Because I have more than one share and more than one encrypted volume, I configured ClamFS to protect the directory above the one in which all the TC drives were mounted. Because of this (or maybe because of some other aspect of the redirection), the free space reported by samba was not that of the individual drives mounted within the ClamFS-protected directory, but that of the drive containing those mount points (or of the point ClamFS was mounting to – I'm not sure which, as they are on the same partition).

This can be more than an annoyance, because Windows systems from Vista onwards actually check this free space before attempting to write a file. If there isn't room, you can't write. In my case, the reported size was that of a partition almost full of TC volumes, so the reported free space (and therefore the maximum file size that Windows 7 clients could write) was severely curtailed.

There are two possible ways round this. The most obvious is to only allow ClamFS to mount to and from points inside any TC volumes you want to share. This will cause you headaches if you have many shares and only want ClamFS configured to protect one directory, or if ClamFS needs to be started before TC mounts its volumes (common, because manual intervention is usually needed on TC mounts for security reasons).

The second solution is to use a feature of samba which allows you to override the internal free-space code with a method of your own design. The smb.conf man page explains the details – essentially, you need to provide a command (writing a script seems to be the most common solution) that returns two numbers: the total number of 1K blocks in the filesystem and the number that are free, respectively. The man page makes a suggestion, which I tailored slightly:

#!/bin/sh
df -P "$1" | tail -1 | awk '{print $2,$4}'

The "-P" switch (added to the df command) forces the results for each drive onto a single line. If you don't do this and the path reported for the partition is longer than 20 characters, a line break is inserted and the positional parameters to awk will be incorrect.
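
You can sanity-check the numbers the script will return by pointing it at any mounted path. A slightly expanded version of the one-liner, wrapped in a function so it's easy to reuse (quoting "$1" simply guards against paths containing spaces):

```shell
#!/bin/sh
# Print total and free 1K blocks for the filesystem containing $1,
# in the "total free" format samba's "dfree command" expects.
dfree() {
    df -P "$1" | tail -1 | awk '{print $2, $4}'
}

dfree "${1:-/}"
```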

You then need to make sure the definition in smb.conf for each affected share contains the following:

[Sharename]
   …
   ; "path" is location C; the script's argument is location B
   path = /path/to/share
   dfree command = /path/to/script.sh /path/to/TC/mount

A quick side note: samba calls the script with the location it is trying to ascertain the size of as a first parameter. We've included a first parameter here, which simply pushes the samba-appended one into second position (which is then ignored). I have read that samba may call the script with the parameter "/", having chrooted to the share point before executing the script. I haven't investigated exactly what is happening in my test or production installations, but both work with the procedure I have outlined and this would not be the case if any chrooting were going on. I can only conclude that this is not the behaviour of current versions of samba (I'm using 3.4.7, courtesy of Ubuntu 10.04 LTS) or something else about my environments is altering that behaviour. I'd be interested to hear about different experiences.

Wednesday, 15 June 2011

Installing Linux VMWare Tools on Ubuntu

Infrequently, I create a new VMWare Linux VM. I do this just infrequently enough that I can't quite remember the procedure for installing VMWare Tools on the VM. This is documented in lots of places, I'm sure, and I normally try to stay away from repeating readily available material… but I can never find it when I want it. So, as an aide-mémoire for myself and (hopefully) a handy reference for anyone else who needs to go through the procedure, here are the necessary steps on Ubuntu. I'm using Ubuntu Server 11.04 (with no GUI) but the steps should work on other versions (including desktop versions, if you open a terminal: the instructions assume you already have a shell open).

Many of the steps here will be obvious to most users, but I've detailed everything so you can (if you wish) just copy and paste the lot (almost - see the notes) into shell scripts which will get the job done quicker. And those just starting out will also have a reference they can use.
  1. [Optional] Change the kernel. Even with the server install I did to write this article, the generic kernel was installed by default even though a kernel optimised for server operations is available. Not only that, but there is a version of the server kernel trimmed down to have only what is necessary for use in common virtualised platforms, including VMWare
  2. # Install latest kernel version
    sudo aptitude update
    sudo aptitude install linux-virtual
    
    # Reboot, so the new kernel is running when the tools
    # package is built and the correct headers will be
    # selected in step 4
    sudo shutdown -r now
    
  3. Attach the Tools ISO to the VM. In vSphere Client, you can right-click the VM in the inventory and select Guest -> Install / Upgrade VMWare Tools
  4. Install tools, with necessary packages (I'm assuming you are starting in your home folder or somewhere equally appropriate for putting the tools installation directory)
    # Most commands need root access. You can use 'sudo'
    # where necessary instead
    sudo su
    
    # Update apt package database (if you didn't earlier)
    aptitude update
    
    # Install packages necessary to build tools
    aptitude install build-essential linux-headers-`uname -r`
    # note backticks around uname command, not ordinary
    # inverted commas
    
    # No suitable mount point existed in my default install:
    # create one
    mkdir /media/dvd
    
    # Mount tools image and extract tarball
    mount /dev/dvd /media/dvd
    tar -xzf /media/dvd/VMwareTools-*.tar.gz
    # You can use auto-complete above: it's just one file
    
    # Run install script
    cd vmware-tools-distrib/
    ./vmware-install.pl -d  # -d auto-accepts all defaults
    
    # Tidy up and exit root shell
    cd ..
    rm -rf vmware-tools-distrib/
    umount /media/dvd  # the script usually does this for you
    exit
    
Notes
  1. The kernel headers are installed by default on Ubuntu, so the linux-headers-* package is only necessary if the kernel has been changed since installation.
  2. The "uname" command in the install list ensures that the package for the running kernel is selected. If you've just installed a kernel using one of the metapackages listed above, it will be the latest one and headers can be installed simply with "linux-headers-virtual" (for example).
  3. To initialise the tools, the "/usr/bin/vmware-config-tools.pl" script needs to be run. If you used '-d' or allowed the install script to run it (it prompts for this in interactive mode), this will already have been done, but it can be useful to know about this separate step in case of problems.
  4. If you put the second set of commands into a script, you'll need to remove "sudo su" from the start and run the script as root. "su" opens a new shell and the commands from the rest of the script will not be passed into it if you run as-is.
Once the tools are installed, updates can be performed automatically from the host, so there is rarely a need to refer back to this process for an existing machine.

Wednesday, 27 April 2011

Temporary PATH Additions: Modifying the standard CMD Here Extension

A relatively common shell extension for Windows systems is to have right-click for folders in Explorer offer the option to open a cmd prompt window with that folder as the current working directory (CWD). I seem to have added this to any Windows installation that I've used for any period of time. This is often called "cmd here" or "command here" and Microsoft provide an installable for this function in their PowerToys collection.

In fact, this feature is so useful that it's built into OSes from Vista onwards, but to access it you need to hold "shift" while you right-click the item, and it only works for folders (some versions of the extension let you right-click a file and have the command prompt open in the folder containing that file).

Something I find useful from time to time (and which I've never seen elsewhere) is to have file and folder context menus open cmd windows with the folder concerned added to the PATH environment variable, just for that session. This is great for uncommon or temporary use of a folder containing one or more exes without permanently bloating your PATH variable. Some programs with command line interfaces (such as Visual Studio) provide Start Menu shortcuts that open a cmd window with PATH modified for that window only and what I'm suggesting here is similar (but more dynamic).

All that happens when installing the "cmd here" extension is the addition of a few registry entries, and so I made a typical version of this add-on (based on the entries in the Win 7 registry and this web page) and adapted it. Save the following lines as a .reg file and you can add this to your file / folder context menus too:
Windows Registry Editor Version 5.00

    [HKEY_CLASSES_ROOT\*\shell\pathhere]
    @="Cmd with &Path here"
    ;"Extended"=""
     
    [HKEY_CLASSES_ROOT\*\shell\pathhere\command]
    @="cmd /k path %W;%%PATH%% && pushd %%USERPROFILE%%"

    [HKEY_CLASSES_ROOT\Directory\shell\pathhere]
    @="Cmd with &Path here"
    ;"Extended"=""

    [HKEY_CLASSES_ROOT\Directory\shell\pathhere\command]
    @="cmd /k path %L;%%PATH%% && pushd %%USERPROFILE%%"

    [HKEY_CLASSES_ROOT\Directory\Background\shell\pathhere]
    @="Cmd with &Path here"
    ;"Extended"=""

    [HKEY_CLASSES_ROOT\Directory\Background\shell\pathhere\command]
    @="cmd /k path %V;%%PATH%% && pushd %%USERPROFILE%%"

The entries for \Directory\Background enable the same effect by clicking in the empty space in an Explorer window: you just get the current folder the window is displaying.

As an aside, Raymond Chen explains the difference between the \Folder and \Directory classes in this blog post. Note that what the registry (and Raymond) are referring to here as "directories" are called "file folders" in parts of the Windows UI. We have used the "Directory" branch because it makes no sense to have virtual folders as targets for this sort of extension.

You'll have noticed that each entry's root has a commented-out, empty string value called "Extended". If these are un-commented (and you're using Vista onwards), the commands will be added only to the extended context menu, available with a Shift-right-click.

You could copy the \Directory\shell entries to the \Drive\shell branch if you wanted to provide the same facility for drive roots. You may also want to specify a different CWD, and this is controlled by the appended "pushd" command. If you delete it (everything from the first ampersand onwards, but don't forget to retain the closing quotation mark), the CWD will be the folder in which the context menu was opened.
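
For completeness, the drive-root sections you would add to the .reg file above would look like the following. This is a sketch I haven't tested: I've assumed %L expands for \Drive the way it does for \Directory, so check it on your own system before relying on it.

```reg
[HKEY_CLASSES_ROOT\Drive\shell\pathhere]
@="Cmd with &Path here"
;"Extended"=""

[HKEY_CLASSES_ROOT\Drive\shell\pathhere\command]
@="cmd /k path %L;%%PATH%% && pushd %%USERPROFILE%%"
```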

One thing I haven't sorted out completely is the variable expansion. When context menu entries for files are activated, %D, %L and %V all hold the filename with its path and %W just holds the path; some other letters hold other cryptic values. The details for directories seem similar, but my tests for \directory\background consistently crashed Explorer. The MS implementation of "cmd here" in Win 7 uses %L for \directory and %V for \directory\background. I can't find any documentation listing all these variables and I'd be interested to know if anyone's come across any.

Usual disclaimers apply (although it's highly unlikely to do anything you don't want) – specifically, I haven't tested this on anything other than one Win 7 Pro 32-bit installation. However, I'd expect it to work pretty much across the board, although older OSes (XP / 2003 and previous) will probably ignore the "Extended" key.

Tuesday, 15 March 2011

The Homeopathic Database

A few friends and I were discussing databases the other day. A colleague of one of us had tried to persuade him that a memory-based DB would be ideal for their project because of the increased commit speed compared to a disk-based system. Data would be eventually written to disk "at some point". My friend pointed out that /dev/null was even faster for writes and only moderately less useful if you need a cast-iron guarantee that all committed data will be available in the future.

If, instead of writing to /dev/null, you write to /dev/zero, it has much the same effect on your data, but reading from /dev/zero produces an infinite stream of zeros. Immediately, we realised this was the answer to every database user's dreams – dilute your data in an infinite sea of zeros: the Homeopathic Database.

Think about it. All those ones interspersed with zeros you started out with may seem important, but the advantages are worth considering. First of all, we know from the countless randomised, double-blind trials done on all homeopathic medicine* that it's a very effective idea. The fact that you only get zeros out at the end is not important because they have absorbed information from all the ones that have been diluted in them. As we know from homeopathic practice, the more zeros we have to dilute the ones in, the more effective the mixture, so the infinite number of zeros in /dev/zero means that what is stored in the database will be really good data.

Secondly, something that all DBAs worry about, backups, are really easy because the data is particularly well suited to compression: although there's an infinite amount of data in /dev/zero, as it's completely predictable, it's infinitely compressible. Backups therefore take no time at all.

The one thing you must remember to do is invert all your data before writing it to the database: the "law of similars" means that retrieved data will have the opposite effect in homeopathic concentrations as it did originally. And you may have to hit your server with a leather cushion while transactions are being committed.

Thanks to Mark, Steve and Alistair.

*They do do that, don't they? I mean surely no one would let people just sell any old rubbish without proper scientific investigation into whether or not it was better than placebo, would they? People who market it are able to make such grand claims for it, it seems certain they have data from repeatable, peer-reviewed trials or they wouldn't hold such strong beliefs.

Tuesday, 1 March 2011

.net Graphics in Windows Forms – Part 2: Anti-Aliasing Your Primitives

I promised this second instalment on Windows Forms graphics would be on rendering settings. Like the previous part of this series (on ControlStyles), this topic will not show any tricks or undocumented features, but I will discuss a few of the framework settings you can alter that change its behaviour and which I don't see discussed very often. The third instalment will discuss text rendering.

If you've never used any of the vector drawing primitives, you might like to try one or two out – just create an empty Forms project, override OnPaint() and you can draw (outlines) or fill (solid colour) several different shapes using methods in the Graphics object. The instance of Graphics you need is passed to OnPaint() in the PaintEventArgs object.

The basic things you need to know are:

  • Coordinates start at the top left of the control's client area and increase to the right (x) and down (y)
  • Angles are measured from the positive X axis (i.e. horizontally, to the right); angles increase clockwise and are given in degrees (many graphics systems and System.Math use radians!)
  • Brushes and Pens (which you'll find you need to fill and draw shapes, respectively) should always be disposed of if you create them. I've never looked into the mechanisms at work here, but the essentials are that they wrap unmanaged resources and we're led to believe that the garbage collector can't be relied upon to release those resources before the OS runs out of them.
  • Manual disposal applies to several other objects associated with drawing so make sure you check. The golden rule is that if you make it, you break it. If you were passed it from elsewhere (e.g. the Graphics object in the PaintEventArgs) you should NOT call Dispose() on it yourself
  • You can use ready-made brushes and pens, available in the Brushes and Pens static classes with a myriad of colours to choose from. As these are pre-existing objects don't call Dispose() on them yourself (you can't anyway – you'll get an exception)
  • If you don't need to draw everything, don't! You are given ClipRectangle as part of the PaintEventArgs object and if you can get away without drawing outside this rectangle, that can speed things up
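
A minimal OnPaint() override pulling those points together might look like this (the shapes and colours are purely illustrative):

```csharp
protected override void OnPaint(PaintEventArgs e)
{
    base.OnPaint(e);

    // We created these, so we must dispose of them
    using (var brush = new SolidBrush(Color.SteelBlue))
    using (var pen = new Pen(Color.DarkRed, 2f))
    {
        e.Graphics.FillRectangle(brush, 10, 10, 100, 60);  // solid shape
        e.Graphics.DrawEllipse(pen, 10, 80, 100, 60);      // outline only

        // Pens.Black is a ready-made pen: never dispose of it
        e.Graphics.DrawLine(Pens.Black, 0, 0, 110, 140);
    }
    // e.Graphics was passed to us in PaintEventArgs,
    // so we do NOT dispose of it
}
```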

There is a wealth of drawing tutorials available, so I don't want to say anything more about the basics. Instead, I want to look at a few of the properties available on the Graphics object and discuss what they do. The ones we are interested in are as follows:

  • SmoothingMode
  • PixelOffsetMode
  • InterpolationMode

SmoothingMode tells the rendering engine to use anti-aliasing when drawing lines and shapes. As I'm sure many of you will know, (at least in this context) anti-aliasing reduces jagged edges between areas of different colours by inserting transitional pixels of an intermediate colour where the shape of the (idealised) edge would cover only a proportion of the entire pixel.

The enum which defines the possible values for SmoothingMode has six members. However, one (Invalid) can't be used, and of the other five, two are synonyms for "use anti-aliasing" (AntiAlias, HighQuality) and three for "don't use anti-aliasing" (Default, None, HighSpeed). Note that within each of these sets the results are identical: there is no difference at all between rendering done with AntiAlias and with HighQuality. Notice also where "Default" sits: by default, anti-aliasing is switched off.

Anti-aliasing is not always the answer. It's slower, can make shapes look blurred and it doesn't really do anything if all your lines are vertical and horizontal, but I would strongly suggest that you at least try switching it on and examining the results if you are drawing anything remotely complex in your controls.

PixelOffsetMode tells the rendering engine how to align pixels on the screen with the coordinate system used to define points on the drawing surface. Like SmoothingMode, several of the enum members are equivalent (Default, HighSpeed and None are the same and HighQuality and Half are also equivalent). "Invalid" is again present but unusable. This setting is also dependent on the use of SmoothingMode.AntiAlias (or one of its synonyms) – PixelOffsetMode makes no difference if anti-aliasing is not being used.

The difference between the two modes is that any integer coordinate is considered to be at the top left of the pixel (None) or at the centre of the pixel (Half). PixelOffsetMode.Half takes longer to process but theoretically offers higher-quality rendering, because it is easier to follow the idealised course of a line if it goes through the centres of pixels rather than butting up against their edges. However, you may find it makes lines look softer than you want, especially verticals and horizontals.
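
Both settings are just properties on the Graphics object, set before you draw. A sketch of how they'd be used inside OnPaint() – the pen is assumed to exist, and a diagonal line shows the difference best:

```csharp
// Both enums live in System.Drawing.Drawing2D
e.Graphics.SmoothingMode = SmoothingMode.AntiAlias;
e.Graphics.PixelOffsetMode = PixelOffsetMode.Half;  // no effect without AntiAlias

e.Graphics.DrawLine(pen, 0, 0, 120, 47);
```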

In my opinion, it makes only a marginal difference to your results and given the extra calculation work it generates, is definitely in the "try it out" rather than "switch it on regardless" category. If anyone knows of a class of problem in which it makes a noticeable (and useful) difference to the results, I'd love to hear about it.

The final setting I want to briefly discuss is InterpolationMode. This property allows you to tell the rendering engine how you want it to treat images if they are scaled or rotated and it's only of use if you are using a bitmap in this way. I'm not going to go into the different modes available: it's enough to know that you don't have to stick with the defaults and a proper treatment of each method can easily be found elsewhere. For an interesting comparison of quality and speed, Bertrand Le Roy benchmarked the different methods: timings and rendered results can be found here.

For completeness, I'm also going to nod towards CompositingMode and CompositingQuality. These properties are worth looking into if you are creating images by adding partially transparent layers to your drawing surface, but are fairly self-explanatory and I'm not going to go into any more detail in this post.

Tuesday, 22 February 2011

.net WaitCursor: how hard can it be to show an hourglass?


I've seen a couple of different ways of using the 'Wait' cursor (aka the 'Hourglass') and several forum posts discuss the problems people have when they haven't been able to work out how to use it properly. Hopefully this is a comprehensive discussion of this small but seemingly complicated topic.

When your program is doing something which stops users from accessing the UI, you should display a 'Wait' cursor. There are three different things you can do to get this (and usually none of them work the way you'd want on their own):
// Method #1
Control.Cursor = Cursors.WaitCursor;

// Method #2
Cursor.Current = Cursors.WaitCursor;

// Method #3
Control.UseWaitCursor = true;
For the first two, Cursors.Default can be used to return to the expected arrow after the operation has finished; UseWaitCursor should simply be set to false again.

Controls all have a Cursor property and this sets the cursor shape when the mouse pointer is over a control. This property is examined and acted upon only when a Windows message (WM_SETCURSOR) is sent to a window. This means that until the next time this message is sent (perhaps when the pointer is moved away from and back over the control in question), updating this property won't have any effect. To exacerbate the problem, if the UI thread is blocked by whatever operation the WaitCursor would be displayed for, any WM_SETCURSOR messages that are generated won't be processed until the operation has finished.

The other problem with using this alone is that it is a per-control setting: set it for a form and all the child controls on the form will still display the default cursor unless you update all of their Cursor properties as well.

The solution for this problem (and the suggested 'proper' way of displaying the WaitCursor) is to set the Form's UseWaitCursor property. This has the advantage of working for any given control and all its child controls, so set it for a Form and the whole UI for the form will display the WaitCursor when the pointer is over it, regardless of the control under the mouse. There is also an Application.UseWaitCursor which has the same effect across all the windows of a running application. However, this still suffers from the problem of needing a WM_SETCURSOR message before the cursor shape will change.

So what about the other option? Cursor.Current is a static member of the Cursor class and accesses the OS to change the current cursor immediately. This is great… until the next WM_SETCURSOR message is processed and it goes back to whatever the control underneath is supposed to display.

Problems with all these approaches are made less predictable by UI-thread blocking, too: Cursor.Current will effect a change for some time if UI messages aren't being processed, for example, and then the cursor might suddenly change back for no obvious reason as a message gets handled.

So the best approach looks like it is to set MyForm.UseWaitCursor to true and then set Cursor.Current. As well as putting any long-running activities in a separate thread, of course. Well, that does solve the problem, unless you want the relatively common ability to cancel a long-running activity and you want the default cursor shape (i.e. an arrow) over your cancel button.
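
In code, that combination (with the usual insurance that the cursor is restored even if the operation throws) might look like this – the work method is illustrative:

```csharp
this.UseWaitCursor = true;           // whole form and all child controls
Cursor.Current = Cursors.WaitCursor; // immediate: no WM_SETCURSOR needed
try
{
    DoLongRunningWork();  // ideally on another thread, of course
}
finally
{
    this.UseWaitCursor = false;
    Cursor.Current = Cursors.Default;
}
```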

If you look into the Control.UseWaitCursor setter in Reflector, you'll see that it sets its flag (a bit in the private 'state' field in Control) and then recurses into the UseWaitCursor setters in each of its child controls. You might think (i.e. I thought) that all that's then needed is to reverse the setting of this flag in the button in which you want to display a normal arrow and all would be well. Unfortunately, this doesn't work – you still get WaitCursor everywhere. So how can you do it?

Well, it turns out that if you use all of:

MyForm.UseWaitCursor = true;
CancelButton.UseWaitCursor = false;
CancelButton.Cursor = Cursors.Default;

Then you can get the desired behaviour. And of course, a Cursor.Current call would also be in order if you find the cursor shape isn't changing until the mouse is moved.

I don't know how this works: I would have thought that if Control.UseWaitCursor = true sets Control.Cursor to the WaitCursor (which appears to be the case) then setting it to false would have the opposite effect, but I found that CancelButton.Cursor was still set to WaitCursor even after the UseWaitCursor flag in the control (not the form) had been reset.

This solves the problem and is not overly arduous but if you can explain the behaviour, leave a comment and I'll update the article accordingly!