Friday, 24 February 2012
Apache mod_rewrite & CodeIgniter
# Inject 'webroot/' if request starts with a valid folder
# and '/webroot' is not already 2nd folder
RewriteCond %{DOCUMENT_ROOT}$1 -d
RewriteCond $2 !/webroot
RewriteRule ^(/[^/]+)(/?[^/]*)(.*) $1/webroot$2$3
# Rewrite any */webroot/* file request to index.php
# Don't rewrite if the file exists OR it's already
# index.php (even if 404)
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !/index\.php$
RewriteRule ^(/[^/]+/webroot)/?(.*)$ $1/index.php/$2
Tuesday, 10 January 2012
Linux Fileserver and ClamFS
I recently needed to provide a file server for a client that would work with Windows and OS X clients. For reasons of cost and maintenance we decided to use Ubuntu LTS Server. We also wanted anti-virus scanning, as customer files are introduced to this server regularly, so I decided to use the popular, open-source ClamAV engine, with ClamFS providing the on-access scanning. I want to talk briefly about ClamFS in general, because there is little comment on it that I can find, and then about a specific problem I had, because the solution is not obvious and uses an interesting feature of samba.
ClamFS seems to be the most straightforward way to provide on-access scanning with ClamAV. It's a FUSE-based daemon that mirrors one part of the file system to a mount point elsewhere, providing on-access protection for reads and writes to the mirrored version. I discovered the following about it:
- The version I installed from the Ubuntu repository doesn't include an init.d script – adding a line to rc.local seems to be the preferred method of boot time initiation. You can, of course, write your own init.d script
- The config file is written in XML, rather than the familiar, more readable and more easily edited (certainly on a GUI-less server) format that pretty much every other Unix-based config file uses. You need to include the config filename when starting ClamFS
- There is apparently no way to stop the process other than using kill and then manually umounting the FUSE mount associated with it
- Lack of permissions caused a bit of difficulty – the ClamAV user might need some additional permissions before your users can read and write protected files
- There is little documentation; a tutorial taking new users through the steps of installation and configuration would make its use clearer
- Once set up, it seems to work fine: I've had no problems with it.
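To make the first and third points above concrete, here is the sort of start and stop sequence this implies. The config path and the mount point are placeholders based on my description below, not values from the ClamFS documentation, so adjust both for your setup:

```shell
# Start at boot: add this line to /etc/rc.local (before 'exit 0'),
# passing the XML config file explicitly
clamfs /etc/clamfs/clamfs.xml

# Stop: with no init script, kill the daemon by hand,
# then release the FUSE mount it leaves behind
kill "$(pidof clamfs)"
fusermount -u /location/C
```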
My configuration is as follows: Truecrypt volumes (which are normal files, stored at a point we'll call location A) are mounted at another point in the filesystem (location B) and ClamFS mounts a copy of B to a third point (location C). Location C is then used for the samba share path.
I wondered if having ClamFS start at boot time and mounting a copy of B elsewhere would prevent TC (which doesn't start at boot time) mounting a volume to B later on, but it turns out mounting volumes "underneath" an existing ClamFS mount works fine.
I had another problem though. Because I have more than one share and more than one encrypted volume, I configured ClamFS to protect the directory above the one in which all the TC drives were mounted. Because of this (or maybe because of some other aspect of the redirection), the free space reported by samba was not that of the individual drives mounted within the ClamFS protected directory, but the space on the drive that contained those mount points (or the point which the ClamFS was mounting to, I'm not sure which as they are on the same partition).
This can be more than an annoyance, because Windows systems from Vista onwards actually check this free space before attempting to write a file: if there isn't room, you can't write. In my case, the reported figures were for a partition that was almost full of TC volumes, so the reported free space (and therefore the maximum file size that Windows 7 clients could write) was severely curtailed.
There are two possible ways round this. The most obvious is to only allow ClamFS to mount to and from points inside any TC volumes you want to share. This will cause you headaches if either you have many shares and only want to have ClamFS configured to protect one directory or ClamFS needs to be started before TC mounts its volumes (common, because manual intervention is usually needed on TC mounts for security reasons).
The second solution is to use a feature of samba which allows you to override the internal free space code with a method of your design. The smb.conf man page explains the details – essentially you need to provide a command (writing a script seems to be the most common solution) that will return two numbers. These give the total number of 1K blocks in the filesystem and the number that are free, respectively. The man page makes a suggestion which I tailored slightly:
#!/bin/sh
df -P $1 | tail -1 | awk '{print $2,$4}'
The "-P" switch (added to the df command) forces the results for each drive onto a single line. If you don't do this and the path reported for the partition is longer than 20 characters, a line break is inserted and the positional parameters to awk will be incorrect.
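To see exactly what samba will receive from the script, you can feed its pipeline a canned df -P line; the device name and block counts below are made up purely for illustration:

```shell
# A df -P output line has six fields; awk picks out field 2
# (total 1K blocks) and field 4 (available 1K blocks)
sample='/dev/mapper/truecrypt1 10475520 8123456 2352064 78% /mnt/tc'
echo "$sample" | awk '{print $2,$4}'
# prints: 10475520 2352064
```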
You then need to make sure the definition in smb.conf for each affected share contains the following:
[Sharename]
   …
   path = /path/to/share                                # loc C
   dfree command = /path/to/script.sh /path/to/TC/mount # loc B
A quick side note: samba calls the script with the location it is trying to ascertain the size of as a first parameter. We've included a first parameter here, which simply pushes the samba-appended one into second position (which is then ignored). I have read that samba may call the script with the parameter "/", having chrooted to the share point before executing the script. I haven't investigated exactly what is happening in my test or production installations, but both work with the procedure I have outlined and this would not be the case if any chrooting were going on. I can only conclude that this is not the behaviour of current versions of samba (I'm using 3.4.7, courtesy of Ubuntu 10.04 LTS) or something else about my environments is altering that behaviour. I'd be interested to hear about different experiences.
Wednesday, 15 June 2011
Installing Linux VMware Tools on Ubuntu
Many of the steps here will be obvious to most users, but I've detailed everything so you can (if you wish) just copy and paste the lot (almost - see the notes) into shell scripts which will get the job done quicker. And those just starting out will also have a reference they can use.
- [Optional] Change the kernel. Even with the server install I did to write this article, the generic kernel was installed by default, even though a kernel optimised for server operations is available. Not only that, but there is a version of the server kernel trimmed down to contain only what is necessary on common virtualised platforms, including VMware
- Attach the Tools ISO to the VM. In vSphere Client, you can right-click the VM in the inventory and select Guest -> Install / Upgrade VMware Tools
- Install the tools, with the necessary packages (I'm assuming you are starting in your home folder or somewhere equally appropriate for putting the tools installation directory)
# Most commands need root access. You can use 'sudo'
# where necessary instead
sudo su
# Update apt package database (if you didn't earlier)
aptitude update
# Install packages necessary to build tools
aptitude install build-essential linux-headers-`uname -r`
# note backticks around uname command, not ordinary
# inverted commas
# No suitable mount point existed in my default install:
# create one
mkdir /media/dvd
# Mount tools image and extract tarball
mount /dev/dvd /media/dvd
tar -xzf /media/dvd/VMwareTools-*.tar.gz
# You can use auto-complete above: it's just one file
# Run install script
cd vmware-tools-distrib/
./vmware-install.pl -d
# -d auto-accepts all defaults
# Tidy up and exit root shell
cd ..
rm -rf vmware-tools-distrib/
umount /media/dvd # the script usually does this for you
exit
# Script for step 1 (the optional kernel change):
# Install latest kernel version
sudo aptitude update
sudo aptitude install linux-virtual
# Reboot, so the new kernel is running when the tools
# package is built and the correct headers will be
# selected in step 3
sudo shutdown -r now
- The kernel headers are installed by default on Ubuntu, so the linux-headers-* package is only necessary if the kernel has been changed since installation.
- The "uname" command in the install list ensures that the package for the running kernel is selected. If you've just installed a kernel using one of the metapackages listed above, it will be the latest one and headers can be installed simply with "linux-headers-virtual" (for example).
- To initialise the tools, the "/usr/bin/vmware-config-tools.pl" script needs to be run. If you used '-d' or allowed the install script to run it (it prompts for this in interactive mode), this will already have been done, but it can be useful to know about this separate step in case of problems.
- If you put the second set of commands into a script, you'll need to remove "sudo su" from the start and run the script as root. "su" opens a new shell and the commands from the rest of the script will not be passed into it if you run as-is.
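As a quick illustration of the backticks note above: the substitution expands to the running kernel's release string, so the headers package name always matches the kernel you are actually booted into (the version shown will differ on your system):

```shell
# $(...) is the modern equivalent of backticks; both substitute
# the command's output into the package name
pkg="linux-headers-$(uname -r)"
echo "$pkg"
```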
Wednesday, 27 April 2011
Temporary PATH Additions: Modifying the standard CMD Here Extension
Windows Registry Editor Version 5.00
[HKEY_CLASSES_ROOT\*\shell\pathhere]
@="Cmd with &Path here"
;"Extended"=""
[HKEY_CLASSES_ROOT\*\shell\pathhere\command]
@="cmd /k path %W;%%PATH%% && pushd %%USERPROFILE%%"
[HKEY_CLASSES_ROOT\Directory\shell\pathhere]
@="Cmd with &Path here"
;"Extended"=""
[HKEY_CLASSES_ROOT\Directory\shell\pathhere\command]
@="cmd /k path %L;%%PATH%% && pushd %%USERPROFILE%%"
[HKEY_CLASSES_ROOT\Directory\Background\shell\pathhere]
@="Cmd with &Path here"
;"Extended"=""
[HKEY_CLASSES_ROOT\Directory\Background\shell\pathhere\command]
@="cmd /k path %V;%%PATH%% && pushd %%USERPROFILE%%"
Tuesday, 15 March 2011
The Homeopathic Database
A few friends and I were discussing databases the other day. A colleague of one of us had tried to persuade him that a memory-based DB would be ideal for their project because of the increased commit speed compared to a disk-based system. Data would be eventually written to disk "at some point". My friend pointed out that /dev/null was even faster for writes and only moderately less useful if you need a cast-iron guarantee that all committed data will be available in the future.
If, instead of writing to /dev/null, you write to /dev/zero, it has much the same effect on your data, but reading from /dev/zero produces an infinite stream of zeros. Immediately, we realised this was the answer to every database user's dreams – dilute your data in an infinite sea of zeros: the Homeopathic Database.
Think about it. All those ones interspersed with zeros you started out with may seem important, but the advantages are worth considering. First of all, we know from the countless randomised, double-blind trials done on all homeopathic medicine* that it's a very effective idea. The fact that you only get zeros out at the end is not important because they have absorbed information from all the ones that have been diluted in them. As we know from homeopathic practice, the more zeros we have to dilute the ones in, the more effective the mixture, so the infinite number of zeros in /dev/zero means that what is stored in the database will be really good data.
Secondly, backups, something all DBAs worry about, are really easy because the data is particularly well suited to compression: although there's an infinite amount of data in /dev/zero, it's completely predictable and therefore infinitely compressible. Backups take no time at all.
The one thing you must remember to do is invert all your data before writing it to the database: the "law of similars" means that retrieved data will have the opposite effect in homeopathic concentrations as it did originally. And you may have to hit your server with a leather cushion while transactions are being committed.
Thanks to Mark, Steve and Alistair.
*They do do that, don't they? I mean surely no one would let people just sell any old rubbish without proper scientific investigation into whether or not it was better than placebo, would they? People who market it are able to make such grand claims for it, it seems certain they have data from repeatable, peer-reviewed trials or they wouldn't hold such strong beliefs.
Tuesday, 1 March 2011
.net Graphics in Windows Forms – Part 2: Anti-Aliasing Your Primitives
I promised this second instalment on Windows Forms graphics would be on rendering settings. Like the previous part of this series (on ControlStyles), this topic will not show any tricks or undocumented features, but I will discuss a few of the framework settings you can alter that change its behaviour and which I don't see discussed very often. The third instalment will discuss text rendering.
If you've never used any of the vector drawing primitives, you might like to try one or two out – just create an empty Forms project, override OnPaint() and you can draw (outlines) or fill (solid colour) several different shapes using methods in the Graphics object. The instance of Graphics you need is passed to OnPaint() in the PaintEventArgs object.
The basic things you need to know are:
- Coordinates start at the top left of the control's client area and increase to the right (x) and down (y)
- Angles are measured from the positive X axis (i.e. horizontally, to the right); angles increase clockwise and are given in degrees (many graphics systems and System.Math use radians!)
- Brushes and Pens (which you'll find you need to fill and draw shapes, respectively) should always be disposed of if you create them. I've never looked into the mechanisms at work here, but the essentials are that they wrap non-managed resources and we're led to believe that the garbage collector can't be relied upon to release these resources before the OS runs out of them.
- Manual disposal applies to several other objects associated with drawing so make sure you check. The golden rule is that if you make it, you break it. If you were passed it from elsewhere (e.g. the Graphics object in the PaintEventArgs) you should NOT call Dispose() on it yourself
- You can use ready-made brushes and pens, available in the Brushes and Pens static classes with a myriad of colours to choose from. As these are pre-existing objects don't call Dispose() on them yourself (you can't anyway – you'll get an exception)
- If you don't need to draw everything, don't! You are given ClipRectangle as part of the PaintEventArgs object and if you can get away without drawing outside this rectangle, that can speed things up
There are a wealth of drawing tutorials available, so I don't want to say anything more about the basics. Instead I want to look at a few of the properties that are available in the Graphics object and discuss what they do. The ones we are interested in are as follows:
- SmoothingMode
- PixelOffsetMode
- InterpolationMode
SmoothingMode tells the rendering engine to use anti-aliasing when drawing lines and shapes. As I'm sure many of you will know, (at least in this context) anti-aliasing reduces jagged edges between areas of different colours by inserting transitional pixels of an intermediate colour where the shape of the (idealised) edge would cover only a proportion of the entire pixel.
The enum which defines the possible values for SmoothingMode has six members. However, one (Invalid) can't be used and, of the other five, two are synonyms for "use anti-aliasing" (AntiAlias, HighQuality) and three for "don't use anti-aliasing" (Default, None, HighSpeed). Note that within each of these sets the results are identical: there is no difference at all between rendering done with AntiAlias and with HighQuality. Notice also where "Default" sits: by default, anti-aliasing is switched off.
Anti-aliasing is not always the answer. It's slower, can make shapes look blurred and it doesn't really do anything if all your lines are vertical and horizontal, but I would strongly suggest that you at least try switching it on and examining the results if you are drawing anything remotely complex in your controls.
PixelOffsetMode tells the rendering engine how to align pixels on the screen with the coordinate system used to define points on the drawing surface. Like SmoothingMode, several of the enum members are equivalent (Default, HighSpeed and None are the same and HighQuality and Half are also equivalent). "Invalid" is again present but unusable. This setting is also dependent on the use of SmoothingMode.AntiAlias (or one of its synonyms) – PixelOffsetMode makes no difference if anti-aliasing is not being used.
The difference between the two modes is that any integer coordinate is considered to be at the top left of the pixel (None) or at the centre of the pixel (Half). PixelOffsetMode.Half takes longer to process but theoretically offers higher-quality rendering, because it is easier to follow the idealised course of a line if it goes through the centres of pixels rather than butting up against their edges. However, you may find it makes lines look softer than you want, especially verticals and horizontals.
In my opinion, it makes only a marginal difference to your results and given the extra calculation work it generates, is definitely in the "try it out" rather than "switch it on regardless" category. If anyone knows of a class of problem in which it makes a noticeable (and useful) difference to the results, I'd love to hear about it.
The final setting I want to briefly discuss is InterpolationMode. This property allows you to tell the rendering engine how you want it to treat images if they are scaled or rotated, and it's only of use if you are using a bitmap in this way. I'm not going to go into the different modes available: it's enough to know that you don't have to stick with the defaults, and a proper treatment of each method can easily be found elsewhere. For an interesting comparison of quality and speed, Bertrand Le Roy benchmarked the different methods; his write-up includes timings and the rendered results.
For completeness, I'm also going to nod towards CompositingMode and CompositingQuality. These properties are worth looking into if you are creating images by adding partially transparent layers to your drawing surface, but are fairly self-explanatory and I'm not going to go into any more detail in this post.
Tuesday, 22 February 2011
.net WaitCursor: how hard can it be to show an hourglass?
// Method #1
Control.Cursor = Cursors.WaitCursor;
// Method #2
Cursor.Current = Cursors.WaitCursor;
// Method #3
Control.UseWaitCursor = true;
MyForm.UseWaitCursor = true;
CancelButton.UseWaitCursor = false;
CancelButton.Cursor = Cursors.Default;