Saturday 27 August 2011

Firefox "Caret Browsing"

A very short post ...

For a while my Firefox has been showing a blinking caret on all pages, even on pop-ups. It turns out I must have inadvertently hit 'F7' at some point, which turns on a traditional text-editor-style caret for text selection (i.e. one you can see).

There's nothing obvious in the options to disable this, but pressing 'F7' again sorts it out. I wouldn't have thought to search for 'caret' in the options(!)


Thursday 21 July 2011

Crontab Shells

Sometimes a Linux shell command that runs fine from the command prompt will fail when run from Cron. You might even have remembered to reference the script correctly, and the job is running under the appropriate user, yet Cron might still complain about the syntax of the command itself.

I've found that this is generally down to different shell interpreters being used. Although the documentation strongly implies that when a SHELL directive is not specified in a user-level Crontab, it will use the one specified in /etc/passwd, this doesn't seem to be the case.

I have a user with /bin/bash set up in /etc/passwd, but it's clear from the output mailed by Cron that the SHELL is /bin/sh.

So you might have to set SHELL= explicitly in your crontab.
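For illustration, a minimal crontab along those lines (edit with 'crontab -e'; the script path and schedule below are made up):

    SHELL=/bin/bash
    # m h dom mon dow   command
    30 2 * * *   /home/me/scripts/nightly_report.sh >> /home/me/logs/nightly_report.log 2>&1

With the SHELL line in place the command gets the bash syntax you tested at the prompt, rather than whatever /bin/sh points at.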

(I don't think it helps that Ubuntu releases jump between bash and dash, but that's probably another tale...)

Addendum
Another important note for crontab entries: cron treats '%' as a line terminator - an unescaped % becomes a newline, and everything after the first one is passed to the command as standard input. If your command is failing, and the email you get seems to show only part of the command line, you probably have a '%' in it. You need to either escape each % with a preceding '\' (e.g. "date +\%e") or pipe the command through sed.
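To illustrate (the paths are invented), the first entry below dies at the '%' while the second behaves:

    # Fails: everything from the first % onwards is treated as stdin, not command
    0 6 * * *   touch /tmp/report-$(date +%F).txt
    # Works: each % escaped with a backslash
    0 6 * * *   touch /tmp/report-$(date +\%F).txt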

Tuesday 28 June 2011

Monit for Monitoring - check your default page!

I have a few servers that I look after at work, and having spent too long staring at them day after day, night after night, just to make sure they are working, I decided I needed some better alerting and monitoring.

Being a Ruby fan I looked closely at both 'god' and 'blue-pill', but in the end decided against both: partly because of concerns raised about memory usage in Ruby-based packages (although most of those fears seem to have been allayed in recent versions), but mainly because you really don't want a big footprint in a monitoring product. You should not need to monitor a monitor.

So at the end of a little research I installed Monit on the servers. It has a nice little (secured) web interface that lets you both monitor and restart the monitored services - ideal if you can't SSH onto a server and just need to get things back up and running. The configuration file syntax is simple and there are examples for the common sorts of services you might want to monitor (MySQL, Postfix, Apache, ...). You can simply be alerted on certain conditions (storage, CPU, #children) or have corrective action taken (restarting a service, stopping it, and so on).

I've found at least one thing to be wary of, though. You can monitor a "web process" (e.g. Apache) by checking once per 'monitoring cycle' that a specific port responds to a specific protocol. This can go as far as checking the response from the protocol - e.g. testing that the page 'home.html' returns some regular expression. Quite nice, but rarely needed - Apache is rock solid in my experience, and simply knowing the process is active is good enough.

However, if you *do* choose to run a default 'protocol HTTP' test, then it will 'GET /' from your site. I was looking at my MySQL logs and noticed a connection and a couple of SELECTs every 5 minutes. Baffling, until I noticed they coincided with the Monit probes as seen in the Apache logs. The website in question sits at the base of the domain, and when sending the home page it also checks whether a cookie is present in the database; if not, it builds a default response from default settings held in the database - hence the calls to the database. I'm not happy about polling my database server every 5 minutes (see one of my previous posts!) so I've now reverted to a simple port 80 TCP probe, which just checks that the port is open rather than sending in an HTTP request. I feel this is good enough ... as I said, if Apache is there at all, then it's working.

So you might have a large home page with lots of dynamic content. Be aware that that page will be fetched by Monit whenever it checks your web server. Probably not an issue for most, but weigh the cost against the value.
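For what it's worth, here is roughly how the two styles of check look in the Monit control file (the pidfile and init script paths are examples, and the exact grammar shifts a little between Monit versions):

    check process apache with pidfile /var/run/apache2.pid
      start program = "/etc/init.d/apache2 start"
      stop program  = "/etc/init.d/apache2 stop"
      # the heavier check - issues a GET / against the site every cycle:
      #   if failed port 80 protocol http then restart
      # the lighter check I settled on - just confirms the port is open:
      if failed port 80 type tcp then restart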

When submit !== submit

One of those little 'gotchas' in HTML/JavaScript ... if you're happily attempting to submit a form using 'document.formname.submit()' tied to an event (such as an 'onChange' on a select box), then you must make sure that 'submit' actually refers to the form's submit() method and not to an element on your page.

By that I mean you might have a "normal" submit button that you've named 'submit'. That button then shadows the form's submit() method, so the call will "not be a function" and it won't go submitting the form when you invoke it.
Nice post on it: http://thedesignspace.net/MT2archives/000292.html

In the past I've always "blanket-covered" my submit button by using id='submit', name='submit', and type='submit'. Not so in future.
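A stripped-down illustration of the clash (the form and field names are invented):

    <form name="myform" action="/search">
      <select name="area" onchange="document.myform.submit()">
        <option>North</option>
        <option>South</option>
      </select>
      <!-- This button shadows the form's submit() method... -->
      <input type="submit" name="submit" value="Go">
    </form>
    <!-- ...so document.myform.submit is now the button element, the onChange
         throws "submit is not a function", and nothing gets submitted.
         Rename the button (e.g. name="go") and all is well again. -->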

Monday 23 May 2011

How I handle "MySQL Server has gone away"

The Problem

I use Ruby and the Sinatra framework for developing web-based applications - I prefer Sinatra's lightweight approach over Rails. For similar reasons I "roll my own" database classes rather than use something like ActiveRecord or DataMapper (the latter being the more tempting of the two for Ruby developers).

During development I've occasionally encountered the "MySQL Server has gone away" situation. It's a frequently raised issue, and various solutions exist: issuing periodic "false transactions" to keep the MySQL connection active, adding a 'before' filter to your Sinatra application to verify the connection status on each request, or using ActiveRecord (which appears to perform its own 'verify' prior to each call).
I'm against all of these solutions.

I don't like wasting resources, and these all sound a lot like that. Okay, while my server is idling and under no stress I have resources to burn, and that's typically the case. Certainly a stray database access costs next to nothing when your database and application are on the same server, but imagine when your app takes off (dream of Twitter here!) and you have a distributed, load-balanced farm of servers to work with. Okay - by then you won't be using the current solution, but to me these fixes just smack of quick patches applied without a lot of thought.

To put it another way, consider the 'verify before use' scenario. At 11 in the morning, when your application has been up for 3 hours and served 10,000 MySQL requests, you will have asked "are you there?" 10,000 times, to which MySQL will have responded "yup" every time. After about 9,999 queries MySQL ought to respond with "yes ... didn't you listen the last 9,999 times?!" and take its ball away. When we're working, we're working, and there's no need to check. It's only in those edge cases where we haven't been doing anything for a while that we need to ask before using.

Another solution is to increase MySQL's 'wait_timeout' parameter. By default it's 8 hours, but is increasing it a solution? What do you increase it to? 12 hours will probably get you from day to day, but not across a weekend. 48 hours? Okay - what about a long weekend? And do you want the connection to remain open when nobody is going to use it for several days? No. If you aren't going to be used, don't be there.

-----

Okay. Back to how the issue arises. You connect to MySQL and generally re-use that connection on subsequent calls, rather than open a new connection for each request (because, spread across a large user base making many calls, you would soon run out of connections ... your solution doesn't scale that way). Effectively you're using some form of pooled thread manager to handle connections between your application and the database. While both are being utilised, all is well. When your application goes quiet, though, MySQL silently closes the open connections after 'wait_timeout' seconds (8 hours by default). A disconnect then exists between what your application believes is the state of the connection and the actual state. So your application attempts a request on an inactive connection, and "MySQL Server has gone away".

Another solution is the inelegant 'try/catch' around every SQL call: catch the error, verify it's a connection issue, and re-connect. This is not resource-wasteful (on the MySQL side) but it leads to sloppy coding practice. I've seen entire applications where every method is enclosed within a try/catch pair. If that were really a 'solution' (to everything), the language would already support it out of the box rather than demand a ubiquitous trap around every call. Prolific use of such techniques points to an architectural issue beyond your database access mechanism alone. But enough of that ... (for the record, the pattern I mean is sketched below).
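A minimal sketch of it, assuming the mysql2 gem ('connection_options' is just a stand-in for whatever you normally connect with):

    require 'mysql2'

    # Catch-and-reconnect: wrap the query, quietly re-open the connection on failure.
    def query_with_retry(client, sql, connection_options)
      client.query(sql)
    rescue Mysql2::Error => e
      raise unless e.message =~ /gone away/i
      client = Mysql2::Client.new(connection_options)   # re-connect...
      client.query(sql)                                 # ...and retry once
      # (note the caller's own handle isn't refreshed here - yet another thing
      # this pattern leaves you to manage)
    end

Workable in isolation, but it ends up wrapped around every query in the application.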

What we really want is for the database connection state to be accurately reflected within the application (when it needs to know it). It would be useful if MySQL triggered an event when it closed a connection through timeout, rather than doing it silently. (The .NET connector does have such an event, but it is not for this situation - it reflects a deliberate application action, not one originating within MySQL.) MySQL doesn't appear to - or if it does, I could find no documentation of it, nor anything in any logs - so we'll have to monitor the situation ourselves and react accordingly. Once we've achieved that, we can ensure the application re-connects only when it needs to, yet still allow the connection to time out naturally based on its configuration. Moreover we no longer need to verify the connection status prior to each call.

-----

My Solution

To implement a simple 'synchronisation' mechanism between the application and its database connections, I have written a small server-side job which watches those connections:

It is a cron job which runs each day at 23:30 (on a work day the connections will typically still be open at this time). It then:
  • Interrogates MySQL (SHOW PROCESSLIST / SHOW GLOBAL VARIABLES) to see how 'old' the database connections are and when they are due to expire
  • Sleeps until that time (or 10 seconds after it, to be precise)
  • Re-awakens and checks the connections again
  • If nothing has happened in the interim, the connections will now be inactive - in which case the application is signalled to update its view of the connections to 'closed' (and hence the thread management will re-connect when next used)
  • If they are active (implying some database calls have been made), the new 'timeout' time is recalculated and the process sleeps again until then
This carries on until the new 'timeout' time is beyond the next day's process start time (in my case, 23:30 the next day), at which point the process quits. A rough sketch of the loop follows below.
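Here's roughly what that looks like in Ruby, using the mysql2 gem. The application's MySQL user name, the monitor credentials, and the 'signal the application' step (here just touching a flag file that the app checks before re-using a connection) are all stand-ins for whatever your own set-up uses:

    #!/usr/bin/env ruby
    # Overnight connection monitor - started by cron at 23:30.
    require 'mysql2'

    APP_USER = 'myapp'                                 # the application's MySQL user
    FLAG     = '/var/run/myapp/db_connections_stale'   # how the app gets 'signalled'
    NEXT_RUN = Time.now + (24 * 60 * 60)               # tomorrow's 23:30, near enough

    # The monitor user needs the PROCESS privilege to see other users' connections.
    client  = Mysql2::Client.new(:host => 'localhost', :username => 'monitor', :password => '...')
    timeout = client.query("SHOW GLOBAL VARIABLES LIKE 'wait_timeout'").first['Value'].to_i

    loop do
      # How long has the most recently used application connection been idle?
      idle = client.query('SHOW PROCESSLIST')
                   .select { |row| row['User'] == APP_USER }
                   .map    { |row| row['Time'].to_i }
                   .min
      break if idle.nil?                               # nothing open - nothing to do

      expires_at = Time.now + (timeout - idle) + 10    # 10 seconds' grace
      break if expires_at > NEXT_RUN                   # tomorrow's run can take over

      sleep([expires_at - Time.now, 0].max)

      # If nothing happened in the interim the connections are dead by now,
      # so tell the application its view of them is 'closed'.
      if client.query('SHOW PROCESSLIST').none? { |row| row['User'] == APP_USER }
        File.open(FLAG, 'w') { |f| f.puts Time.now }   # app re-connects on next use
        break
      end
      # Otherwise work carried on: loop round, recalculate, and sleep again.
    end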

On a typical working day the process runs at 23:30, immediately sleeps until 03:00, wakes and tells the application that the connections are closed, and then exits. The application re-connects (around 07:00) and the day progresses as normal. All I have is a sleeping process which exists on the server for a few hours overnight.

At the weekends, the process runs at 23:30 then does nothing as the connections have been 'closed' the previous day.

When work continues overnight, the process might wake around 03:00, then again at 10:00 and 18:00. Each time it wakes briefly, tests the connections, and sleeps again.

-----

This solution keeps the application clean - it is concerned only with using connections whose state it knows, and it does not add unnecessary verifications to connections that are in use and active. The database connections can quiesce based on their timeout values and are not left open for longer than they need be. The server has an extra process running, but not all of the time, and it sleeps for the majority of that time.

I need to improve the process to create 'monitor threads' on a per-database level (I have isolated databases on a per-system basis; at present they are all accessed in step, so they all time out in harmony, but that won't necessarily remain the case).

"My SQL Server has gone away" still happens - I just know when it does now.

Tuesday 19 April 2011

Adding Ruby fast-debugger to Windows

I've recently started to use NetBeans at work for my Ruby development. (I fancy having a go with RubyMine, but NetBeans will do for now; RubyMine looks very similar but is more Rubyesque, of course, and permits easier remote debugging.)

One issue with the Windows installation is that the ruby-debug-ide gem will not install; it's unable to build its native extensions. The fix I found is in the following comment (not in the main article):

http://www.definenull.com/content/netbeans-6-ruby-fast-debugger#comment-251

If that comment goes, here are the instructions:

RubyInstaller DevKit fixes this

If you install Ruby using RubyInstaller (http://rubyinstaller.org/download.html), and next install RubyInstaller Development Kit (http://wiki.github.com/oneclick/rubyinstaller/development-kit) on top of that, then the NetBeans "Install Fast Debugger" feature works without problems.


The RubyInstaller Development Kit provides some standalone elements of MinGW for compiling Ruby native C extensions.


Joe Howse

So basically, install the DevKit and then the gem installation from inside NetBeans works fine. Thanks to "Joe Howse"!
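In case it helps, the DevKit steps boil down to something like this from a command prompt (C:\DevKit is just where I chose to extract the self-extracting archive; pick the DevKit build that matches your Ruby):

    cd C:\DevKit
    ruby dk.rb init
    ruby dk.rb install
    REM after which the gem builds - from NetBeans' "Install Fast Debugger" or by hand:
    gem install ruby-debug-ide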


UPDATE : Ruby 1.9.3


For details of the installation alongside Ruby 1.9.x (there are xxxxx19 versions of most of the Gems involved as the 1.8.x versions don't work) see here.

Although I still couldn't get it to work inside NetBeans - 'debugger' might be the option.

Also, do install the correct DevKit version! Stack-Overflow.

Sunday 20 February 2011

Editing XLS files with embedded Web Queries...

...is very tough! It's as though everything goes wrong as soon as you go near them. It's not helped by the fact that you can define the query in an external IQY file, but if that file doesn't exist (the spreadsheet was built by user A, but is passed to user B), Excel falls back to using the definition embedded within the spreadsheet itself. So there doesn't have to be any connection between the values shown on the screen and what is in the file (and that sneaky option which says "keep in step with file" always seems to be greyed out!)

What's worse is that when you look at a query, or change the query definition, Excel silently resets the parameters to some defaults, losing whatever you had. This can screw up formatting and make the standard "prompt for values" pop-up appear at execution time - no good if your spreadsheet is being run by the (useless) Task Scheduler at 3 in the morning.

Basically, leave well alone.

Which is a problem when you're trying to change a spreadsheet that uses them - one that refreshes its data on the Load event and just doesn't work if that refresh doesn't happen.

My latest cunning solution (attempt) is to edit the file outside of Excel. As you know, the new 'x' formats (.xlsx/m and .docx/m) are really zip archives with the data described in XML files within that archive. The data connections are in a connections.xml file in the 'xl' folder. Looking at that, it's fairly easy to make the necessary changes without Excel getting a chance to 'help' you by auto-updating lots of other things.
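If you have command-line zip/unzip to hand (7-Zip or similar will do the same job through its GUI), the round trip looks roughly like this - 'report.xlsm' is a made-up filename:

    unzip report.xlsm xl/connections.xml    # extracts to ./xl/connections.xml
    vi xl/connections.xml                   # fix the connection / query definition
    zip report.xlsm xl/connections.xml      # writes the edited copy back into the archive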

I'm hopeful that there's no checksum in the file contents that verifies whether the archive has been tampered with. I suspect not; these are standard zip archives, I believe.

Thursday 27 January 2011

Curvy Corners in IE

I have been fighting with the lack of CSS3 compatibility in IE8 ... and I'm not sure what will be there in IE9. I'd just like 'pure' curved DIV corners rather than faking them with background images.

Opera, Firefox, and the delightful Epiphany all do them natively, but IE, grrrrrr ...

Several options tried, and my results are:
  • curvycorners (js) : initially promising, but the redraw on events is costly and occasional errors pop up
  • border-radius : uses .htc/VML but only handles one configuration for each corner, and seems not to handle re-draws properly
  • DD-roundies : a similar but more complete version of the above [not tried]
  • PIE/CSS3 : again an .htc/VML solution, and it appears to work beautifully, adding most of CSS3's commonly used features to IE
So, my conclusion is to go for PIE. It lets the other browsers get on with it, whilst supporting CSS3 natively from within your CSS - so less intrusive than most other options. I now have Firefox and IE displaying almost identically without any fudged HTML or CSS. Yay.
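Usage is pleasingly minimal - you write the standard CSS3 and just point IE at the behaviour file (the class name and the path to PIE.htc below are obviously site-specific):

    .rounded-box {
      background: #fff;
      border: 1px solid #999;
      border-radius: 8px;            /* the real CSS3 - Firefox, Opera, etc. */
      -moz-border-radius: 8px;       /* older Gecko prefix, for good measure */
      behavior: url(/css/PIE.htc);   /* picked up by IE only */
    }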

Final tip for XP Firefox users - switch on ClearType if you have an LCD display. Enables the sub-pixel smoothing across Windows that IE uses. Be rid of those spindly letters!

Thursday 13 January 2011

Why Cloud Computing isn't

There's been a lot of talk over the past couple of years regarding 'The Cloud' - people advocating the use of it either within the realms of business or personal computing, as a means of reducing costs and enabling anyone to have their own virtual data centre.

However, that utopia isn't really the case, is it? Cloud computing by that standard is still some way off, and might never really arrive. The idea of The Cloud marketed to us is one of your computer, your data, being somehow "out there" - floating, scalable and configurable at a whim. The reality is that the majority of 'Cloud' suppliers are the same people who have been selling server space and virtual servers for the past 5 years. All we're doing is buying a re-branded offering from them. There's no amorphous 'cloud'; you buy cloud space and you can go and look at the servers you are running on - touch them, feel them, and - most importantly - have them turned off. Have your cloud dissolved. Just ask Wikileaks ... how did their Amazon cloud work out for them? It turns out that rather than being "out there" it was on a collection of Amazon-owned servers. Not a worldwide entity, but very firmly based on US soil, and able to be switched off.

In a true 'Cloud' the data and processors would be truly distributed, and replicated across multiple nodes not located in any single country. It would be no more in one place than in any other. A bit like an internet on top of our current one. And if you tried to switch off a server here another would take its place there. Imagine - not just the internet everywhere, but all the web pages everywhere too. A web of collected nodes all providing data and processing services, owned and controlled by no-one. TBL's vision, just extended beyond the communications level.

Real Cloud Computing.