Wednesday, May 2, 2012

Detecting non-ASCII characters in a git commit hook

If you don't want to allow non-ASCII characters in your code, which can sneak in when pasting text from Word, you can simply add a pre-commit hook to git that checks for them. Create a file called pre-commit in the .git/hooks folder of your code repo with the contents shown below, and make it user executable (chmod u+x .git/hooks/pre-commit). Git will then halt when you attempt to commit if there are non-ASCII characters in the commit (binary files are not looked at), display the character(s) found, and show the diff of the file that includes the character.

If you need to commit non-ASCII text that you know is safe, you can temporarily disable the hook by running "chmod u-x .git/hooks/pre-commit", make your commit, then run "chmod u+x .git/hooks/pre-commit" to re-enable it.
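The hook itself is just a shell script. A minimal sketch along these lines should behave as described, assuming GNU grep is available (its -P flag is used to match non-ASCII bytes); treat it as a starting point rather than a drop-in script:

#!/bin/sh
# Reject the commit if any staged text file contains non-ASCII characters.
# Assumes GNU grep (for the -P flag); binary files are skipped.
status=0
for file in $(git diff --cached --name-only --diff-filter=ACM); do
    # Skip files that git treats as binary in the staged diff.
    if git diff --cached --numstat -- "$file" | grep -q '^-'; then
        continue
    fi
    # Look for any byte outside the ASCII range in the staged contents.
    if git show ":$file" | LC_ALL=C grep -qP '[^\x00-\x7F]'; then
        echo "Non-ASCII character(s) found in $file:"
        git show ":$file" | LC_ALL=C grep -nP '[^\x00-\x7F]'
        git diff --cached -- "$file"
        status=1
    fi
done
exit $status

If your system's grep lacks -P (the stock macOS grep, for example), the same idea can be expressed with a POSIX character class or by installing GNU grep.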

Sunday, April 22, 2012

My Trips Facebook app will not work after June 1

Starting on June 1, 2012, the My Trips Facebook app will no longer be available. This is because Facebook will stop supporting a technology, FBML, that My Trips is built with. Because My Trips is just a fun little side project for me, done completely outside of my regular job, and the usage of My Trips is very low, I can't justify spending the time that it would take to redesign My Trips with a supported technology.

In a nutshell, FBML allowed me to create My Trips pretty quickly without having to specify font sizes, colors, etc. Things like the tabbed look of My Trips are possible with a very simple FBML command. When I started work on My Trips in 2009, Facebook was promoting FBML as one way to create Facebook apps. Had it not been for FBML, I probably would not have created My Trips. However, in 2010 Facebook started discouraging the use of FBML, I suspect mainly because it uses too many resources on their servers. I don't agree with Facebook's decision to completely abandon FBML; however, as a software developer I can understand why they would abandon it.

I'd like to thank everyone for using My Trips over the years. If you know of any Facebook apps that provide similar functionality, please post a comment here!

Thursday, March 29, 2012

jQuery .on performance

jQuery's .on() function is very useful.  It allows you to bind event listeners for elements that haven't yet been created.  On pages where you're dynamically adding elements, this can make the code much cleaner and more unobtrusive.  Rather than attaching the event handler to every newly created element one at a time, simply give all new elements a class, and call .on() once for that class name with the event handler function when the page first loads.
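For example (the element and class names here are made up for illustration), a single delegated handler covers every current and future element with the class:

// One delegated handler, bound once when the page loads:
$(document).on('click', '.list-item', function () {
  console.log('clicked item: ' + $(this).text());
});

// Elements added later need no extra wiring:
$('#container').append('<div class="list-item">Added dynamically</div>');

// The direct-binding alternative has to attach a handler to each new element:
// $('<div>Added dynamically</div>').click(handlerFn).appendTo('#container');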

.on() simply grabs the event when it happens at the higher level that you specify (usually document or a container div), checks if the element that caused the event matches any of the selectors for any added .on() calls, and if so calls your handler.

This functionality is also provided by .live(), but as of jQuery 1.7 that function is deprecated. Use .on() instead.

tl;dr

Use .on()! Using .on() to capture the event, instead of attaching a handler directly to each element, has virtually no performance impact when the event is triggered, even when there are a huge number of unique elements with their own .on() handler on the page. However, .on() does have a very noticeable performance advantage when generating/rendering elements. So any performance arguments against .on() are invalid.

Measuring Performance

Because of the way that it works, you may think that there is a performance hit to using .on() instead of attaching the handler to each element when it's created.  So I decided to do some extensive testing to see if this was the case.

I wrote a simple test page that dynamically generates lots of clickable elements.  See this page at http://coordinatecommons.com/jquery-on-test.html.

For each test case, there are two different measures of performance. The first is how long it takes to dynamically generate the elements. When using .on, this is mostly just the time to generate the DOM elements. When using .click to bind a listener to each element, it takes longer because of the added step of attaching the listener as each element is created.

The second measure is how long it takes for the callback to be called after clicking. Here, the time is measured from the parent container's mousedown event to the event handler being called. Because the starting point is mousedown, there is some variability from test to test based on how long it took me to release the mouse button. Any result can therefore vary by 100-150 ms, so the numbers should not be read any more precisely than roughly 150 ms intervals, and realistically you can probably subtract 80-100 ms on average from each of them to get the actual times.
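The measurement itself is straightforward; a rough sketch of the idea (element IDs and class names invented here) looks like this:

// Record the time when the mouse button goes down on the container...
var mouseDownTime;
$('#container').mousedown(function () {
  mouseDownTime = new Date().getTime();
});

// ...and report how long it took for the delegated click handler to run.
$(document).on('click', '.list-item', function () {
  console.log('handler ran ' + (new Date().getTime() - mouseDownTime) + ' ms after mousedown');
});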

Test Cases

  1. Generate 10,000 divs with the same class name, using .on - generate 10,000 of the same type of element that will all use the same event handler. Attach the same class name to all elements, one call to .on.
  2. Generate 10,000 divs with one of 100 different class names, click handler using .on - 100 different event handlers, 10,000 total elements. .on is called 100 times
  3. Generate 1,000 divs with unique classes, click handler using .on - 1,000 unique event handlers for 1,000 elements. .on is called 1,000 times
  4. Generate 10,000 divs with unique classes, click handler using .on - 10,000 unique event handlers for 10,000 elements. .on is called 10,000 times
  5. Generate 1,000 divs with unique IDs, click handler using .click - attach an event listener to each element with .click as the element is being added.
  6. Generate 10,000 divs with unique IDs, click handler using .click - same as above but with 10,000 elements.

Tests 1 and 6 are the ones that most directly compare attaching a handler to each element as it's added versus using .on.
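In code, the difference between those two cases boils down to something like this (the markup and names are invented for illustration; onItemClick stands in for the handler function):

// Test 1 style: generate the elements, then bind one delegated handler.
for (var i = 0; i < 10000; i++) {
  $('#container').append('<div class="item">Item ' + i + '</div>');
}
$(document).on('click', '.item', onItemClick);

// Test 6 style: attach a handler directly to each element as it is created.
for (var j = 0; j < 10000; j++) {
  $('<div id="item-' + j + '">Item ' + j + '</div>')
    .click(onItemClick)
    .appendTo('#container');
}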

Test Conditions

For Chrome, Firefox, and IE9, a desktop machine (quad core 3 GHz, 8 gigs of RAM) running Windows 7 Professional 64 bit was used. For IE6, 7, and 8, a Windows XP Virtualbox VM running on the desktop machine above was used.

Performance Results Table

All times are in milliseconds.

Test | Chrome 17 | Firefox 11 | IE9 | IE8 | IE7 | IE6
Test 1 RENDER - 10K same class/handler, .on | 912 | 271 | 3020 | 3142 | 3668 | 3877
Test 1 CLICK - 10K same class/handler, .on | 70 | 74 | 110 | 121 | 110 | 133
Test 2 RENDER - 10K, one of 100 classes, .on | 1081 | 344 | 3270 | 4857 | 5732 | 5965
Test 2 CLICK - 10K, one of 100 classes, .on | 94 | 114 | 111 | 131 | 137 | 95
Test 3 RENDER - 1,000 unique classes, .on | 328 | 164 | 832 | 1483 | 1385 | 1021
Test 3 CLICK - 1,000 unique classes, .on | 140 | 162 | 107 | 140 | 107 | 120
Test 4 RENDER - 10,000 unique classes, .on | 2772 | 1397 | 14050 | 15602 | 47609 | 29614
Test 4 CLICK - 10,000 unique classes, .on | 245 | 252 | 149 | 421 | 409 | 442
Test 5 RENDER - 1,000 unique IDs, .click | 281 | 175 | 898 | 1983 | 2133 | 2023
Test 5 CLICK - 1,000 unique IDs, .click | 106 | 112 | 100 | 103 | 100 | 90
Test 6 RENDER - 10,000 unique IDs, .click | 2826 | 1576 | 14618 | 50673 | 65835 | 66606
Test 6 CLICK - 10,000 unique IDs, .click | 80 | 113 | 106 | 94 | 100 | 130

Results


Using .on() to capture the event, instead of attaching a handler directly to each element, has virtually no performance impact when the event is triggered, even when there are a huge number of unique elements with their own .on() handler on the page. I expected at least some noticeable lag in the click times with 10,000 unique elements, but it was only barely noticeable, and only on IE8 and below. And that's with using .on() in a way that it shouldn't be used. Test 1 is the way .on() should be used, and it performs wonderfully: click times are essentially identical to test 6, where each element has a directly attached handler.

However, using .on() does have a very noticeable performance advantage when generating/rendering elements. This is obvious when comparing tests 1 and 6: for the same number of elements, rendering with .on is anywhere from roughly 3 times faster (Chrome) to 17-18 times faster (IE6-8) than attaching the handler to each rendered element!

So based on this my recommendation is to use .on() to attach event handlers any time there will be more than one element added with the same function used for the handler.

Other observations


Another thing I found interesting is that on nearly all tests, Firefox is the fastest; Chrome is definitely behind Firefox here. Also, seeing the numbers for IE8, it's a real shame that nearly 25% of the world is using this browser. Microsoft did very little to improve performance between 6 and 8, and the performance improvements in 9 are often very small. Microsoft, IE10 had better be blazingly fast! And please, work on getting Windows XP users to upgrade to IE10. Firefox and Chrome run perfectly well on Windows XP; your own browser should as well.

Sunday, January 22, 2012

Database migrations with deployed JRuby WAR files

In my previous post Compiling Rails project for distribution in a WAR file with JRuby, I explained how to build a WAR file from a Rails project to distribute onto systems that have a Java app server like Tomcat or Glassfish.  If you're running this in production, you're probably going to want to run database migrations after the WAR file is deployed.  Unfortunately this is not as straightforward as you might expect, but it's not too difficult.  To run database migrations, you must first create a file in your project, config/warbler.rb, with contents along the following lines:
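A minimal version could look like this (a sketch: the point is simply to make sure the migration script and the migrations themselves get packaged into the WAR along with Warbler's defaults):

Warbler::Config.new do |config|
  # Make sure script/ (for script/db_migrate) and db/ (for db/migrate)
  # are included in the WAR in addition to whatever Warbler packages by default.
  config.dirs += %w(script db)
  config.dirs.uniq!
end

The exact directory list you need may differ; what matters is that everything the migration script relies on ends up inside the WAR.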

Then, add a file named script/db_migrate along the lines of the following:
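A bare-bones sketch (assuming a Rails 3.x app, where ActiveRecord::Migrator.migrate is what rake db:migrate calls under the hood) might look like:

#!/usr/bin/env jruby
# Load the application's environment, then apply any pending migrations --
# essentially what "rake db:migrate" does, without needing the Rakefile.
ENV['RAILS_ENV'] ||= 'production'
require File.expand_path('../../config/environment', __FILE__)

# Run all pending migrations, or up to VERSION if it is set.
version = ENV['VERSION'] ? ENV['VERSION'].to_i : nil
ActiveRecord::Migrator.migrate(File.expand_path('../../db/migrate', __FILE__), version)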

Now, on the production system, after the WAR file has been deployed, from the root directory of your web app, run the command:

jruby -S ./script/db_migrate

If you're running in 1.9 mode, add --1.9 before the -S. This assumes that you have a jruby executable in your path somewhere on the server. There should be a way to run the JRuby that is bundled in the WAR file, but I have not spent enough time looking into it to figure out how. Has anyone had success with this?

Tuesday, January 17, 2012

Compiling Rails project for distribution in a WAR file with JRuby

I recently started using JRuby for a Rails project and overall the experience has been excellent.  Using RVM, you can just switch to jruby to build a new project (rvm use jruby), and just about everything will work the same.  One of the big features of JRuby is that you can bundle your entire app, including JRuby itself, into a WAR file that Java servers like Tomcat and Glassfish can serve up, so your app can be distributed onto servers that only have Java.

After you have JRuby installed, simply install the warbler gem.  You'll then get a command-line tool, warble, that generates WAR files from your project.  Simply run warble from your project's directory, and a WAR file will be produced.  It's as easy as that!
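In other words, something like this (the project name is just an example), assuming JRuby is your active Ruby:

gem install warbler
cd my_rails_app
warble            # produces my_rails_app.war in the project directory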

Another great feature is that the Ruby code can be compiled down to Java class files, so your source code is not visible.  This is great when distributing onto a server that other companies have access to, when you don't want them seeing your source code.  However, this is not fully working for me.  Warbler should support this: just run "warble compiled war" from the command line in your project's directory instead of "warble".  This will produce a WAR file with both .rb and .class files.  The .rb files, however, are simply stubs that require the corresponding .class files; none of your code is in them.  But, for me, it's not generating .class files for all of my controllers.  I've entered an issue for this at https://github.com/jruby/warbler/issues/72.


I will be making another post on how to do database migrations on the server you're deploying to.

Wednesday, January 4, 2012

Rails 3.1 "Could not find a JavaScript runtime" error

I just got my first Rails 3.1 on JRuby project created.  However, I keep getting the error "Could not find a JavaScript runtime ..." when trying to do pretty much anything with it.  After digging into this some, it turns out that with the addition of CoffeeScript in 3.1, Rails needs to be able to run JavaScript code natively.  A gem called execjs was added to Rails to allow this.  EXCEPT, execjs itself needs something else to actually evaluate the JavaScript.  See https://github.com/sstephenson/execjs for a list.  If you're running on Mac or Windows, you're good, there are system libraries to do it.  But on Linux, unless you have node.js installed, it won't work.

Let me repeat that.  Rails 3.1 apps by default will NOT work on Linux unless you have node.js installed.  This is absolutely ridiculous... guess what, Rails team: a lot of people are developing on Linux and not Mac.  I can understand releasing something that doesn't work on Windows, but Linux??

So to get things to work, you'll need to add one of the gems listed on the execjs page.  If you're running MRI Ruby, just add:

gem 'therubyracer'

to your Gemfile.  I've read a lot of complaints about therubyracer.  But it appears to be the most popular.  If you're running JRuby like me, add:

gem 'therubyrhino'

to your Gemfile.  I've verified that this works correctly.

FOLLOW UP 1/13/2012:
See my comments.  I have made a fix to Rails to put the appropriate gem in the Gemfile if you are on Linux, and issued pull requests to the Rails core to incorporate this code.  They have made some comments on it; hopefully it will be pulled into Rails soon.