Google Analytics Tagging with HTML5 data-* Attributes

Imagine, if you will, a page with a significant number of dynamic elements. For instance, a slide-out panel containing a number of ‘panes’ with topical information on various ‘factors’. Let’s assume that once this slide-out is open, we want the user to be able to jump from factor to factor (similar to a slideshow) using buttons along the bottom of each factor. Let’s go even deeper and say that within each factor we have a collection of close-up images. Again, the user should be able to navigate the close-ups via next/previous buttons, slideshow-style. In addition to the next/previous buttons, along the top edge of the ‘close-ups’ section are progress indicator dots that highlight according to which close-up is active (and link directly to individual close-ups).

Given that this functionality should work without JavaScript, all of these UI controls are marked up as links. And we have quite a few of them: x*(x-1) factor links (for x factors), plus x*y direct links to close-ups (y per factor), plus x*(2y) next/prev links (two per close-up). So, let’s say we have 3 factors and 3 close-ups per factor. We now have 33 links! The task is to tag each of these links with a unique label that will be sent back to Google Analytics on each click. I’ve broken down a brief subset of the analytics tags for each of the three types of links below.

Tag Templates

Tagging template for the factor navigation along the bottom of each [x] factor:

/page_name/factor_[x]/factor_1_icon
/page_name/factor_[x]/factor_2_icon
/page_name/factor_[x]/factor_3_icon

Tagging template for the prev/next buttons on each [y] close-up on each [x] factor:

/page_name/factor_[x]/closeup_[y]/next_arrow
/page_name/factor_[x]/closeup_[y]/prev_arrow

Tagging template for the close-up direct links (also used as progress indicators) on each [x] factor:

/page_name/factor_[x]/closeup_dot_1
/page_name/factor_[x]/closeup_dot_2
/page_name/factor_[x]/closeup_dot_3

Approach

As with anything, I try to keep my code DRY. Considering that these tags will likely end up as magic strings in some form or another, I’d like to reduce the maintenance overhead of these tags as new factors or close-ups are added or removed. The tags themselves convey the hierarchy of the structure in which they are contained. So let’s map the tagging hierarchy onto the structural hierarchy. This tagging information could be embedded in the id or class attributes of an element, though I think that would be coupling two separate needs (JS behavior/CSS styling + Analytics) onto the same data. This information could also conceivably go into the title attribute, though this attribute is meant for human (read: end-user) consumption. Nothing seems to fit, so let’s try out an HTML5 data-* attribute: data-ga.

First, the /page_name segment should map to the page:

<body data-ga="/page_name">

Each ‘factor_[x]’ segment should map to its own factor pane:

<ul class="factors">
  <li data-ga="/factor_1">…</li>
  <li data-ga="/factor_2">…</li>
  <li data-ga="/factor_3">…</li>
</ul>

Each ‘closeup_[y]’ segment should map to its own close-up pane:

<ul class="closeups">
  <li data-ga="/closeup_1">…</li>
  <li data-ga="/closeup_2">…</li>
  <li data-ga="/closeup_3">…</li>
</ul>


And each link or button gets its own respective value. Keep in mind, multiple data-ga values will be ‘scoped’ by their ancestors’ data-ga values:

<a href="#factor-1" data-ga="/factor_1_icon">…</a>
<a href="#factor-1-closeup-1" data-ga="/closeup_dot_1">…</a>
<a href="#factor-1-closeup-2" data-ga="/next_arrow">…</a>
<a href="#factor-1-closeup-3" data-ga="/prev_arrow">…</a>

Now, whenever a link is clicked, we simply concatenate the data-ga values from each ancestor! The handler below runs with this bound to an anchor element carrying a data-ga attribute; generally, it would be attached as the click handler for any element matching the selector "a[data-ga]". Once ga is concatenated, it can be used as the tag for a Google Analytics API call (_trackPageview or _trackEvent).

$("a[data-ga]").click(function(event){
  var ga = $(this).parents('[data-ga]').andSelf()
           .map(function(){return $(this).attr('data-ga');});
  ga = $.makeArray(ga).join('');
  _pageTracker._trackPageview(ga);
});

Demo

//jsfiddle.net/eY4cq/1/embed/

A few additional notes

Performance

It would be much better to calculate the GA tag for every link on the page during page load and cache the result in the element’s data store. As written, the ancestor traversal is executed on every click and returns the same value each time. However, as a special case when I was first implementing this, there were some widgets that altered the hierarchy of the page, so it was necessary to perform the concatenation at event time rather than at load time.

As an additional performance boost, I added a class of ga-scope to each ancestor element that contained a data-ga attribute. This allowed me to use a class selector in the .parents() filter. This will only yield a performance boost in browsers that support a native implementation for querying by class name, thus allowing the selector engine (MooTools or jQuery) to avoid stopping to inspect every single ancestor on its way up the tree.
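
For pages whose hierarchy doesn’t change after load, a minimal sketch of that load-time caching might look like the following (assuming jQuery and the ga-scope class described above; the ga-tag data key is my own name, not from the original code):

$(function(){
  // compute each link's full GA tag once, at load time,
  // and cache it in the element's data store
  $('a[data-ga]').each(function(){
    var ga = $(this).parents('.ga-scope').andSelf()
             .map(function(){ return $(this).attr('data-ga'); });
    $(this).data('ga-tag', $.makeArray(ga).join(''));
  }).click(function(){
    // at click time, just read the cached tag
    _pageTracker._trackPageview($(this).data('ga-tag'));
  });
});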

MooTools

I originally implemented this with MooTools. During the implementation I discovered a bug in the MooTools selector parser where the attribute selector doesn’t properly find attributes with hyphens. So, I created a custom pseudo-selector as a workaround:

// custom pseudo-selector: matches elements that have a data-ga attribute
Selectors.Pseudo.data_ga = function(){
  return Boolean($(this).get('data-ga'));
};
$$('a:data_ga').addEvent('click', function(event){
  // getParents() returns closest-first, so reverse to get outermost-first
  var ga_code = this.getParents('.ga-scope')
                .get('data-ga').reverse().join('')
                + this.get('data-ga');
  // send the concatenated tag to GA, as in the jQuery version above
  _pageTracker._trackPageview(ga_code);
});

OpenID: Redirects and Delegation

The Introduction

I’m a big fan of OpenID. I like the fact that my online (public) identity is associated with a URL that I own. This affords quite a few benefits, such as associating my public profiles at various networks with one another. Even better, OpenID supports the delegation of identifiers to OpenID Providers. This allows the owner of a domain to use the domain as an OpenID without operating his own OpenID server. He simply delegates the provider responsibilities to an existing provider by adding some HTML link references to the head of the page served at his OpenID URL. But before I get too far ahead of myself, a bit of history.

The History

The Global Name Registry was delegated the .name top-level domain by ICANN in 2001. [Wikipedia] The intention was to set aside a specific top-level domain for individuals to register as their own. These domains may be registered at the second level (john.name) or the third level (john.doe.name). Generally, the second-level domains are shared among the registrants of the third level. (Aside: I find it rather surprising that the assortment of ancestry sites doesn’t take advantage of this for linking family trees.) In 2007 the GNR spun off a small start-up in partnership with JanRain‘s OpenID provider, myOpenID. This partnership created FreeYourID.com. The goal of FreeYourID was to make it dead simple for users to both register their own .name domain and use it as an OpenID. FreeYourID provided several great features. The domain registration was transparent to the end user, making it very friendly to non-techies. A .name email address was created (john@doe.name). FreeYourID provided a few URL forwarding options for those with existing domains. They had a decent default landing page that aggregated various social network profiles (YouTube, Flickr, blogs). They even supported microformats with XFN! And via the partnership with myOpenID, the .name domain was automatically set up as an OpenID.

The Situation

So in 2007 I registered jason.karns.name through FreeYourID.com. A myOpenID account was created behind the scenes to handle the OpenID login. As an end user I would attempt an OpenID log-in at a relying party, which would fetch jason.karns.name and encounter the OpenID delegation snippets. This would forward over to myOpenID (tied to the shim account), where I would authenticate and be redirected back to the original service, having been authenticated as jason.karns.name. I used this service for 2 years as OpenID was beginning to gather steam. I used jason.karns.name as my primary web address (which was forwarded to other sites). I collected a few social network links on FreeYourID’s social network page (which provided a great resource for XFN crawlers). But most importantly, I used my OpenID as my primary (in some cases, only) authentication method at quite a few online services.

The Problem

At the beginning of 2009, the .name domain was transferred to VeriSign. This spelled the beginning of the end for FreeYourID. Toward the end of the year, FreeYourID announced it was shutting down and would be transferring all services over to DomainDiscount24. Prior to the transfer, I purchased jasonkarns.com. Once my .name was transferred, I setup an HTTP 301 redirect from jason.karns.name to jasonkarns.com. The landing page of jasonkarns.com contained the same OpenID delegation snippets as jason.karns.name so I assumed everything would continue to work. I was wrong. This setup prevented me from logging in at every service that used OpenID.

The Discovery

Through some digging and a bit of guidance from this post (thanks Will), I discovered that an OpenID relying party must follow all redirects, and the final destination URL is used as the OpenID identifier rather than the original URL. So in my case, I would attempt to log in with jason.karns.name, which redirected to jasonkarns.com, which was then delegated to myOpenID. I would authenticate normally at myOpenID because the delegation snippets specify which account to use at the provider. However, when redirected back to the relying party, my authentication token reported my ID as jasonkarns.com. As there was no existing account registered for jasonkarns.com, most relying parties would initiate their ‘new user’ flow. Others just errored out.

The Fix

So now I realize what the root problem is, but I’m not sure how to fix it. I definitely have to get my OpenID working again so I must serve the OpenID delegation code from jason.karns.name directly. However, I also want to continue using jasonkarns.com as the primary online destination for people looking for me.

  1. Configure DomainDiscount24 to stop the HTTP 301 redirect to jasonkarns.com
  2. Create a landing page for jason.karns.name which contains the OpenID delegation code
  3. Use the old-school meta-refresh to handle the redirect from jason.karns.name to jasonkarns.com. OpenID Relying Parties won’t follow the meta-refresh because they are only interested in the delegation code.
  4. Set up a frameset to load jasonkarns.com from within jason.karns.name. This is only for user-agents that don’t or won’t follow the meta-refresh. This way end users still end up with the same content.

The Result


<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Frameset//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-frameset.dtd">
<html xmlns='http://www.w3.org/1999/xhtml' xml:lang='en' lang='en'>
 <head>
 <title>jason.karns.name</title>
 <meta http-equiv='Content-type' content='text/html; charset=utf-8' />
 <!-- OpenID server and delegate -->
 <link rel="openid.server openid2.provider" href="http://www.myopenid.com/server" />
 <link rel="openid.delegate openid2.local_id" href="http://jason.karns.name" />
 <meta http-equiv="X-XRDS-Location" content="http://www.myopenid.com/xrds?username=jason.karns.name" />
 <!--Redirect has been requested -->
 <meta http-equiv='refresh' content='0;url=http://jasonkarns.com' />
 </head>
 <!-- Frame Redirection for human content readers -->
 <frameset rows='100%,*' style='border:0;'>
 <frame src='http://jasonkarns.com' frameborder='0' />
 <frame frameborder='0' noresize='noresize' />
 </frameset>
</html>

The Future

While this isn’t the optimal solution, it works for now. I rather like the idea of potentially having 2 separate OpenIDs (though they are both delegated to the same myOpenID account). However, I don’t like the meta-refresh redirect. At one point there was a discussion for the OpenID spec around giving 303 (See Other) redirects special behavior, as opposed to 301 (Moved Permanently), 302 (Found), and 307 (Temporary Redirect). At this point, though, I don’t hold much hope. My only other course of action is to follow Will Norris’ method and use server-side HTTP request sniffing and respond accordingly (OpenID delegation for suspected relying parties, a 301 redirect for everyone else).
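
For what it’s worth, here is a rough sketch of that request-sniffing idea. It is my own illustration (not Will’s actual implementation), and it assumes a small Node.js front for jason.karns.name: Yadis-aware OpenID consumers advertise application/xrds+xml in their Accept header, so they get the delegation markup while everyone else gets the 301.

var http = require('http');

// delegation markup lifted from the page above
var delegationPage = '<html><head>' +
  '<link rel="openid.server openid2.provider" href="http://www.myopenid.com/server"/>' +
  '<link rel="openid.delegate openid2.local_id" href="http://jason.karns.name"/>' +
  '</head><body></body></html>';

http.createServer(function (req, res) {
  var accept = req.headers.accept || '';
  if (accept.indexOf('application/xrds+xml') !== -1) {
    // suspected OpenID relying party: serve the delegation snippets
    res.writeHead(200, {'Content-Type': 'text/html'});
    res.end(delegationPage);
  } else {
    // everyone else: permanent redirect to the real site
    res.writeHead(301, {'Location': 'http://jasonkarns.com' + req.url});
    res.end();
  }
}).listen(80);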

JavaScript Best Practices

@ikeif recently tweeted a request for some JavaScript best practices. Rather than simply reply to him, I thought I’d post them here and beat him to the blog-post-punch. I’m not going to expand much on any of these, although any discussion that arises will likely spawn its own post. These are in no particular order and are really nothing more than a brain-dump. I’ve numbered them for easy reference in the comments.

  1. Avoid global variables. When you must use a ‘global’, use your own namespace.
  2. Avoid cluttering the global namespace with functions. Assign your ‘global’ functions to a single namespace (see above; a quick sketch follows this list).
  3. Discover the JavaScript framework/library that speaks to you and stick with it. Don’t load jQuery and Prototype on the same project. Yes, I know you can run many libraries in noConflict mode now, but think of the additional overhead you are placing on your users. All for some snazzy plugin? Port it!
  4. Use JSLint. I assume you validate your HTML? Use JSLint to validate your JavaScript. If your code is JSLint-safe, you can avoid a few browser idiosyncrasies (hasOwnProperty() anyone?). As a bonus, JSLint-safe code is also JSMin-safe, so you can minify your scripts without worrying whether the minification will affect functionality.
  5. A side effect of using the JSLint validator in Aptana, my IDE of choice for front-end development, is my use of JSLint’s special `/*global */` comment. JSLint will flag any global variables unless they are explicitly listed as dependencies in this comment. This means at the top of all of my scripts, one can easily spot any required dependencies (specific MooTools modules for instance).
  6. Use feature detection not browser sniffing.
  7. Write unobtrusive scripts instead of inline event handlers.
  8. Keep your styles in your CSS! Although most libraries make it easy to manipulate element styles, it’s much better to keep your styling where it belongs: in your CSS. Mixing the two violates separation of concerns and makes your code less maintainable. Instead, add and remove classes as necessary in your scripts.
  9. Make sure your UI elements support proper interaction when JS is disabled. If a link opens a lightbox, set the href to point to the lightbox content so no-JS users can still access the content. Links with `href="#"` kill kittens.
  10. Any UI elements that *only* support JavaScript interaction (and think carefully about this) should be created by JavaScript. Don’t litter your HTML with dummy elements that are only there for JS events. Your script should create and inject them.
  11. If at all possible, don’t modify libraries or plugins directly. This makes future upgrades a nightmare. It is much better to extend the plugin/library without modifying the original.
  12. Your .js files should be served with HTTP `Content-Type: application/javascript`. BUT the `type` attribute in your HTML `script` element must be `text/javascript` or else Internet Explorer will crap itself.
  13. `language="JavaScript"` was deprecated, like, a zillion years ago. Stop using it.
  14. Don’t pre-optimize your code with fancy looping structures. It is much better to have readable code. Once your app is running, you can go back and profile it to eliminate bottlenecks. There is no point in pre-optimizing your scripts when your background image costs 10x more to load.
  15. Don’t return false from an event listener when all you really want is event.preventDefault(). Maybe someone else wants to listen for that click event, too, mmmkay?
  16. Stop using `document.write`.
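
A quick sketch tying a few of these together (items 1, 2, 8, 9, and 15). The names MYAPP and gallery, the lightbox selector, and the jQuery dependency are purely illustrative:

// one app-specific namespace instead of a pile of globals
var MYAPP = MYAPP || {};

MYAPP.gallery = {
  init: function () {
    $('a.lightbox').click(function (event) {
      event.preventDefault();              // not `return false` (item 15)
      $(this).addClass('active');          // toggle classes, not inline styles (item 8)
      MYAPP.gallery.open(this.href);       // href still points at real content (item 9)
    });
  },
  open: function (url) {
    // load the lightbox content from url...
  }
};

$(document).ready(MYAPP.gallery.init);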

Okay, that’s my list. I may add more later. Disagree with any of these? What best practices or anti-patterns do you have?

CSS Reset

Why start with a blank slate? After years of web development and hundreds of sites, starting from scratch on each project really turns into a buzz kill. Nobody wants to spend time rehashing the same issues from site to site. So many of us have turned to CSS Resets. As we all know, CSS Resets are designed to fix cross-browser inconsistencies by rebasing all or most default styles to a common state. I’ve always had a problem with these resets. Many of the styles in these resets are never used (how often do you use q, ins, del, and table anymore, really?). Other styles are completely overridden. I would wager that by the end of a long project, one could probably remove the CSS Reset without affecting the design (save maybe the margin/padding rules). Jonathan Snook feels the same way. For these reasons, I’ve generally used the universal margin/padding reset:

* {margin:0; padding:0;}

There is quite a lot of contention around the subject, both for and against, as well as from the reasoned centrists.

So, rather than continue to rail against their futility, performance penalty, or outright boorishness, I thought I’d actually use a CSS reset a few times and report my findings.

Decision Time!

CSS Reset by Eric Meyer or YUI Reset? Well, after watching this video (you should, too), my decision was firmly in the Meyer camp.

First Reactions

Who uses Firebug? Okay, sorry, who doesn’t use Firebug? If you don’t, you should, and if you do, you likely won’t like Meyer’s reset without it first being modified. Ladies and gentlemen of the jury, Exhibit A:

[Screenshot: Firebug’s CSS pane cluttered by the reset’s font rules repeated on nearly every element]

Due to the first rule in the reset, the font-size property is applied to (nearly) every element. However, font-size is also an inherited property, which means nearly every element inherits its value from its parent while simultaneously having it reset by the very rule it inherited from! The first rule of nearly every stylesheet of mine usually includes a set of font properties (font-family, font-size, and line-height). With these properties already being set, there is no reason to have them in my CSS Reset, so let’s remove the offending rule and relieve some of the Firebug pressure.

[Screenshot: Firebug’s CSS pane after removing the font rule]

Whew, that’s better.

Don’t Lose Your Focus!

The most offending rule in Eric’s reset is his outline rule:

:focus { outline: none; }

Sure, he adds a comment to remind users to be sure to specify proper outlines for keyboard users. But you and I can both count on one hand the number of times a proper outline is reinstated for the :focus pseudo-class. Besides, I subscribe to the belief that frameworks and tools should make it easy to fall into the pit of success rather than making it harder to do things the right way. Luckily, Patrick H. Lauke has outlined (sorry, I couldn’t help it) a method to remove the outline during its less-useful moments, while retaining the outline as necessary for keyboard navigation. In brief, simply:

a:hover, a:active { outline: none; }

This will hide the ugly outline during the click action on a link, as well as while the destination page loads (so long as the user doesn’t move their mouse off the link). I think this fits nicely in the 80/20 category.

And Now?

So where does that leave us? I’m not sure. I’m still not entirely convinced of the utility of a CSS Reset. However, I believe my two minor modifications do bring Meyer’s Reset a bit further into the ‘useful’ category without being a pain or downright harmful. My version of the reset is hosted at GitHub, so if you don’t like it, go fork it!

http://github.com/jasonkarns/css-reset

Roy.G.Biv alpha

I was recently on a project that called for a translucent background color over an image, similar to this:

[Image: a block of text with a translucent background color over a photo. Photo: http://www.flickr.com/photos/yeowatzup/ / CC BY 2.0]

Of course, being the conscientious web developers we are, we want to be as semantic as possible with our markup. This means that text should be marked up as text and not flattened into the image, forever to remain hidden from the world of web spiders, search engines, assistive technologies, and mash-up artists. We give the text a background color to keep it readable over the background image, but we still want the background image to be slightly visible through the text area. Before RGBa, we would resort to a 1 x 1px translucent PNG, but this adds overhead: an extra HTTP request, maintenance of the image should the color change, and a PNG fix for IE6. Another option would be the CSS opacity property. Unfortunately, the opacity property applies to an element and all of its descendants. This means the text itself would become translucent as well, something we would like to avoid if possible. So, let’s use some RGBa!

First, add the standard RGB background color so the text block will still be legible in browsers that don’t support RGBa:

  div {
    background: rgb(100, 100, 183);
  }

Now we can enhance this for conforming browsers:

  div {
    background: rgba(100, 100, 183, .75);
  }

We now have support in Firefox 3+ and WebKit (Safari 3+, Chrome 1+). What about that other browser? To add support for IE6 and IE7, we need to use IE’s proprietary filter property. As this is a proprietary property, it should be included via an IE-only stylesheet referenced using conditional comments.

  div {
    background:transparent;
    filter:progid:DXImageTransform.Microsoft.gradient(startColorStr=#BF6464B7,endColorStr=#BF6464B7);
    zoom: 1;
  }

A bit of an explanation is in order. First we set the background to transparent, which overrides the solid-color rgb declaration. Next we apply IE’s proprietary filter. Notice we set startColorStr and endColorStr to the same value. These values are not your standard hex colors: instead of #RRGGBB, the format is #AARRGGBB, where the first two digits are the alpha transparency. Converting our 75% to hex gives .75 * 255 = 191.25, which rounds to 191, or 0xBF. Lastly, we apply the zoom property to trigger hasLayout on the element, which is required for the filter to take effect.
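
If you’d rather not do that conversion by hand, a tiny helper (purely illustrative, not part of the original post) can build the #AARRGGBB string from the same values used in rgba():

  // build IE's #AARRGGBB gradient color from rgba() components
  function ieGradientColor(r, g, b, alpha) {
    function hex(n) {
      var h = Math.round(n).toString(16).toUpperCase();
      return h.length === 1 ? '0' + h : h;
    }
    return '#' + hex(alpha * 255) + hex(r) + hex(g) + hex(b);
  }

  ieGradientColor(100, 100, 183, 0.75); // "#BF6464B7"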

Keen observers will note that the filter property is not supported in IE8 standards mode. As IE8 now properly follows the CSS grammar, we must add the vendor prefix and quote the value. The hasLayout trigger is no longer needed.

  div {
    background:transparent;
    -ms-filter:"progid:DXImageTransform.Microsoft.gradient(startColorstr=#BF6464B7,endColorstr=#BF6464B7)";
  }

Combined, we have our main CSS:

  div {
    background: rgb(100, 100, 183);
    background: rgba(100, 100, 183, .75);
  }

and IE’s CSS:

  div {
    background:transparent;
    filter:progid:DXImageTransform.Microsoft.gradient(startColorStr=#BF6464B7,endColorStr=#BF6464B7);
    -ms-filter:"progid:DXImageTransform.Microsoft.gradient(startColorstr=#BF6464B7,endColorstr=#BF6464B7)";
    zoom: 1;
  }

We have now achieved cross-browser, CSS-only (no PNGs needed), alpha transparency!

jQuery.Firebug: A call for feedback.

As a result of some of the discussion following my post announcing my new jQuery plugin, jQuery.Firebug, I’m soliciting feedback on its desired behavior. Example:

$('.setA').log();
$('.setB').log("some", "information");
$('.setC').log("title attribute is: ", ".attr('title')");

Some explanation. The log method follows the same rules as Firebug’s console.log method: it can take zero or more arguments, which are concatenated into a space-separated string when finally printed to the console. For some jQuery-specific behavior, I have added a little wrinkle, as shown in the log statement for SetC. If an argument to the log method:

  1. is a string
  2. begins with a period (dot)
  3. is a valid jQuery method

then the jQuery method specified is executed on the jQuery selection and the result is printed to the console. In the example above, if the title attribute on the first element of SetC is 'example title' then the final log message would be "title attribute is: example title".
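
To make the rule concrete, here is a rough sketch of how such an argument might be interpreted. This is an illustration of the rule only; resolveArg is my own name, not the plugin’s actual internals:

// if arg is a string like ".attr('title')", call that method on the selection;
// otherwise return the argument untouched
function resolveArg($selection, arg) {
  if (typeof arg === 'string' && arg.charAt(0) === '.') {
    var parts = arg.slice(1).match(/^(\w+)\((.*)\)$/);   // e.g. ["attr('title')", "attr", "'title'"]
    if (parts && typeof $selection[parts[1]] === 'function') {
      var param = parts[2].replace(/^['"]|['"]$/g, '');  // strip surrounding quotes
      return param ? $selection[parts[1]](param) : $selection[parts[1]]();
    }
  }
  return arg;
}

resolveArg($('.setC'), ".attr('title')"); // "example title", given the example above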

Further, the plugin will feature an additional option (off by default) that will explicitly print each element in the jQuery selection, wrapped in a console.group. In the example above, say SetC contains 2 elements. If the option were turned on, the output would be similar to that of the following:

console.log("title attribute is: example title");
console.group($(".setC"));
console.log($(".setC").get(0));
console.log($(".setC").get(1));
console.groupEnd();

So, back to the problem at hand. My issue is when and where to print the jQuery selection itself. The different options are:

  1. only print the jQuery selection when there are no arguments to the log method
  2. only print the jQuery selection when there are no arguments to the log method but also print the jQuery selection in place of any string argument equalling "this" (similar to my jQuery method replacement demonstrated above with .attr("title"))
  3. always prepend the jQuery selection to the arguments (so the jQuery selection is printed before the rest of the arguments)
  4. always append the jQuery selection to the arguments (so the jQuery selection is printed after the rest of the arguments)

I’m leaning towards either #3 or #4 but am open to feedback. Please comment with your suggestions. Keep in mind that all four above choices will still result in just one log message per log() call. Turning on the ‘explicit’ option is the only thing that will result in more console messages than log() calls. Also, keep in mind that printing the jQuery selection itself to the console will allow deep inspection. For instance, clicking on the jQuery selection in Firebug shows what elements are selected, etc.

You’ve Got Mail!

I’m a big fan of Gmail and Google Reader. I generally leave these tabs (along with Google Calendar) open all day long. To minimize the amount of visual space these long-lived tabs consume in the tab bar, I use the great FaviconizeTab extension. However, with these tabs shrunk down, you lose the ability to be notified visually (via the title text) of new mail or new posts. A solution I found years ago was to write a simple user style that changes the background color of the tab whenever a new item arrives. It’s a great fix that doesn’t require any new programs or extensions. For those of you who use the Stylish extension, I’ve finally gotten around to posting this style to userstyles.org so installation is dead-simple. This style also works well with the Badges on Favicon extension, which displays the number of unread items via a small ‘badge’ on the tab. Go ahead and grab the style now! I’ve posted a Firefox 1.5 – 2.0 compatible version as well. (But seriously, if you’re not using Firefox 3, upgrade now!)

Gmail S/MIME Icons

I’m a big fan of Gmail and I’m a big fan of S/MIME for securing your email. Unfortunately, the current state of S/MIME in web-based email is quite sad. There is a Firefox extension by Richard Jones and Sean Leonard (called Gmail S/MIME, surprisingly enough) which allows you to send signed/encrypted messages from Gmail. However, Gmail does not provide any visual indicator to differentiate between unsigned/unencrypted messages and signed/encrypted messages. I found a great user style by Moktoipas (updated for compatibility with the recent Gmail changes) which replaces the default Gmail attachment icon (paperclip) with icons that represent the standard attachment file types (.doc, .txt, .gif, etc). Lo and behold, lownoise took this idea and created two userstyles to provide the same icon support for signed and encrypted messages. Signed messages sport a certificate icon and encrypted messages sport a padlock icon. However, these userstyles haven’t been updated to cope with Gmail’s newest changes. I have taken it upon myself to make the required changes and post the new style for everyone’s use. I can’t claim too much credit, however, as the changes required were only a couple lines of code, which I simply copied from Moktoipas’ styles. In addition, I merged the two styles – one for signed messages and one for encrypted messages – into one. While this style is extra beneficial for users of the Gmail S/MIME extension, it does not require it. Further, the style is packaged as both a Stylish userstyle and a Greasemonkey userscript so users of either extension can get their style on. Grab the style from userstyles.org.

MDC Detroit: The PDC on wheels!

In a couple of days (this Thursday, January 22nd) I’ll be heading up to Detroit, Michigan for the MSDN Developer Conference as it makes its way through the heartland. But not only will I be attending, my colleague Jeff Hunsaker and I have the great pleasure of speaking at the MDC! We will be introducing jQuery and showing how ASP.NET AJAX and jQuery can work together. Many of you have of course heard the not-so-recent news (a few months on the internet is like a lifetime) that Microsoft will be ‘adopting‘ jQuery which will ship with future versions of Visual Studio. So if you live anywhere near Detroit and would like to see how jQuery fits into ASP.NET, (or are interested in any of the other cool topics and presentations you might have missed from the PDC) come check out the MDC! Also be sure to check out Jeff’s blog post on our presentation. He’s included loads of related links and info.

Announcing jQuery.Firebug

I have been sitting on my latest jQuery plugin for some time now. Although I realize that the code is not yet of production quality and there are certainly bugs and features that remain to be addressed, I’ve decided that I should at least release this plugin into the wild. At the very least, I would love some feedback on it and suggestions for new features. “So let’s see it!” you ask?

jQuery.Firebug is a jQuery plugin that simply exposes the Firebug Console API to the jQuery object. That’s about it. Under the covers, it bridges some functionality between Firebug and Firebug Lite and has a host of other small features. But all in all, it simply adds the Console API methods directly to the jQuery object.

The goal of this plugin is to allow inspection of your jQuery selections while in the middle of a chain. For those of you who have ever had a jQuery chain like:

$(".elements").parents("div")
.find(".new").show().end()
.find(".old").hide();

and you load up the page and it doesn’t work. How do you begin debugging? You open up Firebug but are unable to easily ‘step through’ the jQuery chain. Inevitably, you have to break up the chain, assigning each step to a temporary variable solely to call console.log(temp) on the selection. Enter jQuery.Firebug:

$(".elements").log()
.parents("div").log()
.find(".new").log()
.show().end().log()
.find(".old").log()
.hide();

Each log method returns the very selection it was called on, so you can simply continue your chain as if it weren’t even there. Every Firebug method (as of Firebug 1.2) is supported, so you can call debug(), assert(), info(), dir(), profile(), etc.
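
For the curious, the chainable part boils down to something like this minimal sketch (not the actual plugin source; it assumes a console is present):

(function ($) {
  $.fn.log = function () {
    if (window.console && console.log) {
      // print any message arguments followed by the selection itself
      var args = Array.prototype.slice.call(arguments);
      console.log.apply(console, args.concat([this]));
    }
    return this; // hand the same selection back to the chain
  };
})(jQuery);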

There are a few additional features that I will address later as the code begins to settle down. For now, the source and documentation can be found in Subversion at svn.jasonkarns.com/jquery/firebug.  There is much work to be done on the plugin as well as on the documentation. Until then, let me hear any feedback you may have.