SLO-JS and how to speed up widgets

This Thursday I gave a talk at NY Web Performance about third-party content, and the topic proved to be quite hot, with many engaged people in the audience and a good Q&A and group discussion afterwards. We talked about ad banners, widgets and tracking “pixels”, and the overall theme was lack of control and heavy reliance on 3rd parties.

Ads and tracking pixels, unfortunately, are quite complex, and changing their behavior is hard, but widgets are a bit simpler to deal with and can probably be improved.

Working on my project, I saw quite a few widgets, and over time I started to realize that the sales people’s magical promise of “easy integration with a single line of JavaScript”, which you can see on any site that provides widgets (and especially those that rely on widgets as their primary method of distribution), is fundamentally flawed.

While it’s a great sales promise that simplifies integration and maintenance and gives the vendor a lot of control over user information, it brings slowness and introduces single points of failure (SPOFs) to publishers’ sites.

To make people think about performance when they hear the magical sales pitch of a Single Line Of JavaScript, I decided to abbreviate it to SLO-JS, which gives a very clear impression of the performance quality of this solution.

Now, next time you see the magical words in a vendor’s proposal, there is no need to explain or complain: just abbreviate, and this will hopefully give the right impression and make people think about testing the performance of the 3rd-party source.


It would all be pointless if there were no alternative to SLO-JS, but in reality it is not the only way to include code; it’s just the oldest, and there are other, faster and more reliable alternatives. All we need to do is explain how they work and make it easy for vendors to provide these alternatives to sites that care about performance.

The first alternative is quite simple – instead of copy-pasting a SLO-JS snippet, let people copy-paste a mix of HTML, CSS (optional) and an async JS call that creates a reasonable placeholder until the data is loaded and does not block rendering for the rest of the site.

A few vendors already do that – DISQUS, for example, provides the following code:

<div id="disqus_thread"></div>
<script type="text/javascript">
  (function() {
    var dsq = document.createElement('script');
    dsq.type = 'text/javascript'; dsq.async = true;
    dsq.src = ''; // your site-specific DISQUS embed script URL goes here
    (document.getElementsByTagName('head')[0]
      || document.getElementsByTagName('body')[0]).appendChild(dsq);
  })();
</script>

Notice that it gives you a div tag with the disqus_thread id to insert into your page (in this case the placeholder is just empty) and then uses an asynchronous script tag to load the data without blocking the rest of the page.

Transforming your code to take advantage of this probably requires a couple of hours of work at most – just replace document.write with a static HTML wrapper (with a specific ID or class), build the DOM elements within your code, use the wrapper’s ID to insert the DOM tree, and use the async tag to load all your JS code.
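As a concrete sketch of the conversion (hypothetical: the widget_wrapper id, renderWidget name, and the content it builds are illustrative, not any real vendor’s API):

```javascript
// Publisher pastes a placeholder plus an async loader:
//   <div id="widget_wrapper"></div>
//   <script async src="http://vendor.example.com/widget.js"></script>
//
// Inside widget.js, build DOM nodes instead of calling document.write:
function renderWidget(doc, wrapperId) {
  var container = doc.getElementById(wrapperId);
  if (!container) { return null; } // placeholder missing: fail quietly
  var link = doc.createElement('a');
  link.href = 'http://vendor.example.com/';
  link.appendChild(doc.createTextNode('Powered by Example Widget'));
  container.appendChild(link);
  return container;
}

// In the browser this runs as: renderWidget(document, 'widget_wrapper');
```

Because the script loads asynchronously and only touches its own wrapper, a slow or dead vendor host no longer blocks the publisher’s page from rendering.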

Use this pattern for good widget development, suggest similar approach to vendors that give you the most grief, and we’ll have faster web pages pretty soon.

Another alternative I proposed at the Meetup is to download the content on the back end and insert it right into the HTML, with optional JavaScript used only for data loading.

Most of the time content doesn’t change for each request and can be cached for minutes, hours or even days at a time. In some cases it never changes, especially for embeddable content like YouTube videos or Flickr photos.

Andrew Schaaf proposed a name for it – Fetch and store JavaScript with great “superhero” abbreviation of FAST-JS.

It is quite easy to implement in the language of your choice, and it is probably worthy of an Open Source project that would provide an appropriate caching mechanism and make it easy for people to replace SLO-JS with fast solutions.
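A minimal sketch of such a fetch-and-store layer, in JavaScript for consistency with the rest of this post (createWidgetCache and its callback shape are my own invention, not an existing project):

```javascript
// Cache fetched widget markup for a TTL; the fetch function is injected so any
// HTTP client (or a test stub) can be used. Time is passed in explicitly.
function createWidgetCache(fetchFn, ttlMs) {
  var cache = {}; // url -> { body, expires }
  return function getWidget(url, now, callback) {
    var entry = cache[url];
    if (entry && entry.expires > now) {
      return callback(null, entry.body, true); // served from cache
    }
    fetchFn(url, function (err, body) {
      if (err) { return callback(err); }
      cache[url] = { body: body, expires: now + ttlMs };
      callback(null, body, false); // freshly fetched and stored
    });
  };
}
```

The server then inlines the returned markup straight into the HTML it renders, so the visitor’s browser never talks to the widget vendor at all.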

It’s a bit harder to do than async widgets and requires some back-end coding, but it allows reducing front-end complexity and the amount of extra JS on the page to zero.

OpenSocial has a long way to reach maturity

Google did a good job announcing OpenSocial, bringing in many partners and creating lots of marketing buzz – many business people already see it as a Facebook-platform killer, open to all and everywhere, and are ready to jump on board; developer shops and widget catalogs have already announced that they support OpenSocial and are ready to deploy applications.

I took a look at what’s been done so far and what kind of data and features are available in the Orkut, Ning and Hi5 OpenSocial containers. What I see so far is that Google declared “learn once, write everywhere” instead of “write once, run everywhere” for a reason (see Marc Andreessen’s thoughts on this). Functionality varies, and will vary a lot, from container to container – not only because of the limitations of the current release (no SDK is publicly available yet, and the app developer APIs are probably far from final), but because each service will provide different calls and features within its container. I hope there will be some standard way to describe all these features; otherwise “learn once” will become “re-learn many times”.

We’ll definitely see more reliable information once Google releases the SDK, but unless they get a hold on how this developer community is organized, it’ll be a mess of misunderstanding. Such a platform is actually bigger than everything they did before – none of the APIs they provided so far involved so many variables, where every piece of the technology chain can be implemented, customized and hosted by a different company, and none of those is necessarily Google. This might be a much bigger organizational undertaking than they expect, although the marketing effect is pretty good (I can’t say for sure whether it’ll remain this effective if the platform doesn’t gain enough momentum).

There is another issue here – I did some simple research to see what kind of data is available and whether my Friends Explorer could get birthday data for its timeline if implemented as an OpenSocial app, and none of the 3 containers provided such information. All of them give an internal user id, the person’s name and a userpic URL (Orkut adds my approximate geographic location and primary language to that); they all give you friend lists, but only Orkut provides activity information. That’s it. The OpenSocial API seems to define a profile URL as well, but nothing more. Also, the API calls to get that information are different (though that is probably just pre-release naming confusion).

It looks like it’ll be up to each container to decide what information to release to app developers. That seems reasonable keeping privacy concerns in mind, but I expect it to be complete chaos, because it’s doubtful that every service will implement a granular permissioning layer for each bit of information and add an OAuth layer on top (which is not fully out there either). What this means is that unless an application relies on nothing but the containers’ viral distribution and friends’ names and ids, it will not be really portable.

Conclusion: keeping all this in mind, and that such an open platform is much harder to drive and maintain than a proprietary one (e.g. Facebook’s), I would think that OpenSocial will gain momentum and the initial wave of the simplest applications (e.g. Google gadgets) will get going pretty quickly, but we’ll have to wait much longer to see something more sophisticated standardized and available. I hope Google and its partners realize this and are not going to pump the market too much playing political games with Facebook (which just took a big bite of the progressive ad market that Google had been monopolizing).

JSON Remote Loader

I’ve been looking at JavaScript recently, especially after watching those courses by Douglas Crockford.

While doing some bookmarklet and widget coding, I realized that there was no tool to load JSON data from remote hosts. Plenty of libraries provide various tools to organize your XMLHttpRequests and other hacks to load data asynchronously from your own site, but none of them let you overcome the same-host security limitations and load data from other sites without using some sort of server-side proxy.

So to streamline the rest of the development, I wrote a library that can be used to load JSON-formatted data from remote hosts without proxying it through your own server and spending your own traffic.
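For readers unfamiliar with the underlying trick, here is a rough sketch of the script-tag (“JSONP”) technique such a loader is built on; the function names here are illustrative, not the library’s actual API:

```javascript
// The remote host wraps its JSON in a callback whose name you pass in the URL;
// a dynamically injected <script> tag is not subject to the XMLHttpRequest
// same-host restriction, so the data can come from any domain.
function buildJsonpUrl(baseUrl, callbackName) {
  var sep = baseUrl.indexOf('?') === -1 ? '?' : '&';
  return baseUrl + sep + 'callback=' + encodeURIComponent(callbackName);
}

function loadRemoteJson(doc, url, callbackName, handler) {
  window[callbackName] = handler; // expose handler for the remote script to call
  var script = doc.createElement('script');
  script.type = 'text/javascript';
  script.src = buildJsonpUrl(url, callbackName);
  doc.getElementsByTagName('head')[0].appendChild(script);
}
```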

See project page here: JSON Remote Loader

P.S. As part of this project I got some experience with JSLint, Firebug and Google Code project hosting, and even wrote a Makefile to automate minification, colorizing and zipping of the code.


Apparently some browsers (e.g. IE) play a clicking sound on each URL update, and this behavior wasn’t really pleasant when the map URL was constantly updated while the map was dragged. I tweaked the code a little so it updates the URL only when dragging is over (the link on the right still updates in real time), so now you should hear a reasonable amount of clicking if you use one of those clicking browsers.
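The tweak boils down to suppressing hash updates while a drag is in progress, something like this (a simplified sketch, not the actual map code; makeDragTracker is an illustrative name):

```javascript
// Update the URL hash only when dragging ends, so IE doesn't click on every
// intermediate map position; updates outside a drag still go through.
function makeDragTracker(updateHash) {
  var dragging = false;
  return {
    onDragStart: function () { dragging = true; },
    onMove: function (pos) { if (!dragging) { updateHash(pos); } },
    onDragEnd: function (pos) { dragging = false; updateHash(pos); }
  };
}
```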

As a side effect, the URL stays the same when you originally load the page.

Persisting location attributes in URLs hash

I noticed that the new Flash Yahoo! Maps interface stores location parameters in the URL’s hash (the text you can see after the # sign).
Apparently it’s quite a well-known way of storing page state when developing AJAX applications, and I see several reasons to use it in an application like GvsY:

  • It can be updated without reloading the page and therefore can be kept always up to date
  • It removes the need for a separate permalink, because you can bookmark the page at any moment and always get a working link (you can click the “” link at any moment to store a specific location)
  • It helps keep the page in the browser’s cache without re-requesting it separately for each location
  • A geographical location fits the anchor paradigm pretty well: an anchor is a “location within a document”, and since this document is an AJAX page representing the world map, the location can be geographical too

I liked the idea, and that’s why I replaced passing parameters through the query string with passing them in the hash (or anchor, if that sounds better). Enjoy!
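The hash handling itself can be sketched in a few lines (a simplified illustration, not the actual GvsY code; the parameter names are assumptions):

```javascript
// Serialize the map position into the hash and parse it back on page load.
function encodeMapState(lat, lon, zoom) {
  return 'lat=' + lat + '&lon=' + lon + '&zoom=' + zoom;
}

function decodeMapState(hash) {
  var state = {};
  var pairs = hash.replace(/^#/, '').split('&');
  for (var i = 0; i < pairs.length; i++) {
    var kv = pairs[i].split('=');
    if (kv.length === 2) { state[kv[0]] = kv[1]; }
  }
  return state;
}

// In the browser: setting location.hash = encodeMapState(...) updates the URL
// without a reload, and decodeMapState(location.hash) restores it on load.
```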