Data service APIs should be usable by JavaScript running in the browser. The data services can be implemented with any server technology, but the API should not expose that implementation. For instance, if I'm using a getItems service, the API URL should not contain getItems.jsp, getItems.php, or getItems.aspx.
The web application should be runnable from any domain on any web server. This means the data services must support some sort of cross-domain access from a web browser. My preferences:
- Cross-domain XMLHttpRequest using IFrame Proxies (IFP XHR). The data service should be able to return JSON or XML; JSON is ideal, but XML will work too.
- Using SCRIPT tags to fetch JSON data (ScriptSrcIO); a sketch of this approach follows this list.
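Here is a minimal sketch of the SCRIPT-tag approach. The service URL, the callback parameter name, and the handler function are all hypothetical; a real service will document its own conventions:

    // Inject a SCRIPT tag whose src points at the JSON service.
    // The service is assumed to wrap its JSON response in a call to
    // the named handler, e.g. handleItems({"items": [...]}).
    function fetchItems(handlerName) {
      var script = document.createElement("script");
      script.src = "http://data.example.com/getItems?callback=" + handlerName;
      document.getElementsByTagName("head")[0].appendChild(script);
    }

    // Global function the injected script calls with the JSON payload.
    function handleItems(data) {
      alert("Got " + data.items.length + " items");
    }

    fetchItems("handleItems");

Because the browser places no same-origin restriction on SCRIPT src URLs, this works from any domain, but it does mean trusting the service: the fetched script runs with full access to the page.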
I believe this approach gives the best overall balance of scalability and perceived performance relative to server cost. Scalable, because the code can live on any web server; well-performing, because the entire application can be heavily cached by the browser. There is a first-time cost to fetch the files, but after that, the files should have very long cache times. This gives great perceived performance: things start to visibly happen in the browser window as soon as the user goes to the application URL.
There are solutions that might give better raw scalability or performance, but this approach wins on server cost. Only simple web servers are needed, and with the high cacheability of the application, those servers do not need to do work for every invocation of the application. The user's computer and browser do more of the work, but since many (most?) of these applications and services are free, or nearly free, I believe that is an acceptable tradeoff. Of course there are exceptions, cell phone web applications in particular, but I do desktop/laptop web application development at the moment.
A very important point about this development approach -- *anyone* can run the web application. Most ISPs give you some amount of disk space on their servers for serving files, and this approach lets you use that space to run a web application. I believe that is a key driver of widespread "mashup" usage. Anyone with a text editor, internet access, access to cross-domain data service APIs, and a web browser can make a web application usable by the whole world. I am not talking about just serving some web pages; I mean serving web applications: email, instant messaging, calendar, a map service mashup. That is beautiful.
Tools I want to use
A web server
Preferably Apache, but any solid web server that allows for setting cache controls on files will do.
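For example, with Apache's mod_expires module enabled, far-future cache lifetimes for the static application files might look like the following. The lifetimes are illustrative; pick values that match how often you ship changes:

    # Serve the static application files with long expiration times
    # (requires mod_expires; adjust MIME types and lifetimes to taste).
    ExpiresActive On
    ExpiresByType application/x-javascript "access plus 1 year"
    ExpiresByType text/css "access plus 1 year"
    ExpiresByType text/html "access plus 1 hour"

One wrinkle with very long lifetimes: shipping an update means publishing the changed files under new names (or new query strings) so browsers fetch them again.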
A solid JavaScript toolkit
I prefer Dojo, because I contribute to it, and it offers a nice breadth of libraries with an include system, dojo.require(), that allows you to progressively optimize the code you use without having to recode a bunch of your JavaScript. Once the accessibility and internationalization support is fully integrated, it will be a hard toolkit to beat.
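As a small sketch of what that looks like in page code (the module names here are from the Dojo 0.x tree; use whatever your page actually needs):

    // Pull in just the modules this page uses. During development each
    // dojo.require() loads a separate file; the Dojo build step can later
    // inline them into one compressed file with no changes to this code.
    dojo.require("dojo.event.*");
    dojo.require("dojo.io.*");

That is the progressive-optimization part: the same require calls work unbuilt during development and built for deployment.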
A non-server, i18n-friendly templating system
JSP, ASP, PHP and Rails enable some nice templating systems, but they all require server infrastructure. Tagneto is my attempt at a templating system that does not need server infrastructure. It requires a compile-time step by the developer, but the output is just plain HTML/JavaScript/CSS runnable from any web server (even local disk).
Dojo has some support for a type of templating via widgets and the i18n work going in now, but I don't feel it is as performant as what you can do with a compile step. Most of the templating work (and in particular the i18n string work) could be done once, at compile time, instead of downloading all the code to do the template transformation and running the transformation every time the code is loaded.
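To make the tradeoff concrete, here is a hypothetical sketch (not Tagneto's actual syntax or mechanism) of what a runtime approach has to ship and execute on every page load; a compile step runs the same substitution once and serves the finished HTML:

    // A message bundle, normally one file per locale.
    var messages = { welcome: "Welcome back" };

    // Replace ${key} tokens with localized strings at runtime.
    function renderTemplate(template, bundle) {
      return template.replace(/\$\{(\w+)\}/g, function(match, key) {
        return key in bundle ? bundle[key] : match;
      });
    }

    // renderTemplate("<h1>${welcome}</h1>", messages)
    //   returns "<h1>Welcome back</h1>"

With a compile step, the browser downloads only the finished markup, not the bundle, the template, and the substitution code.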
The downside with a compile step is that the developer needs to run a bunch of compiles while developing. One thing I would like to do is enable development using Dojo for the templating needs, then apply Tagneto during a build process. Or expand Dojo to do the work as part of its build process. Or just use Tagneto, but allow running it from something like Jetty on the local computer, doing the transform on the fly. Once the developer is finished, run the compile step as part of the build to generate the final static files that can be served from any web server.
But for me, I'm fine with doing the compile step as part of the development process. At least for now, until I commit more time to solving the problem. I want to help get Dojo 0.4 out the door first.
2 comments:
... and well-performing because the entire application can be heavily cached by the browser. There is a first time load cost to fetch the files, but after that, the files should have very long cache times.
Just setting long cache times doesn't guarantee the files won't get pushed out of local cache, though. Caches are limited in size and heavy activity can push out data before expiration.
I've tried to get some data to model the cache lifecycle effect, but haven't found a good source. Any ideas?
jeff: True, the local cache is of limited size, and the files can be pushed out of the cache, but it still seems to be the fastest option, considering the alternatives that don't require custom browser extensions.
Sorry, I don't have any data on the cache lifecycle effect.