I'm back from a much-appreciated vacation and just wanted to jot down some thoughts that I want to expand on later. This is a continuation of how I want to build web applications on the RIA end of the web scale, but it is an approach I could use for just about any data-driven web site.
I only want to use the server to serve files and to implement data storage. There should be no server-side work dealing with the UI controller. This means the "application server" part of the server (the PHP, Java, .Net, RoR part) should only be handling the API to save/modify/get the data. It should not be doing any presentation or UI flow. JavaScript, CSS and HTML should be used for the presentation and UI flow.
So, the app server implements the data API as a REST API. This gives a good boundary of what should be done in the app server. The data should use XML (preferably ATOM) or JSON as the data markup. The only presentation part might be to apply something like an XSLT transform of the data if the GET request matches a browser or searchbot user agent. Alternatively, use the domain name of the GET request to know whether or not to apply the XSLT transform (sometimes it is nice to host the pure data API on a different domain for security and load reasons).
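To make that concrete, here is a hypothetical resource (the URL, fields and stylesheet path are all invented for illustration). A script client asking for JSON might see:

    GET http://api.domain.com/articles/42

    { "id": 42, "title": "Hello World", "updated": "2008-07-01T10:00:00Z" }

One way to handle the browser case is to serve the ATOM form with an xml-stylesheet processing instruction, so the browser applies the transform itself:

    <?xml-stylesheet type="text/xsl" href="/xsl/article.xsl"?>
    <entry xmlns="http://www.w3.org/2005/Atom">
      <id>http://api.domain.com/articles/42</id>
      <title>Hello World</title>
      <updated>2008-07-01T10:00:00Z</updated>
    </entry>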
Progressive Enhancement should be used for adding the JavaScript controller to the page, so that hyperlinks to other resources can be followed by searchbots or non-JS enabled browsers. Editing of the resources via an HTML interface could be restricted to JavaScript-enabled browsers (mostly where a UI controller is needed anyway; search bots only care about indexing GET resources).
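As a rough sketch of the enhancement (the "articles" list and "content" target element are assumptions for the example), the markup stays plain hyperlinks and the script upgrades them when it runs:

    <ul id="articles">
      <li><a href="/articles/42">Hello World</a></li>
    </ul>
    <script type="text/javascript">
    // Upgrade plain hyperlinks into Ajax loads. Non-JS browsers and
    // search bots never run this and simply follow the hrefs.
    document.getElementById("articles").onclick = function (evt) {
        evt = evt || window.event;
        var target = evt.target || evt.srcElement;
        if (target.nodeName.toLowerCase() !== "a") {
            return true; // not a link; let the click through
        }
        var xhr = window.XMLHttpRequest ?
                new XMLHttpRequest() : new ActiveXObject("Microsoft.XMLHTTP");
        xhr.open("GET", target.href, true);
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                document.getElementById("content").innerHTML = xhr.responseText;
            }
        };
        xhr.send(null);
        return false; // cancel the normal navigation
    };
    </script>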
So that is a baseline. This baseline now cleanly separates the UI controller from the server side, and makes it less relevant what sort of app server technology you use. The UI controller is kept in the JavaScript realm, no more splitting it between JavaScript and [Struts, RoR, Django, etc...].
Now it is time to jump the shark:
Restrict the data API to its own domain, separate from the domain used for presenting the UI for the browser/search bots. For example, use api.domain.com for the data API and ui.domain.com for the browser/search bot interface.
The HTML files on ui.domain.com use Ajax-like techniques to access the resources on api.domain.com to show the UI. The Ajax-based data loading would have to be cross-domain Ajax. If the API and UI domains share a common parent domain, document.domain tricks could be used. Otherwise (and more interesting to me personally), JSONP or an XMLHttpRequest IFrame proxy could be used.
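A minimal JSONP sketch (the callback name and URL are made up): the UI page injects a script tag pointing at the API domain, and the API wraps its JSON response in the named callback function:

    // On ui.domain.com: the callback the API response will invoke.
    function handleArticle(data) {
        document.getElementById("title").innerHTML = data.title;
    }

    // Inject a script tag; script tags are not subject to the
    // same-origin restriction that blocks cross-domain XMLHttpRequest.
    var script = document.createElement("script");
    script.src = "http://api.domain.com/articles/42?callback=handleArticle";
    document.getElementsByTagName("head")[0].appendChild(script);

    // The API would respond with something like:
    // handleArticle({"id": 42, "title": "Hello World"});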
A nice benefit of this approach is that all of the UI pieces (HTML, JavaScript and CSS) are highly cacheable and servable from very different domains than the API domain.
So what about search bots? They cannot do the Ajax techniques, at least not yet. For them, in the web server config, route their user agents to a web server module that uses Gecko to fetch the page and do the Ajax loading. The HTML can send an event to Gecko (this custom, headless Gecko could export a function, something like window.onAjaxFinished()) to tell it when it is done rendering. Then the Gecko module serializes the current state of the DOM and sends that HTML string back to the search bot. It could even keep a local cache if the resources did not change that often.
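The page-side half of that handshake could be as small as this (window.onAjaxFinished is the hypothetical hook the custom Gecko module would export; in a normal browser it simply does not exist):

    // Call this once all Ajax loads are done and the DOM is final.
    function ajaxRenderDone() {
        if (typeof window.onAjaxFinished === "function") {
            // Running inside the headless Gecko module: tell it the
            // DOM is ready to serialize and return to the search bot.
            window.onAjaxFinished();
        }
        // Regular browsers fall through and do nothing extra.
    }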
This approach would allow the search bot to get a good representation of the page for their indexing, and since Progressive Enhancement was used to bind to resource hyperlinks in the document, the search bot can still navigate to other resources on the domain (and those requests would be funneled through the web server's headless Gecko module).
In this model, REST via HTTP becomes the new JDBC. And more importantly to someone like me who enjoys the front-end -- I don't need to learn about any of the app server flavors of the month to implement data access (the other app server-hugging developers on my team can do that).
So now I just need to do a custom build of a headless Gecko that I can use in a web server module. Any pointers are appreciated. Ideally someone should do this as an open source project so everyone can benefit.
Thanks for the post. This sounds a whole lot like how I wish to build web apps too. I'm fuzzy on all the details but it seems similar to what authenteo.net is doing. Have you seen that before? It's a commercial product, but check it out if you haven't already. Of course it would be interesting to hear your thoughts on it too.
mitchell: Thanks for the link. Authenteo does seem like it has similar motivations behind it.
I'm not too keen on the Narrative JavaScript approach. I prefer the straight JavaScript language, but that is surely a personal preference thing. I'm also not clear on the need for the JavaScript Persistent Object Notation. My first thought is just to make the JSON objects smaller and have specific APIs to load and aggregate the JSON objects. But maybe I haven't hit the use case that might make it useful.
Authenteo seems to have created/used a few newer concepts (Narrative JavaScript, JS Persistent Object Notation, custom templating) that I'm not sure will gain enough traction in the long run (and I say this as someone creating things myself that probably won't survive in the long run). It could also just be my ignorance though. I think I'll try straight JavaScript, JSON/XML/ATOM with Dojo widgets and see how far that takes me. But Authenteo certainly has made a significant achievement by having something working.
HTMLUnit for the server-side HTML renderer is a nice option, but it requires Java, and it seems harder to get that deployed on non-Java-based server systems. I like the headless Gecko option since it would be a compiled C/C++ thing that would most likely fit into different web servers. Plus it is a full, real browser. But maybe HTMLUnit could be used on a server farm, with a proxy module on the actual web servers forwarding the render requests to the Java-based HTMLUnit farm (I was considering that approach for the headless Gecko too). That might work; I'm just not sure how robust the JavaScript/DHTML support is in HTMLUnit. I'll have to look at that more.
One small thing after reading their blog post on dynamic JavaScript loading: the xdomain version of the Dojo loader uses script tags, which load asynchronously.
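That matters in practice: with the xdomain build you cannot assume a module is usable on the line after the require; you have to wait for the loader's callback (the module here is just an example):

    dojo.require("dijit.Editor"); // xdomain build: fetched via a script tag
    dojo.addOnLoad(function () {
        // Only safe to use the module after the async load completes.
        var editor = new dijit.Editor({}, "editorNode");
    });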
Overall, an interesting, although commercial, project.
james: Thanks for the thoughts. I don't consider myself qualified to recommend any technology/architecture choices so all I can say is go for it. You never know, you might create something that can "survive in the long run". This approach to building web apps is only going to grow if you ask me. I hope to see you continue your efforts with this. I'm subscribed so if there's ever anything I have to offer I'll chime in.
See my posts about a future web framework at http://mv.asterisco.pt/
Wondering if you've made any progress on finding/building the headless gecko you talked about in the post. I've got a similar problem to the one you express, but in a real running app (which thoroughly muddies the situation). I've got a particular page where I need to log the state of the JavaScript application any time a user visits it. So, obviously, I built a logging class and the JavaScript app posts to the right URL to trigger the server to create a record. Thing is, sometimes I want this same logging to be triggered without a user having to visit the page. So, I need to either recreate all the logic of my JavaScript application to calculate the necessary data for logging (ugly), or be able to programmatically load and render my existing page in an environment that will run the JavaScript, triggering the Ajax callback just as would happen if a normal user loaded the page. After an afternoon researching the current state of play in the mozilla/gtkmozembed/etc space, I'm about to tie something together with glue and string (cron jobs, automator, and Firefox Beta 3). Why can't I just say "gecko 'http://myurl.com'" after running a straightforward install process on my server? Grr...
(forgot to check the box for follow up emails, so I'm posting again to get another shot at it -- sorry)
Greg: I think there is hope using Aptana's Jaxer. It runs Gecko on the server. The thing I have not figured out yet is how to tell Jaxer "you can run the whole page at the server and client".
It looks like you can get it to work if you put runat="both" on all the script tags in your page, but I have not done a lot of experiments with it. I would prefer to just point Aptana at a directory and have it work.
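Something along these lines is what I mean (lightly tested, so treat it as a sketch):

    <script type="text/javascript" runat="both">
    // runat="both": Jaxer runs this while rendering the page on the
    // server, and the same script is also sent to run in the browser.
    function formatDate(d) {
        return d.getFullYear() + "-" + (d.getMonth() + 1) + "-" + d.getDate();
    }
    </script>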
If you find anything that works, feel free to leave a comment about it.
James, I figured out an interesting hack that works, at least for my purposes. Turns out there's a plugin for Firefox called jssh that provides telnet access to a JavaScript shell inside of a running copy of Firefox. From there, you can trigger events, inspect the inner workings of the chrome, and manipulate the DOM. The FireWatir team (which is working on porting the popular Watir framework for browser interaction testing) is using jssh, so it must be pretty full featured. I wrote up some instructions based on what I learned getting started here: Automating Firefox for Web Application Integration.