Any new posts I do will be at jrburke.com. There is a post over there about the new blog.
I will keep this blog for historical purposes, but any new posts will be at the new location.
Wednesday, September 05, 2012
Wednesday, August 29, 2012
volo 0.2.3: semver and a web site
volo, a command line tool to create web projects, add front end dependencies and automate tasks, is now at 0.2.3. Get it via npm:
npm install -g volo
The complete set of changes is here. The notable ones:
1. semver ranges can be used with dependencies now. For example:
volo add requirejs/~2
will download the latest 2.x version of jrburke/requirejs from GitHub.
2. volo add mentions possible incompatibilities after doing a nested dependency install, and gives commands for manually choosing one of the other versions.
3. volo now has a simple web site that hopefully gives a better intro to volo. Many thanks go to James Long for starting the structure and helping to refine its message. There is more to do for the site, but it is already much better than what was there before.
Saturday, August 18, 2012
RequireJS 2.0.6 released
RequireJS 2.0.6 is available.
The main focus of this release was cleaning up some rough edges in the r.js optimizer after switching to esprima for all module parsing/tracing. Most notably, the findNestedDependencies build option should work correctly again. The bundled UglifyJS was updated to 1.3.3 too.
Complete list of fixes:
Wednesday, August 08, 2012
RequireJS 2.0.5 released
RequireJS 2.0.5 is available, along with almond 0.1.2 that matches the 2.0.5 behavior.
The most notable changes:
- require.js: fix for a 'use strict' issue in Safari 6; it should only show up in certain non-optimized scenarios.
- r.js optimizer: changed over to esprima for all dependency tracing. This set the stage for allowing some forms of JavaScript 1.8 to be optimized, with the help of some regexps.
Wednesday, July 25, 2012
On client components for web apps
This is a response to a blog post by TJ Holowaychuk about browser-based components for web applications, and Isaac's notes on TJ's post.
I am going to try to make this brief because I get tired of people in the Node community wanting to apply the same patterns from Node in the browser. I feel like I say these things on a periodic basis, but human communication is hard, and I certainly could do better. But I also want to get back to just making things. So this will be terser than I normally would like.
Web components
I suggest TJ look at volo, my attempt in this space. It does lots of what he describes already, and it can even be used as a module in another command line tool. We use volo for some things in Mozilla already.
volo uses GitHub as the component registry. It does so without the downsides that TJ mentions.
Specifically, volo uses the GitHub HTTP API to get version tags and do registry searches. It grabs .zip snapshots for a given version/GitHub commit/branch, so the command line tool (the consumer) does not need to use git. Git is not necessary on the client side.
This means the downloaded code is smaller -- no need to get a full repo and all of its commits.
volo also understands dependencies via the shorter "owner/repo/tag" IDs instead of the full github URLs.
It has a "shim" repo that means it can support installing components without needing the author of the component to conform to some new publishing system. Since it allows {version} replacement in URLs, the registry setup just needs to be done once. From then on, normal best-practice versioning via git tags is enough.
Some other notes on points from his post follow.
Base module format
Node bros, the AMD trolling is getting tiresome. Node's module system is woefully under-specified for web-based loading. While you can limp along with browserify, there are still these issues:
* For builds you need a wrapped format. For CDN deployment you need a wrapped format. browserify uses a wrapped format. AMD anyone? For that reason alone, AMD will never go away. Get used to it already.
* Web code needs a callback-style require for on-demand loading (see the example after this list).
* Browserify's use of file paths for IDs is awful for mixed local and CDN-based loading. Module IDs need to stay in ID format, not be translated to a specific file path.
* Loader plugins reduce the need for callback-style APIs, the callback pyramid of doom (inside-out callback hell), and promise-based workarounds. That more than makes up for the extra level of indent in AMD.
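On the callback-style require point in the list above, this is the form AMD supports today (module ID hypothetical):
// load 'dialog' only when this code path runs
require(['dialog'], function (dialog) {
    dialog.show();
});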
Loader plugins solve the translation issues TJ talks about, and they can participate in optimization builds, meaning templating engines can inline the JS function form of the template. Ditto for language transpilers like CoffeeScript.
By doing this:
define(function (require) {
    //node module code in here.
    //Return module value instead
    //of needing `module.exports`
});
you have an AMD module.
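Filled in, a minimal sketch (the module name and body here are hypothetical):
define(function (require) {
    var $ = require('jquery'); // found by a static parse of require() calls

    // the return value is the module's export; no module.exports needed
    return function highlight(node) {
        return $(node).addClass('highlighted');
    };
});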
Quit dismissing AMD for surface issues. AMD avoids mandating translation layers that lead to more things for the developer to understand and fix, and more process for the user to go through to deploy code. It is a net win when the source file works when deployed anywhere, without requiring specialized builds/converters.
Even if you want to personally use Node style and always do builds before loading in the browser, AMD is a great target for the built, wrapped format. You can even use the requirejs optimizer to do this, with the cjsTranslate option.
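As a sketch, a build profile using that option could look like this (file names hypothetical), run via node r.js -o build.js:
({
    baseUrl: 'src',
    // wrap CommonJS-style modules in define() during the build
    cjsTranslate: true,
    name: 'main',
    out: 'built/main.js'
})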
The universal module boilerplate gets simpler when Node supports AMD's define along with Node's existing module format. If you want to help improve the ugliness, start there.
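Today that usually means boilerplate along these lines (the amdefine pattern); native define() support in Node would remove the need for the if block:
// shim define() when running in Node
if (typeof define !== 'function') {
    var define = require('amdefine')(module);
}

define(function (require) {
    // code that now runs under both Node and AMD loaders
    return {};
});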
AMD comes from real world experience in Dojo with trying to deploy an unwrapped module format that depended on XHR+eval in dev and a wrapped format for builds. Yes, you can get something like that to work, but the second order translation and support costs are not worth it. Some environments disallow eval. CORS configuration is awkward, and potentially hazardous if your API is on the same domain and CORS is done incorrectly.
The simplicity of the complete module lifecycle is worth the function wrapping. Quit looking just at what you type once, and consider the complete code lifecycle, and how much time could be wasted there.
npm's registry as the component registry
The implied rules with npm and node's module behavior are not good fits for front end web development:
- Forcing a directory structure complicates project layout and loading for web-based projects. It should be possible to publish and install single JS libraries as single files. volo can do this.
- Related: the "index.js" convention is awful for web development and debugging. Debugging 'jquery.js' instead of trying to find 'jquery/index.js' in the web tools? No thank you.
- npm's registry namespace is already polluted. Check searches for 'jquery'. Maybe that just means having a separate npm registry-based registry for client code. But if there needs to be a separate repo, might as well use one that can adapt better to front end development. Like single JS/CSS file installs without extra Java-esque directory structures and metadata debris on the file system.
By using GitHub, it comes with user auth handled, private repos, and robust social tools that will not be matched by something like npm because the financial incentives are not there. Plus developers already use it. For simple open source sharing, make it easy without introducing more things in the middle.
Github as the registry is not perfect, and we still need some standalone servers that can be run inside corporations/for mirrors, but I would model those standalone servers on the github API. At least the default case of a public repo can be bootstrapped very quickly.
Monday, July 09, 2012
RequireJS 2.0.4 released
I apologize for the extra noise.
RequireJS 2.0.3 released
RequireJS 2.0.3 is available.
Just a maintenance bug fix release. Most notable changes are probably:
- optimizer now does not fully resolve "paths" until all config sources (mainConfigFile, build profile and command line args) have been merged (see the sketch after this list).
- a fix for data-main resolution for require.js.
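To illustrate the first item, a build profile can point at the app's runtime config, and "paths" from all sources are merged before any resolution happens (file names hypothetical):
({
    // the runtime requirejs.config() call to read first
    mainConfigFile: 'www/js/main.js',
    // merged on top of main.js's config before paths are resolved
    paths: {
        jquery: 'lib/jquery-1.7.2'
    }
})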
Saturday, July 07, 2012
volo 0.2.2 released
volo 0.2.2, a JS package manager and project automator, has been released. To install/update:
npm install -g volo
Here is a list of changes. Probably the most notable one:
volo create will run npm if there is a package.json in the downloaded project template with a dependencies property, and if there is not an existing node_modules directory.
This makes it easier to share volo commands between projects and their volofiles.
Example: the create-responsive-template now uses these separate volo commands in its volofile:
- appcache: generates an appcache manifest for a project.
- ghdeploy: deploys a directory of code to github pages.
Tuesday, June 26, 2012
Comments on Isaac's ES modules post
Isaac Schlueter posted some thoughts on the ES modules proposal. He works on Node and NPM, so it is great to see things from his perspective vs. my browser-based perspective.
I believe his post and my previous ES modules post fit together well. Here is some feedback on Isaac's post to point out where we align and what may still need more discussion. Section titles and numbered points below match the ones in Isaac's post.
Problems with the Current Spec
1. It seems to be based on the assumption that nesting module systems is a thing that people want.
As Isaac says, "no one wants to write a module system". Agreed. The default behavior of a built in loader should be enough for most, nearly all people. Others can build their own systems with existing tech, and if they catch on, consider them later.
I prefer to have support for loader plugins that have a constrained API. I'm not sure how that fits with Isaac's view. My previous post outlines use cases where AMD folks have found them useful; they help reduce the "pyramid of doom" of callbacks for resources that are important for a module to do its work, and therefore reduce the complexity of the module's external API. If Isaac has a different way to solve those issues, it would be good to know.
2. It puts too many things in JavaScript (as either API or syntax) which belong in the host (browser/node.js).
I think Isaac means that the browser implements a default Loader.resolve(), and Node does its own thing, but the language does not have a built-in default.
I think there is value in using the browser model as the default language one, with Node having the ability to override as it sees fit. The browser model of single file IO lookup based on a declarative config can work in an "everything is available locally" model.
But for me this is a small distinction. I do not relish trying to convince a separate standards body that is even more unwieldy than es-discuss to code the default browser resolver, but if that is how it must go, so be it.
My main desire: the front end developer should not have to ship a library file to do imperative configuration of a loader.
3. It borrows syntax from Python that many Python users do not even recommend using.
Agreed. I would not include import * if it were just up to me. The "evaluate dependencies first before evaluating the current module" means that just normal let/var destructuring could be used, no more need for import at all. All that is needed is a way to call out a dependency either via a keyword like "from" or via an API like "from()" or "require()". Or reusing "import" to be simpler, as Isaac does later in the post.
4. It favors the “object bag of exported members” approach, rather than the “single user-defined export” approach.
I too prefer a way to set the exported value (return in AMD parlance, module.exports = in Node). There needs to be a way to allow just the "exports" property assignment for circular dependencies, but for everything else (the vast majority of modules) assignment is nice.
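For reference, the two existing single-export styles (module bodies elided):
// AMD: the return value is the module
define(function () {
    return function ajax() { /* ... */ };
});

// Node: assign the module value
module.exports = function ajax() { /* ... */ };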
I believe the exports approach in the ES design was chosen because it allows type checking of export values and supports circular dependencies, and because, with destructuring, it was argued to be more palatable for getting the export value with this model:
import {jQuery} from 'jquery';
Not my favorite, but I could live with it.
The "middle way" evaluation approach would support module value assignment.
A Simpler Proposal
1. A Loader built-in object, with a few methods that must be specified before modules can be used.
See feedback above. If a front end developer needs to ship a JS library to bootstrap the Loader, that is a failure condition for me.
I can understand exposing a Loader API so that for example someone could make an AMD API shim, but the ideal, future case should be no need for imperative setup. Simple declarative setup only, and just for how to deal with module IDs-to-paths, and possibly a declarative shim config for legacy scripts. Note that this config is about specific modules, not implementing the general module/resolution API.
2. Within a module, the import pathstring syntax that can be easily detected statically in program text before evaluation, and returns a module's exported object.
I was thinking in terms of "from", but "import" is fine if it is used like this. I do not care about the name.
I also prefer an API that could be used/detected by browser scripts that need to live in both the non-ES modules world and the new one.
This is what AMD loaders do today for the sugared form, except it looks for require('string literal').
3. Loader.define(path, program text) defines a module at the specified path, with the program text contents.
I suggest using "module ID" instead of path here. Once modules are combined, it is best to refer to logical IDs, like 'jquery' instead of 'my/specific/path/to/jquery.js', because using file paths makes it hard to combine sets of built modules together.
AMD loaders use module IDs, and it works out well for us.
This Loader.define call is similar to the capabilities in AMD. AMD uses a function (require, exports, module) {} wrapper for it, but I can see where perhaps an ES loader could just get by with {}.
No comments on 4,5,6. Those are about the resolver API, which I do not care too much as long as what is used in the browser means an app developer does not have to ship another library to use it.
7. Within a module, the export expression statement marks the result of expression as the exported value from the module.
As Isaac alluded, I think circular dependencies will kill this one. Or at least, there needs to be a way to do what is possible in Node/AMD where an exports object can be created for a module before it executes, and that exports is available for other dependencies in a cycle.
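A minimal sketch of that pattern as it works in Node today (file names hypothetical):
// a.js
exports.name = 'a';     // exports exists before the cycle completes
var b = require('./b');

// b.js
var a = require('./a'); // in a cycle, b receives a's partial exports
exports.describe = function () {
    return 'b sees ' + a.name;
};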
8. The global object within a module context is equivalent to Object.create(global) from the main global context.
and:
9. If a module does not contain an export statement, then its global object is its export.
This is tricky since some libs may export more than one global. I would probably still favor allowing all scripts loaded within a container to share the same "global" space.
With a shim config, this would allow consuming scripts that use browser globals, and for using library plugins that use a browser global to attach functionality (jQuery or Backbone plugins for example).
I wouldn't mind a declarative loader config that allowed for #8, though I am not sure if it should default to on or off. Maybe on, and if using legacy scripts, you have to do a bit of work by configuring it to off.
This area needs more thought, but my general goal is not needing a library.js to set up imperative loader APIs to do loading in the browser. As long as that works out, I will probably be fine with it.
In Web Browsers
As mentioned above, I believe it is better to use a module ID instead of <path> to name a module via Loader.define(). That would probably mean changing the suggested script tag to something like script module="moduleId" src="".
But really, I'm fine with just a JS API for this, so just doing System.load('id'); or Loader.load('id') in an inline script tag is fine by me.
What’s Missing from this Proposal
For me, Loader.define('id', {}) is just another way to say module 'id' {}. I like having an API instead of/in addition to a keyword, for shimming something that could work in today's browsers, although in this case shimming may not be important. So I'm neutral on that. module 'id' {} is slightly shorter.
As for sourcemaps: maybe what was meant there was sourceURL, and I agree, it would be good if the Loader.define() method, or whatever it ends up being, automatically got the same script debugger treatment that sourceURL gets today.
Monday, June 25, 2012
ES Modules: suggestions for improvement
There has been a recent bout of comments about ECMAScript (ES) harmony modules on twitter and elsewhere. Here is my attempt to explain parts of it, some of the design tradeoffs, and perhaps a middle ground that would open up some options that may bridge some gaps.
Modules are one of those things that seem very simple, but involve quite a lot of decisions and tradeoffs. This post is mostly just about module linking and module ID resolution, and even with that, it is quite long.
If ES Modules do not come up with different ways to work (or maybe explain where I have it wrong), they are not competing well with what can be done with a combination of CommonJS/Node and AMD.
My background: I work on RequireJS and AMD.
What is it
First, some links to the specs. The "harmony" moniker means it is in process for the next version of the ECMAScript (JavaScript) language:
- Harmony Modules: The basic module spec.
- Harmony Module Loaders: Specifies how you can create isolated loaders for modules. Very useful.
The module examples page is suggested if you want to get a feel for it, but it is good to read the other docs too. It can be a bit daunting though, unless you speak the spec language.
Points of reference
One way to evaluate how ES Modules works is to compare it to something you may already know:
- CommonJS / Node. Node implements a version of the CommonJS module API.
- AMD / RequireJS. RequireJS implements the AMD module API.
Run time vs compile time
ES is a "compile time" approach where the formats mentioned above are "run time" approaches. Maybe not precise terms, but here is a definition of what is meant by those terms for the purposes of this post:
"Compile time" means:
"Run time" means: there is usually no pre-parse stage. The JS text is evaluated, and any module API that is encountered is run as it is encountered.
AMD will actually do a small parse step if the module looks like:
define(function (require) {
    var a = require('a');
});
In that case, it will parse the function to look for require() dependencies, and then load and execute the dependencies first before running the function above.
"Compile time" was chosen for ES because:
The CommonJS/Node style of pure runtime, no parse, was hard to get to work with some edge cases as I understand it, but I heard that second-hand, I did not see that discussion.
Impact of compile time
For compile time to work well, it should use new keywords in the language, to have clear markers on what is participating in the module system.
Although, it could work with a module API instead of new syntax, by only recognizing literal use of that API and not supporting variable assignment of the API or dependencies to other names. This is what AMD does for the "sugared CommonJS form" mentioned above.
For "import *", static binding is critical because anything that is a runtime scope lookup gets into "with" territory, and "with" has been seen as a mistake by the committee. ES5's 'use strict' bars its use.
Since new syntax is involved, ES Modules cannot be "shimmed" into existing JS libraries. There is a Module Loader "runtime" registration call that can be used for a module to register its export, but it means those libraries cannot participate in the compile time linking stage, so they need to be pre-loaded by a script loader before an ES Module can effectively reference them with module syntax.
Module ID resolution
One other consideration, one that is usually overlooked when talking about modules, is how a module ID like "jquery" is resolved to a file path and loaded.
Both CommonJS/Node and AMD/RequireJS support "short, logical names" for dependencies. So, you can say require('jquery') and that jquery gets converted to a path using some algorithm. Node uses multiple paths to find jquery.js, and AMD in the browser relies on a declarative configuration to do so.
ES modules do not really have support for this, unless you also implement an imperative resolver. They support full URLs, like:
module foo at 'http://example.com/scripts/foo.js'
but we have found in AMD that it is useful to be able to say require('jquery'), but then declaratively map that to zepto.js.
So, an individual module specifies a dependency on an API provider, but how that provider is satisfied is resolved using the declarative configuration.
If there is only an imperative resolve API, and no simple declarative API to resolve short names, it will mean shipping a userland "loader library" to effectively use modules. This opens the door to balkanization in module ID resolution since there is no built in support.
Special factors in JavaScript
There are a few special factors with JavaScript that are not usually in other programming languages, and they have an impact on the design:
- The largest deployed use case of JavaScript, the browser, is async, network IO. File size and number of requests are very important to performance. So combining modules together into one file, and minifying/transforming the source for smaller delivery, is common.
- There is a large legacy codebase of browser-based JavaScript that just uses browser globals, and no real module format. Some small uses of JavaScript do not need modules, and browsers will support those use cases indefinitely.
My goals
I want AMD and RequireJS to go away.
They solve a real problem, but ideally the language and runtime should have similar capabilities built in.
Native support should be able to cover the 80% case of RequireJS usage, to the point that no userland "module loader" library should be needed for those use cases, at least in the browser.
If the ES module format requires a web developer to use a script loader to use existing, non-AMD/CommonJS, non-ES JS in a project for those 80% use cases, it is a failure.
Example: If I cannot use jquery and backbone in an ES 6 module without needing another library to preload or prep those libraries for ES 6 module use, then existing JS users will not see much advantage over using AMD.
If the web developer needs to code any imperative logic to wire up the ES Module Loader, that will result in a loader library. That is a failure condition.
As compared to AMD: if the ES approach cannot do the above without a helper loader library, and the ES approach does not allow something like loader plugins, then there is no contest -- AMD will still be more useful to a developer than the built in system. Small savings in the amount to type and a thin layer of type checking are not enough.
This may very well not be the goal of ES modules, and it would be great if the specs or some background material acknowledged that, and listed out the mitigation strategies developers are expected to use.
Shortcomings of ES modules
Right now, ES harmony modules do not improve an AMD user's workflow because of the following:
New syntax makes it very hard to optionally upgrade
If I am the author of something like jQuery or Backbone, I cannot optionally add in a way to register as an ES module because ES modules use new syntax. However, there are many uses of those libraries which will not be in ES module-capable browsers.
The Node and AMD communities have found optional opt-in via a runtime API very useful for adoption of code that works with their module systems but still works in the older "use plain script tags with browser globals" approach.
There is a runtime API in the ES module loader proposal that would allow a legacy script to register something as a module, but that requires the end developer to use another script loader library to load that library so it can do that runtime call, then start loading ES module code.
The developer may as well just stick with AMD. Complexity has not been reduced.
Register module and a global
Backbone originally had trouble adopting AMD because if it called define() to register a module, they found that other libraries, like Backbone plugins, would break. The plugins were expecting to find a Backbone global variable, but when Backbone called define() it was not also exporting a global.
This same problem will exist in ES-mixed code. Any dynamic registration also needs to allow an export of a global so that downstream libraries will work until they are also converted to optional module registration.
There should be a migration path, one that allows gradual rollout of modules without requiring a project to go whole hog on module syntax.
Declarative module ID resolution
While I have made tools to allow a developer to "convert" an existing library to AMD, there are many developers that do not want to touch existing libraries. Conversion makes it difficult to compare against new versions, and there is a concern that the conversion introduces breaking scope changes (rightly so).
So the "shim" configuration for requirejs was introduced to allow specifying dependencies and an export value for JS code that does not call a module API. This has been well received in the community. More background on shim here.
"shim" with "paths" and "map" make it possible to declaratively set up a configuration that allows for one file IO lookup per module ID, an easy way to "shim" old libraries, and to load more than one version of a module for use by different modules. That covers the 80% case for using old and new code with a module loader.
By using a declarative configuration that is supported by the "default" module ID resolution mechanism in the language, there is no need to ship a userland loader library for browser use. This is a big win because it will help kill AMD.
It is fine if the Module Loaders API still has an imperative API to set up different module ID resolution logic. That would allow Node to maintain its current multiple IO, nested directory lookup logic. However, the default should favor the harsher browser environment in such a way that an extra loader library is not needed.
Loader plugins
I can appreciate that supporting Loader plugins may seem out of scope for the default module loader, but they have been incredibly useful for AMD. Node has seen a use for them, even though they are done in a different way via require.extensions. They effectively allow use of transpiled languages.
I find the AMD loader plugins better than Node's approach because:
- load behavior vs. file format: since a prefix is used on the resource ID instead of just a file extension suffix, it allows multiple plugins to deal with the same type of file extension. For example, "text!index.html" and "template!index.html" can be used in the same app, the first one just giving the raw text, the second one "compiling" the text for use as a template. The developer, not the plugin provider, chooses the right use. It still allows "single extension" plugins too, and for those, the file extension can be omitted in the ID, so there is no increase in ID length.
- one IO lookup: for a resource ID "foo", Node may do a lookup for "foo.js", "foo.coffee" and "foo.node". By specifying the loading mechanism via the prefix, it avoids multiple IO lookups, which is important for browser use. It also makes it clear what handles the loading.
AMD loader plugins can participate in build steps, so the "text!" plugin can inline the text as a module in a built file:
define('text!index.txt', function () {
    return 'hello world';
});
That is incredibly useful for getting good network performance in the browser.
Even for local file environments like Node, being able to combine all the assets for a program into one file is really great for distribution. It is conceptually simpler to reason about tracking one file vs. "nested directory of directory" installs. Think of it as a way to easily share shell scripts.
The middle way
For developer workflow, right now AMD is a better alternative than ES harmony modules, given the choices around compile time linking, new syntax, and the use of imperative ID resolution.
Here are some suggestions on how to allow some of the benefits of the compile time approach along with the run time ones used by AMD. The goals: reuse non-module code in modular systems, allow a way to get a static version of import *, and perhaps even macros.
Fetch dependencies, execute, modify, execute
The core of the middle way for compile time vs run time:
- do not force compile time operations to be all up front, before any evaluation.
- evaluate dependencies before executing the current module.
- provide an API for modules, not just new syntax.
These are effectively what AMD does today, except it does not have a way to alter the AST before final execution. Well, an AMD loader could do that, but AMD loaders have traditionally avoided it. However, the harmony loader plugin I made effectively does this to support "import *". More below:
The ES module loader would operate like so:
- Load the JS text. Parse out dependency references.
- Load the dependencies, parse out their dependencies, load them, etc.
- Before executing a given module, execute its dependencies, and wait for the dependencies to finish exporting their module values.
- Take that exported value, and if there is an "import *" in the current module, modify the AST of the current module so that it gets a locally bound variable for each own property of the dependency that is known at that time. So, any properties added to the module after this point are not visible. This should avoid concerns about dynamic scope.
- Once the module AST has been fixed up for any import *, then evaluate it.
When parsing out dependency references, look for any new keywords, but also any API that corresponds to that keyword. So, look for at('moduleID') for dependency references in addition to at 'moduleID'.
The runtime API for the module should be something like at('moduleID') for dependencies and exports.propertyName for specifying export properties. I am not arguing for that specific API, just mentioning that there would be an API alternative to the new syntax. The API alternative does not need an import alternative though.
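Schematically, with that hypothetical at() API standing in for the new syntax:
// recognized by the same static parse as the `at 'math'` syntax form:
var math = at('math');          // dependency reference
exports.sum = function (a, b) { // export property
    return a + b;
};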
Since a module is executed before giving the exports to a module that depends on it, and since there is a runtime API for modules, then that allows existing JS code to opt-in to ES modules without getting bitten by new syntax.
Since all dependencies are executed before executing the current module, an "import *" can be supported, and I believe that would allow for macros later.
There are some limitations around circular dependencies, but they are still possible, and the restrictions are minor in comparison to allowing existing code to opt in to ES modules and still work in non-ES module environments.
Declarative configuration
Support something like the "paths", "map" and "shim" config as used in RequireJS. This allows easier use of old code, and scales up to very large code without requiring a developer to ship a library that sets up an imperative resolution API.
Support loader plugins
Since all dependencies are executed before the current module executes, it is easier to support loader plugins: the loader will have the exported value for a plugin resource before running the current module.
This enables environment-based loading, like an "env!" plugin that can load a module for Node and a different API-compatible one for the browser. See also a "has!" plugin for feature detection-based loading, and plugins to enable transpilers.
Yes, it is more to sort out, but they provide a lot of benefit. AMD has already primed this pump. It even works with a build/optimization step for inlining resources.
Use string IDs for module identifiers
This allows the module references in dependencies to be the same as the ID that is inlined when modules are combined and named in built files. Right now it is weird to use a JS identifier, like module Foo {} to name a module, but then see module Foo at "Foo". It is hard to match up at "Foo" with module Foo.
The extreme positions
The following is based on my limited experience. I am not a language designer. I am but a simple plumber that uses the pipes that are available to build things. I may not have the right long term thinking involved, but I think the following would make the ES module proposal simpler.
To be clear though, I think the middle way above is enough to bridge the gap. Please, do not read the following and then discount the middle way. The middle way is separate from these more extreme measures.
No new syntax
If there is a runtime API available to allow existing code to opt in, just shed the new syntax. Just have one way to do it via API that can also then be shimmed.
No import *
import * makes it difficult to determine where code comes from. If this type of construct is allowed:
module foo {
    var sin = function () {};

    module bar {
        import * from "Math";
        sin();
    }
}
for a minifier, it now needs access to Math to do its work correctly. This has not been the case in the past. It would suck to need all of the code for all the modules used in a system just to complete a minifier pass.
For developers, if you have two modules that do import * it can be difficult to know where something comes from.
Destructuring provides enough benefit for these use cases; just do the comma separated list for things you really use:
import {sin, cos} from "Math";
import * is a bad pattern and it does not save much.
If you get rid of import *, then with the "middle way" of evaluating modules, regular var/let-based destructuring is enough; there is no need for an import keyword.
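Under that evaluation model, the dependency's export is a plain object by the time the module body runs, so ordinary destructuring covers it (shown with a hypothetical from() API):
// 'Math' has already executed, so this is plain destructuring:
let { sin, cos } = from('Math');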
No macros
Similarly, rethink the need for macros long term. They suffer from the same "where did this come from" problem as import * does. The function capabilities in JavaScript are good enough to get the job done for the "don't repeat yourself" task.
A way forward for today's code
The nice thing is that we can prototype this new world by combining what CommonJS/Node does today with AMD. So we can just use the require() and define() as used today to get there. The ES committee does not have to ratify it, and we get the benefit of real world implementation and use before committing to default language support.
Cajon is my attempt from the AMD side to bridge the gap with plain Node code and a runtime browser loader. LinkedIn's Inject is another AMD loader that uses a similar approach. So, just use CommonJS/Node modules in the browser in dev, use the r.js optimizer to compile down to AMD for final deployment.
The cjsTranslate capability in the r.js optimizer lets a developer who always likes to do builds, even in dev, code in Node syntax but output to AMD and load it in the browser, either via the small Almond AMD shim or via the full dynamic loader, RequireJS. Or choose Dojo or curl.js.
If Node adds define() support, a callback-require for use within a module for dynamically calculated dependencies, and at least a limited form of loader plugins, then we're done. The amdefine project is an implementation proof of that support. There are details to sort out, but it is doable. If any Node committers are interested, give me a holler. We can work out the details.
Summary
For developer workflow, the current ES module spec is not competing well with a combination of CommonJS/Node and AMD with loader plugins. Or even just AMD with loader plugins.
Using the middle way for module execution and getting a good declarative module ID to path configuration in the ES spec will level the playing field. Add loader plugins to get language transpiler support and environment/feature detection loading that is efficient for the browser.
I have given some of this feedback to the es-discuss list, but I think some of it, in particular the "middle way" module evaluation flow, got lost in my poor communication where it seemed like I was proposing a dynamically scoped import *. Hopefully this post clarifies what I was trying to achieve with that earlier feedback.
Finally, I appreciate that working on the ES committee is very difficult, and I do not envy them. I do not mean for this feedback to come across harshly, but the committee is running out of time, and I do not feel it has made the case very well for how what is being proposed is better than what we have cobbled together with existing technology. To be clear, I want an ES Modules proposal to succeed because I do not want to do AMD or RequireJS forever. Hopefully this feedback can be viewed as loyal opposition, and as a challenge to do better, or at least to explain it more clearly.
Modules are one of those things that seem very simple, but involve quite a lot of decisions and tradeoffs. This post is mostly just about module linking and module ID resolution, and even with that, it is quite long.
If ES Modules do not come up with different ways to work (or maybe explain where I have it wrong), they are not competing well with what can be done with a combination of CommonJS/Node and AMD.
My background: I work on RequireJS and AMD.
What is it
First, some links to the specs. The "harmony" moniker means it is in process for the next version of the ECMAScript (JavaScript) language:
- Harmony Modules: The basic module spec.
- Harmony Module Loaders: Specifies how you can create isolated loaders for modules. Very useful.
The module examples page is suggested if you want to get a feel for it, but it is good to read the other docs too. It can be a bit daunting though, unless you speak the spec language.
Points of reference
One way to evaluate how ES Modules works is to compare it to something you may already know:
- CommonJS / Node. Node implements a version of the CommonJS module API.
- AMD / RequireJS. RequireJS implements the AMD module API.
Run time vs compile time
ES is a "compile time" approach where the formats mentioned above are "run time" approaches. Maybe not precise terms, but here is a definition of what is meant by those terms for the purposes of this post:
"Compile time" means:
- JS text is parsed, and the "module" "import", and "export" syntax is found.
- Any dependencies are fetched and parsed.
- Once the dependency tree has been all fetched, the ES module loader will wire up the exports from a dependency to a module's "module" or "import" use, and do type checking on that export type and how it is referenced in the module
- The module code is then evaluated/executed.
"Run time" means: there is usually no pre-parse stage. The JS text is evaluated, and any module API that is encountered is run as it is encountered.
AMD will actually do a small parse step if the module looks like:
define(function (require) {
var a = require('a');
});
In that case, it will parse the function to look for require() dependencies, and then load and execute the dependencies first before running the function above.
"Compile time" was chosen for ES because:
- it is familiar from other scripting languages
- sets the way for other possible static features, like macros
- ensures that "import *" are static, *not* dynamic bindings
- allows some type checking on the values that are explicitly "export"ed.
- generally seen as safer and easier to reason about that run time.
The CommonJS/Node style of pure runtime, no parse, was hard to get to work with some edge cases as I understand it, but I heard that second-hand, I did not see that discussion.
Impact of compile time
For compile time to work well, it should use new keywords in the language, to have clear markers on what is participating in the module system.
Although, it could work with a module API instead of new syntax, by only recognizing literal use of that API, and do not support variable assignment of the API or dependencies to other names. This is what AMD does for the "sugared CommonJS form" mentioned above.
For "import *", static binding is critical because anything that is a runtime scope lookup gets into "with" territory, and "with" has been seen as a mistake by the committee. ES5's 'use strict' bars its use.
Since new syntax is involved, ES Modules cannot be "shimmed" into existing JS libraries. There is a Module Loader "runtime" registration call that can used for a module to register its, but it means those libraries cannot participate in the compile time linking stage, so they need to be pre-loaded by a script loader before an ES Module can effectively reference it with module syntax.
Module ID resolution
One other consideration, one that is usually overlooked when talking about modules, is how a module ID like "jquery" is resolved to a file path and loaded.
Both CommonJS/Node and AMD/RequireJS support "short, logical names" for dependencies. So, you can say require('jquery') and that jquery gets converted to a path using some algorithm. Node uses multiple paths to find jquery.js, and AMD in the browser relies on a declarative configuration to do so.
ES modules do not really have support for this, unless you also implement an imperative resolver. They support full URLs, like:
module foo at 'http://example.com/scripts/foo.js'
but we have found in AMD that it is useful to be able to say require('jquery'), but then declaratively map that to zepto.js.
So, an individual module specifies a dependency on an API provider, but how that provider is satisfied is resolved using the declarative configuration.
If there is only an imperative resolve API, no simple declarative API to resolve short names, it will mean shipping a userland "loader library" to effectively use modules. This opens the door to balkanization in module ID resolution since there is not built in support.
Special factors in JavaScript
There are a few special factors with JavaScript that are not usually in other programming languages, and they have an impact on the design:
- The largest deployed use case of JavaScript, the browser, is async, network IO. File size and number of requests are very important to performance. So combining modules together into one file, and minifying/transforming the source for smaller delivery is common.
- There is a large legacy codebase of browser-based JavaScript that just use browser globals, and no real module format. Some small uses of JavaScript do not need modules, and browsers will support those use cases indefinitely.
My goals
I want AMD and RequireJS to go away.
They solve a real problem, but ideally the language and runtime should have similar capabilities built in.
Native support should be able to cover the 80% case of RequireJS usage, to the point that no userland "module loader" library should be needed for those use cases, at least in the browser.
If the ES module format requires a web developer to use a script loader to use existing, non-AMD/CommonJS, non-ES JS in a project for those 80% use cases, it is a failure.
Example: If I cannot use jquery and backbone in an ES 6 module without needing another library to preload or prep those libraries for ES 6 module use, then existing JS users will not see much advantage over using AMD.
If the web developer needs to code any imperative logic to wire up the ES Module Loader, that will result in a loader library. That is a failure condition.
As compared to AMD: if the ES approach cannot do the above without a helper loader library and the ES approach does not allow something like loader plugins, then there is no contest -- AMD will still be more useful to a developer than the built in system. Small savings in the amount to type and a thin layer of type checking is not enough.
This may very well not be the goal of ES modules, and it would be great if the specs or some background material acknowledge that, and list out the mitigation strategies developers are expected to use.
Shortcomings of ES modules
Right now, ES harmony modules do not improve an AMD user's workflow because of the following:
New syntax makes it very hard to optionally upgrade
If I am the author of something like jQuery or Backbone, I cannot optionally add in a way to register as an ES module because ES modules use new syntax. However, there are many uses of those libraries which will not be in ES module-capable browsers.
The Node and AMD communities have found optional opt-in via a runtime API very useful for adoption of code that works with their module systems, but still work in older "use plain script tags with browser globals" approach.
There is a runtime API in the ES module loader proposal that would allow a legacy script to register something as a module, but that requires the end developer to use another script loader library to load that library so it can do that runtime call, then start loading ES module code.
The developer may as well just stick with AMD. Complexity has not been reduced.
Register module and a global
Backbone originally had trouble adopting AMD because if it called define() to register a module, they found other libraries, like Backbone plugins, would break. The plugins were expecting to find a Backbone global variable but when Backbone called define() it was not also exporting a global.
This same problem will exist in ES-mixed code. Any dynamic registration also needs to allow an export of a global so that downstream libraries will work until they are also converted to optional module registration.
There should be a migration path, one that allows gradual rollout of modules without requiring a project to go whole hog on module syntax.
Declarative module ID resolution
While I have made tools to allow a developer to "convert" an existing library to AMD, there are many developers that did not want to touch existing libraries. It makes it difficult to compare against new versions and there is a concern that the conversion introduces breaking scope changes (rightly so).
So the "shim" configuration for requirejs was introduced to allow specifying dependencies and an export value for JS code that does not call a module API. This has been well received in the community. More background on shim here.
"shim" with "paths" and "map" make it possible to declaratively set up a configuration that allows for one file IO lookup per module ID, an easy way to "shim" old libraries, and to load more than one version of a module for use by different modules. That covers the 80% case for using old and new code with a module loader.
By using a declarative configuration that is supported by the "default" module ID resolution mechanism in the language, then it avoids having to ship a userland loader library for browser use. This is a big win because it will help kill AMD.
It is fine if the Module Loaders API still has an imperative API to set up different module ID resolution logic. That would allow Node to maintain its current multiple IO, nested directory lookup logic. However, the default should favor the harsher browser environment in such a way that an extra loader library is not needed.
Loader plugins
I can appreciate that supporting Loader plugins may seem out of scope for the default module loader, but they have been incredibly useful for AMD. Node has seen a use for them, even though they are done in a different way via require.extensions. They effectively allow use of transpiled languages.
I find the AMD loader plugins better than Node's approach because:
- load behavior vs. file format: since a prefix is used on the resource ID instead of just using a file extension suffix, it allows multiple plugins to deal with the same type of file extension. For example, "text!index.html" and "template!index.html" can be used in the same app, the first one just giving the raw text, the second one "compiling" some text for use as a template. The developer, not the plugin provider, chooses the right use. It still allows "single extension" plugins too, and for those, they can omit the file extension in the ID, so no increase in ID length.
- one IO lookup: For a resource ID "foo", node may do a lookup for "foo.js", "foo.coffee" and "foo.node". By specifying the loading mechanism via the prefix, it avoids multiple IO lookups, which are important for browser use. It also makes it clear what handles the loading.
AMD loader plugins can participate in build steps, so the "text!" plugin can inline the text as module in a built file:
define('text!index.txt', function() {
return 'hello world';
});
That is incredibly useful for getting good network performance in the browser.
Even for local file environments like Node, being able to combine all the assets for a program into one file is really great for distribution. It is conceptually simpler to reason about tracking one file vs. "nested directory of directory" installs. Think of it as a way to easily share shell scripts.
The middle way
For developer workflow, right now AMD is a better alternative than ES harmony modules, given the choices around compile time linking, new syntax, and the use of imperative ID resolution.
Here are some suggestions on how to combine some of the benefits of the compile time approach with the run time approach used by AMD. The goals: reuse non-module code in modular systems, allow a static version of import *, and perhaps even enable macros.
Fetch dependencies, execute, modify, execute
The core of the middle way for compile time vs run time:
- do not force compile time operations to be all up front, before any evaluation.
- evaluate dependencies before executing the current module.
- provide an API for modules, not just new syntax
These are effectively what AMD does today, except it does not have a way to alter the AST before final execution. Well, an AMD loader could do that, but AMD loaders have traditionally avoided it. However, the harmony loader plugin I made effectively does this to support "import *". More below:
The ES module loader would operate like so:
- Load JS text. Parse out dependency references.
- Load dependencies, parse out their dependencies, load them, etc...
- Before executing a given module, execute its dependencies, and wait for the dependencies to finish exporting their module values.
- Take that exported value and, if there is an "import *" in the current module, modify the AST of the current module such that it gets locally bound variables for the own properties of the dependency that are known at that time. So, any properties added to the module after this point are not visible. This should avoid concerns about dynamic scope.
- Once the module AST has been fixed up for any import *, then evaluate it.
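Here is a toy, runnable illustration of that snapshot behavior, using a plain object to stand in for a module's exports (all the names are made up for the example):
var mathExports = { sin: Math.sin };   // dependency has executed, exporting sin

function bindImportStar(dep) {
    // copy the own properties known at link time into local bindings
    var bindings = {};
    Object.keys(dep).forEach(function (key) {
        bindings[key] = dep[key];
    });
    return bindings;
}

var local = bindImportStar(mathExports);
mathExports.cos = Math.cos;            // added after linking: not visible

console.log(typeof local.sin);         // "function"
console.log(typeof local.cos);         // "undefined"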
When parsing out dependency references, look for any new keywords, but also any API that corresponds to that keyword. So, look for at('moduleID') for dependency references in addition to at 'moduleID'.
The runtime API for the module should be something like at('moduleID') for dependencies and exports.propertyName for specifying export properties. I am not arguing for that specific API, just mentioning that there would be an API alternative to the new syntax. The API alternative does not need an import alternative though.
Since a module is executed before giving the exports to a module that depends on it, and since there is a runtime API for modules, then that allows existing JS code to opt-in to ES modules without getting bitten by new syntax.
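Sketching the opt-in with the suggested at()/exports API (again, a hypothetical API shape, not a real one):
// dependency reference via API instead of the `at "Math"` syntax
var math = at('Math');

// export a property via the runtime exports object
exports.doubleSin = function (x) {
    return math.sin(x) * 2;
};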
Since all dependencies are executed before executing the current module, an "import *" can be supported, and I believe that would allow for macros later.
There are some limitations around circular dependencies, but they are still possible, and the restrictions are minor in comparison to allowing existing code to opt in to ES modules and still work in non-ES module environments.
Declarative configuration
Support something like the "paths", "map" and "shim" config as used in RequireJS. This allows easier use of old code, and scales up to very large code without requiring a developer to ship a library that sets up an imperative resolution API.
Support loader plugins
Because all dependencies are executed before executing the current module, it is easier to support loader plugins: the loader will have the exported value for a plugin resource before running the current module.
This enables environment-based loading, like an "env!" plugin that can load one module for Node and a different, API-compatible one for the browser. See also a "has!" plugin for feature detection-based loading, and plugins to enable transpilers.
Yes, it is more to sort out, but they provide a lot of benefit. AMD has already primed this pump. It even works with a build/optimization step for inlining resources.
Use string IDs for module identifiers
This allows the module references in dependencies to be the same as the ID that is inlined when modules are combined and named in built files. Right now it is weird to use a JS identifier, like module Foo {} to name a module, but then see module Foo at "Foo". It is hard to match up at "Foo" with module Foo.
The extreme positions
The following is based on my limited experience. I am not a language designer. I am but a simple plumber that uses the pipes that are available to build things. I may not have the right long term thinking involved, but I think the following would make the ES module proposal simpler.
To be clear though, I think the middle way above is enough to bridge the gap. Please, do not read the following and then discount the middle way. The middle way is separate from these more extreme measures.
No new syntax
If there is a runtime API available to allow existing code to opt in, just shed the new syntax. Just have one way to do it via API that can also then be shimmed.
No import *
import * makes it difficult to determine where code comes from. If this type of construct is allowed:
module foo {
    var sin = function () {};
    module bar {
        import * from "Math";
        sin();
    }
}
then a minifier needs access to Math to do its work correctly. This has not been the case in the past. It would suck to need all of the code for all the modules used in a system just to complete a minifier pass.
For developers, if you have two modules that do import * it can be difficult to know where something comes from.
Destructuring provides enough benefit for these use cases; just use a comma-separated list of the things you really need:
import {sin, cos} from "Math";
import * is a bad pattern and it does not save much.
If you get rid of import *, then with the "middle way" of evaluating modules, regular var/let-based destructuring is enough: there is no need for an import keyword.
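With the hypothetical at() API from the sketch above and the proposed destructuring syntax, that could be as simple as:
var { sin, cos } = at('Math');   // plain destructuring, no import keyword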
No macros
Similarly, rethink the need for macros long term. They suffer from the same "where did this come from" problem as import * does. The function capabilities in JavaScript are good enough to get the job done for the "don't repeat yourself" task.
A way forward for today's code
The nice thing is that we can prototype this new world by combining what CommonJS/Node does today with AMD. So we can just use the require() and define() as used today to get there. The ES committee does not have to ratify it, and we get the benefit of real world implementation and use before committing to default language support.
Cajon is my attempt from the AMD side to bridge the gap with plain Node code and a runtime browser loader. LinkedIn's Inject is another AMD loader that uses a similar approach. So, just use CommonJS/Node modules in the browser in dev, use the r.js optimizer to compile down to AMD for final deployment.
The cjsTranslate capability in the r.js optimizer allows a developer who likes to always do builds, even in dev, to code in Node syntax but output AMD, and load it in the browser either via the small almond AMD shim or the full dynamic loader, RequireJS. Or choose Dojo or curl.js.
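A minimal r.js build config sketch that turns this on (the file names are placeholders):
({
    baseUrl: 'lib',
    name: 'main',
    out: 'main-built.js',
    // wrap CommonJS/Node-style modules in define() during the build
    cjsTranslate: true
})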
browserify can be updated to use AMD as its transport format instead of its home-grown require.define() API, and then not have to ship a loader, but use one of the AMD loaders/API shims. browserify is nice in that, unlike the r.js optimizer+cjsTranslate, it provides browser module shims for the native node modules. It would be great to break those out as a separate project that could be consumed by a project just using the r.js optimizer.
If Node adds define() support, callback-require for use within a module for dynamically calculated dependencies, and at least a limited form of loader plugins, then we're done. The amdefine project is an implementation proof of that support. There are details to sort out, but it is doable. If any Node committers are interested, give me a holler. We can work out the details.
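The amdefine boilerplate gives the shape of it: a module that runs in plain Node today but registers itself via define():
if (typeof define !== 'function') {
    // plain Node: pull in the define() shim
    var define = require('amdefine')(module);
}

define(function (require, exports, module) {
    var path = require('path');
    module.exports = function (fileName) {
        return path.basename(fileName);
    };
});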
Summary
For developer workflow, the current ES module spec is not competing well with a combination of CommonJS/Node and AMD with loader plugins. Or even just AMD with loader plugins.
Using the middle way for module execution and getting a good declarative module-ID-to-path configuration in the ES spec will level the playing field. Add loader plugins to get language transpiler support and environment/feature detection loading that is efficient for the browser.
I have given some of this feedback to the es-discuss list, but I think some of it, in particular the "middle way" module evaluation flow, got lost in my poor communication where it seemed like I was proposing a dynamically scoped import *. Hopefully this post clarifies what I was trying to achieve with that earlier feedback.
Finally, I appreciate that working on the ES committee is very difficult work. I do not envy them. I do not mean for this feedback to come across harshly, but the committee is running out of time, and I do not feel it has made the case very well for how what is being proposed is better than what we have cobbled together with existing technology. To be clear, I want an ES Modules proposal to succeed because I do not want to do AMD or RequireJS forever. Hopefully this feedback can be viewed as loyal opposition, and as a challenge to do better, or at least to explain the direction more clearly.
Wednesday, June 20, 2012
volo 0.2.1 released, using volo as a library
volo 0.2.1 has been released. This is a bug fix release; if you are using 0.2.0, you are encouraged to upgrade. Here is the list of fixes. To install:
npm install -g volo
Also new: the ghvolo project. It shows how to use volo as a library to just resolve dependency IDs to github-hosted resources. This is useful if you are building a command line interface for fetching front end dependencies and you want to use the same github resolution logic as volo, but without using volo to do the actual installation of dependencies.
ghvolo is a command line tool that supports "search" and "resolve". Nothing is installed by using ghvolo. It just uses volo as a library to resolve IDs to some JSON data. See the README for more information and some sample commands.
Wednesday, June 13, 2012
Cajon: a browser module loader for CommonJS/node/AMD modules
I just released the first version of Cajon. From the README:
Cajon is a JavaScript module loader for the browser that can load CommonJS/node and AMD modules. It is built on top of RequireJS.
You can use it to code modules for your project in CommonJS/node style, then use the RequireJS Optimizer to build all the modules into an AMD-compliant bundle. This allows you to then use a small AMD API shim, like almond, to get nicely optimized code without needing a full runtime loader.
See the README for more information and restrictions.
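For a feel of it, here is a module in the CommonJS/node style that Cajon can load directly in the browser during dev (./store is a placeholder dependency):
// app/main.js: no define() wrapper needed during development
var store = require('./store');

module.exports = {
    save: function (item) {
        store.put(item);
    }
};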
Why do this?
It is an experiment, to see what people like to build with. The attributes of AMD are needed for any comprehensive JavaScript module solution, but some people really like sugar, and the sugared form of AMD may not be enough for them. They also may want to use existing CommonJS/node modules as-is but still want to get a good, optimized/built story, and something that works well cross-domain in the browser.
This experiment goes along with the latest requirejs 2.0.2 optimizer setting, cjsTranslate, which automatically converts CommonJS/node modules to have a define() wrapper so they can be consumed by the optimizer. This would allow you, for example, to build a node command that watches your js lib folder as you make changes to modules in the CommonJS/node format, and builds them into an optimized AMD bundle.
End result, if you cannot bring yourself to use AMD:
If you do not want to do builds during your CommonJS/node-based module development, use Cajon. If you like doing builds, you can now use the RequireJS optimizer (with the almond AMD shim) to do that.
To be clear: CommonJS/node modules as-is are not enough for a comprehensive JS module solution. These tools allow you to use them though and fill in the gaps by "compiling down" the code to AMD.
Tuesday, June 12, 2012
RequireJS 2.0.2 released
RequireJS 2.0.2 is available.
More bug fixes, thanks to the community for really putting it through its paces.
Notable change for the optimizer: The "dir" output directory is now deleted before each build run, if it exists. This is done to avoid problems with transforms via onBuildRead/onBuildWrite that made it difficult to do incremental builds. If you want to keep the "dir" directory between optimizer runs, then set "keepBuildDir" to true.
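So an incremental-build setup is just one extra option in the build config (a sketch; other options omitted):
({
    dir: 'www-built',
    // keep the output directory between optimizer runs
    keepBuildDir: true
})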
Complete list of changes:
Thursday, June 07, 2012
volo 0.2.0 released
volo 0.2.0 has been released. volo is a front end JavaScript package manager, which uses github as the source repository. It can install whole project templates, and it understands volofiles for doing project-specific task automation.
The 0.2.0 release is significant in a few ways:
1) It is now only delivered via npm. So to install it:
npm install -g volo
2) The format of volofiles has changed. The old format will still work in this release, but to get the new, shorter "shell string" support or to reuse volo commands that can be installed via npm, you will need to convert your volofile to the new format. Some details are in the conversion ticket, but see the "Creating a volofile" and "Creating a volo command" pages for more information.
3) "volo add" now recursively adds dependencies. It does not do fancy conflict resolution -- if a dependency with the same name is already installed in that location, then that existing dependency is used.
Recursive dependency installation only works if your package.json has a volo.dependencies section (or there is a shim for the project you want to use in the shims repo). The backbone library has a shim entry, so you can try it out by doing:
volo add backbone
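For your own project, a hypothetical sketch of what such a section looks like (the dependency values follow volo's GitHub ID format):
{
    "volo": {
        "dependencies": {
            "underscore": "documentcloud/underscore",
            "backbone": "documentcloud/backbone"
        }
    }
}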
It is a big release, but there is more to do, particularly on console feedback and formatting. At least now the code structure is set better for the future.
Friday, June 01, 2012
RequireJS 2.0.1 released
RequireJS 2.0.1 is available.
A few fixes to clean up some edges from the 2.0 release. If you are using 2.0.0, you should upgrade to 2.0.1 right away. Changes:
Thursday, May 31, 2012
Package Management for Front End JavaScript
I have been working on volo, which is a real, working attempt at providing a solution for front end JavaScript package management.
It seems like more people are starting to look at this, and here are my suggestions on what the solution should look like. I do not expect everyone to use volo, but if we all agreed to some basics, then it will allow some easier interop (see the end of this post).
Resist the temptation to make your own registry
One of the harder parts is to agree on a registry for code. With volo I have gone with using GitHub because:
- It already exists and works/scales.
- It has social/feedback tools and search.
- Used by many JS libraries, it is already part of developer workflow.
It would be easy to replicate a stand-alone server that has the same API later if it seemed like GitHub as the registry did not make sense, but by using GitHub and its API, some of the mundane bikeshedding over API and registry construction goes away.
The "owner/repo" IDs with github, vs just the "repo" single namespace that is in something like npm, is a distinct benefit to have for a registry. Forks should be possible, with the default search on just "repo" giving the most popular version, which is usually the original repo. Given GitHub's social tools, they have a way to measure popularity, and it seems to work out pretty well.
Easy convention
Try to find a convention so that configuration is not required. The basic convention that volo uses: it checks for any explicit configuration (see below) but if none is found, it pulls down the zipball of a version tag, and if there is one JS file at the top level of the zipball, that is the script that is installed.
There is a bit more to the convention, but this basic convention works for many JS libraries. It helps encourage providing small libraries that can be composed together well with other libraries in other repos.
Easy configuration
The above convention does not work for every project. Some folks do not want to host their code on github, and some projects need to do a "build" step to deliver the final code, and like hosting that built code outside the git repository. So there needs to be a way to configure what to download for a dependency.
volo uses a package.json property, volo.url, to find it. Example for jQuery:
{
    "volo": {
        "url": "http://code.jquery.com/jquery-{version}.js"
    }
}
It supports {version} substitution with a version value from a version tag.
More is documented in the package.json page. volo also understands a volo.archive, but I want to reduce that config to just volo.url, and have it do content type detection to know if it is a single JS file or an archive zip/tarball.
Do not require server devs to change
volo has a "shims" repo it checks if a library does not have the package.json "volo" property it is looking for, so it makes it easy to bootstrap new libraries into the system without requiring the library author to do anything.
Of course it is best and more distributed if the library author supports the package.json property/properties directly, but for now it has been easy to consume scripts without getting complete buy-in from library authors.
Let's coordinate
I would love it if there were other tools besides volo for this functionality, as long as we agreed to the above GitHub bootstrapping and some package.json properties we can all read and understand.
In particular, I do not want these properties under a "volo" name in the package.json. I am doing that for now just so I do not claim a more generic name without any agreement with others. What is a name we can use instead? "frontend"? "browser"? I'm up for a more generic name so that we can open up the client tool building space.
I can be reached on gmail at jrburke. If you want a public space to talk, there is the volo list, but I am happy to talk on another list if that is preferable.
Monday, May 28, 2012
RequireJS 2.0 released, onward AMD
RequireJS 2.0 is available.
There is an in-depth upgrade guide to what is new, but the big ideas behind the release:
- Treat legacy, traditional "browser globals" scripts more like modules (but seriously, upgrade your libraries to optional AMD registration).
- Handle errors better.
- Even better large scale module usage support.
I am really grateful for the quality and quantity of community feedback that prompted the 2.0 release, with community members paving the way for core changes by doing implementations via the loader plugin API.
Plugin APIs are awesome, and the loader plugin API is one of the great strengths of the AMD module ecosystem. In addition to reducing the "pyramid of doom" that can occur in async resource fetching, it helps try out ideas for the loader, and supports transpiled languages like CoffeeScript.
I sometimes see some confusion about AMD modules and RequireJS, and I want to take a moment to reiterate the problem they are trying to solve.
The AMD API is about getting workable module syntax in JavaScript that suits the async, networked nature of browsers without the hidden costs down the road (no eval, no CORS/cross-origin concerns, no transform needed to ship code).
Node's module API and something like browserify are not enough for browser modules. They work for a certain class of problems, but to be a complete solution, a standardized callback-based require for on-demand async loading after initial module load, and a wrapped module format that supports named modules for bundling, are needed at a minimum. Loader plugins are also incredibly useful for reducing the async "pyramid of doom" for resources that are needed as part of module initialization. If node were able to integrate something like amdefine into its core, that would really make it a complete JS module solution.
ECMAScript harmony modules account for the async network IO in browsers, and it has the ability to change the internals of browser script loading so that it will not hit the eval and CORS issues that node/commonjs modules have. But it is not done, and still has quite a few kinks to work out. It will still have the same "how do I use browser globals scripts that do not declare their dependencies" issues that people see when starting to use AMD modules with older scripts. The shim config in RequireJS 2.0 is an attempt to ease that problem, and I hope that AMD and RequireJS can continue to help inform the ECMAScript effort.
So until node closes the gap on some things needed for browser loading, or ECMAScript harmony modules work out the kinks and ship, there is AMD, and RequireJS aims to be the reference implementation for the AMD APIs.
If you want to support AMD loaders in your library code, there are some code templates that can help you do that in a way that still allows your code to work in traditional "browser globals" situations.
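Those templates boil down to a pattern like this (myLib and the jQuery dependency are just illustrations):
(function (root, factory) {
    if (typeof define === 'function' && define.amd) {
        // an AMD loader is present: register as an anonymous module
        define(['jquery'], factory);
    } else {
        // no loader: fall back to a browser global
        root.myLib = factory(root.jQuery);
    }
}(this, function ($) {
    return {
        version: '0.1.0'
    };
}));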
AMD: the worse JS module API, except for all the others. Because it works better in more cases.
Thursday, May 03, 2012
Web app template update, now with GitHub auth
This is a quick update about the latest version of the web app template I have been working on.
You can check out some background on this app template from the previous post.
The big change since that last post: Dan Mosedale had some good feedback about treating GitHub Pages as a deploy target, so I changed the template around to do that.
As part of these changes, volo has a new 0.1.2 release that supports GitHub authorization for OAuth-based API calls. The app template uses this to create a GitHub repo on the fly for GitHub Pages deployment.
The nice thing about this setup is that the app creation is simpler (no more questions), and you can decide later to deploy to GitHub pages without needing to make the decision up front:
See the video for a more complete walkthrough, but basically to get started now it just looks like this:
volo create myproject volojs/create-responsive-template
cd myproject
volo appcache (generates the appcache-enabled build)
volo ghdeploy (copies the built contents and pushes them to GitHub Pages)
Useful links from the video:
- create-responsive-template - the web app template.
- jrburke/gaia-devserver - easy way to serve the Boot to Gecko Gaia UI in Nightly Firefox on your Desktop.
- Getting Started with the Mozilla Web Apps API - for installing apps into Gaia.
Friday, April 27, 2012
Draft plan for RequireJS 2.0
I outlined what I think RequireJS 2.0 will look like. It includes links to some working code, but the code is provided more to prove out the ideas in the wiki page. The code is still considered experimental.
See the link on that wiki page to provide feedback.
Friday, April 20, 2012
RequireJS 1.0.8 released
RequireJS 1.0.8 is available.
It has been about two months since the last release, and there were some issues in the queue that would be nice to have fixed while I work on the requirejs.next release.
1.0.8 Release Notes
On requirejs.next, I have started "dev2.0" branches to play with the next bigger release for RequireJS and the r.js optimizer. The support for the AMD APIs will be stable. The main focus is on reevaluating some of the configuration options for the loader and how and when it resolves modules internally.
I was thinking of doing the changes as part of a 1.1 release, but since I am considering removing some config options, I am doing the work in "2.0" dev branches while I experiment, because semver conventions ask that the major version change when there are backwards-incompatible changes.
But again, those incompatible changes would only be around the configuration options used for the loader. The AMD API support will remain stable. I am also not sure how much will actually change, still experimenting with implementation. I will post a longer update once I feel confident of a plan going forward, and when it will be easier to give feedback on the direction.
Friday, April 13, 2012
Web apps on GitHub Pages that install into Gaia
This is part of my exploration into making apps with the web. This particular example explores:
- Trying out the Boot to Gecko's (B2G) Gaia UI in Firefox Nightly
- Using the Mozilla Web Apps API to install a web app
- Hosting the appcache-enabled web app using GitHub Pages
A text summary of the video:
B2G and Gaia
B2G is Mozilla's mobile device operating system. It has a set of web apps called Gaia that show the home screen and some default apps, like a phone dialer.
I did not want to flash a phone to try out Gaia and the B2G ideas. Fortunately, Firefox Nightly is very similar to the Gecko version used in B2G. So by running Firefox Nightly, you can run Gaia on your desktop. Plus, you can do so without messing up your default Firefox installation.
This way you can try out the UI and to experiment with web apps and APIs that would allow you to make apps that install into Gaia. Firefox Nightly does not have all the device APIs, like the telephony APIs, but for most apps it has enough to get you started.
Gaia, and the web apps it supports, are all served from domains. There can only be one app per domain. Fortunately you can use subdomains. This works out well when hosting your web apps on GitHub Pages. You do need to obtain a domain, but then you can map your subdomains (a.domain.com, b.domain.com) to different GitHub Pages repos.
I used Node to run a dev web server that serves the Gaia UI from localhost. See the jrburke/gaia-devserver README for information on how to set that up.
Setting up your web app project
I used volo to create a new appcache-enabled, responsive web project using the volojs/create-responsive-template project template. When creating the template, you have the option to create one that works as a GitHub Pages project. volo also runs the build to generate the appcache-enabled build of the project. Since it uses appcache, the app could be usable even when a B2G device is offline.
Once I created the project, I ran git init and set up a gh-pages branch. Then I added the CNAME file so that I could configure a subdomain I own to point at GitHub Pages. Then I committed all the changes.
The web project also comes with tools/devserver.js to run a local web server that serves the dev version of the app. By modifying my local /etc/hosts file to point a dev test domain at my localhost, I could do development and try out the app install code.
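For example, a hosts entry like this (the domain is made up) points a dev test domain at the local machine:
127.0.0.1   myapp.dev.example.com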
Uninstalling
I did not show this in the video, but to uninstall the app, go to https://myapps.mozillalabs.com/, find the app, then click and hold the mouse button on it. An X will appear. Release the mouse button to uninstall the app.
For that to work, you need to give myapps.mozillalabs.com access to uninstall apps:
1) Close Firefox Nightly
2) edit gaia/profile/user.js
3) Add https://myapps.mozillalabs.com to the dom.mozApps.whitelist pref:
user_pref("dom.mozApps.whitelist","http://dialer.gaiamobile.org:8424,http://homescreen.gaiamobile.org:8424,https://myapps.mozillalabs.com");
4) Launch Firefox Nightly. Now you should be able to uninstall your app when going to https://myapps.mozillalabs.com/
End Result
Pretty neat. There are still some rough edges, but it is exciting to try out.
At some point, I want to trick out the project template to allow adding in Persona/BrowserID for identity hookup, but I will need to run a smart server to handle the user info. I also want to look more into IndexedDB for client side data storage.
In addition, it would be interesting to see how I could get this to work so it shows up in the Chrome Web Store and how it might fit in to the Windows 8 web app support.
Thursday, April 12, 2012
Making apps with the web
The pieces are coming together: you can use the web (HTTP, HTML, JS, CSS) to package focused pieces of functionality as apps, with the possibility to make money.
This post outlines how it is coming together, and how you can get involved. While I work in the Mozilla Labs group and I am sure this post will have a Mozilla slant, this is my personal outlook.
What is a web app
I am old school, so when I hear "web app" I associate that with the "more of an app style than document style" web page. Something more like GMail instead of the New York Times.
However, it seems to be getting a more focused definition, something that implies characteristics around the actual usage of the app. Here is a fuzzy definition of it:
A web app is a focused piece of functionality implemented using web technologies. These pieces of functionality are grouped in an "app dashboard".
In the ancient web times, the app dashboard was just your browser bookmarks. However, the newer dashboards have a richer relationship to the apps. There are app icons/sections that can change state, and with Windows Metro, even provide some data display services. There are other associated concepts like an app being purchased and working offline.
So, in a way, everything old is new again, but better and richer. The "native app" success on mobile devices has set the stage and helped define what should be possible.
While web apps may not be completely equivalent with the capabilities of native apps yet, the stars are moving into position.
Alignment of the stars
Using web apps to deliver functionality is coming together because of the following forces:
- Platforms and money
- Design practices
- Development tools
Here are the platforms where you can use the web to make apps, and how you can make money doing so.
Today's mobile: PhoneGap/Cordova
The PhoneGap approach works now, today. Apps are made with plain HTML/JS/CSS and wrapped up in a platform-specific binary that gives the code access to the device capabilities.
Cordova is the Apache-housed, open source core that PhoneGap is built on. Work in Cordova feeds into web standards discussions. The hope is to not need Cordova at all, but to have all the capabilities built into the web platform.
Make money the usual ways via the platform-specific app stores: charge up front, do freemium, in-app purchases, use ads.
Future mobile: Boot to Gecko (B2G)
Mozilla is working on a web-only mobile OS called Boot to Gecko (B2G). The great thing about this platform: it is just plain web technologies. All apps are fetched from URLs, and they can work offline through web technologies like AppCache.
The tagline for B2G is "the web is the platform". The "native" in B2G is the web. If the web is not sufficient in some way, Mozilla is putting real effort into improving it. Just like Cordova, the development is all out in the open. You can be the change you want to see by participating.
The B2G effort includes a set of default web apps, including a web app "home screen". This set of web apps is called Gaia. You can "install" your app into the home screen using the web app APIs, and Mozilla is working on a marketplace and APIs to allow other marketplaces for apps. There is an identity API via Persona, and efforts to work out other details like digital receipts. The apps roadmap page will help you get acquainted with some of the moving pieces if you want to get involved.
Right now the B2G code is still really early in development. It is a bit rough to get going. Think of it as getting access to iOS a year before the first iPhone shipped.
Desktop
Both Firefox and Chrome will know how to install web apps. The hope is that all the browser vendors can converge on some common APIs and get those uplifted into all browsers.
Mozilla is also working out a way to "install" a web app from your browser into your desktop operating system. A small OS app shell will be wrapped around a web renderer that is tied to that specific web app. For the end user, it just looks like any other app installed on their desktop.
You will be able to use web technologies for Windows 8 Metro apps and in Windows Phone. Microsoft is making the web stack a first class development option for their platform.
On the money side:
There is the Chrome Web Store for Chrome browsers, and Mozilla is working on a Marketplace. The hope is to work out open, standardized APIs that would allow other marketplaces and integration with any web browser. There is a Windows Phone Marketplace, and I fully expect a marketplace for Metro apps for Windows 8.
While marketplaces by themselves do not translate directly into money, they establish ways for users to buy functionality and discover new apps. They set up a fertile environment. Using ads and providing apps for free with money coming in from other channels will still be possible.
Design practices
There are a few design practices that fit well with web app development:
Mobile first
The number of mobile, non-desktop PC devices is growing like crazy. The "app" phase was brought on by these mobile devices. Focusing on these mobile experiences first sets you up to reach a very big market.
Focused execution
This falls out of the Mobile First design approach. When you start with mobile, the interfaces are usually more focused. This is a great way to do any app design -- make sure to distill the design problem to the smallest user goal possible, and then build up from there as necessary.
Responsive design
Different kinds of devices that have different resolutions and input methods are best served with some custom work done for each device. However, there is a lot of code that can be shared by moving away from a "pixel perfect" and device-specific look to more of a "web aesthetic" using responsive design.
URL-based design
"Mobile Apps Must Die" brings up some interesting points when apps are addressable via URLs. It opens up new discovery possibilities and different ways to "organize" your apps.
Developer tools
Here are some APIs that can help with web app plumbing:
- Application Cache: learn it. Yes, it has its quirks, but for serving offline UI, it is the state of the art.
- localStorage: for storing smaller name/value pairs of data offline (a quick example follows this list).
- IndexedDB: for storing larger amounts of data offline. Right now iOS/Android devices do not support IndexedDB, just the older, deprecated Web SQL. However, that should change over time. Lawnchair can provide an adapter layer if you need to target platforms with uneven storage support today.
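A quick example of the localStorage case (the "notes" key is arbitrary):
// save small app data offline as a JSON string
localStorage.setItem('notes', JSON.stringify([{ text: 'hello' }]));

// read it back later, defaulting to an empty list
var notes = JSON.parse(localStorage.getItem('notes') || '[]');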
I hope to share more tools and approaches as I discover them.
Under construction
As you jump into making web apps you will discover some rough edges. You may get frustrated. This is OK. The future is under construction.
If you want to see change, get involved. Move the Web Forward is a great resource that can help you figure out how you want to engage.
I am a web hacker and my current interest is the B2G platform, so I will be checking out Gaia. I will share what I find as I go along. If you want to engage on a lower level, check out the Gonk and Gecko layers in B2G.
I am really excited to see web apps come into their own for many different platforms that include ways to make money. It is great to see some smart designers figure out how to design for the web app world.
Friday, March 09, 2012
A dependency manager for web projects
The problem
Effective, modular development usually comes with a dependency management tool. Think easy_install for python, ruby gems for ruby, npm for Node.
Front-end web development does not have this kind of tool. Traditionally it has been ad-hoc script management done by the developer. However, now that modular development is available, and will become more prevalent as the ECMAScript committee solidifies native module support, front end developers should have a dependency management tool.
volo, a solution
volo is a concrete attempt at this solution. While I do not believe it is fully formed yet, it is already useful, and by having something real to start from, it will better inform any coordinated effort.
volo does not have to be the only dependency management tool. Hopefully we can use it as way to discuss common approaches that we can all share.
There have been some efforts in this space, in particular npm, cpm and bpm. I go into why those were not used for volo in a Prior art page.
The main point of departure is the choice of registry. volo uses GitHub.
GitHub as the registry
One of the difficulties with running a dependency management tool is having an effective way to find dependencies. The JavaScript community has really embraced GitHub, and I think it has all of the right pieces to serve as a registry:
Conventions
volo uses some conventions that are used by some JS projects already, to make it easier to bootstrap. For other projects, usually one package.json property is enough for it to be used by volo.
Check out the Library best practices document for the conventions and configuration used by volo. volo has some project-specific overrides it uses during this bootstrap phase, to simulate how those projects would behave if they used the package.json property that mapped to their development style.
Try it Out
So try it out. It is just one JS file that runs in Node, so it is easy to get rid of it if you do not like it.
volo's 0.1.0 release has search built in, so it is fun just to try the following kinds of commands to see what it finds on GitHub:
Next Steps
While volo is useful already, it is certainly not done. In particular,
If you have a specific code issue, feel free to open a volo issue.
Effective, modular development usually comes with a dependency management tool. Think easy_install for python, ruby gems for ruby, npm for Node.
Front-end web development does not have this kind of tool. Traditionally it has been ad-hoc script management done by the developer. However, now that modular development is available, and will become more prevalent as the ECMAScript committee solidifies native module support, front end developers should have a dependency management tool.
volo, a solution
volo is a concrete attempt at this solution. I do not believe it is fully formed yet, but it is already useful, and having something real to start from will better inform any coordinated effort.
volo does not have to be the only dependency management tool. Hopefully we can use it as a way to discuss common approaches that we can all share.
There have been some efforts in this space, in particular npm, cpm and bpm. I go into why those were not used for volo in a Prior art page.
The main point of departure is the choice of registry. volo uses GitHub.
GitHub as the registry
One of the difficulties with running a dependency management tool is having an effective way to find dependencies. The JavaScript community has really embraced GitHub, and I think it has all of the right pieces to serve as a registry:
- Allows forks. Namespacing is possible via the use of the owner's name, not just a project name.
- Has a search function that can give owner/repo values for a given dependency name.
- Encourages open source and social communities around the source.
- Supports tags for versions.
- Has a great, simple API for getting versions/repo info (see the sketch after this list).
- Provides zipballs/tarballs of tags/branches.
- Dependencies can be pulled not only by version tag but by branch name.
- Provides a Downloads area for built code.
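As a sketch of how simple that API is to consume, here is a small Node script (not volo's actual code) that lists the tags for a repo using the GitHub v3 API:

var https = require('https');

function fetchTags(owner, repo, callback) {
    https.get({
        host: 'api.github.com',
        path: '/repos/' + owner + '/' + repo + '/tags',
        headers: { 'User-Agent': 'example-client' }
    }, function (res) {
        var body = '';
        res.on('data', function (chunk) { body += chunk; });
        res.on('end', function () {
            // Each entry has a tag name plus zipball/tarball URLs.
            callback(null, JSON.parse(body));
        });
    }).on('error', callback);
}

fetchTags('jrburke', 'requirejs', function (err, tags) {
    if (!err) {
        console.log(tags.map(function (tag) { return tag.name; }));
    }
});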
Conventions
volo uses conventions that some JS projects already follow, to make bootstrapping easier. For other projects, usually one package.json property is enough for the project to be used by volo.
Check out the Library best practices document for the conventions and configuration used by volo. volo has some project-specific overrides it uses during this bootstrap phase, to simulate how those projects would behave if they used the package.json property that mapped to their development style.
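For instance, a package.json along these lines is the kind of minimal configuration meant here; "main" is just the common case, and the exact property a given library needs may differ:

{
    "name": "mylib",
    "version": "1.0.0",
    "main": "dist/mylib.js"
}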
Try it Out
So try it out. It is just one JS file that runs in Node, so it is easy to get rid of it if you do not like it.
volo's 0.1.0 release has search built in, so it is fun just to try the following kinds of commands to see what it finds on GitHub:
> volo search jquery
> volo search backbone
> volo add jquery
> volo add dojo
> volo add dijit
> volo add ember
Next Steps
While volo is useful already, it is certainly not done. In particular,
- Do the library best practices make sense? Also, there is info on the rules "volo add" uses. I am hopeful volo allows for a reasonable default convention but still gives enough flexibility via a couple of package.json properties, including a way to embed a /*package.json*/ comment for single file JS libraries (see the sketch after this list).
- volo does not install nested dependencies yet. It is doable, just needs a bit more coding.
- A private/local server that responds to the GitHub APIs used by volo would be useful. Not all code can be placed on GitHub or a public URL, and sometimes even for a developer, having a local cache server is useful. I'm hoping this server can proxy to GitHub for common public libraries and cache them, and then provide a mechanism for private libraries to live in that server.
- volo does not support .tar.gz files yet, but it might be nice if it did. zip files were just easier to get working cross-platform.
- Allow access to private GitHub repos by using OAuth with GitHub.
- Allow a "volo.versions" section in a package.json, to allow listing versions without having to use GitHub tags. This could be useful for libraries that do not want to host their code in GitHub.
If you have a specific code issue, feel free to open a volo issue.