Archive for May, 2007

REST Redux

The Rise of REST! I’ve noticed an increasing number of “What is REST?” posts, and explanations of REST, appearing lately. Most attempt to compare REST to SOAP, which is basically the wrong argument. SOAP and REST cannot be directly compared, though for the last seven years they have been. Keep in mind that REST does not mean exclusively HTTP. Just take a look at the Java content-management standards JSR 170 and JSR 283 for an example of an API based on REST using Java.

Before I attempt to put my own spin on what REST is and why you would use it: if you are interested at all in Software Architecture, then I highly recommend that you take the time to read Roy’s dissertation – maybe even twice. It is a tour-de-force of what Software Architecture is. And if you feel really brave, read Rohit’s companion dissertation.

Why base your software architecture on REST?

The high-level answer: when you don’t “own” each component along the wire, it is important that each component be allowed to evolve independently. What matters in this context is the ability to work with components at different levels of implementation. That means constraining the interface to a commonly agreed standard, so every component along the wire can understand what it means to GET, PUT, POST, DELETE, etc.
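
To make the constraint concrete, here is a minimal sketch (TypeScript, using the standard fetch API; the URI is purely illustrative) of a client that speaks nothing but the uniform verbs, which is exactly what lets any cache or proxy along the wire take part in the conversation:

```typescript
// A client constrained to the uniform interface: it knows nothing about the
// service beyond a URI and the standard HTTP verbs, so any intermediary
// (browser cache, reverse proxy, CDN) can interpret the exchange as well.
async function demo(): Promise<void> {
  const resource = "https://example.com/orders/42"; // illustrative URI only

  // GET: safe, cacheable retrieval of the current representation.
  const current = await fetch(resource);
  console.log(current.status, await current.text());

  // PUT: replace the representation; idempotent, so it can be retried safely.
  await fetch(resource, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ status: "shipped" }),
  });

  // DELETE: remove the resource; again idempotent and understood by every hop.
  await fetch(resource, { method: "DELETE" });
}
```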

Secondly, no matter how fast the wire becomes we are still constrained by the speed of light, i.e. there will always be latency. To combat this you end up using a coarse-grained, Document-Centric mode of interaction – sending HTML/XML/JSON documents, etc. – instead of a fine-grained, Control-Centric mode. The latter is RPC; the former is REST.
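
Roughly, the difference looks like this (a sketch only; the endpoints and method names are hypothetical). The document-centric client pays the latency cost once per document, while the control-centric client pays it for every fine-grained call:

```typescript
// Document-centric: one round trip returns a whole representation.
async function getOrderDocument(): Promise<unknown> {
  const res = await fetch("https://example.com/orders/42"); // hypothetical URI
  return res.json(); // { id, customer, items, status, ... } in a single document
}

// Control-centric (RPC-style): every getter is its own round trip, so the
// speed-of-light cost is paid again for each fine-grained call.
async function getOrderViaRpc(): Promise<unknown> {
  const id = await call("Order.getId", 42);             // round trip 1
  const customer = await call("Order.getCustomer", 42); // round trip 2
  const status = await call("Order.getStatus", 42);     // round trip 3
  return { id, customer, status };
}

// Minimal stand-in for an RPC transport; purely illustrative.
async function call(method: string, arg: unknown): Promise<unknown> {
  const res = await fetch("https://example.com/rpc", { // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ method, arg }),
  });
  return res.json();
}
```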

To make sure that each component knows what is being talked about, it needs to be identified. Hence URIs identify a Resource. Response meta-data describes the context of the document that was returned [in REST terms, the context of the Representation returned].
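
For example (a sketch; the header values shown in the comments are illustrative), that metadata travels with the representation and tells any component along the wire what it has just received:

```typescript
// The URI identifies the resource; the response headers are the metadata
// that describe this particular representation of it.
async function describeRepresentation(uri: string): Promise<void> {
  const res = await fetch(uri);
  console.log(res.headers.get("Content-Type"));  // e.g. "application/xml" – the document format
  console.log(res.headers.get("Last-Modified")); // when this representation last changed
  console.log(res.headers.get("ETag"));          // an opaque version identifier for it
  console.log(res.headers.get("Cache-Control")); // how intermediaries may reuse it
}
```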

A Resource can be thought of as an object that implements various interfaces: a concept, a notional software component rather than a concrete piece of code. In REST those interfaces are identified by URIs.

Without constrained interfaces, a data-centric mode of interaction, and identity, caches, for example, won’t work. This also means that Hypertext now becomes the engine of application state. Every component interface implements a state machine regardless of the underlying component technology; hence in REST, URIs become a way of making the application state explicit. Tim Ewald framed it nicely:

“The essence of REST is to make the states of the protocol explicit and addressible by URIs. The current state of the protocol state machine is represented by the URI you just operated on and the state representation you retrieved. You change state by operating on the URI of the state you’re moving to, making that your new state. A state’s representation includes the links (arcs in the graph) to the other states that you can move to from the current state. This is exactly how browser based apps work, and there is no reason that your app’s protocol can’t work that way too. (The ATOM Publishing protocol is the canonical example, though its easy to think that its about entities, not a state machine.)”
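
In code, that state machine might be navigated something like this (a sketch, not any particular protocol; the document shape and URIs are hypothetical). The client knows only an entry-point URI; every other state it can reach is discovered from links in the representation it just retrieved:

```typescript
// Hypertext as the engine of application state: the current state is the URI
// just operated on plus the representation retrieved, and the representation
// carries the links (arcs) to the states reachable from here.
interface StateRepresentation {
  state: string;
  links: { rel: string; href: string }[]; // transitions out of the current state
}

async function advance(currentUri: string, rel: string): Promise<StateRepresentation> {
  const res = await fetch(currentUri);
  const doc: StateRepresentation = await res.json();
  const next = doc.links.find((link) => link.rel === rel);
  if (!next) throw new Error(`No transition '${rel}' from state '${doc.state}'`);
  // Operating on the linked URI moves the application into that state.
  const nextRes = await fetch(next.href);
  return nextRes.json();
}
```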

RPC (and therefore SOAP) relies on tool support to be vaguely usable. REST works because humans can do a wget and see what comes back. That reduces the coordination costs of building systems.

If you control all the components in the chain, you release them all at the same time, and there is no latency issue to worry about, use an RPC style of interaction. Otherwise use REST. In today’s Web 2.0 climate that means using a REST style, as a minimum, for externally exposed Resources via APIs.

An Architectural Style

REST is an Architectural Style in the true sense of Alexander’s patterns, compared to the more prescriptive approach adopted by the Design Patterns movement, led by the Gang of Four. SOAP is an XML language for specifying and constructing messages for a network protocol, and its typical message exchange pattern is modelled after the RPC style. So, just to confuse the matter, SOAP can be compared more directly to HTTP; the latter being a Network API that attempts to conform to the REST style.

If REST is to be compared, then it should be compared to RPC or SOA; both are approaches to software architecture. SOAP is an implementation that can be used in either a Control-centric (RPC/SOA) style or a Document-centric (REST) style. It just happens that most uses of SOAP are RPCs.


Offline Web Apps now a lot more achievable

Early last week the Dojo Toolkit project launched an extension that makes developing offline web applications a lot easier. From the launch announcement:

“Dojo Offline is a free, open source toolkit that makes it easy for web applications to work offline. It consists of two pieces: a JavaScript library bundled with your web page and a small (~300K) cross-platform, cross-browser download that helps to cache your web application’s user-interface for use offline.”

The 300K download is actually a locally installed web proxy that makes the detection, caching and online/offline transitions seamless. Part of the approach is to define a framework for handling connectivity so that it is as easy and transparent as possible from an end-user’s perspective. This includes a Dojo UI widget that detects the presence of the Offline Proxy, prompting the user to download and install it. Being only 300K makes this a very quick process. If the proxy is installed, the widget will indicate the “connected” status.

This checking of connectivity goes beyond local network connectivity: by sending pings out across the Internet, it determines whether Internet connectivity is actually present. This avoids the trap of being locally connected but globally disconnected.
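
The idea can be sketched roughly as follows (this is not the Dojo Offline implementation, just an illustration; the probe URL and timeout are arbitrary choices):

```typescript
// "Ping across the Internet" connectivity detection: a live local network
// link is not enough, so probe a well-known remote host and treat a timeout
// or network error as being offline.
async function isReallyOnline(
  probeUrl = "https://example.com/ping", // arbitrary, well-known endpoint
  timeoutMs = 3000
): Promise<boolean> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    await fetch(probeUrl, { method: "HEAD", cache: "no-store", signal: controller.signal });
    return true;  // the probe answered: globally connected
  } catch {
    return false; // timeout or network error: locally connected is not enough
  } finally {
    clearTimeout(timer);
  }
}
```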

From an implementation perspective, the local proxy caches all the relevant files (XHTML, CSS, JavaScript) to ensure that the Web application will still operate even if started offline. Once back online, the Sync component of the toolkit kicks in, performing a transparent merge operation with the back-end. The sync is deliberately transparent to avoid confusing the user; those who want the detail can view an audit log of what happened. Most won’t.
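
In outline, that kind of sync can be sketched like this (a generic illustration, not Dojo’s actual Sync API; the action shape and endpoints are hypothetical):

```typescript
// Actions performed while offline are queued locally, then replayed against
// the back-end once connectivity returns, with every outcome recorded in an
// audit log the user may inspect but is never forced to see.
interface OfflineAction {
  method: "POST" | "PUT" | "DELETE";
  uri: string; // hypothetical back-end endpoint
  body?: string;
}

const pending: OfflineAction[] = [];
const auditLog: string[] = [];

async function syncWithBackend(): Promise<void> {
  while (pending.length > 0) {
    const action = pending.shift()!;
    try {
      const res = await fetch(action.uri, {
        method: action.method,
        headers: { "Content-Type": "application/json" },
        body: action.body,
      });
      auditLog.push(`${action.method} ${action.uri} -> ${res.status}`);
    } catch (err) {
      auditLog.push(`${action.method} ${action.uri} failed: ${err}`);
      pending.unshift(action); // keep it for the next sync attempt
      break;                   // stop until we are online again
    }
  }
}
```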

The Offline toolkit extends the Dojo offline storage engine, a storage engine that is portable across browsers and operating systems. Basically, if you are using Firefox 2.x the engine uses Firefox’s offline storage capability; otherwise it falls back to Flash for storage. This functionality has been in Dojo for some time, so it is well baked.
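
The pattern behind that fallback looks roughly like this (a sketch with hypothetical detection logic and an in-memory stand-in for the Flash backend, not the real Dojo storage API):

```typescript
// Feature-detected storage in the spirit of Dojo's offline storage engine:
// prefer a native browser store when one is available, otherwise fall back
// to an alternative backend behind the same small interface.
interface StorageBackend {
  put(key: string, value: string): void;
  get(key: string): string | null;
}

function chooseBackend(): StorageBackend {
  if (typeof localStorage !== "undefined") {
    // Native browser storage is available (analogous to the Firefox case).
    return {
      put: (k, v) => localStorage.setItem(k, v),
      get: (k) => localStorage.getItem(k),
    };
  }
  // Otherwise use an in-memory map standing in for the Flash-based backend.
  const fallback = new Map<string, string>();
  return {
    put: (k, v) => void fallback.set(k, v),
    get: (k) => fallback.get(k) ?? null,
  };
}
```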

At the Web 2.0 Expo, Brad Neuberg (the developer) gave a presentation on the Offline Toolkit; watch his presentation for more details.

Having the capability to build web applications that are “connectivity” aware is a big boost to the acceptance of web applications. Ubiquitous connectivity is still a dream. The thing is that the way Web applications are designed and developed changes as a result: your typical Web application now needs to push the entire MVC structure into the JavaScript. Rock on PAC! Dojo is well suited for this architecture as it includes many features for neatly structuring the JavaScript application and deploying just what is needed, and it can incorporate Comet.
