Archive for April, 2007

Concurrency: You’ll have to understand it one day soon.

It is pretty clear that scale is an important topic these days – scaling cheaply and reliably, in a seamless fashion to boot. I noticed recently several talks/tutorials on scale and concurrency at OSCON 2007, and Web 2.0 Expo and ETech07 had their share too. There have also been numerous mentions of LiveJournal's and MySpace's approaches to scaling out recirculating recently. These approaches are considered the current blueprint for handling scale in web applications based on LAMP, I suspect mostly because they suggest you should forget about scaling and just let your architecture evolve along the lines prescribed, i.e. scale is a nice problem to have. And in a pragmatic way I totally agree with that.

The general approach at the application layer can be summed up as: partition your database around some key object in the system, typically a user. Replicate these partitions across your various database servers. Deploy your front-end on a web farm and potentially factor out some application functionality to dedicated servers. Manage sessions separately, either by creating a discrete service within the architecture or by resolving sessions to a particular database partition – basically, push state down to a data store where the application coordination happens. Then cache whatever you can. That's it – well, the devil is in the details.
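
To make the partition-by-user idea concrete, here is a minimal Python sketch; the shard map, DSN strings and the connect() callable are illustrative assumptions rather than any particular site's implementation.

```python
import zlib

# Hypothetical map of shard number -> connection string.
SHARD_DSNS = {
    0: "dbname=app_shard0 host=db0",
    1: "dbname=app_shard1 host=db1",
}

def shard_for_user(user_id):
    # A stable hash keeps a given user pinned to the same partition.
    return zlib.crc32(str(user_id).encode("utf-8")) % len(SHARD_DSNS)

def connection_for_user(user_id, connect):
    # `connect` is any DB-API style connect() callable (e.g. psycopg2.connect).
    return connect(SHARD_DSNS[shard_for_user(user_id)])
```

In practice a directory lookup often replaces the straight hash so partitions can be rebalanced, but the principle is the same: every request resolves to one partition, and everything else is cached.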

Don MacAskill’s presentation at ETech07 details SmugMug’s architecture using Amazon’s S3 and experiments with EC2 and SQS. It contains a great deal of information about using Amazon’s Web Services. The utility model – compute power on demand – of deploying services will become very prevalent over the next few years. While Amazon are the first movers, others will follow with potentially more powerful computing models.

All good. Basically these are all coarse-grained approaches to leveraging concurrent execution using lots of boxes. Application design is just chopped up into a few tiers. It is easy to understand, and at this point we can happily forget about concurrency because the components in the system are big black boxes with synchronous calls between them.

Several things are happening that suggest our sequential, synchronous existence may be over. Power consumption, multi-core CPUs, on-demand scaling as exemplified by Amazon’s EC2 service, transaction reliability and the rise of data mining all make confronting scaling issues early on important for your application architecture.

The situation for CPU design and the implications for software design were clearly discussed, back in 2005, in a Dr. Dobb’s article by Herb Sutter: The Free Lunch is Over: A Fundamental Turn Towards Concurrency in Software. In the article Herb discusses the limitations of chip design and why they will lead to a focus on multi-core design rather than increasing clock speeds on a single core. A key barrier to increasing clock speed is power consumption. Heat is a by-product of power consumption, so power consumption isn’t just about operating a machine but also about the cooling required to keep it operating. Power consumption is a major issue for Google and any company that runs data centers. Even my little server room at home should have air conditioning during the summer months! Server power consumption is one of the motivations given for O’Reilly Media’s conference on energy innovation later this year.

While two- and four-core CPU architectures are now mainstream, we are seeing an increase in the number of cores. An example demonstrated recently is Intel’s 80-core research chip, which crunches 1 Tflop at 3.2 GHz:

“Intel’s 80-core chip is basically a mainframe-on-a-chip – literally,” said McGregor. “It’s the equivalent of 80 blade processors plugged into a high-speed backplane, or 80 separate computers using a high-speed hardware interconnect.”

I suspect that, given Moore’s Law, we can expect some associated law regarding the number of cores on a square inch of wafer increasing exponentially over the coming years. No doubt there will be some correlation to Peak Oil issues and increasing awareness of Global Warming as the market impetus for more cores. If approaches to concurrency in software are any indication, I can easily imagine clock speeds decreasing, with chip designers focusing on throughput rather than outright performance. This would suggest the rise of commercial asynchronous chip designs. [Maybe they are already here – I know nothing about current chip designs beyond their multi-coreness.] It would also fit with Clayton Christensen’s “law of conservation of modularity”.

Current software development techniques can only hide so much. Eventually we will need to utilise the multiple cores to maximise the benefit they provide. Multi-threading is notoriously difficult to do correctly, so I don’t see that as a solution. Asynchronous event or message passing approaches provide a better way. Basically they model how hardware works, and micro-kernel operating systems have been built in a similar fashion.
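
To illustrate what I mean by message passing – and this is just a toy Python sketch of my own, not anyone’s production design – the workers below share no state at all and communicate only through queues, so there are no locks on shared data to get wrong:

```python
import multiprocessing as mp

def worker(inbox, outbox):
    # Consume messages until the shutdown sentinel (None) arrives.
    for msg in iter(inbox.get, None):
        outbox.put(msg * msg)     # react to a message, emit a message

if __name__ == "__main__":
    inbox, outbox = mp.Queue(), mp.Queue()
    workers = [mp.Process(target=worker, args=(inbox, outbox)) for _ in range(4)]
    for w in workers:
        w.start()
    for n in range(10):
        inbox.put(n)              # send work as messages
    for _ in workers:
        inbox.put(None)           # one shutdown message per worker
    results = [outbox.get() for _ in range(10)]
    for w in workers:
        w.join()
    print(sorted(results))
```

Each worker reacts to a message and emits a message; scaling up is a matter of starting more workers, whether they are threads, processes or separate boxes.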

We have been doing this application-composition-with-messages stuff for a while now (albeit synchronous in nature) with Unix pipes and the Web, but these architectures are only just beginning to be appreciated beyond the administrative, script-based coordination that Unix pipes have traditionally been used for.
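
The composition idea translates directly: each stage knows nothing about its neighbours except the stream of messages flowing between them. Here is a trivial Python analogue of a shell pipeline, purely for illustration:

```python
# Each stage is an independent component that consumes an upstream
# stream and yields a downstream one, like `numbers | evens | squared`.

def numbers(n):
    for i in range(n):
        yield i

def evens(stream):
    for x in stream:
        if x % 2 == 0:
            yield x

def squared(stream):
    for x in stream:
        yield x * x

pipeline = squared(evens(numbers(10)))
print(list(pipeline))  # [0, 4, 16, 36, 64]
```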

Google have raised awareness and appreciation with publications on internal toolkits like MapReduce, Chubby and BigTable.
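
The MapReduce paper’s programming model fits in a few lines of Python; this single-machine toy (mine, not Google’s) just shows the shape of the map, shuffle and reduce phases that the real system distributes across thousands of machines:

```python
from collections import defaultdict

def map_phase(documents):
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1            # emit (key, value) pairs

def reduce_phase(pairs):
    grouped = defaultdict(list)
    for key, value in pairs:                 # the "shuffle": group values by key
        grouped[key].append(value)
    return {key: sum(values) for key, values in grouped.items()}

docs = ["the web scales out", "the web is concurrent"]
print(reduce_phase(map_phase(docs)))         # {'the': 2, 'web': 2, ...}
```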

One thing that surprises me about various presentations on scaling web applications is the lack of mention of message passing, specifically message queues or event buses. These are the staple tools when you want to scale and ensure data doesn’t get lost. Then again, many of the consumer-oriented services don’t need such reliability. The cost of losing a customer’s IM message, or comment post, or image upload is minimal – an annoyance at worst. As web applications evolve into business applications, losing data due to scale demands has real costs.
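
The reliability property is simply that a message is stored durably before the producer is told “ok”, and is only removed once a consumer has finished with it. A toy, single-consumer sketch of that contract, using SQLite as the store (my own illustration, not any of the presenters’ designs):

```python
import sqlite3

db = sqlite3.connect("queue.db")
db.execute("CREATE TABLE IF NOT EXISTS queue (id INTEGER PRIMARY KEY, body TEXT)")

def publish(body):
    db.execute("INSERT INTO queue (body) VALUES (?)", (body,))
    db.commit()                      # durable before we tell the caller "ok"

def consume(handler):
    row = db.execute("SELECT id, body FROM queue ORDER BY id LIMIT 1").fetchone()
    if row is None:
        return False
    msg_id, body = row
    handler(body)                    # do the slow work first...
    db.execute("DELETE FROM queue WHERE id = ?", (msg_id,))
    db.commit()                      # ...acknowledge (delete) only on success
    return True

publish("resize image 42")
consume(print)
```

Real message queues add competing consumers, retries and distribution, but the store-then-acknowledge contract is what stops scale demands from silently eating data.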


The next thing after 2.0!

It seems technology suffers from fashion as much as the fashion industry does, in that we must keep moving on to the next big thing before we have even understood the current one. Regardless, despite disliking the term and being too lazy to even attempt defining a new one, I recently took the bait over at ReadWriteWeb and provided a definition of Web 3.0. It seems they liked it. Here is what I wrote:

Web 1.0 – Centralised Them.
Web 2.0 – Distributed Us.
Web 3.0 – Decentralised Me.

Hindsight: Web 1.0 turned into a broadcast medium. It was all about them – a case of industrial-age thinking applied to a new landscape. Web 2.0, largely based on an analysis of what worked in Web 1.0, is an alignment with TBL’s initial vision of the Web: the Web as connective tissue between us. Platform, participation and conversation. Really it is more than the Web. It is the Internet. It is new practices too. Ultimately it is about connectivity; applying constraints in the form of some sort-of agreed-upon standards that make it easier to talk to one another. With new layers of connective wealth come new tools. In Web 2.0’s case that allowed new forms of communication, and with it came the associated ‘acceptable’ business models – hence the Google economy.

Web 1.0 was the first to show us the value of standards; Web 2.0 is teaching us how liberating standards can be. Web 3.0 will reflect on what worked in Web 2.0. It will mean more constraints for better communication/connectivity, and improved connectivity will mean revised practice and new business models.

Therefore Web 3.0 must be about me! It’s about me when I don’t want to participate in the world. It’s about me when I want to have more control of my environment, particularly who I let in. When my attention is stretched, who/what do I pay attention to, and who do I let pay attention to me? It is more effective communication for me!

When it is about me, Web 3.0 must be about more semantics in information, but not just any semantics. Better communication comes from constraints in the vocabularies we use. Microformats will lead here, helping us to understand RDF and the Semantic Web. With more concern over my attention comes a need to manage the flow of information. This is about pushing and pulling information into a flow that accounts for time and context. Market-based reputation models applied to information flows become important: Quality of Service (QoS) at the application and economic layer, where agents monitor, discover, filter and direct flows of information for me to the devices and front-ends that I use. The very notion of an application [application is a very stand-alone PC world-view – forget the Web, Desktop, Offline/Online arguments] disappears into a notion of components linked by information flows. Atom, the Atom API and semantics, particularly Microformats initially, are the constraints that will make this happen. Atom features not because of technical merit but by virtue of its existing market deployment in a space that most EAI players won’t even consider a market opportunity. Hence Web-based components start using the Atom API as the dominant Web API – feed remixing is indicative. Atom will supplant WS-* SOA.
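
Feed remixing is easy to sketch. Here is a minimal Python example using the third-party feedparser library, with placeholder feed URLs I have made up: pull a handful of Atom/RSS feeds and merge the entries into one time-ordered flow.

```python
import time
import feedparser  # third-party: pip install feedparser

# Placeholder URLs – substitute real Atom/RSS feeds.
FEEDS = [
    "https://example.com/alice/atom.xml",
    "https://example.com/bob/atom.xml",
]

def remix(urls):
    entries = []
    for url in urls:
        entries.extend(feedparser.parse(url).entries)
    # Newest first; fall back to the epoch when a feed omits dates.
    entries.sort(
        key=lambda e: e.get("updated_parsed")
        or e.get("published_parsed")
        or time.gmtime(0),
        reverse=True,
    )
    return entries

for entry in remix(FEEDS)[:10]:
    print(entry.get("title", "(untitled)"), "-", entry.get("link", ""))
```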

User-centric identity takes hold. This extends the idea that everyone has an email address and mobile number – why not manage them for single sign-on and more? Universal Address-book, anyone?

More market-based brokerage business models emerge, earning revenue on the ‘turn’, as we learn more about the true power of the business model underlying AdSense/AdWords and realise there are close parallels to the world’s financial markets.

Reliable vocabularies, user identity, trusted [i.e. user-controllable] reputation models and market-based brokerage business models all become a necessity as the more decentralised, event-driven web becomes a reality.

Web 3.0 – a decentralised, asynchronous me.

There were a few things I forgot to put into the above definition, and from the comments a few things need explanation. I’ll attempt to expand on the above in later posts as I’m a little stuck for time.

What I left out is the relationship to the physical world: “the Internet of Things” with 2D bar codes, RFID etc., and the just-in-time, just-about-one-to-one manufacturing partly represented by what Threadless and others are doing. I’ll also need to clarify what I mean by Them, Us and Me, and why Web 3.0 cannot be classified as “Read / Write / Execute.”

Some comments passed on to me ask how this is different from what Web 2.0 is about. At a technology level it really isn’t – the technology is already here. At a cultural, and hence practice, level it is. As we start seeing more value in using things like Atom, meta-data, open data and feed remixing, how we use the Internet and our connected devices will change. That, at its core, is the basis of Web 2.0 – changing usage and practice.
