Concurrency: You’ll have to understand it one day soon.

It is pretty clear that scale is an important topic these days: scaling cheaply, reliably, and seamlessly to boot. I noticed several talks and tutorials on scale and concurrency at OSCON 2007 recently, and Web 2.0 Expo and ETech07 had their share too. Write-ups of LiveJournal’s and MySpace’s approaches to scaling out have also been recirculating. These approaches are considered the current blueprint for handling scale in web applications built on LAMP – I suspect mostly because they suggest you should forget about scaling and just let your architecture evolve along the prescribed lines, i.e. scale is a nice problem to have. In a pragmatic way I totally agree with that.

The general approach at the application layer can be summed up as:

- Partition your database around some key object in the system, typically a user (a rough sketch of this step follows below).
- Replicate these partitions across your various database servers.
- Deploy your front end on a web farm, and potentially factor out some application functionality to dedicated servers.
- Manage sessions separately, either by creating a discrete service within the architecture or by resolving sessions to a particular database partition – basically, push state down to a data store where the application coordination happens.
- Then cache whatever you can.

That’s it – well, the devil is in the details.
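
As a rough sketch of the partitioning step – hash the user id, use that to pick a shard – consider something like the following. The `SHARDS` list, the `connect` callable and its `query` method are hypothetical stand-ins, not taken from any of the architectures mentioned above:

```python
# Toy user-keyed partitioning. SHARDS, connect() and query() are
# hypothetical placeholders, not from LiveJournal's or MySpace's stacks.
import hashlib

SHARDS = ["db1.example.com", "db2.example.com", "db3.example.com"]

def shard_for_user(user_id):
    """Deterministically map a user id to one database shard."""
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

def load_profile(user_id, connect):
    # connect() stands in for whatever driver opens a connection
    # to the named host.
    conn = connect(shard_for_user(user_id))
    return conn.query("SELECT * FROM profiles WHERE user_id = %s", user_id)
```

Simple modulo hashing reshuffles users when a shard is added; a lookup table or consistent hashing avoids that, which is exactly the kind of detail the devil lives in.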

Don MacAskill’s presentation at ETech07 details SmugMug’s architecture using Amazon’s S3 and experiments with EC2 and SQS. It contains a great deal of information about using Amazon’s Web Services. The utility model – compute power on demand – for deploying services will become very prevalent over the next few years. While Amazon are the first movers, others will follow with potentially more powerful computing models.

All good. Basically these are all large-grained approaches to leveraging concurrent execution using lots of boxes. Application design is just chopped up into a few tiers. It is easy to understand, and at this point we can happily forget about concurrency because each component in the system is a big black box with synchronous calls between them.

Several things are happening that suggest our sequential, synchronous existence may be over: power consumption, multi-core CPUs, on-demand scaling as exemplified by Amazon’s EC2 service, transaction reliability, and the rise of data mining. All of these make confronting scaling issues early on important for your application architecture.

The situation for CPU design, and its implications for software design, was clearly laid out back in 2005 in a Dr. Dobb’s article by Herb Sutter, The Free Lunch is Over: A Fundamental Turn Towards Concurrency in Software. In the article Herb discusses the limitations of chip design and why they will lead to a focus on multi-core designs rather than increasing clock speeds on a single core. A key barrier to increasing clock speed is power consumption. Heat is a by-product of power consumption, so power consumption isn’t related just to the operation of a machine but to the cooling required to keep it operating. Power consumption is a major issue for Google and any company that runs data centers. Even my little server room at home should have air conditioning during the summer months! Server power consumption is one of the motivations given as justification for O’Reilly Media’s conference on energy innovation later this year.

While two- and four-core CPU architectures are now mainstream, we are seeing an increase in the number of cores. An example demonstrated recently is Intel’s 80-core research chip, which crunches 1 Tflop at 3.2 GHz:

“Intel’s 80-core chip is basically a mainframe-on-a-chip – literally,” said McGregor. “It’s the equivalent of 80 blade processors plugged into a high-speed backplane, or 80 separate computers using a high-speed hardware interconnect.”

I suspect that, given Moore’s Law, we can expect an associated law regarding the number of cores on a square-inch wafer increasing exponentially over the coming years. No doubt there will be some correlation with Peak Oil issues and the increasing awareness of global warming as the market impetus for more cores. If approaches to concurrency in software are any indication, I can easily imagine clock speeds decreasing, with chip designers focusing on throughput rather than outright performance. This would suggest the rise of commercial asynchronous chip designs. [Maybe there already are some – I know nothing about current chip designs beyond their multi-coreness.] It would also fit with Clayton Christensen’s “law of conservation of modularity”.

Current software development techniques can only hide so much. Eventually we will need to utilise the multiple cores to maximise the benefit they provide. Multi-threading is notoriously difficult to do correctly, so I don’t see it as a solution. Asynchronous event or message passing approaches provide a better way: basically they model how hardware works, and micro-kernel operating systems have been modelled in a similar fashion.
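
To make that concrete, here is a minimal sketch of the message-passing style using OS processes rather than shared-memory threads: workers communicate only through queues, so there are no locks to get wrong. It uses Python’s standard multiprocessing module purely as an illustration; the squaring is a stand-in for real work:

```python
# Minimal message-passing sketch: workers share nothing and talk
# only via queues, so each one can run on its own core with no locks.
from multiprocessing import Process, Queue

def worker(inbox, outbox):
    # Pull messages until the None shutdown signal arrives.
    for msg in iter(inbox.get, None):
        outbox.put(msg * msg)          # stand-in for real work

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    procs = [Process(target=worker, args=(inbox, outbox)) for _ in range(4)]
    for p in procs:
        p.start()
    for n in range(100):
        inbox.put(n)
    for _ in procs:
        inbox.put(None)                # one shutdown message per worker
    results = [outbox.get() for _ in range(100)]
    for p in procs:
        p.join()
```

The coordination points are explicit – the queues – which is precisely what makes this style easier to reason about than threads sharing state.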

We have been doing this application composition with messages for a while now (albeit synchronously) with Unix pipes and the Web, but these architectures are only just beginning to be appreciated beyond the administration-script coordination that Unix pipes have traditionally been used for.
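
The pipe idiom is worth spelling out: each stage is a separate process, coordinated only by the byte stream flowing between them. Here it is driven from Python, where `access.log` is just an assumed input file:

```python
# Equivalent to: grep " 404 " access.log | wc -l
# Two independent processes composed by a byte stream, nothing more.
import subprocess

grep = subprocess.Popen(["grep", " 404 "], stdin=open("access.log"),
                        stdout=subprocess.PIPE)
wc = subprocess.Popen(["wc", "-l"], stdin=grep.stdout,
                      stdout=subprocess.PIPE)
grep.stdout.close()          # let grep receive SIGPIPE if wc exits early
print(wc.communicate()[0].decode().strip())
```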

Google have raised awareness and appreciation with publications on internal toolkits like MapReduce, Chubby and BigTable.
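
As a flavour of the programming model – and only the model; Google’s real contribution is distributing these phases reliably across thousands of machines – here is a toy, single-process word count in the MapReduce style:

```python
# Toy, single-process sketch of the MapReduce idea (word count).
# This shows the programming model only, not Google's infrastructure.
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (key, value) pair for every word seen.
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Shuffle/group and reduce folded together: sum counts per key.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return counts

docs = ["the cat sat", "the cat ran"]
print(dict(reduce_phase(map_phase(docs))))  # {'the': 2, 'cat': 2, ...}
```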

One thing that surprises me about the various presentations on scaling web applications is the lack of mention of message passing, specifically message queues or event buses. These are staple tools when you want to scale and ensure data doesn’t get lost. Then again, many consumer-oriented services don’t need such reliability. The cost of losing a customer’s IM message, comment post, or image upload is minimal – an annoyance at worst. But as web applications evolve into business applications, losing data due to scale demands has real costs.
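
The pattern itself is simple: the request handler enqueues the slow or loss-sensitive work and returns immediately, and a worker drains the queue. The sketch below uses an in-memory `queue.Queue` just to show the shape – a real deployment would use a persistent broker so messages survive a crash – and `handle_upload` and `resize_worker` are hypothetical names:

```python
# Queue pattern sketch: the web tier enqueues work and returns fast;
# a worker processes it later. queue.Queue is in-memory, so this only
# illustrates the shape; durability needs a persistent broker.
import queue
import threading

jobs = queue.Queue()

def handle_upload(image_bytes):
    jobs.put(image_bytes)        # hand off quickly, process later
    return "202 Accepted"

def resize_worker():
    while True:
        img = jobs.get()
        # ... resize, store, etc. ...
        jobs.task_done()         # ack: safe to forget this message

threading.Thread(target=resize_worker, daemon=True).start()
```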
