Concurrency & Long running requests: Sinatra, EventMachine, fibers and Node

The problem – long running requests with blocking IO

Recently, I’ve been working on an alternative view onto Google Analytics data. This presents some challenges:

  1. Calls to the Google Analytics API vary, but some can take over 10s.
  2. At this point I only have a single VPS instance, which runs various Ruby and WordPress instances and has limited memory and db connections.

Problem 1: long-running API requests (yes, 10s is long) normally mean that a process is tied up (blocked) waiting for a reply for 10s.

Problem 2: the single VPS instance has limited resources. Memory-wise, I cannot have many processes running, each using up a chunk of memory.

1 + 2 = crappy performance: when there are multiple concurrent users, the limited number of processes available to service their requests may all be tied up waiting for replies from Google.

The solution – get more concurrency from a process

Solution: get more concurrency from a limited number of processes. I can think of two solutions in the Ruby world:

A) Use JRuby, which has native rather than cooperative threads.
B) Use EventMachine and non-blocking async http requests.

Like node.js, EventMachine implements the reactor pattern – essentially requests cooperate by handing control back to the reactor whenever they would otherwise block on something, e.g. an HTTP API call.

Solution A, using JRuby, is reasonable but has some drawbacks. In theory a single JRuby process can handle hundreds of concurrent requests, processing each in its own native thread. However, in practice we still need to be careful about non-reentrant code, particularly in the gems and extensions we depend upon.

Solution B initially appears more complex but is perhaps more predictable. We’re explicitly recognising the blocking API calls issue and making it a first-class concern for our system. Whilst I don’t like to make things more complex than they need be, often it’s better to recognise a fundamental issue and address it explicitly.

How the evented/async approach works

So how does B work?

  1. the server receives a request from the browser
  2. the server “queues” the request in EventMachine’s reactor
  3. EventMachine’s reactor gets control back from whatever was running, pulls the next event from its queue and executes it
  4. our app gets control from the reactor and starts processing the request
  5. our app makes an HTTP request to the Google Analytics API using an async library (em-http) and provides a callback to be executed when the reply is received; control returns to EventMachine’s reactor
  6. the EM reactor pulls the next event from its queue and executes it
  7. our earlier HTTP API call (from 5) returns and the callback gets queued on the EM reactor
  8. the currently executing event finishes, returning control to EM
  9. EM picks up the callback (from 7), which processes the results of the API query, builds a response and sends it to the browser; the HTTP request is finished
So where’s the pain? Well, async code with callbacks can quickly get messy, especially if there are callbacks nested within callbacks, nested within callbacks… and then an exceptional condition occurs. It is possible to structure code as sets of actions, where the callback code just links actions together, handling the flow of control between them. However, it’d be nicer if we could just write code that looks like normal sync server code. Ruby (1.9) Fibers and Sinatra::Synchrony to the rescue…
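To make the nested-callback problem concrete, here’s a toy sketch in plain Ruby – no EventMachine; the queue and async_get are stand-ins for the reactor and em-http, and the URLs are made up:

```ruby
# Toy "reactor" queue and async call (stand-ins for EventMachine and em-http).
$pending = []

def async_get(url, &callback)
  # Pretend the request is in flight; the "reply" arrives on a later reactor tick.
  $pending << lambda { callback.call("data from #{url}") }
end

# Three dependent API calls, callback style – the nesting starts immediately:
results = []
async_get("http://api.example.com/a") do |a|
  results << a
  async_get("http://api.example.com/b") do |b|
    results << b
    async_get("http://api.example.com/c") do |c|
      results << c
      # ...and any error handling has to be repeated at every level
    end
  end
end

# Drain the "reactor" queue until all callbacks have run.
$pending.shift.call until $pending.empty?
```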

Fibers – explicit cooperation

Ruby 1.9 has Fibers. Threads enable implicit concurrency, with some layer deciding when to switch control from one thread to another. Fibers enable explicit cooperative concurrency. A piece of code is wrapped in a fiber and executed. The fiber can yield, freezing its current state and returning control to its caller; at some later point the caller can resume the fiber, which carries on from where it yielded, with its pre-yield state intact.

Here’s an example:
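(A minimal sketch – the names are guesses – consistent with the output shown below:)

```ruby
fiber = Fiber.new do
  puts "Hello from the fiber: 123"
  Fiber.yield                   # freeze here, hand control back to the caller
  puts "And we're back in the fiber: 123"
end

def main(fiber)
  puts "Hello from main: 321"
  fiber.resume                  # run the fiber until it yields
  puts "and we're back in main: 321"
  fiber.resume                  # resume the fiber from its yield point
  puts "back in main and finishing: 321"
end
```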

Calling main( fiber ) gives the following output:

    Hello from main: 321
    Hello from the fiber: 123
    and we're back in main: 321
    And we're back in the fiber: 123
    back in main and finishing: 321

Fibers, making async code look like sync

I’ll just give a very brief overview. For more detail, here’s a great article explaining how Fibers can be used to make async code look like sync code.

Now we can just write async code that looks like sync code – no callbacks:
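A sketch of the underlying trick, with a toy single-threaded reactor standing in for EventMachine (the method names and URL are illustrative, not from any particular library):

```ruby
# Toy single-threaded "reactor": a queue of pending callbacks.
$reactor = []

# Simulated async HTTP call: the callback fires on a later reactor tick.
def async_fetch(url, &callback)
  $reactor << lambda { callback.call("response for #{url}") }
end

# The fiber trick: kick off the async call, park the current fiber,
# then resume it (with the result) from inside the callback.
def sync_fetch(url)
  fiber = Fiber.current
  async_fetch(url) { |response| fiber.resume(response) }
  Fiber.yield  # control goes back to the reactor until the callback runs
end

# Request-handler code now reads like plain blocking code – no callbacks:
result = nil
Fiber.new do
  result = sync_fetch("http://example.com/analytics")
end.resume

$reactor.shift.call until $reactor.empty?
puts result
```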

So this is great, async code without the nested callback headache! I’m a fan of the lightweight Ruby web framework Sinatra. This helpful and clever person has put Sinatra and EventMachine together (with a few other useful pieces) in the form of Sinatra::Synchrony.

Node Fibers

Ruby not your bag? Prefer JavaScript/CoffeeScript and node.js? I’m finding myself writing more and more JavaScript/CoffeeScript code in the browser – CoffeeScript + backbone.js + jasmine (for testing) is pretty good. I can certainly see the attraction of node.js, using the same language on the server as the client, particularly if it comes with the high-concurrency goodness of async events. Well, there may be some good news in the form of Node-fibers – in particular, take a look at the Fiber/future API.

Developers, Vagrants, Puppets and Chefs

A friend recently pointed me at Vagrant. At first I wasn’t sure, but after a play I’m starting to think this is a great idea.

Vagrant helps you create VMs for use as development environments. So for example, on my Mac I might run a Linux distro in a VM and use it to run the website + db that I’m developing. What’s more I can share that VM config with other developers working on the same project.

Vagrant works with Oracle’s free (and open source) VirtualBox virtualization software. If you’re a Mac user, you might be familiar with running Windows in VMWare Fusion or Parallels. This is the same sort of thing – though if you’re like me you’re more likely to run a Linux distro with Apache/Nginx, MySQL/Postgres etc..

Vagrant uses VirtualBox to create and run virtual machines from your chosen distro and then provisions them with your development environment – webserver, app server, db etc. – using either Puppet or Chef. All this is done in a portable way and can be shared so that every developer on your team develops against exactly the same configuration.
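For a flavour, a minimal Vagrantfile might look something like this (Vagrant 1.x-era syntax; the box name and recipe are placeholders):

```ruby
Vagrant::Config.run do |config|
  config.vm.box = "lucid32"        # base Ubuntu box from the Getting Started guide

  # Provision the VM with Chef Solo; Puppet is configured similarly
  config.vm.provision :chef_solo do |chef|
    chef.cookbooks_path = "cookbooks"
    chef.add_recipe "apache2"      # e.g. install the webserver
  end
end
```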

Why is this a good idea? Here are my top reasons:

  • consistent environment across the team – WoMM (“works on my machine”) is no longer an issue
  • bring in new developers more quickly with less frustration and teething problems
  • “hosed” environments can be dumped and new ones created quickly and easily. Now you can ‘fork’ your environment, not just your code.
  • the dev environment can more closely match the server – even if you work on a MacBook. You could potentially use the same Chef recipes or Puppet scripts too.
  • It’s pretty easy to get up and running – why not run through the Getting Started with Vagrant guide and see what you think?

Here’s the Vagrant guys’ explanation of why this is a great idea.