I'll start by stating the performance goals of Sneakers.

  1. Be as fast as Sidekiq, my go-to background processing framework, and if possible faster.
  2. Keep goal (1) while providing reliability, high availability, and advanced messaging semantics.

Whenever I do not need (2), I would probably use Sidekiq.

Further, if you do not need (2) and your jobs are long-running background jobs, there is really no difference between Sneakers and Sidekiq, and in my opinion you should keep using Sidekiq, as it has an immensely better ecosystem.

The silly microbenchmark

While I can't enumerate all of the possible use cases out there, mixing I/O and CPU, I can specify the simplest, most bare-bones microbenchmark there is, in a spirit similar to the Computer Language Benchmarks Game. To me, this kind of benchmark is simply a gut feeling for the plumbing of the framework. The next step is to set up a full-fledged POC and use-case benchmark.

Just as you can't conclude which language to use from those benchmarks, you can't conclude which background processing framework to use from my microbenchmark - you should test your own use case, as it may behave differently.

For an empty job, Sneakers is limited mostly by the speed of the broker (RabbitMQ). On a recent (2012) MBP it reaches about 7,000 req/s in this silly microbenchmark.
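For illustration, here is a minimal sketch of what such an "empty job" worker might look like. The queue name, thread count, and prefetch value are assumptions for the example, not the exact code or settings behind the numbers above.

```ruby
require 'sneakers'
require 'sneakers/runner'

# Illustrative settings only; the numbers above were not necessarily
# produced with these exact values.
Sneakers.configure(threads: 10, prefetch: 10)

# An "empty job" worker: it does no work and acks immediately, so
# throughput is bounded mainly by RabbitMQ and the framework's plumbing.
class NoopWorker
  include Sneakers::Worker
  from_queue 'bench.noop'

  def work(msg)
    ack!
  end
end

Sneakers::Runner.new([NoopWorker]).run
```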

A detailed benchmark on standard EC2 instances, with a comparison, will follow here soon.

From my experience, with a workload typical of fast event processing, I get up to 3,000 req/s with Sneakers on an EC2 large instance, while Sidekiq sits at around 600 req/s (combined, running 2 separate Sidekiq processes in order to use both CPUs).
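As a rough sketch of the deployment shape being compared (the worker, thread, and concurrency values below are assumptions, not the exact benchmark configuration): Sneakers spreads work across worker processes and threads from a single configuration, whereas Sidekiq was run as two separate OS processes to use both CPUs.

```ruby
# Illustrative Sneakers configuration; values are assumptions.
Sneakers.configure(
  workers:  2,   # one worker process per CPU on a 2-core instance
  threads:  10,  # threads per worker process
  prefetch: 10   # messages prefetched from RabbitMQ per consumer
)

# Sidekiq, by contrast, was run as two separate processes, e.g.:
#   bundle exec sidekiq -c 25 &
#   bundle exec sidekiq -c 25 &
```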

Note: generic benchmarks are a misleading, sneaky thing. This page was revised to be more explicit, following Mike Perham's insightful advice. See the original discussion: https://github.com/jondot/sneakers/issues/9