During my journey as a Ruby developer, I have heard many times that Ruby is single-threaded. However, for a long time it was confusing to me what that really means and how threads actually work in Ruby. This post shares some of the findings I've gathered on this subject during my career. I hope it helps others understand at least a piece of such an important topic.
Let’s start by making an important distinction: when people say Ruby is single-threaded, they are referring to the MRI implementation. The same is not true for JRuby or Rubinius, for example. The reason behind this is the existence of the GIL (Global Interpreter Lock) in Ruby MRI.
Global Interpreter Lock
The GIL is a global lock around the execution of Ruby code that prevents parallelism. Any thread that wants to execute Ruby code has to acquire this global lock, and only one thread can hold the lock at any given time.
For example, take a look at the code below that checks if a number is prime:
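Something along these lines, with a simple trial-division `prime?` helper assumed for illustration:

```ruby
# One thread per number; each thread must hold the GIL while it
# executes the Ruby code that checks primality.
def prime?(n)
  return false if n < 2
  (2..Integer.sqrt(n)).none? { |i| (n % i).zero? }
end

threads = (1..100).map do |n|
  Thread.new { [n, prime?(n)] }
end

# Thread#value joins the thread and returns its block's result.
primes = threads.map(&:value).select { |_, p| p }.map(&:first)
puts primes.inspect
```

Even though a hundred threads are spawned here, the GIL guarantees that only one of them is executing Ruby code at any instant.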
For each number up to 100, we spawn a new thread, and each thread tries to acquire the GIL (the GIL is a simple mutex, and the operating system guarantees that only one thread holds a mutex at a time). While one thread holds the lock and executes Ruby code, all the others wait for their chance to run. How long a thread waits is unspecified and is controlled by MRI internals.
Because of this behaviour, we can be sure that Ruby code never runs in parallel on Ruby MRI. Note, however, that while the GIL prevents parallelism, Ruby code still executes concurrently.
Concurrency vs Parallelism
It’s important to note that concurrency is not the same as parallelism. The best illustrative example I’ve found about this distinction is in Working with Ruby Threads by Jesse Storimer:
Imagine you’re a programmer working for an agency and they have two projects that require one full day of programming time each. There are (at least) three ways that this can be accomplished:
- You could complete Project A today and then complete Project B tomorrow;
- You could work on Project A in the morning and then switch to Project B in the afternoon, and then do the same thing tomorrow;
- You could work on Project A and another programmer could work on Project B;
The first way represents working serially, which is similar to code running in a single thread.
The second way represents working concurrently, which is similar to a multi-threaded scenario running on a single CPU core.
Finally, the third way represents working in parallel, which is similar to a multi-threaded scenario running on a multi-core CPU.
The interesting part of this example is that working serially or concurrently takes the same amount of time (two days), while working in parallel takes half the time (one day). Therefore, working concurrently does not necessarily mean working faster.
So, when should we use multiple threads in a Ruby program? Let’s take a look at a couple of examples.
Example 1: IO-bound program
In this first example, we implement a small piece of code that makes several external HTTP requests. It’s a classic example of an I/O-bound program.
I/O Bound means the rate at which a process progresses is limited by the speed of the I/O subsystem.
In such programs, the running thread blocks until the I/O completes. It therefore makes sense to spawn more threads: while one thread waits on I/O, the others can use the CPU to perform their work.
Here is the output of the above program:
```
$ ruby thread-io-bound.rb
Without threads:  0.130000  0.050000  0.180000 (  0.591372)
With threads:     0.060000  0.040000  0.100000 (  0.114966)
```
As you can see, the block with threads runs roughly five times faster than the block without threads (about 0.59s vs. 0.11s of real time).
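The general shape of such a benchmark can be sketched as below. This is an illustration rather than the post's exact thread-io-bound.rb: `sleep` stands in for a blocking call like `Net::HTTP.get`, since both block the thread and release the GIL while waiting.

```ruby
require 'benchmark'

# Stand-in for a blocking HTTP request: while a thread sleeps
# (or waits on a socket), MRI releases the GIL so others can run.
def fetch
  sleep 0.2
end

serial = Benchmark.realtime { 4.times { fetch } }

threaded = Benchmark.realtime do
  4.times.map { Thread.new { fetch } }.each(&:join)
end

puts format('serial: %.2fs, threaded: %.2fs', serial, threaded)
```

The four waits overlap in the threaded version, so its wall-clock time approaches that of a single `fetch`.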
Example 2: CPU-bound program
In this second example, we calculate the 30th Fibonacci number. This is an example of a CPU-bound program.
CPU Bound means the rate at which a process progresses is limited by the speed of the CPU.
In such cases, performance on MRI does not improve with the introduction of more threads. Because the GIL allows only one thread to execute at a time and this program has no blocking I/O, switching context to run another thread brings no performance gain.
Here is the output of the above program:
```
$ ruby thread-cpu-bound.rb
Without threads:  0.660000  0.000000  0.660000 (  0.663142)
With threads:     0.640000  0.010000  0.650000 (  0.635634)
```
As expected, both blocks execute in a very similar time.
(If we run this code on JRuby, however, we can see the threaded code run faster, thanks to the parallelism of that implementation.)
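A sketch of this kind of benchmark, assuming a naive recursive Fibonacci (and a smaller input than the post's 30, to keep it quick):

```ruby
require 'benchmark'

# Naive recursive Fibonacci: pure CPU work with no blocking I/O,
# so a thread never releases the GIL while it computes.
def fib(n)
  n < 2 ? n : fib(n - 1) + fib(n - 2)
end

serial = Benchmark.realtime { 4.times { fib(25) } }

threaded = Benchmark.realtime do
  4.times.map { Thread.new { fib(25) } }.each(&:join)
end

puts format('serial: %.2fs, threaded: %.2fs', serial, threaded)
```

On MRI the two timings come out roughly equal: the threads merely take turns holding the GIL.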
Thread Safety
There is a common misconception that the GIL guarantees your code will be thread-safe. This is not true. It reduces the likelihood of a race condition, but it doesn’t eliminate it.
A piece of code is thread-safe if it only manipulates shared data structures in a manner that guarantees safe execution by multiple threads at the same time.
We can see that in a very simple example:
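A sketch along the lines the post describes (the URL and the offline fallback are illustrative assumptions, not the post's exact thread-safety-ruby.rb):

```ruby
require 'net/http'

counter = 0

threads = 5.times.map do
  Thread.new do
    value = counter               # read the shared counter
    begin
      # Blocking I/O: MRI releases the GIL here, letting another thread run
      Net::HTTP.get(URI('https://example.com/'))
    rescue StandardError
      sleep 0.2                   # offline fallback: still blocks and releases the GIL
    end
    counter = value + 1           # write back a now-stale value
  end
end

threads.each(&:join)
puts counter
```

On most runs this prints 1, not 5, because every thread reads the counter before any of them writes it back.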
The above code opens an external URL and increments a counter. The correct output for this program would be 5, since it increments the counter 5 times. However, this is not what happens.
```
$ ruby thread-safety-ruby.rb
1
```
Here is the reason why:
- The first thread retrieves the value of counter (which is 0) and makes an external HTTP request. This blocks the current thread and lets another thread execute.
- The second thread does the same thing: it retrieves the value of counter (which is still 0) and makes an external HTTP request. This also blocks, letting another thread run.
- This flow continues until the external requests complete.
- Each thread then increments the value it read, but since every thread read 0, the output is 1.
Actually, the output of this program is not guaranteed. Depending on how the thread scheduler switches context and how fast the network is, the program may output different values on different runs.
This is a very common error in multi-threaded applications.
In this post I showed how MRI Ruby guarantees that one (and only one) thread executes at any given time, due to the Global Interpreter Lock. I also showed that concurrency isn’t something to be used everywhere, and pointed out the issues that arise when writing multi-threaded code.
Concurrent code is naturally complex and it’s important that the added complexity comes with performance gains. Therefore, remember to always measure your application because, as we saw earlier, concurrent code isn’t necessarily faster.