Consistency vs. Concurrency

While transactions desire isolation, overly strong isolation may offset the performance benefit of concurrent execution. In practice, isolation is defined at several levels, each with its own trade-off between consistency and concurrency. Which isolation level applies is determined by the specific scenario targeted.

Isolation levels

We can’t get full consistency and full concurrency at the same time. If we choose strongly consistent data, we may end up with serial execution of transactions. Alternatively, if we prefer performance with high concurrency, we need to live with certain inconsistencies. It’s useful to understand these trade-offs from the definition of each isolation level, so that we know exactly what situation each option puts us in. I will review each isolation level with my understanding, from the perspectives of both consistency and concurrency.

Serializable isolation

This is the strongest isolation level and also the theoretical definition of the isolation property of transactions. Serializable isolation requires that concurrent execution of transactions result in a state that would have been achieved if the transactions were executed serially in some order. While it imposes an order on transaction execution, that order can be arbitrary.

To learn how serializable isolation is achieved, let’s first see how consistency can be violated. Inconsistency may occur when there are read-write conflicts between transactions. Let’s review three typical inconsistent cases:

  1. Dirty reads – For example, transaction \(T_1\) updates some data and then transaction \(T_2\) reads that data. If \(T_1\) eventually aborts, then \(T_2\) has read uncommitted data (see the toy sketch after this list).

  2. Nonrepeatable reads – For example, transaction \(T_1\) updates some data, while transaction \(T_2\) reads that data before and after the update. The values of the data read by \(T_2\) vary within the duration of a single transaction.

  3. Phantoms – For example, transaction \(T_1\) creates a new file under a directory, while transaction \(T_2\) lists the directory before and after the file creation. \(T_2\) then sees different directory contents within a single transaction. Similar examples exist in databases for the SELECT query with a predicate; the predicate need not be selective, since even a trivially true one (e.g., a full-table scan) defines a result set that new rows can enter.
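To make the first case concrete, here is a toy dirty-read sketch in Python. This is not real database code: the shared dicts, the record name "x", and the sleep timings are all illustrative assumptions.

```python
import threading
import time

data = {"x": 100}        # current, possibly uncommitted, value
committed = {"x": 100}   # last committed value

def t1_update_then_abort():
    data["x"] = 999              # T1's uncommitted write
    time.sleep(0.2)              # window in which T2 reads
    data["x"] = committed["x"]   # T1 aborts: roll back to the committed value

def t2_read():
    time.sleep(0.1)
    # With no synchronization, T2 observes T1's uncommitted write.
    print("T2 read:", data["x"])  # prints 999, a value that never commits

t1 = threading.Thread(target=t1_update_then_abort)
t2 = threading.Thread(target=t2_read)
t1.start(); t2.start()
t1.join(); t2.join()
```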

All these inconsistent cases can be prevented if we synchronize reads and writes as follows:

  • Reads block writes for the duration of each transaction.
  • Writes block reads for the duration of each transaction.
  • (Here writes include new insertions of data.)

With that, transactions that touch the same data are in essence executed serially: at any time, only one of them can be accessing that data. Therefore, no interference of state changes occurs across transactions. That is how serializable isolation ensures consistency. Clearly the cost is high: concurrency on shared data is lost entirely. In practice, serializable isolation is often relaxed to trade consistency for concurrency.
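Here is a minimal sketch of that locking discipline, assuming one exclusive lock per data item held until the transaction ends (essentially strict two-phase locking); the item names are illustrative.

```python
import threading

locks = {"x": threading.Lock()}
data = {"x": 100}

class Transaction:
    def __init__(self):
        self.held = []

    def _acquire(self, key):
        # Both reads and writes take the item's lock and keep it,
        # so any two transactions touching the same data serialize.
        if locks[key] not in self.held:
            locks[key].acquire()
            self.held.append(locks[key])

    def read(self, key):
        self._acquire(key)
        return data[key]

    def write(self, key, value):
        self._acquire(key)
        data[key] = value

    def commit(self):
        # Locks are released only at the very end of the transaction.
        for lock in self.held:
            lock.release()
        self.held.clear()

t = Transaction()
t.write("x", t.read("x") + 1)  # read and write under the same lock
t.commit()
print(data["x"])  # 101
```

Because read() also takes the exclusive lock here, even two read-only transactions on the same data would block each other; a real engine would use shared read locks instead.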

Repeatable read isolation

Repeatable read isolation is similar to serializable isolation except that it allows phantoms. Synchronization between reads and writes is as below:

  • Reads block writes for the duration of each transaction.
  • Writes block reads for the duration of each transaction.
  • (Here writes exclude new insertions of data.)

Similar to serializable isolation, reads and writes block each other. However, such blocking does not cover new insertions of data (which is why phantoms may occur), and thus repeatable read exhibits better concurrency than serializable isolation. Synchronization of reads and writes on existing data can be enforced with locks on that data. New insertions, however, involve data that does not yet exist, so the synchronization must move to a higher level, for example, the directory containing the files, or the range defined by a SELECT predicate. Such higher-level blocking is included in serializable isolation but not in repeatable read isolation, as the sketch below illustrates.
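A toy sketch of why locking only existing rows lets phantoms through: a newly inserted row has no lock for the reader to hold. The row values and sleep timings are illustrative.

```python
import threading
import time

rows = [1, 2, 3]   # pretend each existing row is locked by the reader

def t2_scan_twice():
    first = list(rows)    # predicate read #1: "all rows"
    time.sleep(0.2)       # T1 inserts in this window
    second = list(rows)   # predicate read #2
    print("first scan: ", first)   # [1, 2, 3]
    print("second scan:", second)  # [1, 2, 3, 4]: a phantom row

def t1_insert():
    time.sleep(0.1)
    rows.append(4)        # no existing-row lock can block this insert

a = threading.Thread(target=t2_scan_twice)
b = threading.Thread(target=t1_insert)
a.start(); b.start()
a.join(); b.join()
```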

Read committed isolation

Read committed isolation relaxes consistency further. It mainly ensures that reads never see uncommitted data, while allowing nonrepeatable reads and phantoms. Correspondingly, synchronization between reads and writes is weaker still, with even better concurrency:

  • Reads block writes only while the data are being read; read locks are released immediately afterwards.
  • Writes block reads until the writing transaction commits (see the sketch after this list).
  • (Here writes exclude new insertions of data.)
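A minimal sketch of this scheme under the stated assumptions: a single lock per item, held briefly by readers but held by a writer until commit. The names and timings are illustrative.

```python
import threading
import time

lock = threading.Lock()
data = {"x": 100}

def read(key):
    # Short read lock: released right after the value is copied, so a
    # later committed write can change what a re-read sees.
    with lock:
        return data[key]

def write_and_commit(key, value):
    with lock:              # the write lock is held until commit...
        data[key] = value
        time.sleep(0.1)     # ...so readers wait here and never see
                            # the uncommitted value

first = read("x")                     # 100
threading.Thread(target=write_and_commit, args=("x", 200)).start()
time.sleep(0.2)
second = read("x")                    # 200: a nonrepeatable read
print(first, second)
```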

Read uncommitted isolation

Read uncommitted isolation further allows reading uncommitted data. It’s the most relaxed isolation level, where dirty reads, nonrepeatable reads, and phantoms are all possible. There is no synchronization between reads and writes, and thus it has the highest concurrency:

  • Reads don’t block writes.
  • Writes don’t block reads.

Snapshot isolation

The way snapshot isolation supports concurrency is different from all that we have discussed above. It creates a snapshot of the data for each transaction as that transaction starts. When multiple transactions read from or write to the same data, they end up working on different versions of that data. Since the versions are separate, no read-write blocking is involved, and thus snapshot isolation provides high concurrency:

  • Reads don’t block writes.
  • Writes don’t block reads.

Note that with snapshot isolation, writes still conflict with writes. If two transactions update the same data and both try to commit, their final write results conflict, and one of the transactions needs to roll back, as the sketch below shows.
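A toy sketch of that conflict rule, often called "first committer wins"; the version-number scheme here is an illustrative assumption, not how any particular engine stores versions.

```python
store = {"x": (100, 0)}   # key -> (value, version)

class SnapshotTxn:
    def __init__(self):
        self.snapshot = dict(store)   # versions visible at start
        self.writes = {}

    def read(self, key):
        if key in self.writes:
            return self.writes[key]   # read your own writes
        return self.snapshot[key][0]

    def write(self, key, value):
        self.writes[key] = value

    def commit(self):
        # Abort if any key we wrote was committed by someone else
        # after our snapshot was taken.
        for key in self.writes:
            if store[key][1] != self.snapshot[key][1]:
                raise RuntimeError("write-write conflict: roll back")
        for key, value in self.writes.items():
            store[key] = (value, store[key][1] + 1)

t1, t2 = SnapshotTxn(), SnapshotTxn()
t1.write("x", t1.read("x") + 1)   # T1 updates x
t2.write("x", t2.read("x") + 1)   # T2 updates the same x
t1.commit()                       # first committer wins
try:
    t2.commit()
except RuntimeError as e:
    print(e)                      # T2 must roll back
```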

Interestingly, while snapshot isolation prevents all three inconsistent cases (i.e., dirty reads, nonrepeatable reads, and phantoms) just as serializable isolation does, it is not a serializable isolation level. One typical example, which can happen under snapshot isolation but not under serializable isolation (therefore, the latter is stronger than the former), is a value swap:

  • Let’s say we have two variables, \(a=1\) and \(b=2\).
  • Transaction \(T_1\) reads the value of \(a\) and then assigns it to \(b\).
  • Transaction \(T_2\) reads the value of \(b\) and then assigns it to \(a\).

When \(T_1\) and \(T_2\) execute in parallel under snapshot isolation, both transactions read the original values of \(a\) and \(b\), and therefore the result becomes \(a=2\), \(b=1\). Such a value swap can never happen under serializable isolation, where the result can only end up with \(a\) and \(b\) holding the same value.
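The few lines below demonstrate the swap with explicit per-transaction snapshots (a toy model, not an engine):

```python
store = {"a": 1, "b": 2}

snap1 = dict(store)        # T1's snapshot at start
snap2 = dict(store)        # T2's snapshot at start

store["b"] = snap1["a"]    # T1: b := a, reading a = 1 from its snapshot
store["a"] = snap2["b"]    # T2: a := b, reading b = 2 from its snapshot

# The writes touch different keys, so there is no write-write
# conflict and both transactions commit.
print(store)  # {'a': 2, 'b': 1}: swapped, impossible under serializability
```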

Best of both worlds - read committed snapshot isolation (RCSI)

We have seen that snapshotting is a nice way to implement isolation without involving read-write blocking. Besides snapshot isolation, can we apply snapshotting to the implementation of other isolation levels? The answer is yes, and one case (maybe the only one, because the others don’t seem to make sense to me) is to realize the read committed isolation level through snapshotting, named read committed snapshot isolation (RCSI). Note that RCSI itself is not another isolation level; it’s another implementation of the read committed isolation level. Consequently, RCSI prevents dirty reads while allowing nonrepeatable reads and phantoms. The property of snapshotting further allows RCSI to have high concurrency. Compared to snapshot isolation (SI), RCSI always reads the latest committed data, while SI only sees the data snapshotted at the beginning of the transaction.
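A toy contrast of the two read rules (the classes and names are illustrative assumptions): SI reads from one snapshot taken at transaction start, whereas RCSI takes a fresh look at committed data for each statement.

```python
committed = {"x": 100}   # holds only committed values in this toy model

class SITxn:
    def __init__(self):
        self.snapshot = dict(committed)  # one snapshot for the whole txn

    def read(self, key):
        return self.snapshot[key]

class RCSITxn:
    def read(self, key):
        return committed[key]            # fresh committed value per read

si, rcsi = SITxn(), RCSITxn()
committed["x"] = 200                     # another transaction commits
print(si.read("x"))    # 100: the start-of-transaction snapshot
print(rcsi.read("x"))  # 200: the latest committed value
```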
