Optimistic Transactions vs. Pessimistic Locks
Hardware vendors are about to provide not only (pessimistic) locks but also (optimistic) transactions within the same shared memory architecture. Concurrent programmers therefore face the task of choosing the synchronization technique best suited to developing highly concurrent software. Yet, it is unclear whether locks can lead to higher concurrency than transactions. This question is nontrivial, as answering it requires defining (i) a concurrency metric that is independent of computational or hardware-specific costs and (ii) a common consistency criterion that any program must ensure. We define the concurrency of a program as its ability to accept a concurrent schedule, a metric inspired by both classic observations on database concurrency control and recent results on transactional memory. We also introduce a new correctness criterion that applies to both optimistic (typically transaction-based) and pessimistic (typically lock-based) executions. It requires executions to be linearizable at the object level and locally serializable at the memory level, in the sense that each process observes a potentially different but sequential execution, which prevents inconsistent intermediate states from crashing the program. Our definitions enable a direct comparison of the concurrency provided by seemingly incomparable synchronization techniques, such as locks and transactional memory. First, even if the transactions provided are strong enough to implement any correct object, they may accept schedules that no lock-based implementation accepts. Second, and more importantly, we show that there exist programs for which relaxed transactions provide strictly higher concurrency than locks.
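To make the contrast concrete, here is a minimal Haskell sketch, not taken from the paper, of the two synchronization styles applied to a toy transfer operation; the names transferLock and transferStm are illustrative. The pessimistic version serializes every transfer behind a single MVar lock, whereas the optimistic version, using GHC's software transactional memory, executes speculatively and retries only on an actual read/write conflict, so it accepts concurrent schedules (e.g., transfers over disjoint accounts) that the single-lock version must reject.

    -- Illustrative sketch only; not code from the paper.
    import Control.Concurrent.MVar (MVar, newMVar, takeMVar, putMVar)
    import Control.Concurrent.STM (TVar, atomically, newTVarIO, readTVar, writeTVar)

    -- Pessimistic: one lock guards both balances, so any two transfers
    -- are serialized even when they touch disjoint state.
    transferLock :: MVar (Int, Int) -> Int -> IO ()
    transferLock accounts n = do
      (a, b) <- takeMVar accounts        -- acquire the lock (blocks competitors)
      putMVar accounts (a - n, b + n)    -- release it with the updated state

    -- Optimistic: each balance is its own TVar; the transaction runs
    -- speculatively and is retried only if a conflicting access occurred,
    -- so transfers over disjoint accounts proceed concurrently.
    transferStm :: TVar Int -> TVar Int -> Int -> IO ()
    transferStm from to n = atomically $ do
      a <- readTVar from
      b <- readTVar to
      writeTVar from (a - n)
      writeTVar to (b + n)

    main :: IO ()
    main = do
      acc <- newMVar (100, 0)
      transferLock acc 10
      from <- newTVarIO (100 :: Int)
      to   <- newTVarIO 0
      transferStm from to 10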