What Happens When Many Concurrent Requests Try to Book the Same Movie Theatre Seat

Understanding the Problem Statement: 

The Classic Seat Booking Scenario 

Imagine a popular new movie just released. Users flood your app to grab a seat. Many requests hit the server simultaneously, all aiming for the same seat—Row 10, Seat A1.

SnapCode 1:
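A naive implementation might look something like this (a minimal sketch; the `BookingService` class and its in-memory seat map are hypothetical stand-ins for the real endpoint):

```java
import java.util.HashMap;
import java.util.Map;

// A naive booking service: check-then-act guarded by a synchronized method.
public class BookingService {
    private final Map<String, String> seatStatus = new HashMap<>();

    // Only one thread in THIS process can run this method at a time.
    public synchronized boolean bookSeat(String seatId, String userId) {
        String status = seatStatus.getOrDefault(seatId, "AVAILABLE");
        if ("AVAILABLE".equals(status)) {               // check availability
            seatStatus.put(seatId, "BOOKED:" + userId); // update to booked
            return true;
        }
        return false; // seat already taken
    }
}
```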

  It seems robust. But is it? 

A Quick Look into the Critical Section: 

The critical section here involves:

Reading the seat’s current status 

Checking availability 

Updating the status to booked 

In a single-threaded or even single-process multi-threaded environment, this can work fine. But when the system grows—think microservices, multiple servers, and distributed databases—this strategy can backfire. 

Concurrency Issues in Multithreaded and Distributed Systems 

Scenario 1: Single Process, Multiple Threads 

If you’re dealing with one JVM instance (e.g., P1 → t1, t2, t3, t4), then synchronized() works. Threads in the same process can honour the lock, ensuring only one thread executes the critical section at a time. 

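To make the single-JVM case concrete, here is a small sketch (class name and seat representation are illustrative): four threads race for one seat, and the intrinsic lock guarantees exactly one wins.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SingleProcessDemo {
    private static String seatStatus = "AVAILABLE";

    // Threads t1..t4 in the same JVM all honour this intrinsic lock.
    private static synchronized boolean bookSeat() {
        if ("AVAILABLE".equals(seatStatus)) {
            seatStatus = "BOOKED";
            return true;
        }
        return false;
    }

    static int runDemo() throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        AtomicInteger successes = new AtomicInteger();
        for (int i = 0; i < 4; i++) {
            pool.submit(() -> { if (bookSeat()) successes.incrementAndGet(); });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return successes.get(); // exactly one thread ever books the seat
    }
}
```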

Scenario 2: Multiple Parallel Processes 

If your system runs across multiple servers or JVMs (e.g., P1 and P2), each with its own threads (t1, th1, etc.), synchronized() becomes useless. Each JVM has its own memory and cannot honour another JVM's lock. 

Conclusion? synchronized() is not suitable for distributed systems. 

 

 


Using the Synchronized Block in Java 

How synchronized() Works 

It provides an intrinsic lock on objects or methods, ensuring atomic access to the critical section—but only within the same process. 

Applicability in a Single Process 

Useful in monoliths or tightly coupled systems where all threads run in one process space. 

Limitations in a Distributed Architecture 

No shared memory = no shared locks = chaos in booking seats. 

Why Synchronization Fails in Distributed Systems 

JVM-Level Locking 

synchronized() is bound to the Java Virtual Machine’s memory model. Each process is unaware of the others. 

No Shared Memory Across Systems 

In distributed systems, each service might run in a container, VM, or cloud instance—no shared memory means no lock sharing. 

Prerequisites to Understanding Concurrency Control Mechanisms: 

Before diving into concurrency strategies like OCC and PCC, you must understand: 

1. The Role of Database Transactions 

Transactions bundle operations into a unit of work that must either complete entirely or roll back, ensuring ACID properties. 
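As a toy illustration of this all-or-nothing behaviour (an in-memory sketch, not a real database): writes are buffered until commit, and rollback discards them without touching the store.

```java
import java.util.HashMap;
import java.util.Map;

// Toy transaction: writes are buffered and only reach the store on commit.
public class MiniTx {
    private final Map<String, Integer> store;
    private final Map<String, Integer> pending = new HashMap<>();

    public MiniTx(Map<String, Integer> store) { this.store = store; }

    public void put(String key, int value) { pending.put(key, value); }

    // Reads see this transaction's own uncommitted writes first.
    public int get(String key) {
        return pending.getOrDefault(key, store.getOrDefault(key, 0));
    }

    public void commit()   { store.putAll(pending); pending.clear(); }
    public void rollback() { pending.clear(); } // discard all buffered writes
}
```

A rolled-back update leaves the store exactly as it was, which is the atomicity part of ACID.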

2. Basics of DB Locking 

Locks prevent conflicts: 

Shared Lock: For reads 

Exclusive Lock: For writes

Dirty Read:  A dirty read occurs when a transaction reads data that has been modified by another transaction but not yet committed. If the modifying transaction later rolls back, the data read was never actually saved—hence, it was “dirty.” 

Example: 

  1. Transaction A updates a user’s balance from $100 to $500. 

  2. Transaction B reads the balance as $500. 

  3. Transaction A rolls back the update. 

  4. Transaction B used a value ($500) that no longer exists—this is a dirty read.
     

Non-Repeatable Read: A non-repeatable read happens when a transaction reads the same row twice and gets different values because another transaction modified the row and committed in between. 

Example: 

  1. Transaction A reads an order status → “pending”. 

  2. Transaction B updates the order status → “shipped” and commits. 

  3. Transaction A reads the same order again → now it’s “shipped”. 

  4. The value changed mid-transaction, resulting in a non-repeatable read.

     

Phantom Read: A phantom read occurs when a transaction executes the same query multiple times, and new rows appear (or existing ones disappear) due to another transaction’s insert or delete. 

Example: 

  1. Transaction A runs SELECT * FROM orders WHERE region = 'North' → gets 2 rows. 

  2. Transaction B inserts a new order in the 'North' region and commits. 

  3. Transaction A runs the same query again → now gets 3 rows. 

  4. That new row is a phantom—it wasn’t there before. 

 

3. Overview of Isolation Levels 

| Isolation Level  | Dirty Read | Non-Repeatable Read | Phantom Read |
|------------------|------------|---------------------|--------------|
| Read Uncommitted | Yes        | Yes                 | Yes          |
| Read Committed   | No         | Yes                 | Yes          |
| Repeatable Read  | No         | No                  | Yes          |
| Serializable     | No         | No                  | No           |

Optimistic Concurrency Control (OCC) 

OCC is a method to handle concurrency that optimistically assumes conflicts are rare. Instead of locking resources when a transaction starts (like pessimistic locking), it allows multiple transactions to proceed concurrently but verifies before committing whether any conflicting changes occurred. 

  • It’s ideal when conflicts are rare and performance is critical.
  • It prevents lost updates by detecting conflicts at commit time.
  • If a conflict is detected, one of the transactions rolls back and retries. 

Key Characteristics of OCC 

  • Uses versioning or timestamps
  • Allows maximum concurrency
  • Avoids deadlocks
  • Suitable for read-heavy, low-conflict systems

REPEATABLE READ Isolation Level 

Ensures that a row read within a transaction returns the same value on every subsequent read in that transaction. 

Repeatable Read is an isolation level in databases that ensures: 

  • A row read in a transaction remains unchanged until the transaction ends.
  • Even if another transaction modifies that row and commits, the original transaction won’t see the new value. 

It prevents non-repeatable reads, where a row value changes mid-transaction. 

How OCC Handles Concurrent Seat Booking 

Step-by-Step Implementation 

  1. Transaction A reads Seat 10A1 at version V1. 

  2. Transaction B reads the same seat at version V1. 

  3. Both perform their booking logic. 

  4. Transaction A commits and increments the version to V2. 

  5. Transaction B tries to commit → fails (conflict detected). 

  6. Transaction B rolls back and retries.
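The steps above can be sketched in a few lines (a hypothetical in-memory seat record; a real system would keep the version column in the database):

```java
import java.util.concurrent.atomic.AtomicReference;

// OCC sketch: each seat carries a version; a commit succeeds only if the
// version is still the one observed at read time (compare-and-set).
public class OccSeat {
    public record SeatState(String status, long version) {}

    private final AtomicReference<SeatState> state =
        new AtomicReference<>(new SeatState("AVAILABLE", 1));

    public long readVersion() { return state.get().version(); }

    public boolean tryBook(long observedVersion) {
        SeatState current = state.get();
        if (current.version() != observedVersion
                || !"AVAILABLE".equals(current.status())) {
            return false; // conflict detected: caller must re-read and retry
        }
        // Atomically swap in the booked state with an incremented version.
        return state.compareAndSet(current,
                new SeatState("BOOKED", observedVersion + 1));
    }
}
```

In a relational database, the same idea is typically one statement, e.g. `UPDATE seats SET status = 'BOOKED', version = version + 1 WHERE id = ? AND version = ?`, where zero affected rows signals the conflict.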


When to Use OCC? 

Optimistic Concurrency Control (OCC) is a powerful technique for managing concurrent access to data in modern applications. But when is it the right tool for the job? Let’s break it down with practical scenarios, clear explanations, and real-world analogies. 

The Core Principle of OCC 

OCC operates on the assumption that most transactions will not conflict. It allows multiple users to access and work with the same data simultaneously, only checking for conflicts at the moment of committing changes. If a conflict is detected—meaning another transaction has modified the data in the meantime—the transaction is rolled back and can be retried. 

Ideal Scenario: High Concurrency, Low Conflict 

Imagine a popular flight booking site. At any given moment, you might have 1,000 users browsing available seats, but only a handful are trying to book the same seat at the same time. In this environment: 

  • Most users are just reading data (checking seat availability). 

  • Very few users are writing/updating data (booking a seat). 

With such a low probability of two users trying to book the same seat simultaneously, OCC shines. It lets everyone browse freely, and only those rare booking conflicts require special handling (a retry). 

Why OCC Works Well Here 

  • No Locking Overhead: Unlike pessimistic locking (which locks data and can cause bottlenecks), OCC lets everyone access data without waiting. This means your system can handle more users at once and remains highly responsive. 

  • Efficient for Reads: Since most operations are reads, which don’t cause conflicts, OCC avoids unnecessary performance costs. 

  • Simple Retry Logic: If a conflict does occur (two people try to book the same seat), only the second transaction fails and needs to be retried. This is a rare event, so the overall user experience remains smooth. 
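That retry step can be as simple as a bounded loop with a little backoff (a hypothetical helper; the Supplier would wrap a single booking attempt):

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.function.Supplier;

// Hypothetical retry wrapper for OCC: if the commit reports a conflict,
// back off briefly and try again, up to a small bound.
public class OccRetry {
    public static boolean withRetry(Supplier<Boolean> attempt, int maxAttempts) {
        for (int i = 0; i < maxAttempts; i++) {
            if (attempt.get()) return true;   // committed without conflict
            try {                             // jittered backoff before retrying
                Thread.sleep(ThreadLocalRandom.current().nextInt(5, 20));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false; // give up after maxAttempts conflicts
    }
}
```

Bounding the attempts matters: under genuinely high contention an unbounded retry loop just converts conflicts into wasted work.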

When NOT to Use OCC

OCC is not a silver bullet. If your system has frequent conflicts—such as a flash sale where hundreds of users are trying to buy the same limited item at once—OCC can lead to many failed transactions and retries, hurting performance. In such cases, pessimistic locking or queuing mechanisms may be better suited. 

Conclusion

When many users try to book the same movie theatre seat at once, in-memory synchronization methods like Java’s synchronized block fail in distributed systems. While they work in single-process setups, they break down across multiple servers since memory and locks aren’t shared. To avoid issues like double booking, database-level concurrency control is essential. Optimistic Concurrency Control (OCC) is a smart approach here—it assumes conflicts are rare and allows concurrent operations, only checking for conflicts before committing. If two transactions try to book the same seat, the first one commits, while the second detects a version mismatch, rolls back, and retries. OCC is ideal for high-traffic, low-conflict systems like booking apps. Ultimately, ensuring consistency in distributed systems requires strategies like OCC or strong isolation levels such as REPEATABLE READ or SERIALIZABLE to guarantee reliable and fair results under load. 

-Krishna Kumar Saw
FullStack Engineer