
Managing large transactions in distributed databases has always been a tough nut to crack. From processing millions of records during migrations to tackling complex workflows like ETL (Extract, Transform, Load), businesses need database solutions that are not only fast but also reliable.

TiDB has long been at the forefront of distributed SQL database innovation, and with Pipelined DML in TiDB 8.1, we’re raising the bar once again. This new feature, currently available as an experimental capability, redefines how large transactions are handled, introducing a seamless, memory-efficient approach that makes scaling up easier than ever.

In this post, we’ll dive into why we built Pipelined DML, how it works, and the transformational benefits it brings to managing modern data workloads.

Tackling Large Transactions: Challenges and Opportunities

Large-scale transactions are the backbone of many critical operations—think bulk data updates, system migrations, or ETL workflows where millions of rows need processing. While TiDB excels as a distributed SQL database, handling such transactions at scale brought two significant challenges:

  • Memory Limits: Before TiDB 8.1, all of a transaction’s mutations were held in memory throughout the transaction’s lifecycle. For operations touching millions of rows, this could drive memory usage high enough to cause Out of Memory (OOM) errors when resources ran short.
  • Performance Slowdowns: The in-memory buffer was backed by red-black trees, which introduced computational overhead. Each lookup or insertion in such a structure costs $O(\log N)$, so buffering $N$ mutations costs $O(N \log N)$ in total, and operations slowed as buffers grew.

Take a common example, archiving historical sales data into a separate table:

INSERT INTO sales_archive 
SELECT * FROM sales 
WHERE sale_date < '2023-01-01';

In such scenarios, holding millions of rows in memory until the transaction commits not only strains resources but also impacts speed. These challenges highlighted a clear opportunity to improve scalability, reduce complexity, and enhance reliability. With the rise of modern data workloads, the TiDB team developed a bold solution: Pipelined DML, designed to transform how large transactions are handled.

Existing Workarounds: Steps Toward a Complete Solution for Large Transactions

Before Pipelined DML was available, TiDB offered several workarounds for the large-transaction problem. These interim solutions, while helpful in specific scenarios, came with trade-offs:

  1. Batch-DML (now deprecated): Introduced in TiDB v4.0, Batch-DML split a transaction into smaller parts that committed individually, enabling support for larger data operations. As TiDB evolved more robust solutions, Batch-DML was deprecated to ensure higher reliability and data integrity, and it is no longer recommended.
  2. Non-Transactional DML: First introduced in v6.1 and GA since v6.5, this feature lets TiDB split a single statement into multiple smaller statements and execute them in sequence. It preserves data integrity but lacks atomicity, and it requires users to rewrite their statements, which often adds complexity (see the sketch after this list).
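
To make that rewrite concrete, here is a hypothetical Go snippet issuing a non-transactional DELETE over TiDB’s MySQL-compatible protocol. The BATCH syntax follows TiDB’s non-transactional DML documentation; the table, column, and connection details are made up for illustration:

package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql" // TiDB speaks the MySQL protocol
)

func main() {
	// Hypothetical DSN; adjust host, credentials, and database name.
	db, err := sql.Open("mysql", "root@tcp(127.0.0.1:4000)/test")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// TiDB splits this DELETE into chunks of 10,000 rows keyed on `id`
	// and commits each chunk separately, so the statement as a whole
	// is not atomic.
	_, err = db.Exec(
		"BATCH ON id LIMIT 10000 DELETE FROM sales WHERE sale_date < '2023-01-01'")
	if err != nil {
		log.Fatal(err)
	}
}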

These methods showcased TiDB’s flexibility and focus on user needs, even under challenging circumstances. However, they also underscored the need for a more seamless, built-in solution that is atomic, performant, and easy to use. This realization paved the way for Pipelined DML.

Pipelined DML: Revolutionizing Large Transactions

To address the growing demands of large-scale data operations, the TiDB team developed Pipelined DML, a transformative enhancement to the original Percolator protocol. This feature changes the game by enabling continuous write flushing to TiKV, TiDB’s storage layer, rather than relying solely on in-memory buffers until the commit phase. This shift ensures efficient, scalable, and reliable transaction management. Here’s what makes Pipelined DML a breakthrough:

1. Continuous Flushing: Incremental Writes for Better Memory Management

Pipelined DML writes data to TiKV in small, manageable batches, drastically reducing memory usage. This ensures smooth processing, even for large transactions, while eliminating the risk of out-of-memory (OOM) errors.

Example: Picture migrating a massive sales dataset to an archive table in TiDB. Previously, this would require storing the entire dataset in memory before committing, risking resource exhaustion. Now, Pipelined DML writes the data incrementally to TiKV as it is processed, keeping memory usage steady and workflows uninterrupted.
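
As a usage sketch, and assuming the `tidb_dml_type` session variable available in recent TiDB releases, the archive statement from earlier could run with Pipelined DML enabled like this (connection details are hypothetical):

package main

import (
	"context"
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	// Hypothetical DSN; adjust for your cluster.
	db, err := sql.Open("mysql", "root@tcp(127.0.0.1:4000)/test")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	ctx := context.Background()
	// Pin a single connection so the session variable and the DML
	// statement run in the same session.
	conn, err := db.Conn(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Opt in to Pipelined DML for this session (experimental).
	if _, err := conn.ExecContext(ctx, `SET SESSION tidb_dml_type = "bulk"`); err != nil {
		log.Fatal(err)
	}

	// The archive statement now flushes its mutations to TiKV as it
	// runs, instead of buffering every row in memory until commit.
	if _, err := conn.ExecContext(ctx,
		`INSERT INTO sales_archive SELECT * FROM sales WHERE sale_date < '2023-01-01'`); err != nil {
		log.Fatal(err)
	}
}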

2. Asynchronous Buffer Management: Parallel Processing for Reduced Latency

By decoupling transaction execution from storage writes, Pipelined DML allows TiDB to process and write data simultaneously. This parallelism reduces transaction latency and optimizes resource utilization.

Example: Imagine processing millions of real-time log entries for an analytics system. With Pipelined DML, the system can process new entries while simultaneously writing completed logs to storage, enabling faster throughput and seamless handling of high-volume workloads.

3. Smoothed CPU and I/O Utilization: Consistent Resource Efficiency

Traditional batch processing often leads to spikes in resource usage, which can slow down other operations. Pipelined DML spreads the workload evenly, ensuring TiKV processes data at a steady rate.

Example: Updating global pricing data for a retail platform’s products can be resource-intensive. Pipelined DML ensures these updates happen gradually, avoiding sudden CPU or I/O bottlenecks and maintaining system stability.

These innovations make Pipelined DML a cornerstone of TiDB’s ability to meet modern data workload demands. It allows businesses to manage massive transactions with confidence, delivering a scalable, efficient, and reliable solution.


Figure 1: Traditional 2PC processes store all writes in memory until the commit phase (left), while Pipelined DML incrementally flushes writes to TiKV as generated, ensuring minimal memory usage (right).

How Pipelined DML Works: A Step-by-Step Breakdown for Managing Large Transactions

Pipelined DML integrates seamlessly into TiDB’s architecture, enhancing transaction processing without disrupting established workflows. Let’s break down how it optimizes each phase of the transaction lifecycle:

1. Execution Phase: Efficient Start to Every Transaction

Like traditional Two-Phase Commit (2PC), the transaction begins with parsing, planning, and executing the SQL statement. However, Pipelined DML introduces a critical difference: instead of holding all writes in memory, it flushes them to TiKV as they are generated. This approach prevents memory buildup and keeps processing efficient, even for transactions involving millions of rows.

2. Flushing Mechanism: Incremental Writes for Stability

Writes are sent to TiKV in small, manageable batches. This mechanism serves two vital functions:

  • Persistence: By progressively storing writes in TiKV, Pipelined DML significantly reduces memory usage, ensuring stable operations for large transactions.
  • Rate Limiting: If TiDB’s executor generates writes faster than TiKV can process them, the system dynamically slows down the producer. This ensures smooth operation, avoiding memory overloads or processing bottlenecks.

3. Commit Phase: Consistent and Reliable Finalization

Once all writes have been flushed, TiDB moves to the commit phase. At this stage, it scans and commits the locks associated with the flushed data, ensuring transactional consistency. This approach maintains TiDB’s ACID guarantees, even for highly complex or large-scale workloads.

Memory Buffer Enhancements: Supporting Continuous Flushing

To make Pipelined DML possible, TiDB’s in-memory database (MemDB)—responsible for managing transaction writes—was upgraded to support continuous flushing.

The enhanced architecture consists of seven principles:

  1. Dual MemDB States: Each transaction uses one mutable MemDB and at most one immutable MemDB.
  2. Writes: All write operations are directed to the mutable MemDB.
  3. Reads: Read operations aggregate data from all active MemDBs for accurate results.
  4. Flushing: The immutable MemDB handles flushes exclusively.
  5. Transition: When no immutable MemDB exists, the mutable MemDB becomes immutable, and a new mutable MemDB is created.
  6. Memory release: The immutable MemDB is discarded after flushing completes.
  7. Flow Control: Incoming writes are paused when the mutable MemDB grows too large to maintain stability.

These enhancements enable TiDB to efficiently manage concurrent read, write, and flush operations while keeping memory usage under control.


Figure 2: The updated buffer structure supporting continuous flushing, designed to replace the original memory buffer.
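
To make these principles concrete, here is a minimal, hypothetical Go sketch of the dual-buffer idea. It is an illustration only, not TiDB’s actual MemDB implementation; in particular, it flushes synchronously and simply rotates buffers where the real system would pause incoming writes (principle 7):

package main

import "fmt"

type memBuffer map[string][]byte

// pipelinedMemDB keeps one mutable buffer for new writes and at most
// one immutable buffer that is being flushed (principle 1).
type pipelinedMemDB struct {
	mutable   memBuffer
	immutable memBuffer // nil when no flush is in progress
	flushSize int       // size threshold that triggers a flush
}

// Set directs every write to the mutable buffer (principle 2) and
// rotates buffers once the mutable one grows past the threshold and
// no flush is already running (principle 5).
func (db *pipelinedMemDB) Set(key string, value []byte) {
	db.mutable[key] = value
	if len(db.mutable) >= db.flushSize && db.immutable == nil {
		db.immutable = db.mutable    // freeze the full buffer
		db.mutable = make(memBuffer) // fresh buffer for new writes
		db.flush()                   // asynchronous in the real system
	}
}

// Get aggregates reads across both buffers, newest data first
// (principle 3); a miss would fall through to TiKV in practice.
func (db *pipelinedMemDB) Get(key string) ([]byte, bool) {
	if v, ok := db.mutable[key]; ok {
		return v, true
	}
	if db.immutable != nil {
		if v, ok := db.immutable[key]; ok {
			return v, true
		}
	}
	return nil, false
}

// flush stands in for the write to TiKV; only the immutable buffer
// flushes (principle 4), and it is released afterwards (principle 6).
func (db *pipelinedMemDB) flush() {
	db.immutable = nil
}

func main() {
	db := &pipelinedMemDB{mutable: make(memBuffer), flushSize: 2}
	db.Set("a", []byte("1"))
	db.Set("b", []byte("2")) // triggers a rotation and flush
	_, ok := db.Get("a")
	fmt.Println(ok) // false: "a" now lives in TiKV, not in memory
}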

Overcoming Challenges in Pipelined DML

Introducing a feature as transformative as Pipelined DML requires tackling unique technical hurdles. The TiDB team identified and addressed three primary challenges to ensure seamless and reliable operation.

Challenge 1: Out-of-Order Flush Operations

In distributed systems, network instability can cause flush operations to be lost, delayed, or delivered out of order, risking data inconsistency.

How TiDB Solves This: Generation Numbers

Each flush operation is assigned a unique, incrementing generation number to ensure:

  • Correct Ordering: TiKV processes writes in sequential order based on their generation numbers.
  • Stale Write Prevention: If a delayed operation arrives after a more recent one, TiKV rejects it, preserving data integrity.

Example: Imagine Flush Operation 1 (Gen 1) and Flush Operation 2 (Gen 2) are sent in order but arrive out of sequence. TiKV applies Gen 2 and rejects the late-arriving Gen 1 as stale, keeping the data consistent.
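
A minimal Go sketch of that staleness check, with hypothetical types (TiKV itself is written in Rust; this is purely illustrative):

package main

import "fmt"

// flushReq is a hypothetical flush operation tagged with the
// monotonically increasing generation assigned by the sender.
type flushReq struct {
	generation uint64
	mutations  map[string][]byte
}

// store tracks the highest generation applied so far.
type store struct {
	lastGen uint64
	data    map[string][]byte
}

// apply rejects any flush whose generation is not newer than what has
// already been applied, so delayed duplicates cannot clobber newer writes.
func (s *store) apply(req flushReq) bool {
	if req.generation <= s.lastGen {
		return false // stale flush: discard
	}
	for k, v := range req.mutations {
		s.data[k] = v
	}
	s.lastGen = req.generation
	return true
}

func main() {
	s := &store{data: map[string][]byte{}}
	gen2 := flushReq{generation: 2, mutations: map[string][]byte{"k": []byte("new")}}
	gen1 := flushReq{generation: 1, mutations: map[string][]byte{"k": []byte("old")}}

	fmt.Println(s.apply(gen2)) // true: applied
	fmt.Println(s.apply(gen1)) // false: arrived late, rejected
}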

Challenge 2: Slow Point-Read Operations

When a requested key-value pair has already been flushed to TiKV and is no longer in TiDB’s memory, retrieving it requires a remote procedure call (RPC) to TiKV, increasing latency.

How Does TiDB Solve This?

  1. Lazy Check: For cases requiring verification of key non-existence, checks are deferred to the actual write in TiKV. The new buffer.GetLocal(key) method limits lookups to in-memory data, avoiding unnecessary RPCs.
  2. Prefetching: For scenarios like bulk updates, TiDB preloads key-value pairs into a cache using BatchGet. This cache ensures subsequent buffer.Get(key) operations hit the in-memory cache rather than initiating costly RPCs.
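
Here is a hypothetical Go sketch of those two ideas. GetLocal and BatchGet are named after the methods mentioned above, but every type and signature here is illustrative rather than TiDB’s real API:

package main

import "fmt"

// remote stands in for TiKV; each per-key lookup would be an RPC.
type remote map[string][]byte

// BatchGet fetches many keys in what would be a single RPC.
func (r remote) BatchGet(keys []string) map[string][]byte {
	out := make(map[string][]byte, len(keys))
	for _, k := range keys {
		if v, ok := r[k]; ok {
			out[k] = v
		}
	}
	return out
}

// buffer holds in-memory mutations plus a prefetch cache.
type buffer struct {
	local map[string][]byte // mutations still in TiDB's memory
	cache map[string][]byte // rows preloaded via BatchGet
	tikv  remote
}

// GetLocal consults only in-memory data; verifying a key's absence is
// deferred to the eventual TiKV write (the "lazy check").
func (b *buffer) GetLocal(key string) ([]byte, bool) {
	v, ok := b.local[key]
	return v, ok
}

// Prefetch warms the cache in one batched round trip so that later
// point reads avoid per-key RPCs.
func (b *buffer) Prefetch(keys []string) {
	for k, v := range b.tikv.BatchGet(keys) {
		b.cache[k] = v
	}
}

// Get prefers local data, then the cache, and only falls back to a
// costly remote lookup as a last resort.
func (b *buffer) Get(key string) ([]byte, bool) {
	if v, ok := b.local[key]; ok {
		return v, true
	}
	if v, ok := b.cache[key]; ok {
		return v, true
	}
	v, ok := b.tikv[key] // the slow path: one RPC per key
	return v, ok
}

func main() {
	b := &buffer{
		local: map[string][]byte{},
		cache: map[string][]byte{},
		tikv:  remote{"price:42": []byte("9.99")},
	}
	b.Prefetch([]string{"price:42"}) // one batched round trip
	v, _ := b.Get("price:42")        // served from cache, no extra RPC
	fmt.Println(string(v))
}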

Challenge 3: Staging Across Nodes

TiDB’s staging mechanism allows temporary buffers (stages) to hold changes that can later be committed or rolled back. While this is simple in a memory-only setup, extending it across TiDB and TiKV nodes adds complexity.

How Does TiDB Simplify Staging?

By limiting staging operations to occur between flush points, the system ensures:

  • All staged changes resolve (committed or rolled back) before TiKV flushes data.
  • Memory-only staging is preserved, simplifying implementation while retaining essential features like ON DUPLICATE KEY UPDATE.
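
A tiny, hypothetical Go sketch of that constraint: flushes are simply refused while any stage is unresolved, so staging never has to span TiDB and TiKV:

package main

import (
	"errors"
	"fmt"
)

// memDB is a toy in-memory buffer with a count of open stages.
type memDB struct {
	buf        map[string][]byte
	openStages int
}

func (db *memDB) StagingBegin() { db.openStages++ }

// StagingCommit keeps the staged writes; a rollback variant would
// undo them (omitted for brevity). Either way, the stage is resolved.
func (db *memDB) StagingCommit() { db.openStages-- }

// Flush only proceeds between staging points, keeping staging a
// purely in-memory concern.
func (db *memDB) Flush() error {
	if db.openStages > 0 {
		return errors.New("cannot flush: unresolved staging buffer")
	}
	// ... write db.buf to TiKV here ...
	return nil
}

func main() {
	db := &memDB{buf: map[string][]byte{}}
	db.StagingBegin()
	db.buf["k"] = []byte("v") // a staged write
	fmt.Println(db.Flush())   // error: a stage is still open
	db.StagingCommit()
	fmt.Println(db.Flush())   // <nil>: safe to flush now
}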

Performance Enhancements: TiDB Reaches New Heights for Large Transactions

With Pipelined DML, TiDB sets a new standard for processing large transactions, significantly improving speed, memory efficiency, and throughput. Extensive benchmarks of TiDB 8.4 against TiDB 7.5 highlight these advancements in practical, real-world scenarios.

Highlights of Testing Environment

All benchmarks use the following parameters:

  • Workload: YCSB table with 10 million rows.
  • Test Environment: GCP n2-standard-16 machines.
  • Cluster Size: 3 TiKV nodes.

Results demonstrate the practical application of Pipelined DML for scaling operations with TiDB.

Key Improvements

| Cluster Size | Workload Type | Latency (TiDB 7.5) | Latency (TiDB 8.4 with Pipelined DML) | Data Throughput | Performance Gain |
| --- | --- | --- | --- | --- | --- |
| 3 TiKVs | YCSB-insert-10M | 368s | 159s | 75.3 MiB/s | 2.31x |
| 3 TiKVs | YCSB-update-10M | 255s | 131s | 91.5 MiB/s | 1.95x |
| 3 TiKVs | YCSB-delete-10M | 136s | 42s | 285 MiB/s | 3.24x |

How These Gains Translate to Real-World Scenarios

  • Insert Operations: Ideal for loading new datasets, such as importing sales records or customer profiles into TiDB. Continuous flushing ensures rapid, resource-efficient performance.
  • Update Operations: Useful for processing bulk adjustments, like updating pricing data across a large inventory. These updates are now faster and less memory-intensive.
  • Delete Operations: Perfect for archiving or clearing outdated records, such as cleaning up logs, where TiDB’s new capabilities demonstrate incredible speed and efficiency.

Steady Resource Utilization for Better Stability

One of the standout benefits of Pipelined DML is how it smooths CPU and I/O utilization. Unlike traditional Two-Phase Commit (2PC), which often leads to resource spikes during commit phases, Pipelined DML maintains steady performance throughout. This consistency improves system reliability, especially during peak workloads.

Figure 3: Steady resource utilization for better stability when managing large transactions.

Latency and Scalability: A Closer Look at Processing Large Transactions

To process large-scale transactions with minimal delays, Pipelined DML employs a producer-consumer model that ensures data flows efficiently between TiDB and TiKV:

  • Producer: The TiDB executor generates data for TiKV.
  • Consumer: TiKV processes the incoming write requests.
  • Channel: Flush operations act as the bridge, maintaining smooth communication between the producer and consumer.
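
This pattern maps naturally onto goroutines and a bounded channel. The hypothetical sketch below shows how a slow consumer automatically pauses the producer, which is exactly the “flush wait” described next:

package main

import (
	"fmt"
	"time"
)

func main() {
	// A bounded channel models the flush pipeline: when the consumer
	// (TiKV) falls behind, the channel fills and sends block, pausing
	// the producer (the TiDB executor).
	flushes := make(chan []byte, 4)
	done := make(chan struct{})

	// Consumer: applies each batch more slowly than it is produced.
	go func() {
		for batch := range flushes {
			time.Sleep(20 * time.Millisecond) // simulate TiKV write cost
			_ = batch
		}
		close(done)
	}()

	// Producer: generates batches; sends block once the channel is
	// full, and that blocked time is the "flush wait".
	start := time.Now()
	for i := 0; i < 32; i++ {
		flushes <- make([]byte, 1024)
	}
	close(flushes)
	<-done
	fmt.Printf("pipeline drained in %v\n", time.Since(start))
}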

Managing Latency for Consistent Performance

In high-demand scenarios, the producer (TiDB executor) might generate data faster than the consumer (TiKV) can handle. Instead of overwhelming the system, Pipelined DML temporarily pauses the producer to ensure stability and prevent memory spikes. This process, called “flush wait,” is one of three factors influencing overall latency:

Latency Formula:

Overall latency = Execution Time + Flush Wait + Commit Primary Key Duration

Here’s how each factor contributes:

  • Execution Time: The time TiDB spends generating data for the transaction.
  • Flush Wait: Any delays caused when the producer must pause for the consumer to catch up.
  • Commit Primary Key Duration: The time required to commit the primary key once the transaction is ready, which is negligible for large transactions.
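
With some made-up numbers for a large insert, the formula plays out like this:

package main

import "fmt"

func main() {
	// Hypothetical breakdown, all values in seconds.
	execution := 140.0 // executor time spent generating mutations
	flushWait := 18.0  // time the producer was paused for TiKV
	commitPK := 0.3    // committing the primary key: negligible at scale

	fmt.Printf("overall latency ≈ %.1fs\n", execution+flushWait+commitPK)
}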

Scalability Made Simple

Scaling out TiKV nodes directly reduces flush wait times, enabling the system to process larger workloads more efficiently. Benchmarks show significant latency reductions when the cluster adds additional TiKV nodes, demonstrating how easily TiDB adapts to increased demands.

Example Use Case

Suppose an e-commerce platform is preparing for a major sale event and anticipates a surge in transactions, such as bulk order updates or inventory adjustments. By scaling the TiKV nodes beforehand, the platform can handle these operations seamlessly without increasing latency.

Conclusion: Unlocking Large Transactions with TiDB

Pipelined DML marks a transformative step forward in TiDB’s journey to redefine large transaction management. By tackling longstanding challenges like memory constraints and transaction latency, this feature empowers organizations to process massive datasets with confidence and efficiency.

With its ability to deliver faster execution times, dramatically reduced memory usage, and unmatched scalability, TiDB ensures businesses meet the demands of modern data workloads. Whether you’re scaling your infrastructure, streamlining ETL pipelines, or optimizing high-traffic operations, Pipelined DML positions TiDB as a forward-looking, reliable solution for the future of data management.

If you have any questions about Pipelined DML, feel free to connect with us on Twitter, LinkedIn, or through our Slack channel.

