The culprit behind SeaTunnel Kafka Connector "OutOfMemory" found.

The One Line of Code That Ate 12GB of SeaTunnel Kafka Connector's Memory in 5 Minutes


What happened?

In Apache SeaTunnel version 2.3.9, the Kafka connector implementation carried a memory leak risk. When users configured a streaming job to read data from Kafka, memory could grow continuously until an OOM (Out Of Memory) error occurred, even with a read rate limit (read_limit.rows_per_second) set.

What's the key issue?

In real deployments, users observed the following phenomena:

  1. Running a Kafka-to-HDFS streaming job on an 8-core SeaTunnel Engine cluster with 12 GB of memory
  2. Although read_limit.rows_per_second=1 was configured, memory usage soared from 200 MB to 5 GB within 5 minutes
  3. After stopping the job, memory was not released; upon resuming the job, memory kept growing until OOM
  4. Ultimately, worker nodes restarted

Root Cause Analysis

Through code review, it was found that the root cause lay in the createReader method of the KafkaSource class, where elementsQueue was initialized as an unbounded queue:

elementsQueue = new LinkedBlockingQueue<>(); 

This implementation had two critical issues:

  1. Unbounded Queue: LinkedBlockingQueue without a specified capacity can theoretically grow indefinitely. When producer speed far exceeds consumer speed, memory continuously grows.
  2. Ineffective Rate Limiting: Although users configured read_limit.rows_per_second=1, this limit did not actually apply to Kafka data reading, causing data to accumulate in the memory queue.
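The difference between the two queue types can be shown in isolation. The sketch below (plain java.util.concurrent, not SeaTunnel code) fills an unbounded LinkedBlockingQueue and a bounded ArrayBlockingQueue with the same number of elements: the unbounded queue accepts everything, while the bounded queue rejects offers once its capacity is reached, which is the backpressure the fix relies on.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueGrowthDemo {
    public static void main(String[] args) {
        // Unbounded: every offer() succeeds, so a producer that outpaces
        // the consumer grows the queue (and the heap) without limit.
        LinkedBlockingQueue<byte[]> unbounded = new LinkedBlockingQueue<>();
        for (int i = 0; i < 10_000; i++) {
            unbounded.offer(new byte[1024]); // always accepted
        }
        System.out.println("unbounded size = " + unbounded.size());

        // Bounded: once capacity is reached, offer() returns false
        // (and put() would block), pushing backpressure onto the producer.
        ArrayBlockingQueue<byte[]> bounded = new ArrayBlockingQueue<>(1000);
        int accepted = 0;
        for (int i = 0; i < 10_000; i++) {
            if (bounded.offer(new byte[1024])) {
                accepted++;
            }
        }
        System.out.println("bounded accepted = " + accepted);
    }
}
```

In the real connector the fetcher thread uses the blocking variants, so a full bounded queue pauses Kafka polling instead of buffering indefinitely.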

Solution

The community resolved this issue via PR #9041. The main improvements include:

  1. Introducing a Bounded Queue: Replacing LinkedBlockingQueue with a fixed-size ArrayBlockingQueue
  2. Configurable Queue Size: Adding a queue.size configuration parameter, allowing users to adjust as needed
  3. Safe Default Value: Setting DEFAULT_QUEUE_SIZE=1000 as the default queue capacity

Core implementation changes:

public class KafkaSource {
    private static final String QUEUE_SIZE_KEY = "queue.size";
    private static final int DEFAULT_QUEUE_SIZE = 1000;

    public SourceReader<SeaTunnelRow, KafkaSourceSplit> createReader(
            SourceReader.Context readerContext) {
        int queueSize = kafkaSourceConfig.getInt(QUEUE_SIZE_KEY, DEFAULT_QUEUE_SIZE);
        // Bounded queue: once capacity is reached, producers block,
        // applying backpressure instead of growing the heap.
        BlockingQueue<RecordsWithSplitIds<ConsumerRecord<byte[], byte[]>>> elementsQueue =
                new ArrayBlockingQueue<>(queueSize);
        // ...
    }
}
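On the user side, the new parameter is set in the source options of the job definition. The fragment below is a minimal sketch only; the bootstrap server address and topic name are placeholder values, and the exact option layout may differ between connector versions:

```
env {
  job.mode = "STREAMING"
  read_limit.rows_per_second = 1
}

source {
  Kafka {
    # Placeholder connection values for illustration
    bootstrap.servers = "kafka:9092"
    topic = "events"
    # Caps the in-memory buffer between the Kafka fetcher and the
    # reader; defaults to 1000 if omitted
    queue.size = 1000
  }
}
```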

Best Practice Recommendations

For users of the SeaTunnel Kafka connector, it is recommended to:

  1. Upgrade Version: Use the SeaTunnel version containing this fix
  2. Configure Properly: Set an appropriate queue.size value according to business needs and data characteristics
  3. Monitor Memory: Even with a bounded queue, monitor system memory usage
  4. Understand Rate Limiting: The read_limit.rows_per_second parameter applies to downstream processing, not Kafka consumption

Summary

This fix not only removed the memory overflow risk but also improved system stability and configurability. With a bounded queue and a configurable capacity, users can better control resource usage and avoid OOM caused by data backlog. The fix also reflects the virtuous cycle of an open-source community improving product quality through user feedback.
