Ignite concurrent write

This article takes an in-depth look at the behavior of the different isolation levels under optimistic and pessimistic concurrency control, including transaction-local data caching, locking strategies, and their implementation mechanics. By contrasting READ_COMMITTED, REPEATABLE_READ, and SERIALIZABLE, it analyzes data correctness and performance in high-concurrency scenarios, and provides code examples showing how to ensure data consistency under the different concurrency modes.


Pessimistic concurrency:

Isolation level   | Data cached in transaction | Data locked
READ_COMMITTED    | No                         | First write operation
REPEATABLE_READ   | Yes                        | First read operation
SERIALIZABLE      | Yes                        | First read operation

Optimistic concurrency: the lock is only acquired at the prepare phase of the two-phase commit

Isolation level   | Data cached in transaction | Throws optimistic locking exception
READ_COMMITTED    | No                         | Never
REPEATABLE_READ   | Yes                        | Never
SERIALIZABLE      | Yes                        | On version conflict at commit time

Data cached locally means that subsequent reads inside the transaction are always served from the transaction-local copy; this is how repeatable read works. Without it, another transaction may modify the value in the cache between two reads, which can happen when the key is not locked on the first read.
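The effect of transaction-local caching can be illustrated with a minimal pure-Java simulation (this is not Ignite code; `TxView` and the map-backed cache are illustrative stand-ins):

```java
import java.util.HashMap;
import java.util.Map;

public class RepeatableReadDemo {
    // Simulates a transaction's view of a shared cache.
    static class TxView {
        private final Map<String, Integer> shared;
        private final Map<String, Integer> local = new HashMap<>();
        private final boolean cacheReads; // true ~ REPEATABLE_READ, false ~ READ_COMMITTED

        TxView(Map<String, Integer> shared, boolean cacheReads) {
            this.shared = shared;
            this.cacheReads = cacheReads;
        }

        Integer get(String key) {
            if (cacheReads && local.containsKey(key))
                return local.get(key);       // repeatable read: serve from the local copy
            Integer v = shared.get(key);     // otherwise hit the shared cache again
            if (cacheReads)
                local.put(key, v);           // remember the first read
            return v;
        }
    }

    public static void main(String[] args) {
        Map<String, Integer> cache = new HashMap<>();
        cache.put("k", 1);

        TxView rr = new TxView(cache, true);   // caches reads
        TxView rc = new TxView(cache, false);  // re-reads every time
        rr.get("k");
        rc.get("k");

        cache.put("k", 2);                     // a concurrent writer changes the value

        System.out.println(rr.get("k"));       // prints 1: served from the local copy
        System.out.println(rc.get("k"));       // prints 2: re-read sees the concurrent change
    }
}
```

The repeatable-read view keeps returning the value it saw first, while the read-committed view observes the concurrent modification between its two reads.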

Consider a simple read/increment/write pattern. How do we ensure data correctness under concurrent access? Here is a code snippet that gets a value from the cache, creates an entry if one does not exist, increases the value by one, and saves it back to the cache.

try (Transaction tx = ignite.transactions().txStart(
    transactionConcurrency, transactionIsolation)) {
  result = cache.get(key);       // read
  if (result == null) {
    result = new xxx(1);         // create entry if absent (xxx is the author's placeholder value class)
  }
  else {
    result.increment(1);         // increment
  }
  cache.put(key, result);        // write back
  tx.commit();
}

The code only works if <b>transactionConcurrency = PESSIMISTIC and transactionIsolation = REPEATABLE_READ/SERIALIZABLE</b>.
If we change the code pattern a bit:

try (Transaction tx = ignite.transactions().txStart(
    transactionConcurrency, transactionIsolation)) {
  result = cache.get(key);
  if (result != null) {
    result.increment(1);
  }
  else {
    result = new xxx(1);
    // getAndPutIfAbsent returns the existing value, or null if our value was stored
    existing = cache.getAndPutIfAbsent(key, result);
    if (existing != null) {
      // another transaction created the entry first; increment its value instead
      result = existing;
      result.increment(1);
    }
  }
  cache.put(key, result);
  tx.commit();
}

Yet this snippet still does not work at the lower isolation levels. getAndPutIfAbsent does ensure that only one record is created per key. However, if a value for the key already exists, the value read may be stale and not reflect the latest change; therefore, multiple writes to the same key overwrite each other and the whole operation is not atomic.

int retries = 0;
while (retries < retryCount) {
  try (Transaction tx = ignite.transactions().txStart(
      transactionConcurrency, transactionIsolation)) {
    result = cache.get(key);
    if (result == null) {
      result = new xxx(1);
    }
    else {
      result.increment(1);
    }
    cache.put(key, result);
    tx.commit();
    break;                     // committed successfully; stop retrying
  }
  catch (TransactionOptimisticException oe) {
    retries++;                 // version conflict at commit time; re-read and retry
  }
}

This code works if <b>transactionConcurrency = OPTIMISTIC and transactionIsolation = SERIALIZABLE</b>. As explained above, the version conflict check is done at commit time, and the thrown exception means we need to retrieve the latest value from the cache again and update it.

Optimistic serializable vs. pessimistic repeatable read
Pessimistic repeatable read:

  • Deadlocks can happen
  • Larger lock scope
  • Serializes data access if the key conflict rate is high; no network round trips wasted on retrying
  • Network trips depend on the number of keys the transaction spans
  • Locks prevent topology changes; consider setting transactionConfiguration.txTimeoutOnPartitionMapExchange
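The partition-map-exchange timeout can be set on the node configuration. A sketch assuming Ignite's Java configuration API (the 20-second value is purely illustrative; this fragment needs ignite-core on the classpath):

```java
// Bound how long ongoing transactions may block a partition map exchange.
TransactionConfiguration txCfg = new TransactionConfiguration();
txCfg.setTxTimeoutOnPartitionMapExchange(20_000); // milliseconds; illustrative value

IgniteConfiguration igniteCfg = new IgniteConfiguration();
igniteCfg.setTransactionConfiguration(txCfg);

Ignite ignite = Ignition.start(igniteCfg);
```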

Optimistic serializable:

  • Deadlock free
  • Smaller lock scope
  • Retries might degrade performance if the key conflict rate is high, and the retry upper limit is hard to set
  • Network trips depend only on the number of nodes the transaction spans
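The retry-budget concern can be factored into a small helper. A minimal pure-Java sketch, where `ConflictException` stands in for Ignite's TransactionOptimisticException and the helper's name and shape are illustrative:

```java
import java.util.concurrent.Callable;

public class OptimisticRetry {
    // Hypothetical stand-in for Ignite's TransactionOptimisticException.
    static class ConflictException extends RuntimeException {}

    // Run `work` until it succeeds, retrying on version conflicts up to `maxRetries` times.
    static <T> T withRetry(Callable<T> work, int maxRetries) throws Exception {
        for (int attempt = 0; ; attempt++) {
            try {
                return work.call();
            }
            catch (ConflictException e) {
                if (attempt >= maxRetries)
                    throw e; // retry budget exhausted; conflict rate is too high
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Simulate a transaction that conflicts twice, then commits.
        final int[] conflictsLeft = {2};
        int value = withRetry(() -> {
            if (conflictsLeft[0]-- > 0)
                throw new ConflictException();
            return 42;
        }, 5);
        System.out.println(value); // prints 42 after two retries
    }
}
```

Wrapping the whole txStart/commit block in such a helper keeps the retry policy in one place, but the hard question from the bullet above remains: picking `maxRetries` when the conflict rate is unknown.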

Some simple test results:
Round 1, 100 concurrent updates on the same key:
transactionConcurrency = OPTIMISTIC and transactionIsolation = SERIALIZABLE with retry
max execution time: 400ms avg execution time: 50ms

transactionConcurrency = PESSIMISTIC and transactionIsolation = REPEATABLE_READ
max execution time: 120ms avg execution time: 18ms

Round 2, 100 concurrent updates on 100 different keys:
transactionConcurrency = OPTIMISTIC and transactionIsolation = SERIALIZABLE with retry
max execution time: 125ms avg execution time: 12ms

transactionConcurrency = PESSIMISTIC and transactionIsolation = REPEATABLE_READ
max execution time: 125ms avg execution time: 14ms

It is worth noting that key-conflict retries significantly increase transaction execution time in the extreme case under optimistic concurrency, while for pessimistic concurrency the impact is much less noticeable.

Other things to note:
Ignite supports SQL update. Eg. update table set count = count+1 where key = xxx. However unlike traditional relational DB, it is possible to throw concurrency exception here if the same key is updated simultaneously. Application code has to do due diligence to catch exception and retry. Optimistic or pessimistic concurrency of cache has no impact on SQL update here.

The official documentation encourages using putAll for multi-key updates and ordering the keys according to their partitions. Doing so allows the locks for multiple keys within the same partition to be acquired in a single operation and may significantly reduce network round trips.
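The ordering idea can be sketched in plain Java: build the batch in a TreeMap so every transaction iterates, and therefore locks, the keys in the same ascending order; only the final cache.putAll call is Ignite-specific (the payload values here are illustrative):

```java
import java.util.Map;
import java.util.TreeMap;

public class SortedBatchPut {
    // Build a batch whose keys iterate in a deterministic (sorted) order.
    // Handing such a map to cache.putAll(...) makes concurrent batch writers
    // lock keys in the same order, which avoids lock-ordering deadlocks.
    static Map<Integer, Integer> sortedBatch(int[] keys) {
        Map<Integer, Integer> batch = new TreeMap<>();
        for (int k : keys)
            batch.put(k, k * 10); // hypothetical payload
        return batch;
    }

    public static void main(String[] args) {
        Map<Integer, Integer> batch = sortedBatch(new int[] {42, 7, 19});
        System.out.println(batch.keySet()); // prints [7, 19, 42]
        // In Ignite this would be: cache.putAll(batch);
    }
}
```

Grouping keys by partition (via ignite.affinity(cacheName).partition(key)) before sorting would additionally let Ignite batch the lock requests per partition, per the recommendation above.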

References:
https://apacheignite.readme.io/docs/transactions#optimistic-transactions
https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Key-Value+Transactions+Architecture

Reposted from: https://blog.51cto.com/shadowisper/2292337
