Java lock concurrency, lock-free concurrency and CAS example analysis

Locked concurrency

For most programmers (and I am basically one of them), concurrent programming is almost synonymous with adding a lock (a Mutex) to the relevant data structure. For example, if we need a stack that supports concurrency, the easiest way is to wrap a single-threaded stack in std::sync::Mutex (plus an Arc, so that multiple threads can share ownership of the stack):

use std::sync::{Mutex, Arc};

#[derive(Clone)]
struct ConcurrentStack<T> {
    inner: Arc<Mutex<Vec<T>>>,
}

impl<T> ConcurrentStack<T> {
    pub fn new() -> Self {
        ConcurrentStack {
            inner: Arc::new(Mutex::new(Vec::new())),
        }
    }

    pub fn push(&self, data: T) {
        let mut inner = self.inner.lock().unwrap();
        (*inner).push(data);
    }

    pub fn pop(&self) -> Option<T> {
        let mut inner = self.inner.lock().unwrap();
        (*inner).pop()
    }
}

The code is very convenient to write, because it is almost identical to the single-threaded version, which is obviously a benefit: you only need to write the single-threaded version, add a lock to the data structure, and acquire and release the lock (basically automatic in Rust) where necessary.
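As an illustration only (the thread count and the integer payload below are my own assumptions, not part of the original article), a minimal sketch of sharing such a stack between threads might look like this; every clone of the handle points at the same Arc<Mutex<Vec<T>>>, and each push and pop takes the lock:

use std::thread;

fn main() {
    let stack = ConcurrentStack::new();

    // each worker thread gets its own clone of the handle;
    // all clones share the same Arc<Mutex<Vec<i32>>>
    let handles: Vec<_> = (0..4)
        .map(|i| {
            let s = stack.clone();
            thread::spawn(move || s.push(i))
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    // drain the stack on the main thread; every pop acquires the lock again
    while let Some(v) = stack.pop() {
        println!("{}", v);
    }
}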

So what's the problem? First, setting aside the fact that you might forget to acquire or release the lock (which, thanks to Rust, is almost impossible here), you may run into deadlock (think of the Dining Philosophers Problem). Low-priority tasks can also hold a lock that high-priority tasks need for a long time, because the lock simply goes to whoever asks for it first. And when the number of threads is large, most of the time is spent on synchronization (waiting to acquire the lock), so performance becomes very poor. Consider a concurrent database with a large number of reads and only occasional writes. If it is protected by a lock, then even when the database has no updates at all, any two reads still have to synchronize with each other, which is far too costly!

Lock-free concurrency

As a result, a large number of computer scientists and programmers have turned their attention to lock-free concurrency. A shared object is lock-free if it guarantees that, no matter what the other threads do, some thread will always complete an operation on it within a finite number of system steps [Her91]. In other words, at least one thread always makes progress. Concurrency using locks obviously does not fall into this category: if the thread holding the lock is delayed, no thread can complete any operation during that time, and in the extreme case of a deadlock, no thread can ever complete any operation.

CAS (compare and swap) primitive

You may now be curious: how is lock-free concurrency actually achieved, and are there any examples? Before that, let's look at an atomic primitive that is widely recognized as central to lock-free concurrency: CAS (compare and swap). CAS compares a stored value with an expected value, and only if the two are equal does it overwrite the stored value with a new one. CAS is an atomic operation, supported directly by the processor (for example, the compare-and-exchange instruction CMPXCHG on x86). This atomicity guarantees that if another thread has changed the stored value in the meantime, the write fails. In the Rust standard library, the types in std::sync::atomic provide CAS operations; for example, the atomic pointer std::sync::atomic::AtomicPtr has:

pub fn compare_and_swap(
    &self,
    current: *mut T,
    new: *mut T,
    order: Ordering
) -> *mut T

(For now, don't worry about what the Ordering argument means; please just ignore Acquire, Release, and Relaxed.)
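To make the CAS semantics concrete, here is a minimal sketch using std::sync::atomic::AtomicUsize (chosen purely for illustration; it is not part of the original example, and newer Rust versions deprecate compare_and_swap in favour of compare_exchange, although the behaviour shown is the same): the write only happens when the stored value still equals the expected one.

use std::sync::atomic::{AtomicUsize, Ordering::Relaxed};

fn main() {
    let value = AtomicUsize::new(5);

    // succeeds: the stored value (5) equals the expected value, so it becomes 10;
    // compare_and_swap returns the previous stored value
    assert_eq!(value.compare_and_swap(5, 10, Relaxed), 5);
    assert_eq!(value.load(Relaxed), 10);

    // fails: the stored value is now 10, not 5, so nothing is written
    assert_eq!(value.compare_and_swap(5, 42, Relaxed), 10);
    assert_eq!(value.load(Relaxed), 10);
}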

Lock-free stack (naive version)

#![feature(box_raw)]

use std::ptr::{self, null_mut};
use std::sync::atomic::AtomicPtr;
use std::sync::atomic::Ordering::{Relaxed, Release, Acquire};

pub struct Stack<T> {
    head: AtomicPtr<Node<T>>,
}

struct Node<T> {
    data: T,
    next: *mut Node<T>,
}

impl<T> Stack<T> {
    pub fn new() -> Stack<T> {
        Stack {
            head: AtomicPtr::new(null_mut()),
        }
    }

    pub fn pop(&self) -> Option<T> {
        loop {
            // take a snapshot of the head
            let head = self.head.load(Acquire);

            // the stack is empty
            if head == null_mut() {
                return None
            } else {
                let next = unsafe { (*head).next };

                // if the current state still matches the snapshot
                if self.head.compare_and_swap(head, next, Release) == head {

                    // read the data and return it
                    return Some(unsafe { ptr::read(&(*head).data) })
                }
            }
        }
    }

    pub fn push(&self, t: T) {
        // create the node and turn it into a raw *mut pointer
        let n = Box::into_raw(Box::new(Node {
            data: t,
            next: null_mut(),
        }));
        loop {
            // take a snapshot of the head
            let head = self.head.load(Relaxed);

            // link the new node in front of the snapshot
            unsafe { (*n).next = head; }

            // if the snapshot has not become stale in the meantime
            if self.head.compare_and_swap(head, n, Release) == head {
                break
            }
        }
    }
}

We can see that the idea is the same for both pop and push: first take a snapshot of the head, perform the pop or push against that snapshot, and then try to install the result with CAS. If the snapshot still equals the current value, no write happened in the meantime, so the update succeeds. If they differ, another thread modified the stack in the meantime and we have to start over. This is a lock-free stack. It seems everything is done!
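As a rough usage sketch (the Arc wrapper, thread count, and integer payload are assumptions added here for illustration, not part of the original article), the lock-free stack can be shared between threads just like the locked one, except that contention is resolved by retrying the CAS loop instead of blocking on a lock:

use std::sync::Arc;
use std::thread;

fn main() {
    let stack = Arc::new(Stack::new());

    let handles: Vec<_> = (0..4)
        .map(|i| {
            let s = Arc::clone(&stack);
            // each thread pushes a value and then tries to pop one
            thread::spawn(move || {
                s.push(i);
                s.pop()
            })
        })
        .collect();

    for h in handles {
        println!("{:?}", h.join().unwrap());
    }
}

(As the next section explains, every successful pop in this version leaks the popped node.)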

Memory Release

If you were writing this in Java or another language with a garbage collector, you would already be done. The problem is that in a language like Rust, which has no GC, nobody ever frees the node that head points to after

return Some(unsafe { ptr::read(&(*head).data) })

in pop. This is a memory leak! It seems that lock-free concurrency is not so easy after all.
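To see why the leak cannot be fixed by simply freeing the node, here is a hedged sketch (my own addition, not part of the original article, assuming it sits in the same file as the Stack definition above) of what a naive reclamation attempt would look like; the comments explain why it is unsound.

// NAIVE AND UNSOUND: shown only to illustrate the problem, do not use.
impl<T> Stack<T> {
    pub fn pop_unsound(&self) -> Option<T> {
        loop {
            let head = self.head.load(Acquire);
            if head == null_mut() {
                return None;
            }
            let next = unsafe { (*head).next };
            if self.head.compare_and_swap(head, next, Release) == head {
                // Take ownership of the node so it is freed when `node` is dropped.
                // This is unsound: another thread may have loaded the same `head`
                // as its snapshot just before our CAS and may still dereference it
                // afterwards, so freeing here can cause a use-after-free in that
                // thread. This is why lock-free structures need a safe memory
                // reclamation scheme rather than an immediate free.
                let node = unsafe { Box::from_raw(head) };
                return Some(node.data);
            }
        }
    }
}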
