How to handle concurrent access when using C++ STL?
When shared data structures are accessed concurrently, the C++ STL provides mechanisms to handle data races: a mutex allows only one thread to access shared data at a time; a read-write lock allows multiple threads to read simultaneously but only one thread to write; atomic operations let simple operations, such as incrementing a counter, be performed without locks.
How to use C++ STL to handle concurrent access
In concurrent programming, unsynchronized access to shared data structures may lead to data races and program crashes. The C++ Standard Template Library (STL) provides powerful mechanisms for handling such scenarios.
Mutex (mutual exclusion lock)
A mutex is a lightweight lock that allows only one thread to access shared data at a time. The following is an example of using a mutex to protect a std::vector:
#include <iostream>
#include <mutex>
#include <vector>

std::mutex vector_mutex;
std::vector<int> shared_vector;

void thread_function() {
    std::lock_guard<std::mutex> lock(vector_mutex);
    // Access shared_vector, knowing it cannot be accessed concurrently by other threads
}
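As a minimal, self-contained usage sketch (not part of the original example), the worker below appends to the protected vector from several std::thread instances; the thread count and the values pushed are illustrative assumptions.

#include <mutex>
#include <thread>
#include <vector>

std::mutex vector_mutex;
std::vector<int> shared_vector;

// Each worker appends one value; lock_guard releases the mutex automatically.
void worker(int value) {
    std::lock_guard<std::mutex> lock(vector_mutex);
    shared_vector.push_back(value);
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i) {
        threads.emplace_back(worker, i);
    }
    for (auto& t : threads) {
        t.join();
    }
    // shared_vector now holds 4 elements in some interleaved order.
}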
Read-write lock
A read-write lock allows multiple threads to read shared data simultaneously, but only one thread to write. The following is an example of using a read-write lock (std::shared_mutex) to protect a std::map:
#include <iostream>
#include <shared_mutex>
#include <map>
#include <string>

std::shared_mutex map_mutex;
std::map<std::string, int> shared_map;

void reader_thread_function() {
    std::shared_lock<std::shared_mutex> lock(map_mutex);
    // Read from shared_map
}

void writer_thread_function() {
    std::unique_lock<std::shared_mutex> lock(map_mutex);
    // Write to shared_map
}
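Below is a minimal, self-contained sketch of how such reader and writer functions might be driven from threads. The "hits" key and the particular thread layout are illustrative assumptions, not part of the original example.

#include <iostream>
#include <map>
#include <shared_mutex>
#include <string>
#include <thread>

std::shared_mutex map_mutex;
std::map<std::string, int> shared_map;

// Writer takes an exclusive lock before modifying the map.
void writer() {
    std::unique_lock<std::shared_mutex> lock(map_mutex);
    shared_map["hits"] += 1;
}

// Readers take a shared lock, so several can run at the same time.
void reader() {
    std::shared_lock<std::shared_mutex> lock(map_mutex);
    auto it = shared_map.find("hits");
    if (it != shared_map.end()) {
        std::cout << it->second << '\n';
    }
}

int main() {
    std::thread w(writer);
    std::thread r1(reader);
    std::thread r2(reader);
    w.join();
    r1.join();
    r2.join();
}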
Atomic operations
For simple operations, such as incrementing or decrementing a counter, we can use atomic operations without locks. The following is an example of using an atomic operation to update an int:
#include <atomic>

std::atomic<int> shared_counter{0};

void thread_function() {
    shared_counter.fetch_add(1);  // Atomically increments the counter; no lock needed
}
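A short, self-contained sketch of the counter in action follows. The thread count, the iteration count, and the use of std::memory_order_relaxed are assumptions chosen for illustration; the final value is deterministic because every increment is atomic.

#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

std::atomic<int> shared_counter{0};

int main() {
    std::vector<std::thread> threads;
    // Four threads each add 1000 to the counter without any lock.
    for (int i = 0; i < 4; ++i) {
        threads.emplace_back([] {
            for (int j = 0; j < 1000; ++j) {
                shared_counter.fetch_add(1, std::memory_order_relaxed);
            }
        });
    }
    for (auto& t : threads) {
        t.join();
    }
    std::cout << shared_counter.load() << '\n';  // Always prints 4000.
}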
Practical case
The following is a real-world case that uses these C++ STL concurrency mechanisms:
Concurrent access to a shared cache in a web service
Problem: A web service uses a std::unordered_map as a cache, and multiple threads access the cache at the same time.
Solution: Use a read-write lock to protect the std::unordered_map. This allows multiple threads to read the cache simultaneously while only one thread at a time updates it, thus avoiding data races. A sketch of this approach is shown below.
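The following is a minimal sketch of such a cache, assuming a hypothetical SharedCache class with string keys and values; the class name and its get/put methods are illustrative choices, not from the original article.

#include <optional>
#include <shared_mutex>
#include <string>
#include <unordered_map>

// Hypothetical thread-safe cache wrapping std::unordered_map with a
// std::shared_mutex: many concurrent readers, one writer at a time.
class SharedCache {
public:
    std::optional<std::string> get(const std::string& key) const {
        std::shared_lock<std::shared_mutex> lock(mutex_);  // shared: readers don't block each other
        auto it = map_.find(key);
        if (it == map_.end()) return std::nullopt;
        return it->second;
    }

    void put(const std::string& key, const std::string& value) {
        std::unique_lock<std::shared_mutex> lock(mutex_);  // exclusive: blocks readers and writers
        map_[key] = value;
    }

private:
    mutable std::shared_mutex mutex_;
    std::unordered_map<std::string, std::string> map_;
};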