Multithreading optimization techniques in C++
As computer technology has advanced and hardware performance has improved, multithreading has become an essential skill for modern programming. C++ is a classic programming language that provides many powerful multithreading facilities. This article introduces several multithreading optimization techniques in C++ to help readers apply them more effectively.
1. Use std::thread
C++11 introduced std::thread, bringing multithreading support directly into the standard library. Creating a new thread with std::thread is very simple: just pass a callable, such as a function pointer, lambda, or functor. For example:
#include <thread>
#include <iostream>

void hello() {
    std::cout << "Hello World!";
}

int main() {
    std::thread t(hello);
    t.join();
    return 0;
}
The above code creates a new thread t that executes the hello function, then waits for thread t to finish with join(). Note that creating and destroying a thread incurs some overhead, so std::thread should be used judiciously.
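As a minimal sketch of the same idea, std::thread also accepts lambdas and forwards any extra arguments to the callable; the names below (greet, t1, t2) are purely illustrative:

#include <thread>
#include <iostream>
#include <string>

// Illustrative helper: prints a greeting a given number of times.
void greet(const std::string& name, int times) {
    for (int i = 0; i < times; ++i) {
        std::cout << "Hello, " << name << "!\n";
    }
}

int main() {
    // Arguments after the callable are forwarded to it.
    std::thread t1(greet, "World", 2);

    // Lambdas also work, which avoids writing a separate function.
    std::thread t2([] { std::cout << "Hello from a lambda!\n"; });

    t1.join();
    t2.join();
    return 0;
}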
2. Use std::async
std::async is another convenient facility: it runs a function asynchronously and returns a std::future object. With std::async it is easier to manage asynchronous tasks and retrieve their results. For example:
#include <future>
#include <iostream>

int add(int a, int b) {
    return a + b;
}

int main() {
    auto async_result = std::async(add, 1, 2);
    std::cout << async_result.get();
    return 0;
}
The above code calls the add function asynchronously to compute 1 + 2 and uses the returned std::future object to retrieve the result. Note that when no launch policy is given, std::async uses std::launch::async | std::launch::deferred by default, which lets the implementation decide whether to run the function on a new thread or defer it. To guarantee execution on a new thread, pass std::launch::async explicitly. With the std::launch::deferred policy the function is executed only when std::future::get() (or wait()) is called, so the choice depends on the situation.
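A minimal sketch of specifying the launch policy explicitly (the square function here is only for illustration):

#include <future>
#include <iostream>

int square(int x) {
    return x * x;
}

int main() {
    // std::launch::async forces execution on a new thread immediately.
    auto eager = std::async(std::launch::async, square, 4);

    // std::launch::deferred runs the function lazily, on the calling thread,
    // only when get() or wait() is invoked.
    auto lazy = std::async(std::launch::deferred, square, 5);

    std::cout << eager.get() << "\n";  // 16
    std::cout << lazy.get() << "\n";   // 25, computed here
    return 0;
}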
3. Use std::condition_variable
In multithreaded programming, threads need to communicate and synchronize with each other, and std::condition_variable serves this purpose well. With std::condition_variable, one thread can wait until a condition set by another thread becomes true, achieving synchronization between threads. For example:
#include <condition_variable>
#include <mutex>
#include <thread>
#include <chrono>
#include <iostream>

std::mutex mutex;
std::condition_variable cv;
bool ready = false;

void consumer() {
    std::unique_lock<std::mutex> lock(mutex);
    // Block until the producer sets ready to true.
    cv.wait(lock, [] { return ready; });
    std::cout << "Consumer done." << std::endl;
}

void producer() {
    std::this_thread::sleep_for(std::chrono::seconds(1));
    {
        // Modify the shared flag under the same mutex the waiter uses.
        std::lock_guard<std::mutex> lock(mutex);
        ready = true;
    }
    std::cout << "Producer done." << std::endl;
    cv.notify_one();
}

int main() {
    std::thread t1(consumer);
    std::thread t2(producer);
    t1.join();
    t2.join();
    return 0;
}
The above code creates two threads, t1 and t2: t1 waits until the shared flag ready becomes true, while t2 sleeps for one second, sets ready to true under the lock, and then notifies t1. Note that std::condition_variable must be used together with a std::mutex, which protects the shared state (here, ready) that the waiting thread's predicate checks.
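When a waiting thread should not block forever, wait_for can add a timeout to the same pattern. The following is a hedged sketch under the same mutex/flag arrangement as above; the names m, ready, and signaller are illustrative:

#include <condition_variable>
#include <mutex>
#include <chrono>
#include <thread>
#include <iostream>

std::mutex m;
std::condition_variable cv;
bool ready = false;

int main() {
    std::thread signaller([] {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        {
            std::lock_guard<std::mutex> lock(m);
            ready = true;
        }
        cv.notify_one();
    });

    std::unique_lock<std::mutex> lock(m);
    // wait_for returns false if the timeout elapses before the predicate holds.
    if (cv.wait_for(lock, std::chrono::seconds(1), [] { return ready; })) {
        std::cout << "Signalled in time." << std::endl;
    } else {
        std::cout << "Timed out." << std::endl;
    }
    lock.unlock();

    signaller.join();
    return 0;
}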
4. Use the thread pool
When a large number of short-lived tasks must be created and run, a thread pool is often used to improve program performance. A thread pool maintains a fixed set of threads and manages the assignment and execution of tasks. Using a thread pool avoids the extra overhead of frequently creating and destroying threads while making full use of multi-core CPUs. For example:
#include <iostream>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <vector>
#include <queue>
#include <functional>
#include <future>
#include <memory>
#include <stdexcept>

class ThreadPool {
public:
    explicit ThreadPool(std::size_t numThreads = std::thread::hardware_concurrency()) {
        for (std::size_t i = 0; i < numThreads; ++i) {
            pool.emplace_back([this] {
                while (true) {
                    std::function<void()> task;
                    {
                        std::unique_lock<std::mutex> lock{ mutex };
                        // Sleep until there is work to do or the pool is shutting down.
                        condition.wait(lock, [this] { return stop || !tasks.empty(); });
                        if (stop && tasks.empty())
                            return;
                        task = std::move(tasks.front());
                        tasks.pop();
                    }
                    task();
                }
            });
        }
    }

    ~ThreadPool() {
        {
            std::unique_lock<std::mutex> lock{ mutex };
            stop = true;
        }
        condition.notify_all();
        for (auto& worker : pool) {
            worker.join();
        }
    }

    template <typename F, typename... Args>
    auto enqueue(F&& f, Args&&... args)
        -> std::future<typename std::result_of<F(Args...)>::type> {
        using return_type = typename std::result_of<F(Args...)>::type;
        auto task = std::make_shared<std::packaged_task<return_type()>>(
            std::bind(std::forward<F>(f), std::forward<Args>(args)...));
        std::future<return_type> future = task->get_future();
        {
            std::unique_lock<std::mutex> lock{ mutex };
            if (stop)
                throw std::runtime_error("enqueue on stopped ThreadPool");
            // Wrap the packaged_task so the queue holds a uniform void() callable.
            tasks.emplace([task]() { (*task)(); });
        }
        condition.notify_one();
        return future;
    }

private:
    std::vector<std::thread> pool;
    std::queue<std::function<void()>> tasks;
    std::mutex mutex;
    std::condition_variable condition;
    bool stop = false;
};

void hello() {
    std::cout << "Hello World!" << std::endl;
}

int add(int a, int b) {
    return a + b;
}

int main() {
    {
        ThreadPool pool;
        auto f1 = pool.enqueue(hello);
        auto f2 = pool.enqueue(add, 1, 2);
        std::cout << f2.get() << std::endl;
    }
    return 0;
}
The above code defines a ThreadPool class that owns several worker threads and a task queue. Each worker repeatedly takes a task from the queue and executes it, sleeping while the queue is empty and exiting when the pool is stopped. The ThreadPool::enqueue method adds a task to the queue and returns a std::future object through which the task's result can be retrieved.
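As a hedged usage sketch built on the ThreadPool class above (the square function and variable names are illustrative), many short-lived tasks can be submitted at once and their results collected through the returned futures:

// Assumes the ThreadPool class from the example above is in scope.
#include <iostream>
#include <vector>
#include <future>

int square(int x) {
    return x * x;
}

int main() {
    ThreadPool pool(4);  // four worker threads
    std::vector<std::future<int>> results;

    // Submit many short-lived tasks without creating one thread per task.
    for (int i = 0; i < 8; ++i) {
        results.push_back(pool.enqueue(square, i));
    }

    // Collect the results in submission order.
    for (auto& r : results) {
        std::cout << r.get() << ' ';
    }
    std::cout << std::endl;
    return 0;
}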
In general, C++ provides a variety of multithreading facilities that help developers exploit the performance of multi-core CPUs and manage threads and tasks more flexibly. Developers should apply these techniques appropriately to optimize program performance.