
Introduction to concurrent future module in Python (code)

不言 · Original · 2018-08-30 09:55:40

This article introduces Python's concurrent.futures module with code examples. It is intended as a practical reference; I hope you find it helpful.

concurrent.futures module

The main features of this module are the ThreadPoolExecutor and ProcessPoolExecutor classes. Both inherit from the concurrent.futures._base.Executor class. The interface they implement lets you execute callable objects in separate threads or processes, and each class maintains an internal pool of worker threads or processes.

ThreadPoolExecutor and ProcessPoolExecutor are high-level classes: in most cases you only need to know how to use them, without worrying about their implementation details.
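
As a first taste (this snippet is not from the original article, and square() is just a placeholder task), submit() hands a callable to the pool and immediately returns a Future whose result can be collected later:

from concurrent import futures


def square(x):
    # Trivial placeholder task.
    return x * x


with futures.ThreadPoolExecutor(max_workers=2) as executor:
    future = executor.submit(square, 7)   # returns a Future immediately
    print(future.done())                  # may still be False right after submission
    print(future.result())                # blocks until the task finishes -> 49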

ProcessPoolExecutor class

The full interface can be inspected with help(ProcessPoolExecutor); it is essentially the same as the ThreadPoolExecutor output reproduced in the next section, the main difference being the constructor, which takes max_workers but no thread_name_prefix.
When creating a ProcessPoolExecutor you can pass max_workers, the maximum number of worker processes. It usually does not need to be specified: the default is the number of CPU cores of the machine the code runs on, as returned by os.cpu_count(). The class provides the following methods:
  1. map(): similar to Python's built-in map(), i.e. it maps a callable over one or more iterables. Its parameters are:
  • fn: a callable function
  • *iterables: one or more iterables
  • timeout: the maximum number of seconds to wait for results
  • chunksize: if greater than 1, the iterables are split into chunks of this size before being handed to the worker processes

---->> Note two characteristics of map(): results are returned in the same order in which the calls were started, and submitting the calls does not block, so a later call may already have finished before an earlier one is consumed.

If you want to handle each result as soon as it is ready, regardless of submission order, combine the submit() method with the futures.as_completed() function instead (a sketch of this follows the snippet below).
  2. shutdown(): cleans up all resources associated with the executor.
  3. submit(fn, *args, **kwargs): schedules the callable fn for execution and returns a Future.
  4. __enter__() and __exit__() are inherited from concurrent.futures._base.Executor, which means a ProcessPoolExecutor can be used in a with statement:

from concurrent import futures

with futures.ProcessPoolExecutor(max_workers=3) as executor:
    results = executor.map(fn, iterable)  # fn and iterable stand in for your callable and data
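
The submit()/as_completed() combination mentioned above could look like the following sketch; cpu_bound() and its inputs are illustrative placeholders, not part of the original article:

from concurrent import futures


def cpu_bound(n):
    # Placeholder CPU-heavy task: sum of squares below n.
    return sum(i * i for i in range(n))


def main():
    with futures.ProcessPoolExecutor(max_workers=3) as executor:
        # submit() returns a Future for each call without blocking.
        future_to_n = {executor.submit(cpu_bound, n): n for n in (10000, 50000, 100000)}
        # as_completed() yields each Future as soon as it finishes,
        # regardless of the order in which the calls were submitted.
        for future in futures.as_completed(future_to_n):
            print(future_to_n[future], '->', future.result())


if __name__ == '__main__':
    main()  # the __main__ guard matters for ProcessPoolExecutor on Windows/macOS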

ThreadPoolExecutor class

class ThreadPoolExecutor(concurrent.futures._base.Executor)
 |  This is an abstract base class for concrete asynchronous executors.
 |
 |  Method resolution order:
 |      ThreadPoolExecutor
 |      concurrent.futures._base.Executor
 |      builtins.object
 |
 |  Methods defined here:
 |
 |  __init__(self, max_workers=None, thread_name_prefix='')
 |      Initializes a new ThreadPoolExecutor instance.
 |
 |      Args:
 |          max_workers: The maximum number of threads that can be used to
 |              execute the given calls.
 |          thread_name_prefix: An optional name prefix to give our threads.
 |
 |  shutdown(self, wait=True)
 |      Clean-up the resources associated with the Executor.
 |
 |      It is safe to call this method several times. Otherwise, no other
 |      methods can be called after this one.
 |
 |      Args:
 |          wait: If True then shutdown will not return until all running
 |              futures have finished executing and the resources used by the
 |              executor have been reclaimed.
 |
 |  submit(self, fn, *args, **kwargs)
 |      Submits a callable to be executed with the given arguments.
 |
 |      Schedules the callable to be executed as fn(*args, **kwargs) and returns
 |      a Future instance representing the execution of the callable.
 |
 |      Returns:
 |          A Future representing the given call.
 |
 |  ----------------------------------------------------------------------
 |  Methods inherited from concurrent.futures._base.Executor:
 |
 |  __enter__(self)
 |
 |  __exit__(self, exc_type, exc_val, exc_tb)
 |
 |  map(self, fn, *iterables, timeout=None, chunksize=1)
 |      Returns an iterator equivalent to map(fn, iter).
 |
 |      Args:
 |          fn: A callable that will take as many arguments as there are
 |              passed iterables.
 |          timeout: The maximum number of seconds to wait. If None, then there
 |              is no limit on the wait time.
 |          chunksize: The size of the chunks the iterable will be broken into
 |              before being passed to a child process. This argument is only
 |              used by ProcessPoolExecutor; it is ignored by
 |              ThreadPoolExecutor.
 |
 |      Returns:
 |          An iterator equivalent to: map(func, *iterables) but the calls may
 |          be evaluated out-of-order.
 |
 |      Raises:
 |          TimeoutError: If the entire result iterator could not be generated
 |              before the given timeout.
 |          Exception: If fn(*args) raises for any values.
ThreadPoolExecutor is very similar to ProcessPoolExecutor; the difference is that one manages a pool of threads and the other a pool of processes. As a rule of thumb, threads suit I/O-bound work while processes suit CPU-bound work, so choose according to the actual workload.
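
Since both classes expose the same Executor interface, switching between them is just a matter of which class you instantiate. A minimal sketch (the work() function and its inputs are placeholders, not from the original article):

from concurrent import futures


def work(n):
    # Placeholder task; replace with real CPU- or I/O-bound work.
    return n * n


def run(executor_cls):
    # The calling code is identical for both pool types.
    with executor_cls(max_workers=4) as executor:
        return list(executor.map(work, range(8)))


if __name__ == '__main__':
    print(run(futures.ThreadPoolExecutor))   # better suited to I/O-bound tasks
    print(run(futures.ProcessPoolExecutor))  # better suited to CPU-bound tasks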

Example

from time import sleep, strftime
from concurrent import futures


def display(*args):
    # Print a timestamp followed by the given arguments.
    print(strftime('[%H:%M:%S]'), end="")
    print(*args)


def loiter(n):
    # Sleep for n seconds, indenting the output with n tabs,
    # then return n * 10 so the result is easy to spot.
    msg = '{}loiter({}): doing nothing for {}s'
    display(msg.format('\t'*n, n, n))
    sleep(n)
    msg = '{}loiter({}): done.'
    display(msg.format('\t'*n, n))
    return n*10


def main():
    display('Script starting')
    executor = futures.ThreadPoolExecutor(max_workers=3)
    results = executor.map(loiter, range(5))  # returns a generator immediately
    display('results:', results)
    display('Waiting for individual results:')
    for i, result in enumerate(results):
        display('result {} : {}'.format(i, result))


if __name__ == '__main__':
    main()
Output:

[20:32:12]Script starting
[20:32:12]loiter(0): doing nothing for 0s
[20:32:12]loiter(0): done.
[20:32:12]      loiter(1): doing nothing for 1s
[20:32:12]              loiter(2): doing nothing for 2s
[20:32:12]results: <generator object Executor.map.<locals>.result_iterator at 0x00000246DB21BC50>
[20:32:12]Waiting for individual results:
[20:32:12]                      loiter(3): doing nothing for 3s
[20:32:12]result 0 : 0
[20:32:13]      loiter(1): done.
[20:32:13]                              loiter(4): doing nothing for 4s
[20:32:13]result 1 : 10
[20:32:14]              loiter(2): done.
[20:32:14]result 2 : 20
[20:32:15]                      loiter(3): done.
[20:32:15]result 3 : 30
[20:32:17]                              loiter(4): done.
[20:32:17]result 4 : 40
The output may differ from run to run and from machine to machine.

In the example max_workers=3, so only three tasks can run at once: loiter(0), loiter(1) and loiter(2) start as soon as the script begins. loiter(0) sleeps for 0 seconds and finishes immediately, which frees a worker for loiter(3); likewise, loiter(4) can only start once loiter(1) has finished, about a second later. executor.map() returns a generator right away (hence the generator object printed in the output) and yields results in the order the calls were submitted, so the loop prints result 0 first and then blocks until result 1, result 2, and so on become available.
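
Iteration over the results can also be bounded in time with map()'s timeout argument, documented in the help output above. A minimal sketch, assuming a 2-second deadline and a placeholder slow() task in place of loiter():

from concurrent import futures
from time import sleep


def slow(n):
    # Placeholder task that sleeps n seconds, like loiter() above.
    sleep(n)
    return n * 10


with futures.ThreadPoolExecutor(max_workers=3) as executor:
    results = executor.map(slow, range(5), timeout=2)
    try:
        for result in results:
            print('result:', result)
    except futures.TimeoutError:
        # map() measures the timeout from the moment it was called; the
        # iterator raises as soon as the next result misses that deadline.
        print('gave up waiting after 2 seconds')
    # Leaving the with block still waits for the remaining tasks to finish.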
