
High-performance python programming coroutines (stackless)

高洛峰 (Original)
2016-10-18 10:05:10

We all know that there are currently four main approaches to concurrent (not parallel) programming: multi-process, multi-thread, asynchronous I/O, and coroutines.

For multi-process programming, Python provides os.fork (a thin wrapper over the C fork call) as well as the higher-level multiprocessing standard library. The high-availability Python programming method I wrote about earlier (http://www.cnblogs.com/hymenz/p/3488837.html) provides signal handling between a master process and worker processes, similar to nginx, ensuring that the main process is notified when a business process exits.
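As a minimal sketch of the multiprocessing API (written in Python 3 syntax; the squaring worker is just a made-up example task):

```python
from multiprocessing import Process, Queue

def worker(q, n):
    # Each worker runs in its own OS process and reports its result
    # back through a multiprocessing.Queue.
    q.put(n * n)

if __name__ == '__main__':
    q = Queue()
    procs = [Process(target=worker, args=(q, i)) for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(sorted(q.get() for _ in range(4)))  # [0, 1, 4, 9]
```

Because each worker is a full process, results must travel through an IPC channel such as the queue above; ordinary shared variables are not shared across fork boundaries.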

For multi-thread programming, Python provides the thread and threading modules. Under Linux, a so-called thread is actually an LWP (lightweight process), which is scheduled by the kernel in the same way as a process. There is plenty of material elsewhere on LWPs, COW (copy-on-write), fork, vfork, clone, and so on, so I won't go into details here.
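A minimal threading sketch (Python 3 syntax; the shared counter is a made-up example) shows the classic cost of kernel-scheduled threads: shared state must be protected explicitly.

```python
import threading

counter = 0
lock = threading.Lock()

def add():
    # += on a shared variable is not atomic, so each increment
    # must be protected by the lock.
    global counter
    for _ in range(10000):
        with lock:
            counter += 1

threads = [threading.Thread(target=add) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```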

Asynchronous I/O multiplexing under Linux has three main implementations: select, poll, and epoll. Asynchronous programming is not the focus of this article.
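For completeness, here is a minimal select example (Python 3 syntax, using a local socket pair in place of real network connections):

```python
import select
import socket

# A connected socket pair stands in for any pair of file descriptors.
r, w = socket.socketpair()
w.send(b'ping')

# select blocks until at least one descriptor is readable
# (or the 1-second timeout expires).
readable, _, _ = select.select([r], [], [], 1.0)
for sock in readable:
    print(sock.recv(4))  # b'ping'
r.close()
w.close()
```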

When talking about coroutines, we must talk about yield. Let’s look at an example first:

#coding=utf-8
import time
import sys
# Producer
def produce(l):
    i=0
    while 1:
        if i < 5:
            l.append(i)
            yield i
            i=i+1
            time.sleep(1)
        else:
            return
      
# Consumer
def consume(l):
    p = produce(l)
    while 1:
        try:
            p.next()
            while len(l) > 0:
                print l.pop()
        except StopIteration:
            sys.exit(0)
l = []
consume(l)

In the above example, calling produce(l) returns a generator; execution inside produce runs up to yield i and then suspends. Each time we call p.next() in consume(), the program resumes at the yield i in produce and runs until the next yield, so one element is appended to l; we then print l.pop(), repeating until p.next() raises a StopIteration exception.
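The example above is written in Python 2 (print statement, p.next()). The same code in Python 3 syntax, with sys.exit(0) replaced by return so the snippet can be embedded in a larger program, would be:

```python
import time

# Producer
def produce(l):
    i = 0
    while 1:
        if i < 5:
            l.append(i)
            yield i          # suspend; control returns to the caller
            i = i + 1
            time.sleep(1)
        else:
            return

# Consumer
def consume(l):
    p = produce(l)
    while 1:
        try:
            next(p)          # Python 2's p.next() became next(p)
            while len(l) > 0:
                print(l.pop())
        except StopIteration:
            return           # the original used sys.exit(0)

l = []
consume(l)
```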

Through the above example, we can see that the scheduling of coroutines is invisible to the kernel: coroutines are scheduled cooperatively, in user space, switching only where the code itself yields. This is why coroutines can greatly outperform threads when concurrency reaches the tens of thousands.
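A toy round-robin scheduler (my sketch, not from the original article) makes the "cooperative" part concrete: each task runs until it voluntarily yields, and the kernel never sees the switch.

```python
from collections import deque

def task(name, steps, log):
    # Each task yields control back to the scheduler after every step.
    for i in range(steps):
        log.append(f'{name}:{i}')
        yield

def run(tasks):
    # Round-robin: pop a task, advance it one step, requeue it
    # unless it has finished (raised StopIteration).
    queue = deque(tasks)
    while queue:
        t = queue.popleft()
        try:
            next(t)
            queue.append(t)
        except StopIteration:
            pass

log = []
run([task('A', 2, log), task('B', 2, log)])
print(log)  # ['A:0', 'B:0', 'A:1', 'B:1']
```

Switching here is just a generator resume, orders of magnitude cheaper than a kernel thread context switch.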

import stackless
import urllib2

# Consumer tasklet: receive a URL from the channel and fetch it
def output():
    while 1:
        url = chan.receive()
        print url
        f = urllib2.urlopen(url)
        #print f.read()
        print stackless.getcurrent()

# Producer tasklet: send each URL in url.txt over the channel
def input():
    f = open('url.txt')
    l = f.readlines()
    for i in l:
        chan.send(i)

chan = stackless.channel()
[stackless.tasklet(output)() for i in xrange(10)]
stackless.tasklet(input)()
stackless.run()
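Stackless is a separate Python build that most readers won't have installed. As a rough standard-library analogue (my sketch, in Python 3 asyncio; the URL fetching is replaced by collecting the received strings so it stays self-contained), an asyncio.Queue can play the role of the channel:

```python
import asyncio

async def output(chan, results):
    # Consumer task: receive items from the channel until a None sentinel.
    while True:
        url = await chan.get()
        if url is None:
            break
        results.append(url.strip())

async def producer(chan, urls, n_workers):
    # Producer task: send every URL, then one sentinel per worker.
    for u in urls:
        await chan.put(u)
    for _ in range(n_workers):
        await chan.put(None)

async def main():
    chan = asyncio.Queue()   # plays the role of stackless.channel()
    results = []
    workers = [asyncio.create_task(output(chan, results)) for _ in range(10)]
    await producer(chan, ['http://a\n', 'http://b\n'], 10)
    await asyncio.gather(*workers)
    return results

print(asyncio.run(main()))
```

One difference worth noting: stackless.channel() is a rendezvous (the sender blocks until a receiver takes the item), whereas an unbounded asyncio.Queue never blocks the sender; passing maxsize=1 gets closer to the channel behavior.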

For more on coroutines, you can refer to implementations such as greenlet, stackless, gevent, and eventlet.
