
5 tips and 2 misunderstandings about using Python containers

爱喝马黛茶的安东尼 · 2019-09-24 17:51


The word "container" is rarely mentioned in Python technical articles. When people see "container", most of them think of the little blue whale, Docker, but this article has nothing to do with it. Here, a container is an abstract Python concept: a general term for the data types used specifically to hold other objects.

In Python, there are four most common built-in container types: list, tuple, dictionary, and set. By using them individually or in combination, many things can be accomplished efficiently.

The internal implementation of the Python language itself is also closely tied to these container types. For example, class instance attributes and module-level global variables (globals()) are both stored in dictionaries.

In this article, I will first start from the definition of container types and try to summarize some best practices for daily coding. Then I will share some programming tips around the special functions provided by each container type.

When we talk about containers, what are we talking about?

I gave a simple definition of "container" earlier: a container is a type used specifically to hold other objects. But that definition is too broad to offer much guidance for our daily programming. To truly master containers in Python, you need to approach them from two levels:

·Underlying implementation: What data structures are used by the built-in container types? How does a certain operation work?

· High-level abstraction: What determines whether an object is a container? What behaviors define a container?

Now, let us stand on these two different levels and re-understand containers.

A low-level view of containers

Python is a high-level programming language, and its built-in container types are the result of a high degree of encapsulation and abstraction. Unlike names such as "linked list", "red-black tree", and "hash table", the names of Python's built-in types only describe what the type does; from the name alone you cannot learn even the smallest internal detail.

This is one of the advantages of the Python language. Compared with programming languages such as C that sit closer to the machine, Python redesigns and implements built-in container types that are friendlier to programmers, shielding us from extra work such as memory management and giving us a better development experience.

But if this is the advantage of the Python language, why do we still bother to understand the implementation details of container types? The answer is: paying attention to details helps us write faster code.

Write faster code

1. Avoid frequently expanding lists/creating new lists

No built-in container type limits its capacity. If you want, you can keep stuffing numbers into an empty list until it fills up your machine's entire memory.

In the implementation of the Python language, list memory is allocated on demand [Note 1]. When the memory a list currently owns is not enough, memory-expansion logic is triggered, and allocating memory is an expensive operation. In most cases this will not have any serious impact on your program's performance, but when the amount of data you are processing is particularly large, memory allocation can easily drag down the performance of the whole program.

Fortunately, Python has long been aware of this problem and provides an official guideline for it: "become lazy".

How to explain "becoming lazy"? The evolution of the range() function is a very good example.

In Python 2, if you call range(100000000), you have to wait several seconds for the result, because it needs to build and return a huge list, spending a lot of time on memory allocation and computation. In Python 3, the same call returns immediately: the function no longer returns a list but a lazy object of type range, which only hands you real numbers when you iterate over it or slice it.
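To make this concrete, a tiny Python 3 sketch of that laziness (both statements return instantly):

r = range(100000000)
print(r[10000])       # 10000: indexing works without building a huge list
print(50000000 in r)  # True: membership is computed directly, not searched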

So, in order to improve performance, the built-in function range "became lazy". To avoid allocating memory too frequently, the functions we write in daily coding should also become lazy, which includes the following (a short sketch follows this list):

·Use the yield keyword more often and return generator objects

·Prefer generator expressions over list comprehensions

    ·Generator expression: (i for i in range(100))

    ·List comprehension: [i for i in range(100)]

·Prefer the lazy objects provided by modules:

    ·Use re.finditer instead of re.findall

    ·Iterate over the file object directly: for line in fp instead of for line in fp.readlines()
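As a minimal sketch of this lazy style, here is a hypothetical log-filtering helper (the file name and pattern are invented for illustration) that yields matching lines one at a time instead of returning a full list:

import re

def iter_error_lines(path, pattern=r"ERROR"):
    """Lazily yield the lines of a file that match `pattern`, one at a time."""
    regex = re.compile(pattern)
    with open(path) as fp:
        for line in fp:                # iterate the file object directly
            if regex.search(line):
                yield line.rstrip("\n")

# Only one line is held in memory at a time, no matter how large the file is:
# for line in iter_error_lines("app.log"):
#     print(line)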

2. Use collections.deque in scenarios with many operations at the head of a list

The list is implemented on top of an array structure. When you insert a new member at the head of the list (list.insert(0, item)), all the members behind it must be moved, so the operation's time complexity is O(n). This makes inserting at the head of a list much slower than appending at the tail (list.append(item), which is O(1)).

If your code performs these operations many times, consider using collections.deque instead of a list. Because deque is implemented as a double-ended queue, appending elements at either the head or the tail is O(1).
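A minimal sketch of deque used as a queue with cheap operations at both ends:

from collections import deque

queue = deque()
queue.append("job-1")        # append at the tail, O(1)
queue.appendleft("job-0")    # append at the head, also O(1)
print(queue.popleft())       # job-0: popping from the head is O(1) as well

# The equivalent list operations at the head, lst.insert(0, item) and
# lst.pop(0), are O(n) because every other member has to be shifted.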

3. Use a set/dictionary to determine whether a member exists

When you need to determine whether a member exists in a container, a set is more suitable than a list, because the time complexity of item in [...] is O(n) while item in {...} is O(1). This is because both dictionaries and sets are implemented on top of the hash table data structure.

# This example is not entirely apt: when the target collection is very small,
# whether you use a set or a list makes almost no difference to efficiency,
# but that's not the point :)
VALID_NAMES = ["piglei", "raymond", "bojack", "caroline"]
# Convert to a set dedicated to membership checks
VALID_NAMES_SET = set(VALID_NAMES)
def validate_name(name):
    if name not in VALID_NAMES_SET:
        # f-strings, added in Python 3.6, are used here
        raise ValueError(f"{name} is not a valid name!")

Hint: It is highly recommended to read TimeComplexity - Python Wiki to learn more about the time complexity of common container types.

If you are interested in the implementation details of dictionaries, I also strongly recommend watching Raymond Hettinger's talk Modern Dictionaries (YouTube).


A high-level view of containers

Python is a "duck type" language: "When you see a bird that walks like a duck, swims like a duck, and quacks Like a duck, then this bird can be called a duck." Therefore, when we say what type an object is, we actually mean that this object satisfies the specific interface specification of the type and can be regarded as Use this type. The same is true for all built-in container types.

Open the abc submodule (short for "Abstract Base Classes") under the collections module and you will find the definitions of all container-related interfaces (abstract classes) [Note 2]. Let's look at which interfaces the built-in container types satisfy:

·List (list): satisfies Iterable, Sequence, MutableSequence, and other interfaces

·Tuple (tuple): satisfies Iterable, Sequence

·Dictionary (dict): satisfies Iterable, Mapping, MutableMapping [Note 3]

·Set (set): satisfies Iterable, Set, MutableSet [Note 4]

Each built-in container type is in fact a composite that satisfies multiple interface definitions. For example, all container types satisfy the Iterable interface, which means they are all "iterable". But the converse is not true: not every "iterable" object is a container. A string can be iterated over, yet we usually do not treat it as a container.
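These interfaces can be checked directly against the abstract classes in collections.abc; a quick sketch:

from collections.abc import Iterable, MutableSequence, Mapping

print(isinstance([], Iterable))          # True: lists are iterable
print(isinstance([], MutableSequence))   # True: and mutable sequences
print(isinstance((), MutableSequence))   # False: tuples cannot be modified
print(isinstance({}, Mapping))           # True: dicts are mappings
print(isinstance("abc", Iterable))       # True: iterable, yet rarely called a "container"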

With this fact understood, we can revisit one of the most important principles of object-oriented programming in Python: program to an interface, not to a concrete implementation.

Let us use an example to see how to understand "interface-oriented programming" in Python.

Write more extensible code

One day, we received a requirement: there is a list containing many user comments, and in order to display them properly on the page, all comments longer than a certain length need to be truncated with an ellipsis.

This requirement is easy to meet, and we quickly wrote the first version of the code:

# Note: to make the example code more illustrative, some snippets in this article
# use the Type Hinting feature added in Python 3.5
import typing

def add_ellipsis(comments: typing.List[str], max_length: int = 12):
    """If a comment in the list is longer than max_length, replace the rest with an ellipsis."""
    index = 0
    for comment in comments:
        comment = comment.strip()
        if len(comment) > max_length:
            comments[index] = comment[:max_length] + '...'
        index += 1
    return comments
comments = [
    "Implementation note",
    "Changed",
    "ABC for generator",
]
print("\n".join(add_ellipsis(comments)))
# OUTPUT:
# Implementati...
# Changed
# ABC for gene...

In the code above, the add_ellipsis function takes a list as its parameter, then iterates through it and replaces the members that need to be modified. This all seems reasonable, because the original requirement we received was: "there is a list, in which...". But what if one day the comments we get are no longer stored in a list, but in an immutable tuple?

In that case, the existing function design forces us to write add_ellipsis(list(comments)), which is both slow and ugly.

Programming for container interfaces

We need to improve the function to avoid this problem. The add_ellipsis function depends heavily on the list type, so when the parameter type changes to a tuple the current function no longer works (a TypeError is raised when assigning to comments[index]). How can this part of the design be improved? The secret is: make the function depend on the abstract concept of an "iterable object" rather than on the concrete list type.

Using the generator feature, the function can be changed to this:

def add_ellipsis_gen(comments: typing.Iterable[str], max_length: int = 12):
    """If a comment in the iterable is longer than max_length, replace the rest with an ellipsis."""
    for comment in comments:
        comment = comment.strip()
        if len(comment) > max_length:
            yield comment[:max_length] + '...'
        else:
            yield comment
print("\n".join(add_ellipsis_gen(comments)))

In the new function, the parameter type we depend on has changed from a list to the abstract class for iterables. Doing this has many advantages, and one of the most obvious is that whether the comments come from a list, a tuple, or a file, the new function handles them with ease:

# Process comments stored in a tuple
comments = ("Implementation note", "Changed", "ABC for generator")
print("\n".join(add_ellipsis_gen(comments)))
# Process comments stored in a file
with open("comments") as fp:
    for comment in add_ellipsis_gen(fp):
        print(comment)

After the dependency changes from a concrete container type to an abstract interface, the function becomes applicable in far more situations. On top of that, the new function also has advantages in execution efficiency. Now let's return to the earlier question: from a high-level perspective, what defines a container?

The answer is: the interface protocols that each container type implements are what define a container. In our eyes, different container types should be seen as combinations of traits such as whether they can be iterated, whether they can be modified, and whether they have a length. When writing related code, we should pay more attention to these abstract properties of a container rather than to the container type itself, which helps us write more elegant and more extensible code.

Hint: you can find many more treasures for working with iterable objects in the built-in itertools module.

Common tips

1. Use tuples to improve branching code

Sometimes, our code ends up with if/else statements of more than three branches, like this:

import time
 
def from_now(ts):
    """Take a past timestamp and return a text description of how long ago it was."""
    now = time.time()
    seconds_delta = int(now - ts)
    if seconds_delta < 1:
        return "less than 1 second ago"
    elif seconds_delta < 60:
        return "{} seconds ago".format(seconds_delta)
    elif seconds_delta < 3600:
        return "{} minutes ago".format(seconds_delta // 60)
    elif seconds_delta < 3600 * 24:
        return "{} hours ago".format(seconds_delta // 3600)
    else:
        return "{} days ago".format(seconds_delta // (3600 * 24))
now = time.time()
print(from_now(now))
print(from_now(now - 24))
print(from_now(now - 600))
print(from_now(now - 7500))
print(from_now(now - 87500))
# OUTPUT:
# less than 1 second ago
# 24 seconds ago
# 10 minutes ago
# 2 hours ago
# 1 days ago

There is not much to criticize in the function above, and many, many people would write similar code. But if you look at it closely, you can find some obvious "boundaries" in the branching part. For example, when the function decides whether a time should be displayed in "seconds", it uses 60; when deciding whether to use minutes, it uses 3600.

Extracting a pattern from these boundaries is the key to optimizing this code. If we put all of these boundaries into a sorted tuple and pair it with the binary-search module bisect, the control flow of the whole function can be greatly simplified:

import bisect
import time

# BREAKPOINTS must already be sorted, otherwise binary search will not work
BREAKPOINTS = (1, 60, 3600, 3600 * 24)
TMPLS = (
    # unit, template
    (1, "less than 1 second ago"),
    (1, "{units} seconds ago"),
    (60, "{units} minutes ago"),
    (3600, "{units} hours ago"),
    (3600 * 24, "{units} days ago"),
)
def from_now(ts):
    """Take a past timestamp and return a text description of how long ago it was."""
    seconds_delta = int(time.time() - ts)
    unit, tmpl = TMPLS[bisect.bisect(BREAKPOINTS, seconds_delta)]
    return tmpl.format(units=seconds_delta // unit)

Besides using tuples to simplify an excess of if/else branches, in some cases dictionaries can be used to do the same thing, as the sketch below shows. The key is to find repeated logic and patterns in the existing code and to keep experimenting.
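As a sketch of that dictionary-based branching (the action names and handlers here are invented for illustration):

def handle_create(payload):
    return "created {}".format(payload)

def handle_delete(payload):
    return "deleted {}".format(payload)

# The mapping replaces an if/elif chain: each action name points at its handler.
HANDLERS = {
    "create": handle_create,
    "delete": handle_delete,
}

def dispatch(action, payload):
    handler = HANDLERS.get(action)
    if handler is None:
        raise ValueError("unknown action: {}".format(action))
    return handler(payload)

print(dispatch("create", "comment#1"))
# OUTPUT: created comment#1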

2. Use dynamic unpacking in more places

Dynamic unpacking refers to "unpacking" an iterable object with the * or ** operator. In the Python 2 era, this operation could only be used in the argument part of a function call, with very strict requirements on the order and the number of arguments, so its use cases were quite limited.

def calc(a, b, multiplier=1):
    return (a + b) * multiplier
# Python 2 only supports dynamic unpacking in the function-argument position
print calc(*[1, 2], **{"multiplier": 10})
# OUTPUT: 30

However, in Python 3, and especially from version 3.5 onward, the use cases for * and ** have been greatly expanded. For example, in Python 2, if we want to merge two dictionaries, we need to do this:

def merge_dict(d1, d2):
    # Dictionaries are mutable objects; to avoid modifying the original, make a shallow copy of d1
    result = d1.copy()
    result.update(d2)
    return result
user = merge_dict({"name": "piglei"}, {"movies": ["Fight Club"]})

But in Python 3.5 and later, you can merge dictionaries directly with the ** operator:

user = {**{"name": "piglei"}, **{"movies": ["Fight Club"]}}

Beyond that, you can also use the * operator in ordinary assignment statements to dynamically unpack iterable objects, as sketched below. If you want to learn more about this, read the PEPs recommended below.
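A quick sketch of the unpacking forms described by those PEPs:

# PEP 3132: extended unpacking in an assignment statement
first, *rest = [1, 2, 3, 4, 5]
print(first)   # 1
print(rest)    # [2, 3, 4, 5]

# PEP 448: * inside list/tuple/set displays
merged = [*range(3), *"ab"]
print(merged)  # [0, 1, 2, 'a', 'b']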

Hint: the two PEPs that drove the expansion of dynamic unpacking:

    ·PEP 3132 -- Extended Iterable Unpacking | Python.org

    ·PEP 448 -- Additional Unpacking Generalizations | Python.org

3. Best to neither "ask for permission" nor "ask for forgiveness"

This heading may be slightly confusing, so let me explain briefly: "asking for permission" and "asking for forgiveness" are two different programming styles. Using the classic task of "count how many times each element appears in a list" as an example, the two styles look like this:

# AF: Ask for Forgiveness
# Just do it, and if an exception is raised, handle the exception afterwards
def counter_af(l):
    result = {}
    for key in l:
        try:
            result[key] += 1
        except KeyError:
            result[key] = 1
    return result
# AP: Ask for Permission
# Before doing it, check whether it can be done; only do it if it can
def counter_ap(l):
    result = {}
    for key in l:
        if key in result:
            result[key] += 1
        else:
            result[key] = 1
    return result

The Python community as a whole has a clear preference for the first, exception-catching "ask for forgiveness" style. There are many reasons for this. First, raising an exception in Python is a lightweight operation. Second, the first approach also outperforms the second, because it does not have to perform an extra membership check on every loop iteration.

That said, both snippets are rarely seen in the real world. Why? Because if you want to count occurrences, you can just use collections.defaultdict:

from collections import defaultdict
 
def counter_by_collections(l):
    result = defaultdict(int)
    for key in l:
        result[key] += 1
    return result

Code like this needs neither "permission" nor "forgiveness", and its control flow becomes clearer and more natural. So, whenever possible, try to leave out exception-catching logic that is not part of the core task. A few small tips (a short sketch follows this list):

·When operating on dictionary members: use the collections.defaultdict type

    ·Or use the dict.setdefault() method, e.g. d[key] = d.setdefault(key, 0) + 1

·When removing a dictionary member and you don't care whether it exists:

    ·Pass a default value when calling pop, e.g. dict.pop(key, None)

·When reading a member from a dictionary, specify a default value: dict.get(key, default_value)

·Slicing a list out of range does not raise an IndexError: ["foo"][100:200]
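A small sketch putting those tips together (the dictionary contents are invented for illustration):

from collections import defaultdict

counts = defaultdict(int)
counts["python"] += 1             # missing keys default to 0, no KeyError

prefs = {"theme": "dark"}
prefs.pop("font", None)           # remove if present, do nothing if missing
print(prefs.get("font", "mono"))  # mono: the default for a missing key

print(["foo"][100:200])           # []: out-of-range slices never raise IndexError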

4. Use the next() function

next() is a very practical built-in function. It takes an iterator as its argument and returns the iterator's next element. Combined with a generator expression, it can efficiently implement needs such as "find the first member of a list that satisfies a condition":

numbers = [3, 7, 8, 2, 21]
# Get, and immediately return, the first even number in the list
print(next(i for i in numbers if i % 2 == 0))
# OUTPUT: 8
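One detail worth adding: next() also accepts a default value as its second argument, which is returned instead of raising StopIteration when nothing matches. A small sketch:

numbers = [3, 7, 21]
# There is no even number here; without a default, next() would raise StopIteration
print(next((i for i in numbers if i % 2 == 0), None))
# OUTPUT: None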

5. Use an ordered dictionary to deduplicate

The structure of dictionaries and sets guarantees that their members never repeat, so they are often used for deduplication. However, deduplicating with either of them loses the original order of the list. This is determined by the nature of their underlying data structure, the hash table.

>>> l = [10, 2, 3, 21, 10, 3]
# Deduplicated, but the order is lost
>>> set(l)
{3, 10, 2, 21}

What if we need to deduplicate while also preserving order? We can use the collections.OrderedDict type, as the sketch below shows:
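A minimal sketch of that OrderedDict-based deduplication:

from collections import OrderedDict

l = [10, 2, 3, 21, 10, 3]
# fromkeys() keeps only the first occurrence of each key and preserves their order
print(list(OrderedDict.fromkeys(l)))
# OUTPUT: [10, 2, 3, 21]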

Hint: in Python 3.6 the default dict type changed its implementation and became ordered, and in Python 3.7 this was promoted from an implementation detail of the language to an official language feature you can rely on.

But I think it will take some time for the whole Python community to get used to this, since "dictionaries are unordered" is still printed in countless Python books. So for now I still recommend using OrderedDict wherever an ordered dictionary is needed.

Common misunderstandings

1. Beware of exhausted iterators

Earlier in the article we mentioned the many benefits of "lazy" generators. But everything has two sides. One of the biggest drawbacks of generators is that they can be exhausted: once you have iterated over them completely, further iteration returns nothing new.

numbers = [1, 2, 3]
numbers = (i * 2 for i in numbers)
# The first loop prints 2, 4, 6
for number in numbers:
    print(number)
# This loop prints nothing, because the iterator is already exhausted
for number in numbers:
    print(number)

This is not limited to generator expressions: the built-in map and filter functions in Python 3 behave the same way. Ignoring this characteristic easily leads to hard-to-spot bugs, as the small sketch below shows.
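A tiny sketch of the same trap with map():

doubled = map(lambda x: x * 2, [1, 2, 3])
print(list(doubled))   # [2, 4, 6]
print(list(doubled))   # [] (the map object is already exhausted)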

Instagram ran into exactly this problem while migrating their project from Python 2 to Python 3, and they shared how they dealt with it at PyCon 2017. See the summary of Instagram's PyCon 2017 talk and search for "iterator" for the details.

2. Don't modify the object being iterated inside the loop body

This is a mistake many Python beginners make. For example, suppose we need a function that removes all even numbers from a list:

def remove_even(numbers):
    """Remove all even numbers from the list."""
    for i, number in enumerate(numbers):
        if number % 2 == 0:
            # Problematic code
            del numbers[i]
numbers = [1, 2, 7, 4, 8, 11]
remove_even(numbers)
print(numbers)
# OUTPUT: [1, 7, 8, 11]

Did you notice the extra "8" in the result? This is what happens when you modify a list while iterating over it: the iterated object, numbers, is changed during the loop. The traversal index keeps increasing while the length of the list keeps shrinking, so some members of the list are simply never visited.

So for this kind of operation, collect the results in a new, empty list, or use yield to return a generator, instead of modifying the iterated list or dictionary itself. A corrected sketch follows.
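A corrected sketch of the same task, using both approaches (these replacement helpers are illustrative, not from the original article):

def remove_even(numbers):
    """Return a new list with all even numbers removed; the input list is untouched."""
    return [n for n in numbers if n % 2 != 0]

def iter_odd(numbers):
    """Lazily yield only the odd numbers instead of modifying the original list."""
    for n in numbers:
        if n % 2 != 0:
            yield n

numbers = [1, 2, 7, 4, 8, 11]
print(remove_even(numbers))     # [1, 7, 11]
print(list(iter_odd(numbers)))  # [1, 7, 11]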
