
Why have Python, Ruby and other languages deprecated the increment operator?



Many people may have noticed a phenomenon: some modern programming languages (not necessarily "recently created" ones) have dropped the increment and decrement operators. In other words, these languages have no expressions like i++ or j--, only statements like i += 1 or j -= 1. This article explores the background and the reasons for this from the perspective of design philosophy.

Strictly speaking, saying "++ is disappearing" may be somewhat one-sided, because among mainstream programming languages it seems that only Python, Rust and Swift lack the increment and decrement operators.

This confused me too when I first encountered Python. Out of interest I searched through many related answers and articles, but never found a satisfying explanation. Now, several years later, I have tried to rethink the question and give my own answer.

Please note that this article discusses the issue only "from a design philosophy" standpoint and does not dwell on the specifics of any particular language. For example, a large part of the reason Python provides no increment and decrement operators is that its integer type is immutable, but that is not a "design philosophy" argument, so this article will not cover it.

Why are there increment and decrement operators?

Origin

Wikipedia notes that the increment and decrement operators first appeared in the B language (the predecessor of C). B was created by the same people as C, namely K&R, and it was Ken Thompson who first introduced the increment and decrement operators, in B. This is why people often say, loosely, that "the increment and decrement operators originated in C". That is not quite accurate, but it is not far off.

B's syntax is highly similar to C's; the biggest difference is probably that B is untyped. I will not go further into the B language here, or we would stray from the topic. The point is simply where the increment and decrement operators first came from.

Opinions differ on why the increment and decrement operators were introduced into B. Ken Thompson never publicly explained why he created them. One misconception does need to be cleared up, though: the introduction of these two operators does not correspond to the INC and DEC instructions of assembly language. In fact, Dennis M. Ritchie, the other creator of B (and, of course, of C), pointed this out in his memoir "The Development of the C Language":

...Thompson went a step further by inventing the ++ and -- operators, which increment or decrement; their prefix or postfix position determines whether the alteration occurs before or after noting the value of the operand. They were not in the earliest versions of B, but appeared along the way. People often guess that they were created to use the auto-increment and auto-decrement address modes provided by the DEC PDP-11 on which C and Unix first became popular. This is historically impossible, since there was no PDP-11 when B was developed. The PDP-7, however, did have a few 'auto-increment' memory cells, with the property that an indirect memory reference through them incremented the cell. This feature probably suggested such operators to Thompson; the generalization to make them both prefix and postfix was his own. Indeed, the auto-increment cells were not used directly in implementation of the operators, and a stronger motivation for the innovation was probably his observation that the translation of ++x was smaller than that of x=x+1.

Ritchie's account is slightly vague here. It only rules out the PDP-11's auto-increment and auto-decrement addressing modes as the origin (the machine did not even exist when B was invented), without saying whether the operators correspond to INC and DEC in assembly. To verify this, I looked up the PDP-7 instruction set mentioned in the passage, and it indeed contains no INC or DEC instruction. To be thorough, I also checked the PDP-7 assembly manual and found no such instructions there either. This shows that the invention of the increment and decrement operators cannot have stemmed from a direct correspondence with the INC and DEC assembly instructions.

Incidentally, to pin down when the INC and DEC assembly instructions first appeared, I found the 1969 edition of the PDP-11 Handbook, which indicates that INC and DEC were newly introduced with the PDP-11 (the page shown below does not include DEC, but it appears later in the manual):


[Figure: PDP-11 Handbook, 1969, page 34]

The PDP-11 was formally released in 1970, while B was born in 1969. Unless Ken Thompson took part in the early development of the PDP-11, the inspiration for the increment and decrement operators cannot have come from the INC and DEC assembly instructions. Of course, as Dennis Ritchie pointed out, auto-increment memory cells already existed on the PDP-7, and it was very likely these that inspired Ken Thompson to introduce the operators.

Another fact that refutes the claim that "the increment and decrement operators correspond directly to assembly instructions" is that B initially could not be compiled straight to machine code at all; it was compiled to something known as "threaded code". If the language could not even be compiled directly to machine code at first, the claim has nothing to stand on.

So the original reason for the increment and decrement operators was probably very simple: machine memory was precious in those days, and ++x compiled to slightly less code than x = x + 1 or x += 1. Back then, writing a little less code was always welcome, and so the increment and decrement operators were born.

Better performance? Atomicity?

Well, even though we have just argued carefully that the operators' origin has nothing to do with the PDP-11's ISA, K&R were merely the creators of C; what would they know about C (joking)? After K&R, C syntax has been bent in every imaginable direction, likely far beyond anything they anticipated.

Whether the increment and decrement operators actually get compiled to INC and DEC is up to modern compilers. Below, I compile the relevant C code on Ubuntu 22.04 and disassemble it, to see whether i++ becomes INC and thus whether the logic of "increment and decrement operators improve performance" holds up.

Here is the test program:

// incr_test.c
#include <stdio.h>

int main(void)
{
    for (int i = 0; i < 5; i++)
    {
        printf("%d", i);
    }
    return 0;
}

Then run gcc, with optimization off by default:

gcc -o incr_test incr_test.c

Then disassemble with objdump:

objdump -d incr_test

The relevant assembly is shown below (I am on an x86-64 platform), with unrelated code removed:

0000000000001149 <main>:
    1149:   f3 0f 1e fa             endbr64
    114d:   55                      push   %rbp
    114e:   48 89 e5                mov    %rsp,%rbp
    1151:   48 83 ec 10             sub    $0x10,%rsp
    1155:   c7 45 fc 00 00 00 00    movl   $0x0,-0x4(%rbp)
    115c:   eb 1d                   jmp    117b <main+0x32>
    115e:   8b 45 fc                mov    -0x4(%rbp),%eax
    1161:   89 c6                   mov    %eax,%esi
    1163:   48 8d 05 9a 0e 00 00    lea    0xe9a(%rip),%rax        # 2004 <_IO_stdin_used+0x4>
    116a:   48 89 c7                mov    %rax,%rdi
    116d:   b8 00 00 00 00          mov    $0x0,%eax
    1172:   e8 d9 fe ff ff          call   1050 <printf@plt>
    1177:   83 45 fc 01             addl   $0x1,-0x4(%rbp)
    117b:   83 7d fc 04             cmpl   $0x4,-0x4(%rbp)
    117f:   7e dd                   jle    115e <main+0x15>
    1181:   b8 00 00 00 00          mov    $0x0,%eax
    1186:   c9                      leave
    1187:   c3                      ret

As you can see, by default inc is not used at all; the compiler emits addl instead.

Someone is bound to ask: is that because optimization was off? Fine, let's turn it on:

gcc -o incr_test incr_test.c -O1
objdump -d incr_test

This time addl becomes add, but inc still does not appear:

0000000000001149 <main>:
    1149:   f3 0f 1e fa             endbr64
    114d:   55                      push   %rbp
    114e:   53                      push   %rbx
    114f:   48 83 ec 08             sub    $0x8,%rsp
    1153:   bb 00 00 00 00          mov    $0x0,%ebx
    1158:   48 8d 2d a5 0e 00 00    lea    0xea5(%rip),%rbp        # 2004 <_IO_stdin_used+0x4>
    115f:   89 da                   mov    %ebx,%edx
    1161:   48 89 ee                mov    %rbp,%rsi
    1164:   bf 01 00 00 00          mov    $0x1,%edi
    1169:   b8 00 00 00 00          mov    $0x0,%eax
    116e:   e8 dd fe ff ff          call   1050 <__printf_chk@plt>
    1173:   83 c3 01                add    $0x1,%ebx
    1176:   83 fb 05                cmp    $0x5,%ebx
    1179:   75 e4                   jne    115f <main+0x16>
    117b:   b8 00 00 00 00          mov    $0x0,%eax
    1180:   48 83 c4 08             add    $0x8,%rsp
    1184:   5b                      pop    %rbx
    1185:   5d                      pop    %rbp
    1186:   c3                      ret

At higher optimization levels the assembly becomes too unreadable to paste here, but I verified that even at O3 or Ofast there is no trace of inc.

Perhaps in some special cases i++ does compile to inc, but if you expect the compiler to turn i++ into a single inc instruction for speed (inc is not even atomic, so do not expect it to confer any "atomicity" either), you are fooling yourself. As far as gcc is concerned, i++ and i += 1 are exactly the same.
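
To make the non-atomicity point concrete outside of assembly: an increment is a read-modify-write sequence at every level of the stack. The following Python sketch is illustrative only; whether you actually observe lost updates depends on the interpreter version and on timing:

import threading

counter = 0

def bump(n):
    global counter
    for _ in range(n):
        counter += 1  # load, add, store: three separate steps, not atomic

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# May print less than 400000 if a thread switch lands between the steps.
print(counter)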

Could this be a gcc problem? Would clang give a different result? The answer is no, it would not:

clang -o incr_test incr_test.c
objdump -d incr_test

The result:

0000000000001140 <main>:
    1140:   55                      push   %rbp
    1141:   48 89 e5                mov    %rsp,%rbp
    1144:   48 83 ec 10             sub    $0x10,%rsp
    1148:   c7 45 fc 00 00 00 00    movl   $0x0,-0x4(%rbp)
    114f:   c7 45 f8 00 00 00 00    movl   $0x0,-0x8(%rbp)
    1156:   83 7d f8 05             cmpl   $0x5,-0x8(%rbp)
    115a:   0f 8d 1f 00 00 00       jge    117f <main+0x3f>
    1160:   8b 75 f8                mov    -0x8(%rbp),%esi
    1163:   48 8d 3d 9a 0e 00 00    lea    0xe9a(%rip),%rdi        # 2004 <_IO_stdin_used+0x4>
    116a:   b0 00                   mov    $0x0,%al
    116c:   e8 bf fe ff ff          call   1030 <printf@plt>
    1171:   8b 45 f8                mov    -0x8(%rbp),%eax
    1174:   83 c0 01                add    $0x1,%eax
    1177:   89 45 f8                mov    %eax,-0x8(%rbp)
    117a:   e9 d7 ff ff ff          jmp    1156 <main+0x16>
    117f:   31 c0                   xor    %eax,%eax
    1181:   48 83 c4 10             add    $0x10,%rsp
    1185:   5d                      pop    %rbp
    1186:   c3                      ret

Likewise with clang: I tried every optimization level, and inc never shows up.

Conciseness

The investigation above may have been overkill, drifting a little from the stated aim of "discussing from design philosophy". The point of all that work was simply to show that the increment and decrement operators provide no real performance gain: performance was never a consideration when they were designed, and for various reasons modern compilers almost never compile i++ to inc (in fact, that happens only with very old compilers; see StackOverflow). And since inc and dec are not atomic instructions, the operators cannot bring the program any "atomicity" either.

Now the topic finally returns to design philosophy. Having ruled out every "for performance / for atomicity / to mirror assembly..." rationale as assumption rather than fact, the only answers left are design-philosophical ones.

For C/C++ programmers, the for loop is an extremely handy tool. C (and even B) was not the first language to introduce the semicolon-separated for loop, but it was the one that truly popularized it.

The introduction of the increment and decrement operators made the for loop extremely powerful, so much so that many C/C++ programmers habitually compress as much as possible into a single for (or while) statement ending in a semicolon, making the code remarkably terse. Programmers new to this style may find it jarring at first, but with enough exposure these idioms read as concise and clear:

// apply add() to each element, entirely within the loop header
for (vector<int>::iterator iter = vec.begin(); iter != vec.end(); add(*(iter++)));
// advance i past leading zeros
for (size_t i = 0; arr[i] == 0; i++);
// advance i past values greater than 5
while (v->data[i++] > 5);
// count down until i reaches zero
while (--i) { ... }

Some C/C++ programmers consider this traditional for loop more capable and more expressive than the iterator-based for of many modern languages. Moreover, since C/C++ cannot traverse plain arrays with iterators directly (unlike Java, which later gained syntactic sugar for iterating arrays), incrementing and decrementing pointers is a frequent and important operation, so providing the operators clearly fits C/C++'s design philosophy.

Why have some modern programming languages dropped the increment and decrement operators?

To be clear, as said above, in C++ (indeed in any language with the traditional for loop) the increment and decrement operators can be regarded as a net benefit: they make code more concise, and used with care, even clearer. Whether a syntactic feature is good design clearly depends on its environment. The claim here is only that in many carefully designed modern languages, the operators no longer seem so important.

Side effects

Notice that in many programming languages, the only operators with side effects besides the assignment operators (including but not limited to =, +=, &= and so on) are increment and decrement. Assignment operators have side effects out of sheer necessity; otherwise variables could never be assigned.

Among all the other operators, however, such as +, -, &, and ||, none has any side effect: they compute a value from their operands without modifying anything. Increment and decrement stand alone in this regard.

The downsides of side effects have no doubt reached everyone, in one form or another, through discussions of functional programming. Pure functions are easy to test and compose: given the same arguments, a pure function returns the same result every time. The increment and decrement operators, by their very syntactic design, flout functional programming's principle of immutability.
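
A minimal Python sketch of that contrast (illustrative names only):

def successor(x):
    # pure: same input, same output, no external state touched
    return x + 1

count = 0

def bump():
    # impure: each call mutates and depends on shared state
    global count
    count += 1
    return count

print(successor(1), successor(1))  # 2 2: trivially repeatable
print(bump(), bump())              # 1 2: result depends on call history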

Indeed, setting aside pure functional languages without variables (which of course have no increment or decrement operators), many mixed-paradigm languages that do have variables but lean functional also lack them. Besides Python, Rust and Swift mentioned at the start, other functional-leaning mixed-paradigm languages such as Scala have no native increment or decrement operators either.

Among all operators, increment and decrement always stand out for their side effects. For languages that take functional programming seriously, they do more harm than good and are very hard to accept.

Imagine someone writing functional-style code in a mixed-paradigm language when, for whatever reason, an i++ slips in. Hunting down the resulting bug could be genuinely painful: compared with i += 1, i++ is just too inconspicuous. It is hard, in cluttered code, to recognize at a glance that it is an assignment with a side effect, and that invites latent bugs.

Iterators have replaced most uses of increment and decrement

In recent years, it seems every new language prefers iterator-style loops to the traditional C-style for loop. Even Go, with its deliberately retro syntax, recommends range over the traditional for. Rust went further and removed the traditional for loop outright, keeping only the iterator form. Even the older languages, Java, JavaScript, C++ and others, have all since added iterator-style loop syntax.

A quick comparison of the traditional for loop and the iterator-style loop in several languages:

Java

int[] arr = { 1, 2, 3, 4, 5 };
// traditional counting loop
for (int i = 0; i < arr.length; i++) {
    System.out.println(arr[i]);
}
// iteration
for (int num : arr) {
    System.out.println(num);
}

JavaScript

const arr = [1, 2, 3, 4, 5]
// traditional counting loop
for (let i = 0; i < arr.length; i++) {
    console.log(arr[i])
}
// iteration
for (const num of arr) {
    console.log(num)
}

Go

arr := [5]int{1, 2, 3, 4, 5}
// traditional counting loop
for i := 0; i < len(arr); i++ {
    fmt.Println(arr[i])
}
// iteration
for _, num := range arr {
    fmt.Println(num)
}
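
For completeness, the same contrast in Python, one of the main subjects of this article (a sketch; Python has no C-style for at all, so the counting loop must be emulated with range):

arr = [1, 2, 3, 4, 5]
# counting loop, emulated with range
for i in range(len(arr)):
    print(arr[i])
# iteration
for num in arr:
    print(num)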

Plainly, the iterator versions use less code and are, if anything, clearer.

Of course, the value of iterators goes beyond superficially "saving code". More importantly, they lower the developer's mental load. Anyone with C/C++ experience knows that modifying i inside a traditional for loop is dangerous: one lapse can cause serious bugs or an infinite loop. Iterator logic is different: each pass takes the next value from the iterator instead of incrementing some variable. So even if you accidentally modify the loop variable inside an iterator-based loop, nothing breaks:

for i in range(5):
    i -= 1

Is the Python code above an infinite loop? It is not. The logic of for i in range(5) is not to create a counter i and increment it each pass; it first creates an iterator and then draws values from it one by one. The sequence of values i will take is fixed from the start, so changing i inside the loop body has no effect: on the next pass, i is simply bound to the iterator's next value, no matter what happened on the previous one. (Code like this is still not recommended in production; it invites misreading.)
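
As a sketch of why, here is roughly what for i in range(5) desugars to (illustrative; CPython's actual implementation differs in its details):

it = iter(range(5))
while True:
    try:
        i = next(it)  # i is rebound from the iterator on every pass...
    except StopIteration:
        break
    i -= 1  # ...so mutating it here cannot affect the next pass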

As you can see, in modern languages iterators have replaced the vast majority of use cases for the increment and decrement operators, while making code more concise and clearer. For languages that only have iterator-style for loops, such as Python and Rust, there is naturally little need to add the operators at all.

The disappearance of assignment return values

Programmers familiar with C/C++ know that assignments have values, and you will regularly see C/C++ code like the following (the same works in Java, though Java programmers do not seem to like writing it):

int a = 1, b = 2, c = 3;
a = (b += 3);

The value of an assignment is the value of the assigned variable after the assignment has executed. In the example above, a ends up equal to 5.

Why does an assignment yield a value rather than null or something similar? Largely to support chained assignment:

int a = 1, b = 2, c = 3;
a = b = c = 5;

In the code above, a = b = c = 5 feels so natural that people tend to forget that chained assignment of this kind is not syntactic sugar but a direct consequence of assignment yielding a value. The assignment operator is right-associative, so the statement first executes c = 5, which yields 5; then b = 5, and so on, producing the chained assignment.

In many modern languages, assignment no longer yields a value, or its value is used only to implement chained assignment and is not allowed as an expression. In Go, for example, such a statement is a compile error; Go does not even support chained assignment:

var a = 1
var b = 2
var c = 3
a = b = c = 5 // compile error

In Go, assignment cannot be used as an expression, so there is no such thing as an assignment expression. Similarly, in Python assignment is merely a "statement", and in Rust an assignment evaluates only to the unit value (), so in these languages something like a = (b += c) is illegal or meaningless.
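
A quick Python illustration of assignment-as-statement (the walrus operator used here is the expression form discussed just below):

# a = (b = 5)   # SyntaxError: assignment is a statement, not an expression
a = (b := 5)    # legal since Python 3.8: the expression form
print(a, b)     # 5 5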

Python, however, while not allowing assignment as an expression, does support chained assignment: a = b = c is legal. But here chained assignment is no longer a natural consequence of assignment values; it really is a kind of "syntactic sugar".
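
The sugar is visible in CPython's bytecode: the right-hand side is evaluated once, duplicated on the stack, and stored to each target in turn, rather than being nested assignments. A sketch (exact opcode names vary by version, e.g. DUP_TOP before CPython 3.11, COPY from 3.11 on):

import dis

# Look for one load of c followed by a duplicate and two stores.
dis.dis(compile("a = b = c", "<demo>", "exec"))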

Still, assignment expressions are not entirely a bad thing; in particular situations they can simplify code and make it clearer. Python 3.8, for example, added an assignment-expression syntax, the "walrus operator" (:=). For example:

found = {name: batches for name in order
         if (batches := get_batches(stock.get(name, 0), 8))}

...The topic seems to have wandered. What do assignment values have to do with increment and decrement? Think for a moment and the link is strong: increment and decrement do not look like assignments, but in essence they are. If assignment yields no value and cannot serve as an expression, then in principle increment and decrement should be no exception, and should not serve as expressions either.

But if increment and decrement can only be used as ordinary assignment statements, then i++ and j-- can do little more than stand on lines of their own, while in practice the operators are used far more often as expressions than as statements. Their usefulness therefore collapses, and in a language that already has iterator-style loops, occasions for a standalone i++ line are rare to begin with, so adding the operators comes to seem rather pointless.

There are exceptions, of course. In Go, the increment and decrement operators are not true "operators" but mere syntactic sugar for assignment statements, and they really can only appear on lines of their own; yet Go stubbornly added them to the language anyway. The following Go code, for example, fails to compile:

i := 0
j := i++

That said, Go's decision to keep the increment and decrement operators is not unreasonable. Go retains the C-style traditional for loop, and for i := 0; i < n; i++ does look slightly more concise than for i := 0; i < n; i += 1, so they stayed. Had Go dropped the traditional for loop, the operators would most likely be gone as well. (Though personally, I think they have little reason to exist even in today's Go.)

What if you need the index?

At this point, most use cases for the increment and decrement operators seem to have been replaced by more modern syntax. But one small advantage seems to remain: shortening standalone statements like i += 1 or j -= 1. For example, if you need the index while iterating over an array, can i++ simplify the code?

The answer is no, because language designers considered this problem long ago. In Python, for example, an inexperienced newcomer might write code like this and then complain that Python has no increment operator:

lst = ['a', 'b', 'c', 'd', 'e']
i = 0
for c in lst:
    print(i, c)
    i += 1

Or write this:

lst = ['a', 'b', 'c', 'd', 'e']
for i in range(len(lst)):
    c = lst[i]
    print(i, c)

But Python has long provided the enumerate function for exactly this purpose; it returns an iterable that yields the index together with each element:

lst = ['a', 'b', 'c', 'd', 'e']
for i, c in enumerate(lst):
    print(i, c)

Similarly, Go can retrieve the index directly while iterating:

arr := [5]int{1, 2, 3, 4, 5}

for i, num := range arr {
    fmt.Println(i, num)
}

The same in Swift:

let arr: [String] = ["a", "b", "c", "d"]

for (i, c) in arr.enumerated() {
    print(i, c)
}

And in Rust:

let arr = [1, 2, 3, 4, 5];

for (i, &num) in arr.iter().enumerate() {
    println!("arr[{}] = {}", i, num);
}

C++ has no enumerate-like construct built in, and writing one is genuinely tricky, but it can be done with some clever template metaprogramming; try it yourself if you are interested.

Clearly, in most languages with iterator-style loop syntax, getting the index while iterating is easy. And even if a language has nothing like Python's enumerate, hand-writing a similar helper is not hard. The use cases for increment and decrement thus shrink even further: even as pure syntactic sugar for a standalone i += 1 or j -= 1, there is hardly anywhere left to use them.
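
As a sketch of how little machinery this takes, here is a minimal enumerate written as a Python generator (in real code the built-in should of course be preferred). Note how the i += 1 bookkeeping ends up hidden inside the iterator:

def my_enumerate(items, start=0):
    # minimal reimplementation of the built-in enumerate
    i = start
    for item in items:
        yield i, item
        i += 1  # the counter lives here, invisible to the caller

for i, c in my_enumerate(['a', 'b', 'c']):
    print(i, c)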

Operator overloading brings ambiguity

Generally, the increment and decrement operators should be read as synonyms for += 1 and -= 1. Operator overloading, however, makes this reading ambiguous.

If a language supports operator overloading, there are two ways to handle += and ++:

The first is to treat ++ purely as syntactic sugar for += 1: overload the += operator, and ++ is overloaded automatically. But this creates serious ambiguity. For example, Python overloads += on strings: after running x = 'a'; x += 'b', the value of x is 'ab'. If Python had a ++ operator, then under this rule x++ would mean x += 1. Today that would simply raise a type-mismatch error, which is fine. But if Python, like Java, converted types automatically when concatenating strings, x += 1 would become legal, equivalent to x += '1', and running x++ would leave x as 'ab1', which is deeply surprising.
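
A small sketch of that scenario in today's Python, which fails loudly, and that is exactly the point:

x = 'a'
x += 'b'   # += is overloaded for str: concatenation
print(x)   # ab

try:
    x += 1  # what a hypothetical x++ would desugar to
except TypeError as e:
    print(e)  # can only concatenate str (not "int") to str
# Under Java-style implicit conversion, x would silently become 'ab1' instead.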

Consider how catastrophic this could be in a weakly typed language. Even without operator overloading, JavaScript already admits black-magic code like let a = []; a++, after which a is the number 1 (the empty array is coerced to 0, and the expression itself evaluates to 0). If operator overloading were ever added to JS, and someone idly overloaded += on a built-in type, the consequences would be hard to imagine.

The second is to treat ++ as an operator that has nothing to do with +=. This avoids the bizarre behavior described above, but then when users overload the += operator they may naturally assume that ++ is overloaded as well, which introduces ambiguity of another kind.

In fact, the overloading ambiguities described here have already surfaced in many languages: in languages that support both increment/decrement and operator overloading, bugs of this kind are not uncommon. One remedy is to forbid overloading ++ and -- and allow them only on integer types. But if it comes to that, why not consider simply removing the increment and decrement operators?

Some other discussions

You may have noticed that the discussion above deliberately ignores many features of the languages themselves. In Python, for example, another big reason there are no increment and decrement operators is that integers are immutable types, which would make the operators prone to ambiguity.

As I said at the beginning, this is a characteristic of Python and falls outside the "design philosophy" discussion here, but for the sake of rigor I will touch on it briefly.
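
A quick illustration of that immutability (a sketch; the ids themselves are interpreter-specific): incrementing an int rebinds the name to a new object, while += on a list mutates in place:

x = 10
before = id(x)
x += 1
print(id(x) == before)    # False: ints are immutable; x names a new object

lst = [1, 2]
before = id(lst)
lst += [3]                # list.__iadd__ mutates the list in place
print(id(lst) == before)  # True: same object, modified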

Also, although in many languages a = a + 1, a += 1 and a++ mean the same thing, plenty of languages distinguish them. In many VM-based languages such as Python and Java, a += 1 is treated as an in-place operation, distinct from a = a + 1. In Java, for instance, a = a + 1 is implemented with the iadd bytecode, while a += 1 and a++ use iinc. Similarly, Python distinguishes their bytecodes as BINARY_ADD and INPLACE_ADD. For these languages, should a++ mean a += 1 or a = a + 1? Since the two differ, yet another layer of ambiguity arises.
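
The distinction is easy to observe in CPython (a sketch; on CPython 3.10 and earlier you will see BINARY_ADD versus INPLACE_ADD, while 3.11+ folds both into BINARY_OP variants):

import dis

def f(a):
    a = a + 1  # compiled with the binary-add opcode
    a += 1     # compiled with the in-place-add opcode
    return a

dis.dis(f)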

Summary

It must be said that the influence of the ++ and -- operators Ken Thompson dreamed up has probably gone far beyond anything I would have expected. Many people's understanding of their origins and use cases is simply taken for granted, and misconceptions such as "they improve performance" or even "they are atomic operations" fly around everywhere. Meanwhile, beginners in C (especially in China) routinely get splitting headaches from undefined behavior like a = i++ + ++i. Whether these two little operators have brought more convenience or more trouble, I leave for readers to judge.

In many modern programming languages, the status of the increment and decrement operators has been greatly weakened. Some languages strictly restrict them and forbid their use as expressions, like Go; others drop them altogether, judging += and -= to be entirely sufficient, like Python and Rust. Today, as iterators spread ever wider, these two historically important operators appear to be fading from the stage. Whether that is a good or bad thing is hard for me to say; after all, we have seen that in languages like C/C++ and Java, restrained use of increment and decrement can make code remarkably concise and clear. Is it too extreme to abolish them completely, as Python and Rust do? That, too, is hard to say. In short, whether you are a C/C++ programmer fluent in ++ and --, or an FP advocate with an instinctive aversion to operators with side effects, you have to admit that as programming languages evolve, the increment and decrement operators matter less and less, and yet they remain valuable in specific scenarios.
