Why Do JavaScript Strings "11" and "3" Compare as True in Lexicographical Comparison?
Lexicographical Comparison of Strings
The comparison "11" < "3" evaluating to true in the given code snippet can be surprising. A natural assumption is that strings are compared by length, but that is not the case: JavaScript compares strings lexicographically.
Lexicographical comparison works character by character, comparing the UTF-16 code units that make up each string (for most common characters, these match the Unicode code points). It begins with the first character of each string; if those are equal, it moves on to the second characters, and so on. The comparison stops as soon as it finds characters with different code units, or when one string runs out of characters, in which case the shorter string (a prefix of the other) is the lesser one.
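The character-by-character algorithm described above can be sketched as a plain JavaScript function. This is purely illustrative, since the built-in `<` operator already behaves this way for two string operands:

```javascript
// A minimal sketch of lexicographical string comparison,
// mirroring what JavaScript's < operator does for two strings.
function lexLessThan(a, b) {
  const len = Math.min(a.length, b.length);
  for (let i = 0; i < len; i++) {
    const ca = a.charCodeAt(i);
    const cb = b.charCodeAt(i);
    if (ca !== cb) return ca < cb; // first differing code unit decides
  }
  // All compared characters matched: the shorter string (a prefix) is lesser.
  return a.length < b.length;
}

console.log(lexLessThan('11', '3'));   // true, same as '11' < '3'
console.log(lexLessThan('abc', 'abd')); // true
```

Note that string length only matters as a tiebreaker when one string is a prefix of the other.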
In the case of "11" and "3", the first characters are '1' and '3'. Since '1' has a lower code point than '3', "11" is less than "3" under lexicographical comparison. This explains the surprising result: the longer string is considered less than the shorter one because its first character has a lower code point, and the comparison never looks past that first difference.
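You can confirm the code points driving this result directly with `charCodeAt`:

```javascript
// The first characters decide the comparison: '1' (49) vs. '3' (51).
console.log('1'.charCodeAt(0)); // 49
console.log('3'.charCodeAt(0)); // 51
console.log('11' < '3');        // true: 49 < 51, so the second '1' is never examined
```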
Examples:
'11' < '3'   // true
'31' < '3'   // false
'31' < '32'  // true
'31' < '30'  // false
'abc' < 'aaa' // false
'abc' < 'abd' // true
To explicitly convert a string to a number, use the unary plus (+) operator:
+'11' < '3' // false
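Unary plus is not the only option. `Number()` and `parseInt()` are equivalent here, and note that if either operand is already a number, the relational operators coerce the other operand to a number automatically:

```javascript
// All of these force a numeric comparison, so 11 < 3 is false.
console.log(+'11' < '3');                          // false
console.log(Number('11') < Number('3'));           // false
console.log(parseInt('11', 10) < parseInt('3', 10)); // false

// Mixing types also triggers numeric coercion of the string operand.
console.log('11' < 3); // false: '11' is converted to the number 11
```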