I was learning about sockets recently and noticed that when sending a request, the destination IP address and port number must be converted to network byte order; otherwise they will be parsed incorrectly because of the little-endian/big-endian mismatch.
But why doesn't the payload data sent after the socket is created need the same byte-order treatment?
For example, suppose I send a string. As we all know, Unicode characters can occupy more than one byte, and with multiple bytes there should be the same ordering problem. Why can the receiving side still decode the data correctly?
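For concreteness, this is roughly the setup I mean (a minimal sketch; the address and port are just placeholders):

```c
#include <arpa/inet.h>   /* htons, inet_addr */
#include <netinet/in.h>  /* struct sockaddr_in, AF_INET */
#include <string.h>      /* memset */

int main(void) {
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);                   /* port converted to network byte order */
    addr.sin_addr.s_addr = inet_addr("127.0.0.1"); /* already returned in network byte order */
    /* ... connect()/sendto() would use addr from here ... */
    return 0;
}
```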
为情所困 2017-05-31 10:39:36
A socket only deals in bytes: the bytes you write are delivered to the other end in exactly the same order you wrote them.
Byte order is already part of the Unicode encoding itself. For example, if one end writes UTF-16LE, the other end must decode the bytes as UTF-16LE. (Special case: UTF-8's code unit is a single byte, so it has no byte-order problem.)
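A quick way to see this (my own minimal sketch, not part of the answer above): the same character has its two bytes in opposite orders under UTF-16LE and UTF-16BE, so both ends must agree on which encoding is in use:

```c
#include <stdio.h>

int main(void) {
    /* U+4F60 ("你") encoded by hand in both UTF-16 byte orders */
    unsigned char le[] = {0x60, 0x4F}; /* UTF-16LE: low byte first  */
    unsigned char be[] = {0x4F, 0x60}; /* UTF-16BE: high byte first */
    printf("UTF-16LE bytes: %02X %02X\n", le[0], le[1]);
    printf("UTF-16BE bytes: %02X %02X\n", be[0], be[1]);
    /* decoding with the wrong order yields U+604F, a different character */
    return 0;
}
```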
If you send binary data over a socket yourself, you do have to consider byte order; that decision is normally part of the serialization protocol.
大家讲道理 2017-05-31 10:39:36
Byte order applies to multi-byte integers. The port number is a 16-bit integer, so it has an endianness problem. A single byte has no internal order, so a plain byte stream needs no conversion.
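A small check of this point (a sketch of mine, assuming a little-endian host): dump the bytes of the 16-bit port before and after htons; the two bytes swap, while any individual byte would be untouched:

```c
#include <stdio.h>
#include <arpa/inet.h>  /* htons */

int main(void) {
    unsigned short port = 8080;  /* 0x1F90 */
    unsigned short net  = htons(port);
    unsigned char *h = (unsigned char *)&port;
    unsigned char *n = (unsigned char *)&net;
    /* on a little-endian host this prints "90 1F", then "1F 90" */
    printf("host order:    %02X %02X\n", h[0], h[1]);
    printf("network order: %02X %02X\n", n[0], n[1]);
    return 0;
}
```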
滿天的星座 2017-05-31 10:39:36
Because the lower-level network protocols such as TCP/UDP stipulate network byte order in their headers. Once the socket is established, the data you transmit follows a protocol of your own design, so you can use whatever byte order you like, as long as the sender and the receiver agree on the same one.
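As a sketch of that idea (the function names are mine, not from the answer): an application protocol can fix, say, little-endian for a 32-bit field, and both ends encode and decode it byte by byte so the host's native endianness never matters:

```c
#include <stdint.h>
#include <stdio.h>

/* encode a 32-bit value as little-endian, independent of host byte order */
static void put_u32_le(uint8_t *buf, uint32_t v) {
    buf[0] = (uint8_t)(v);
    buf[1] = (uint8_t)(v >> 8);
    buf[2] = (uint8_t)(v >> 16);
    buf[3] = (uint8_t)(v >> 24);
}

/* decode it the same way on the receiving end */
static uint32_t get_u32_le(const uint8_t *buf) {
    return (uint32_t)buf[0]
         | ((uint32_t)buf[1] << 8)
         | ((uint32_t)buf[2] << 16)
         | ((uint32_t)buf[3] << 24);
}

int main(void) {
    uint8_t wire[4];
    put_u32_le(wire, 123456789);       /* sender side */
    printf("%u\n", get_u32_le(wire));  /* receiver side: prints 123456789 */
    return 0;
}
```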