TCP and UDP are both "transport layer" protocols, and they are the two important protocols of the computer transport layer. TCP (Transmission Control Protocol), defined in IETF RFC 793, is a connection-oriented, reliable, byte-stream-based transport layer communication protocol. UDP (User Datagram Protocol) is connectionless: it gives applications a way to send encapsulated IP datagrams without first establishing a connection.
The operating environment of this tutorial: Windows 7 system, Dell G3 computer.
Computer network architecture refers to the layered structure model of a computer network: the collection of protocols at each layer together with the interfaces between layers. Communication in a computer network must rely on network communication protocols. The widely used Open System Interconnection (OSI) reference model, proposed by the International Organization for Standardization (ISO) and published as an international standard in 1984, is customarily called the ISO/OSI reference model.
OSI seven-layer reference model:
OSI logically divides a network system into seven ordered subsystems that are relatively independent in function. The OSI architecture therefore consists of seven functionally relatively independent layers, as shown in the figure below. From low to high, they are the physical layer, data link layer, network layer, transport layer, session layer, presentation layer, and application layer.
TCP/IP Reference Model
TCP/IP has four layers: the network interface layer, the internet layer, the transport layer, and the application layer. The comparison between the TCP/IP layers and the OSI layers is shown in the figure below.
TCP and UDP belong to the "transport layer" of the computer network architecture.
The Internet's transport layer has two main protocols, which complement each other. The connectionless one is UDP, which does little beyond giving applications the ability to send packets and letting them build their own protocols on top as needed. The connection-oriented one is TCP, which does almost everything.
The transport layer is one of the key layers in the whole network architecture. The network layer only delivers packets to the destination host, but the real parties to communication are not the hosts themselves but the processes running on them. The transport layer provides communication services to the application layer above it and is the highest of the communication-oriented layers. When two hosts at the edge of the network use the core of the network for end-to-end communication, only the protocol stacks of those hosts contain a transport layer. The transport layer provides logical communication between processes and hides the details of the underlying network from higher layers, so that to the application it looks as if there is an end-to-end logical communication channel between the two transport-layer entities.
Although IP can deliver packets to the destination host, those packets stop at the host's network layer and are not yet handed to an application process in the host. When two computers communicate, a process on one host is actually exchanging data with a process on the other host. From the transport layer's point of view, the real endpoints of communication are not hosts but the processes in the hosts; in other words, end-to-end communication is communication between application processes.
Two important functions of the transport layer:
The difference between the network layer and the transport layer: the network layer provides logical communication between hosts, while the transport layer provides end-to-end logical communication directly between application processes;
UDP and TCP are the important protocols of the computer transport layer. TCP is connection-oriented, while UDP is connectionless;
The transport layer in the TCP/IP stack uses a 16-bit port number to identify a port. A port number has only local meaning: it marks the interface through which each application-layer process on a computer interacts with the transport layer, and port numbers on different hosts are unrelated to each other. Therefore, for processes on two computers to communicate, each must know the other's IP address (to find the other computer) and port number (to find the process in that computer). Communication on the Internet uses the client-server model: when the client initiates a communication request, it must first know the IP address and port number of the server.
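As a rough illustration of the client-server point above, the sketch below opens a TCP socket (connection-oriented) and a UDP socket (connectionless) toward a hypothetical server. The address 192.0.2.10 and port 8080 are placeholder values, not anything from the article, and the TCP connect only succeeds if a server is actually listening there.

```python
# Minimal sketch: to reach a process on another machine you need both the
# server's IP address (to locate the host) and a port number (to locate the
# process on that host). The address and port below are placeholders.
import socket

SERVER = ("192.0.2.10", 8080)   # (IP address, port number) of a hypothetical server

# TCP: connection-oriented -- connect() performs the handshake before any data moves.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as tcp_sock:
    tcp_sock.connect(SERVER)            # fails unless a server is listening there
    tcp_sock.sendall(b"hello over TCP")

# UDP: connectionless -- sendto() just hands a datagram to IP, no handshake.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp_sock:
    udp_sock.sendto(b"hello over UDP", SERVER)
```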
The User Datagram Protocol only adds multiplexing/demultiplexing and simple error detection on top of IP's datagram service.
1. Characteristics of UDP
2. UDP message
The header is only 8 bytes, consisting of the source port, destination port, length, and checksum. A 12-byte pseudo header is added temporarily, only for the purpose of computing the checksum.
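To make the header layout concrete, here is a small sketch (not from the article) that packs the 8-byte UDP header and computes the checksum over the 12-byte IPv4 pseudo header. The addresses, ports, and helper names such as udp_datagram are invented for the example.

```python
# Sketch of the 8-byte UDP header and the checksum over the IPv4 pseudo header.
import socket
import struct

def ones_complement_sum(data: bytes) -> int:
    """16-bit one's-complement sum used by the UDP/TCP checksum."""
    if len(data) % 2:
        data += b"\x00"                              # pad to an even length
    total = sum(word for (word,) in struct.iter_unpack("!H", data))
    while total >> 16:                               # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return total

def udp_datagram(src_ip, dst_ip, src_port, dst_port, payload: bytes) -> bytes:
    length = 8 + len(payload)                        # 8-byte header + data
    # 12-byte pseudo header: src IP, dst IP, zero byte, protocol (17), UDP length
    pseudo = struct.pack("!4s4sBBH",
                         socket.inet_aton(src_ip), socket.inet_aton(dst_ip),
                         0, socket.IPPROTO_UDP, length)
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)   # checksum = 0 for now
    checksum = 0xFFFF & ~ones_complement_sum(pseudo + header + payload)
    return struct.pack("!HHHH", src_port, dst_port, length, checksum) + payload

print(udp_datagram("192.0.2.1", "192.0.2.2", 12345, 53, b"hello").hex())
```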
Transmission Control Protocol (TCP) is a connection-oriented, reliable, byte-stream-based transport layer communication protocol, defined in IETF RFC 793.
TCP is connection-oriented: a connection must be established before data is transmitted, and the connection is released after data transmission completes. TCP does not provide broadcast or multicast services.
1. TCP characteristics
2. TCP connection
The endpoints of a TCP connection are not two host numbers, nor two port numbers, but sockets:
socket = IP address : port number
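A quick way to see this socket pair in practice: after connect() succeeds, getsockname() and getpeername() return the two (IP address, port number) endpoints that together identify the connection. The peer address below is only an example, and the call requires a reachable server.

```python
# Sketch: a TCP connection is identified by the pair of sockets
# (local IP : local port, remote IP : remote port).
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect(("example.com", 80))           # example peer; any reachable server works
    local_socket = s.getsockname()           # (local IP, ephemeral local port)
    remote_socket = s.getpeername()          # (remote IP, remote port)
    print("connection =", local_socket, "<->", remote_socket)
```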
3. TCP segment header format
4. TCP three-way handshake
Assume A is the client and B is the server.
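The individual handshake steps are not listed above, so the following is a minimal sketch of the standard three-way handshake under that assumption; the initial sequence numbers x = 100 and y = 300 are invented, and no real packets are sent.

```python
# A toy state-machine sketch of the three-way handshake between client A and
# server B; state names follow the standard TCP state diagram.
def three_way_handshake():
    x = 100                                  # A's invented initial sequence number
    print(f"A -> B: SYN, seq={x}                 A: SYN-SENT")
    y = 300                                  # B's invented initial sequence number
    print(f"B -> A: SYN+ACK, seq={y}, ack={x+1}  B: SYN-RCVD")
    print(f"A -> B: ACK, seq={x+1}, ack={y+1}    A, B: ESTABLISHED")

three_way_handshake()
```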
5. TCP four-way wave (connection release)
The following description does not discuss sequence numbers and acknowledgment numbers, because their rules are relatively simple; ACK is not discussed either, because ACK is always 1 after the connection is established.
Why four waves are needed
After the client sends the FIN connection-release segment, the server receives it and enters the CLOSE-WAIT state. This state exists so the server can send any data it has not yet finished transferring; once that transfer is complete, the server sends its own FIN connection-release segment.
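A toy walk-through of those four waves, assuming the same A (client) and B (server); the state names follow the standard TCP state diagram, and sequence and acknowledgment numbers are omitted as in the text above.

```python
# A sketch of the four-wave connection release; no real packets are sent.
def four_way_release():
    print("A -> B: FIN        A: FIN-WAIT-1")
    print("B -> A: ACK        B: CLOSE-WAIT,  A: FIN-WAIT-2")
    print("   ... B finishes sending its remaining data ...")
    print("B -> A: FIN        B: LAST-ACK")
    print("A -> B: ACK        A: TIME-WAIT (waits 2MSL), then CLOSED;  B: CLOSED")

four_way_release()
```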
TIME_WAIT
The client enters this state after receiving the server's FIN segment. It does not move directly to CLOSED; it must first wait for the time 2MSL set by a timer. There are two reasons for this: first, to make sure the last ACK sent by the client can reach the server (if that ACK is lost, the server retransmits its FIN and the client can acknowledge it again); second, to let all segments belonging to this connection disappear from the network, so they cannot appear in a later connection.
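One practical consequence of TIME_WAIT is that a server restarted immediately may fail to bind its port. The common remedy is the SO_REUSEADDR socket option, sketched below with an arbitrary example port 8080.

```python
# Sketch: tolerate a port still occupied by a connection in TIME_WAIT.
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # allow rebinding during TIME_WAIT
srv.bind(("0.0.0.0", 8080))                                # example port
srv.listen()
print("listening on", srv.getsockname())
srv.close()
```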
TCP uses timeout retransmission to achieve reliable transmission: if no acknowledgment is received for a sent segment within the timeout period, that segment is retransmitted;
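TCP performs this retransmission internally; the sketch below imitates the idea in user space with a stop-and-wait sender built on UDP. The server address, timeout value, and retry limit are invented parameters.

```python
# Stop-and-wait sketch of timeout retransmission (imitated in user space).
import socket

SERVER = ("192.0.2.10", 9000)    # hypothetical echo server
TIMEOUT = 1.0                    # retransmission timeout in seconds
MAX_RETRIES = 5

def send_reliably(payload: bytes) -> bytes:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(TIMEOUT)
        for attempt in range(MAX_RETRIES):
            s.sendto(payload, SERVER)            # (re)transmit the segment
            try:
                reply, _ = s.recvfrom(2048)      # wait for the acknowledgment
                return reply
            except socket.timeout:
                print(f"timeout, retransmitting (attempt {attempt + 1})")
    raise RuntimeError("no acknowledgment after retransmissions")
```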
1. TCP sliding window
The window is part of the buffer and is used to temporarily hold the byte stream. The sender and the receiver each have a window. The receiver tells the sender its window size through the window field of the TCP segment, and the sender sets its own window size based on this value and other information.
All bytes within the send window may be sent, and all bytes within the receive window may be received. When the bytes on the left side of the send window have been sent and acknowledged, the send window slides right until its leftmost byte is one that has not yet been sent and acknowledged. The receive window slides similarly: when the bytes on its left side have been received, acknowledged, and delivered to the host, the receive window slides right.
The receive window acknowledges only the last in-order byte within the window. For example, if the bytes received in the receive window are {31, 34, 35}, then {31} has arrived in order while {34, 35} have not, so only byte 31 is acknowledged. When the sender receives an acknowledgment for a byte, it knows that all bytes before that byte have been received.
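A minimal sketch of the sliding behaviour described above, using the same example numbers: a cumulative acknowledgment for byte 31 arrives as ACK 32 and moves the send window right. The window size of 5 is arbitrary.

```python
# Toy send window: bytes inside it may be sent; it slides right only when
# the leftmost bytes have been acknowledged.
send_base = 31          # first unacknowledged byte
window_size = 5         # allowed to send bytes [send_base, send_base + window_size)

def on_ack(ack_no: int):
    """ack_no cumulatively acknowledges every byte before it."""
    global send_base
    if ack_no > send_base:
        send_base = ack_no                      # slide the window right
    print(f"window now covers bytes {send_base}..{send_base + window_size - 1}")

on_ack(32)   # byte 31 acknowledged -> window slides to 32..36
on_ack(32)   # duplicate ACK -> window does not move
```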
2. TCP flow control
Flow control in TCP is implemented through the sliding window. Generally, we want the sender to send data as fast as possible, but if it sends too fast the receiver cannot keep up, so the sender's rate must be controlled. TCP's flow control works mainly by setting the sliding window size: the window field in the acknowledgments sent by the receiver controls the sender's window size and therefore its sending rate. If the window field is set to 0, the sender cannot send data.
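A toy illustration of that rule: the sender transmits no more than the advertised window rwnd allows and stops entirely when rwnd is 0. The data sizes and window values are invented.

```python
# Toy flow control: never exceed the receiver's advertised window (rwnd).
def send_with_flow_control(data: bytes, rwnd: int) -> bytes:
    if rwnd == 0:
        print("rwnd = 0: sender must wait (only window probes may be sent)")
        return b""
    chunk = data[:rwnd]                 # cannot exceed the advertised window
    print(f"sending {len(chunk)} bytes (rwnd = {rwnd})")
    return chunk

send_with_flow_control(b"x" * 1000, rwnd=300)   # limited to 300 bytes
send_with_flow_control(b"x" * 1000, rwnd=0)     # blocked until the window reopens
```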
3. TCP Congestion Control
If the network becomes congested, packets are lost and the sender keeps retransmitting them, which makes the congestion even worse. Therefore, when congestion occurs, the sender's rate should be reduced. This looks much like flow control, but the starting point is different: flow control lets the receiver keep up with the sender, while congestion control reduces the level of congestion in the whole network.
Congestion control conditions:
Congestion control is different from flow control: congestion control prevents too much data from being injected into the network, so that the routers and links in the network are not overloaded. Congestion control is a global process involving all hosts, all routers, and every factor that can degrade network performance, whereas flow control is usually point-to-point control of traffic and is an end-to-end problem. Pay attention to the distinction;
The main algorithms for congestion control in TCP: slow start, congestion avoidance, fast retransmission, fast recovery
The sender needs to maintain a state variable called the congestion window (cwnd). Note the difference between the congestion window and the sender's window: the congestion window is only a state variable; what actually determines how much data the sender can send is the sender's window.
For the convenience of discussion, make the following assumptions:
Although the TCP window is measured in bytes, here the window size is measured in segments for simplicity.
1. Slow start and congestion avoidance
Sending begins with slow start: let cwnd = 1, so the sender can send only one segment at first; each round, after acknowledgments are received, cwnd is doubled, so the number of segments the sender can send grows as 2, 4, 8, ...
Note that slow start doubles cwnd every round, which makes cwnd grow very quickly; the sender's sending rate then rises too fast and network congestion becomes more likely.
Set a slow start threshold ssthresh. When cwnd >= ssthresh, congestion avoidance is entered, and cwnd is increased by only 1 in each round.
If a timeout occurs, set ssthresh = cwnd / 2, and then execute slow start again.
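Putting slow start, congestion avoidance, and the timeout rule together, the sketch below traces how cwnd (in segments, per the assumption above) might evolve round by round; the initial ssthresh of 16 and the timeout in round 9 are arbitrary choices.

```python
# Toy trace of cwnd: doubling per round in slow start, +1 per round in
# congestion avoidance, and a reset to cwnd = 1 with ssthresh halved on timeout.
cwnd, ssthresh = 1, 16

for rnd in range(1, 13):
    timeout = (rnd == 9)                       # pretend a timeout happens in round 9
    print(f"round {rnd:2d}: cwnd = {cwnd:2d}, ssthresh = {ssthresh}")
    if timeout:
        ssthresh = max(cwnd // 2, 1)           # ssthresh = cwnd / 2
        cwnd = 1                               # restart with slow start
    elif cwnd < ssthresh:
        cwnd *= 2                              # slow start: double each round
    else:
        cwnd += 1                              # congestion avoidance: +1 each round
```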
2. Fast retransmission and fast recovery
On the receiving side, each time a segment is received, the receiver should acknowledge the last in-order segment it has received so far. For example, if M1 and M2 have been received and then M4 arrives, an acknowledgment for M2 should be sent.
On the sending side, if three duplicate acknowledgments are received, the sender can conclude that the next segment has been lost; it then performs fast retransmission and retransmits that segment immediately. For example, if three acknowledgments for M2 are received, M3 is presumed lost and is retransmitted immediately.
In this case, only an individual segment was lost and the network is not congested, so fast recovery is performed: let ssthresh = cwnd / 2 and cwnd = ssthresh. Note that congestion avoidance is entered directly at this point.
The "slow" in slow start and the "fast" in fast recovery refer to the value cwnd is set to, not to its growth rate: slow start sets cwnd to 1, while fast recovery sets cwnd to ssthresh.
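A sketch of the duplicate-ACK logic described in this section: three duplicate acknowledgments trigger an immediate retransmission, ssthresh is halved, and cwnd is set to ssthresh so transmission continues in congestion avoidance. The ACK numbers and window values are invented.

```python
# Toy fast retransmission / fast recovery driven by duplicate ACK counting.
cwnd, ssthresh = 12, 16
dup_acks = 0
last_ack = None

def on_ack(ack_no: int):
    global cwnd, ssthresh, dup_acks, last_ack
    if ack_no == last_ack:
        dup_acks += 1
        if dup_acks == 3:                      # three duplicate ACKs received
            print(f"fast retransmit of the segment starting at byte {ack_no}")
            ssthresh = max(cwnd // 2, 1)       # ssthresh = cwnd / 2
            cwnd = ssthresh                    # fast recovery: skip slow start
    else:
        last_ack, dup_acks = ack_no, 0

for ack in (100, 200, 200, 200, 200):          # three duplicates of ACK 200
    on_ack(ack)
print("cwnd =", cwnd, "ssthresh =", ssthresh)
```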
Reference: Computer Networks (5th Edition), edited by Xie Xiren