A network switch is a device that expands a network by providing additional connection ports within a sub-network, allowing more computers to be connected. Switches are cost-effective, flexible, relatively simple, and easy to deploy.
When a switch interface receives more traffic than it can forward, the switch must either buffer the excess or drop it. Buffering is typically needed because of mismatched interface speeds, sudden traffic bursts, or many-to-one traffic patterns.
The most common cause of switch buffering is a sudden many-to-one traffic pattern. For example, consider an application built on a cluster of server nodes. If one node requests data from all the other nodes at the same time, the replies all arrive at the requesting node's switch port simultaneously, flooding that port. If the switch does not have enough egress buffer space, it must drop some of the traffic or add latency to the application. Sufficient buffering lets the switch absorb the burst, avoiding the packet loss and retransmission delays that lower-level protocols would otherwise introduce.
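The many-to-one scenario above can be sketched with a toy calculation. This is an illustrative model only, not how any real switch accounts for memory; the function name and all figures are hypothetical.

```python
# Toy model of many-to-one (incast) pressure on one egress port:
# replies from all senders arrive in the same instant, the egress
# buffer absorbs what it can, and the rest is tail-dropped.

def incast_drops(num_senders, reply_bytes, buffer_bytes):
    """Return how many bytes are dropped when every sender's reply
    hits the same egress port at once."""
    arriving = num_senders * reply_bytes
    buffered = min(arriving, buffer_bytes)
    return arriving - buffered

# 32 nodes each send a 64 KiB reply toward a port with a 1 MiB buffer:
# 2 MiB arrives, 1 MiB fits, 1 MiB is dropped.
print(incast_drops(32, 64 * 1024, 1024 * 1024))  # → 1048576

# A burst that fits in the buffer is not dropped at all:
print(incast_drops(2, 1000, 5000))  # → 0
```

The point of the sketch is that the damage scales with the number of simultaneous senders, which is why cluster workloads are the classic trigger for egress buffer exhaustion.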
Most modern data center switching platforms address this with a shared switching buffer: in addition to buffer space reserved for specific ports, ports can draw on a common pool as needed. How this shared buffer is carved up varies widely between vendors and platforms.
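A minimal sketch of one such scheme is below: each port keeps a small reserved allocation, and a queue that exceeds its reservation spills into the shared pool until the pool is exhausted. The class, numbers, and drop policy are all hypothetical; as noted above, real platforms differ widely in how they partition and police the pool.

```python
# Sketch of a shared-buffer admission policy: reserved-per-port
# space first, then a common shared pool, then tail drop.
# Dequeue/drain is deliberately omitted to keep the idea visible.

class SharedBuffer:
    def __init__(self, total, reserved_per_port, ports):
        self.reserved = {p: reserved_per_port for p in ports}
        self.used = {p: 0 for p in ports}
        # Whatever is not reserved becomes the shared pool.
        self.shared_pool = total - reserved_per_port * len(ports)

    def enqueue(self, port, size):
        """Accept a packet if it fits in the port's reservation or
        the shared pool; otherwise drop it (return False)."""
        headroom = self.reserved[port] - self.used[port]
        if size <= headroom:
            self.used[port] += size
            return True
        extra = size - max(headroom, 0)
        if extra <= self.shared_pool:
            self.shared_pool -= extra
            self.used[port] += size
            return True
        return False  # buffer exhausted: tail drop

buf = SharedBuffer(total=12_000, reserved_per_port=1_000, ports=range(4))
print(buf.enqueue(0, 3_000))   # burst spills into the shared pool → True
print(buf.enqueue(2, 10_000))  # larger than pool + reservation → False
```

The design choice this illustrates: a small guaranteed reservation prevents one busy port from starving the others, while the shared pool lets any single port absorb a burst far larger than a static per-port split would allow.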
Some vendors sell switches designed for specific environments. For example, deep-buffer switches suit Hadoop environments with heavy many-to-one transfers, while environments that can distribute traffic evenly may not need large buffers at the switch level.
Switch buffers are important, but there is no single right answer to how much buffer space is needed. A huge buffer means the network drops no traffic, but it also means added latency: data sitting in the buffer must wait before it is forwarded. Some network administrators prefer smaller buffers and let the application or transport protocol back off in response to loss. The right approach is to understand your application's traffic patterns and choose a switch that fits those needs.
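The latency cost of a large buffer can be put in rough numbers: data already sitting in the buffer must drain at line rate before newly arriving packets are forwarded. The figures below are illustrative, not taken from any particular switch.

```python
# Back-of-the-envelope queuing delay added by a full egress buffer:
# worst-case wait = buffered bits / link rate.

def drain_delay_ms(buffer_bytes, link_gbps):
    """Milliseconds needed to drain a full buffer at line rate."""
    bits = buffer_bytes * 8
    return bits / (link_gbps * 1e9) * 1e3

# A 12 MB buffer on a 10 Gb/s port can add up to ~9.6 ms of delay
# for a packet that arrives when the buffer is full:
print(round(drain_delay_ms(12_000_000, 10), 2))
```

Milliseconds of added delay are negligible for a bulk transfer but significant for latency-sensitive traffic, which is the tradeoff the paragraph above describes.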
Post time: Mar-24-2022