When a network switch interface receives more traffic than it can process, it either buffers or drops the traffic.
Buffering is generally caused by interface speed differences, traffic bursts and many-to-one traffic patterns.
The most common cause of switch buffering is some variation of the many-to-one traffic pattern. For example, consider an application clustered across many server nodes. If one node requests data from all the other nodes simultaneously, all of the replies arrive at roughly the same time, and the traffic floods the egress switch port facing the requestor. If the switch doesn't have sufficient egress buffers, it drops some of that traffic, which adds application latency. Sufficient network buffers prevent the excessive delay caused by higher-layer protocols, such as TCP, working out which traffic was dropped and retransmitting it.
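The many-to-one (incast) scenario above can be sketched with a toy model: a burst of simultaneous replies lands on one egress port with a fixed buffer, and anything that overflows is dropped. The node counts, reply sizes and buffer sizes here are illustrative assumptions, not real switch specifications.

```python
# Toy model of a many-to-one (incast) burst: num_nodes servers reply at once
# to a single requestor behind one egress port with a fixed buffer.

def incast_drops(num_nodes, reply_bytes, buffer_bytes, drain_bytes_per_tick, ticks):
    """Return bytes dropped when num_nodes replies arrive simultaneously."""
    queued = 0
    dropped = 0
    arriving = [reply_bytes * num_nodes] + [0] * (ticks - 1)  # one-shot burst
    for tick in range(ticks):
        queued += arriving[tick]
        if queued > buffer_bytes:
            dropped += queued - buffer_bytes  # buffer overflows; excess is tail-dropped
            queued = buffer_bytes
        queued = max(0, queued - drain_bytes_per_tick)  # egress port drains the queue
    return dropped

# 32 nodes each send a 64 KB reply into a 1 MB egress buffer:
# the 2 MB burst overflows the buffer and half of it is dropped.
print(incast_drops(32, 64 * 1024, 1024 * 1024, 128 * 1024, 16))  # → 1048576

# The same replies from only 4 nodes fit in the buffer, so nothing drops.
print(incast_drops(4, 64 * 1024, 1024 * 1024, 128 * 1024, 16))   # → 0
```

The point the model makes is the same as the article's: the egress link rate is fixed, so the only knob that decides drop-or-no-drop during a burst is buffer depth.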
Most modern data center switching platforms use shared switch buffers. The switch has a pool of buffer space it can allocate to specific ports during times of congestion. The amount of shared switch buffer varies widely among vendors and platforms.
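A shared buffer scheme can be sketched as a pool that congested ports borrow from, usually with a per-port cap so one hot port can't starve the rest. This is a minimal illustration of the idea, not any vendor's actual allocation algorithm; the pool and cap sizes are made up.

```python
# Minimal sketch of a shared buffer pool: congested ports borrow from a
# common pool instead of each port owning a fixed private slice.

class SharedBufferPool:
    def __init__(self, total_bytes, per_port_cap):
        self.free = total_bytes
        self.per_port_cap = per_port_cap  # limits how much one port can hog
        self.used = {}                    # port -> bytes currently held

    def enqueue(self, port, nbytes):
        """Admit nbytes to a port's queue if the pool and per-port cap allow."""
        held = self.used.get(port, 0)
        if nbytes <= self.free and held + nbytes <= self.per_port_cap:
            self.free -= nbytes
            self.used[port] = held + nbytes
            return True
        return False                      # no room: the packet is dropped

    def dequeue(self, port, nbytes):
        """A port transmitted nbytes; return that space to the shared pool."""
        taken = min(nbytes, self.used.get(port, 0))
        self.used[port] -= taken
        self.free += taken

# 12 MB pool, 4 MB cap per port: a single congested port can claim far more
# than a fixed per-port carve-up would give it, but not the whole pool.
pool = SharedBufferPool(12 * 1024 * 1024, 4 * 1024 * 1024)
print(pool.enqueue("eth1", 3 * 1024 * 1024))  # → True
print(pool.enqueue("eth1", 2 * 1024 * 1024))  # → False (would exceed the cap)
```

The design choice the sketch highlights is the trade-off vendors tune: a bigger per-port cap absorbs deeper incast bursts on one port, at the cost of less headroom left for everyone else.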
Some vendors sell network switches tailored for certain environments. For example, switches with larger buffers handle the typical many-to-one traffic scenarios of a Hadoop environment. An environment with even traffic distribution doesn't need switch buffers at that level.
Network buffering is important, but there's no single right answer to how much of it we need. Huge buffers mean the network never drops any traffic, but they also increase latency -- we're storing the traffic before it's forwarded. Some network managers prefer smaller buffers and let the application or protocol deal with some dropped traffic. The right answer is to understand your application traffic patterns and pick a switch that fits those needs.
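The latency cost of big buffers is easy to put a rough number on: a packet that lands behind a fully queued buffer of B bytes on a port running at R bits per second waits about B × 8 / R seconds before it's forwarded. The buffer and link figures below are illustrative, not tied to any particular switch.

```python
# Back-of-the-envelope worst-case queuing delay added by a full buffer.

def max_queuing_delay_ms(buffer_bytes, link_bps):
    """Milliseconds a packet waits behind a completely full egress buffer."""
    return buffer_bytes * 8 / link_bps * 1000

# A 12 MB deep buffer on a 10 Gb Ethernet port adds roughly 10 ms
# of queuing delay when it's full -- enormous by data center standards.
print(round(max_queuing_delay_ms(12 * 1024 * 1024, 10e9), 2))  # → 10.07
```

That arithmetic is why some network managers deliberately run smaller buffers: a brief drop that TCP recovers from can cost less than milliseconds of queuing on every packet during congestion.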
About the author:
Jon Langemak, CCNP/IP, is a network engineer at a Minnesota-based corporation. He works primarily on Cisco network solutions and enjoys dabbling in other fields. He runs the blog Das Blinken Lichten to document new technologies and testing concepts.