Friday 11 December 2020

Network congestion occurs when network devices (including host interfaces) run out of buffer space and must drop excess packets. The intuitive response is to add more buffering, but excessive buffering interferes with congestion control algorithms, to the point that the problem has its own name: bufferbloat.
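
To see how much latency an oversized buffer can add before any packet is ever dropped, here is a back-of-the-envelope sketch. The buffer depth, packet size, and link rate are illustrative assumptions, not measurements from any particular device:

```python
# Rough illustration of bufferbloat: the delay added by a deep, full buffer.
# Buffer depth, packet size, and link rate are assumed example values.

BUFFER_PACKETS = 1000          # assumed buffer depth, in packets
PACKET_BYTES = 1500            # MTU-sized Ethernet packet
LINK_RATE_BPS = 10_000_000     # assumed 10 Mb/s bottleneck link

# Time to drain a full standing queue = queue size in bits / link rate.
queue_bits = BUFFER_PACKETS * PACKET_BYTES * 8
drain_seconds = queue_bits / LINK_RATE_BPS

print(f"Added queuing delay: {drain_seconds * 1000:.0f} ms")
# -> 1200 ms of extra latency before a single drop occurs, so loss-based
#    congestion control gets its "slow down" signal more than a second late.
```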

Interface drops (sometimes called discards) aren’t necessarily a bad thing. Congestion can occur at aggregation points or where link speeds change. It becomes a problem when it occurs too frequently and the resulting packet loss makes applications slow. Quality of service (QoS) is used in these cases to prioritize crucial, time-sensitive traffic flows and force the drops onto less important packets. We have successfully used computer science and engineering to prioritize business applications over less important entertainment traffic (streaming audio).
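
As a rough illustration of that idea, the toy sketch below models a single congested egress port with a tiny fixed buffer and strict-priority treatment of two traffic classes. The class names, buffer size, and traffic mix are made-up assumptions, not a real QoS configuration:

```python
from collections import deque

# Minimal sketch of strict-priority queuing on a congested egress port.
# Class names, buffer size, and traffic mix are illustrative assumptions.

BUFFER_LIMIT = 8                      # total packets the port can hold
high_q, low_q = deque(), deque()      # business-critical vs. entertainment

def enqueue(packet, priority):
    """Admit a packet; when the buffer is full, low priority is dropped first."""
    if len(high_q) + len(low_q) >= BUFFER_LIMIT:
        if priority == "high" and low_q:
            low_q.pop()               # push out an entertainment packet
        else:
            return False              # tail-drop the arriving packet
    (high_q if priority == "high" else low_q).appendleft(packet)
    return True

# Offer more traffic than the port can buffer: half business, half streaming.
arrivals = [("voip", "high") if i % 2 else ("stream", "low") for i in range(16)]
drops = [pkt for pkt, prio in arrivals if not enqueue(pkt, prio)]

print(f"queued high: {len(high_q)}, queued low: {len(low_q)}, dropped: {len(drops)}")
# All of the drops land on the low-priority streaming traffic.
```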

So, you want to configure your network management platform to alert you to potential sources of packet loss that impact application performance. What’s a reasonable figure to use for an alerting and reporting threshold? You might think that one percent would suffice, based on intuition developed in other disciplines, like finance. However, that intuition is flawed when applied to networking.
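
One common way to see why one percent is far too lenient is the Mathis et al. estimate of steady-state TCP throughput, roughly C · MSS / (RTT · √p), where p is the loss rate. The sketch below evaluates it for a few loss rates; the MSS, RTT, and constant are assumed example values, and this is a simplified model rather than a figure taken from this post:

```python
import math

# Hedged sketch: Mathis et al. estimate of steady-state TCP throughput,
#   throughput ~ C * MSS / (RTT * sqrt(p))
# MSS, RTT, and C below are assumed example values, not figures from the post.

MSS_BYTES = 1460      # assumed TCP maximum segment size
RTT_SEC = 0.050       # assumed 50 ms round-trip time
C = 1.22              # model constant, commonly quoted as sqrt(3/2)

def mathis_throughput_bps(loss_rate):
    """Estimated single-flow TCP throughput (bits/s) at a given loss rate."""
    return C * MSS_BYTES * 8 / (RTT_SEC * math.sqrt(loss_rate))

for p in (0.0001, 0.001, 0.01):
    print(f"loss {p:>7.4%}: ~{mathis_throughput_bps(p) / 1e6:6.1f} Mb/s")

# loss 0.0100%: ~  28.5 Mb/s
# loss 0.1000%: ~   9.0 Mb/s
# loss 1.0000%: ~   2.8 Mb/s
```

Under these assumptions, even one percent loss caps a single flow at a few megabits per second on a 50 ms path, which suggests that useful alerting thresholds sit well below one percent.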


