Network performance measurement is the set of processes and tools used to quantitatively and qualitatively assess network performance and to provide actionable data for remediating performance issues.
The demands on networks increase every day, and the need for proper network performance measurement has never been greater. Effective network performance translates into improved user satisfaction, whether for internal employees or for customer-facing components such as an e-commerce website, which makes the business rationale for performance testing and monitoring self-evident.
When delivering services and applications to users, bandwidth issues, network downtime, and bottlenecks can quickly escalate into IT crisis mode. Proactive network performance management solutions that detect and diagnose performance issues are the best way to guarantee ongoing user satisfaction.
The performance of a network can never be fully modeled, so measuring network performance before, during, and after updates, and monitoring it on an ongoing basis, is the only reliable way to ensure network quality. While measuring and monitoring network performance parameters is essential, the interpretation of those metrics and the actions taken in response are equally important.
Network Performance Measurement Parameters
To ensure optimized network performance, the most important metrics should be selected for measurement. Many of the parameters included in a comprehensive network performance management system focus on data speed and data quality. Both of these broad categories can significantly impact end-user experience and are influenced by several factors.
Latency
In network performance measurement, latency is simply the amount of time it takes for data to travel from one defined location to another. This parameter is sometimes referred to as delay. Ideally, the latency of a network is as close to zero as possible. The absolute limit on latency is set by the speed of light, but packet queuing in switched networks and the refractive index of fiber-optic cabling are examples of variables that increase it.
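As an illustration, the short Python sketch below estimates round-trip latency by timing how long a TCP connection takes to establish. The host and port are placeholders, and this is only a rough approximation; dedicated probes such as ICMP ping or one-way delay tests give more precise results.

```python
import socket
import time

def estimate_latency_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Approximate round-trip latency by timing TCP connection setup."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        # Connection setup takes roughly one round trip (SYN -> SYN/ACK).
        with socket.create_connection((host, port), timeout=5):
            pass
        timings.append((time.perf_counter() - start) * 1000)
    return sum(timings) / len(timings)

# 'example.com' is a placeholder target.
print(f"Average latency: {estimate_latency_ms('example.com'):.1f} ms")
```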
Packet Loss
With regard to network performance measurement, packet loss refers to the number of packets sent from a source to a destination that fail to arrive. This metric can be quantified by capturing traffic data at both ends, then identifying missing packets and/or retransmissions. Packet loss can be caused by network congestion, router performance, and software issues, among other factors.
| Factors Affecting Packet Loss | Effects on Users |
| --- | --- |
| Network congestion | Voice and streaming interruptions |
| Router performance | Incomplete file transmissions |
| Software issues | Increased retransmission volume |
To minimize the impact of packet loss and other network performance problems, it is important to develop and use tools and processes that quickly identify and alleviate the true source of a problem. By analyzing response times to end-user requests, the system or component at the root of the issue can be identified. Packet capture and analytics tools can be used to review response times for TCP connections, which in turn can pinpoint which applications are contributing to a bottleneck.
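As a minimal sketch of the "count what was sent versus what arrived" idea, the Python snippet below sends sequence-numbered UDP probes and reports how many never came back. It assumes an echo responder is listening at the target address, which stands in for the two-sided capture described above.

```python
import socket

def packet_loss_percent(host: str, port: int, probes: int = 100,
                        timeout: float = 1.0) -> float:
    """Send sequence-numbered UDP probes and report the share with no reply.

    Assumes an echo service is listening on (host, port); production tools
    instead capture traffic on both ends and compare sent vs. received packets.
    """
    lost = 0
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        for seq in range(probes):
            payload = seq.to_bytes(4, "big")
            sock.sendto(payload, (host, port))
            try:
                sock.recvfrom(1024)
            except socket.timeout:
                lost += 1  # no echo within the timeout: count the probe as lost
    return 100.0 * lost / probes
```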
Throughput and Bandwidth
- Throughput: The amount of data, or number of data packets, actually delivered in a pre-defined time frame (a simple measurement sketch follows the table below).
- Bandwidth: The maximum capacity of a link, measured in bits per second, characterizing how much data could be transferred over a given period.
| Metric | Definition |
| --- | --- |
| Throughput | Amount of data or number of data packets actually delivered in a time frame |
| Bandwidth | Maximum capacity for data transfer over a period, in bits per second |
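For example, observed throughput can be estimated by timing a real transfer, as in the Python sketch below. The URL is a placeholder for any reasonably large test file, and the result reflects delivered data rather than the link's nominal bandwidth.

```python
import time
import urllib.request

def observed_throughput_mbps(url: str, chunk_size: int = 65536) -> float:
    """Download the resource at url and return delivered throughput in Mbps."""
    received = 0
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        while True:
            chunk = response.read(chunk_size)
            if not chunk:
                break
            received += len(chunk)
    elapsed = time.perf_counter() - start
    return (received * 8) / elapsed / 1e6  # bits delivered per second, in Mbps

# The URL below is a placeholder; point it at any large test file.
print(f"{observed_throughput_mbps('https://example.com/testfile.bin'):.2f} Mbps")
```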
Jitter
Jitter is defined as the variation in time delay between data packets sent over a network, i.e., a disruption in the normal spacing and sequencing of packets. Jitter is related to latency in that it manifests as increased or uneven latency between data packets, which can disrupt network performance and lead to packet loss and congestion.
Although some level of jitter is to be expected and can usually be tolerated, quantifying network jitter is an important aspect of comprehensive network performance measurement.
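One simple way to quantify jitter is to average the differences between consecutive latency samples, as sketched below. RTP (RFC 3550) uses a smoothed running estimate of interarrival jitter instead, but the underlying idea of measuring variation in delay is the same.

```python
def mean_jitter_ms(latencies_ms: list) -> float:
    """Average absolute difference between consecutive latency samples, in ms."""
    if len(latencies_ms) < 2:
        return 0.0
    diffs = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
    return sum(diffs) / len(diffs)

# Uneven latency samples (ms) produce a noticeable jitter value.
print(mean_jitter_ms([20.1, 22.4, 19.8, 35.0, 21.2]))  # roughly 8.5 ms
```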
Latency vs. Throughput
Just as throughput and bandwidth are sometimes confused, the same confusion is common between latency and throughput. Although these parameters are closely related, it is important to understand the difference between the two.
- Latency: A measurement of delay, i.e., how long data takes to travel between two points.
- Throughput: A measurement of actual system performance, quantified as the amount of data transferred over a given time.
Bandwidth Delay Product
Bandwidth delay product is a measurement of how many bits can fill a network link. It gives the maximum amount of data that the sender can transmit before it must wait for an acknowledgment.
Measurement
Bandwidth delay product is calculated as the product of the link capacity of the channel and the round-trip delay time of transmission.
| Parameter | Definition |
| --- | --- |
| Link capacity | Number of bits transmitted per second |
| Round-trip delay time | Sum of the time for signal transmission and acknowledgment |
| Bandwidth delay product unit | Bits or bytes |
Example:
Consider that the link capacity of a channel is 512 Kbps and round-trip delay time is 1000ms.
- Bandwidth delay product = 512 × 10³ bits/sec × (1000 ÷ 10³) sec
- = 512,000 bits
- = 64,000 bytes
- = 62.5 KB
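The same arithmetic can be expressed in a couple of lines of Python, shown below as a quick check of the worked example above.

```python
def bandwidth_delay_product_bits(link_capacity_bps: float, rtt_seconds: float) -> float:
    """Bandwidth delay product = link capacity x round-trip delay time."""
    return link_capacity_bps * rtt_seconds

bdp = bandwidth_delay_product_bits(512e3, 1.0)  # 512 Kbps link, 1000 ms RTT
print(bdp)             # 512000.0 bits
print(bdp / 8)         # 64000.0 bytes
print(bdp / 8 / 1024)  # 62.5 KB
```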
Long Fat Networks
A long fat network (LFN) is a network whose bandwidth-delay product is significantly greater than 10^5 bits (roughly 12.5 KB), the threshold used in RFC 1072.
| Type of Network | Example |
| --- | --- |
| Local Area Networks (LANs) | Ultra-high-speed LANs |
| Wide Area Networks (WANs) | WANs over geostationary satellite connections |
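Continuing the sketch above, the LFN threshold can be checked directly from the bandwidth-delay product; the 10^5-bit cutoff follows the definition given earlier, and the example figures are illustrative only.

```python
def is_long_fat_network(link_capacity_bps: float, rtt_seconds: float) -> bool:
    """An LFN has a bandwidth-delay product well above 10^5 bits."""
    return link_capacity_bps * rtt_seconds > 1e5

print(is_long_fat_network(512e3, 1.0))     # True: the 512 Kbps / 1000 ms example qualifies
print(is_long_fat_network(100e6, 0.0005))  # False: 100 Mbps with a 0.5 ms RTT falls below the threshold
```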