Computer Networking

Network Performance Measurement

Network performance measurement is the overall set of processes and tools used to quantitatively and qualitatively assess network performance and to provide actionable data for remediating performance issues.

The demands on networks are increasing every day, and the need for proper network performance measurement is more important than ever. Effective network performance translates into improved user satisfaction, whether through internal employee efficiency or customer-facing components such as an e-commerce website, making the business rationale for performance testing and monitoring self-evident.

When delivering services and applications to users, bandwidth issues, network downtime, and bottlenecks can quickly escalate into IT crisis mode. Proactive network performance management solutions that detect and diagnose performance issues are the best way to guarantee ongoing user satisfaction.

The performance of a network can never be fully modeled, so measuring network performance before, during, and after updates, and monitoring it on an ongoing basis, is the only valid way to fully ensure network quality. While measuring and monitoring network performance parameters is essential, the interpretation of these metrics and the actions they drive are equally important.

Network Performance Measurement Parameters

To ensure optimized network performance, the most important metrics should be selected for measurement. Many of the parameters included in a comprehensive network performance management system focus on data speed and data quality. Both of these broad categories can significantly impact end-user experience and are influenced by several factors.

Latency

In network performance measurement, latency is simply the amount of time it takes for data to travel from one defined location to another. This parameter is sometimes referred to as delay. Ideally, the latency of a network is as close to zero as possible. The absolute lower bound on latency is set by the speed of light, but packet queuing in switched networks and the refractive index of fiber-optic cabling are examples of variables that can increase it.
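One rough way to sample latency from an application's point of view is to time a TCP handshake with a remote host. The sketch below is a minimal illustration using only Python's standard library; the helper name `measure_latency` is our own, not part of any tool mentioned here:

```python
import socket
import time

def measure_latency(host: str, port: int = 443) -> float:
    """Approximate round-trip latency, in milliseconds, as the time
    taken to complete a TCP handshake with the given host and port."""
    start = time.perf_counter()
    # create_connection blocks until the three-way handshake completes
    with socket.create_connection((host, port), timeout=5):
        pass
    return (time.perf_counter() - start) * 1000
```

Note that this measures connection-setup delay only; dedicated tools report per-packet latency distributions rather than a single sample.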

Packet Loss

In network performance measurement, packet loss refers to the number of packets transmitted from one point to another that fail to arrive. This metric can be quantified by capturing traffic data at both ends, then identifying missing packets and/or retransmissions. Packet loss can be caused by network congestion, router performance, and software issues, among other factors.

Users experience the end effects as voice and streaming interruptions or incomplete file transfers. Since retransmission is a method network protocols use to compensate for packet loss, the congestion that initially led to the loss can sometimes be exacerbated by the additional traffic that retransmission generates.
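Quantifying loss from two-ended capture data, as described above, reduces to comparing counts of packets sent and packets received. A minimal sketch (the function name and the example figures are illustrative, not from any particular tool):

```python
def packet_loss_rate(sent: int, received: int) -> float:
    """Packet loss as a percentage of packets sent, given counts
    captured at the sending and receiving ends."""
    if sent == 0:
        return 0.0
    return (sent - received) / sent * 100

# e.g. 1000 packets captured at the sender, 987 at the receiver:
# packet_loss_rate(1000, 987) -> 1.3 (percent)
```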

To minimize the impact of packet loss and other network performance problems, it is important to develop and utilize tools and processes that identify and alleviate the true source of problems quickly. By analyzing response time to end-user requests, the system or component that is at the root of the issue can be identified. Data packet capture analytics tools can be used to review response time for TCP connections, which in turn can pinpoint which applications are contributing to the bottleneck.

Transmission Control Protocol (TCP) is a standard for network conversation through which applications exchange data; it works in conjunction with the Internet Protocol (IP) to define how packets of data are sent from one computer to another. The successive steps in a TCP session correspond to time intervals that can be analyzed to detect excessive latency in connection setup or round-trip times.

Throughput and Bandwidth

Throughput is a metric often associated with the manufacturing industry and is most commonly defined as the amount of material or items passing through a particular system or process. A common question in the manufacturing industry is how many of product X were produced today, and did this number meet expectations? For network performance measurement, throughput is defined in terms of the amount of data or number of data packets that can be delivered in a pre-defined time frame.

Bandwidth, usually measured in bits per second, is a characterization of the amount of data that can be transferred over a given period. Bandwidth is therefore a measure of capacity rather than speed. For example, a bus may be capable of carrying 100 passengers (bandwidth), but the bus may only transport 85 passengers (throughput).
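The bus analogy can be made concrete: throughput is what was actually transferred over the measured interval, independent of the link's rated capacity. A small sketch (function name and figures are illustrative):

```python
def throughput_bps(bytes_transferred: int, seconds: float) -> float:
    """Actual throughput in bits per second over a measured interval."""
    return bytes_transferred * 8 / seconds

# A link with 100 Mbps of bandwidth (capacity) might still only
# achieve 85 Mbps of throughput over a 10-second transfer:
# throughput_bps(106_250_000, 10.0) -> 85_000_000.0
```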

Jitter

Jitter is defined as the variation in time delay for the data packets sent over a network. This variable represents an identified disruption in the normal sequencing of data packets. Jitter is related to latency, since jitter manifests itself as increased or uneven latency between data packets, which can disrupt network performance and lead to packet loss and network congestion.
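One common way to quantify jitter from a series of latency samples is the mean absolute difference between consecutive measurements (standards such as RFC 3550 use a smoothed variant, but the idea is the same). A minimal sketch, with an illustrative helper name:

```python
def jitter_ms(latencies_ms: list[float]) -> float:
    """Mean absolute variation between consecutive latency samples,
    a simple measure of jitter in milliseconds."""
    if len(latencies_ms) < 2:
        return 0.0
    diffs = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
    return sum(diffs) / len(diffs)

# Four latency samples of 20, 22, 19, and 21 ms give consecutive
# differences of 2, 3, and 2 ms, so jitter is about 2.33 ms.
```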

Although some level of jitter is to be expected and can usually be tolerated, quantifying network jitter is an important aspect of comprehensive network performance measurement.

Latency vs. Throughput

While the concepts of throughput and bandwidth are sometimes misunderstood, the same confusion is common between the terms latency and throughput. Although these parameters are closely related, it is important to understand the difference between the two. In network performance measurement, throughput is a measurement of actual system performance, quantified in terms of data transfer over a given time.

Latency is a measurement of the delay in transfer time, meaning it will directly impact the throughput, but is not synonymous with it. The latency might be thought of as an unavoidable bottleneck on an assembly line, such as a test process, measured in units of time. Throughput, on the other hand, is measured in units completed which are inherently influenced by this latency.

Delay and Bandwidth Product

Bandwidth-delay product is a measurement of how many bits can fill up a network link. It gives the maximum amount of data that can be transmitted by the sender at a given time before waiting for acknowledgment. Thus it is the maximum amount of unacknowledged data.

Measurement

Bandwidth-delay product is calculated as the product of the link capacity of the channel and the round-trip delay time of transmission. The link capacity of a channel is the number of bits transmitted per second; hence its unit is bps, i.e. bits per second. The round-trip delay time is the sum of the time taken for a signal to travel from the sender to the receiver and the time taken for its acknowledgment to reach the sender from the receiver. The round-trip delay includes all propagation delays in the links between the sender and the receiver.

The unit of bandwidth delay product is bits or bytes.

Example

Consider that the link capacity of a channel is 512 Kbps and the round-trip delay time is 1000 ms. The bandwidth delay product

= 512 × 10³ bits/sec × 1000 × 10⁻³ sec

= 512,000 bits

= 64,000 bytes = 62.5 KB
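The calculation above can be sketched directly from the definition (the function name is illustrative; 1 Kbps is taken as 1000 bps and 1 KB as 1024 bytes, matching the example's arithmetic):

```python
def bandwidth_delay_product(link_capacity_bps: float, rtt_seconds: float) -> float:
    """Maximum amount of unacknowledged data that can be 'in flight'
    on a link, in bits: link capacity times round-trip delay."""
    return link_capacity_bps * rtt_seconds

# 512 Kbps link capacity, 1000 ms round-trip delay:
# bandwidth_delay_product(512_000, 1.0) -> 512_000.0 bits
# 512_000 / 8 = 64_000 bytes = 62.5 KB
```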

Long Fat Networks

A long fat network (LFN) is a network having a high bandwidth-delay product, greater than 10⁵ bits.

Ultra-high-speed LANs (local area networks) are one example of an LFN. Another example is a WAN connected through a geostationary satellite link.
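Given the 10⁵-bit threshold stated above, classifying a link is a one-line check. A minimal sketch (the function name is our own; the satellite figures are illustrative assumptions):

```python
def is_long_fat_network(link_capacity_bps: float, rtt_seconds: float) -> bool:
    """True if the link's bandwidth-delay product exceeds 10**5 bits,
    the threshold for a long fat network (LFN)."""
    return link_capacity_bps * rtt_seconds > 1e5

# e.g. a 10 Mbps geostationary satellite link with ~500 ms RTT has a
# bandwidth-delay product of 5,000,000 bits, well past the threshold.
```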
