Many people and organisations, including ISPs and government agencies, associate bandwidth with speed. While this is understandable from a consumer’s perspective (“more bits equals more speed”), the terms are not strictly interchangeable, and more bandwidth definitely does not always mean more speed.

Bandwidth or Speed?

Bandwidth, in fact, should be seen as the capacity of a link, and can be likened to the number of lanes on a motorway. If one car is travelling along a motorway, having four lanes available will not enable it to travel any faster. Increase this to 100 cars and having four lanes is suddenly likely to be much more of an advantage.

The true measure of speed is latency: the delay between the transmission of an Internet packet and its arrival at its destination. Engineers measure this using a metric called “round trip time” (RTT), and the familiar measurement tool ‘Ping’ is available on most operating systems. It measures the time taken for a packet to leave its source, arrive at its destination, be “echoed” back by the destination, and be received again at the source.
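RTT can also be approximated at the application layer. Here is a minimal Python sketch; note that real ping uses ICMP echo (which requires raw-socket privileges), whereas this simply times a TCP handshake, so the figures are comparable but not identical.

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443) -> float:
    """Approximate round-trip time (ms) by timing a TCP handshake.

    Unlike ping's ICMP echo request/reply, this needs no special
    privileges, at the cost of including TCP connection setup.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass  # connection established; we only wanted the timing
    return (time.perf_counter() - start) * 1000
```

Calling `tcp_rtt_ms("example.com")` returns a single RTT sample in milliseconds; collecting several samples gives you the raw data for the latency and jitter calculations discussed later.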

Origins of confusion

One of the reasons that bandwidth is so often mistaken for speed is that traditionally it was very easy to saturate the last-mile link between an office and the greater Internet by doing something as simple as loading a web page or watching a video. This could have been enough to cause congestion, meaning that so many packets were trying to cross the link at once that they were stored in buffers or were dropped and had to be sent again. Low bandwidth meant a slow page load time. Upgrading the capacity of the link could alleviate this congestion and remove the bottleneck between the office and its ISP, effecting an increase in performance. Thus a higher bandwidth appeared to give a greater speed to end users, and the association was made.
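The relationship between capacity and load time in that era was simple arithmetic. A quick sketch (the 2 MiB page size is an assumed example, not a measurement):

```python
def transfer_time_s(payload_bytes: int, bandwidth_bps: float) -> float:
    """Seconds needed to push a payload through a link of a given
    capacity, ignoring latency, protocol overhead and congestion."""
    return payload_bytes * 8 / bandwidth_bps

page = 2 * 1024 * 1024                        # hypothetical 2 MiB web page
dialup = transfer_time_s(page, 56_000)        # ~300 s on a 56k modem
fibre = transfer_time_s(page, 100_000_000)    # well under a second at 100 Mbps
```

On a saturated low-bandwidth link the transfer time dominates everything else, which is why adding capacity looked exactly like adding speed.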

Link latencies

These days, with average Internet access speeds1 hitting double figures in megabits per second (Mbps), latency matters more than ever. Some types of connection are much more prone to high latency than others, because latency is affected by the amount of encapsulation and transition that has to happen through the network layers, so I have provided some examples below.

Connection type                Typical latency
Local Ethernet                 Up to 1 ms
Leased line                    2-4 ms, depending on length
VDSL/cable                     15-25 ms
ISDN                           25-45 ms
56k modem (remember those?)    150-300 ms
3G/4G mobile                   300-900 ms
Satellite                      >700 ms, mostly due to the distance to geostationary orbit (~36,000 km)

Changing challenges

The problems with latency often come down to those ever-pesky laws of physics: if you’re sitting in London and want to contact a server in Sydney, your request has to travel roughly 17,000 km there, and the response has to travel 17,000 km back. In reality the distance is likely to be greater still, as optical cables buried undersea won’t take the absolute shortest possible route. Once routing and protocol overheads are factored in, each request and response typically takes around a third of a second to complete. Modern web pages contain hundreds of elements, so a high-latency connection can have a noticeable negative impact on page load times.
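A back-of-the-envelope check on that figure: light in optical fibre travels at roughly 200,000 km/s (about two-thirds of its speed in a vacuum), so the theoretical floor for a London-Sydney round trip works out as:

```python
C_FIBRE_KM_S = 200_000  # approximate speed of light in glass

def min_rtt_ms(one_way_km: float) -> float:
    """Theoretical best-case round-trip time over fibre,
    ignoring routing detours, queuing and protocol overhead."""
    return (2 * one_way_km / C_FIBRE_KM_S) * 1000

london_sydney = min_rtt_ms(17_000)  # ~170 ms before any overhead
```

Real-world routing detours and protocol overheads roughly double that floor, which is how you arrive at the third-of-a-second figure above.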

With the above in mind, low bandwidth is much easier to overcome than high latency. Dedicated fibre providers can deliver low-latency, high-bandwidth connections to pretty much anywhere in the world, and while these crème-de-la-crème Internet connections provide impressive figures compared to other types of service, they can still only transmit data at the speed of light in glass (roughly two-thirds of the speed of light in a vacuum), which means that pages from across the world can still take several seconds to load. Luckily, the marvel of modern CDNs means that content is often served locally. Try pinging UK-based ServerChoice.com from Australia to see what I mean!

Jitter

The other subject I want to touch on in this blog is jitter. Simply put, jitter is a measure of how much latency varies within a flow of traffic over time. If jitter is high, packets can arrive at their destination at irregular intervals and out of order. The effect of this varies hugely depending on the application: jitter on its own doesn’t harm web browsing or e-mail traffic too much, but real-time traffic such as an H.323 VoIP call is much more susceptible. The receiving end has to buffer packets to make sure they can be processed in the right order, which can itself cause noticeable delay; without these buffers, the call can experience dropouts and degrade very quickly. VoIP protocols can often be tuned to cope better with jitter, but getting the right balance of quality and delay is a fine art. While some protocols may be able to cope with the typical latency of a 56k dial-up connection, the jitter would make the service unusable.
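To show how such buffering works, here is a minimal de-jitter buffer sketch in Python. The class name and fixed depth are illustrative assumptions; production VoIP stacks use adaptive playout buffers that also discard late packets.

```python
import heapq

class JitterBuffer:
    """Minimal de-jitter buffer sketch: hold up to `depth` packets,
    then release them in sequence order. Trades added delay (the
    buffer depth) for correct ordering."""

    def __init__(self, depth: int = 3):
        self.depth = depth
        self._heap = []  # min-heap of (sequence_number, payload)

    def push(self, seq: int, payload: bytes) -> None:
        heapq.heappush(self._heap, (seq, payload))

    def pop_ready(self):
        """Yield packets in sequence order once the buffer holds
        more than `depth` entries."""
        while len(self._heap) > self.depth:
            yield heapq.heappop(self._heap)
```

A deeper buffer tolerates more jitter but adds more delay to the call, which is exactly the quality-versus-delay balance described above.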

Jitter is measured by taking consecutive latency samples (often using RTT, as described above), calculating the absolute difference between each consecutive pair, then dividing the total by the number of samples minus one.

macbook:~ hmerrett$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=0 ttl=56 time=976 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=56 time=475 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=56 time=361 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=56 time=473 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=56 time=391 ms

In this example, I have collected 5 samples from a 3G connection with the following RTTs: 976, 475, 361, 473, and 391 ms. The average latency is 535.2 ms (add them up, divide by 5). The jitter is calculated from the absolute differences between consecutive samples:

976 to 475, diff = 501
475 to 361, diff = 114
361 to 473, diff = 112
473 to 391, diff = 82

The total difference is 809 ms - so the jitter is 809 / 4, or 202.25 ms. This is fairly typical for a 3G mobile connection. ADSL and VDSL are typically better, with jitter of around 2-3 ms, whilst cable usually sits in the middle at 4-5 ms, depending on the time of day. Dial-up and satellite are hugely variable but almost always high. Leased lines again provide the best option for jitter-sensitive applications.
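The worked example above can be expressed in a few lines of Python (jitter here being the mean absolute difference between consecutive RTT samples):

```python
def mean_latency_ms(samples: list[float]) -> float:
    """Average of the RTT samples."""
    return sum(samples) / len(samples)

def jitter_ms(samples: list[float]) -> float:
    """Mean absolute difference between consecutive RTT samples."""
    diffs = [abs(b - a) for a, b in zip(samples, samples[1:])]
    return sum(diffs) / len(diffs)

rtts = [976, 475, 361, 473, 391]  # the 3G ping samples from the example
# mean_latency_ms(rtts) -> 535.2
# jitter_ms(rtts)       -> 202.25
```

Feed in samples from any connection (e.g. a handful of ping results) and the same two numbers fall out.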

Summing up

Many studies have been performed over the years looking at methods for performance testing Internet connections. This blog has barely scraped the surface – and it’s a pretty fascinating field to research. What is “normal” can be very different when comparing the relatively short distances between users and content in the UK to the relatively long distances found in Australia. User experience can vary surprisingly depending on how much content is served locally, and how much must be served by a remote domestic or international server.

Despite how the above might read, IP networks are still pretty impressive. Next time someone tells you the Internet is slow, remember the packet that took a trip to Australia and back in a third of a second…



1By which I of course mean capacity. See how easily it can be confused? ‘Speed’ means different things depending on whether you’re talking technically or talking marketing.
