If you’ve taken time to work out what kind of impact latency is having on your IT network, you’re probably already aware that it’s an issue that costs businesses serious productivity and money. 

The trouble is, latency is generally a complex issue to get your head around – but it doesn’t have to be. Here, we’ll explain latency in a way that means you won’t need an IT professional on hand to help. Then, once we have, we’ll explore how you can work toward beating latency – so your business doesn’t suffer.

Is latency just a delay?

In simple terms, yes – latency is simply a delay in the delivery of data from one application to another.

Latency builds up when the amount of data being sent exceeds the amount that a connection can handle. To understand why this happens, it’s first important to understand how the speed of a data transfer is determined.

Connection speeds

Whenever data is sent over the internet, it’s broken down into tiny data packets. The rules for how this is done are set out in an agreed format – referred to as a protocol. As long as both the sending and receiving programs understand the same protocol, the data can be broken down and pieced back together again in an easily understood manner.
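To make that concrete, here’s a minimal sketch of packetisation in Python. The 1,460-byte chunk size is an illustrative assumption (it’s a typical payload size for a TCP packet over Ethernet, but real protocols negotiate this):

```python
def to_packets(payload: bytes, size: int = 1460) -> list[bytes]:
    """Split a payload into fixed-size chunks, loosely mirroring how
    data is packetised before transmission. The size is illustrative."""
    return [payload[i:i + size] for i in range(0, len(payload), size)]

# Reassembly is just concatenation, provided the packets arrive in order:
# b"".join(to_packets(data)) == data
```

In a real network stack, each packet also carries headers (addresses, sequence numbers, checksums) so the receiver can reorder and verify the pieces – that bookkeeping is exactly what the shared protocol defines.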

To test both the connection speed and the receiving program’s understanding, a single data packet is sent – then returned to the sending application. This confirms that the receiving application can handle the data being sent – and, since the round trip is timed, it also sets out the speed at which the transfer should occur.
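This round-trip check is much like what the familiar ping tool measures. As a loose sketch, you can time a round trip yourself by measuring how long a TCP handshake takes – the host and port below are placeholders, not part of any real measurement standard:

```python
import socket
import time

def measure_rtt(host: str, port: int = 443) -> float:
    """Time a TCP handshake round trip to a host, in milliseconds.
    This is a rough proxy for network latency, not a precise measure."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass  # connection established; close immediately
    return (time.perf_counter() - start) * 1000

# Example (placeholder host):
# print(f"RTT: {measure_rtt('example.com'):.1f} ms")
```

The higher that number, the longer every small exchange between your applications takes – which is why latency hurts chatty, interactive systems far more than big one-off downloads.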

So how do delays occur?

In much the same way that anticipating how long a car journey will take relies on knowing how much traffic is on the road, data transfer speeds will also sometimes vary – owing to the amount of data currently being handled – and the complexity and length of the route. 

Any network connection has a ‘bandwidth’ – a measure of how much data can pass over it at any one time. So, the greater the bandwidth, the more data that can be handled. The trouble is, when ‘throughput’ exceeds bandwidth, something has to give.

Throughput is the actual amount of data being sent over the connection. So, if you’ve got 50 cars all trying to get along a narrow single-file road, you can expect the journey to take longer than it would for 50 cars travelling along a 10-lane highway. Data is just the same.
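A quick back-of-the-envelope calculation shows how bandwidth caps transfer speed. The figures here are purely illustrative – the only fixed fact is that a byte is 8 bits:

```python
def transfer_time_seconds(data_mb: float, bandwidth_mbps: float) -> float:
    """Minimum time to move data_mb megabytes over a link rated at
    bandwidth_mbps megabits per second (note: bytes vs bits)."""
    return (data_mb * 8) / bandwidth_mbps

# A 100 MB file over a 100 Mbps link needs at least 8 seconds:
# transfer_time_seconds(100, 100) -> 8.0
# The same file over a 10 Mbps link needs at least 80 seconds:
# transfer_time_seconds(100, 10) -> 80.0
```

These are best-case floors – real transfers also pay protocol overhead and queuing delays, which is exactly where latency creeps in.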

How do systems handle delays?

The problem is, delays cause further delays. Compounding delays like this occur because that very first test transmission set the speed for the rest of the data transfer – so, a quick first journey means throughput will be high – and, if it’s higher than the connection’s current bandwidth, then traffic starts to back up.

Backed-up traffic isn’t a good thing – because the more traffic backs up, the more traffic will be waiting behind it. So, applications and network devices have a solution – they drop some of that traffic to get things moving again. The trouble is, the data they drop can often be key to the system you’re running – and dropping too much traffic will see your applications slow, freeze, or simply lock up and crash.
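That back-up-and-drop behaviour can be sketched as a toy buffer model. All the figures are illustrative – this isn’t a model of any particular device, just the principle that a full queue has no choice but to discard arrivals:

```python
from collections import deque

def simulate_queue(arrivals_per_tick: int, service_per_tick: int,
                   capacity: int, ticks: int) -> tuple[int, int]:
    """Toy model of a network buffer: packets arrive each tick, some
    are forwarded, and anything beyond capacity is dropped.
    Returns (delivered, dropped)."""
    queue = deque()
    delivered = dropped = 0
    for _ in range(ticks):
        for _ in range(arrivals_per_tick):
            if len(queue) < capacity:
                queue.append(1)
            else:
                dropped += 1  # buffer full: the packet is discarded
        for _ in range(min(service_per_tick, len(queue))):
            queue.popleft()
            delivered += 1
    return delivered, dropped
```

Run it with arrivals below the service rate and nothing is ever dropped; push arrivals above it and, once the buffer fills, every extra packet is lost – the simulated equivalent of applications stalling while data is re-sent.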

What does this mean to your business?

Generally speaking, losing your systems means downtime – and downtime is expensive. According to leading IT sources, downtime can cost even small businesses in excess of $5,600 a minute – that’s more than $336,000 every hour. Now, most businesses won’t feel that kind of impact – but even one lost sale or dropped day of productivity can be damaging – so it’s worth trying to tackle latency before it becomes too big an issue.

How do you beat latency?

For the most part, latency is going to be an issue that stems from your systems and IT network design. 

Of course, latency isn’t always down to you to tackle – data transfers will never be completely instantaneous – even across the most sophisticated internet connections – but generally, the biggest factor in dealing with latency is how your IT infrastructure is designed and run.

If you don’t have the in-house skills needed to tackle your latency issues, you’ll need to look elsewhere – and this usually means employing the services of a managed service provider (MSP) with the skills needed to minimise latency issues.

Working with an MSP

Managed service providers are not hard to come by – but finding a good one can be more challenging – especially when you’ve got a specific problem at hand. 

If you want to be sure you’re working with a provider who can help you address your latency issues, the best thing to do is talk to them about how they’ve handled similar issues in the past. No two companies are the same, of course – but if they can prove they’ve worked with companies for whom mission-critical real-time services were important, then you’re definitely looking at the right kind of partner.

Of course, latency isn’t a one-time problem. As well as making sure you’re working with a company who knows what’s at stake now, you’ll need to make sure they have the ability and desire to grow with you – and to stay committed to minimal latency throughout your relationship.

Some companies choose to make latency part of their service level agreement (SLA). Your MSP will already commit to maintaining a certain amount of uptime as part of their commitment to you – but you might want to consider talking to them about a commitment to speed across your network too. This is especially important if you anticipate big financial losses as a result of lacklustre network performance.

For the right people, latency isn’t a huge challenge – and it’s one that’s good to deal with soon, as even a few hours of dropped productivity can really add up over the course of months or years.