
WAN Optimization: Getting Lightning Fast Delivery with Zero Throttling

Chad Kunz tells us how a serverless, peer-to-peer architecture can provide lightning-fast delivery speeds over the WAN, improve completion rates, and eliminate the need to ever throttle your bandwidth. Faster AND safer! Hear how predictive bandwidth harvesting makes this possible and how it results in a more resilient network with better completion rates.

Full Transcript

Host:
Welcome back to another episode of the Endpoint Management podcast by Adaptiva. Today, we're going to hear from Chad Kunz about some of the core technology that lets Adaptiva optimize delivery over the WAN, dramatically improving delivery times and completion rates while eliminating congestion and network throttling. If you find this helpful, you should also check out our other episodes about WAN optimization, zero footprint caching, how our Virtual SAN delivers quota-less storage, and more. As always, links are in the show notes, and you can find everything else you need or get in touch with us at adaptiva.com.

Chad Kunz:
Hi, this is Chad Kunz with Adaptiva, here to talk to you today about Predictive Bandwidth Harvesting and how Adaptiva does WAN optimization. The first thing to note, before getting into our technologies, is the difference, the real paradigm shift, from traditional, legacy WAN delivery systems. The issue is that those protocols are reactive in nature. The way they work is that every time the sender transmits and doesn't receive any delay metrics back, it doubles the amount it sends, then doubles it again, so it keeps building up its send window.

Chad Kunz:
At the point congestion is hit and the sender receives that delay metric and realizes there's now congestion on the line, it needs to back off, but it does so very inefficiently: it cuts its send window in half, and by that point it has already contributed to WAN congestion. Because of the nature of these protocols, throttles and rate limits need to be put in place, where your systems management traffic is capped at 10 or 15% of the pipe during business hours. Which is obviously super inefficient, because if you have bandwidth available, why not use it? That's what the predictive bandwidth harvesting of our adaptive protocol brings us.
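
To make the reactive behavior Chad describes concrete, here is a minimal sketch (illustrative only, not Adaptiva's code or any particular TCP stack's) of a sender that keeps doubling its send window until a delay signal arrives and then halves it, so it only reacts after congestion has already occurred:

```python
# Minimal sketch of the reactive, legacy behavior described above (slow-start
# style doubling plus multiplicative back-off). Purely illustrative.

def reactive_window(congestion_signals, initial_window=1, max_window=1 << 20):
    """Yield the send-window size each round, given booleans that say whether
    a congestion/delay signal was observed in that round."""
    window = initial_window
    for congested in congestion_signals:
        yield window
        if congested:
            window = max(initial_window, window // 2)  # back off, after the fact
        else:
            window = min(max_window, window * 2)       # keep doubling until a signal arrives

# Example: growth, one congestion event, then growth again
print(list(reactive_window([False, False, False, True, False, False])))
# -> [1, 2, 4, 8, 4, 8]
```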

Chad Kunz:
We have the ability to monitor network characteristics in real time, and we aren't just doing this on a NIC-by-NIC basis. We have visibility into the entire pipe, and we're monitoring it by listening in on specific protocols that the routers use to exchange information, and by closely scrutinizing our own sends and receives. With this information, we're able to paint a picture of what the router queue lengths look like. We keyed in on router queue lengths because they're the best indicator of when congestion is about to happen: if traffic is still in the queue, it hasn't hit the pipe and contributed to congestion yet.
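
One common way to infer queue buildup from observed sends and receives, sketched below purely as an assumption about the kind of signal involved rather than Adaptiva's actual algorithm, is to treat any round-trip delay above the uncongested baseline as time spent sitting in router queues:

```python
# Rough, assumed sketch: delay above the baseline RTT is interpreted as queueing
# delay, and multiplying it by the link rate estimates how many bytes are queued.

def estimate_queued_bytes(rtt_samples_s, base_rtt_s, link_bps):
    """Estimate router queue occupancy in bytes from recent RTT samples."""
    current_rtt = min(rtt_samples_s[-5:])          # smooth out per-packet jitter
    queueing_delay = max(0.0, current_rtt - base_rtt_s)
    return queueing_delay * (link_bps / 8.0)       # seconds * bytes-per-second

# Example: 1.544 Mbps T1 link, 40 ms baseline RTT, recent samples near 90 ms
samples = [0.092, 0.088, 0.091, 0.090, 0.089]
print(round(estimate_queued_bytes(samples, 0.040, 1_544_000)))  # roughly 9,000 bytes queued
```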

Chad Kunz:
So with this router queue length information, which we're monitoring on both the ingress and the egress, what the two machines are going to do is determine the threshold for congestion on this particular link. As we're monitoring those queues and the congestion threshold approaches, we're going to back off before we ever contribute to any congestion and let all of that other business traffic go in front of us. Then, as those queues come down and there's 10% of the pipe available, we'll add about 9% to it. If it gets to the point where 90% is available, we'll be able to utilize about 89% of that remaining bandwidth, which makes this the fastest way to deliver content without impacting any other business traffic.
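
A small sketch of that rate decision follows, where the one-percentage-point safety margin is an assumption taken from the 10%-to-9% and 90%-to-89% examples above rather than a documented constant:

```python
# Illustrative sketch of "harvest nearly all of the idle headroom, and back off
# entirely as the queue approaches the congestion threshold for the link."

def harvest_rate(link_bps, business_utilization, queue_len, congestion_threshold):
    """Return the send rate (bits/s) to use for content delivery."""
    if queue_len >= congestion_threshold:
        return 0.0                                  # never add to existing congestion
    headroom = 1.0 - business_utilization           # fraction of the pipe that is idle
    usable = max(0.0, headroom - 0.01)              # leave a small margin for other traffic
    return link_bps * usable

# 100 Mbps link, business traffic using 10% of it, queues well below threshold:
print(harvest_rate(100_000_000, 0.10, queue_len=2, congestion_threshold=50) / 1e6)  # ~89.0 Mbps
```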

Chad Kunz:
Additionally, one of the really great things about this protocol is that it's also extremely reliable and super efficient. It's built on UDP, so it's a lot less resource-intensive than TCP-based protocols, but this is not an "unreliable delivery protocol." All of our packets are sequenced, and every packet sent to a receiver is acknowledged back to the sender. Any packets that are missed, the sender will simply resend, so the content is delivered in full. At that point we hash-validate the entire content, and each file within it is also hashed, so it's a very secure implementation on top of UDP as well.
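
The reliability layer described here can be modeled in a few lines. The sketch below is a simplified in-memory model, not the actual wire format, and the hash algorithm is illustrative: packets carry sequence numbers, each one is acknowledged, anything unacknowledged is resent, and the reassembled content is hash-validated at the end.

```python
# Simplified model of sequenced, acknowledged delivery over a lossy link,
# with a hash check on the fully reassembled content. Illustrative only.

import hashlib
import random

PACKET_SIZE = 1024

def send_with_acks(payload: bytes, loss_rate: float = 0.5, seed: int = 7):
    rng = random.Random(seed)
    packets = {i: payload[i * PACKET_SIZE:(i + 1) * PACKET_SIZE]
               for i in range((len(payload) + PACKET_SIZE - 1) // PACKET_SIZE)}
    received = {}
    unacked = set(packets)
    sends = 0
    while unacked:                                  # resend until every packet is acknowledged
        for seq in sorted(unacked):
            sends += 1
            if rng.random() >= loss_rate:           # packet (and its ack) survive the link
                received[seq] = packets[seq]
        unacked = set(packets) - set(received)
    reassembled = b"".join(received[i] for i in sorted(received))
    assert hashlib.sha256(reassembled).digest() == hashlib.sha256(payload).digest()
    return sends

data = bytes(range(256)) * 64                       # 16 KiB of test content
print(send_with_acks(data), "sends for", len(data) // PACKET_SIZE, "packets")
```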

Chad Kunz:
And the sequencing, of course, also gives us checkpoint restart. Any time a content delivery is interrupted, say one particular receiver at an office has gone offline, a new receiver stands up. It has already received all of the packets the original receiver received, so it sends a checkpoint restart back to the parent office, to the sender machine, indicating the last packet it received, and we're able to resume that download exactly where it left off.
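
Under the same sequencing model, checkpoint restart reduces to the new receiver reporting the last contiguous sequence number it already holds and the sender resuming from the next one. A hypothetical sketch:

```python
# Illustrative checkpoint-restart logic built on packet sequence numbers.

def last_contiguous_seq(received_seqs):
    """Highest sequence number N such that packets 0..N have all been received."""
    last = -1
    for seq in sorted(received_seqs):
        if seq != last + 1:
            break
        last = seq
    return last

def resume_plan(total_packets, received_seqs):
    checkpoint = last_contiguous_seq(received_seqs)
    return list(range(checkpoint + 1, total_packets))  # packets still to send

# Receiver went offline after packets 0-499 of a 2,000-packet job:
print(resume_plan(2000, set(range(500)))[:3], "... resuming at packet 500")
```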

Chad Kunz:
This UDP sequencing technology has been proven by possibly the most ridiculous test I've ever encountered, and, having shared it with a number of customers, the most ridiculous test they'd ever heard of. Specifically, we were given the challenge of proving we could deliver reliably, every time, over a 19k link with four-second latency and 50% packet loss. We were able to do it. It took a while, but we had zero failures under those conditions.
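
Some rough back-of-the-envelope numbers for that test, assuming "19k" means a 19.2 kbps link and ignoring acknowledgement and latency overhead: with every lost packet retransmitted until acknowledged, 50% loss means each packet is sent about twice on average, so the transfer still completes, it just takes roughly twice as long.

```python
# Back-of-the-envelope estimate for the stress test above. Assumes a 19.2 kbps
# link and ignores ack traffic and the four-second latency, so this is a floor,
# not a measured result.

LINK_BPS = 19_200
LOSS = 0.5
CONTENT_MB = 1

expected_sends_per_packet = 1 / (1 - LOSS)          # geometric expectation: 2.0 at 50% loss
effective_bps = LINK_BPS * (1 - LOSS)               # useful throughput, about 9.6 kbps
hours = (CONTENT_MB * 8_000_000) / effective_bps / 3600

print(f"{expected_sends_per_packet:.1f} sends per packet on average")
print(f"~{hours:.1f} hours to deliver {CONTENT_MB} MB under these conditions")
```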

Host:
Thanks for joining us for today's episode from Adaptiva, where we're working to take the pain out of endpoint management with a solution that scales automatically so that your management, maintenance, and infrastructure costs don't have to. For more information about how we do that, visit us at adaptiva.com.