
Faster Delivery *Anywhere* with LAN Optimization and Bandwidth Offloading

Chad Kunz talks us through the ways Adaptiva optimizes over LAN to offload bandwidth from your WAN and achieve industry-leading delivery times and success rates.

Full Transcript

Host:
Welcome back to the Endpoint Management Podcast by Adaptiva. Today, we're going to hear from Chad Kunz about how Adaptiva optimizes over LAN to offload bandwidth requirements from your WAN, speed up delivery times, and more. If you find this helpful, you should also check out our other episodes with Chad about WAN optimization, Zero Footprint Caching, and how our Virtual SAN delivers quota-less storage. As always, links are in the show notes, or you can find everything you need and get in touch with us at Adaptiva.com.

Chad Kunz:
Hi, this is Chad Kunz with Adaptiva, here today to discuss with you our peer-to-peer LAN optimization and what we refer to as the memory pipeline architecture. So after previously discussing how we are going to deliver the content across the WAN, now this is about how we are going to be distributing that content among all of the peers in a particular office. And this is really another paradigm shift over traditional peer-to-peer networking. Legacy peer-to-peer has scary connotations out there for a number of people who may have worked with it, and that's because it used a one-serving-many model. When you had one machine that was being picked on to deliver a three-gig software update package to six or more machines, you were severely impacting the resources on that unlucky device.

Chad Kunz:
So what we have engineered is a memory pipeline architecture that really flows like a peer-to-peer daisy chain. Once that first machine starts receiving content over the WAN, it shares that content with one other system. That machine now shares the content with one other system down the line, and on and on. So we have this daisy chain automatically being built. And for resource sensitivity, we are sharing all of these packets directly through memory. That first packet lands in memory, and before discarding it, we are sending it to machine number two. It lands in memory there and goes directly out of memory to machine number three.
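The daisy-chain idea above can be sketched in a few lines of Python. This is purely illustrative: the `Peer` class, its `receive` method, and the packet names are hypothetical stand-ins, not Adaptiva's actual implementation. The point it demonstrates is that each peer forwards a packet onward straight out of memory, so every machine in the chain ends up with the full content while only one machine ever touches the WAN.

```python
class Peer:
    """Hypothetical peer in a daisy chain (illustrative, not Adaptiva's API)."""

    def __init__(self, name, downstream=None):
        self.name = name
        self.downstream = downstream  # the next peer in the chain, if any
        self.received = []            # stands in for the local content cache

    def receive(self, packet):
        # The packet lands in memory on this peer...
        self.received.append(packet)
        # ...and is forwarded directly out of memory to the next peer
        # before the pipeline moves on to the next packet.
        if self.downstream is not None:
            self.downstream.receive(packet)


# Build a three-peer chain; peer-1 is the machine downloading over the WAN.
p3 = Peer("peer-3")
p2 = Peer("peer-2", downstream=p3)
p1 = Peer("peer-1", downstream=p2)

for packet in ["pkt-0", "pkt-1", "pkt-2"]:
    p1.receive(packet)  # simulates packets arriving over the WAN

print(p3.received)  # the last peer holds the full content, packet by packet
```

Note how, at any instant, different peers are holding different packets in flight: all of them are effectively downloading the same content at the same time, just offset by a packet.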

Chad Kunz:
So of course we're going to have disk write operations, but we will not have any disk read operations. Additionally, because of the way the memory pipeline and this daisy chain work, all of these machines are going to be downloading that same content at the same time. They're just going to be downloading different packets at any given millisecond. So it's extremely efficient. And again, with our checkpoint restart technology, if any machine in the middle of the daisy chain, or even the machine that's downloading over the WAN, should drop offline, we're immediately going to re-form this daisy chain automatically. Once the new source has been established, we send that checkpoint restart and resume the download based on the last packet received.
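The checkpoint restart behavior can also be sketched. Again, this is a toy model under assumed names: `download`, `flaky_source`, and the packet list are all illustrative, and real checkpointing would track byte or block offsets rather than list indices. What it shows is the resume logic: when a source drops, the last packet received becomes the checkpoint, and a newly elected source continues from that point instead of starting over.

```python
def download(source, start_index, sink):
    """Pull packets from `source` starting at `start_index` into `sink`.

    Returns the index of the next packet still needed -- the checkpoint.
    A `None` entry stands in for the source dropping offline mid-transfer.
    """
    for i in range(start_index, len(source)):
        if source[i] is None:
            return i  # source went away; checkpoint at the last packet received
        sink.append(source[i])
    return len(source)


content = ["pkt-%d" % i for i in range(6)]

# The first source fails after delivering three packets.
flaky_source = content[:3] + [None] * 3
received = []
checkpoint = download(flaky_source, 0, received)

# A new source is established; resume from the checkpoint, not from zero.
checkpoint = download(content, checkpoint, received)

print(received == content)  # the full content arrives with no re-downloading
```

The design point is that the first three packets are never transferred twice: the resumed download picks up exactly where the failed one left off.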

Host:
Thanks for joining us for today's episode from Adaptiva, where we're working to take the pain out of endpoint management with a solution that scales automatically so that your management, maintenance, and infrastructure costs don't have to. For more information about how we do that, visit us at Adaptiva.com.