Why You Need to Start Using P2P with UDP Transfer Protocol for Endpoint Management

Gary Walker gives us a look under the hood at the transfer protocols used for endpoint content delivery, and explains why peer-to-peer (P2P) with UDP is so much more resilient and efficient than BITS over TCP. He also explains why relying on the default transfer protocols used by SCCM / Config Mgr and other legacy endpoint management solutions, instead of taking advantage of newer P2P solutions, wastes time and money and increases risk by leaving more machines out of compliance for longer.

Full Transcript

Host:
Welcome back to the Endpoint Management podcast by Adaptiva. Today, we're going to hear from Gary Walker about why peer-to-peer with UDP is so much more resilient than BITS and TCP. If you enjoy this, maybe check out our other episodes with Gary about why peer-to-peer performs so much better than a server-based architecture, and also how it delivers huge reductions in cost and associated overhead and maintenance. As always, links are in the show notes, where you can find everything you need and get in touch with us at adaptiva.com.

Gary Walker:
Hi, this is Gary Walker with Adaptiva, and I wanted to talk to you today about some of the resiliency that's built into our product from a networking standpoint. One of the things that we do that's different from a lot of the products on the market is the protocol we use: UDP, which is sessionless, versus TCP, which has to maintain a session. I was working with a client that had a lot of remote locations in a country that didn't have great infrastructure. They were on 19.2 kbps links, and they were trying to push OS WIMs out to those machines over a product that used TCP/IP, with two to three seconds of latency between packets. The TCP sessions kept dropping, so it took them forever to get content to all those remote machines so they could upgrade the OS.
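
The "sessionless" point comes down to each UDP datagram being self-contained, so a long gap between packets leaves nothing to tear down. Here's a minimal sketch of that idea; it is illustrative only, not Adaptiva's implementation, and the peer address, port, payload, and delay are hypothetical.

```python
# Illustrative only: UDP datagrams carry no session state, so arbitrary gaps
# between packets cannot "drop" a connection the way a stalled TCP session can.
import socket
import time

PEER = ("192.0.2.10", 9000)   # hypothetical remote endpoint

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq in range(5):
    # Each datagram stands alone; nothing is torn down if the next one
    # arrives seconds (or minutes) later.
    payload = seq.to_bytes(4, "big") + b"chunk-of-content"
    sender.sendto(payload, PEER)
    time.sleep(2.5)           # simulate 2-3 seconds between packets on a slow link
sender.close()
```

With TCP, the same 2-3 second stalls would trigger retransmission timers and, on a bad enough link, reset the connection; here the gaps simply don't matter.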

Gary Walker:
They put us in, and we use UDP, which is sessionless. What that means is, if it's two seconds between packets, no big deal; we're not going to drop a session, because there is no session to drop. We're just going to pick up and send the very next packet. Also, because we're using UDP, there's a lot less overhead, which means every packet we send contains more data. So we're going to get more data through the same pipe than you would if you were using TCP. And because we manage the control portion ourselves, our product is intimately aware of what's going on on that network, and our bandwidth management is going to back off. If we send a packet to a remote office, that machine isn't going to say that it received that packet if there's congestion on its end.
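
The behavior described here, where the sender slows down when the receiver stops acknowledging, can be sketched as application-level flow control on top of UDP. This is a minimal sketch under assumptions of my own (hypothetical block size, timeout, backoff cap, and peer address), not the product's actual algorithm.

```python
# Sketch of application-level flow control over UDP: the sender only advances
# when the receiver acknowledges a block, and it widens its send interval when
# acks stop arriving rather than blindly retransmitting.
import socket

PEER = ("192.0.2.10", 9000)       # hypothetical remote endpoint
BLOCK = b"x" * 1200               # hypothetical content block

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(1.0)              # wait up to 1 s for an application-level ack

def send_block(seq: int, backoff: float) -> float:
    """Send one block; return how long to wait before the next send."""
    sock.sendto(seq.to_bytes(4, "big") + BLOCK, PEER)
    try:
        ack, _ = sock.recvfrom(16)
        if int.from_bytes(ack[:4], "big") == seq:
            return 0.0            # ack received: no backoff, keep sending
    except socket.timeout:
        pass
    # No ack: treat it as congestion and back off (capped) instead of
    # immediately resending the way a TCP retransmit would.
    return min(max(backoff * 2, 0.5), 30.0)
```

A caller would sleep for the returned interval before sending the next block, so a congested receiver that withholds acks automatically throttles the sender.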

Gary Walker:
So we're checking that congestion on both ends. Regardless of how many links are in between those two, it's all going to play out in those end routers, so we're not going to saturate a network that's already saturated. Whereas if you're using TCP and a packet gets dropped, that protocol is going to immediately say, "Resend that packet." We're going to say, "Wait a minute, there's already a lot of traffic on the network. I'm going to wait a little while before I ask for that packet to be sent again." Not only that, we also have the ability to buffer packets, because when you're looking at something like an MPLS circuit, you could easily get a packet out of sequence by a millisecond. We can buffer those, wait for them to come in, and re-sequence them so that we minimize retransmissions.
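
The buffering and re-sequencing idea can be illustrated with a small reorder buffer. This is a sketch with a hypothetical window size and delivery callback, not the actual product logic: instead of requesting a retransmit the moment a packet arrives out of order, hold a window and deliver blocks in sequence once the missing one shows up.

```python
# Reorder-buffer sketch: tolerate slightly out-of-order packets before asking
# for a resend, so a millisecond of reordering never costs a retransmission.
from typing import Callable, Dict

class ReorderBuffer:
    def __init__(self, deliver: Callable[[int, bytes], None], window: int = 64):
        self.deliver = deliver        # called with blocks in sequence order
        self.window = window          # how far ahead we tolerate before requesting a resend
        self.next_seq = 0
        self.pending: Dict[int, bytes] = {}

    def on_packet(self, seq: int, data: bytes) -> bool:
        """Accept a packet; return True only if a retransmit request is warranted."""
        if seq < self.next_seq:
            return False              # duplicate of something already delivered
        self.pending[seq] = data
        # Flush everything that is now contiguous.
        while self.next_seq in self.pending:
            self.deliver(self.next_seq, self.pending.pop(self.next_seq))
            self.next_seq += 1
        # Only ask for a resend if the gap has persisted past the window.
        return bool(self.pending) and max(self.pending) - self.next_seq >= self.window
```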

Gary Walker:
When we're sending that content, we do full checkpoint restart. Let's say you're on prem, you start getting a deployment, you shut your machine down, and you go home. You turn your machine on and you didn't even connect to the VPN; it's going to pick up right where it left off, but this time it's going to be getting the content from our CDN or from an internet peer over the internet. Then it goes back on prem, switches back, and picks up again. The other thing is that when we start sending content to a machine, it can become a source for other machines the minute it starts receiving content. You don't have to download the entire package and register with some internet server saying, "Hey, I have this complete source and I can serve it to other clients." When you have to wait on all that, your cache hit ratio goes way down. The minute we get a block of data, that machine can be a host for another machine and provide that content to it.
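
Both ideas in this passage, checkpoint restart and early seeding, can be sketched with per-block bookkeeping. The file layout, block size, and class below are hypothetical illustrations of the general technique, not how the product stores its cache.

```python
# Sketch: track which blocks of a package are already on disk, so a download
# can resume from any source (checkpoint restart) and any block already held
# can be served to a peer immediately (early seeding).
import os

BLOCK_SIZE = 256 * 1024               # hypothetical block size

class PartialContent:
    def __init__(self, path: str, total_blocks: int):
        self.path = path
        self.total_blocks = total_blocks
        self.have = set()
        if os.path.exists(path + ".blocks"):
            # Checkpoint restart: reload which blocks were already received.
            with open(path + ".blocks") as f:
                self.have = {int(x) for x in f.read().split()}

    def store(self, index: int, data: bytes) -> None:
        mode = "r+b" if os.path.exists(self.path) else "wb"
        with open(self.path, mode) as f:
            f.seek(index * BLOCK_SIZE)
            f.write(data)
        self.have.add(index)
        with open(self.path + ".blocks", "w") as f:
            f.write(" ".join(map(str, sorted(self.have))))

    def serve(self, index: int):
        # Early seeding: hand a block to a peer as soon as we have it,
        # without waiting for the whole package.
        if index not in self.have:
            return None
        with open(self.path, "rb") as f:
            f.seek(index * BLOCK_SIZE)
            return f.read(BLOCK_SIZE)

    def missing(self) -> set:
        return set(range(self.total_blocks)) - self.have
```

Because the "have" set persists across restarts and source switches, it doesn't matter whether the next blocks come from the LAN, a CDN, or an internet peer; the client asks only for what's in `missing()` and can serve anything in `have` to others right away.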

Gary Walker:
So we're going to have much better coverage, and we're going to get that data out to those clients much quicker. And, again, it's all automated: we're aware of the network, we're aware of your location, we're aware of where the content's coming from and where it needs to go, and it's all done automatically for you. You don't have to do anything.

Gary Walker:
We've been tested in some really, really poor networking situations. One that comes to mind is a customer that had a rapid disaster team: when a natural disaster would occur, they would send out teams to areas hit by, say, a tornado or a hurricane. They would basically put up a tent, set up a satellite link, and take their laptops from their office with them. They could be in the middle of getting a deployment when they go to that remote disaster recovery office, and they still need to be able to be patched, because even though these offices are temporary, they could be there for a month or two, depending on how long that disaster response lasts.

Gary Walker:
They bring their machine in, turn it on, and we pick up over a satellite link where before they were on a LAN link. That satellite link is adding a quarter of a second of delay going up and a quarter of a second coming down. We work seamlessly with that, getting the data out to those machines so they were able to perform their job. They didn't even know that they were still getting the patches and data they needed to do their jobs. Because if you leave at a moment's notice, you may need a piece of software that you didn't think of, and you need it pushed to that machine in that remote office. We were able to deliver that content to those locations without any issue.

Host:
Thanks for joining us for today's episode from Adaptiva, where we're working to take the pain out of endpoint management with a solution that scales automatically so that your management, maintenance and infrastructure costs don't have to. For more information about how we do that, visit us at adaptiva.com.