Sometimes I need to copy large files simultaneously to several tens of computers – for example, Hyper-V virtual machines for training courses, which sometimes take up to 60 gigabytes.

Usually these files reside on a single file server, connected to the local network by a 1 Gbit NIC. But even if nothing else is taking bandwidth from that file server, copying 60 GB to 20 machines would take more than 11 hours: that is 1200 GB in total, and at a speed of 30 megabytes per second the transfer takes roughly 40,000 seconds. I wanted to reduce the overall time of deployment to about 70-90 minutes, or about 10 times. And this article is about how I've accomplished that goal. (A back-of-the-envelope version of this calculation is sketched at the end of the section.)

Here is the scenario: in our company, we have to regularly deploy virtual environments to tens of machines, at least twice per week. We have a repository of virtual environments – collections of virtual machines that need to be copied and imported to workstations. Even if you give the "copy" command remotely, it is a process that could be stopped in the middle for any of a thousand reasons. It also puts a heavy load on the file server and eats its bandwidth, because all copies of the file are served from one location, and the server's hard drive can be overloaded no matter how big the cache is.

So, the first thing that came to my mind was torrent. For those of you who don't know exactly how it works – a brief introduction.

## How Torrent Works, at a Glance

Basically, each file is split into "chunks" or "fragments" with a size from 64 KB up to 2 MB; usually there are about a thousand chunks in one torrent. Once your torrent client receives a chunk, it calculates the checksum of the received fragment: if it's correct, the client saves the chunk, otherwise it discards the fragment and re-downloads it. (A minimal sketch of this chunk-and-verify scheme is shown below.)

## Benefits of Using Torrent in a Local Network

The beauty of the torrent protocol is that every torrent client shares what it has already got with the other participants, so even if your server goes down, the clients will share the fragments they've received from you with each other and with every new participant.

The architecture of a typical office LAN usually consists of a switch on each floor of the building and a single cable between those floor switches. This means that all workstations on a floor share the same 1 Gbit connection to the file server, while they may have separate connections to each other at the same speed. So every workstation on the 3rd floor may have a 1 Gbit connection to each of its neighbours, but they all share a single 1 Gbit connection to the 2nd floor, and if your file server and clients are on different floors, speeds can drop dramatically.

When we use torrent to spread the file, our file server can give a particular fragment to the first workstation, and the other 20 workstations then have a choice to ask either the file server or the first workstation for that fragment. (The toy simulation at the end of this section shows how much that helps.)
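To make the 11-hour estimate above concrete, here it is as a tiny Python snippet; the 30 MB/s effective throughput is the figure from the text, and everything else follows from it:

```python
file_size_gb = 60       # one set of virtual machines
machines = 20           # workstations to deploy to
speed_mb_s = 30         # effective throughput of the single file server

total_mb = file_size_gb * machines * 1024   # 1200 GB = 1,228,800 MB
hours = total_mb / speed_mb_s / 3600        # seconds -> hours
print(f"{hours:.1f} hours")                 # ~11.4 hours
```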
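And here is a minimal sketch of the chunk-and-verify scheme described under "How Torrent Works, at a Glance". Real BitTorrent clients hash each piece with SHA-1 and publish the expected digests in the .torrent metadata; the 1 MB piece size below is just an arbitrary value from the 64 KB-2 MB range mentioned above.

```python
import hashlib

PIECE_SIZE = 1 * 1024 * 1024  # 1 MB; real torrents use pieces of 64 KB..2 MB


def piece_hashes(path: str) -> list[bytes]:
    """Split a file into fixed-size pieces and hash each one.

    This mirrors what a .torrent file carries: the SHA-1 digest of
    every piece, so a downloader can verify each piece independently
    of the rest of the file.
    """
    hashes = []
    with open(path, "rb") as f:
        while piece := f.read(PIECE_SIZE):
            hashes.append(hashlib.sha1(piece).digest())
    return hashes


def verify_piece(piece: bytes, expected_digest: bytes) -> bool:
    """Keep the piece only if its checksum matches; a real client
    discards a failing piece and re-downloads it from another peer."""
    return hashlib.sha1(piece).digest() == expected_digest
```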
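Finally, to illustrate why peer exchange beats plain copying, here is a toy round-based simulation – my own illustration, not part of any torrent client. It assumes that every machine which already holds a fragment, the server included, can upload it to exactly one other machine per round, so the number of sources doubles each round instead of staying at one.

```python
def rounds_to_spread(machines: int) -> tuple[int, int]:
    """Rounds needed until every machine holds one given fragment.

    Server-only copying: the server is the only uploader, one copy
    per round -> `machines` rounds.
    Peer exchange: every current holder uploads to one machine that
    still lacks the fragment -> holders double every round.
    """
    server_only_rounds = machines

    holders = 1  # the file server starts as the only holder
    p2p_rounds = 0
    while holders < machines + 1:  # +1 because the server itself counts
        holders *= 2
        p2p_rounds += 1
    return server_only_rounds, p2p_rounds


plain, p2p = rounds_to_spread(20)
print(f"plain copy: {plain} rounds, peer exchange: {p2p} rounds")
# plain copy: 20 rounds, peer exchange: 5 rounds
```

On the LAN from the article the effect is even stronger: fragments can circulate between workstations behind the same floor switch, so the shared uplink to the file server only has to carry each fragment a few times instead of once per workstation.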