The concept of streaming applications down from a server to a client has been around for almost a decade. The general idea is not to download and install an entire application package when the application is launched. Instead, the application package is broken into small data blocks or individual files that can be streamed to the client as soon as the application needs them. Since these applications almost always ask for the same data in the same order, vendors typically pre-cache a portion of the application, which comes down before the actual streaming takes place. This saves install time for the end user and bandwidth across the network, since the application doesn't have to be installed in its entirety to function. So the concept of application streaming is pretty awesome, but it does come with some downsides:
Downsides of current streaming approaches:
1. Keeping track of each data block and where it belongs in relation to the other data blocks makes streaming a complex technology to get right.
2. Some vendors rely on a server-side service to keep track of which streaming block needs to be downloaded next, which can become a bottleneck when thousands of workstations start the same application from the same server.
3. Most vendors rely on their own proprietary network protocol to perform the streaming operation. This requires specialized technical skills to install and maintain the streaming servers.
4. When Microsoft Windows starts an application, it reads the entire executable binary and each statically linked DLL file into memory. This means that even though the application may only need a portion of the data, it still requires the entire contents of the .exe, all statically linked .dll files, and other binaries.
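To make the bookkeeping concrete, the block-based approach described above can be sketched roughly as follows. All names here are illustrative, not any vendor's actual API; the point is that the client must consult a per-block index on every read, which is where the complexity and the server-side bottleneck come from.

```python
# Illustrative sketch of block-based application streaming (hypothetical
# names, not a real vendor implementation).

from dataclasses import dataclass

BLOCK_SIZE = 64 * 1024  # small blocks; actual sizes vary by vendor


@dataclass
class BlockIndex:
    """Server-side map of data blocks -- the bookkeeping that tracks
    where each block belongs in relation to the others."""
    blocks: dict  # block_id -> bytes

    def next_block(self, file_name: str, offset: int) -> int:
        # Derive a block id from the read position; a real index would
        # also track cross-file ordering and pre-cached startup blocks.
        return offset // BLOCK_SIZE


class BlockStreamClient:
    """Client that streams each block down on first access."""

    def __init__(self, index: BlockIndex):
        self.index = index
        self.cache = {}  # block_id -> bytes already streamed down

    def read(self, file_name: str, offset: int) -> bytes:
        block_id = self.index.next_block(file_name, offset)
        if block_id not in self.cache:  # fetch only on first access
            self.cache[block_id] = self.index.blocks[block_id]
        return self.cache[block_id]
```

Note that every read must round-trip through the index; when thousands of clients do this against one server at once, that index service is the choke point.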
We have chosen a slightly different approach. Instead of breaking each application up into very small data blocks, we leave all the files intact. Since Windows is going to read all of the .exe and .dll files from start to finish anyway, we simply keep track of the actual files needed to start the application and only bring those down when the application launches. This greatly simplifies the streaming process and removes much of the burden from the server. You keep the speed improvement streaming gives you without the complexity of setting up a streaming server and dealing with a vendor's proprietary network protocol and traffic. OpDesk relies on simple file shares as its source location. Unlike traditional streaming, the simplicity of the design lets the average network administrator diagnose and fix most back-end problems without needing a streaming expert. OpDesk also relies on RasmNetworkService to gain access to the distribution point, providing greater security than most traditional streaming services.
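The file-level approach is simple enough to sketch in a few lines. This is a minimal illustration under stated assumptions (a plain file share as the distribution point; the paths and function names are hypothetical, not the actual OpDesk code): on first access, the whole file is copied down from the share; after that, the local copy is used.

```python
# Minimal sketch of file-level on-demand fetching from a plain file
# share (illustrative names; not the actual OpDesk implementation).

import shutil
from pathlib import Path


def ensure_local(share_root: Path, local_root: Path, rel_path: str) -> Path:
    """Copy a whole file from the share on first access.

    Because files are left intact, there is no block index to consult --
    an ordinary file share can serve as the streaming back end.
    """
    local_file = local_root / rel_path
    if not local_file.exists():
        local_file.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(share_root / rel_path, local_file)
    return local_file
```

Because the transport is just file copies over a standard share, any administrator who can troubleshoot SMB permissions and connectivity can troubleshoot this.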
How streaming works with OpDesk
By default, all applications rely on network streaming when they are uploaded to the back-end server. Once an application is uploaded, the server service breaks the package into its different versions and files and extracts them to its Distribution Point. The application then becomes available to all workstations and users associated with it. A user will typically launch a shortcut that points to an executable belonging to the application. Once selected, the Agent's driver asks the agent service to stream down each file it needs, as it is needed. The agent service then uses the Network User's credentials to stream down each file. Once a file is local, it won't come down again unless it is changed or the application is reinstalled.
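The "won't come down again unless it is changed" rule can be sketched as a simple freshness check. This is an assumption-laden illustration (comparing size and modification time against the Distribution Point; the agent may use a different change-detection mechanism, and these names are hypothetical):

```python
# Sketch of fetch-once caching with change detection (illustrative;
# not the actual agent-service logic).

import shutil
from pathlib import Path


def is_stale(share_file: Path, local_file: Path) -> bool:
    """A local file is stale if it is missing, or if the copy on the
    Distribution Point differs in size or is newer."""
    if not local_file.exists():
        return True
    s, l = share_file.stat(), local_file.stat()
    return s.st_size != l.st_size or s.st_mtime > l.st_mtime


def stream_file(share_file: Path, local_file: Path) -> Path:
    """Fetch the file only when it is missing or has changed."""
    if is_stale(share_file, local_file):
        local_file.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(share_file, local_file)  # copy2 preserves mtime
    return local_file
```

Reinstalling the application would simply clear the local copies, forcing every file through this check again on next launch.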