Remote desktop virtualization pilot planning

As I alluded to in the first post on our Desktop Virtualization pilot, our use case is quite different from the typical one. Most businesses use VDI for general employee desktops. Most employees run simple, low-performance apps like web browsers and word processors, and each person is doing different tasks at different times. The ability to oversubscribe a server is quite high, since it is unlikely that all employees will run highly demanding operations at the same time.

In our case, we plan to have a professor leading a SolidWorks tutorial for an entire room of students.  We are likely to see all the users simultaneously perform an operation that could stress many components: if 60 students were to all rotate a complex SolidWorks model at the same time, it would stress the GPU, CPU, and network.  Because of this, all the VDI sizing guides were pretty useless.  Our plan was to do a little data collection, and then dive in with real equipment.

We purchased Liquidware Labs Stratusphere Fit. It comes as a VMware appliance and an agent that gets installed on each client computer you want to monitor. We installed it on our general Windows lab and our CAD Lab systems.

It has a bazillion reports you can run. You can see tables and graphs showing which applications are using resources and how much.  It did give us a general idea of how much RAM we would need, how busy our labs were, and which applications got the most use.

In the end, the data was interesting, but still not enough to make an educated guess about how VMware View would perform for us.  We needed to buy a server and just try it out.

We had no idea what the bottleneck was going to be, so we wanted a very beefy system that still had room to grow.  We knew we wanted a system that supported NVIDIA's K1 card for offloading GPU operations.  This narrowed our options, as only a handful of systems support the card.  Our existing servers are predominantly Dell or Supermicro based.  After evaluating the options from these two manufacturers, we ended up with the Dell PowerEdge R720.  In only 2U of rack space, it has impressive capabilities: it can handle up to two NVIDIA K1 cards and has 24 DIMM slots, 2 CPU sockets, and 16 2.5″ storage bays.

Along with the R720 and NVIDIA K1, we purchased a Teradici APEX 2800 card.  It does hardware encoding of the video stream that gets sent over the network to the clients, and it works in conjunction with the NVIDIA K1 card.  Without these two cards, the server CPUs would have to take care of all the GPU work, video stream encoding, and client CPU operations.  That might work, but the number of simultaneous users could be much lower.


Our hardware stack. A Dell PowerEdge R720, NVIDIA K1, and a Teradici APEX 2800

We ended up with the following hardware:

  • 2 – Intel Xeon E5-2670 2.60 GHz CPUs (Currently the fastest that Dell supports in a system with the K1 GPU card)
  • 256 GB of RAM
  • 6 – 200 GB SSD drives (we hope to serve everything off of local fast SSD storage)
  • 1 NVIDIA K1 GRID GPU acceleration card
  • 1 Teradici APEX 2800 LP card
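
As a rough sanity check on the configuration above, here is a back-of-envelope sizing sketch. The 256 GB of RAM and the 60-student class size come from this post; the per-VM memory allocation and hypervisor overhead are purely assumed values for illustration, not measurements from our pilot.

```python
# Back-of-envelope VDI sizing sketch.
# Server RAM and class size are from our build; the per-VM figures
# are assumptions for illustration only.

TOTAL_RAM_GB = 256        # R720 as configured
HOST_OVERHEAD_GB = 16     # assumed: ESXi + management overhead
RAM_PER_VM_GB = 4         # assumed: per-student SolidWorks desktop VM
STUDENTS = 60             # one full lab section

usable_ram_gb = TOTAL_RAM_GB - HOST_OVERHEAD_GB
vms_by_ram = usable_ram_gb // RAM_PER_VM_GB

print(f"RAM supports up to {vms_by_ram} VMs; a full class needs {STUDENTS}")
# → RAM supports up to 60 VMs; a full class needs 60
```

Of course, RAM is only one axis: under those assumptions memory is just barely sufficient, and the real question for us is whether GPU, CPU, or network becomes the bottleneck first, which no spreadsheet was going to answer.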

We’ve just received the components and finished getting the VMware Horizon View environment set up.  This wasn’t exactly smooth.  Despite the software stack coming almost exclusively from VMware, there are several components involved.  We naively expected it to be a “download this virtual appliance from VMware and start it up” operation.  Instead, there are many manual steps involved, including at least two Windows servers that orchestrate the system.  We are still wrapping our heads around the numerous buttons and knobs.

Once we get the buttons and knobs mostly under control, we’ll be back with part 3… real world(ish) benchmarking.