Redesigned Classroom

The A/V equipment in many of our classrooms was due for replacement.  As we began to plan the upgrade, we decided the rooms would benefit from a full rethinking, from floor to ceiling.

The first step was getting community feedback.  We reached out to teaching faculty and students and received several suggestions for making the rooms better teaching and learning spaces.

We decided to use this feedback to upgrade a single room, Cummings 202, refine it, and then replicate what works in other classrooms.  We are currently upgrading Cummings 105, with Cummings 118 and Cummings 200 to follow during winter break.

Here are the changes we’ve made to Cummings 202:

Lighting

Old hanging lights

Old hanging fluorescent lights have been replaced by recessed LED lights.  Each light has occupancy and ambient light sensors and is wirelessly controlled using built-in mesh networking.  In the future, we can reconfigure how many switches we have and which lights are associated with each switch.  Previously, the switches controlled the lights in left and right groups, which didn’t make much sense.  We now have them set to control the front and back of the room, so you can turn off (or dim) just the front while projecting.

New LED lights

Writing space

We greatly increased the writing area by installing two huge 16′ x 5′ whiteboards, so big they had to come in through the window.  We’ve also turned the classroom 180 degrees, but the chalkboard remains in the back for now.

Projection

When the projection screen is in use, the right half of the whiteboard remains available

The most common piece of feedback we received from faculty was the desire to project and write on the board at the same time.  To accommodate this request, we’ve put the new projection screen on the left, leaving a large area of whiteboard available on the right.  The screen itself is now widescreen to match the widescreen projector and most laptops.

While motorized screens are not that much more expensive, we stayed with a manual projection screen in C202.  The faculty we heard from felt the manual screen is faster to put up and take down.

Furniture

Desks in a “U” configuration

Desks grouped together

The huge, heavy tables have been replaced with smaller tables that have wheels and flip up for compact storage.  These tables should be much easier to reconfigure from class to class.  They can be put in groups or more traditional rows.  The chairs have also been replaced by wheeled chairs with adjustable height.

We’ve installed an adjustable-height desk for the professor, which allows them to sit or stand during lectures.

A/V Controls

A/V controls

We are using a custom-designed touch screen interface for the room.  We spent a lot of time making the interface as easy to use as possible.  Even better, we made the projector turn on automatically and select the correct input when you connect your laptop.  There is also an occupancy sensor that will shut the system down automatically if you forget.

Power outlets

Outlets all around

More power outlets for laptops were a common request in our student feedback. Installing floor outlets in Cummings wasn’t feasible, so we compromised by installing several outlets around the perimeter.

Recording appliance

As in our larger classrooms, we’ve installed a recording appliance in C202.  We are working on getting recording controls added to the touch screen.

A/V Monitoring

A graph of the lamp hours on the projector, one of the several items we monitor.

We are now monitoring our A/V system.  This will let us perform preventative maintenance and notice broken components before you do.  We can’t monitor every component due to limitations in some of the hardware, but we hope this will make our systems even more reliable.
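
As an aside, projectors that speak the PJLink protocol will report their lamp hours to anything that asks.  Here is a minimal sketch of that kind of poll (the hostname is made up, and this is only an illustration, not exactly how our monitoring works):

# Illustrative only: ask a PJLink-capable projector (class 1, no
# authentication) for its cumulative lamp hours.  PJLink listens on TCP 4352,
# and the reply looks like "%1LAMP=<hours> <on/off>".  Keep the connection
# open briefly so the reply arrives before nc exits.
{ printf '%%1LAMP ?\r'; sleep 2; } | nc -w 5 projector1.example.edu 4352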

VMware View Space Reclamation Problem

In setting up our new View infrastructure, we’ve definitely had to work through many little issues with all the various moving parts. One issue that wasn’t a show-stopper, but was quite annoying, was that after we set up a new pool, it was unable to do space reclamation on the desktops. After about 24 hours or so, we’d start getting these messages in the View Administrator event log:

Failed to perform space reclamation on machine desktop1 in Pool mypool

Once these started, we’d get them about once per hour for every desktop.

Back over on the View Connection server, we’d see the corresponding entries in its log (C:\ProgramData\VMware\VDM\logs\log-yyyy-mm-dd.txt):

WARN (08CC-09B8) [ServiceConnection25] Problem while performing VC operation: ‘Permission to perform this operation was denied.’ [com.vmware.vim25.NoPermission]
ERROR (08CC-09B8) [ServiceConnection25] Permission to perform this operation was denied.
ERROR (08CC-09B8) [PendingOperation] Error reclaiming space for VM /View/vm/view-pool-storage/mypool/desktop1 :com.vmware.vdi.vcsupport25.VmException25: Permission to perform this operation was denied.

We’d followed the instructions for granting permissions to the vCenter user used by the Connection Server, outlined in View Manager Privileges Required for the vCenter Server User, but there was obviously something missing. As an aside, some of the permissions given in the View documentation needed some “translating” for vSphere 5.1 – thanks to Terence Luk for his blog post outlining this: Configuring vCenter role permissions for VMware vSphere 5.1 and VMware Horizon View 5.2 (View Manager and View Composer)

To figure out what was missing, we gave Administrator privileges to the vCenter user for one desktop and then initiated a manual reclamation from the View Connection server:

C:\Program Files\VMware\VMware View\Server\tools\bin>vdmadmin -M -d mypool -m desktop1 -markForSpaceReclamation

The reclamation worked, and in the event log on the vCenter server we could see Flex-SE wipe and shrink operations for that desktop. Looking at the role permissions in vCenter, we found the culprit. The vCenter user’s role needed this additional permission:

Virtual Machine -> Interaction -> Perform wipe or shrink operations

Once this was added to the role for the vCenter user, all desktops were able to perform space reclamation, and the errors disappeared from the event log.

Remote desktop virtualization pilot planning

As I alluded to in the first post on our desktop virtualization pilot, our use case is pretty different from the typical one. Most businesses use VDI for employee desktops, where employees run simple, lightweight apps like web browsers and word processors, and each person is doing different tasks at different times. The ability to “oversubscribe” a server is quite high, since it is unlikely that all employees will run demanding operations at the same time.

In our case, we plan to have a professor leading a SolidWorks tutorial for an entire room of students, so we are likely to see all users simultaneously perform an operation that stresses many components at once.  If 60 students all rotate a complex SolidWorks model at the same time, that stresses the GPU, CPU, and network.  Because of this, the usual VDI sizing guides were pretty useless.  Our plan was to do a little data collection and then dive in with real equipment.

We purchased Liquidware Labs Stratusphere Fit. It comes as a VMware appliance and an agent that gets installed on each client computer you want to monitor. We installed it on our general Windows lab and our CAD Lab systems.

It has a bazillion reports you can run. You can see tables and graphs showing which applications are using resources and how much.  It did give us a general idea of how much RAM we would need, how busy our labs were, and which applications got the most use.

A Stratusphere Fit graph

In the end, the data was interesting, but still not enough to make an educated guess at how VMware View would perform for us.  We needed to buy a server and just try it out.

We had no idea what the bottleneck was going to be, so we wanted a very beefy system that still had room to grow.  We knew we wanted a system that supported NVIDIA’s K1 card for offloading GPU operations, which narrowed our options, since only a limited number of systems support the card.  Our existing servers are predominantly Dell or Supermicro based.  After evaluating the options from these two manufacturers, we ended up with the Dell PowerEdge R720.  In only 2U of rack space, it has impressive capabilities: it can handle up to two NVIDIA K1 cards and has 24 DIMM slots, 2 CPU sockets, and 16 2.5″ storage bays.

Along with the R720 and NVIDIA K1, we purchased a Teradici APEX 2800 card.  It does hardware transcoding of the video stream that gets sent over the network to the clients, and it works in conjunction with the NVIDIA K1 card.  Without these two cards, the server CPU would have to handle all of the GPU, video stream encoding, and client CPU operations.  That might work, but the number of simultaneous users could be much lower.

Our hardware stack. A Dell PowerEdge R720, NVIDIA K1, and a Teradici APEX 2800

We ended up with the following hardware:

  • 2 – Intel Xeon E5-2670 2.60 GHz CPUs (Currently the fastest that Dell supports in a system with the K1 GPU card)
  • 256 GB of RAM
  • 6 – 200 GB SSD drives (we hope to serve everything off of local fast SSD storage)
  • 1 NVIDIA K1 GRID GPU acceleration card
  • 1 Teradici APEX 2800 LP card

We’ve just received the components and finished getting the VMware Horizon View environment set up.  This wasn’t exactly smooth.  Despite the software stack coming almost exclusively from VMware, there are several components involved.  We naively expected it to be a “download this virtual appliance from VMware and start it up” operation.  Instead, there are many manual steps involved, including at least two Windows servers that orchestrate the system.  We are still wrapping our heads around the numerous buttons and knobs.

Once we get the buttons and knobs mostly under control, we’ll be back with part 3… real world(ish) benchmarking.

Remote Desktop Virtualization Pilot

The vast majority of our students now have Apple laptops. This is a challenge in an engineering school, where heavily used applications such as SolidWorks are Windows-only. Additionally, some engineering applications are difficult to install or are too resource intensive to run on an older laptop.

We offer a number of Windows labs with every application under the sun, but this is a limited resource, one that is often at capacity with a class or with students trying to get a project finished. Our buildings are also full, so adding more labs is not an option.

We’ve guided students to install Boot Camp or a virtualization solution on their Macs (VirtualBox, Fusion, or Parallels), but it is a time-consuming endeavor, both for us and for the student.

We’ve long wished that we could let students run these big engineering applications remotely on our server infrastructure. However, we had concerns about scalability and user interface latency. How many servers would we need to accommodate a class of students? Would the app feel sluggish when rotating a CAD model? Until recently, the available solutions didn’t look promising.

However, with the latest release of Horizon View (VMware’s Virtual Desktop Infrastructure (VDI) solution) and NVIDIA’s GRID GPU, DirectX and OpenGL graphics operations can now be offloaded to a dedicated video card. We were intrigued. After our colleagues set us up with a non-graphics-accelerated demo on their VMware View server, we were sufficiently impressed to proceed with a pilot. We couldn’t find any solid information estimating how many users we could support on a single server, so we decided the best way to proceed was to buy some equipment and do our own benchmarking.

Our testing is just under way. We’ll follow this post up with our pilot setup and our findings on whether running resource-intensive applications remotely is really feasible and cost effective.

FreeBSD manual multipath script

I recently ran into an issue installing FreeBSD on a system that already had some disks and zpools. Because the disks were already partitioned, automatic multipath was not an option: the last sector of each drive isn’t available to store the multipath ID. The remaining option is manual multipath, which needs to be set up every time the system boots.
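
For reference, doing this by hand for a single pair of paths looks something like the following (the device names here are just examples):

# Two device nodes that report the same serial number are two paths
# to the same physical drive.
camcontrol inquiry da0 -S
camcontrol inquiry da4 -S

# Tie the two paths together under one multipath device (the name is arbitrary).
gmultipath create disk01 da0 da4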

Here’s an rc script that runs early in the boot sequence and creates a multipath “link” between device nodes that report the same serial number.

/etc/rc.d/manual_multipath

#!/bin/sh

# PROVIDE: manual_multipath
# REQUIRE: sysctl
# BEFORE: hostid

. /etc/rc.subr

name="manual_multipath"
start_cmd="${name}_start"
stop_cmd=":"

manual_multipath_start()
{
        echo "> manual_multipath script started"
        echo "> linking drives with the same serial number with gmultipath"
        counter=0
        serials=""
        # Collect device nodes under /dev matching the da[0-9]* pattern;
        # each path to a drive shows up as a separate node.
        devices=`/usr/bin/find /dev -maxdepth 1 -regex '.*da[0-9]*' | /usr/bin/cut -d '/' -f 3`
        for device in $devices
        do
                echo $device
                serial=`camcontrol inquiry $device -S`
                # $serials holds already-processed serials as |serial| tokens;
                # $substring is 0 when this serial has not been seen yet.
                substring=`echo "$serials" | /usr/bin/sed -n "s/\|$serial\|.*//p" | /usr/bin/wc -c`
                if [ $substring -eq 0 ]
                then
                        found_multi=0
                        arg1="$device"
                        arg2="$device"
                        # Look for other device nodes reporting the same serial.
                        for newdevice in $devices
                        do
                                newserial=`camcontrol inquiry $newdevice -S`
                                if [ "$device" != "$newdevice" -a "$serial" = "$newserial" ]
                                then
                                        echo "  same as $newdevice!"
                                        counter=`expr $counter + 1`
                                        found_multi=1
                                        arg1=$arg1"$newdevice"    # multipath name, e.g. da0da4
                                        arg2=$arg2" $newdevice"   # space-separated list of paths
                                fi
                        done
                        if [ $found_multi -eq 1 ]
                        then
                                # Create the multipath device from all paths found.
                                gmultipath create $arg1 $arg2
                        fi
                fi
                serials=$serials"|$serial|"
        done
        echo "> manual_multipath script finished, found $counter matches"
}

load_rc_config $name
run_rc_command "$1"

Don’t forget to “chmod 555 /etc/rc.d/manual_multipath”.

Lastly, when importing a zpool from the drives you just multipathed, specify where to look for devices, or you might end up importing a mix of multipath and regular devices: use “zpool import -d /dev/multipath”.
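
Putting it all together, a quick check after installing the script looks something like this (the pool name “tank” is just a placeholder):

# Run the script by hand once, or simply reboot.
service manual_multipath start

# Each multipath device should list all of its member paths.
gmultipath status

# Import the pool using only the multipath device nodes.
zpool import -d /dev/multipath tank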