A few months ago, the IT department was asked whether we could improve our video recording system for the Jones Seminar, a weekly one-hour presentation held at Thayer School. Our existing setup was a single video camera zoomed out so that the speaker and the projection screen were both visible in the frame. The video was captured with QuickTime Broadcaster and sent to a QuickTime Streaming Server. Technically, the system worked, but it had several drawbacks. The wide shot was a big compromise: it made the presenter too small to see clearly and the slides on the projection screen barely legible. QuickTime Broadcaster also lacks the ability to record video to disk at a higher resolution than it broadcasts. Our live stream was fairly low resolution so that people with slower broadband connections could watch, but we also wanted to record a high-quality version to post on the Internet for viewers to watch later.
Our requirements included a solution that:
- could be recorded at high resolution for later viewing
- showed video of the speaker
- showed the presentation slides, synchronized with the video
- streamed live to the Internet
- was simple to use
- had a fallback plan in case something refused to operate
- required almost no equipment attendance
- required very little post-production
Our own additional requirement was a solution that we could use for other purposes, such as recording courses or other Thayer events.
Evaluating our options
Audio/video equipment is notoriously finicky, so meeting all of these requirements without an equipment operator standing by to make sure everything was working was a tall order.
The first alternative we came across was a product from RealNetworks called RealPresenter, which lets you manually synchronize PowerPoint slides with video. It was an obvious dead end: the software had long been abandoned by Real, and syncing slides to video appeared to be labor intensive. Still, it gave us a starting point for what was possible.
It looked like RealPresenter used SMIL, which is an open standard. Using a very simple SMIL document, I was able to make QuickTime Player play two videos side by side in one window. However, not only is SMIL support a little shaky in QuickTime Player and RealPlayer, the two videos play independently, and there seemed to be no guarantee that they would buffer and start playing at the same time. So they could potentially get out of sync.
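The experiment can be sketched as a SMIL document like the one below. This is a minimal illustration rather than the exact file I used: the file names, region sizes, and the SMIL 2.0 namespace are placeholders (QuickTime's support is actually closer to SMIL 1.0).

```xml
<!-- Minimal side-by-side layout: two videos, two regions, one window.
     File names and dimensions are placeholders. -->
<smil xmlns="http://www.w3.org/2001/SMIL20/Language">
  <head>
    <layout>
      <root-layout width="640" height="240"/>
      <region id="speaker" left="0"   top="0" width="320" height="240"/>
      <region id="slides"  left="320" top="0" width="320" height="240"/>
    </layout>
  </head>
  <body>
    <!-- <par> asks the player to start both clips in parallel, but
         nothing forces them to stay in sync if buffering differs. -->
    <par>
      <video src="speaker.mov" region="speaker"/>
      <video src="slides.mov"  region="slides"/>
    </par>
  </body>
</smil>
```

The `<par>` element is exactly where the sync problem lives: it schedules the clips together but leaves each one to buffer on its own.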
We did some other research and found instances of other colleges installing dedicated, specialized video equipment for recording courses. As soon as we saw price tags in the tens of thousands of dollars, we knew we needed to come up with something cheaper. Before installing permanent, expensive equipment, we wanted to see what was possible and what sort of demand there was in our community.
I finally came across an interesting software product called Wirecast. It was intriguing because it was basically a drop-in replacement for QuickTime Broadcaster, only much more powerful.
Wirecast can take multiple video inputs, which can be arranged any way you’d like on the screen. Think “picture in picture”, but with much more flexibility. You can predefine these “scenes” and then quickly switch between them with a single click. It also allows you to add titles, still images and pre-made video or audio.
We also needed a way to capture the screen of the presenter's computer. Wirecast comes with a free piece of software, Desktop Presenter, which handles this: install it on the presenter's computer and it will send that screen over the network to Wirecast. It works quite well and the quality is very good. The only problem is that the presenter often shows up moments before the presentation is set to begin, and even when there is time to install Desktop Presenter, presenters are sometimes wary of having software installed on their computer. So we decided to purchase a framegrabber that splits the signal destined for the LCD projector and converts it to a FireWire stream we can feed into Wirecast. There don't seem to be many options for FireWire video framegrabbers; we chose the Canopus TwinPact 100. It is fairly expensive, and the quality is not great, since we are doing a couple of analog/digital conversions. However, the convenience of not having to install software on the presenter's computer makes it a fair trade-off.
For a video camera, we chose a Canon HV20. The key feature is an auxiliary mic input, which is surprisingly missing from many consumer cameras. The HV20 also has decent low-light performance and can record in high definition. We aren't using HD at the moment, but may in the future. We looked at higher-end cameras, but decided it would be better to have several cheaper cameras we could use as backups than fewer expensive ones. The main advantages of higher-end cameras seemed to be better low-light performance and native XLR audio input, both of which we felt we could compromise on.
To tie it all together, we needed a computer. Encoding video on the fly can be a processor-intensive task. After a few false starts, we ended up using a dual-processor PowerMac G5 for the fixed setup in our auditorium and a Core Duo MacBook Pro for a mobile setup. The key to a successful setup is a machine that can accommodate multiple FireWire buses; the computer and Wirecast get very flaky if you try to put multiple video sources on the same bus. We installed two additional FireWire cards in our PowerMac, which lets us use two cameras and the TwinPact framegrabber simultaneously. For the MacBook Pro, we're using a SIIG FireWire ExpressCard. This card works, but has a habit of kernel panicking the machine if it is unplugged while powered on, and the machine has trouble waking from sleep with the card plugged in. So extreme caution is necessary.
We already had an Xserve G5 providing streaming with QuickTime Streaming Server, and no changes needed to be made to this system, although at some point we'd like to move it to a Linux virtual machine running Darwin Streaming Server.
With all the components selected, it was time to put everything together. Here's an overview:
Setting up Wirecast is fairly straightforward. After skimming the documentation and playing for a few minutes, you'll start to get the hang of setting up custom scenes. Below is an example of the setup we use for the Jones Seminar: we place the video from the HV20 on the left and the video of the presenter's computer on the right. This side-by-side arrangement leaves black bars on the top and bottom (Wirecast can't output arbitrary aspect ratios), so we just put a static title at the bottom.
We’ve experimented a bit with different broadcast/recording settings. Currently we broadcast one live stream at a lower resolution and simultaneously record a 640 x 480 version to disk. Both streams use the H.264 codec. We’ve found that dropping the frame rate still provides acceptable quality. Because there is very little motion in the video, we’ve gone down as low as 12 frames per second.
Initially, we did no post-production on the video. However, we found that re-encoding it reduced the file size even more. So our original recording is at 24 fps, and we then re-encode two versions (currently using QuickTime Pro):
- Large video: 640 x 480, H.264, 20 fps, two-pass encoding, medium compression
- Small video: 480 x 360, H.264, 15 fps, two-pass encoding, medium compression
We also extract the audio and create a 56 kbps MP3.
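We currently do these re-encodes by hand in QuickTime Pro, but the same three outputs could be scripted. Here is a sketch using ffmpeg as a stand-in: the frame rates and resolutions mirror our settings, while the file names and bitrates are placeholders, since QuickTime's "medium" compression doesn't map to a single bitrate. By default the script only prints the commands; clear RUN to actually execute them.

```shell
#!/bin/sh
# Dry-run sketch of our three post-production outputs.
# RUN=echo prints each command; set RUN="" to actually run ffmpeg.
RUN="${RUN:-echo}"
SRC="seminar-master.mov"   # placeholder: the 24 fps recording from Wirecast

# Large version: 640 x 480, H.264, 20 fps, two-pass (bitrate illustrative)
$RUN ffmpeg -y -i "$SRC" -c:v libx264 -b:v 768k -r 20 -s 640x480 -an -pass 1 -f mp4 /dev/null
$RUN ffmpeg -y -i "$SRC" -c:v libx264 -b:v 768k -r 20 -s 640x480 -pass 2 seminar-large.mp4

# Small version: 480 x 360, H.264, 15 fps, two-pass (bitrate illustrative)
$RUN ffmpeg -y -i "$SRC" -c:v libx264 -b:v 384k -r 15 -s 480x360 -an -pass 1 -f mp4 /dev/null
$RUN ffmpeg -y -i "$SRC" -c:v libx264 -b:v 384k -r 15 -s 480x360 -pass 2 seminar-small.mp4

# Audio-only version: 56 kbps MP3
$RUN ffmpeg -y -i "$SRC" -vn -c:a libmp3lame -b:a 56k seminar-audio.mp3
```

Two-pass encoding runs ffmpeg once to gather statistics (output discarded) and a second time to do the real encode, which is why each video appears twice.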
You can view the results on the Thayer School Jones Seminar page. The later videos use our most recent quality settings.
Since we started using the system for the Jones Seminar, word quickly spread and we are now using the Wirecast system to record a Thayer course, our Energy Symposium, and a handful of other events.
For two events, we’ve used the system to provide overflow rooms using the live internet feed.
After working out a few kinks, we are very pleased with the system. While the system, once set up, is fairly simple, it still has several moving parts. So it probably isn’t a solution that will scale here at Thayer. We have limited staff to support the system, so if more courses and events need to be recorded, we’ll need to come up with a simpler solution. We’ve already started heading down that path using network connected cameras, but I’ll save that for another day.