Stereoscopic OpenGL part2

Me with my shutter glasses

My obsession with stereoscopic rendering continues unabated. It’s just so fucking cool to write a bit of code and have 3D objects pop out of your monitor and float above your keyboard.

Fact is, I couldn’t settle for my crummy anaglyph glasses (see previous post). I had to try out proper shutter glasses and quad-buffered OpenGL visuals.

Thanks to nvidia’s policy of supporting quad-buffered visuals and stereo sync ports only on their expensive Quadro boards, and the proliferation of flat panels that are entirely unsuitable for shutter glasses due to their ridiculously low refresh rates, I couldn’t do that on my PC. On the other hand, my trusty Silicon Graphics Octane2 workstation was more than up to the task, as it comes with a stereo synchronization port and quad-buffered OpenGL support out of the box.
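For the uninitiated, quad-buffered stereo simply means having left and right back buffers and rendering each frame twice, once per eye. Here’s a minimal sketch of how that looks with GLUT; the eye separation value is an arbitrary assumption, and the simple viewpoint shift stands in for a proper off-axis stereo projection:

```c
#include <GL/glut.h>

#define EYE_SEP 0.06f   /* assumed eye separation in scene units */

void draw_scene(void)
{
    glutWireTeapot(1.0);
}

void display(void)
{
    /* left eye: shift the scene right, equivalent to moving the camera left */
    glDrawBuffer(GL_BACK_LEFT);
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(EYE_SEP / 2.0f, 0, -5);
    draw_scene();

    /* right eye: same scene, opposite shift */
    glDrawBuffer(GL_BACK_RIGHT);
    glClear(GL_COLOR_BUFFER_BIT);
    glLoadIdentity();
    glTranslatef(-EYE_SEP / 2.0f, 0, -5);
    draw_scene();

    glutSwapBuffers();   /* swaps both back buffers at once */
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    /* GLUT_STEREO requests a quad-buffered visual; window creation
     * fails on hardware/drivers without stereo support */
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_STEREO);
    glutInitWindowSize(640, 480);
    glutCreateWindow("stereo test");
    glutDisplayFunc(display);

    glMatrixMode(GL_PROJECTION);
    gluPerspective(50.0, 640.0 / 480.0, 0.5, 100.0);

    glutMainLoop();
    return 0;
}
```

Note that no depth buffer is requested above; a wireframe teapot happens to look fine without one, which brings me to the V6 problem below.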

So off I went to ebay, where I bought the cheapest LCD shutter glasses I could find: the ASUS VR100 glasses, which once upon a time came bundled with some expensive ASUS TNT2 graphics cards as a high-end gimmick.

SGI to ASUS VR100 adaptor circuit

Connecting these glasses to SGI workstations has been done before, and it was a piece of cake to follow that guy’s schematic and build the circuit needed to translate the signals coming out of the SGI stereo port into those expected by the shutter glasses.

The only problem I’ve had is that my Octane2 has the low-end V6 graphics option, which apparently doesn’t provide a z-buffer when using stereo visuals.

3D tunnel

Now I didn’t feel like z-sorting all my polygons like in the good old days, when z-buffering was too expensive for software rendering on underpowered PCs, so I tried to figure out a couple of graphics hacks that would look right without a z-buffer. I came up with this swirling tunnel, and a simple wireframe teapot.
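The trick in both cases is picking geometry with an inherent drawing order. For the tunnel, something along these lines does the job; draw_tunnel_segment() is a hypothetical placeholder:

```c
#include <GL/gl.h>

void draw_tunnel_segment(int idx);   /* hypothetical: draws one ring of the tunnel */

void draw_tunnel(int num_seg)
{
    int i;

    /* no z-buffer: disable depth testing and draw back-to-front, so
     * nearer rings overwrite farther ones (the painter's algorithm) */
    glDisable(GL_DEPTH_TEST);
    for (i = num_seg - 1; i >= 0; i--) {
        draw_tunnel_segment(i);   /* segment 0 is nearest to the viewer */
    }
}
```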

OpenGL stereoscopic anaglyphs and patents

An anaglyph is a combination of two images into one, in such a way that they can later be separated by viewing the image through appropriately colored transparent filters. The objective is to present slightly shifted views of the same 3D environment to each eye, in order to achieve depth perception (i.e. really perceive the 3rd dimension).

anaglyph glasses

I’d never dealt with anaglyphs before, but during my recent week-old obsession with stereoscopy, I stumbled upon a pair of free anaglyph viewing glasses (made out of cardboard and cellophane, of course). So I couldn’t help but try to find out how I could use them with my own programs.
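The usual OpenGL trick, roughly sketched below, is to render each eye’s view with glColorMask restricting which color channels get written, so that red/cyan glasses can separate the two views again. Here setup_eye_view() and draw_scene() are hypothetical stand-ins for the real thing:

```c
#include <GL/gl.h>

void setup_eye_view(int eye);   /* hypothetical: -1 left eye, 1 right eye */
void draw_scene(void);          /* hypothetical: draws the 3D world */

void display_anaglyph(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glColorMask(GL_TRUE, GL_FALSE, GL_FALSE, GL_TRUE);  /* red: left eye */
    setup_eye_view(-1);
    draw_scene();

    glClear(GL_DEPTH_BUFFER_BIT);  /* keep the red image, reset depth only */
    glColorMask(GL_FALSE, GL_TRUE, GL_TRUE, GL_TRUE);   /* cyan: right eye */
    setup_eye_view(1);
    draw_scene();

    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);    /* restore writes */
}
```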

Raytracing Anamorphic Images

A long time ago, I stumbled upon a couple of strikingly odd images on Jim Arvo’s web site, which are apparently called “anamorphic”. The idea behind an anamorphic image is that it’s distorted in such a way that its true shape can only be seen when it’s viewed in a particular manner. In the case of these images, you’re supposed to print the image and place a highly reflective cylindrical object, such as a chrome pipe, at a specific marked location, in order to see the geometric shapes correctly.

I kept the images back then, intending either to find an appropriate cylindrical object or to raytrace them and see what they look like, but for some reason I forgot all about them until I accidentally found them again yesterday, in a dusty corner of my filesystem.

So I decided to hack some code to add perfect (x^2 + y^2 = r^2) cylinder primitives to my raytracer and do a couple of renderings with those images texture-mapped onto a ground quad (I could just do the same thing with another raytracer such as pov-ray but where’s the fun in that?).
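Intersecting a ray with such a cylinder just means substituting the ray equation o + t*d into x^2 + y^2 = r^2 and solving the resulting quadratic for t. A rough sketch, with illustrative types rather than the actual ones from my raytracer:

```c
#include <math.h>

typedef struct { float x, y, z; } vec3;
typedef struct { vec3 origin, dir; } ray;

/* nearest positive hit distance along the ray with the infinite
 * cylinder of radius rad around the z axis, or -1 on a miss */
float ray_cylinder(ray r, float rad)
{
    float a = r.dir.x * r.dir.x + r.dir.y * r.dir.y;
    float b = 2.0f * (r.origin.x * r.dir.x + r.origin.y * r.dir.y);
    float c = r.origin.x * r.origin.x + r.origin.y * r.origin.y - rad * rad;
    float d = b * b - 4.0f * a * c;
    float sqd, t1, t2;

    if (a == 0.0f || d < 0.0f) {
        return -1.0f;   /* parallel to the axis, or no real roots */
    }

    sqd = sqrtf(d);
    t1 = (-b - sqd) / (2.0f * a);
    t2 = (-b + sqd) / (2.0f * a);

    if (t1 > 1e-4f) return t1;   /* small epsilon avoids self-intersection */
    if (t2 > 1e-4f) return t2;
    return -1.0f;
}
```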

So anyway here are the anamorphic images along with the renderings (click on the images for the full rendering):

3D VR Headtracking test

After the first successful test of my webcam marker tracking algorithm, it’s now time for the real deal.

The purpose of my experiment is to detect the position of my head in 3D space by processing the webcam-captured frames, locating the two markers, and then performing an inverse projection from 2D image space back into 3D space. That information can be used to make the viewpoint of a 3D environment follow the motion of the user’s head, which increases immersion in the 3D world considerably. Simple, natural head motions are carried over into the virtual world, making the screen act as a window into that 3D environment.

The point tracking code is the same as in my previous test. However, I modified the tracking program to accept local connections from client programs that need the tracking information (the normalized x, y position of each marker). Then I wrote a test program that renders a simple OpenGL “world” (a bunch of balls and a couple of coordinate grids), uses the marker positions from the tracker to calculate the 3D position of the user’s head, and sets the virtual camera up to coincide with it.
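The inverse projection itself boils down to a pinhole camera model: the projected distance between the two markers is inversely proportional to the head’s distance from the camera, and their midpoint gives the lateral and vertical offsets. Something like this sketch; the marker separation and focal length constants are made-up values, not the ones used by vr_test:

```c
#include <math.h>

typedef struct { float x, y; } vec2;     /* normalized marker position */
typedef struct { float x, y, z; } vec3;

vec3 head_position(vec2 m1, vec2 m2)
{
    const float marker_dist = 0.14f;  /* assumed real distance between markers (m) */
    const float focal = 1.3f;         /* assumed focal length in normalized units */
    vec3 head;

    float dx = m2.x - m1.x;
    float dy = m2.y - m1.y;
    float sep = sqrtf(dx * dx + dy * dy);  /* projected marker separation */

    /* pinhole model: projected size = focal * real size / depth */
    head.z = focal * marker_dist / sep;

    /* midpoint of the two markers, back-projected to that depth */
    head.x = (m1.x + m2.x) * 0.5f * head.z / focal;
    head.y = (m1.y + m2.y) * 0.5f * head.z / focal;
    return head;
}
```

From that position, pointing the virtual camera back towards the center of the scene gives the head-coupled view.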

Once again, you may watch the result at youtube. There’s still some way to go, and some details to be ironed out… I’ll keep you posted on anything new with this experiment :)

Oh, and of course, the code is always available at my subversion repository:

webcam marker tracking program (server): svn://nuclear.dnsalias.com/pub/compvis/cam_test
3D environment test (client): svn://nuclear.dnsalias.com/pub/compvis/vr_test
my webcam library (used by cam_test): svn://nuclear.dnsalias.com/pub/libwcam

First Headtracking Test

Finally, after some weeks of putting it off, I managed to sit down and write some code to talk to video4linux2 drivers, in order to get streaming video from a webcam.
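For anyone curious, the core of talking to a v4l2 driver is just a handful of ioctls. Here’s a bare-bones sketch that sets a capture format and grabs a single frame with read(), assuming the driver supports the simple read() interface (streaming through mmapped buffers is the more robust route) and with most error handling omitted:

```c
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
    int fd;
    struct v4l2_format fmt;
    static unsigned char frame[320 * 240 * 3];

    if ((fd = open("/dev/video0", O_RDWR)) == -1) {
        perror("failed to open video device");
        return 1;
    }

    /* ask the driver for 320x240 packed RGB frames */
    memset(&fmt, 0, sizeof fmt);
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = 320;
    fmt.fmt.pix.height = 240;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_RGB24;
    fmt.fmt.pix.field = V4L2_FIELD_NONE;
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) == -1) {
        perror("VIDIOC_S_FMT");
        return 1;
    }

    /* grab one frame; drivers advertising V4L2_CAP_READWRITE allow this */
    if (read(fd, frame, sizeof frame) == -1) {
        perror("read");
        return 1;
    }

    close(fd);
    return 0;
}
```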

My motivation for messing around with webcams and v4l ioctls was a little experiment of mine. I wanted to write a program that, given video input from a webcam, is able to detect two “markers” attached to my head. This in turn is but a step in a slightly larger experiment I’m conducting, which I’m not going to go into right now.

So what this program does is detect the two markers in the video stream and draw a blue rectangle around each one. Check out this youtube video for a demonstration of my test program. Be warned: it’s rather silly :)
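The detection itself can be as simple as thresholding for bright pixels and averaging their positions. A crude sketch of the idea; the fixed threshold and the naive left/right split are simplifying assumptions, and the actual cam_test code may differ:

```c
typedef struct { float x, y; } vec2;

/* crude two-marker detector for a packed RGB24 frame: average the
 * positions of all pixels brighter than a threshold, treating the
 * left and right halves of the image as separate markers */
void find_markers(const unsigned char *rgb, int w, int h, vec2 *left, vec2 *right)
{
    int x, y;
    long lx = 0, ly = 0, ln = 0;
    long rx = 0, ry = 0, rn = 0;

    for (y = 0; y < h; y++) {
        for (x = 0; x < w; x++) {
            const unsigned char *p = rgb + (y * w + x) * 3;
            int lum = (p[0] + p[1] + p[2]) / 3;

            if (lum > 230) {   /* assumed brightness threshold */
                if (x < w / 2) {
                    lx += x; ly += y; ln++;
                } else {
                    rx += x; ry += y; rn++;
                }
            }
        }
    }

    /* report normalized (0..1) marker positions */
    if (ln) { left->x = (float)lx / ln / w;  left->y = (float)ly / ln / h; }
    if (rn) { right->x = (float)rx / rn / w; right->y = (float)ry / rn / h; }
}
```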

Also, I wrote a nice little v4l webcam library as part of this experiment. The code is available, as usual, in my subversion repository: svn://nuclear.dnsalias.com/pub/libwcam. The test program is also available here: svn://nuclear.dnsalias.com/pub/compvis/cam_test


Automatic Class Diagram Generation

Some time ago, I needed a widget toolkit that would be able to draw widgets in an existing OpenGL window. However, I also needed it to be independent of the underlying graphics library and event system, so that I could use it with both OpenGL and another nameless 3D graphics API I was forced to use at the time, for reasons I won’t go into right now. Anyway, to cut a long story short, I started writing such a toolkit from scratch.

I opted for a fully object-oriented design, something I rarely do these days, because OOP really makes sense for widget toolkits, and used C++ for the implementation.

Before long, I wanted to show what I was doing to the rest of the team working on the project for which I was writing the toolkit, so I decided to visualize the class hierarchy as a quick overview of the widgets and their relations.
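One easy way to get such a diagram, sketched here with a hard-coded hierarchy and hypothetical class names (a real tool would extract the relations from the C++ sources instead), is to emit Graphviz dot input and let dot do the layout:

```c
#include <stdio.h>

/* hypothetical (class, base) pairs; a real tool would parse these
 * out of the C++ headers instead of hard-coding them */
static const char *pairs[][2] = {
    {"Window", "Widget"},
    {"Button", "Widget"},
    {"Slider", "Widget"}
};

int main(void)
{
    int i;

    printf("digraph classes {\n");
    printf("\trankdir=BT;\n\tnode [shape=box];\n");
    for (i = 0; i < (int)(sizeof pairs / sizeof *pairs); i++) {
        printf("\t%s -> %s;\n", pairs[i][0], pairs[i][1]);
    }
    printf("}\n");
    return 0;
}
```

Feeding the output to dot -Tpng then produces the actual diagram image.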
