Edit: I have since ported this test program to use LibOVR 0.4.4, and it works fine on both GNU/Linux and Windows.
Edit2: Updated the code to work with LibOVR 0.5.0.1, but unfortunately they removed the handy function which I was using to disable the obnoxious “health and safety warning”. See the oculus developer guide on how to disable it system-wide instead.
I’ve been hacking with my new Oculus Rift DK2 on and off for the past couple of weeks now. I won’t go into how awesome it is, or how VR is going to change the world, and save the universe or whatever; everybody who cares knows everything about it by now. I’ll just share my experiences so far in programming the damn thing, and post a very simple OpenGL test program I wrote last week to try it out, which might serve as a baseline.
It’s the early days of DK2 really, and it definitely shows. There’s a huge list of issues to be ironed out software-side by Oculus, and DK2 development is a bit of a pain in the ass at the moment.
First and foremost, there’s no GNU/Linux support in the oculus driver/SDK (current version 0.4.2) yet. That’s what we get for relying on proprietary crap, so there’s no point in whining about it. We should bite the bullet and write a free software replacement or shut the fuck up really. OpenHMD looks like a good start, although it still only does rotation tracking, and doesn’t utilize the IR camera to calculate the head position. Haven’t tried it yet because I’m lazy, but using and improving it is the only future-proof way to go really. For now I’m stuck on my old nemesis: windows, due to the oculus sdk.
The second issue is that, even on windows, the focus up to now has really been Direct3D 11 support. The oculus driver does a weird stunt to let programs output to the rift display, without having it act as another monitor that shows part of the desktop and confuses users. That mode is called “direct-to-rift”, and it really doesn’t work at all for OpenGL programs at the moment. As I don’t have any intention of hacking Direct3D code again in my life (I’ve had my fill in the past), I’m stuck in extended desktop mode, and I’m also missing a couple of features that are still D3D-only in the SDK. Hopefully this too will be fixed soon. Again, proprietary software problems, so not much that can be done about it other than begging oculus to fix it in the next release.
There are other, relatively minor issues, like a really annoying and completely ridiculous health and safety warning that pops up every time an application initializes the oculus rift, but I’ll not dwell on how stupid that is, since it thankfully can be disabled by calling a private internal SDK function (see example code at the end).
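For reference, the internal function in question in the 0.4.x SDK was ovrhmd_EnableHSWDisplaySDKRender (note the lowercase “hmd”). It’s not in the public headers, so you have to declare it yourself before calling it, something like this:

```c
/* internal SDK function, not declared in the public CAPI headers (0.4.x
 * only, removed in 0.5), so declare it manually */
void ovrhmd_EnableHSWDisplaySDKRender(ovrHmd hmd, ovrBool enabled);

/* ... then, after creating the hmd device: */
ovrhmd_EnableHSWDisplaySDKRender(hmd, 0);	/* disable the health & safety warning */
```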
So anyway, onwards to the SDK and how to program for the rift. One huge improvement since the last time I did anything for the Oculus Rift DK1 with SDK 0.2.x, is the new C API they’ve added. The old ad-hoc set of C++ classes was really a joke, and even though the new API isn’t exactly what I’d call elegant, it’s definitely much more usable.
With the old SDK, one would have to render each eye’s view to a texture using the projection matrix and orientation quaternions supplied by the SDK, and then draw both images onto the display itself, taking care to pre-distort them in order to counteract the pincushion effect produced by the rift’s lenses. The distortion algorithm was specified in the documentation as a barrel distortion with specific radial scaling parameters, along with an accompanying sample HLSL shader program to implement it, but was otherwise left to the application.
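The core of it was just a radial scaling of each texture coordinate by a polynomial in its squared distance from the lens center. Here’s a minimal C sketch of the warp as described in the old SDK docs (in practice this runs per-fragment in the distortion shader; k corresponds to the DistortionK coefficients reported by the SDK):

```c
/* barrel warp sketch: tc are texture coordinates in [0, 1], lens_center is
 * the lens center in the same space, and k[0..3] are the distortion
 * coefficients (DistortionK) reported by the SDK */
void barrel_warp(const float tc[2], const float lens_center[2],
		const float k[4], float out_tc[2])
{
	float dx = tc[0] - lens_center[0];
	float dy = tc[1] - lens_center[1];
	float rsq = dx * dx + dy * dy;
	/* radial scale factor: k0 + k1 r^2 + k2 r^4 + k3 r^6 */
	float s = k[0] + rsq * (k[1] + rsq * (k[2] + rsq * k[3]));

	out_tc[0] = lens_center[0] + dx * s;
	out_tc[1] = lens_center[1] + dy * s;
}
```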
The new SDK, while still being able to work in a similar way (now dubbed “client distortion rendering” in the documentation), also provides a higher level interface (SDK-based distortion) which, given the rendered texture(s), takes care of presenting them properly distorted and corrected onto the rift’s display. The obvious benefit of this new approach is that future releases may improve the distortion and chromatic aberration correction algorithms without extra effort on the application side; but also, since the SDK now knows the time interval between requesting head-tracking information and the final drawing to the HMD, it can do funky hacks to reduce apparent latency and make the VR experience much smoother.
Here’s an overview of the tasks required by the application to utilize the oculus rift through the Oculus SDK (0.4.2); a condensed code sketch of all these steps follows the list:
- Initialize the SDK and create an HMD device with ovr_Initialize and ovrHmd_Create.
- Create a window and OpenGL context; position the window in the part of the desktop which is mapped to the rift display (WindowsPos in the ovrHmd structure), and make it fullscreen there.
- Set up head-tracking by calling ovrHmd_ConfigureTracking.
- Create a framebuffer object and its color buffer texture(s), after querying the SDK for the recommended render target sizes by calling ovrHmd_GetFovTextureSize.
- Prepare the ovrGLTexture structures by filling them with information about the framebuffer texture(s) we created, and the area in them which will contain each eye's image. These will be passed to the ovrHmd_EndFrame function during our drawing loop, to let the SDK distort and present our renderings to the user.
- Set up SDK distortion rendering parameters by filling in an ovrGLConfig structure and passing it to ovrHmd_ConfigureRendering.
- In the drawing loop:
  - Call ovrHmd_BeginFrame and bind the render target fbo.
  - For each eye:
    - Get the projection matrix by calling ovrMatrix4f_Projection and use it.
    - Get the current head-tracking information by calling ovrHmd_GetEyePose. It returns a structure containing a translation vector and a rotation quaternion, which we'll have to set/multiply with our view matrix.
    - And finally draw the scene.
  - Revert to the default framebuffer, and restore the original viewport.
  - Call ovrHmd_EndFrame, giving it a pointer to the ovrGLTexture structures we filled in earlier, describing our render-target texture(s).
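To make the above more concrete, here’s a heavily condensed sketch of the initialization and drawing loop against the 0.4.x C API. This is not the actual test program: window and GL context creation, the FBO setup, and all error handling are omitted, and quat_to_matrix and draw_scene are hypothetical helpers (the former is assumed to convert the inverse of an orientation quaternion to a 4x4 column-major OpenGL matrix).

```c
#include <string.h>
#include <GL/glew.h>
#include <OVR_CAPI.h>
#include <OVR_CAPI_GL.h>

void quat_to_matrix(const ovrQuatf *q, float *mat);	/* hypothetical helper */
void draw_scene(void);								/* hypothetical helper */

ovrHmd hmd;
ovrEyeRenderDesc eye_rdesc[2];
ovrGLTexture fb_ovr_tex[2];
unsigned int fbo, fb_tex;	/* assumed: FBO and its color texture, created elsewhere */
int fb_width, fb_height;	/* assumed: sized according to ovrHmd_GetFovTextureSize */

void init_rift(void)
{
	ovrGLConfig glcfg;
	int i;

	ovr_Initialize();
	if(!(hmd = ovrHmd_Create(0))) {
		hmd = ovrHmd_CreateDebug(ovrHmd_DK2);	/* fake a DK2 if none is detected */
	}

	/* enable rotation and position tracking */
	ovrHmd_ConfigureTracking(hmd, ovrTrackingCap_Orientation |
			ovrTrackingCap_MagYawCorrection | ovrTrackingCap_Position, 0);

	/* fill in the ovrGLTexture structures: both eyes render into halves of
	 * one shared render target texture */
	for(i = 0; i < 2; i++) {
		fb_ovr_tex[i].OGL.Header.API = ovrRenderAPI_OpenGL;
		fb_ovr_tex[i].OGL.Header.TextureSize.w = fb_width;
		fb_ovr_tex[i].OGL.Header.TextureSize.h = fb_height;
		fb_ovr_tex[i].OGL.Header.RenderViewport.Pos.x = i == 0 ? 0 : fb_width / 2;
		fb_ovr_tex[i].OGL.Header.RenderViewport.Pos.y = 0;
		fb_ovr_tex[i].OGL.Header.RenderViewport.Size.w = fb_width / 2;
		fb_ovr_tex[i].OGL.Header.RenderViewport.Size.h = fb_height;
		fb_ovr_tex[i].OGL.TexId = fb_tex;
	}

	/* configure SDK-based distortion rendering */
	memset(&glcfg, 0, sizeof glcfg);
	glcfg.OGL.Header.API = ovrRenderAPI_OpenGL;
	glcfg.OGL.Header.RTSize = hmd->Resolution;	/* renamed BackBufferSize in 0.4.4 */
	glcfg.OGL.Header.Multisample = 1;
	ovrHmd_ConfigureRendering(hmd, &glcfg.Config, ovrDistortionCap_Chromatic |
			ovrDistortionCap_TimeWarp, hmd->DefaultEyeFov, eye_rdesc);
}

void display(void)
{
	ovrPosef pose[2];
	float rot_mat[16];
	int i;

	ovrHmd_BeginFrame(hmd, 0);
	glBindFramebuffer(GL_FRAMEBUFFER, fbo);
	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

	for(i = 0; i < 2; i++) {
		ovrEyeType eye = hmd->EyeRenderOrder[i];
		ovrRecti vp = fb_ovr_tex[eye].OGL.Header.RenderViewport;
		ovrMatrix4f proj = ovrMatrix4f_Projection(hmd->DefaultEyeFov[eye], 0.5, 500.0, 1);

		glViewport(vp.Pos.x, vp.Pos.y, vp.Size.w, vp.Size.h);
		glMatrixMode(GL_PROJECTION);
		glLoadTransposeMatrixf((float*)proj.M);	/* ovrMatrix4f is row-major */

		/* query the head pose for this eye, and build the view matrix from
		 * its inverse, plus the per-eye offset (ViewAdjust before 0.4.4) */
		pose[eye] = ovrHmd_GetEyePose(hmd, eye);
		glMatrixMode(GL_MODELVIEW);
		glLoadIdentity();
		glTranslatef(eye_rdesc[eye].HmdToEyeViewOffset.x,
				eye_rdesc[eye].HmdToEyeViewOffset.y,
				eye_rdesc[eye].HmdToEyeViewOffset.z);
		quat_to_matrix(&pose[eye].Orientation, rot_mat);
		glMultMatrixf(rot_mat);
		glTranslatef(-pose[eye].Position.x, -pose[eye].Position.y, -pose[eye].Position.z);

		draw_scene();
	}

	/* back to the window system framebuffer; the SDK distorts and presents */
	glBindFramebuffer(GL_FRAMEBUFFER, 0);
	glViewport(0, 0, hmd->Resolution.w, hmd->Resolution.h);
	ovrHmd_EndFrame(hmd, pose, &fb_ovr_tex[0].Texture);
}
```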
I wrote a very simple test program (see screenshot at the top of this article), demonstrating all of the above and released it as public domain. You may grab the code from my mercurial repository: http://nuclear.mutantstargoat.com/hg/oculus2. To compile it you’ll need the Oculus SDK library, SDL2, and GLEW. The repo includes VS2013 project files to build on windows. As soon as the GNU/Linux version of the Oculus SDK is released, I’ll port it and push an updated version.
In closing, I just have to mention a new project I started last week called libgoatvr. It’s a VR abstraction library, which presents a simple unified API for VR programs, while supporting multiple runtime-switchable backends such as the oculus SDK and OpenHMD. I intend to base all my future VR projects on this library, so that I can switch back and forth between the various VR backends effortlessly, to try new features, avoid vendor lock-in, and stop relying solely on the whims of a proprietary SDK developer.
September 12, 2014 at 11:23 am
Nice article, thanks. Would like to see more stuff of this kind. I’d like to test it on my DK2 soon :)
October 18, 2014 at 12:48 am
Exactly what I was after, this is a great starter reference – cheers
November 25, 2014 at 10:33 pm
Nice, but immediate mode?
November 26, 2014 at 3:16 am
Immediate mode, yes indeed.
December 13, 2014 at 12:25 pm
Do you know of any good examples for programmable pipeline?
January 12, 2015 at 1:04 pm
Programmable vs fixed function pipeline has nothing to do with the drawing calls used (immediate mode/vertex buffer objects/etc). Also, neither of the above has anything to do with how to use the Oculus SDK, which is what this example is about.
December 23, 2014 at 1:05 am
Hi, does this sample work with Ubuntu 14.04 and 0.4.4 Linux SDK version?
December 23, 2014 at 11:00 pm
I don’t know, why don’t you try it and let us know? :)
January 11, 2015 at 1:15 pm
Hey fedel, did you try it? I am working with Windows. With the SDK 0.4.3 that example works fine
(thanks for the example btw).
But when I change to SDK 0.4.4, a lot of the functionality no longer works… hmd->WindowsPos for example returns {0,0}, and hmd->HmdCaps does not contain ovrHmdCap_ExtendedDesktop, as it does in SDK 0.4.3.
January 11, 2015 at 5:27 pm
I just ported the example to 0.4.4, so be sure to get the latest from the repo. Both of the things you mentioned still exist in 0.4.4, the only thing that changed is the name of a member of the ovrRenderAPIConfigHeader structure, which used to be RTSize and is now called BackBufferSize.
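So where the render configuration used to be filled in with RTSize up to 0.4.3, with 0.4.4 it’s just the field name that changes; roughly:

```c
ovrGLConfig glcfg;
memset(&glcfg, 0, sizeof glcfg);
glcfg.OGL.Header.API = ovrRenderAPI_OpenGL;
/* up to 0.4.3 this was: glcfg.OGL.Header.RTSize = hmd->Resolution; */
glcfg.OGL.Header.BackBufferSize = hmd->Resolution;	/* 0.4.4 */
glcfg.OGL.Header.Multisample = 1;
```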
January 11, 2015 at 10:07 pm
I think I also had to update the OVR Runtime to version 0.4.4. Now it is working!
But I also had to include windows.h for some functions, e.g. memset.
January 12, 2015 at 1:01 pm
memset is in string.h. You need windows.h because in 0.4.4, the SDK headers are using some win32 API types like HWND.
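I.e. something like this at the top of the file (windows.h needs to come before the SDK headers that use the win32 types):

```c
#include <string.h>		/* memset lives here, not in windows.h */
#ifdef WIN32
#include <windows.h>	/* HWND and other win32 types used by the 0.4.4 SDK headers */
#endif
#include <OVR_CAPI.h>
#include <OVR_CAPI_GL.h>
```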
January 12, 2015 at 5:35 pm
I am now trying to use the infrared camera.
I call ovrHmd_GetEyePoses to get the ovrTrackingState.
But I am not sure how to use the data I receive in that struct.
I think I have to change the OpenGL matrix depending on the CameraPose and HeadPose, right?!
But I am not sure how to do that.
January 1, 2015 at 11:50 pm
[…] the help of some resources I found on the web (here and here), I could get things to work but only as long as they were stand-alone programs with no […]
February 2, 2015 at 6:16 pm
Hey, I am using a 3rd party lib. From that lib, I call a method that draws objects with opengl.
Those objects appear behind the view field of the oculus!
I know that the lib uses (expects) a perspective projection and uses glOrtho(…) to fit its scene onto the screen. But for me, the method distorts my objects.
I would be happy to receive your help.
February 9, 2015 at 2:14 am
Your comment is very vague, and you didn’t actually ask a specific question. I’m not sure what you mean by saying “the lib uses a perspective projection and uses glOrtho to fit their scene into the screen”.
You need to set up projection and view matrices as provided by the Oculus SDK, and you have to render everything into a texture for each eye. How to accomplish that with your 3rd party library which does all the drawing, I cannot possibly know. You need to refer to the documentation of that library or contact the author.
March 21, 2015 at 3:36 pm
Can you run that example on mesa drivers?
It seems to cause the “[Context] Unable to obtain x11 visual from context” message for me and at least one other person, i.e. have you seen this? https://forums.oculus.com/viewtopic.php?f=34&t=16664&p=255384#p252973
I have played a bit with it, but I just don’t know the linux driver stack well enough to know why this happens.
Finding out what the problem is would be nice, because janus vr does not start on mesa, probably because of the same problem.
March 21, 2015 at 10:48 pm
Hello, I can’t remember if I tried running this with mesa. Probably not though; so far I’ve been using my main pc, which has an nvidia graphics card and nvidia’s proprietary drivers, exclusively for my VR experiments.
I’ll give it a go on my laptop, which has an integrated intel GPU, and I’ll follow up on that forum thread if I find anything interesting.
March 27, 2015 at 12:26 am
Thanks for looking. Can you reproduce it? Any good ideas how to fix it on the application side?
April 2, 2015 at 5:31 am
[…] for instance to run my oculus2 test program on the rift, I just have to do something […]
June 26, 2015 at 1:16 am
Hi there. Nice post and very helpful. Is there a reason why you don’t use the flag ovrProjection_ClipRangeOpenGL when retrieving the projection matrix via ovrMatrix4f_Projection?
June 26, 2015 at 12:59 pm
No, the only reason is that there wasn’t such a flag in the SDK version I used to write this initially, and in subsequent tweaks to work with newer SDK versions I didn’t notice it.
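For anyone reading this later: in the newer SDKs the last argument of ovrMatrix4f_Projection is a bitmask of projection modifier flags rather than a right-handed boolean, so the call would look roughly like this (a sketch assuming the 0.5.x signature):

```c
ovrMatrix4f proj = ovrMatrix4f_Projection(hmd->DefaultEyeFov[eye], 0.5f, 500.0f,
		ovrProjection_RightHanded | ovrProjection_ClipRangeOpenGL);
```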
June 26, 2015 at 4:20 pm
That makes sense. Thank you!
May 8, 2016 at 6:43 pm
I run your code but I get an exception.
I deleted the lines with the viewport.
Why is there an exception at
BOOL success = SwapBuffers(dc);
in capi_gl_distortionrender?
Thank you very much :)
May 8, 2016 at 11:39 pm
Let me pull out my crystal sphere… no I’m afraid the crystal sphere is cloudy and vague, like your question.
May 9, 2016 at 4:51 am
Oh, it works, but only in extended mode.
Additionally, it shows the scene only on my monitor and not on the oculus. That means the oculus keeps showing my desktop.
Please help me figure out why it doesn’t work in direct mode, and what I should do to show the scene on the oculus.
Thanks