Jakobsson87

Oculus Rift VR headset


I love the idea of the Oculus and playing in 3D, but there are a couple of things that are going to be a problem, and they have been mentioned...

1) It will be VERY hard to play without ANY view of the keyboard or other controls. (Anyone who claims to never look at the keyboard is lying or hasn't really tried.)

A simple solution would be to have a small gap at the bottom so you can see a bit of your desk.

2) Since stereo 3D requires rendering two different viewpoints, it puts roughly double the load on the GPU and extra load on the CPU too.

We'll have to wait for the final product to see if this will be practical without a Titan....

One day it will be possible and I'm looking forward to it.

3) Not to forget people wearing glasses... Such a device is absolute nonsense for them (i.e. me) :yay:
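Point 2 above can be sketched in a few lines. This is a hedged illustration, not Rift-specific code: the IPD value and function names are assumptions. It shows why naive stereo roughly doubles draw work: the same scene is submitted once per eye every frame, with only the camera shifted.

```python
# Illustrative sketch of naive stereo rendering: one full scene
# submission per eye per frame. IPD value and names are assumptions.
IPD = 0.064  # metres, a typical adult interpupillary distance

def eye_view_offset(eye):
    """Horizontal camera offset for the 'left' or 'right' eye."""
    half = IPD / 2.0
    return -half if eye == "left" else half

def stereo_draw_calls(scene_draw_calls):
    """Draw calls issued for one stereo frame: one full pass per eye."""
    issued = 0
    for eye in ("left", "right"):
        _offset = eye_view_offset(eye)  # shift the camera, not the scene
        issued += scene_draw_calls      # every visible object drawn again
    return issued
```

So a scene costing 1000 draw calls in mono costs about 2000 per stereo frame under this naive model; later posts in the thread argue the real-world overhead is lower because some work is shared between eyes.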

> 3) Not to forget people wearing glasses... Such a device is absolute nonsense for them (i.e. me) :yay:

Most goggles (and I guess this one too) have adjustable lenses inside, so you don't need to wear your glasses while using it.


Directly from Oculus Rift FAQ:

> Can I wear glasses while using the Oculus Rift developer kit?
>
> This really depends on the shape and size of the glasses. The developer kit is designed to sit as close to your eyes as possible which makes it a bit unfriendly for glasses. That said, we’ll do everything we can to make it as comfortable as possible for the developer kit and we have a lot of great ideas for supporting glasses in the consumer version (especially since huge portion of the Oculus team wears glasses every day!).

> 3) Not to forget people wearing glasses... Such a device is absolute nonsense for them (i.e. me) :yay:

Yes, I imagine it will either fit over glasses or will have a diopter adjustment.

> 3) Not to forget people wearing glasses... Such a device is absolute nonsense for them (i.e. me) :yay:

The developer version comes with 3 interchangeable eye-cup lenses to accommodate people with different visual acuity. It also has adjustable clearance, precisely so that you can wear glasses under the goggles if you prefer. And that's just the developer prototype, never mind the consumer version.

http://www.kickstarter.com/projects/1523379957/oculus-rift-step-into-the-game/posts/398230

Edited by Deepfried


Would love to see A3 support 3D devices and I would get an Oculus Rift ASAP if it did!

/KC

> Well you are still rendering twice the frames to get to a comparable single-screen fps. The rest is semantics.

> In my first-hand experience, stereoscopic 3D is demanding. I haven't gathered any hard data but I'd say it's damn near twice the GFX horsepower to run the thing at the same framerate.

1) It's not twice the frames, because it's still 60 fps, and rendering for both eyes on each frame does not mean rendering twice. That theory would mean that adding a reflective surface would require 2x rendering for each surface. That's just not how it works.

2) "The rest is semantics" suggests that semantics are not significant, but we ARE talking about computer technology, where semantics are everything.

3) Your experience with stereo graphics has almost certainly been alternating frames: either 2x30 WAY back with LCD shutter glasses, or 2x60 (which is VERY demanding). The Rift uses neither, AND it only renders at 720p currently, with 1080p likely next year for the public release.

As a developer who is getting the dev kit in days, works on the Leap Motion, and will work with the Myo as soon as it arrives (and consequently is attending Google I/O for the 3rd time this year), I assure you that I know what I'm talking about.

There are significant hurdles to implementing the Rift SDK for an app that wasn't designed with its requirements in mind, but that doesn't mean all those hurdles will be so tall either. Good design patterns lend themselves to easier implementation, like a properly scaling HUD (which Arma has had for some time) and independent head tracking (which Arma has also had for some time).

In other words, Arma is a great candidate for the Rift, AND it's still in Alpha, so there's hope still.
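For reference on the "720p" point: the dev kit drives a single 1280x800 panel split side-by-side, so each eye gets its own half of one framebuffer rather than alternating full frames. A minimal sketch of the per-eye viewport rectangles (the function name is mine, not SDK API):

```python
# Dev-kit panel split side-by-side: each eye renders into half of one
# 1280x800 framebuffer. Rect is (x, y, width, height).
PANEL_W, PANEL_H = 1280, 800

def eye_viewport(eye):
    """Viewport rectangle for the 'left' or 'right' eye."""
    w = PANEL_W // 2
    x = 0 if eye == "left" else w
    return (x, 0, w, PANEL_H)
```

Both halves are filled within the same 60 fps frame, which is why the poster distinguishes this from the old alternating-frame (2x60) schemes.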

> There are significant hurdles to implementing the Rift SDK for an app that wasn't designed with its requirements in mind, but that doesn't mean all those hurdles will be so tall either. Good design patterns lend themselves to easier implementation, like a properly scaling HUD (which Arma has had for some time) and independent head tracking (which Arma has also had for some time).

> In other words, Arma is a great candidate for the Rift, AND it's still in Alpha, so there's hope still.

That being said, not sure if it was linked in the 11 pages of this thread, but games like Mirror's Edge, Half-Life 2, and Crysis have successfully been modded to run flawlessly using the Oculus Rift...and they were by no means designed with the device in mind:

And supposedly all of these games:

> 1) It's not twice the frames, because it's still 60 fps, and rendering for both eyes on each frame does not mean rendering twice. That theory would mean that adding a reflective surface would require 2x rendering for each surface. That's just not how it works.

> 2) "The rest is semantics" suggests that semantics are not significant, but we ARE talking about computer technology, where semantics are everything.

> 3) Your experience with stereo graphics has almost certainly been alternating frames: either 2x30 WAY back with LCD shutter glasses, or 2x60 (which is VERY demanding). The Rift uses neither, AND it only renders at 720p currently, with 1080p likely next year for the public release.

> As a developer who is getting the dev kit in days, works on the Leap Motion, and will work with the Myo as soon as it arrives (and consequently is attending Google I/O for the 3rd time this year), I assure you that I know what I'm talking about.

> There are significant hurdles to implementing the Rift SDK for an app that wasn't designed with its requirements in mind, but that doesn't mean all those hurdles will be so tall either. Good design patterns lend themselves to easier implementation, like a properly scaling HUD (which Arma has had for some time) and independent head tracking (which Arma has also had for some time).

> In other words, Arma is a great candidate for the Rift, AND it's still in Alpha, so there's hope still.

And how exactly do you achieve a stereoscopic effect if you don't render the scene twice from 2 separate viewpoints/cameras?

Can you also quickly elaborate on how a reflective surface is rendered (like water or a mirror)? I mean real reflections and not cubemaps.

> not everyone wants an Oculus Rift. not everyone wants to play Arma 3. not everyone wants ice tea.

Why can't I +1 or "thumbs up" this post? :cool:

---------- Post added at 00:25 ---------- Previous post was at 00:08 ----------

> And how exactly do you achieve a stereoscopic effect if you don't render the scene twice from 2 separate viewpoints/cameras?

> Can you also quickly elaborate on how a reflective surface is rendered (like water or a mirror)? I mean real reflections and not cubemaps.

I didn't say it isn't rendered twice from 2 viewpoints (just like a reflective surface is); I said "it's not twice the frames, because it's still 60 fps". More polys may be rendered per frame (assuming NO allowances are made, which is generally not the case), and 2 logical scenes are rendered, but just as with a reflective surface such as a mirror, it comes down to viewports and the way a rendering pipeline works (vectors passed through matrices). It doesn't matter if you have 12 perspectives logically; all that matters is how many vectors (normals, and all the other goodies) are translated, scaled and rotated a given number of times.
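The shared-versus-per-eye split being argued here can be put into a toy cost model. The millisecond figures below are made-up assumptions purely for illustration; the point is only that work done once per frame (scene traversal, culling, animation) is not repeated per eye, so stereo lands somewhere between 1x and 2x:

```python
# Toy cost model: shared passes run once per frame, per-view passes
# run once per eye. All numbers are illustrative assumptions.
def frame_cost_ms(shared_ms, per_view_ms, views):
    return shared_ms + per_view_ms * views

mono = frame_cost_ms(shared_ms=4.0, per_view_ms=8.0, views=1)    # one camera
stereo = frame_cost_ms(shared_ms=4.0, per_view_ms=8.0, views=2)  # two eyes
# stereo/mono comes out ~1.67x here, not 2x, since shared work isn't repeated
```

The larger the shared portion of the frame, the further below 2x the stereo cost falls, which is the crux of this poster's disagreement with the "twice the frames" framing.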

You're likely picturing a far over-simplified model of 3D graphics that hasn't existed for 15+ years; the methods that replaced it are what made MASSIVE worlds like those in Arma 3 possible. Consider this: do you think every vector (that's an x,y,z coordinate triple, if you didn't know) is processed in the entire virtual world for every frame? How do you know which to render? Do you check every one to see what's going to be visible (occlusion)? No, the process is FAR more complex and efficient now than I could begin to explain here, largely because I don't know the intricacies anymore. I lost track around D3D5 (if not a bit before that).

Now if you want to learn, I'll google for you, but if you'd consider taking my word for it, here's why you might. In 1993 I was working on a gifted Zenith Data Systems 80x86 (33 MHz with math co-processor and 256K RAM, if I recall correctly) running DOS 3.0 (I didn't have access to 6.0 yet), and I was coding in GW-BASIC (NO IDE). Eventually I graduated to DOS 6 with QBasic, where I began writing my own graphics routines to overcome those missing or horrifically slow in QB 4.5. To do that, I was calling out to assembly-language routines that used peek/poke to work with RAM directly (the portion in DOS allocated specifically for graphics), in EGA, which was the best that machine could do.

I built my own pipeline that could load a vector-based model from a plain-text file, then rotate/translate/scale that model (technically speaking, the view is modified and the model is processed through the modified view), then translate it a final time into screen coordinates with perspective. THAT was in 1993.
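That rotate-then-project pipeline fits in a few lines today. A hedged toy version (the 320x200 screen, focal scale, and camera distance are my illustrative constants, not the original code):

```python
import math

# Toy 1993-style vertex pipeline: rotate a model-space point, then
# perspective-project it to screen coordinates. Constants are
# illustrative assumptions.
def rotate_y(p, angle):
    """Rotate point p = (x, y, z) about the vertical axis."""
    x, y, z = p
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

def project(p, screen_w=320, screen_h=200, focal=160.0, cam_dist=5.0):
    """Perspective divide: screen offset shrinks as depth z grows."""
    x, y, z = p
    z += cam_dist                      # push the model in front of the camera
    sx = screen_w / 2 + focal * x / z
    sy = screen_h / 2 - focal * y / z  # screen y axis points down
    return (sx, sy)
```

A point at the model origin lands dead centre of the screen; everything else is that same matrix-style transform applied per vertex, which is exactly the "vectors passed through matrices" view of rendering described above.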

Eventually I started dabbling in OGL and D3D, but they were WORLDS apart, and by the time I got my version of a "hello world" app done in each, I realized that I was no longer going to be able to "program games" for a living, because it was going to be done by massive teams. I'd say that was around 1999.

14 years later (longer than a good chunk of the users on this forum have been alive), CGI has evolved so far that even a SINGLE aspect of it usually requires multiple people for a decent-sized project. So I know enough to know that your notions are off, but not enough to give a terrific explanation. Even if I did, I've already spent too much time saying this much, so I'll have to leave it at this, and you can think whatever you like, but the fact remains that the stereoscopy approach used by the Rift doesn't require 2x processing like most others. It's not a 1x load either, because if nothing else the Rift offers a 110-degree FOV, where most games won't even let you go over 90 degrees, and few people do so anyway when their monitor represents a portal of about 45 degrees (depending on how close you stick your face to it and how large it is).

Hope that provided some clarification. If not, well, I tried.

---------- Post added at 00:27 ---------- Previous post was at 00:25 ----------

> That being said, not sure if it was linked in the 11 pages of this thread, but games like Mirror's Edge, Half-Life 2, and Crysis have successfully been modded to run flawlessly using the Oculus Rift...and they were by no means designed with the device in mind:

Good point, and it makes sense, since those are major titles that 1) have done significant work to support stereoscopy already and 2) are quality titles to begin with.

More reason to have hope because Arma is most definitely a quality title and I suspect that BI has already done stereo work in other projects they've been involved in ;)

UPDATE: The video there of Half-Life must be understood carefully: "Head and gun tracking mod for the Rift". That likely means the Rift tracking can feed the view in the game, but if you look at the monitor they show while he's playing, and the Fraps feed, that isn't Rift-ready in the sense that it won't render in stereo for the Rift, which is the biggest challenge because the rendering pipeline has to be modified. I think if we're lucky, nVidia and ATI will see success or potential in the Rift and build support into the drivers. nVidia has had a framework for more than 10 years that allows for all sorts of stereo rendering methods, so it wouldn't be a huge challenge for them to take care of the hardest part, but games will still have to be modified to look good (wide FOV, scaled HUD, etc.).

Watching the Crysis video, I see no reason to think any differently than about the Half-Life one. The rendering aspect isn't dealt with; they're just demonstrating that head/gun separation exists, which is still great. I'm just clarifying.

Edited by rainabba

> That being said, not sure if it was linked in the 11 pages of this thread, but games like Mirror's Edge, Half-Life 2, and Crysis have successfully been modded to run flawlessly using the Oculus Rift...

No they haven't; only two games have full Rift support as of this time, and those are TF2 and Hawken. It's not as simple as implementing the warped stereoscopic shaders or injection drivers, and only partial head tracking can be modded in. For example, you won't have support for roll, and even then this says nothing of the latency. Beyond the technical implementation, you then have to think about in-game interfaces and menus, and cutscenes or sections where camera control is taken away from the player. Then you have to consider perspective (how high off the ground is the player's head?) and scale; many games are created with assets at warped scales to display better on a flat 2D monitor, which might just look bad in VR.

For "flawless Rift support", all of this has to be coded in (or out) by the developer. To be perfectly frank, retrospective support for the Rift will be a far cry from what you will experience with a game designed from the ground up for it, and tbh the Rift (and VR in general) isn't that well suited to competitive FPS.


With regards to having too many controls, I find that voice controls work pretty well to eliminate some of the need for complicated key combos and repetitive work. It's a little work to get it set up just right, but it works remarkably well, particularly when dealing with commanding a squad of AI. So I imagine that combined with VR it would be pretty sweet!


At any distance over ~12 m, stereoscopy accomplishes very little. That's one of the reasons 3D TVs suck so badly. The greatest immersive effect comes from the changes in perspective caused by small movements of the head.

I'd be quite confident that stereoscopic rendering of the player and weapon (in first person), but single-rendering of the environment, would accomplish 80%+ of the results required, with minimal impact on rendering load.
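The ~12 m figure can be sanity-checked with basic geometry. The binocular parallax angle for a point at distance d is roughly 2·atan(IPD / 2d), which falls off as 1/d; the 64 mm IPD below is a typical adult value, used here as an assumption:

```python
import math

# Binocular parallax angle versus distance: 2 * atan(IPD / (2 * d)).
IPD = 0.064  # metres, typical adult interpupillary distance (assumption)

def parallax_deg(d):
    """Angle (degrees) between the two eyes' lines of sight at range d."""
    return math.degrees(2.0 * math.atan(IPD / (2.0 * d)))

arm_length = parallax_deg(0.5)   # ~7.3 degrees: a strong depth cue
twelve_m = parallax_deg(12.0)    # ~0.3 degrees: very little cue left
```

Going from arm's length to 12 m, the angle drops by a factor of ~24, which is consistent with stereo depth mattering mostly for the weapon, cockpit, and other nearby geometry.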

As for seeing the controls: with a gaming mouse and keyboard you can do without ever looking at the keyboard in game. My G600 and G510 combined do the job admirably; I never need to lift my hands from either. I know not everyone has gaming mice and keyboards, but if you're in the market for a Rift (or an equivalent face-hugging display), you're more likely to be someone who goes for the peripherals that let you use it properly.

> Yes, I imagine it will either fit over glasses or will have a diopter adjustment.

This won't work with a corneal irregularity, only for myopia or hyperopia...


> I'd be quite confident that stereoscopic rendering of the player and weapon (in first person), but single-rendering of the environment, would accomplish 80%+ of the results required, with minimal impact on rendering load.

If you're concerned about the performance impact of stereoscopic "rendering" then don't be, because nothing is actually rendered twice; you're just creating two views of assets that have already been rendered. The performance hit is more like 20% than 100%.


That's good to know. My mental model of 3D rendering is certainly closer to the circa-1992 procedure. It's an effort for me to remember that the process has evolved since then, and that GPUs aren't just crunching matrices faster than before :)

I'm not too concerned though. My point was mostly in response to the focus (unintentional pun, truly) people seem to have on stereoscopy being the only thing relevant to 3D rendering. Rapid response to small POV changes from head movements, and the focal distance of each eye, are much more relevant. The first is addressed by the Rift very well. The second is a very difficult technical problem to overcome (which is why no one's done it, I guess).


The Rift needs to have a good ocular setup per eye. It's not enough to specify interocular distance; you have to allow that ONE eye be shifted over while the other stays where it is. Otherwise, ironsights and other collimated sights won't work properly when you raise the weapon. You'll have the weapon aligned with your nose :)
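The per-eye setup asked for here can be sketched as a small config function. Everything below is illustrative (my own names and values, not any Rift or Arma API): instead of always splitting the interocular distance symmetrically, each eye carries its own horizontal offset, so while aiming, the dominant eye can sit right on the weapon's sight axis:

```python
# Sketch of per-eye camera offsets: symmetric split normally, but the
# dominant eye snaps onto the sight line while aiming. Names and
# values are illustrative assumptions.
def eye_offsets(ipd, aiming=False, dominant="right"):
    """Return (left_eye_x, right_eye_x) camera offsets in metres."""
    if not aiming:
        return (-ipd / 2.0, ipd / 2.0)  # normal symmetric split
    if dominant == "right":
        return (-ipd, 0.0)              # right eye on the sight line
    return (0.0, ipd)                   # left eye on the sight line
```

The full separation is preserved either way; only the midpoint moves, which is what keeps the sight aligned with one eye rather than with the bridge of your nose.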

> The Rift needs to have a good ocular setup per eye. It's not enough to specify interocular distance; you have to allow that ONE eye be shifted over while the other stays where it is. Otherwise, ironsights and other collimated sights won't work properly when you raise the weapon. You'll have the weapon aligned with your nose :)

This is the most important part that BI has to consider if they ever try to support the Oculus Rift. And if they get that right, they still have to create a proper in-game UI for everything from weapon selection to the action menu to the squad command interface, which may be even harder than the first part.


If this happens, we can finally use collimator sights the way they are supposed to be used :)

edit: The squad interface and more can be solved with VAC. It's quick and intuitive, and fun!

> If this happens, we can finally use collimator sights the way they are supposed to be used :)

> edit: The squad interface and more can be solved with VAC. It's quick and intuitive, and fun!

VAC is not the be-all-and-end-all answer to the problems the UI has, but that's another topic.


Agree. The UI must work with or without VAC. For some reason I mixed it up with the "hard to see the keyboard" issue.

> Agree. The UI must work with or without VAC. For some reason I mixed it up with the "hard to see the keyboard" issue.

Well, it is related; a better UI that gives visual references and reduces keyboard presses (or at least the need to move your hands around too much) is very important for a good VR experience.


Other than commanding AI, I really don't see the problem. I think most PC gamers can find the WASD keys, and if you can find those then it's pretty easy to find QERFZXC, [Tab], [L.Shift], etc. without looking ;) Also many people have mice with several additional buttons (other than M1, M2, M3) as well, so that further reduces the dependency on the keyboard.

Certainly the UI has room to be improved. I wouldn't mind that at all! However, in my opinion the real solution is AI that doesn't need to be micro-managed, thus further reducing the need to access a complicated set of key combos. For example, it sure would be nice if my squad medic would heal injured members without me having to select him, press action, heal soldier, etc.


They've already shipped a lot of the developer kits, and the tests/reviews on YouTube show people are really impressed. It's the beginning of the Matrix. What a wonderful thing to be alive to experience it after watching the movie and dreaming about it.

If you are a smart developer you will hop on the Oculus Rift immediately, as just being one of the first multiplayer games to support it would alone bring a huge audience. Especially with the moddability of Arma, it would complement it so well: the other half of the Matrix.

What's interesting to me is that Arma 3 already seems designed for it, in a way: when you're running, you can use the controls to look around with your gun pointed somewhere else, something I haven't seen built into any other shooter.

So maybe this will make it a reality sooner.

Even though the resolution isn't really amazing on the first version of the Rift, I still want to mess with one in Unreal & Unity, but without a multiplayer environment like Arma it's not as good.

Edited by CyberpunkDev

