
HTC Vive Delay the Result of a ‘Very, Very Big’ Breakthrough to be Shown at CES

Virtual reality (VR) fans were disappointed to learn earlier this month that the upcoming HTC Vive head-mounted display (HMD) had been delayed. Instead of the initially proposed limited launch before the end of the year, followed by a wider release in Q1 2016, the kit will now arrive next April. Fans have speculated as to the reason for the delay, but HTC has now revealed the official reasoning: a ‘very, very big’ breakthrough on the technological front.


That is according to HTC CEO Cher Wang, who said as much at this week’s HTC Vive Unbound event in Beijing, China. According to Engadget, the delay came because of “a very, very big technological breakthrough”, apparently so significant that the company decided not to release the original design for the consumer version of the device. “We shouldn’t make our users swap their systems later just so we could meet the December shipping date,” Wang said, later teasing that the breakthrough should be revealed at CES 2016, running 6th – 9th January.

Just what this breakthrough could be is anyone’s guess, although supposedly leaked images of the consumer version of the device hit the internet this week, showing the HMD now including a front-mounted camera and the SteamVR controllers undergoing a significant redesign. The HTC Vive is already credited with one big breakthrough in the Room Scale user tracking provided by SteamVR. Though its main rival, the Oculus Rift, also features positional tracking, HTC and partner Valve were the first to unveil room-scale tracking, allowing players to move around a space of up to 15 feet by 15 feet and have those movements replicated within the given experience.

For the latest updates on the HTC Vive, keep reading VRFocus.

17 comments
  1. Interesting. My hunch is they’re using that camera in conjunction with the Lighthouse scanners to physically map all the objects in the room so rather than simply getting the white grid pattern for the four walls, it’ll afford a much more accurate look at all the potential stuff you may have in your den or computer room which can then be rendered in VR as you move about your environment. It also makes me think this’ll push the cost to the $499 to $599 range.

    1. Doubtful. What you describe would require a software update, not a hardware update, so it doesn’t explain the delay. It also wouldn’t increase the price, since the hardware wouldn’t have changed.

      That said, who would want to see their room within VR? That is what AR is for; with VR we expect the whole world around us to change into whatever world the developers have envisioned!

  2. Should be 4k screen. Otherwise it already has everything. I hope Oculus does the same thing. 4k screens will make the first wave of consumer VR epic. It’s really the only thing they’re missing.

    1. And how many will be able to run VR in 4K, or be willing to invest in a system that can? At this point in time, 4K for VR is financial suicide.

      1. No. 4K would be a big improvement without any noticeable performance impact if you render at the same resolution as before. The image will be warped and rescaled in any case, and this is a cheap operation.

        You get much less screen-door effect by using a higher resolution. You get a much clearer HUD if you render the HUD/text at a higher resolution than everything else. Having a higher output resolution also reduces aliasing from warping and scaling thin lines.

        The human eye is not good at colour resolution; it is much better at brightness resolution. This may open up additional techniques at high res. E.g. if you have an edge between two polygons that occurs within one pixel, you only need two colour samples, one for each triangle (this is where pixel shaders live), but for good anti-aliasing you need many coverage samples; these are cheap. These coverage samples tell you which colour subpixel to use at which subpixel location. Normally, when finished rendering, you blend x samples of one colour with y samples of another and get exactly one pixel, which is output to the screen. But before you blend down, you have the information to generate additional pixels and define the edge more clearly.

  3. I live in Bellevue; the guy on the next street over from me is lead programmer on the Source engine. I may have to run some black ops into the new Valve building. They like me there, so I may have something soon.

  4. It won’t be 4K. We don’t have computers that can do 4K in VR, by and large. 4K is over three times as many pixels as the Vive’s 2160×1080. And 4K in VR is also much more difficult than 4K on a 2D screen (for one thing, both the Vive and Rift over-render by 40% in each dimension, so that’s actually DOUBLE the number of pixels you need to push compared with regular 4K gaming, not to mention rendering from two different camera angles, at 90 frames per second, with head-tracking).

    It also won’t be wireless, eye tracking or a wider FOV. I think more detailed room tracking or finger tracking are likely. I’m not going to get my hopes up very high, however.

  5. Saying that Oculus offers the same room-scale tracking feature is very imprecise:
    room scale makes sense only with the Touch controllers, and Oculus only tracks if the user is facing the cameras. In other words: Oculus is a 180-degree experience, while the HTC Vive is the only one to offer a real 360-degree, room-scale experience.

  6. It could also be built-in augmented reality. If they manage to do that, they will start to move towards the Microsoft HoloLens. It would be awesome if you could play a game in VR, then turn on AR and walk around with the game now out on a wall in your home, like a TV, just like the HoloLens.

    Or it could be as simple as totally, 100% wireless tech.

  7. Hopefully the big breakthrough is finger tracking integrated with the HTC Vive… or, at least, additional Lighthouse sensors that allow for some body tracking (especially the arms).

  8. I hope it has something to do with how we perceive depth of field and how our eyes focus on objects.

  9. It could be a mini GPU built into the unit itself that handles the two images. If they can do that, then 4K isn’t out of the question. I’m not sure whether it’s possible, either.
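As an editorial aside on comment 4, its pixel arithmetic checks out. A quick sketch (the 2160×1080 panel figure and the ~40% per-dimension over-render factor are taken from the comments; the variable names are illustrative):

```python
vive_panel = 2160 * 1080        # Vive's combined panel resolution, both eyes
uhd_4k = 3840 * 2160            # "4K" UHD
ratio = uhd_4k / vive_panel     # ~3.56: "over three times as many pixels"

overrender = 1.4                # ~40% extra in each dimension, per the comment
vr_4k_pixels = uhd_4k * overrender ** 2   # pixels actually rendered per frame
vs_flat_4k = vr_4k_pixels / uhd_4k        # ~1.96: roughly DOUBLE flat 4K

print(f"{ratio:.2f}x the Vive panel, {vs_flat_4k:.2f}x flat 4K per frame")
```

And that is before doubling up for two camera angles and sustaining 90 frames per second, which is the commenter’s point.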

Comments are closed.
