Can a 3D interface be as efficient as the existing 2D paradigm?
The current state of VR GUIs leaves me with an unmistakable feeling of… meh. Endless panes of glass with unimaginative drawings and scribbles dot the VR landscape. We have an entire extra dimension and four more degrees of freedom, yet we still cling to the existing 2D paradigm. Is this the best we can do? I hope not.
Part of me says the general lack of innovation in VR GUIs is just an input problem, one that can't be solved until we figure out input. That part of me is dying a little more each day as input comes into its own. To date, the best iteration and winner of the Proto GUI award is TiltBrush. While it is a fun game, and nice to look at, the GUI is still essentially flat images… interchangeable with any 2D interface.
When I first started developing the VR internet, I pictured swirling lattices of information and tree-like nodes manipulated with flailing arms (PalmerLuckeyWILDSPECULATION.gif). It soon became evident that this style of interaction is best left to the world of science fiction. Everything I've tried so far has been wildly inefficient, physically demanding, too convoluted to be intuitive, or suffering from a bad case of information overload.
Picture your file browser… now picture how it would be structured in 3D space. Now imagine trying to upload an image from a folder with thousands of images on your computer to a website, say Wikipedia, in VR. How will the folder structure work? How will you jump from the wiki page to the file browser and back?
There are some obvious answers for some forms of data. For example, 3D bar graphs for time-based stock tickers, or a globe of the Earth for location data. Other forms are incredibly hard: displaying the comment tree of a Reddit thread, the previously mentioned file browser, or the internet as a whole. I can tell you right away that the sci-fi trope of rendering it all as a vast city will not work.
The Human Component
Wacky waving inflatable arm flailing tube man is NOT ideal for VR. Again, this seems to be the rule in the world of science fiction, but in practice it is entirely impracticable, dangerous to those around the tube man, and inefficient compared to mouse and keyboard. On the other end of the spectrum, mouse and keyboard lack engagement, are immersion-breaking and, worst of all, impossible to find with an HMD on your face. The elusive sweet spot is somewhere in the middle, somewhere between lazy and engaging.
We have mathematically derived laws for how humans interact with any 2D GUI, based on human physiology and psychology. Fitts's law, Hick's law, the Accot-Zhai steering law, and others dictate everything from how wide a menu should be, to how long a list of items should be, to the whitespace on a page and nearly every other interface concern one might have. We don't have these for the 3D world… at least not in the world of VR. Most of these laws are probably a good starting point, but many of them assume the constraints of reality (a.k.a. physics) or of a 2D interface. Simply put, VR developers are in uncharted waters in a paper boat. It floats… but it's cheap and won't last.
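To make the "starting point" concrete, here is a minimal sketch of Fitts's law in its common Shannon formulation. The intercept and slope values below are hypothetical placeholders; in real studies they are fit empirically per input device and user, and whether (or how) the law transfers to 6DOF VR pointing is exactly the open question.

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predict the time (in seconds) to acquire a target of `width`
    at `distance`, using the Shannon formulation of Fitts's law:

        MT = a + b * log2(D / W + 1)

    `a` (intercept) and `b` (slope) are placeholder constants here;
    real values come from regression on measured pointing data.
    """
    index_of_difficulty = math.log2(distance / width + 1)  # bits
    return a + b * index_of_difficulty

# The law's basic prediction: a far, small target takes longer
# to hit than a near, large one.
near_big = fitts_movement_time(distance=100, width=50)
far_small = fitts_movement_time(distance=500, width=10)
```

A 2D designer uses this to size and place buttons; a VR designer has no agreed-upon equivalent that accounts for depth, head movement, and arm fatigue.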
The Final Frontier
Efficiency is key. The GUI for VR MUST be even more efficient than the 2D interface. I say more efficient, because it is still a hassle to put on a bulky HMD. Your OS and the internet have had decades to figure this out. Hopefully it doesn't take decades for VR interface design, assuming such efficiency is even possible. If there is no valid solution to this problem… then I fear VR will always remain a niche product for gamers. If the world wants VR, then developers MUST find these solutions. For VR to go mainstream, we need VR operating systems, VR internet browsers, VR offices, and everything else people use a computer for.
It's the wild west in GUI design. There is a fortune to be had for those who figure out the rules.