Joho the Blog: VR Archives

December 3, 2016

[liveblog] Stephanie Mendoza: Web VR

Stephanie Mendoza [twitter:@_liooil] [Github: SAM-liooil] is giving a talk at the Web 1.0 conference. She’s a Unity developer.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

WebVR, a 3D in-browser standard, is at 1.0 these days, she says. It’s cross-platform, which is amazing because it’s hard to build for the Web, Android, and Vive. It’s “uncharted territory” where “everything is an experiment.” You need an experimental build of Chromium, the open-source version of Chrome, to run it. She uses A-Frame to create in-browser 3D environments.

“We’re trying to figure out the limit of things we can simulate.” It’s going to follow us out into the real world. E.g., she’s found that simulating fearful situations (e.g., heights) can lessen fear of those situations in the real world.

This crosses into Meinong’s jungle: a repository of non-existent entities in Alexius Meinong’s philosophy.

The tool they’re using is A-Frame, which is an abstraction layer on top of WebGL, Three.js, and VRML. (VRML was an early 3D markup standard for the Web that didn’t get taken up much because the browsers didn’t run it very well. [I was once on the board of a VRML company, which also didn’t do very well.]) WebVR works on Vive, High Fidelity, Janus, the Unity Web player, and YouTube 360, under different definitions of “works.” A-Frame is open source.

Now she takes us through how to build a VR Web page. You can scavenge for 3D assets or create your own. E.g., you can go to Thingiverse and convert the files to the appropriate format for A-Frame.

Then you begin a “scene” in A-Frame, which lives between <a-scene> tags in HTML. You can create graphic objects (spheres, planes, etc.). You can interactively work on the 3D elements within your browser. [This link will take you to a page that displays the 3D scene Stephanie is working with, but you need Chromium to get to the interactive menus.]
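To give a flavor, here is a sketch of the sort of page she’s describing — not her actual code, and the A-Frame version in the script URL is just illustrative:

```html
<html>
  <head>
    <!-- The A-Frame library; the version here is illustrative -->
    <script src="https://aframe.io/releases/0.3.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- Primitives: a sphere, a box, a ground plane, and a sky color -->
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```

Open that file in a WebVR-capable browser and you get a navigable 3D scene without writing any JavaScript at all.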

She goes a bit deeper into the A-Frame HTML for assets, light maps, height maps, and specular maps, all of which are mapped back onto much lower-polygon-count meshes. Entities consist of geometry, light, mesh, material, position, and raycaster, plus your own extensions. [I am not attempting to record the details, which Stephanie is spelling out clearly.]
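In A-Frame’s entity-component style, those pieces become attributes on an <a-entity> tag. A sketch (the texture filenames are placeholders):

```html
<a-scene>
  <a-assets>
    <!-- Preloaded textures; the filenames are placeholders -->
    <img id="rock-texture" src="rock.jpg">
    <img id="rock-normal" src="rock-normalmap.jpg">
  </a-assets>

  <!-- An entity composed of geometry, material (with a normal map), and position -->
  <a-entity geometry="primitive: sphere; radius: 1"
            material="src: #rock-texture; normalMap: #rock-normal"
            position="0 1 -3"></a-entity>

  <!-- Light is a component too -->
  <a-entity light="type: point; intensity: 1.5" position="2 4 4"></a-entity>
</a-scene>
```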

She talks about the HTC Vive. “The controllers are really cool. They’re like claws. I use them to climb virtual trees and then jump out because it’s fun.” Your brain simulates gravity when there is none, she observes. She shows the A-Frame tags for configuring the controls, including grabbing, colliding, and teleporting.
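The controller markup she’s showing boils down to a couple of tags. In the sketch below, vive-controls is a stock A-Frame component, but sphere-collider, grab, and teleport-controls come from community add-ons (aframe-extras and aframe-teleport-controls), so take the exact names as an assumption about her setup:

```html
<a-scene>
  <!-- One entity per tracked Vive controller -->
  <a-entity vive-controls="hand: left"></a-entity>

  <!-- Community components (assumed): a collider plus grab, and teleporting -->
  <a-entity vive-controls="hand: right"
            sphere-collider="objects: .grabbable"
            grab
            teleport-controls></a-entity>

  <!-- Something to grab -->
  <a-box class="grabbable" position="0 1 -2" color="#4CC3D9"></a-box>
</a-scene>
```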

She recommends some sites, including NormalMap, which generates normal maps from uploaded images and lets you download the results.


Q: Platforms are making their own non-interoperable VR frameworks, which is concerning.

A: It went from art to industry very quickly.


September 21, 2016

[iab] Robert Scoble

I’m at an IAB conference in Toronto. The first speaker is Robert Scoble, whom I haven’t seen since the early 2000s. He’s working at UploadVR, which gives him “a front row seat on what’s coming.”

WARNING: Live blogging. Not spellpchecking before posting. Not even re-reading it. Getting things wrong, including emphasis.

The title of his talk is “The Fourth Transformation: How AR and AI change everything.”

First: The PC.

Second: Mac and GUI. Important companies in the first went away.

Third: Mobile and touch. Companies from the second went away.

We’re now getting a taste of the fourth: Virtual Reality and Augmented Reality. Kids take to VR naturally and with enthusiasm, he notes.

“Most people in the world are going to experience VR with a mobile phone because the cost advantages of doing that are immense.” This Christmas Google will launch its Tango sensors, which map the world in 3D. Early games for the Tango phone will give a taste of AR: mapping the physical space and putting virtual things into it. Robert shows what’s possible with the Tango phone. Retail 411 is working on bringing you straight to the product you want in a physical store. This tech will let us build new games, but also, for example, put a virtual blue line on a floor to show you where your meeting is. Or, in a furniture store, it can show you how the items would look in your home.

Robert calls AR “Mixed Reality” because he thinks AR refers to the prior generation.

Vuforia was designed for mobile phones, placing virtual objects in real space. But soon we’ll be doing this with glasses, Robert says. Genesis [?] puts a virtual window on your wall. Click on it, and zombies crawl through it and come toward you.

Magic Leap got huge investments because the optics of the glasses they’re building are so good. He points out that the system knows to occlude virtual images behind intervening real-world objects, e.g., the couch between you and the zombie.

He shows a Hololens app preview. Dokodemo Teleportation Door, made in Unity. You place a door on the ground. Open it. There’s a polygonal world inside it. Walk through the door and you’re in it.

Robert says Apple ditched the headphone jack in order to put advanced audio computing in your head, replacing ambient sound with processed sound that may include virtual audio.

Eyefluence builds sensors for eyes. Robert shows video of someone navigating complex screens of icons solely with his eyes. “Advertisers will be able to build a new kind of billboard in the street and know who looked at it.” [Oh great.]

ActionGram puts holograms into VR. [If you need a tiny George Takei in your living room — and who doesn’t? — this is for you.]

SnapChat bought a company that puts a camera in glasses. SnapChat is going to bring out a connected camera. It could be the size of a sugar cube.

Sephora has an app that shows you how their makeup looks on your face, color-matched.

Robert talks about the effect on sports. E.g., NASCAR has 100+ sensors in cars already. Researchers are putting sensors in NFL players’ tags for “next gen stats.”

“We’re in the Apple II stage” of this. It wasn’t great, but it kicked off trillion-dollar industries. Robert’s been told that we’re two years away, but says maybe it’s four years. “The new Ford cars are all built in virtual reality…If you don’t have a team thinking about working in this new world, you’ll be at a disadvantage soon.”

“This is the best educational technology humans have ever invented.”

This is intensely social tech, he says. You can play basketball or ski jumping with your friends over the Internet. He shows a Facebook demo. You can share things with others, including things with media inside them. E.g., go to a physical space and see it together. [Very cool demo. I think this is it:]


February 17, 2016

Oculus Time Shift: Virtual Reality in the 1850s

From On Time: Technology and Temporality in Modern Egypt, by On Barak (Univ. of California Press, 2013):

Dioramas were given their definitive form by Louis Daguerre, the inventor of photography, in the early 1820s. They consisted of massive, realistic landscape paintings, suspended from a theater ceiling and moving in sequence on a wire, with shifting light effects projected from behind. Alternatively, pictures might be stationed around a revolving platform.

Throughout the 1850s, after the diorama of the Overland Mail debuted in London, various other dioramas and panoramas showcased Egypt. “The Great Moving Panorama of the Nile” had been exhibited in England over 2,500 times by 1852. The new photographic “Cairo Panorama” debuted in 1859. In 1860 “London to Hong Kong in Two Hours” took spectators to the Far East via Egypt along the Overland Route.

…A typical description, taken from a review of the 1847 “City of Cairo Panorama,” reveals how Eurocentrism was performed in these spectacles: “The visitor standing on the circular platform is in the very center of the locality represented, as real to the eye as if he were on the spot itself.” (Kindle Locations 789-802)

BTW, Barak’s book is about the history of the difference between the Western colonists’ view of time and the local Egyptian understanding:

…means of transportation and communication did not drive social synchronization and standardized timekeeping, as social scientists conventionally argue. Rather, they promoted what I call “countertempos” predicated on discomfort with the time of the clock and a disdain for dehumanizing European standards of efficiency, linearity, and punctuality. (Kindle locations 209-212)


June 9, 2015

VR and Education

The MindCET blog has posted a post of mine about why VR seems so attractive to educational technology folks. Here’s the beginning:

By now we’re accustomed to the idea that the Internet enables us to spread education out across large physical distances. But just as spreading Nutella means thinning it, so does spreading education seem to require making the connections less substantial and real.

That’s one important reason that virtual reality and augmented reality appliances were so prevalent at Shaping the Future III. (The other reason is that they’re very cool.) They promise to “thicken” the online experience. As Avi Warshavski pointed out in his presentation, this also helps to explain the recent increase in interest in the maker movement and the Internet of Things: learners are not just brains in space, as he put it.

Miriam Reiner presented some evidence from her research that suggests…


June 3, 2015

[liveblog] Todd Revolt on AR

Todd Revolt is with Meta. It has 70 people. It’s shipping a Meta 1 developer kit. You use common hand gestures to manipulate virtual things.

He shows a video of people wearing Oculus Rifts in the real world and failing to navigate. Instead, Meta wants you to be together with people in the real world.

With augmented reality, he says, people know how to work it without training. Examples:

Fourth largest cause of death in the US: medical error. But with AR we can do more useful simulations. You can see the vital signs and the next steps in the procedure.

Princess Leia standing on your clipboard.


[liveblog] Miriam Reiner on VR for learning

Miriam Reiner is giving a talk on virtual reality. Her lab collects info about brain activity under VR to create a model of optimal learning.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Her lab provides sensory experiences virtually: you can feel water, etc. New haptic interfaces. There’s a Kickstarter project for an Oculus Rift add-on that lets you smell and feel a breeze and temperature.

They also do augmented reality, overlaying the virtual onto the real.

A robot she worked with last year suffers from the uncanny valley. Face to face is important. “Only 10% of information is conveyed through words.”

In an experiment, they re-created a student virtually and had her teach another student how to use a blood pressure machine.

VR can help us understand what learning is. And enhance it.

Example: A human wears electrodes. As she plays a VR game, her brain activity is recorded. They measured response times to light, auditory, and haptic signals; auditory was fastest. But if you put all three together, the response time goes down dramatically. What does this mean for learning? We should find out. It looks like multi-modal sensation increases learning.

If you learn something in the morning, and they test you over the next few days, your memory of it will be best after sleep. Sleep consolidates memory. If you can use neuro-feedback perhaps we can teach people to do that consolidation immediately after learning. Her research suggests this is possible.

“The advantage of VR is not just in creating worlds that do not exist. For the first time we have a method to organize and enhance learning.”


December 27, 2014

Oculus Thrift

I just received Google’s Oculus Rift emulator. Given that it’s made of cardboard, it’s all kinds of awesome.

Google Cardboard is a poke in Facebook’s eyes. FB bought Oculus Rift, the virtual reality headset, for $2B. Oculus hasn’t yet shipped a product, but its prototypes are mind-melting. My wife and I tried one last year at an Israeli educational tech lab, and we literally had to have people’s hands on our shoulders so we wouldn’t get so disoriented that we’d swoon. The Lab had us on a virtual roller coaster, with the ability to turn our heads to look around. It didn’t matter that it was an early, low-resolution prototype. Swoon.

Oculus is rumored to be priced at around $350 when it ships, and they will sell tons at that price. Basically, anyone who tries one will be a customer or will wish s/he had the money to be a customer. Will it be confined to game players? Not a chance on earth.

So, in the midst of all this justifiable hype about the Oculus Rift, Google announced Cardboard: detailed plans for how to cut out and assemble a holder for your mobile phone that positions it in front of your eyes. The Cardboard software divides the screen in two and creates a parallaxed view so you think you’re seeing in 3D. It uses your mobile phone’s kinetic senses to track the movement of your head as you purview your synthetic domain.

I took a look at the plans for building the holder and gave up. For $15 I instead ordered one from Unofficial Cardboard.

When it arrived this morning, I took it out of its shipping container (made out of cardboard, of course), slipped in my HTC mobile phone, clicked on the Google Cardboard software, chose a demo, and was literally — in the virtual sense — flying over the earth in any direction I looked, watching a cartoon set in a forest that I was in, or choosing YouTube music videos by turning to look at them on a circular wall.

Obviously I’m sold on the concept. But I’m also sold on the pure cheekiness of Google’s replicating the core functionality of the Oculus Rift by using existing technology, including one made of cardboard.

(And, yeah, I’m a little proud of the headline.)