Author: sam

Many people have found interactions awkward or difficult in remote life.
We’ve all found different ways to cope. I turn closed-captions on during Hangouts meetings so I can read and screenshot what I couldn’t capture otherwise.
Spatial audio also offers new options.
I finally met people I follow on Twitter “in person,” in a spatial format, during an ice-cream social on High Fidelity, courtesy of @EstellaTse
WEIRDNESS of Hi-Fi
The navigation controls are weird
rotating (WASD) rotates the world, not you
even though it’s 2D with 3DOF (rotation), it’s awkward for your “camera” to stay stationary while the world moves – maps (like Google and Apple Maps) don’t work this way
this is how Metaio operated (an early AR system purchased by Apple)
rotating the world, not the camera
it’s a hard concept to wrap your head around.
Fortunately, with SLAM we won’t have to think about this much, but it might make sense in some scenarios, like medical imaging, where rotating an object is the natural interaction
Rotating the map (on mobile) seems to be the secondary method of rotation (the primary being rotating your body or phone)
the zoom seems unnatural (or inverted)
I mistook a beach ball floating in the pool for a person – the avatars are similarly round
I was more likely to engage with people if they had an avatar photo rather than just initials in their bubble. Some experiences, like Glitch, give people an avatar by default, which could make people appear more approachable
Your “ears” (the spatial audio) are represented by a cone that you can see, but hearing is so sensitive that you either tire of walking away or don’t want to seem awkward by drifting too far from the group.
I wish there were a sensitivity control for the spatial audio
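That sensitivity control could be a single exponent on a distance-falloff curve. A minimal sketch of the idea – this is a hypothetical model, not High Fidelity’s actual audio pipeline:

```python
import math

def spatial_gain(listener_pos, source_pos, sensitivity=1.0, min_dist=1.0):
    """Inverse-distance attenuation with an adjustable sensitivity knob.

    sensitivity > 1 makes distant voices fade faster; < 1 keeps them
    audible from further away. (Assumed model, not High Fidelity's API.)
    """
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    dist = max(math.hypot(dx, dy), min_dist)  # clamp to avoid blow-up up close
    return (min_dist / dist) ** sensitivity

# A voice 8 m away, under two different sensitivity settings:
print(spatial_gain((0, 0), (8, 0), sensitivity=0.5))  # fades slowly
print(spatial_gain((0, 0), (8, 0), sensitivity=2.0))  # fades fast
```

With a knob like this, walking away from the group would actually make the group fade out, instead of everyone staying equally loud.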
PROS of Hi-Fi
You’re able to notify someone if they have an audio issue.
on Hangouts I usually end up muting the person for everyone, to their embarrassment and mine.
It’s spatial
Noclip mode is on – you can walk through walls (or move through them like a ghost), since there are no physics
at first, I navigated to walk through a doorway, then learned you can WASD or click-drag through everything
It was still social
At the beginning, Estella “walked” around and asked everyone if they were ok, as she “wanted to be a good host.” Similar to how you might start a party IRL
I was able to meet multiple new groups of people: I reconnected with other mentors from the MIT Reality Hack and caught up with the dev relations teams from Microsoft & Magic Leap.
These are individual groups of human interactions within a group setting – something that couldn’t happen in either the “one-to-many” or “crossfire radio comms” modes of Zoom and Hangouts, the current limitations of real-time remote meetings.
(excluding breakouts, because those are still controlled by the host rather than natural and organic)
It was a little draining, maybe because it was my first time in Hi-Fi, but you wouldn’t be able to have that many conversations with that many people, or let conversations emerge, in a normal group video call.
Shout out to @estellatse and @HighFidelityXR
Give High Fidelity a try at https://www.highfidelity.com/ and let’s figure out something better than walkie-talkies with TVs.
Used in healthcare more than in industrial settings
It’s not the doctors –
it’s the nurses
A couple of doctors, several thousand nurses
How nurses are onboarded
Sign for slurred speech –
Not just demos – but applied in training
e.g. CPR
Communicate the level of physical pressure needed
People usually don’t know how hard [deep?] they have to press
[the pressure can break ribs]
[collision? Hand tracking?]
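The “how hard to press” feedback could be a simple range check against published guidance (adult compression depth is roughly 5–6 cm per AHA guidelines). A hypothetical helper, not the presenters’ HoloLens code:

```python
def compression_feedback(depth_cm, min_cm=5.0, max_cm=6.0):
    """Classify a chest-compression depth reading (e.g. from hand tracking).

    Adult guideline depth is roughly 5-6 cm; trainees usually underestimate
    how deep that is, and pressing much harder risks breaking ribs.
    """
    if depth_cm < min_cm:
        return "too shallow - press harder"
    if depth_cm > max_cm:
        return "too deep - ease off"
    return "good depth"

for depth in (3.0, 5.5, 7.2):
    print(f"{depth:.1f} cm: {compression_feedback(depth)}")
```

A headset would presumably feed live tracked depth into a check like this and render the prompt next to the manikin.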
Eg 2. Septic shock
Integrating with a flow meter
Not just a simulated syringe but a real syringe – the HoloLens shows an animation of how it affects the circulatory system
@detansinn
Developing Lumin Runtime Apps for Magic Leap
The ability to have multiple applications open
Eg: A clock and a chessboard
Game dev engines are designed to run one thing at a time
[Scene-Based IA]
is like a scene graph
The method that engineers are using
is similar to the IA that designers should use
but instead of “Node,” we should define what the content is
Input Events
Environment Events
Multi-user
Send the scene graph to the server
(Does that make sense?)
In theory – run everything in the cloud
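The multi-user idea above – serialize the scene graph and send it to a server – can be sketched like this (the names are made up for illustration; this is not the Lumin Runtime API):

```python
import json

class SceneNode:
    """A minimal scene-graph node: what the content is, its props, children."""

    def __init__(self, content, **props):
        self.content = content   # name the content, not just "Node"
        self.props = props       # e.g. position, rotation
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def to_dict(self):
        """Recursively flatten the graph into plain dicts for transport."""
        return {
            "content": self.content,
            "props": self.props,
            "children": [c.to_dict() for c in self.children],
        }

# Multiple applications open at once, e.g. a clock and a chessboard:
root = SceneNode("room")
root.add(SceneNode("clock", position=[0, 1, -2]))
board = root.add(SceneNode("chessboard", position=[1, 0.8, -1]))
board.add(SceneNode("pawn", square="e2"))

# Serialize the whole graph for a (hypothetical) multi-user server:
payload = json.dumps(root.to_dict())
print(payload)
```

A server that receives this payload can rebuild the same graph for every participant – which is also the premise of the “run everything in the cloud” idea.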
Talk 1
Increasing FOV in displays
BIGGER
Sometimes a taller field of view is more important than a wider field of view
Projection on the environment is considered a Gold standard – perfect registration
Width is the metric people use – people want wider FOV
But there are many applications when height is the issue
Eg: walking downstairs with low vision – not a context where using a cane makes sense
FASTER
Decrease latency
the time between stimulus (cause) and response (effect)
Eg: motion to photons
— motion can be head, hands, or world
Most electronics in our environment have more than 100 milliseconds of delay
Some are 10 milliseconds
Perceptible latency (University of Toronto): a 1 ms-latency 2D touch screen, 2013
Around 6 milliseconds is noticeable (in a random group of people – not a group selected for fast reflexes, e.g. athletes, race car drivers, pro gamers)
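Motion-to-photon latency is just the sum of the pipeline stages between movement and light hitting your eye. A toy budget under assumed stage timings (the numbers are illustrative, not from the talk):

```python
# Illustrative motion-to-photon budget, in milliseconds.
# Every stage timing here is an assumption for the sketch.
stages_ms = {
    "sensor sampling": 2.0,
    "tracking/fusion": 2.0,
    "render (one 90 Hz frame)": 1000 / 90,  # ~11.1 ms
    "display scan-out": 3.0,
}

total = sum(stages_ms.values())
for name, ms in stages_ms.items():
    print(f"{name:>26}: {ms:5.1f} ms")
print(f"{'total':>26}: {total:5.1f} ms")

# Compare against the ~6 ms threshold cited for average users:
print("perceptible to most people?", total > 6.0)
```

Even this optimistic pipeline lands well above 6 ms – one frame at 90 Hz already blows the budget, which is why tricks like late-stage reprojection exist.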
OUT OF HANDS
Context without hands
Eg: hands-free AR for vascular intervention
Both hands are busy with a catheter –
using head pose & voice
[See video]
Scaling and rotating based on head pose
Increase rotation, and the rate of rotation, by moving your head away from the virtual object
Change location by head pose
This is also important for accessibility
The importance, in many domains, of not using hands at all
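The head-pose manipulation described above could be modeled as a rotation rate that grows with the head’s angular offset from the object. This mapping is my guess at the technique, not the presenters’ implementation:

```python
def rotation_rate(head_offset_deg, deadzone_deg=5.0, gain=0.5, max_dps=90.0):
    """Map head offset from a virtual object to a rotation rate (deg/s).

    Inside the deadzone nothing moves; beyond it, moving the head further
    away rotates the object faster, capped at max_dps. Signed, so the
    offset direction picks the rotation direction. (Assumed mapping.)
    """
    sign = 1.0 if head_offset_deg >= 0 else -1.0
    excess = abs(head_offset_deg) - deadzone_deg
    if excess <= 0:
        return 0.0
    return sign * min(gain * excess * excess, max_dps)  # quadratic ramp

for offset in (2.0, 10.0, -20.0, 40.0):
    print(offset, "->", rotation_rate(offset))
```

A deadzone keeps the object still during normal head jitter, and the quadratic ramp gives fine control near the object with fast rotation when you deliberately look away – both hands stay on the catheter throughout.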
Columbia Engineering
The Fu Foundation School of Engineering and Applied Science
Presented at the MIT Media Lab, Cambridge, Mass.
A non-profit #AR org bringing together people from around the world. They charge nothing for the booths here – one reason it’s called the anti-CES.
Explicitly thinking about the implications for contextual computing – the next computing platform.
When Bill Gates wrote The Road Ahead, he didn’t even put the word “Internet” in there – he had to add it later.
AR is harder than VR. We haven’t totally figured it out in terms of enterprise and consumer.
Should be a place where AR innovators can meet and connect.
All of these talks are online and free
This is the 4th annual ARIA.
There was a 5th that happened in 2017 – at Javits Center in New York City.
If you are in New York and interested in AR: should ARIA come back, skip work and attend.
First head mounted display – AR. 52 years ago, a mile from here @medialab by Ivan Sutherland pic.twitter.com/5LuvIAlXxW
— Sam Brewton 🚀 AR in Action @ MIT Media Lab (@ironandsilk) February 12, 2020
https://artsandculture.google.com/project/versailles
Hat tip – Tom Emrich
https://www.linkedin.com/feed/update/urn:li:activity:6633147563975426049/
Favorite Projects from the MIT Reality Hackathon
Top things I learned plus more info and imagery to come
MIT Reality Hack 2020: Hack to the future at the coolest XR hackathon! – Devpost
Accessibility Toolkit For Unity | Devpost
#³ | Devpost (hashtag cubed)
– 360° image on Glitch
Reviewing portfolios last night at Harvard University Graduate School of Design with design leadership from Wayfair
I remember architects would come to #IxDA looking to break into User Experience. Now architects can leapfrog to the next generation of digital product design with their 3D and spatial output while maintaining a human-centered approach.
Landscape architecture intervention by Xiaoji Zhou xiaojzhou@gsd.harvard.edu
Brand installation simulated in Unity by Shi Tang www.tangs23.com stang@gsd.harvard.edu