Shared reality: Exploring VR-like environments with your smartphone
Virtual reality (VR) devices offer great potential for immersive communication and interactivity. But so far, few of us have been willing to put on a clunky headset to make those possibilities come alive.
What if we could have a similar feeling of being in the room with a friend or colleague halfway around the world by using an everyday smartphone or tablet?

ATLAS PhD researcher Rishi Vanukuru builds tools to do just that.
"We're always going to be separated from family, from friends, from colleagues, and I think we can do better than just waiting for technology to advance in the next five or 10 years," Vanukuru explained. "We can do more with the devices that we all have, that we're all familiar with, and use them to improve the experience of remote interaction today."
He also aims to address access to these tools in his work, noting, "There's a divide between video calls that are widely available but not spatial, and augmented and virtual reality headsets that offer spatial interaction but are not widely accessible. My work bridges that gap using everyday technology to allow for more spatial interactions."
Vanukuru is a member of the ACME Lab, directed by Professor Ellen Do. It is a space particularly conducive to this kind of research. He related that in the ACME Lab, "a key through-line through all of our work is building tools to help people be creative. The way that I interpret that is that for me, collaboration is one of the best amplifiers for creativity. What can we do to make better tools to help people be collaborative and therefore be more creative?"

A sense of space
The three-dimensionality of a room, the tactile nature of objects in that space, the ability to interact with those objects: these features create the sense of immersion we feel in well-executed virtual environments. Typically, though, you need a VR headset and additional controllers to experience these things.
But our smart devices also have lots of sensors for tracking motion, location, light, depth, biometrics and more. Vanukuru explores ways those sensors can be used to bring the immersive qualities of VR to everyday video calls.
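To make the idea concrete, here is a minimal sketch of how a phone's motion-sensor data might be shared in a call: the device's position and orientation (a 6-DoF pose, as reported by mobile AR frameworks) is serialized into a compact message that a call's data channel could carry alongside audio and video. The message format, field names, and pose values are illustrative assumptions, not DualStream's actual protocol.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Pose:
    # Position in meters within the room's coordinate frame (assumed convention)
    x: float
    y: float
    z: float
    # Orientation as a unit quaternion (w, qx, qy, qz)
    w: float
    qx: float
    qy: float
    qz: float

def encode_pose_update(device_id: str, timestamp_ms: int, pose: Pose) -> str:
    """Serialize one pose sample as a compact JSON message that a
    video call's data channel could carry each frame (hypothetical schema)."""
    return json.dumps({"id": device_id, "t": timestamp_ms, "pose": asdict(pose)})

def decode_pose_update(message: str) -> Pose:
    """Reconstruct the remote device's pose from a received message."""
    data = json.loads(message)
    return Pose(**data["pose"])

# Example: a phone held 1.2 m above the floor, half a meter back,
# with no rotation (identity quaternion).
sample = Pose(x=0.0, y=1.2, z=-0.5, w=1.0, qx=0.0, qy=0.0, qz=0.0)
msg = encode_pose_update("phone-A", 1700000000000, sample)
restored = decode_pose_update(msg)
```

Sending a stream of such updates is what lets a remote collaborator's view track where the caller's phone is in the room, which is the basic ingredient that separates a spatial call from a flat video feed.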
Vanukuru stated, "My hunch has been that we can do a lot more with devices that we all carry around, like phones and tablets. My work [aims] to maximize the potential of these devices for spatial collaboration in everyday contexts."
For example, imagine your car has broken down on the side of the road. You might call an expert to talk you through a possible solution, but if you don't know much about what's under the hood, you will likely be hard-pressed to make the fix. With Vanukuru's technology, dubbed DualStream, the caller can transmit a live 3D rendering of the car to an expert, who can then point to specific parts directly, improving the sense of shared presence over a standard video call.

"His work shows that we do not need expensive, specialized hardware to experience meaningful, embodied collaboration. Instead, we can use the sensors already in our pockets to transform how we share and interact within remote spaces," said Do.
The importance of partnership
This research has been supported in part by an ongoing partnership with Ericsson Research, which works at the forefront of information and communications technology.
"A lot of this work has been helped by this active collaboration that we've had with Ericsson Research, both in Silicon Valley in California and with researchers in Sweden," Vanukuru said. "For three years now we've had biweekly meetings with them where they've been giving inputs on the work and how it might progress."
Professor Do elaborated on the importance of such relationships, saying, "Partnering with industry leaders like Ericsson Research is vital because it enables our academic prototypes, such as DualStream or Shared Reality, to be tested against the technical realities of global networking and communication standards. This collaboration has not only resulted in high-impact research on network-adaptive AR, but also directly contributed to international standards, ensuring our innovations have a clear pathway to real-world deployment."
Vanukuru concluded by noting, "Being at ATLAS has given me the space to question dominant narratives around what technology can or should be. It has also given me the freedom to use design as a means to explore alternate possibilities, and see what we can do with technologies that are already familiar to us and how we can use them to do more in the present."
Authors: Rishi Vanukuru, Krithik Ranjan, Ada Yi Zhao, David Lindero, Gunilla H. Berndtsson, Gregoire Phillips, Amy Bani膰, Mark D. Gross, Ellen Yi-Luen Do
Abstract: Mobile video calls are widely used to share information about real-world objects and environments with remote collaborators. While these calls provide valuable visual context in real time, the experience of interacting with people and moving around a space is significantly reduced when compared to co-located conversations. Recent work has demonstrated the potential of Mobile Augmented Reality (AR) applications to enable more spatial forms of collaboration across distance. To better understand the dynamics of mobile AR collaboration and how this medium compares against the status quo, we conducted a comparative structured observation study to analyze people's perception of space and interaction with remote collaborators across mobile video calls and AR-based calls. Fourteen pairs of participants completed a spatial collaboration task using each medium. Through a mixed-methods analysis of session videos, transcripts, motion logs, post-task exercises, and interviews, we highlight how the choice of medium influences the roles and responsibilities that collaborators take on and the construction of a shared language for coordination. We discuss the importance of spatial reasoning with one's body, how video calls help participants "be on the same page" more directly, and how AR calls enable both onsite and remote collaborators to engage with the space and each other in ways that resemble in-person interaction. Our study offers a nuanced view of the benefits and limitations of both mediums, and we conclude with a discussion of design implications for future systems that integrate mobile video and AR to better support spatial collaboration in its many forms.