Human-Computer Interaction

Wireless Robotic Agents for Real-time Mixed Reality Tabletop Games


This project is already completed.

Introduction

Computer games are a highly engaging activity due to their multimedia richness and usability. Similarly, traditional board games offer a high joy of use due to their diversity, their mostly tangible interaction in a shared interaction space, and the opportunity for social interaction. Hence, there have been various attempts to fuse computer and board games in mixed reality scenarios to create an even richer experience for the player [Leitner et al., 2008]. Projection- or screen-based systems like XRoads [Link et al., 2016] can easily replace game elements like the board, or enhance them with decorations or rule assistance (like walking distance). However, it is more difficult to emulate the Non-Player Characters (NPCs) or autonomously acting enemies commonly found in computer games, since the display method restricts them to two dimensions, putting them in contrast to the three-dimensional tangible player pieces. While a mixture of 2D and 3D pieces is a common occurrence in classic board games, it always happens on different levels of the game: for instance, the printed areas on the board form the passive level (passive parts of the game), while the player figures form the active level (pieces through which the player can interact with the game state). Mixing 2D and 3D in the actor layer (e.g. a chess game where white has real pieces and black only 2D symbols) might significantly increase the cognitive load of players, since they cannot rely on their natural ability to instinctively grasp the distribution of real objects in space, as would normally be the case if all actors were real figurines [Rosenfeld, Zawadzki, Sudol, and Perlin, 2004]. This might also negatively influence the quality of scene recognition, e.g. by overlooking enemy actors or misjudging the relative positioning of pieces (e.g. distance, angle).
To improve on this, NPCs should either be physical assets in the form of robots or at least appear physical through the use of augmented reality.

Thanks to recent advances in augmented reality via head-mounted displays (e.g. Microsoft HoloLens or Oculus Rift), it is possible to augment those flat displays with three-dimensional holograms for these virtual actors. This allows both the virtual and real actors to exist in (perceived) three dimensions, thus potentially restoring the natural grasp of a situation one might have when looking at a physical game of chess. Furthermore, the virtual actors could easily be animated without the difficulty of building small articulated robots, and there is no risk of accidentally toppling them. Yet augmented reality introduces problems of its own: players need to wear headsets, which are often cumbersome over longer periods of time, might lose tracking, and have limited fields of view. Current lightweight AR glasses like the Microsoft HoloLens have a field of view (FoV) of about 30° × 17.5° [Kreylos, 2016], and while heavier VR headsets like the HTC Vive or the Oculus Rift manage fields of view of about 110° × 61° [Widder and Nicol, 2016], both fall short of the approximately 200° × 135° [Dagnelie, 2011] FoV a human usually has. These head-mounted displays also introduce significant costs, since every player needs a compatible headset. While handheld AR devices like smartphones and tablets would mitigate the cost impact, they still suffer from a very limited field of view. Robots, on the other hand, require ways of tracking their position and sufficiently good batteries, and need to be small enough to blend in with the player figures (unless the game design demands this disparity). The advantages they offer are the lack of cumbersome equipment for the players, significantly lower costs, and a full field of view. Due to the availability of small yet powerful lithium-based rechargeable batteries and microcontrollers, the power and size problems can be considered reasonably solvable.

This project proposes to represent NPCs via semi- or fully autonomous robots, thus enabling physical feedback while preserving both the overview over the game space and the social aspects of board gaming. As a first step, suitable wireless robots have to be designed and implemented in order to prepare for future studies regarding the impact of semi-autonomous robotic enemies on cognitive load and game enjoyment. Consequently, the goals of this project are:

XRoads is a mixed-reality board game based on “Quest: Zeit der Helden” by Pegasus Games. It expands the original game with rule assistance, animations, and a real-time mode with a simple enemy AI. While XRoads is the first real-time mixed reality board game utilizing semi-autonomous actors [Link et al., 2016], IncreTable used a physically based robot to great success [Leitner et al., 2008]. More generally, it has been shown that the use of mobile three-dimensional objects leverages natural spatial awareness in non-game applications [Rosenfeld et al., 2004]. Another study by Ip and Cooperstock (2011) has shown that real play pieces are favored over two-dimensional play pieces for complex interaction, implying that robots looking like the figures (in XRoads’ case, orcs) might increase enjoyment over the two-dimensional icons currently used. RoboTable [Mi, Krzywinsk, Fujita, and Sugimoto, 2012] used an autonomous robot as an enemy protagonist in a rudimentary mixed reality tabletop game and showed that this enhanced the attraction of the game.

Approach

First, available ready-made mini-robots will be investigated that might fit the role of a platform for game pawns. Additionally, promising do-it-yourself (DIY) designs will be researched, built, and evaluated alongside the bought robots.

In order to qualify as a candidate for the study, a robot needs to match the following criteria:

The evaluation will consist of an obstacle course including different movement patterns, like curves and straight lines, to measure the deviation from the desired path, and a stress test that uses nonstop linear movement (i.e. both motors running) to measure the maximum worst-case operating duration.
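The path-deviation metric of the obstacle course could be computed from tracked robot positions, for example as the cross-track error against each desired path segment. The following Python sketch illustrates one possible way to do this; the segment endpoints and the sample coordinates are hypothetical and would come from the tabletop tracking system in practice.

```python
import math

def cross_track_error(p, a, b):
    """Distance of tracked point p from the desired path segment a-b."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:
        # Degenerate segment: fall back to plain point distance.
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment and clamp to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def path_deviation(track, segment):
    """Mean and maximum deviation of a logged track from a desired segment."""
    errors = [cross_track_error(p, segment[0], segment[1]) for p in track]
    return sum(errors) / len(errors), max(errors)

# Hypothetical tracking samples (in cm) while the robot is supposed to
# drive a straight line from (0, 0) to (100, 0).
track = [(0, 0), (25, 1.2), (50, 2.5), (75, 1.8), (100, 0.5)]
mean_err, max_err = path_deviation(track, ((0, 0), (100, 0)))
```

Curved sections of the course could be handled the same way by approximating them as a sequence of short straight segments and taking, for each sample, the minimum cross-track error over all segments.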

References

Dagnelie, G. (2011). Visual prosthetics: physiology, bioengineering, rehabilitation. Springer Science & Business Media.

Ip, J. & Cooperstock, J. (2011). To virtualize or not? the importance of physical and virtual components in augmented reality board games. In Proceedings of the 10th international conference on entertainment computing (pp. 452–455). Springer.

Kreylos, O. (2016). Hololens and field of view in augmented reality. Retrieved from http://doc-ok.org/?p=1274

Leitner, J., Haller, M., Yun, K., Woo, W., Sugimoto, M., & Inami, M. (2008). Incretable, a mixed reality tabletop game experience. In Proceedings of the 2008 international conference on advances in computer entertainment technology (pp. 9– 16). ACM.

Link, S., Barkschat, B., Zimmerer, C., Fischbach, M., Wiebusch, D., Lugrin, J.-L., & Latoschik, M. E. (2016). An intelligent multimodal mixed reality real-time strategy game. In Proceedings of the 23rd IEEE virtual reality conference 2016. IEEE.

Mi, H., Krzywinsk, A., Fujita, T., & Sugimoto, M. (2012). RoboTable: an infrastructure for intuitive interaction with mobile robots in a mixed-reality environment. In Advances in Human-Computer Interaction. ACM.

Rosenfeld, D., Zawadzki, M., Sudol, J., & Perlin, K. (2004). Physical objects as bidirectional user interface elements. In IEEE Computer Graphics and Applications. IEEE.

Widder, B. & Nicol, W. (2016). Spec showdown: oculus rift vs. htc vive. Retrieved from http://www.digitaltrends.com/virtual-reality/oculus-rift-vs-htc-vive/


Contact Persons at the University Würzburg

Martin Fischbach, M.Sc. (Primary Contact Person)
Mensch-Computer-Interaktion, Universität Würzburg
martin.fischbach@uni-wuerzburg.de

Legal Information