Link Foundation Fellowships Newsletter

Inside this Issue

Features

Meet this Year's Fellowship Recipients

Link Fellowship Awardees for 2022–2023

Modeling, Simulation and Training

FIRST YEAR FELLOWS

Tammer Barkouki

Name: Tammer Barkouki
School: University of California, Davis
Project: Explainable Human-Autonomy Teaming for Deep Space Exploration
Research Advisor: Dr. Stephen Robinson


A new sub-field of Artificial Intelligence, called eXplainable AI (XAI), has sprung up to improve the transparency and predictability of AI systems. There is a critical need to develop XAI methods for users of AI-driven systems in safety-critical settings such as deep space habitats. This project is developing XAI techniques for human-autonomy teaming in space exploration. We will gather feedback from NASA scientists, engineers, astronauts, and flight controllers on how they would need to interact with highly autonomous systems on future missions, and we will conduct human studies with supervisory and collaborative human-robot teaming tasks. The human studies will include both conventional and learning-based AI algorithms for motion planning, perception, and decision making. The end result will be a validated set of design requirements for developing XAI systems for space exploration.


Abhijat Biswas

Name: Abhijat Biswas
School: Carnegie Mellon University
Project: Using Virtual Reality Driving Simulation to Model the Dynamics of Driver Peripheral Vision
Research Advisor: Dr. Henny Admoni


While driving, relevant stimuli such as a jaywalking pedestrian often first appear outside the central region of our gaze, yet we are still able to respond to them. This ability is characterized by the Functional Field of View (FFoV), the region of the visual field in which stimuli can be processed during a single fixation. We propose to model drivers' FFoV in realistic driving scenarios, which is challenging to study safely on real roads. Hence, we first built a high-fidelity virtual reality driving simulator for behavioral research, allowing us to study drivers' attention in a safe manner. Now, we are collecting data in a human subjects study in which participants perform a driving task in our virtual reality simulator while also responding to occasional peripheral targets. We can then use these data to build a predictive model of driver FFoVs. Finally, we will demonstrate the model's effectiveness by having it predict inattention during simulated driving and deploying assistive interventions to mitigate its effects. We hope that this model will allow us to develop intelligent driving assistance based on the psychophysical limits of driver peripheral vision and ultimately lead to safer drives for all!
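
For readers curious how peripheral-target responses might become a predictive model, here is a minimal sketch, assuming detection can be summarized by a simple logistic curve over target eccentricity. The simulated data, parameter values, and 50%-detection summary are illustrative stand-ins, not the study's actual analysis.

```python
# Illustrative sketch: fit a logistic model of peripheral-target detection probability
# as a function of eccentricity, a simple stand-in for an FFoV model.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trial data: target eccentricity (degrees from gaze) and detection outcome.
eccentricity = rng.uniform(0, 40, size=500)
p_true = 1.0 / (1.0 + np.exp(0.25 * (eccentricity - 20.0)))   # simulated ground truth
detected = (rng.random(500) < p_true).astype(float)

def fit_logistic(x, y, lr=0.1, steps=20000):
    """Fit p(detect) = sigmoid(w*x + b) by gradient descent on standardized x."""
    mu, sd = x.mean(), x.std()
    xs = (x - mu) / sd
    w, b = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w * xs + b)))
        w -= lr * np.mean((p - y) * xs)
        b -= lr * np.mean(p - y)
    return w / sd, b - w * mu / sd   # convert back to eccentricity units

w, b = fit_logistic(eccentricity, detected)

# One possible FFoV summary: the eccentricity at which detection drops to 50%.
print(f"Estimated 50%-detection eccentricity: {-b / w:.1f} degrees")
```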

Sergio Machaca

Name: Sergio Machaca
School: Johns Hopkins University
Project: Investigating the Efficacy of Dynamically Modulated Multi-Modality Haptic Feedback in Robot-Assisted Minimally Invasive Surgical Training
Research Advisor: Dr. Jeremy Brown


Despite the widespread use of robotic minimally invasive surgery (RMIS) in surgical procedures, commercially available RMIS platforms do not provide haptic feedback. This technical limitation can significantly increase the learning curve for robotic surgery trainees. In addition, the standard approach to RMIS training requires an expert evaluator to observe and rate trainee performance, often not in real time. Recent studies have begun to demonstrate the utility of haptic feedback in training specific aspects of RMIS skill. However, it is not well understood whether the utility of these haptic feedback approaches scales when they are combined and provided simultaneously to the trainee. In addition, little is known about whether trainee skill level can be used as a metric for dynamically modulating these feedback approaches in real time. The central hypothesis of the proposed work is that a dual-modality haptic feedback training system for RMIS that dynamically tunes each modality according to surgical skill will provide more utility in RMIS training than single-modality, unmodulated haptic feedback. I propose a one-year, three-phase research project to evaluate this hypothesis with surgical trainees using the da Vinci surgical robot and dual-modality haptic feedback of robot arm accelerations and contact forces.
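
As a rough illustration of what dynamically tuning each modality according to surgical skill could look like in code, here is a minimal sketch assuming a normalized skill score and a simple linear gain schedule. The function, gain ranges, and mapping are hypothetical, not the proposed system.

```python
# Illustrative only: scale two haptic feedback modalities by a running skill estimate,
# so novices receive stronger cues and experts receive subtler ones.
from dataclasses import dataclass

@dataclass
class HapticGains:
    vibration: float  # gain on instrument-acceleration (vibrotactile) cues
    force: float      # gain on contact-force cues

def modulate_gains(skill_score: float,
                   vib_range=(0.2, 1.0),
                   force_range=(0.2, 1.0)) -> HapticGains:
    """Map a skill score in [0, 1] (1 = expert) to per-modality feedback gains.

    A simple linear schedule: lower skill -> higher gains. A real system could use
    any modulation law; this one is only a placeholder.
    """
    s = min(max(skill_score, 0.0), 1.0)
    vib = vib_range[1] - s * (vib_range[1] - vib_range[0])
    force = force_range[1] - s * (force_range[1] - force_range[0])
    return HapticGains(vibration=vib, force=force)

# Example: a mid-level trainee gets moderately attenuated feedback on both channels.
print(modulate_gains(0.5))   # HapticGains(vibration=0.6, force=0.6)
```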

Neeli Tummala

Name: Neeli Tummala
School: University of California, Santa Barbara
Project: Computational Modeling of Human Touch for Haptic Feedback in Virtual Environments
Research Advisor: Dr. Yon Visell

Virtual reality (VR) simulations are promising tools for training in diverse areas such as aerospace and medicine. Effective training simulations should allow users to experience tactile sensations during manipulation tasks with virtual objects. However, existing haptic VR technologies cannot provide touch feedback that matches the sensations felt during real-world manipulation activities. This is partially due to a lack of comprehensive understanding of the complex signals and computational processes underlying human tactile sensing, particularly the interplay between hand biomechanics and touch sensations. This project aims to address this gap in knowledge through a data-driven simulation of touch sensing in the hand with unprecedented fidelity, temporal resolution, and spatial accuracy. The simulation will integrate optical vibrometry measurements of touch-elicited biomechanical signals in the skin, physiologically validated neuron models of touch receptors, and machine learning techniques that emulate human abilities to extract information via touch. This research will yield novel methods and simulation tools for elucidating mechanisms of touch perception and contribute knowledge that can advance haptic interfaces for natural touch interaction in virtual environments.
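
As one small, illustrative piece of such a pipeline, the sketch below shows a toy leaky integrate-and-fire unit responding to a skin-vibration signal, standing in for the physiologically validated receptor models mentioned above. All parameters and the sinusoidal input are assumptions chosen for illustration, not values from the project.

```python
# Toy leaky integrate-and-fire "mechanoreceptor" driven by a vibration signal.
import numpy as np

def lif_spikes(stimulus, dt=1e-4, tau=0.005, threshold=1.0, gain=1000.0):
    """Return spike times (s) of a leaky integrate-and-fire unit driven by `stimulus`."""
    v, spikes = 0.0, []
    for i, s in enumerate(stimulus):
        v += dt * (-v / tau + gain * abs(s))   # leaky membrane with rectified drive
        if v >= threshold:
            spikes.append(i * dt)
            v = 0.0                            # reset after each spike
    return spikes

t = np.arange(0.0, 0.1, 1e-4)                  # 100 ms sampled at 10 kHz
vibration = 0.5 * np.sin(2 * np.pi * 250 * t)  # 250 Hz skin vibration (Pacinian range)
print(len(lif_spikes(vibration)), "spikes in 100 ms")
```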


Niall Williams

Name: Niall Williams
School: University of Maryland, College Park
Project: Natural Walking Interfaces to Improve Immersive Training in Virtual Reality
Research Advisor: Dr. Dinesh Manocha

In virtual reality, the user is simultaneously located in a physical and a virtual environment, and the virtual environment is usually much larger than the physical one. This makes it difficult for users to safely explore large virtual environments using natural, everyday locomotion without walking into physical obstacles that they cannot see. To get around this, virtual locomotion techniques such as walking-in-place and point-and-click teleportation have been developed to allow users to explore virtual environments while located in constrained physical spaces. Although these techniques work, research has shown that exploring an environment by natural walking is more effective for spatial understanding and task performance. To enable real walking in virtual reality, we employ a technique called redirected walking, which decouples the shape of the user's physical and virtual trajectories, allowing us to steer them away from unseen obstacles. My thesis focuses on developing new redirected walking algorithms that allow users to explore virtual environments using natural walking while located in uncontrolled, unpredictable physical spaces (e.g., a user's home or an office), with the aim of improving the effectiveness of virtual training applications by providing users with a more immersive and realistic experience.
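
To make the idea of decoupling physical and virtual trajectories concrete, here is a minimal sketch of gain-based redirection, assuming illustrative translation, rotation, and curvature gains; it is not a specific published steering algorithm, and the function and parameter names are hypothetical.

```python
# Minimal illustration of redirected walking: the virtual camera does not copy the
# user's physical motion exactly, but applies small gains so the physical path can be
# bent away from obstacles while the virtual path stays as intended.
import math

def redirect_step(virtual_pos, virtual_heading,
                  physical_step_length, physical_turn,
                  translation_gain=1.0, rotation_gain=1.1,
                  curvature_gain_rad_per_m=0.05):
    """Advance the virtual pose given one physical step and turn.

    rotation_gain scales physical turns; curvature_gain injects a slow extra rotation
    per meter walked, which makes the user unconsciously curve in physical space.
    """
    virtual_heading += rotation_gain * physical_turn
    virtual_heading += curvature_gain_rad_per_m * physical_step_length
    d = translation_gain * physical_step_length
    x, y = virtual_pos
    return (x + d * math.cos(virtual_heading), y + d * math.sin(virtual_heading)), virtual_heading

# Example: ten one-meter physical steps with no physical turning. A steering algorithm
# could pick the gains each frame to keep the user away from unseen walls and furniture.
pos, heading = (0.0, 0.0), 0.0
for _ in range(10):
    pos, heading = redirect_step(pos, heading, physical_step_length=1.0, physical_turn=0.0)
print(pos, heading)
```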


SECOND YEAR FELLOWS

Mike Salvato

Name: Mike Salvato
School: Stanford University
Project: Predicting Hand-Object Interaction for Improved Haptic Feedback in Simulated Environments
Research Advisor: Dr. Allison Okamura


Accurately detecting when a user begins interacting with virtual objects is necessary for compelling multi-sensory experiences in mixed reality. To address inherent sensing, computation, display, and actuation latency, we developed a system that predicts when a user will begin touch interaction with a virtual object before it occurs. The system leverages a sequence of hand poses while approaching an object, combined with the object's pose, to predict when contact will begin. Thus far, the system has been shown to work only in simulation. As we continue the work, we will implement it on an augmented reality system and compare its effectiveness with a non-predictive, state-of-the-art, off-the-shelf hand tracking method. To do so, we will integrate the system with a vibrotactile device that activates at the detected or predicted time of interaction with a virtual object, and subjects will compare the realism of each system. By leveraging this information, we could reduce or eliminate latency in providing haptic feedback during virtual object interaction. We focus on short time horizons, on the order of 100 ms, to overcome sense-to-actuation latency for haptic feedback in mixed reality systems.
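
As a simplified illustration of predicting contact before it happens, the sketch below extrapolates a short fingertip-position history toward a sphere standing in for the virtual object. The constant-velocity fit and all names are assumptions for illustration, not the actual learned predictor.

```python
# Illustrative sketch: estimate time to contact from ~100 ms of hand-tracking history.
import numpy as np

def predict_time_to_contact(fingertip_history, timestamps, object_center, object_radius):
    """Estimate seconds until the fingertip reaches a sphere approximating the object.

    fingertip_history: (N, 3) recent fingertip positions in meters, oldest first.
    timestamps:        (N,) corresponding times in seconds.
    Returns None if the fingertip is not approaching the object.
    """
    p = np.asarray(fingertip_history, dtype=float)
    t = np.asarray(timestamps, dtype=float)
    # Least-squares constant-velocity fit over the pose history.
    v = np.linalg.lstsq(np.c_[t - t[0], np.ones_like(t)], p, rcond=None)[0][0]
    rel = p[-1] - np.asarray(object_center, dtype=float)
    dist = np.linalg.norm(rel) - object_radius               # distance to the surface
    closing_speed = -np.dot(v, rel) / np.linalg.norm(rel)    # speed toward the object
    if dist <= 0:
        return 0.0
    if closing_speed <= 1e-6:
        return None
    return dist / closing_speed

# Example with ~100 ms of synthetic tracking data sampled at 90 Hz.
ts = np.arange(9) / 90.0
hist = np.c_[0.30 - 0.5 * ts, np.zeros(9), np.zeros(9)]      # approaching at 0.5 m/s
print(predict_time_to_contact(hist, ts, object_center=(0, 0, 0), object_radius=0.05))
```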



If you would like to find out more about our Link Foundation Modeling, Simulation and Training Fellows and the projects the Link Foundation has funded in this field, please visit the Link Modeling, Simulation and Training webpage at http://www.linksim.org/.