Link Foundation Fellowships Newsletter

Inside this Issue

Features

Meet this Year's Fellowship Recipients

Link Fellowship Awardees for 2023 - 2024

Modeling, Simulation and Training

FIRST YEAR FELLOWS

Benjamin Killeen

Name: Benjamin Killeen
School: Johns Hopkins University
Project: Interactive Digital Twins for Simulating the Future of Work in AI- and Robot-Assisted Operating Rooms
Research Advisor: Dr. Mathias Unberath


Robots and AI are rapidly changing the nature of work, from chatbots as editors to drones in nuclear reactors. The modern operating room is no exception, as medical robots disrupt existing workflows and introduce new capabilities, benefiting clinicians and patients alike by reducing occupational risk and broadening access to the highest standard of care. However, AI assistance systems involve complicated workflows for setup, calibration, and deployment during surgeries that can last several hours. To prepare for the future of work in the OR, I will develop an interactive simulation called a "digital twin" as a training tool, enabling trainees to step into any role during interactive playback of a real or planned procedure. A digital twin is a virtual representation of a real object, such as an operating room or patient, that reflects how that object looks and behaves in real life. A digital twin operating room will reflect the appearance and constraints of a real operating room as a procedure plays out, so that individuals can gain confidence through near-real-world experience interacting with simulated robots and colleagues.
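To make the playback idea above a little more concrete, here is a minimal, purely illustrative sketch in Python of how a recorded procedure might be replayed with a trainee standing in for one role. The event structure, role names, and timings are hypothetical placeholders, not details of the actual project.

    from dataclasses import dataclass

    @dataclass
    class TwinEvent:
        """One timestamped step of a recorded (or planned) procedure."""
        time_s: float
        role: str     # e.g., "surgeon", "scrub_tech", "robot"
        action: str   # e.g., "register patient anatomy"

    def playback(events, trainee_role):
        """Replay a procedure, pausing whenever the trainee's role must act."""
        for event in sorted(events, key=lambda e: e.time_s):
            if event.role == trainee_role:
                # Hand control to the trainee inside the simulated OR.
                input(f"[{event.time_s:6.1f}s] Your turn ({event.role}): {event.action} -- press Enter when done ")
            else:
                # Every other role is replayed from the recording.
                print(f"[{event.time_s:6.1f}s] {event.role}: {event.action}")

    if __name__ == "__main__":
        demo = [
            TwinEvent(0.0, "scrub_tech", "drape the robot arms"),
            TwinEvent(42.0, "surgeon", "register patient anatomy"),
            TwinEvent(90.0, "robot", "move to start pose"),
        ]
        playback(demo, trainee_role="surgeon")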


Irene Kim

Name: Irene Kim
School: Johns Hopkins University
Project: A Human-in-the-Loop Simulator for Controlling an Upper Limb Prosthesis Using Mixed Reality
Research Advisor: Dr. Peter Kazanzides


The loss of upper limbs severely impacts individuals' ability to perform activities of daily living (ADLs) and degrades their overall quality of life. Existing myoelectric prostheses, which rely on electromyographic (EMG) signals for control, suffer from limitations such as extensive training requirements and high cognitive load, leading to a high abandonment rate. To address these challenges, this project aims to develop a Human-in-the-Loop (HITL) simulator for upper limb prosthesis control and training that incorporates vision data, EMG signals, and a mixed reality (MR) user interface. The proposed system would perceive the environment via multimodal sensors, develop action plans using artificial intelligence, and preview them through augmented reality. The system will first be used for training, with a simulated prosthesis and environment, and then to assist the user in performing ADLs with a real prosthesis. The performance and usability of the system will be evaluated through experiments and compared with conventional EMG-controlled methods. The anticipated impact of this research includes robust and reliable control of multi-degree-of-freedom (DOF) transradial prostheses as well as easier and more accessible training for prosthesis use. Ultimately, this project seeks to contribute to the advancement of assistive technologies for amputees.
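The perceive-plan-preview loop described above could be sketched, at a very high level, as follows. The function names, the EMG threshold, and the placeholder target detection are illustrative assumptions, not the project's actual algorithms.

    import numpy as np

    def perceive(emg_window, rgb_frame):
        """Fuse EMG and vision into a crude intent estimate (placeholder logic)."""
        emg_level = float(np.mean(np.abs(emg_window)))   # overall muscle activity
        intent = "grasp" if emg_level > 0.5 else "rest"  # threshold chosen arbitrarily
        return {"intent": intent, "target": "cup"}       # target would come from vision

    def plan(state):
        """Turn the estimated intent into a coarse action plan."""
        if state["intent"] == "grasp":
            return ["pre-shape hand", f"reach toward {state['target']}", "close fingers"]
        return ["hold current pose"]

    def preview(actions):
        """Stand-in for rendering the planned motion in the MR headset."""
        for step in actions:
            print("MR preview:", step)

    # One pass of the loop with synthetic sensor data.
    emg = np.random.uniform(0, 1, size=200)   # simulated EMG window
    frame = np.zeros((480, 640, 3))           # simulated camera frame
    preview(plan(perceive(emg, frame)))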


Muhammad Twaha Ibrahim

Name: Muhammad Twaha Ibrahim
School: University of California, Irvine
Project: DYNASAUR: Dynamic Spatially Augmented Reality
Research Advisor: Dr. Aditi Majumder


My research focuses on developing Spatially Augmented Reality (SAR) on dynamic, deformable surfaces and objects, which I call DYNASAUR (Dynamic Spatially Augmented Reality). In general, SAR systems augment the real world with virtual content (e.g., video or images) by projecting that content from one or more projectors onto a real object (e.g., the walls of a room, a life-sized statue, a tabletop object, or even a moving fabric), merging the real and the virtual seamlessly. SAR creates experiences that can be shared by multiple people at once without any wearables (e.g., VR/AR headsets) and lets them interact with each other in a natural way. Prior research addresses SAR on static objects in an automated manner using one or more feedback RGB cameras, but no prior work handles dynamic and deformable surfaces in a comprehensive manner. My research aims, for the first time, to create multi-projector SAR systems on dynamic and deformable objects (e.g., fabric or human skin), with applications in two domains: medical (surgical simulation, training, and remote assistance) and defense (military training, simulation, and command and control).
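As a rough illustration of the projector-camera geometry involved, the sketch below (using OpenCV) warps virtual content into projector space from a set of tracked surface points and a prior camera-to-projector calibration. The planar homography is only a stand-in: handling truly deformable surfaces, as proposed here, would require a non-rigid, per-frame mapping, and the function and parameter names are hypothetical.

    import cv2
    import numpy as np

    def warp_content_to_projector(content, surface_pts_cam, content_pts, cam_to_proj,
                                  proj_size=(1920, 1080)):
        """Warp virtual content so it lands on the currently observed surface.

        content         -- image to project (H x W x 3)
        surface_pts_cam -- tracked surface points seen by the feedback camera (N x 2)
        content_pts     -- corresponding points in the content image (N x 2)
        cam_to_proj     -- 3x3 homography from camera pixels to projector pixels
                           (from a prior projector-camera calibration)
        """
        # Map the tracked surface points from camera space into projector space.
        surface_pts_proj = cv2.perspectiveTransform(
            surface_pts_cam.reshape(-1, 1, 2).astype(np.float32), cam_to_proj
        ).reshape(-1, 2)

        # Fit a mapping from content coordinates to projector coordinates.
        # A homography assumes a (locally) planar surface; a deformable surface
        # would need something like a per-frame thin-plate-spline warp instead.
        H, _ = cv2.findHomography(content_pts.astype(np.float32), surface_pts_proj)
        return cv2.warpPerspective(content, H, proj_size)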


SECOND YEAR FELLOWS

Tammer Barkouki

Name: Tammer Barkouki
School: University of California, Davis
Project: Explainable Human-Autonomy Teaming for Deep Space Exploration
Research Advisor: Dr. Stephen Robinson


A new sub-field of artificial intelligence, called eXplainable AI (XAI), has emerged to improve the transparency and predictability of AI systems. There is a critical need to develop XAI methods for users of AI-driven systems in safety-critical settings such as deep space habitats. This project is developing XAI techniques for human-autonomy teaming in space exploration. We will gather feedback from NASA scientists, engineers, astronauts, and flight controllers on how they would need to interact with highly autonomous systems on future missions, and conduct human studies with supervisory and collaborative human-robot teaming tasks. The human studies will include both conventional and learning-based AI algorithms for motion planning, perception, and decision making. The end result will be a validated set of design requirements for developing XAI systems for space exploration. Objective 1 is a survey-based study of engineering students and NASA and JPL scientists, engineers, and flight controllers, aimed at determining the design requirements and evaluating how well the chosen metrics capture the effects of the intended design. Objective 2 is a human study testing the effects of XAI on the performance and comprehension of a human subject supervising a robot. The study will involve a robot arm pick-and-place task with preprogrammed, failure-prone subtasks requiring human intervention. Objective 3 is to evaluate an XAI design framework with a learning-based robot, i.e., a robot that incorporates learning-based control. This adds a new layer of uncertainty and presents new challenges for user trust and comprehension of the robotic system.


Sergio Machaca

Name: Sergio Machaca
School: Johns Hopkins University
Project: Investigating the Efficacy of Dynamically Modulated Multi-Modality Haptic Feedback in Robot-Assisted Minimally Invasive Surgical Training
Research Advisor: Dr. Jeremy Brown


Despite the widespread use of robotic minimally invasive surgery (RMIS) in surgical procedures, commercially available RMIS platforms do not provide haptic feedback. This technical limitation can significantly increase the learning curve for robotic surgery trainees. In addition, the standard approach to RMIS training requires an expert evaluator to observe and rate trainee performance, often not in real time. Recent studies have begun to demonstrate the utility of haptic feedback in training specific aspects of RMIS skill. However, it is not well understood whether the utility of these haptic feedback approaches scales when they are combined and provided simultaneously to the trainee. In addition, little is known about whether trainee skill level can be used as a metric for dynamically modulating these feedback approaches in real time. The central hypothesis of the proposed work is that a dual-modality haptic feedback training system for RMIS that dynamically tunes each modality according to surgical skill will provide more utility in RMIS training than single-modality, unmodulated haptic feedback. I propose a one-year, three-phase research project to evaluate this hypothesis with surgical trainees on the da Vinci surgical robot, using dual-modality haptic feedback of robot arm accelerations and contact forces.
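One simple way to picture the proposed skill-based modulation is as a mapping from an estimated skill score to per-modality feedback gains, as in the sketch below. The linear mapping and the specific numbers are hypothetical and chosen purely for illustration.

    def modulated_gains(skill_score):
        """Map a skill estimate in [0, 1] (0 = novice, 1 = expert) to feedback gains."""
        skill = min(max(skill_score, 0.0), 1.0)
        force_gain = 1.0 - 0.7 * skill       # novices receive strong force cues
        vibration_gain = 1.0 - 0.3 * skill   # acceleration cues taper off more slowly
        return force_gain, vibration_gain

    def feedback_command(contact_force_n, tool_accel_ms2, skill):
        """Scale the raw sensor signals before rendering them to the trainee."""
        f_gain, v_gain = modulated_gains(skill)
        return {
            "force_feedback_n": f_gain * contact_force_n,
            "vibration_feedback": v_gain * tool_accel_ms2,
        }

    print(feedback_command(contact_force_n=2.5, tool_accel_ms2=9.0, skill=0.2))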


Neeli Tummala

Name: Neeli Tummala
School: University of California, Santa Barbara
Project: Computational Modeling of Human Touch for Haptic Feedback in Virtual Environments
Research Advisor: Dr. Yon Visell


Virtual reality (VR) simulations hold promise for training in many areas, including surgery, robotics, and aerospace. Effective training simulations should allow users to experience realistic and informative touch sensations during manipulation tasks with virtual objects. However, existing haptic VR technologies cannot provide tactile feedback that matches the sensations felt during real-world manual activities. This is partly due to a limited understanding of the complex mechanical and neural processes underlying the sense of touch, especially the prominent role of hand biomechanics in spatiotemporally filtering tactile signals and shaping touch perception. I aim to address this crucial gap in knowledge by developing a novel computational framework for simulating tactile sensing in the hand. This approach integrates whole-hand vibrometry measurements of touch-elicited signals in the skin with physiologically validated neuron models of touch receptors. The simulation outputs are then analyzed and interpreted using machine learning and neural information theory techniques, including approaches drawn from sensory neuroscience research in audition and vision. The tools and methods developed in this project will be consolidated into an open-source software toolbox that will enable advances in understanding the sense of touch and facilitate the development of new haptic technology supporting touch interactions in virtual environments.
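As a toy stand-in for the physiologically validated receptor models mentioned above, the sketch below drives a leaky integrate-and-fire unit with a synthetic skin-vibration trace. All parameter values are illustrative and not drawn from the project.

    import numpy as np

    def lif_spikes(vibration, dt=1e-4, tau=0.005, gain=1000.0, threshold=1.0):
        """Leaky integrate-and-fire stand-in for a touch receptor.

        vibration -- skin-vibration trace (arbitrary units), sampled every dt seconds.
        Returns spike times in seconds.
        """
        v, spikes = 0.0, []
        for i, x in enumerate(np.abs(vibration)):   # rectified vibration as drive
            v += dt * (-v / tau + gain * x)         # leaky integration
            if v >= threshold:
                spikes.append(i * dt)
                v = 0.0                             # reset after a spike
        return spikes

    # A synthetic 250 Hz burst standing in for a vibrometry measurement.
    t = np.arange(0, 0.1, 1e-4)
    vibration = 0.5 * np.sin(2 * np.pi * 250 * t)
    print(f"{len(lif_spikes(vibration))} spikes in {t[-1]:.2f} s")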



If you would like to find out more about our Link Foundation Modeling, Simulation and Training Fellows and projects that have been funded in the field of Modeling, Simulation and Training by the Link Foundation, please visit the Link Modeling, Simulation and Training webpage at http://www.linksim.org/.