I am passionate about inventing new interactive technologies that improve the user experience and empower people with new and exciting capabilities. I am a full-stack interactive systems engineer who invents interactive artifacts from the electronics up to the UI software. I have a keen interest in applying AI and machine learning to create new interactive applications and experiences by enabling machines to understand human gestures, activities, and mental and physical states. As a Human-Computer Interaction researcher, I value improving the usability and UX of new technologies through studies with actual users. My work has resulted in multiple patents and has been published at top venues in the field.
Prior to joining Snapchat as a Senior Research Engineer, I worked as a Senior UX Engineer building the UI for self-driving trucks at Kodiak Robotics. From 2017 to 2018, I was a member of the Future Experience Team at Harman International. Before that, I worked as a research scientist at FX Palo Alto Laboratory (FXPAL). During my Ph.D. studies, I worked as a research assistant in the Media Informatics and Human-Computer Interaction Group at the University of Munich, Germany, where I obtained my Ph.D. in Computer Science in 2012. From 2008 to 2011, I worked as a junior researcher and Ph.D. student in the Quality and Usability Group at Telekom Innovation Laboratories, TU Berlin, Germany. I interned at the Microsoft Applied Sciences Group in the summers of 2010 and 2011. I hold a Diplom degree in Computer Science from RWTH Aachen University, Germany.
I have published 32 peer-reviewed scientific publications in top journals and conferences, including 1 best paper award and 2 honorable mentions. My technical contributions have resulted in 26 granted and a multitude of pending US and international patents.
At Kodiak Robotics, one of the leading self-driving truck startups, I spearheaded the development of the user interface for the autonomous vehicles. The Viewer, as we called it, catered to a number of different stakeholders in the company, from autonomous vehicle operators, QA engineers, and machine learning engineers to Kodiak executives at demo events. I enjoyed my time at Kodiak very much. I learned a lot about robotic vehicle stack design, React.js and Three.js, as well as user-centered UI/UX design and engineering in a real-world setting. Software updates I pushed were used immediately by co-workers across the company, and it was invigorating to have this level of daily impact.
[Image credit: Kodiak voluntary safety self-assessment (VSSA), 2020]
I noticed that the majority of thermal haptic output devices in the existing HCI literature have only one thermal pixel. Why not have an entire grid of them? Seizing this opportunity to contribute to the body of haptic UI research, I engineered ThermoTouch, a novel thermal haptic output device. ThermoTouch is a haptic hardware prototype that provides a grid of thermal pixels with an overlaid video projection. Unlike previous devices, which mainly use Peltier elements for thermal output, ThermoTouch uses liquid cooling and electro-resistive heating to output thermal feedback at arbitrary grid locations, potentially providing faster temperature switching times and a higher temperature dynamic range. Furthermore, the PCB-based design allows us to incorporate capacitive touch sensing directly on a thermal pixel.
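To illustrate the grid-addressing idea, here is a minimal control sketch in Python. The ThermalPixelGrid class, the heater/coolant_valve/sensors interfaces, and the bang-bang control policy are my own simplifications for illustration, not the actual ThermoTouch firmware.

```python
# Hypothetical sketch of addressing a grid of thermal pixels.
# The hardware interfaces and control policy are simplified assumptions,
# not the actual ThermoTouch firmware.

class ThermalPixelGrid:
    def __init__(self, rows, cols, heater, coolant_valve, sensors):
        self.rows, self.cols = rows, cols
        self.heater = heater                # per-pixel electro-resistive heater driver
        self.coolant_valve = coolant_valve  # per-pixel liquid-cooling valve driver
        self.sensors = sensors              # per-pixel temperature sensors (deg C)
        self.targets = [[None] * cols for _ in range(rows)]

    def set_target(self, row, col, temp_c):
        """Request a target temperature for one thermal pixel."""
        self.targets[row][col] = temp_c

    def update(self, hysteresis=0.5):
        """One bang-bang control step: heat below target, cool above it."""
        for r in range(self.rows):
            for c in range(self.cols):
                target = self.targets[r][c]
                if target is None:
                    continue
                current = self.sensors.read(r, c)
                if current < target - hysteresis:
                    self.heater.on(r, c)
                    self.coolant_valve.close(r, c)
                elif current > target + hysteresis:
                    self.heater.off(r, c)
                    self.coolant_valve.open(r, c)
                else:
                    self.heater.off(r, c)
                    self.coolant_valve.close(r, c)
```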
ThermoTouch was presented as a full paper at ACM Interactive Surfaces and Spaces (ISS) 2017 and as late-breaking work at CHI 2016.
Most current mobile and wearable devices are equipped with inertial measurement units (IMUs) that allow the detection of motion gestures, which can be used for interactive applications. A difficult problem to solve, however, is how to separate ambient motion from an actual motion gesture input. We explore the use of motion gesture data labeled with gesture execution phases for training supervised learning classifiers for gesture segmentation. We believe that using gesture execution phase data can significantly improve the accuracy of gesture segmentation algorithms. We define gesture execution phases as the start, middle, and end of each gesture. Since labeling motion gesture data with gesture execution phase information is labor-intensive, we used crowd workers to perform the labeling.
Using this labeled data set, we trained SVM-based classifiers to segment motion gestures from ambient movement of the device. Our main results show that training gesture segmentation classifiers with phase-labeled data substantially increases the accuracy of gesture segmentation: we achieved a gesture segmentation accuracy of 0.89 for simulated online segmentation using a sliding window approach.
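As a rough illustration of the sliding-window approach, the sketch below trains an SVM on simple per-window IMU features. The window_features and make_windows helpers, the window length and stride, and the phase-sequence heuristic in the comments are illustrative assumptions, not the exact GestureSeg pipeline.

```python
# Sketch of sliding-window gesture segmentation with an SVM.
# Features, window parameters, and labels are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

def window_features(imu_window):
    """imu_window: (window_len, 6) array of accel/gyro samples."""
    return np.concatenate([imu_window.mean(axis=0),
                           imu_window.std(axis=0),
                           np.abs(np.diff(imu_window, axis=0)).mean(axis=0)])

def make_windows(imu_stream, labels, window_len=32, stride=8):
    """Cut a labeled IMU recording into overlapping feature windows."""
    X, y = [], []
    for start in range(0, len(imu_stream) - window_len + 1, stride):
        win = imu_stream[start:start + window_len]
        X.append(window_features(win))
        # Label each window by the phase label at its center sample
        # (e.g., "ambient", "start", "middle", "end").
        y.append(labels[start + window_len // 2])
    return np.array(X), np.array(y)

# Training: phase-labeled recordings -> windows -> SVM classifier, e.g.
#   X_train, y_train = make_windows(train_stream, train_phase_labels)
#   clf = SVC(kernel="rbf", C=10.0).fit(X_train, y_train)
# Online use: classify each incoming window and treat a
# start -> middle -> end phase sequence as a segmented gesture.
```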
A full paper about GestureSeg will be presented at EICS 2016.
The FXPAL robotics research group has recently explored technologies for improving the usability of mobile telepresence robots. We evaluated a prototype head-tracked stereoscopic (HTS) teleoperation interface for a remote collaboration task. The results of this study indicate that using an HTS system reduces task errors and improves the perceived collaboration success and viewing experience.
 
We also developed a new focus-plus-context viewing technique for mobile robot teleoperation. This allows us to use wide-angle camera images that provide rich contextual visual awareness of the robot's surroundings while preserving a distortion-free region in the middle of the camera view.
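A minimal sketch of one way to realize such a radial focus-plus-context warp with OpenCV's remap is shown below. The focus_plus_context_warp function, its focus_radius and zoom parameters, and the piecewise radial mapping are my own illustrative choices, not the exact technique from the paper.

```python
# Sketch of a radial focus-plus-context warp: the central region stays
# (nearly) undistorted while the periphery of a wide-angle frame is
# compressed into the remaining screen space. Parameters are illustrative.
import numpy as np
import cv2

def focus_plus_context_warp(image, focus_radius=0.4, zoom=2.0):
    h, w = image.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    max_r = float(np.hypot(cx, cy))

    # Destination pixel grid, expressed as normalized radius from the center.
    ys, xs = np.indices((h, w), dtype=np.float32)
    dx, dy = xs - cx, ys - cy
    r_dst = np.hypot(dx, dy) / max_r

    # Inverse mapping: the focus region samples a zoomed-in central patch of
    # the source; the ring outside it compresses the rest of the wide image.
    inner_src = focus_radius / zoom
    r_src = np.where(
        r_dst <= focus_radius,
        r_dst / zoom,
        inner_src + (r_dst - focus_radius) * (1.0 - inner_src) / (1.0 - focus_radius),
    )
    scale = np.divide(r_src, r_dst, out=np.ones_like(r_dst), where=r_dst > 0)
    map_x = (cx + dx * scale).astype(np.float32)
    map_y = (cy + dy * scale).astype(np.float32)
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```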
To this, we added a semi-automatic robot control method that allows operators to navigate the telepresence robot by pointing and clicking directly on the camera image feed. This through-the-screen interaction paradigm has the advantage of decoupling operators from the robot control loop, freeing them for other tasks besides driving the robot.
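One way to realize such click-to-drive navigation is to back-project the clicked pixel onto the ground plane using a pinhole camera model. The click_to_ground_goal function below, its intrinsics parameters (fx, fy, cx, cy), and the known camera height and pitch are assumptions for this sketch, not necessarily the method used in the papers.

```python
# Sketch: convert a clicked image pixel into a 2D navigation goal on the
# ground plane, assuming a pinhole camera with known intrinsics mounted at a
# known height and downward pitch. All parameter values are illustrative.
import numpy as np

def click_to_ground_goal(u, v, fx, fy, cx, cy, cam_height, cam_pitch_rad):
    """Return (x_forward, y_left) in meters in a robot-aligned frame whose
    origin lies directly below the camera, or None if the click is above
    the horizon."""
    # Ray through the pixel in the camera frame (x right, y down, z forward).
    ray_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])

    # Axis swap from camera frame to a level, robot-aligned frame
    # (x forward, y left, z up).
    level = np.array([[0.0, 0.0, 1.0],
                      [-1.0, 0.0, 0.0],
                      [0.0, -1.0, 0.0]])
    # Camera pitched down by cam_pitch_rad about the robot's y (left) axis.
    c, s = np.cos(cam_pitch_rad), np.sin(cam_pitch_rad)
    pitch = np.array([[c, 0.0, s],
                      [0.0, 1.0, 0.0],
                      [-s, 0.0, c]])
    ray_robot = pitch @ level @ ray_cam

    if ray_robot[2] >= 0.0:
        return None  # Ray never hits the ground (click at or above horizon).
    t = -cam_height / ray_robot[2]   # Scale to reach the ground plane z = -h.
    return t * ray_robot[0], t * ray_robot[1]

# Example (hypothetical values):
# goal = click_to_ground_goal(700, 520, fx=800, fy=800, cx=640, cy=360,
#                             cam_height=1.2, cam_pitch_rad=np.radians(15))
```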
As a result of this work, we presented two papers at the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN).
The paper Look Where You're Going: Visual Interfaces for Robot Teleoperation won the best paper award at the conference!
What if mobile phones were equipped with depth-imaging cameras? PalmSpace envisions the use of such cameras to facilitate interaction with 3D content using hand gestures. We developed a technique that maps the pose of the user's palm directly to 3D object rotation. Our user study shows that users could manipulate 3D objects significantly faster than with a standard virtual trackball on the touch screen.
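A minimal sketch of this direct palm-to-object mapping is given below, assuming a hand tracker that reports the palm normal and a direction toward the fingers. The palm_rotation and object_orientation helpers and the tracker outputs (palm_normal, palm_to_fingers) are hypothetical stand-ins; only the idea of driving object rotation from palm pose is illustrated.

```python
# Sketch: map a tracked palm pose directly to a 3D object rotation.
# The hand-tracker interface is a hypothetical stand-in.
import numpy as np

def palm_rotation(palm_normal, palm_to_fingers):
    """Build a rotation matrix (world <- palm) from two tracked directions."""
    z = np.asarray(palm_normal, dtype=float)
    z /= np.linalg.norm(z)                                 # out of the palm
    x = np.asarray(palm_to_fingers, dtype=float)
    x = x - np.dot(x, z) * z                               # toward the fingers,
    x /= np.linalg.norm(x)                                 # orthogonalized
    y = np.cross(z, x)
    return np.column_stack([x, y, z])

def object_orientation(current_palm_rot, start_palm_rot, start_object_rot):
    """Absolute mapping: the object follows the palm's rotation relative to
    the palm pose captured when the manipulation started."""
    relative = current_palm_rot @ start_palm_rot.T
    return relative @ start_object_rot
```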
Protractor3D is a tilt-invariant, data-driven gesture recognizer for 3D motion gestures, operating on data that can be obtained, for example, from the 3D accelerometers in smartphones.
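To give a flavor of such data-driven recognition, below is a deliberately simplified nearest-neighbor template matcher over resampled, normalized accelerometer traces. The preprocess and recognize functions and the n_points parameter are my own illustrative choices, and the sketch omits Protractor3D's tilt-invariant alignment step; it is not the published algorithm.

```python
# Simplified sketch of template-based 3D gesture recognition: resample each
# accelerometer trace, normalize it, and pick the nearest stored template by
# cosine similarity. Protractor3D's tilt-invariant alignment is omitted.
import numpy as np

def preprocess(trace, n_points=32):
    """trace: (T, 3) accelerometer samples -> flat, normalized feature vector."""
    trace = np.asarray(trace, dtype=float)
    t_old = np.linspace(0.0, 1.0, len(trace))
    t_new = np.linspace(0.0, 1.0, n_points)
    resampled = np.column_stack(
        [np.interp(t_new, t_old, trace[:, axis]) for axis in range(3)]
    )
    resampled -= resampled.mean(axis=0)          # remove offset
    flat = resampled.ravel()
    return flat / (np.linalg.norm(flat) + 1e-9)  # unit length -> scale invariant

def recognize(trace, templates):
    """templates: list of (label, preprocessed_vector); returns (label, score)."""
    query = preprocess(trace)
    scores = [(float(np.dot(query, vec)), label) for label, vec in templates]
    best_score, best_label = max(scores)
    return best_label, best_score
```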