About me

Hi, I’m Yuki Koyama. Thank you so much for visiting my page :)

Currently

I’m working at Honda R&D Japan (best known for ASIMO, one of the most advanced bipedal robots of its time) as a researcher in robotics. Here, I do research on an avatar robot, aiming for a society where people can experience and work beyond the constraints of time and space (please watch this video). My current focus is on intuitive control, human-robot interaction, and human-machine interfaces.

Graduate Studies

I received a Bachelor’s degree (2018) in Mechanical Engineering from the Department of Mechanical Engineering, the University of Tokyo. I obtained a Master’s degree (2020) and a Ph.D. (2023) in Mechanical Engineering from the Graduate School of Engineering, the University of Tokyo.

I was a member of the Harada Laboratory (formerly the Mitsuishi-Harada Laboratory) at the University of Tokyo, Japan. The laboratory develops surgical robots specializing in minimally invasive surgery and microsurgery.

I have long been passionate about automating surgical tasks and expanding surgeons’ abilities with robots. During my graduate research, I worked on task (semi-)autonomy to assist vitreoretinal surgery, one of the most challenging types of eye surgery, using the SmartArm surgical robotic system.

ICRA2023

Surgeons frequently rotate the eye during surgery to observe regions around the fundus, a maneuver called orbital manipulation. In this paper, we propose autonomous orbital manipulation in robot-assisted vitreoretinal surgery. In a simulation study, we preliminarily investigated the increase in the manipulability of our system under orbital manipulation. Furthermore, we demonstrated the feasibility of our method in experiments with a physical robot, showing an increase in the viewable area compared to a conventional technique.
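
As rough intuition for the manipulability analysis, here is a minimal sketch (my own illustration, not the paper’s implementation) using Yoshikawa’s manipulability measure; the Jacobians below are random placeholders, not the SmartArm’s actual kinematics:

```python
import numpy as np

def manipulability(J: np.ndarray) -> float:
    """Yoshikawa's manipulability measure: w = sqrt(det(J @ J.T))."""
    return float(np.sqrt(np.linalg.det(J @ J.T)))

rng = np.random.default_rng(seed=0)

# 6-DOF task Jacobian of the instrument alone (placeholder values).
J_instrument = rng.standard_normal((6, 6))

# Orbital manipulation adds degrees of freedom (rotating the eye itself),
# i.e., extra columns in the combined Jacobian.
J_orbital = rng.standard_normal((6, 2))
J_combined = np.hstack([J_instrument, J_orbital])

# Appending columns adds a positive-semidefinite term to J @ J.T, so the
# measure can only stay the same or grow -- the intuition behind the
# manipulability increase observed in the simulation study.
print(f"instrument only: {manipulability(J_instrument):.3f}")
print(f"with orbital manipulation: {manipulability(J_combined):.3f}")
```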

Paper: arXiv, IEEE. Source code: Minimal example

Paper-related video:

TMRB2022

Robot assistance allows for task automation. In this work, we propose a strategy to automate the motion of the light guide with respect to the surgical instrument. This automation keeps the instrument’s shadow inside the microscopic view at all times, an important cue for accurately positioning the instrument on the retina. We show simulations and experiments demonstrating that the proposed strategy is effective over a 700-point grid on the retina of a surgical phantom. Furthermore, we integrated the proposed strategy with image processing and succeeded in positioning while relying only on the robot’s geometric information and microscopic images.
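
To illustrate the geometry behind the shadow cue, here is a toy sketch (my own simplification, not the paper’s controller) that projects the instrument tip onto a flattened retina along the ray from the light-guide tip; all coordinates are hypothetical:

```python
import numpy as np

def shadow_on_retina(light_tip: np.ndarray, instrument_tip: np.ndarray) -> np.ndarray:
    """Intersect the ray from the light-guide tip through the instrument
    tip with the plane z = 0 (the flattened retina in this toy model)."""
    direction = instrument_tip - light_tip
    t = -light_tip[2] / direction[2]  # ray parameter at which z reaches 0
    return light_tip + t * direction

# Hypothetical positions in millimetres, not measured from the real setup.
light_tip = np.array([5.0, 0.0, 20.0])       # tip of the light guide
instrument_tip = np.array([1.0, 0.0, 5.0])   # instrument tip above the retina

shadow = shadow_on_retina(light_tip, instrument_tip)
gap = np.linalg.norm(shadow[:2] - instrument_tip[:2])
print(f"shadow at {shadow[:2]}, tip-shadow gap = {gap:.2f} mm")
# As the instrument descends toward the retina, the gap shrinks to zero:
# this is the depth cue the shadow gives the surgeon, and why the light
# guide must be steered so the shadow stays inside the microscopic view.
```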

Paper: arXiv, IEEE

Paper-related video: