EEIS, Department of Electrical Engineering and Information Systems, Graduate School of Engineering, The University of Tokyo

OISHI Takeshi, Associate Professor

Komaba Campus

Media, Intelligence & Computation
Perceptual information processing
Intelligent robotics

Spatiotemporal modeling and representation of the real world

We are developing technologies for 3D modeling, recognition, and analysis of the real world using optical sensor devices such as LiDAR and cameras to realize autonomous mobility for robots and self-driving vehicles.

Research field 1

Accurate 3D measurement by optical sensor fusion

The fusion of various sensors is essential for reconstructing 3D models of the surrounding environment for autonomous mobile systems. We are developing systems that integrate multiple optical sensors, such as LiDAR and cameras, to generate accurate 3D maps. Our team has developed high-precision inter-sensor calibration methods and pose estimation techniques that fuse camera and LiDAR data.
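A central step in this kind of LiDAR-camera fusion is projecting LiDAR points into the camera image once the extrinsic calibration is known. The following is only a minimal illustrative sketch of that geometric step, not the lab's actual pipeline; the function name and the assumption that (R, t) maps the LiDAR frame into the camera frame are ours.

```python
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    """Illustrative sketch: project LiDAR points into a camera image
    given a hypothetical extrinsic calibration (R, t) and intrinsics K.

    points_lidar : (N, 3) points in the LiDAR frame
    R            : (3, 3) rotation, LiDAR frame -> camera frame (assumed convention)
    t            : (3,)   translation, LiDAR frame -> camera frame
    K            : (3, 3) camera intrinsic matrix
    Returns pixel coordinates and depths for points in front of the camera.
    """
    # Transform points into the camera frame.
    points_cam = points_lidar @ R.T + t
    # Keep only points with positive depth (in front of the camera).
    in_front = points_cam[:, 2] > 0
    points_cam = points_cam[in_front]
    # Perspective projection with the intrinsic matrix.
    uvw = points_cam @ K.T
    pixels = uvw[:, :2] / uvw[:, 2:3]
    return pixels, points_cam[:, 2]
```

In practice the accuracy of such a projection depends entirely on the quality of the inter-sensor calibration, which is why high-precision calibration is treated as a research topic in its own right.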
Research field 2

SLAM and robot navigation

Simultaneous localization and mapping (SLAM) is crucial for robots to move autonomously. As basic research for SLAM, we are developing methods for depth estimation from cameras, depth completion by LiDAR-camera fusion, and robust loop closure. We are also developing robot navigation methods that apply the head-tracking technology of MR devices, as well as robot teleoperation systems.
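Depth completion by LiDAR-camera fusion starts from the sparse depths that LiDAR returns provide at a few image pixels and fills in the rest of the image. The sketch below only illustrates the idea with simple interpolation; the function name is hypothetical, and actual completion methods typically use learned models rather than linear interpolation.

```python
import numpy as np
from scipy.interpolate import griddata

def densify_sparse_depth(pixels, depths, image_shape):
    """Illustrative sketch: interpolate sparse projected LiDAR depths
    into a dense depth map of size image_shape = (H, W).

    pixels : (N, 2) pixel coordinates (u, v) of projected LiDAR returns
    depths : (N,)   corresponding metric depths
    """
    h, w = image_shape
    grid_u, grid_v = np.meshgrid(np.arange(w), np.arange(h))
    # Linear interpolation inside the convex hull of the samples;
    # pixels outside it stay NaN and would need a learned completion model.
    dense = griddata(pixels, depths, (grid_u, grid_v), method="linear")
    return dense
```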
Research field 3

AR/MR technology development and robot applications

We are developing augmented and mixed reality (AR/MR) technologies that superimpose virtual worlds generated by computer graphics onto the real world. In addition to essential technologies such as camera tracking, occlusion handling, and visibility-based rendering, we have developed a vehicle-based mobile MR system that enables users to experience MR while moving over a wide area. We have also proposed applying the head-tracking technology of MR devices to robot navigation.
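Occlusion handling in AR/MR amounts to deciding, per pixel, whether virtual content lies in front of or behind the real scene. The fragment below is a minimal sketch of that depth test, assuming dense depth maps for both the real scene and the rendered virtual layer; it is not the lab's implementation, and the function name is illustrative.

```python
import numpy as np

def composite_with_occlusion(real_rgb, real_depth, virtual_rgb, virtual_depth):
    """Illustrative sketch: composite a rendered virtual layer over a
    camera image, hiding virtual pixels that lie behind real geometry.

    real_rgb      : (H, W, 3) camera image
    real_depth    : (H, W)    estimated depth of the real scene
    virtual_rgb   : (H, W, 3) rendered virtual content
    virtual_depth : (H, W)    depth buffer of the virtual render
                              (np.inf where nothing is rendered)
    """
    # Virtual content is visible only where it is closer than the real scene.
    visible = virtual_depth < real_depth
    out = real_rgb.copy()
    out[visible] = virtual_rgb[visible]
    return out
```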