Hyunjung Lee (Graduate Student Candidate)


Undergraduate Student (B.S), Embedded System-on-Chip Integrator
AI-Embedded System/Software on Chip (AI-SoC) Lab
School of Electronics Engineering, Kyungpook National University
Phone: +82 53 940 8648
E-mail: guswnd0403 [at] naver [dot] com
[Homepage] [Google Scholar] [SVN]


Introduction

Full Bio Sketch

Mr. Lee is currently pursuing his undergraduate degree in Electronics Engineering at Kyungpook National University, Daegu, Republic of Korea. His research interests cover automotive embedded systems, especially the design of a multi-camera interoperable emulation framework using embedded edge-cloud AI computing for autonomous vehicle driving. His goal is to implement the full stack, from low-level embedded firmware to the autonomous driving algorithm, including the hardware systems, interoperating through human-interactive interfaces.

Research Topic

Vehicle-Controller-Interface Interoperation Emulation Framework

Frames from the four cameras connected to a Raspberry Pi are streamed to a PC. On the PC, YOLO-based object detection is run on the streamed frames to determine each object's position and its distance from the vehicle. From this information, a steering angle and speed suitable for safe driving are computed and sent to an Arduino board, which ultimately controls the vehicle. In addition, a yoke steering wheel connected to the PC reports the driver's current steering angle. Control normally follows the driver's steering angle, but if it deviates from the safe-driving angle by more than a set threshold, the safe-driving angle is applied instead. This makes it possible to respond to dangerous situations, such as lane departure or an obstacle collision, that can arise when the driver cannot fully concentrate on driving. While driving, the front camera has the highest priority among the four cameras; information from the other cameras is used only in dangerous situations that exceed the threshold, allowing the system to respond to emergencies.
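
The override rule reduces to a single comparison. Below is a minimal Python sketch of that decision; the threshold value and all names (SAFE_ANGLE_THRESHOLD_DEG, select_steering_angle) are illustrative assumptions, not the framework's actual API.

    # Minimal sketch of the steering-override rule (names and threshold are
    # illustrative assumptions, not the actual framework code).
    SAFE_ANGLE_THRESHOLD_DEG = 15.0  # assumed allowed deviation, in degrees

    def select_steering_angle(user_angle_deg: float, safe_angle_deg: float) -> float:
        """Choose the steering angle to send to the Arduino controller.

        The driver's yoke input is used by default; the vision-derived safe
        angle takes over only when the driver deviates from it by more than
        the threshold (e.g., lane departure or an obstacle ahead).
        """
        if abs(user_angle_deg - safe_angle_deg) > SAFE_ANGLE_THRESHOLD_DEG:
            return safe_angle_deg  # override: driver input deemed unsafe
        return user_angle_deg      # normal case: driver stays in control

The same comparison is driven primarily by the front camera's safe angle; the other cameras feed in only once a danger threshold is exceeded, mirroring the camera priority described above.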

The distance to an object is obtained by converting the coordinates of an object detected in the two-dimensional image plane into coordinates in three-dimensional space, based on the camera calibration information, and then measuring the distance from the camera's position. Beyond this method, we are studying various ways of estimating the distance to an object with a monocular camera and plan to adopt the best-performing one.
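
As an illustration of the calibration-based method, the sketch below back-projects the bottom-center pixel of a detected bounding box onto the ground plane using a pinhole camera model. The intrinsic matrix and camera height are placeholder values for illustration, not the lab's actual calibration data.

    import numpy as np

    # Assumed intrinsics (fx, fy, cx, cy) and camera mounting height;
    # these are placeholders to be replaced with real calibration values.
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    CAMERA_HEIGHT_M = 1.2

    def distance_to_object(u: float, v: float) -> float:
        """Distance from the camera to the ground point under pixel (u, v).

        (u, v) is the bottom-center of a detected bounding box, assumed to
        lie on a flat ground plane; the viewing ray is scaled until it drops
        CAMERA_HEIGHT_M below the camera (y points down in camera frame).
        """
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray in camera frame
        if ray[1] <= 0:
            raise ValueError("pixel is above the horizon; no ground intersection")
        point = ray * (CAMERA_HEIGHT_M / ray[1])  # 3-D point on the ground
        return float(np.linalg.norm(point))

The flat-ground assumption is the simplest of the monocular options under study; an alternative estimator would replace only this function without changing the rest of the pipeline.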

Additionally, in parking situations, the car recognizes the parking lines around the vehicle using the four cameras and then parks automatically along an optimal parking path, switching appropriately between drive and reverse modes. While the car is parking, an around view, as if looking down at the vehicle from above, is displayed so the driver can also follow the surrounding situation. Object detection during parking applies a stricter risk criterion than during driving, so the car can park while safely responding to situations such as a vehicle or a person suddenly passing by.
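
The stricter parking criterion can be expressed as a mode-dependent stop check, sketched below; the distance thresholds and names are assumptions for illustration, not values from the framework.

    # Mode-dependent stop check (thresholds are illustrative assumptions).
    DRIVING_RISK_DISTANCE_M = 1.0  # assumed stop distance while driving
    PARKING_RISK_DISTANCE_M = 2.0  # stricter threshold while parking

    def should_stop(nearest_object_m: float, parking: bool) -> bool:
        """Halt the maneuver if the nearest detected object is closer than
        the mode-dependent limit; parking uses the stricter value so the
        car yields to vehicles or pedestrians suddenly passing nearby."""
        limit = PARKING_RISK_DISTANCE_M if parking else DRIVING_RISK_DISTANCE_M
        return nearest_object_m < limit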

Publications

Conference Publications (Intl. 1)

  • Hyunjung Lee and Daejin Park. Multi-Camera Interoperable Emulation Framework using Embedded Edge-Cloud AI Computing for Autonomous Vehicle Driving. In IEEE Vehicular Networking Conference (VNC), 2024.

Participation in International Conferences

  • IEEE VNC, Kobe, Japan

Last Updated: 2024.5.2