Myeongjin Kang (Integrated Ph.D. Student)
Introduction

Full Bio Sketch
Mr. Kang received his B.S. degree in Electronics Engineering from Kyungpook National University, Daegu, Republic of Korea, in 2020. He is currently an integrated Ph.D. student in the School of Electronics Engineering at Kyungpook National University, Daegu, Republic of Korea. His research interests include robust execution techniques for microcontrollers, covering ECC processing and the analysis of power consumption data in embedded systems. His research direction pursues performance improvement, low power consumption, and size reduction based on robust execution for the development of microcontrollers. Currently, he is conducting research on analyzing processor bus communication with machine learning for error detection and error prediction toward robust execution.

Research Topics

ECC-Protected Robust Processors
A tiny processing unit (TPU) operating with insufficient power always faces a data protection problem. To solve this problem, many TPUs and embedded systems use an error-correcting code (ECC), especially a Hamming code. However, adding an ECC decoding block to the TPU can create a bottleneck: most TPUs follow a Von Neumann architecture and already spend a large share of their time in the instruction fetch stage, and the extra ECC decoding lengthens the fetch and intensifies the bottleneck. In this research, we propose an architecture with a parallelized ECC decoding block. Although it increases memory usage, the parallelized decoding block speeds up the entire TPU by completing the ECC decoding more quickly. The architecture was synthesized and validated with Design Compiler, and the results showed a performance improvement.
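To make the decoding step concrete, here is a minimal C sketch of a Hamming(7,4) single-error-correcting decode, with a matching encoder for the round trip. The bit layout, function names, and code size are illustrative assumptions rather than the synthesized hardware design; the relevant point is that the three syndrome reductions are independent of one another, which is what a parallelized decoding block can exploit.

```c
#include <stdint.h>
#include <stdio.h>

/* Hamming(7,4) sketch: codeword bits are indexed 1..7 (bit 0 of the byte is
 * unused).  Parity bits sit at positions 1, 2, 4; data bits at 3, 5, 6, 7. */

static uint8_t hamming74_encode(uint8_t data)          /* data: 4-bit value */
{
    uint8_t d0 = data & 1u, d1 = (data >> 1) & 1u;
    uint8_t d2 = (data >> 2) & 1u, d3 = (data >> 3) & 1u;
    uint8_t p1 = d0 ^ d1 ^ d3;                         /* covers positions 1,3,5,7 */
    uint8_t p2 = d0 ^ d2 ^ d3;                         /* covers positions 2,3,6,7 */
    uint8_t p4 = d1 ^ d2 ^ d3;                         /* covers positions 4,5,6,7 */
    return (uint8_t)((p1 << 1) | (p2 << 2) | (d0 << 3) |
                     (p4 << 4) | (d1 << 5) | (d2 << 6) | (d3 << 7));
}

static uint8_t hamming74_decode(uint8_t cw)            /* returns corrected 4-bit data */
{
    /* The three syndrome bits are independent XOR reductions; in hardware
     * they can be evaluated concurrently rather than one after another. */
    uint8_t s1 = ((cw >> 1) ^ (cw >> 3) ^ (cw >> 5) ^ (cw >> 7)) & 1u;
    uint8_t s2 = ((cw >> 2) ^ (cw >> 3) ^ (cw >> 6) ^ (cw >> 7)) & 1u;
    uint8_t s4 = ((cw >> 4) ^ (cw >> 5) ^ (cw >> 6) ^ (cw >> 7)) & 1u;
    uint8_t syndrome = (uint8_t)(s1 | (s2 << 1) | (s4 << 2));

    if (syndrome != 0)                                 /* syndrome = position of the flipped bit */
        cw ^= (uint8_t)(1u << syndrome);

    return (uint8_t)(((cw >> 3) & 1u) | (((cw >> 5) & 1u) << 1) |
                     (((cw >> 6) & 1u) << 2) | (((cw >> 7) & 1u) << 3));
}

int main(void)
{
    uint8_t cw = hamming74_encode(0xB);                /* encode the nibble 0b1011 */
    cw ^= 1u << 5;                                     /* inject a single-bit fault */
    printf("corrected data: 0x%X\n", hamming74_decode(cw));  /* prints 0xB */
    return 0;
}
```

In the actual design the same reductions would be combinational logic over wider memory words rather than software loops; the sketch only illustrates the decode-and-correct flow.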
Safe Software Execution using Sensor-Fusion
If an error occurs in a system where several edge devices are gathered and operated together, the error may propagate to other edges or bring down the entire system. It is therefore important to judge and control the errors of each edge in such a system, but doing so places a load on the small edges' embedded systems. To solve this problem, we show that the server can determine errors using power consumption data and can read data values from the edges through data communication based on QR codes. The proposed architecture was implemented using a ChipWhisperer to measure the edges' power and data, and a Raspberry Pi to implement the server. Next, we plan to study additional error detection techniques by training machine learning models on processor bus analysis data.
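The server-side judgment described above ultimately amounts to comparing measured power data against expected behavior. The following C sketch is a rough illustration only: it flags an edge when its power trace deviates from a reference trace by more than a threshold. The trace values, the threshold, and the function name are assumptions made for the example, not the actual ChipWhisperer measurement pipeline.

```c
#include <math.h>
#include <stddef.h>
#include <stdio.h>

/* Compare a measured power trace against a reference trace for the same code
 * region and flag the edge as suspect if the mean absolute deviation exceeds
 * a threshold.  Trace length and threshold values are illustrative. */
static int trace_is_anomalous(const double *measured, const double *reference,
                              size_t n, double threshold)
{
    double deviation = 0.0;
    for (size_t i = 0; i < n; i++)
        deviation += fabs(measured[i] - reference[i]);
    return (deviation / (double)n) > threshold;
}

int main(void)
{
    /* Toy traces: the measured trace has a spike that a healthy run lacks. */
    const double reference[] = {0.10, 0.12, 0.11, 0.13, 0.12, 0.11};
    const double measured[]  = {0.10, 0.12, 0.45, 0.47, 0.12, 0.11};
    size_t n = sizeof(reference) / sizeof(reference[0]);

    if (trace_is_anomalous(measured, reference, n, 0.05))
        printf("edge flagged: power profile deviates from reference\n");
    else
        printf("edge looks healthy\n");
    return 0;
}
```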
Cloud-Edge Connected Online Computing for Seamless Edge AI Services
This research aims to empower low-performance edge devices to execute AI tasks seamlessly in real time. Beyond using lightweight models for simple inference, the study focuses on dynamic adaptation to new environmental conditions, enabling real-time sensor control and signal processing tailored to specific contexts. To achieve this objective, the research employs a novel approach: sensor data is reconstructed through virtual simulations hosted on a server, which serves as a cloud accelerator for the edge devices; adaptive algorithms are then generated from these simulations and transmitted to the edge devices over high-speed parallel communication channels, enabling them to execute AI tasks in real time while seamlessly adjusting to changing environmental factors. This approach ensures that the edge devices can respond efficiently to dynamic situations, enhancing their overall performance and utility in various application domains. Through this research, several advantages can be obtained, including seamless AI operation, robust execution, and flexible execution with unlimited volume. These benefits enhance the reliability, adaptability, and scalability of AI execution in edge computing environments.
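Below is a minimal sketch of the edge-side update path, under the simplifying assumption that the server-generated "algorithm" arrives as a fresh set of filter coefficients rather than executable code. The coefficient values and the receive step are illustrative placeholders; the point is that the edge keeps processing with the active parameter set and switches with a single pointer swap once the update has fully arrived.

```c
#include <stddef.h>
#include <stdio.h>

/* Double-buffered algorithm parameters on the edge: the device keeps filtering
 * with the active coefficient set while a newly generated set arrives from the
 * server, then switches over with one pointer swap. */

#define NTAPS 3

static double coeff_a[NTAPS] = {0.25, 0.50, 0.25};   /* currently active set */
static double coeff_b[NTAPS];                        /* staging buffer       */
static const double *active = coeff_a;

static double filter(const double *c, const double *x)   /* tiny FIR step */
{
    double y = 0.0;
    for (size_t i = 0; i < NTAPS; i++)
        y += c[i] * x[i];
    return y;
}

/* Stand-in for receiving server-generated coefficients over the link. */
static void receive_update(double *dst)
{
    const double generated[NTAPS] = {0.10, 0.80, 0.10};
    for (size_t i = 0; i < NTAPS; i++)
        dst[i] = generated[i];
}

int main(void)
{
    const double window[NTAPS] = {1.0, 2.0, 4.0};

    printf("before update: %.2f\n", filter(active, window));

    receive_update(coeff_b);   /* new parameters arrive from the server */
    active = coeff_b;          /* switch processing path without stopping */

    printf("after update:  %.2f\n", filter(active, window));
    return 0;
}
```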
Event-Driven Edge AI Software Offloading Scheduling
Executing AI software on edge devices' own hardware is an increasing trend. However, tasks that require large-scale computation, such as training and preprocessing, are often transmitted to a server for execution while the edge device waits for the response. Although the server processes the data much faster than the edge device, the uncertainty about processing time and required resources leaves the edge device in a polling state. There is therefore a need for interrupt- and event-driven scheduling of edge AI software. This research focuses on identifying dependencies through AI software analysis, scheduling AI software based on execution time relative to code and parameter data size, and developing interrupt- and event-driven edge-server parallel processing techniques. By analyzing AI software code in terms of dependencies, size, execution time, and resource usage, the code is restructured to operate in an event-driven manner based on interrupts rather than polling on the edge device. This enables fully parallel processing between the server and the edge, allowing the edge device to operate independently of server performance. Additionally, the interrupt-based approach allows dynamic responses to server results.
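As a minimal illustration of the polling-versus-event-driven distinction, the sketch below registers a completion callback for an offloaded task and lets the main loop continue local work until the result arrives. The fake server, tick counts, and callback signature are simplified assumptions, not the scheduling framework under development; in a real system the dispatch would be driven by an interrupt from the communication link rather than the check inside the loop.

```c
#include <stdbool.h>
#include <stdio.h>

/* Event-driven completion handling for an offloaded task: instead of blocking
 * in a polling loop, the edge registers a callback that runs when the server
 * result becomes available, and keeps doing local work in the meantime. */

typedef void (*completion_cb)(int result);

static completion_cb pending_cb = NULL;   /* callback for the in-flight offload */

static void offload_to_server(completion_cb cb)
{
    pending_cb = cb;                      /* request sent; do not wait */
}

/* Stand-in for the server: pretends the result is ready after a few ticks. */
static bool server_result_ready(int tick, int *result)
{
    if (tick == 3) { *result = 42; return true; }
    return false;
}

static void on_training_done(int result)
{
    printf("event: server finished offloaded task, result = %d\n", result);
}

int main(void)
{
    offload_to_server(on_training_done);

    for (int tick = 0; tick < 6; tick++) {
        printf("tick %d: edge keeps running local inference\n", tick);

        int result;
        /* Here a check drives the dispatch; on a device it would be an
         * interrupt from the communication peripheral. */
        if (pending_cb && server_result_ready(tick, &result)) {
            completion_cb cb = pending_cb;
            pending_cb = NULL;
            cb(result);
        }
    }
    return 0;
}
```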
Publications

Journal Publications (KCI 4, SCI 2)
Conference Publications (Intl. 8)
Patents (1 Patent Pending)

Participation in International Conferences
Last Updated: 2024.12.13