2023 Theses Doctoral
Algorithm-Hardware Co-design for Ultra-Low-Power Machine Learning and Neuromorphic Computing
The rapid proliferation of Internet of Things (IoT) devices and the growing demand for intelligent systems have driven the development of low-power, compact, and efficient machine learning solutions. Deep neural networks (DNNs) have become the state-of-the-art algorithms in applications such as face recognition, object detection, and speech recognition due to their exceptional accuracy. For edge devices, it is preferable to execute these algorithms locally rather than on servers, to mitigate data-transfer latency and address privacy concerns. Reducing power consumption and enhancing energy efficiency become crucial, as mobile and wearable devices typically have limited battery capacity. Low power consumption extends battery life, reduces recharging cycles, and thus decreases maintenance costs.
Ultra-low-power AI hardware has garnered increasing attention due to its potential to enable numerous compelling applications. It can serve as an always-on wake-up module, for example in keyword spotting and visual wake-up, to facilitate hierarchical data processing. Additionally, it can be employed in security and surveillance applications on battery-powered cameras and miniaturized drones. Various techniques to reduce power consumption have been proposed at individual design levels, encompassing algorithms, architecture, and circuits. Application-oriented ultra-low-power AI hardware design incorporating full-stack optimization can exploit features unique to specific tasks and further minimize power consumption.
This thesis presents my research on algorithm-hardware co-design of ultra-low-power hardware for AI applications. Chapters 2 to 5 present my past works. The first work implements a spiking neural network classifier that leverages a fully event-driven architecture to reduce power consumption when input activity is low. The second work presents an end-to-end keyword spotting system featuring divisive energy normalization for robustness to both internal and external noise; a minimal sketch of this kind of normalization is given below. The third work shows a digital in-memory-computing macro that utilizes approximate arithmetic hardware for better area and energy efficiency. The last work demonstrates an automatic speech recognition chip featuring a bio-inspired neuron model, digital in-memory-computing hardware with time-shared arithmetic units, and a fully pipelined architecture for low power consumption and real-time processing.
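To give a flavor of the divisive energy normalization mentioned for the second work, the sketch below shows a generic frame-wise normalizer, not the thesis implementation. It assumes an exponential-moving-average energy tracker with a hypothetical `smoothing` coefficient and a small gain-floor constant `sigma`; both parameters are chosen here purely for illustration.

```python
import numpy as np

def divisive_energy_normalization(frames, smoothing=0.9, sigma=1e-3):
    """Normalize each feature frame by a running estimate of signal energy.

    frames: 2-D array (num_frames, num_channels), e.g. filterbank energies.
    smoothing: coefficient of the first-order energy tracker (assumed).
    sigma: small constant that sets a gain floor and avoids division by zero.
    """
    energy_est = np.mean(frames[0] ** 2)          # initialize the energy tracker
    normalized = np.empty_like(frames, dtype=float)
    for t, frame in enumerate(frames):
        # Track the average signal energy with an exponential moving average.
        energy_est = smoothing * energy_est + (1.0 - smoothing) * np.mean(frame ** 2)
        # Divide by the tracked energy (plus sigma) so the output level stays
        # roughly constant as the input loudness varies.
        normalized[t] = frame / (np.sqrt(energy_est) + sigma)
    return normalized

# Example: a quiet and a loud version of the same signal map to similar
# normalized feature levels.
rng = np.random.default_rng(0)
quiet = rng.normal(scale=0.01, size=(100, 40))
loud = 100.0 * quiet
print(divisive_energy_normalization(quiet).std(),
      divisive_energy_normalization(loud).std())
```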
Files
This item is currently under embargo. It will be available starting 2025-05-24.
More About This Work
- Academic Units: Electrical Engineering
- Thesis Advisors: Seok, Mingoo
- Degree: Ph.D., Columbia University
- Published Here: June 7, 2023