EECS Rising Stars 2023




Bokyung Kim

Processing-in-memory Accelerators Toward Energy-Efficient Real-World Machine Learning



Research Abstract:

While machine learning algorithms have advanced at a sensational pace, hardware development lags far behind because of the separation of storage and computation. Processing-in-memory (PIM) accelerators have emerged to overcome this stagnation by infusing processing capability into memory. To achieve efficient and reliable hardware, PIM designers must consider factors across the stack, from low-level devices and circuits to high-level algorithms and applications. I have researched energy-efficient PIM-based circuit, architecture, and system designs with diverse memory types to accelerate machine learning algorithms. PIM designs can be tailored to applications with demanding resource requirements and restricted operating environments. For diagnosis automation, I fabricated an SRAM-PIM chip in TSMC 65 nm technology that detects and predicts disease with high speed and low power as a practical device for patients. As we face the end of Moore's Law, emerging devices like resistive RAM (RRAM) are actively studied. Recognizing the potential of 3D stacking technology, I have designed 3D RRAM PIM accelerators for deep learning models. Beyond utilizing an existing 3D RRAM architecture, I contributed a novel 3D RRAM architecture that implements input-stationary dataflow in PIM accelerators for the first time; this work was published at HPCA 2023, a top conference in computer architecture. RRAM-based PIM is thus a promising solution for efficiency in the post-Moore era. However, beyond efficiency, privacy has become an imperative issue in a machine-learning-ubiquitous world, as machine-learning models are not safe from attacks. Correspondingly, I am designing a DRAM-PIM architecture for privacy. As such, my research on PIM technology advances performance and efficiency in machine learning acceleration for both the present and the future.

Bio:

Bokyung Kim is a Ph.D. candidate in Electrical and Computer Engineering at Duke University under the supervision of Dr. Hai (Helen) Li and Dr. Yiran Chen. She graduated with honors from Ewha Womans University in South Korea, where she received her M.S. and B.S. degrees in Electrical and Computer Engineering and Electronic and Electrical Engineering, respectively. Her research focuses on efficient processing-in-memory accelerators for machine learning models. She has broad experience in hardware design spanning system levels, from device modeling to mixed-signal VLSI design and chip fabrication. She won an NSF iREDEFINE professional development award from the ECE Department Heads Association and is a selected fellow of EECS Rising Stars. She will earn her Ph.D. degree in Spring 2024.