Analog In-Memory Hardware

Deep learning (DL) models have recently outperformed other AI algorithms and models across a wide range of tasks, and their accuracy tends to improve with depth, which has driven an exponential growth in hardware requirements. Emerging memory devices have enabled the in-memory computing paradigm, which performs computation where the data is stored and delivers order-of-magnitude speedups over the traditional von Neumann computing paradigm.
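
As a rough illustration of why this mapping is fast, the multiply-accumulate at the core of every DL layer can be computed in a single analog step inside a resistive crossbar: weights become device conductances, inputs become voltages, and Ohm's and Kirchhoff's laws do the arithmetic. The sketch below is a simplified NumPy model of such a crossbar read; the function name, conductance range, and noise level are illustrative assumptions, not measurements from this project.

```python
import numpy as np

def crossbar_mvm(G, v, read_noise_std=0.02, rng=None):
    """Idealized analog crossbar matrix-vector multiply.

    Weights are stored as device conductances G (siemens, shape
    n_out x n_in) and inputs are applied as voltages v (volts).
    Ohm's law gives each device current and Kirchhoff's current law
    sums the currents along a column, so the multiply-accumulate
    happens inside the memory array itself.
    """
    rng = np.random.default_rng() if rng is None else rng
    currents = G @ v                     # I_j = sum_i G[j, i] * v[i]
    noise = rng.normal(0.0, read_noise_std, size=currents.shape)
    return currents * (1.0 + noise)      # multiplicative analog read noise

# Example: a 3x4 crossbar (hypothetical device parameters)
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(3, 4))   # conductances in siemens
v = np.array([0.1, 0.2, 0.0, 0.3])          # input voltages
print(crossbar_mvm(G, v, rng=rng))
```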

The goal of this project is to enable efficient offline and online training on AI hardware accelerators by proposing new hardware architectures and software methods that account for hardware nonidealities, without requiring the physical hardware or SPICE simulators, in order to achieve the highest possible performance.
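
As a minimal sketch of the kind of software method involved, the PyTorch snippet below injects multiplicative weight noise during training so that the learned weights tolerate conductance variability when later programmed onto analog devices. The `NoisyLinear` class, the noise model, and all hyperparameters are illustrative assumptions, not the project's actual technique.

```python
import torch
import torch.nn as nn

class NoisyLinear(nn.Linear):
    """Linear layer that perturbs its weights with multiplicative noise
    during the forward pass, as a software stand-in for analog
    conductance variability. Training against this noise is one common
    hardware-aware approach that needs no SPICE simulation."""
    def __init__(self, in_features, out_features, noise_std=0.05):
        super().__init__(in_features, out_features)
        self.noise_std = noise_std

    def forward(self, x):
        if self.training and self.noise_std > 0:
            noise = torch.randn_like(self.weight) * self.noise_std
            w = self.weight * (1.0 + noise)   # perturbed "programmed" weights
        else:
            w = self.weight                   # clean weights at evaluation
        return nn.functional.linear(x, w, self.bias)

# Minimal training step on random data (shapes are arbitrary)
model = nn.Sequential(NoisyLinear(16, 32), nn.ReLU(), NoisyLinear(32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 16), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y)
opt.zero_grad()
loss.backward()
opt.step()
```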


Related Publications:

Patents:

  • J. Lee, S. Lee, J. Jung, M. E. Fouda, F. Kurdahi, and A. Eltawil, "Stuck-at-Fault Mitigation Method for ReRAM-based Deep Learning Accelerators", U.S. Patent Application No. 17/581,327.

Collaborators:

  • Prof. Fadi Kurdahi (UCI)
  • Prof. Ahmed Eltawil (KAUST)
  • Prof. Jongeun Lee (UNIST)
  • Prof. Rouwaida Kanj (AUB)

Students:

  • Mariam Rakka (UCI)
  • Sugil Lee (UNIST)
  • Jinane Bazzi (KAUST)
  • Jana Swidan (AUB)