Analog In-memory Hardware
Deep learning (DL) models have recently outperformed other AI algorithms and models across a wide range of tasks. Deeper models tend to be more accurate, which has led to an exponential increase in hardware requirements. Emerging devices have enabled the in-memory computing paradigm, which offers orders-of-magnitude speedups over the traditional von Neumann computing paradigm.
The goal of this project is to enable efficient offline and online training of AI hardware accelerators by proposing new hardware architectures and software methods that account for hardware nonidealities, without requiring the fabricated hardware or SPICE simulators, in order to achieve the highest possible performance.
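To give a flavor of what nonideality-aware offline training means, the minimal sketch below (illustrative only, not the exact method from the publications listed next) injects multiplicative device-variation noise into the weights during each forward pass, so that gradient descent converges to weights that remain accurate when deployed on a noisy analog crossbar. The lognormal noise model and all names in the snippet are assumptions for illustration.

```python
# Minimal sketch of nonideality-aware offline training (illustrative only):
# device variability is injected into the weights in the forward pass, and
# the resulting gradient updates the ideal weights (straight-through style).
import numpy as np

rng = np.random.default_rng(0)

def noisy_weights(w, sigma=0.1):
    """Assumed first-order device model: multiplicative lognormal
    conductance variation applied independently to each weight."""
    return w * rng.lognormal(mean=0.0, sigma=sigma, size=w.shape)

# Tiny regression task standing in for a network layer on a crossbar.
w_true = np.array([1.5, -2.0, 0.7])
X = rng.normal(size=(256, 3))
y = X @ w_true

w = np.zeros(3)
lr = 0.05
for step in range(500):
    w_dev = noisy_weights(w)      # weights as the device would realize them
    err = X @ w_dev - y           # forward pass through the "crossbar"
    grad = X.T @ err / len(y)     # gradient of the mean-squared error
    w -= lr * grad                # update the ideal (software) weights

print("trained:", np.round(w, 3))  # close to w_true despite injected noise
```

Real accelerators exhibit further nonidealities, such as IR drop, stuck-at faults, and quantization, which is exactly what the publications below target.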
Related Publications:
Journals:
- J. Bazzi, J. Sweidan, M. E. Fouda, R. Kanj, and A. Eltawil, "DNA Pattern Matching Acceleration with Analog Resistive CAM", arXiv preprint.
- M. Rakka, M. E. Fouda, R. Kanj, and F. Kurdahi, "DT2CAM: A Decision Tree to Content Addressable Memory Framework", arXiv preprint.
- S. Lee, M. E. Fouda, J. Lee, A. Eltawil, and F. Kurdahi, "Offline Training-based Mitigation of IR Drop for ReRAM-based Deep Neural Network Accelerators", IEEE TCAD, 2022.
- K. Smagulova*, M. E. Fouda*, F. Kurdahi, K. Salama, and A. Eltawil, "Resistive Neural Hardware Accelerators", arXiv preprint arXiv:2109.03934 (*equal contribution).
- M. E. Fouda, S. Lee, J. Lee, A. Eltawil, and F. Kurdahi, "IR-QNN Framework: An IR Drop-Aware Offline Training of Quantized Crossbar Arrays", IEEE Access, 2020.
- M. E. Fouda, S. Lee, J. Lee, A. Eltawil, and F. Kurdahi, "Mask Technique for Fast and Efficient Training of Binary Resistive Crossbar Arrays", IEEE TNANO, 2019.
- M. E. Fouda, E. Neftci, A. Eltawil, and F. Kurdahi, "Independent Component Analysis Using RRAMs", IEEE TNANO, 2018.
- M. E. Fouda, A. Eltawil, and F. Kurdahi, "Modeling and Analysis of Passive Switching Crossbar Arrays", IEEE Transactions on Circuits and Systems I, 2018.
Conferences:
- S. Lee, M. E. Fouda, J. Lee, A. Eltawil, and F. Kurdahi, "Fast and Low-Cost Mitigation of ReRAM Variability for Deep Learning Applications", ICCD, 2021.
- G. Jung, M. E. Fouda, S. Lee, J. Lee, A. Eltawil, and F. Kurdahi, "Cost- and Dataset-free Stuck-at Fault Mitigation for ReRAM-based Deep Learning Accelerators", DATE, 2021.
- S. Lee, G. Jung, M. E. Fouda, J. Lee, A. Eltawil, and F. Kurdahi, "Learning to Predict IR Drop with Effective Training for ReRAM-based Neural Network Hardware", DAC, 2020.
- M. E. Fouda, A. Eltawil, and F. Kurdahi, "Activated Current Sensing Circuit for Memristive Neuromorphic Networks", IEEE NEWCAS, 2019.
- M. E. Fouda, J. Lee, A. Eltawil, and F. Kurdahi, "Overcoming Crossbar Nonidealities in Binary Neural Networks Through Learning", IEEE NANOARCH, 2018.
Patents:
- J. Lee, S. Lee, G. Jung, M. E. Fouda, F. Kurdahi, and A. Eltawil, "Stuck-at-Fault Mitigation Method for ReRAM-based Deep Learning Accelerators", U.S. Patent Application No. 17/581,327.
Collaborators:
- Prof. Fadi Kurdahi (UCI)
- Prof. Ahmed Eltawil (KAUST)
- Prof. Jongeun Lee (UNIST)
- Prof. Rouwaida Kanj (AUB)
Students:
- Mariam Rakka (UCI)
- Sugil Lee (UNIST)
- Jinane Bazzi (KAUST)
- Jana Sweidan (AUB)