In the last decade, AI has achieved major breakthroughs and transformed industries such as robotics, self-driving cars, and healthcare. Our goal is to explore new paradigms for AI and intelligent computing from the hardware perspective. Specifically, we explore the hardware acceleration of AI algorithms, such as neural networks, using digital logic, FPGAs, resistive RAM (RRAM), and silicon photonic components.
The title of the paper is “Basis Sharing: Cross-Layer Parameter Sharing for Large Language Model Compression”.
The title is “CorrectBench: Automatic Testbench Generation with Functional Self-Correction using LLMs for HDL Design”.
The titles are “AutoBench: Automatic Testbench Generation and Evaluation Using LLMs for HDL Design” and “Automated C/C++ Program Repair for High-Level Synthesis via Large Language Models”.
The title of this paper is “BasisN: Reprogramming-Free RRAM-Based In-Memory-Computing by Basis Combination for Deep Neural Networks”.