Affiliations:
1. NTT Research, Inc.
2. City University of New York
3. Korea University
Abstract
The rapid rise of machine learning drives demand for extensive
matrix-vector multiplication operations, thereby challenging the
capacities of traditional von Neumann computing systems. Researchers
are exploring alternatives, such as in-memory computing architectures,
to find more energy-efficient solutions. In particular, there is renewed
interest in optical computing systems, which could potentially handle
matrix-vector multiplication in a more energy-efficient way. Despite
promising initial results, developing high-throughput optical
computing systems to rival electronic hardware remains a challenge.
Here, we propose and demonstrate a hyperspectral in-memory computing
architecture that simultaneously exploits space and frequency
multiplexing, implemented with optical frequency combs and programmable optical
memories. Our carefully designed three-dimensional opto-electronic
computing system offers remarkable parallelism, programmability, and
scalability, overcoming typical limitations of optical computing. We
have experimentally demonstrated highly parallel, single-shot
multiply-accumulate operations with precision exceeding 4 bits
in both matrix-vector and matrix-matrix multiplications, suggesting
the system’s potential for a wide variety of deep learning and
optimization tasks. Our approach presents a realistic pathway to scaling
beyond peta operations per second, a major stride towards
high-throughput, energy-efficient optical computing.
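
For reference, the multiply-accumulate (MAC) operations mentioned above
follow from the standard decomposition of a matrix-vector product. The
sketch below is only an illustrative formulation; the assignment of the
column index to comb lines and of the row index to spatial channels is an
assumption made for illustration, not the architecture's exact mapping.

% Illustrative only: how a matrix-vector product decomposes into MACs.
\begin{align}
  y_i = \sum_{j=1}^{N} M_{ij}\, x_j , \qquad i = 1, \dots, K .
\end{align}
% Each product M_{ij} x_j added to the running sum is one MAC, so a single
% K x N matrix-vector product requires K N MACs. Under the illustrative
% mapping assumed here, frequency multiplexing addresses the index j
% (comb lines) and spatial multiplexing the index i (spatial channels),
% so all K N MACs can be accumulated in a single shot.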