Presentation + Paper
5 March 2021
Progress on photonic tensor processors based on time multiplexing and photoelectric multiplication
Ryan Hamerly, Alexander Sludds, Liane Bernstein, Vivienne Sze, Joel Emer, Marin Soljacic, Dirk Englund
Abstract
Optical approaches to machine learning rely heavily on programmable linear photonic circuits. Since performance and energy efficiency scale with size, a major challenge is overcoming the roadblocks to scaling up the photonic technology. Recently, we proposed an optical neural network architecture based on coherent detection. This architecture has several scaling advantages over competing approaches, including linear (rather than quadratic) chip-area scaling and constant circuit depth. We review the fundamental and technological limits on energy consumption in this architecture, which shed light on the quantum limits to analog computing; these are distinct from the thermodynamic (e.g. Landauer) limits to digital computing. Lastly, we highlight a recent "digital" implementation of our architecture, which illustrates the scaling challenges associated with controlling aberrations in free-space optical propagation.
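As a rough illustration of the photoelectric-multiplication idea, the NumPy sketch below models each multiply as balanced homodyne detection of two real field amplitudes and accumulates the resulting photocurrents over time-multiplexed steps to form a matrix-vector product. This is a hedged toy model, not the authors' implementation; the function names, normalization factor, and loop structure are assumptions made for illustration.

```python
import numpy as np

def homodyne_product(e_x, e_w):
    """Differential photocurrent of a balanced homodyne detector.

    The two arms see intensities |e_x + e_w|^2 / 2 and |e_x - e_w|^2 / 2;
    their difference equals 2 * e_x * e_w for real-valued field amplitudes.
    """
    plus = np.abs(e_x + e_w) ** 2 / 2
    minus = np.abs(e_x - e_w) ** 2 / 2
    return plus - minus

def matvec_time_multiplexed(W, x):
    """Compute W @ x by streaming the input vector in time.

    At time step j, activation x[j] is fanned out to all m receivers,
    receiver i interferes it with weight W[i, j], and each receiver
    accumulates (integrates) its photocurrent over the time steps.
    """
    m, n = W.shape
    y = np.zeros(m)
    for j in range(n):
        e_x = np.full(m, x[j])        # broadcast activation pulse
        e_w = W[:, j]                 # weights arriving at the same step
        y += homodyne_product(e_x, e_w) / 2.0  # undo the factor of 2
    return y

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 6))
x = rng.normal(size=6)
print(np.allclose(matvec_time_multiplexed(W, x), W @ x))  # expected: True
```

The loop over j stands in for time multiplexing; in hardware those steps would correspond to successive optical pulses rather than software iterations.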
Conference Presentation
© (2021) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Ryan Hamerly, Alexander Sludds, Liane Bernstein, Vivienne Sze, Joel Emer, Marin Soljacic, and Dirk Englund "Progress on photonic tensor processors based on time multiplexing and photoelectric multiplication", Proc. SPIE 11680, Physics and Simulation of Optoelectronic Devices XXIX, 116800E (5 March 2021); https://doi.org/10.1117/12.2576990
KEYWORDS
Multiplexing
Free space optics
Neural networks
Logic
Optical components
Receivers
Silicon photonics