Hardware Neural Networks: From Inflated Expectations to Plateau of Productivity
Time and again, a longstanding fascination with the brain has attracted computer architects to hardware neural networks. However, the fascinating nature of the topic is also its greatest pitfall: it sometimes drives researchers to forgo the pragmatic, application-driven nature of computer architecture, and the results can fall disappointingly short of the lofty goal of emulating the brain in hardware. As a result, most computer architects have stayed away from the topic.
In this presentation, we argue that it may be more sustainable to focus on brain functionality than on brain structure. Architectures designed to efficiently implement some important functionality are more likely to find support than architectures designed to emulate a biological structure. And in the past decade, machine-learning researchers, who largely share this longstanding fascination with the brain, have made significant progress towards emulating some elementary, but already important, brain functionalities (e.g., image and speech recognition) using so-called deep neural networks. These successes come at a time when transistor technology constraints are nudging architectures towards custom accelerators. This remarkable conjunction of algorithm, application, and technology evolutions can pave the way for the development of competitive hardware neural network accelerators, and help cement the adoption of the topic within the computer architecture community.
Tue 16 Jun, 11:20 - 12:30