Advances in Artificial Neural Systems
Volume 2014 (2014), Article ID 602325, 10 pages
Research Article

Architecture Analysis of an FPGA-Based Hopfield Neural Network

University of São Paulo, 05508-010 São Paulo, SP, Brazil

Received 30 June 2014; Revised 11 November 2014; Accepted 12 November 2014; Published 9 December 2014

Academic Editor: Ping Feng Pai

Copyright © 2014 Miguel Angelo de Abreu de Sousa et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Interconnections between electronic circuits and neural computation have been an actively researched topic in the machine learning field, driven by several practical requirements: decreasing training and operation times in high-performance applications, and reducing cost, size, and energy consumption in autonomous or embedded developments. Field-programmable gate array (FPGA) hardware exhibits some inherent features typically associated with neural networks, such as parallel processing, modular execution, and dynamic adaptation, and works on different types of FPGA-based neural networks have been presented in recent years. This paper analyzes the architectural characteristics of a Hopfield Neural Network implemented in FPGA, including maximum operating frequency and chip-area occupancy as functions of network capacity. The FPGA implementation methodology, whose architecture for the Hopfield neural model employs no multipliers, is also presented in detail.
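The multiplier-free approach mentioned above exploits a property of the Hopfield model with bipolar (+1/−1) states: each product w_ij · s_j reduces to adding or subtracting w_ij, so neuron activations can be computed by conditional accumulation alone. The sketch below illustrates this idea in software (it is an illustrative reconstruction, not the paper's FPGA architecture); `train_hopfield` and `recall` are hypothetical helper names, and Hebbian training with synchronous updates is assumed.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weight matrix for bipolar (+1/-1) patterns, zero diagonal."""
    n = patterns.shape[1]
    w = np.zeros((n, n), dtype=int)
    for p in patterns:
        w += np.outer(p, p)     # accumulate outer products of stored patterns
    np.fill_diagonal(w, 0)      # no self-connections
    return w

def recall(w, state, max_iters=20):
    """Synchronous recall using multiplier-free accumulation: since states
    are +1 or -1, w_ij * s_j is just +w_ij or -w_ij, mirroring the
    add/subtract selection a multiplier-less hardware datapath would use."""
    s = state.copy()
    for _ in range(max_iters):
        # select +w column or -w column per neuron state, then accumulate
        acts = np.where(s == 1, w, -w).sum(axis=1)
        new_s = np.where(acts >= 0, 1, -1)
        if np.array_equal(new_s, s):  # converged to a fixed point
            break
        s = new_s
    return s
```

For example, storing one bipolar pattern and presenting a one-bit-corrupted copy to `recall` returns the stored pattern, since the conditional accumulation reproduces the standard sign-of-weighted-sum update without any explicit multiplication.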