Advances in Artificial Neural Systems

Volume 2014, Article ID 602325, 10 pages

http://dx.doi.org/10.1155/2014/602325

## Architecture Analysis of an FPGA-Based Hopfield Neural Network

University of São Paulo, 05508-010 São Paulo, SP, Brazil

Received 30 June 2014; Revised 11 November 2014; Accepted 12 November 2014; Published 9 December 2014

Academic Editor: Ping Feng Pai

Copyright © 2014 Miguel Angelo de Abreu de Sousa et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Interconnections between electronic circuits and neural computation have been a strongly researched topic in the machine learning field, addressing several practical requirements, including decreasing training and operation times in high-performance applications and reducing cost, size, and energy consumption in autonomous or embedded developments. Field-programmable gate array (FPGA) hardware exhibits some features typically associated with neural networks, such as parallel processing, modular execution, and dynamic adaptation, and several works on FPGA-based neural networks have been presented in recent years. This paper analyzes architectural characteristics of a Hopfield Neural Network implemented on an FPGA, such as maximum operating frequency and chip-area occupancy as functions of the network capacity. The FPGA implementation methodology, which employs no multipliers in the architecture developed for the Hopfield neural model, is also presented in detail.

#### 1. Introduction

For nearly 50 years, artificial neural networks (ANNs) have been applied to a wide variety of problems in engineering and scientific fields, such as function approximation, systems control, pattern recognition, and pattern retrieval [1, 2]. Most of those applications were designed using software simulations of the networks, but some recent studies have extended the computational simulations by implementing ANNs directly in hardware [3].

Although some works report network implementations in analog circuits [4] and in optical devices [5], most research on ANN hardware implementation has been developed using digital technologies. General-purpose processors and application-specific integrated circuits (ASICs) are the two technologies usually employed in such developments. While general-purpose processors are often chosen for economic reasons, ASIC implementations provide an adequate solution for executing parallel neural network architectures [6].

In the last decade, however, FPGA-based neurocomputers have become a topic of strong interest due to the larger capabilities and lower costs of reprogrammable logic devices [7]. Other relevant reasons to choose FPGAs, reported in the literature, include the high performance obtained with parallel processing in hardware compared to sequential processing in software implementations [8], the reduction of power consumption in robotics or general embedded applications [9], and the retention of the flexibility of software simulation during prototyping. In this last respect, FPGAs present advantages over ASIC neurocomputers because of lower hardware cost and a shorter circuit development period.

Among the many studies of different network models implemented on electronic circuits, works on hardware implementations specifically addressing Hopfield Neural Networks (HNNs) have appeared in the literature only recently. Those works usually aim at the resolution of a target problem, such as image pattern recognition [10], identification of DNA motifs [11], or video compression [12]; only a few studies concentrate on the peculiar features of FPGA implementations of the HNN [13], which, unlike other neural architectures, is fully connected.

Driven by such motivation, the present work analyzes the occupied chip area as a function of the number of HNN nodes and of the precision required to represent the network's internal parameters. The paper is organized as follows. The next section briefly reviews the concepts of the network proposed by Hopfield, Section 3 describes details of the implementation strategy, and the last two sections present the results, conclusions, and discussions raised by this work.

#### 2. Hopfield Neural Network

The HNN is a single-layer network with output feedback and dynamic behavior, in which the weight matrix $\mathbf{W}$, connecting all its neurons, can be calculated by

$$\mathbf{W} = \frac{1}{N}\sum_{\mu=1}^{p} \boldsymbol{\xi}^{\mu} \left(\boldsymbol{\xi}^{\mu}\right)^{T} - \frac{p}{N}\,\mathbf{I},$$

with each one of the $p$ stored patterns represented by a vector $\boldsymbol{\xi}^{\mu}$ of $N$ elements and $\mathbf{I}$ denoting the identity matrix.

It has been shown [14] that, for a weight matrix with symmetric connections, that is, $w_{ij} = w_{ji}$, and asynchronous update of the neural activations $v_j$, the motion equation for the neuron's activation always leads to convergence to stable states. In the discrete-time version of the network, the neural activation is updated according to

$$v_j(t+1) = \operatorname{sgn}\!\left(u_j(t)\right), \qquad u_j(t) = \sum_{i=1}^{N} w_{ji}\, v_i(t),$$

where $\operatorname{sgn}$ is the sign function of the neural potential $u_j$ and $v_i \in \{-1, +1\}$.
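The storage rule and the asynchronous update above can be sketched as a software reference model in NumPy. This is a minimal illustration, not the paper's FPGA implementation; the two orthogonal patterns and the network size $N = 8$ are chosen here only so that retrieval is easy to verify.

```python
import numpy as np

# Two orthogonal bipolar patterns of length N = 8 (illustrative values,
# not taken from the paper).
xi = np.array([[ 1,  1,  1,  1,  1,  1,  1,  1],
               [ 1, -1,  1, -1,  1, -1,  1, -1]])
p, N = xi.shape

# Hebbian storage rule: W = (1/N) * sum_mu xi^mu (xi^mu)^T - (p/N) * I
W = (xi.T @ xi) / N - (p / N) * np.eye(N)

def recall(v, sweeps=5):
    """Asynchronous update: v_j <- sgn(u_j), with u_j = sum_i w_ji v_i."""
    v = v.copy()
    for _ in range(sweeps):
        for j in range(N):            # one neuron at a time
            u = W[j] @ v              # neural potential u_j
            v[j] = 1 if u >= 0 else -1
    return v

# Present a noisy version of the first pattern (one bit flipped);
# the network settles back onto the stored pattern.
noisy = xi[0].copy()
noisy[0] = -1
print(recall(noisy))                  # -> [1 1 1 1 1 1 1 1], i.e. xi[0]
```

Because the update is asynchronous, each corrected bit immediately influences the remaining neurons within the same sweep, which is what guarantees convergence for symmetric weights.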

The model, proposed by Hopfield in 1982 [15], can also be interpreted as a content-addressable memory (CAM) due to its ability to restore previously memorized patterns from initial inputs corresponding to noisy and/or partial versions of such stored patterns. The CAM storage capacity increases with the number of neurons $N$ and, with allowance for a small fraction of errors, references [16, 17] demonstrated the following relation:

$$p_{\max} \approx 0.138\,N.$$

##### 2.1. Application-Dependent Parameters

The number of nodes $N$ of the Hopfield architecture is defined by the length of the binary strings stored by the associative memory, which is specific to each application. For example, when using the associative architecture for the storage of binary images with $P$ pixels, the number of neural nodes has to be $N = P$ [14].

With respect to $p$, the number of patterns stored in the associative Hopfield architecture, each of length $N$ bits, it is also defined by the requirements of the application: $p$ is the number of distinct patterns relevant to the associative process. In the case of storage of images of decimal digits, for example, $p$ will be 10. Notice, though, that the theoretical study of Hopfield architectures indicates the need for a reasonable relationship between the number of stored patterns $p$ and the size of the architecture $N$. According to [16], when $p$ increases beyond approximately $0.138\,N$, the performance of the associative memory degrades significantly, and the storage capacity of the architecture is said to be exceeded.

#### 3. Implementation Architecture

The FPGA-based HNN digital system developed is depicted in Figure 1 and consists of a control unit (CU) and a data flow (DF), which implements the processes shown in the block diagram of Figure 2.