Neuroengineers conceived artificial neural networks: massively parallel computing machines inspired by biological nervous systems. Such systems offer a new paradigm, in which learning from examples or learning from interaction replaces programming. They are composed of interconnected simple processing elements (i.e., artificial neurons, or simply neurons).
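As a minimal illustration of such a processing element (not taken from the source, which does not prescribe a particular neuron model), a single artificial neuron can be sketched as a weighted sum of its inputs passed through a sigmoid activation function:

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs plus a bias,
    passed through a sigmoid activation function."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Example: a neuron with two inputs and illustrative weights
out = neuron([1.0, 0.5], [0.4, -0.2], bias=0.1)  # ≈ 0.599
```

A network is then obtained by interconnecting many such units, with the output of one neuron feeding the inputs of others.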
A predominant approach in the field of artificial neural networks is to train the system on a database by applying a so-called learning algorithm that modifies the interconnection weights between neurons, given a predetermined network topology (i.e., the number of artificial neurons and the way they are interconnected). When the training process ends, the system remains fixed and can be considered a learned system. However, in applications where the entire training database is not available in advance, or may change over time, the learning process must persist, giving rise to learning systems. The central problem for such systems is how to preserve what has been previously learned while continuing to incorporate new knowledge. Learning systems address this problem through a local adaptation process and by allowing the network's topology to be modified dynamically.

Most artificial neural network applications today run as conventional software simulations on sequential machines, with no dedicated hardware. Reconfigurable hardware can combine the best features of dedicated hardware circuits and software implementations: the high density and high performance of the former, and the rapid prototyping and flexibility of the latter.
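As an illustration of a learning algorithm adapting interconnection weights from examples (a sketch only; the source does not commit to a specific rule), the classic perceptron update can be applied online, one example at a time, so that adaptation is local and can continue as new data arrives:

```python
def train_step(weights, bias, x, target, lr=0.1):
    """One online perceptron update: predict, compare with the target,
    and locally adjust the weights in proportion to the error."""
    y = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
    error = target - y
    new_w = [w + lr * error * xi for w, xi in zip(weights, x)]
    new_b = bias + lr * error
    return new_w, new_b

# Stream of (input, label) examples for the logical AND function
stream = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = [0.0, 0.0], 0.0
for _ in range(20):              # several passes over the example stream
    for x, t in stream:
        w, b = train_step(w, b, x, t)
```

Because each update depends only on a single example, the same loop can keep running as the database grows or changes, which is the defining property of a learning system as opposed to a learned one.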