Enhanced Learning for Evolutive Neural Architectures


ELENA-NERVES-2 - 6891

Work Area: Neural Networks and Neurosciences

Keywords: neural networks, incremental learning, classification, benchmarks, VLSI


Start Date: 1 July 92 / Duration: 36 months / Status: running



Abstract: ELENA-NERVES 2 addresses learning in neural networks for classification tasks, both by adding or removing neurons in the networks and by synaptic adaptation. The project includes theoretical work on these algorithms, simulations and benchmarks, especially on realistic industrial data, and VLSI hardware implementations.


Aims

Neural network applications are mainly devoted to classification. Unfortunately, most of the usual neural algorithms are ill-suited to practical applications: for instance, the architecture cannot easily grow, and it is difficult to learn new classes incrementally. The project's aim is to study and develop neural algorithms with evolutive architectures, and finally to propose a consistent set of efficient preprocessing and classification algorithms.

Approach and Methods

Theoretical work will be carried out in order to predict the efficiency of algorithms and to choose optimal parameters. Various strategies for changing the neural architecture will be evaluated, and problems of memory assignment will be addressed. Comparisons with classical methods are included in the theoretical approach.
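
The synopsis does not specify the growth strategies under evaluation. Purely as an illustrative sketch (all names hypothetical), the following Python fragment combines the two mechanisms named in the abstract, synaptic adaptation of existing neurons and structural growth by allocating new ones, in a minimal prototype-based classifier:

    import numpy as np

    class GrowingPrototypeClassifier:
        """Illustrative evolutive architecture: a neuron (prototype) is
        allocated when no existing neuron of the correct class is close
        enough, and adapted (moved) otherwise."""

        def __init__(self, radius=1.0, lr=0.05):
            self.radius = radius              # allocation threshold
            self.lr = lr                      # adaptation rate
            self.protos, self.labels = [], []

        def _nearest(self, x):
            d = np.linalg.norm(np.asarray(self.protos) - x, axis=1)
            i = int(np.argmin(d))
            return i, d[i]

        def partial_fit(self, x, y):
            if not self.protos:
                self.protos.append(x.astype(float))
                self.labels.append(y)
                return
            i, d = self._nearest(x)
            if self.labels[i] == y and d <= self.radius:
                # synaptic adaptation: move the winning neuron towards x
                self.protos[i] += self.lr * (x - self.protos[i])
            else:
                # structural adaptation: allocate a new neuron
                self.protos.append(x.astype(float))
                self.labels.append(y)

        def predict(self, x):
            return self.labels[self._nearest(x)[0]]

In such a scheme the allocation radius governs the trade-off between network size and resolution; predicting how such parameters should be chosen is exactly the kind of question the theoretical work addresses.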

Experimental work will be performed with the most efficient algorithms known. These tests and benchmarks will be carried out on industrial data in a standard graphical environment.

The design of hardware implementations of this class of algorithms will be studied in order to achieve high computation speed. The robustness of the algorithms, and the influence of limited precision, noise, drifts, etc., will be studied in an initial phase. Performance must also be evaluated on various architectures, from conventional computers to dedicated VLSI. It will then be necessary to define basic building blocks and to design complete architectures. Finally, a few VLSI prototypes will be designed, using both digital and analogue technologies.
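
The evaluation methodology is not detailed in this synopsis. As a minimal sketch of how the influence of limited precision might be simulated in software before committing to silicon (the quantisation scheme and synthetic data below are assumptions, not the project's actual procedure):

    import numpy as np

    def quantize(w, bits):
        """Uniform symmetric quantisation of weights to a given bit
        width, emulating a fixed-point hardware implementation."""
        levels = 2 ** (bits - 1) - 1           # signed fixed-point levels
        step = np.max(np.abs(w)) / levels
        return np.clip(np.round(w / step), -levels, levels) * step

    # toy robustness study on a fixed linear classifier
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1, 1, (200, 5)), rng.normal(+1, 1, (200, 5))])
    y = np.array([0] * 200 + [1] * 200)
    w = np.linalg.lstsq(X, 2 * y - 1, rcond=None)[0]   # least-squares weights

    for bits in (16, 8, 6, 4, 3, 2):
        acc = np.mean((X @ quantize(w, bits) > 0) == y)
        print(f"{bits:2d}-bit weights: accuracy = {acc:.3f}")

Sweeping the bit width in this way gives a first estimate of the precision a fixed-point implementation must provide before classification accuracy degrades; noise and drift can be simulated analogously by perturbing the weights.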

Progress and Results

Asymptotic learning theorems have been proved showing that any classifier (including the MLP) minimising a quadratic error over a learning set can ultimately reach the Bayes bound only under certain conditions. If the learning set is finite, the asymptotic results hold when noisy replicates are recycled infinitely many times. Probabilities of error (the entries of the confusion matrix) are chosen as performance criteria and are calculated by jackknife or resampling techniques.

The ultimate bounds on performance are assumed to be reached by the best Bayesian classifier, based on the densities of examples in each class. These densities are estimated with Parzen-type kernel estimators whose width varies with location. It has been demonstrated that Gaussian kernels are inadequate in dimensions larger than five, because of the so-called empty-space phenomenon. In this context, an efficient kernel has been proposed that takes into account the dimension, the finite number of samples, and their density; the expression of the asymptotically optimal variable width is used in this estimator. For databases recognised to be small, an Independent Component Analysis preprocessing is proposed in order to allow a reliable approximate density estimate.
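
The exact form of the proposed kernel and of its asymptotically optimal variable width is not reproduced in this synopsis. As a minimal sketch of the general plug-in approach, the Python fragment below estimates each class-conditional density with a variable-width Parzen estimator, here with Gaussian kernels and a k-nearest-neighbour bandwidth rule, both of which are illustrative assumptions only, and classifies by maximising prior times density:

    import numpy as np

    def variable_width_parzen(x, samples, k=10):
        """Parzen density estimate at x with location-dependent widths:
        the bandwidth attached to each training sample is its distance
        to its k-th nearest neighbour. Gaussian kernels are used only
        for illustration; the project's proposed kernel (adapted to the
        dimension, sample count and density) is not reproduced here."""
        n, d = samples.shape
        dists = np.linalg.norm(samples[:, None, :] - samples[None, :, :], axis=2)
        h = np.sort(dists, axis=1)[:, k]               # per-sample bandwidths
        u = np.linalg.norm(x - samples, axis=1) / h    # scaled distances to x
        kern = np.exp(-0.5 * u**2) / ((2 * np.pi) ** (d / 2) * h**d)
        return kern.mean()

    def bayes_predict(x, class_samples, priors, k=10):
        """Plug-in Bayes rule: choose the class maximising prior * density."""
        scores = [p * variable_width_parzen(x, s, k)
                  for s, p in zip(class_samples, priors)]
        return int(np.argmax(scores))

A jackknife or resampling loop around such a classifier then yields the confusion-matrix entries used as performance criteria.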

To test new results and algorithms and to ensure consistent benchmarking, a software environment has been developed, resembling MATLAB(tm)'s Simulink(r) but better suited to the neural-network context. The package, Packlib, is an open programming toolset including a communication library and several graphic tools. All algorithms studied and developed in the project are implemented as Packlib modules by the partners.

First hardware implementations of simple evolutive neural networks have already been completed, and a study of implementation in the new SOI (silicon-on-insulator) technology has been started.

Potential

Theoretical results, as well as the hardware and software tools developed in this project, will provide practical methods for solving realistic classification problems. Furthermore, the test and benchmark results should constitute convincing demonstrations of the power of neural networks.

Information Dissemination Activities

Tutorials on the theoretical results of the project will be proposed during the International Conference NeuroNîmes 93. ELENA partners will present panels on the project, oral presentations, and Packlib demonstrations during this conference. A demonstration version of Packlib should soon be released in the public domain, and supported by periodic courses.


Coordinator

Institut National Polytechnique de Grenoble - TIRF - F
46, avenue Félix Viallet
F - 38031 GRENOBLE CEDEX

Partners

Université Catholique de Louvain - B
Ecole Polytechnique Fédérale de Lausanne - CH
Universidad Politecnica de Catalunya, Barcelona - E
Thomson Sintra ASM, Sophia Antipolis - F

Associate partner

Ecole pour les Etudes et la Recherche en Informatique
et Electronique, Nîmes - F

Contact Point

Prof. Christian Jutten
tel +33/76 574548
fax +33/76 574790
e-mail: chris@TIRF.grenct.fr



ELENA-NERVES-2 - 6891, August 1994

