English translation of the abstract and contents of
Lernen durch Genetisch-Neuronale Evolution:
Aktive Anpassung an Unbekannte Umgebungen mit Selbstentwickelnden
Parallelen Netzwerken, ISBN 3-929037-16-6, 268pp,
Infix-Verlag, St. Augustin/Bonn, July 1992.
Learning by Genetic Neural Evolution:
Active Adaptation to Unknown Environments
with Self-developing Parallel Networks
Byoung-Tak Zhang
(C) Infix-Verlag, July 1992
Abstract
Artificial neural networks
possess, in contrast to symbolic systems, several advantageous
properties, such as massive parallelism and fault tolerance.
Among the various network architectures, multilayer neural networks
are of special importance since they can, in theory, realize
arbitrary bounded continuous functions. Previous studies on learning in
such networks were usually restricted to the adaptation of
connection weights. From the perspectives of artificial
intelligence and the theory of system identification, such
learning methods have at least two deficiencies. First, their
area of application is very limited: if the given network
architecture is not suitable for the problem, learning
converges very slowly or does not converge at all. Second, this
learning paradigm is too passive, in the sense that it can model
only systems that are predefined by the given training sets.
Networks should, however, be able to actively explore unknown or
continuously changing environments and adapt to them.
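The universality claim above is the classical single-hidden-layer
approximation property. As a point of reference (the notation below is
mine, not the dissertation's): for any continuous target f on a compact
set K, a sigmoidal activation sigma, and any tolerance epsilon > 0,
there exist a width N, weights w_j, biases b_j, and coefficients
alpha_j such that

```latex
\sup_{x \in K} \Bigl|\, f(x) - \sum_{j=1}^{N} \alpha_j \,
  \sigma\bigl(w_j^{\top} x + b_j\bigr) \Bigr| < \varepsilon .
```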
This work presents an autonomous
learning method for neural networks, called GENIAL (Genetic
Neural Incremental Autonomous Learning). GENIAL starts with a
small training set and a network structure containing a single hidden
neuron. The learning process consists, on the one hand, of
actively expanding the training set through genetic exploration
of the environment, guided by the network's current knowledge. On the
other hand, GENIAL constructs a neural network model of its
environment by adapting the network's structure and weights on the
basis of the expanded training set.
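The interplay of these two processes can be pictured as a single loop.
Below is a minimal, self-contained sketch in Python for a 1-D
function-approximation setting; all names (TinyNet, genial, grow) and
parameter choices are illustrative assumptions, not the dissertation's
actual code, and plain gradient descent stands in for the
focused-propagation and selective-learning procedures of Chapters 4-5.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyNet:
    """One-hidden-layer sigmoid network trained by plain gradient descent
    (a stand-in for the focused-propagation training of Chapter 4)."""

    def __init__(self, hidden=1, lr=0.5):
        self.w1 = rng.normal(0.0, 1.0, (hidden, 1))
        self.b1 = np.zeros((hidden, 1))
        self.w2 = rng.normal(0.0, 1.0, (1, hidden))
        self.b2 = np.zeros((1, 1))
        self.lr = lr

    def forward(self, x):                       # x has shape (1, n)
        self.h = 1.0 / (1.0 + np.exp(-(self.w1 @ x + self.b1)))
        return self.w2 @ self.h + self.b2       # linear output unit

    def train(self, x, y, epochs=2000):
        n = x.shape[1]
        for _ in range(epochs):
            err = self.forward(x) - y           # (1, n) residuals
            gh = (self.w2.T @ err) * self.h * (1.0 - self.h)
            self.w2 -= self.lr * (err @ self.h.T) / n
            self.b2 -= self.lr * err.sum(1, keepdims=True) / n
            self.w1 -= self.lr * (gh @ x.T) / n
            self.b1 -= self.lr * gh.sum(1, keepdims=True) / n

    def grow(self):
        """Structural self-development: add one hidden neuron."""
        self.w1 = np.vstack([self.w1, rng.normal(0.0, 0.1, (1, 1))])
        self.b1 = np.vstack([self.b1, np.zeros((1, 1))])
        self.w2 = np.hstack([self.w2, rng.normal(0.0, 0.1, (1, 1))])


def genial(env, seed_inputs, eps=1e-3, cycles=30):
    """env is the black-box environment: a callable that labels inputs."""
    x = np.array(seed_inputs, ndmin=2)          # small seed training set
    y = env(x)
    net = TinyNet(hidden=1)                     # single hidden neuron
    for _ in range(cycles):
        net.train(x, y)                         # neural learning
        sq_err = ((net.forward(x) - y) ** 2).ravel()
        if sq_err.mean() < eps:
            break
        # Genetic learning: recombine and mutate the worst-fit inputs to
        # breed "critical" examples, then query the environment for labels.
        parents = x[0, np.argsort(sq_err)[-4:]]
        kids = [np.clip(0.5 * (a + b) + rng.normal(0.0, 0.1), 0.0, 1.0)
                for a, b in (rng.choice(parents, 2) for _ in range(4))]
        kids = np.array(kids, ndmin=2)
        x, y = np.hstack([x, kids]), np.hstack([y, env(kids)])
        net.grow()                              # structural self-development
    return net


# Usage: model an unknown 1-D target from five seed points on [0, 1].
target = lambda z: np.sin(2.0 * np.pi * z)
model = genial(target, rng.uniform(0.0, 1.0, 5))
```

Each cycle thus alternates neural learning (weight adaptation), genetic
learning (breeding critical examples from the worst-fit inputs and
querying the environment for their labels), and self-development
(growing the hidden layer when the error cannot yet be met).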
The efficiency of these
self-developing parallel networks with respect to learning
speed and generalization performance is analyzed and tested
on various function approximation tasks.
The practical applicability of
the learning method is demonstrated on writer-independent digit
recognition and on robot control. The experimental results confirm
that the GENIAL learning method, confronted with an unknown
environment, incrementally and actively acquires new knowledge
and thereby builds an effective neural network model of the
environment. For tasks in which a large number of training
examples is available, the method finds a problem-specific
network structure using only a selected subset of the given
training set. In spite of the computational overhead of network
structure optimization, the self-developing networks can converge
faster than conventional backpropagation networks with pre-optimized
structures.
Contents
- 1 Introduction
- 1.1 Neural Networks and Artificial Intelligence
- 1.1.1 Neural Networks for Artificial Intelligence
- 1.1.2 Artificial Intelligence for Neural Networks
- 1.2 Requirements of Learning Methods for Neural Networks
- 1.2.1 Construction of Neural Systems
- 1.2.2 Quality Criteria for Learning Methods
- 1.3 Goals of the Work
- 1.3.1 Creativity
- 1.3.2 Selectivity
- 1.3.3 Adaptivity
- 1.3.4 Identification of Black-Box Environments
- 1.4 Dissertation Overview
- 2 Learning Methods for Neural Networks
- 2.1 Neurons
- 2.2 A Taxonomy of Neural Nets
- 2.3 Some Network Architectures and Learning Procedures
- 2.3.1 Perceptrons
- 2.3.2 Self-organizing Maps
- 2.3.3 Learning Automata
- 2.3.4 Relaxation Nets
- 2.3.5 Multilayer Nets
- 2.3.6 Simulated Annealing and Boltzmann Machines
- 2.4 Current Research Issues
- 3 The GENIAL Learning Model
- 3.1 The Environments
- 3.2 A Taxonomy of Associative Learning Methods
- 3.2.1 Learning Principles
- 3.2.2 Learning Mechanisms
- 3.2.3 The 6 Learning Types
- 3.2.4 Passive Learning Paradigm
- 3.2.5 Active Learning Paradigm
- 3.3 Genetic Neural Evolutionary Learning
- 3.3.1 The Network
- 3.3.2 Genetic Learning
- 3.3.3 Neural Learning
- 3.3.4 GENIAL Control Algorithm
- 3.4 GENIE: An Environment for Developing Artificial Neural Systems
- 3.4.1 System Architecture
- 3.4.2 Operation Modes
- 4 Efficient Gradient Search by Focused Propagation
- 4.1 Backpropagation
- 4.2 Convergence Properties of Backpropagation
- 4.3 Efficient Gradient-Descent Methods
- 4.4 Focused Propagation
- 4.4.1 Derivation of the Modification Rule
- 4.4.2 Training Algorithm FP
- 4.4.3 Properties of the FP Algorithm
- 4.5 Comparison of Convergence Speeds
- 4.5.1 Nonlinear Functions
- 4.5.2 Experimental Results
- 4.6 Remarks on Choosing Learning Parameters
- 5 Minimization of Training Set Complexity
- 5.1 Generalization
- 5.2 Why Small Training Sets?
- 5.3 Selective Incremental Learning
- 5.3.1 Algorithm SEL
- 5.3.2 Convergence Criterion
- 5.3.3 Approximation-Theoretical Considerations
- 5.4 Relationship among Generalization, Learning Time, and Training Set Size
- 5.4.1 Quality Criteria for Learning
- 5.4.2 SEL Learning Curves
- 5.4.3 Generalization, Learning Speed and Example Selection
- 5.5 Minimal Training Sets
- 5.5.1 Linear Mappings with Binary Inputs
- 5.5.2 Nonlinear Mappings with Continuous Inputs
- 5.6 Summary
- 6 Active Exploration of Unknown Environments
- 6.1 Necessity of Novel Learning Examples
- 6.2 Example Generation by Genetic Search
- 6.2.1 Genetic Search for Critical Examples
- 6.2.2 Reproduction Plan
- 6.2.3 Genetic Operators for Example Generation
- 6.3 Creative Incremental Learning
- 6.4 Genetic Creation vs. Random Generation
- 6.4.1 Preliminary Experiments
- 6.4.2 Preliminary Results
- 6.5 Summary
- 7 Genetic Neural Self-developing Nets
- 7.1 What Network Size is Optimal?
- 7.1.1 Learning Capability and Network Size
- 7.1.2 Problems of Pure Connectionist Approaches
- 7.2 Approaches to Optimizing Network Structures
- 7.2.1 Destructive Approaches
- 7.2.2 Constructive Approaches
- 7.3 Learning by Self-development
- 7.3.1 Development Process
- 7.3.2 Two Learning Algorithms for Self-developing Nets
- 7.4 Optimality of Self-development Algorithms
- 7.4.1 Convergence and Optimality of Algorithms
- 7.4.2 v-Optimality of Self-developing Nets
- 7.4.3 Reduction of Training Set Complexities
- 7.5 Time Complexity of Self-developing Nets
- 7.5.1 Selective Developing Networks
- 7.5.2 Creative Developing Networks
- 7.5.3 Influence of Neural Growth
- 7.6 Relation to Previous Approaches
- 7.7 GENIAL, Genetic Algorithms and Simulated Annealing
- 8 Digit Recognition and Function Approximation
- 8.1 Writer-Independent Digit Recognition
- 8.1.1 Solutions of Self-developing Nets
- 8.1.2 Comparison Results
- 8.1.3 Interpretation of Recognition Rates
- 8.1.4 Summary
- 8.2 Approximation of Complex Functions
- 8.2.1 Comparison of Learning Strategies
- 8.2.2 Changing Strategies via Neural Growth and Seed Examples
- 8.2.3 Summary
- 9 Self-developing Nets for Robot Control
- 9.1 Basics of Robot Control
- 9.2 The Task and the Simulated Robot Arm
- 9.3 Learning to Predict Trajectories
- 9.3.1 Trajectory Prediction
- 9.3.2 Learning Results for Discrete Coding
- 9.3.3 Learning Results for Continuous Coding
- 9.3.4 Trajectory Prediction for Arbitrary Directions
- 9.4 Learning Inverse Kinematics
- 9.4.1 Inverse Kinematics Problem
- 9.4.2 Creative Development Learning of Inverse Kinematics
- 9.4.3 Properties of Genetic Learning
- 9.4.4 Learning Results for Continuous Coding
- 9.5 Concluding Remarks
- 10 Summary and Future Work
- Appendix
- Operating Instructions for GENIE
- References
- Symbol Index
- Index