Neural Networks

Table of Contents

1 Introduction

1.1 Features of This Package

2 Neural Network Theory—A Short Tutorial

2.1 Introduction to Neural Networks

2.1.1 Function Approximation

2.1.2 Time Series and Dynamic Systems

2.1.3 Classification and Clustering

2.2 Data Preprocessing

2.3 Linear Models

2.4 The Perceptron

2.5 Feedforward and Radial Basis Function Networks

2.5.1 Feedforward Neural Networks

2.5.2 Radial Basis Function Networks

2.5.3 Training Feedforward and Radial Basis Function Networks

2.6 Dynamic Neural Networks

2.7 Hopfield Network

2.8 Unsupervised and Vector Quantization (VQ) Networks

2.9 Further Reading

3 Getting Started and Basic Examples

3.1 Palettes and Loading the Package

3.1.1 Palettes

3.1.2 Loading the Package and Data

3.2 Package Conventions

3.2.1 Data Format

3.2.2 Function Names

3.2.3 Network Format

3.3 NetClassificationPlot

3.4 Basic Examples

3.4.1 Classification Problem Example

3.4.2 Function Approximation Example

4 The Perceptron

4.1 Perceptron Network Functions and Options

4.1.1 InitializePerceptron

4.1.2 PerceptronFit

4.1.3 NetInformation

4.1.4 NetPlot

4.2 Examples

4.2.1 Two Classes in Two Dimensions

4.2.2 Several Classes in Two Dimensions

4.2.3 Higher-Dimensional Classification

4.3 Further Reading

5 The Feedforward Neural Network

5.1 Feedforward Network Functions and Options

5.1.1 InitializeFeedForwardNet

5.1.2 NeuralFit

5.1.3 NetInformation

5.1.4 NetPlot

5.1.5 LinearizeNet and NeuronDelete

5.1.6 SetNeuralD, NeuralD, and NNModelInfo

5.2 Examples

5.2.1 Function Approximation in One Dimension

5.2.2 Function Approximation from One to Two Dimensions

5.2.3 Function Approximation in Two Dimensions

5.3 Classification with Feedforward Networks

5.4 Further Reading

6 The Radial Basis Function (RBF) Network

6.1 RBF Network Functions and Options

6.1.1 InitializeRBFNet

6.1.2 NeuralFit

6.1.3 NetInformation

6.1.4 NetPlot

6.1.5 LinearizeNet and NeuronDelete

6.1.6 SetNeuralD, NeuralD, and NNModelInfo

6.2 Examples

6.2.1 Function Approximation in One Dimension

6.2.2 Function Approximation from One to Two Dimensions

6.2.3 Function Approximation in Two Dimensions

6.3 Classification with RBF Networks

6.4 Further Reading

7 Training Feedforward and Radial Basis Function Networks

7.1 NeuralFit

7.2 Examples of Different Training Algorithms

7.3 Train with FindMinimum

7.4 Troubleshooting

7.5 Regularization and Stopped Search

7.5.1 Regularization

7.5.2 Stopped Search

7.5.3 Example

7.6 Separable Training

7.6.1 Small Example

7.6.2 Larger Example

7.7 Options Controlling Training Results Presentation

7.8 The Training Record

7.9 Writing Your Own Training Algorithms

7.10 Further Reading

8 Dynamic Neural Networks

8.1 Dynamic Network Functions and Options

8.1.1 Initializing and Training Dynamic Neural Networks

8.1.2 NetInformation

8.1.3 Predicting and Simulating

8.1.4 Linearizing a Nonlinear Model

8.1.5 NetPlot—Evaluate Model and Training

8.1.6 MakeRegressor

8.2 Examples

8.2.1 Identifying the Dynamic of a DC Motor

8.2.2 Identifying the Dynamics of a Hydraulic Actuator

8.2.3 Bias-Variance Tradeoff—Avoiding Overfitting

8.2.4 Fix Some Parameters—More Advanced Model Structures

8.3 Further Reading

9 Hopfield Networks

9.1 Hopfield Network Functions and Options

9.1.1 HopfieldFit

9.1.2 NetInformation

9.1.3 HopfieldEnergy

9.1.4 NetPlot

9.2 Examples

9.2.1 Discrete-Time Two-Dimensional Example

9.2.2 Discrete-Time Classification of Letters

9.2.3 Continuous-Time Two-Dimensional Example

9.2.4 Continuous-Time Classification of Letters

9.3 Further Reading

10 Unsupervised Networks

10.1 Unsupervised Network Functions and Options

10.1.1 InitializeUnsupervisedNet

10.1.2 UnsupervisedNetFit

10.1.3 NetInformation

10.1.4 UnsupervisedNetDistance, UnUsedNeurons, and NeuronDelete

10.1.5 NetPlot

10.2 Examples without SOM

10.2.1 Clustering in Two-Dimensional Space

10.2.2 Clustering in Three-Dimensional Space

10.2.3 Pitfalls with Skewed Data Density and Badly Scaled Data

10.3 Examples with SOM

10.3.1 Mapping from Two to One Dimensions

10.3.2 Mapping from Two Dimensions to a Ring

10.3.3 Adding a SOM to an Existing Unsupervised Network

10.3.4 Mapping from Two to Two Dimensions

10.3.5 Mapping from Three to One Dimensions

10.3.6 Mapping from Three to Two Dimensions

10.4 Change Step Length and Neighbor Influence

10.5 Further Reading

11 Vector Quantization

11.1 Vector Quantization Network Functions and Options

11.1.1 InitializeVQ

11.1.2 VQFit

11.1.3 NetInformation

11.1.4 VQDistance, VQPerformance, UnUsedNeurons, and NeuronDelete

11.1.5 NetPlot

11.2 Examples

11.2.1 VQ in Two-Dimensional Space

11.2.2 VQ in Three-Dimensional Space

11.2.3 Overlapping Classes

11.2.4 Skewed Data Densities and Badly Scaled Data

11.2.5 Too Few Codebook Vectors

11.3 Change Step Length

11.4 Further Reading

12 Application Examples

12.1 Classification of Paper Quality

12.1.1 VQ Network

12.1.2 RBF Network

12.1.3 Feedforward Network

12.2 Prediction of Currency Exchange Rate

13 Changing the Neural Network Structure

13.1 Change the Parameter Values of an Existing Network

13.1.1 Feedforward Network

13.1.2 RBF Network

13.1.3 Unsupervised Network

13.1.4 Vector Quantization Network

13.2 Fixed Parameters

13.3 Select Your Own Neuron Function

13.3.1 The Basis Function in an RBF Network

13.3.2 The Neuron Function in a Feedforward Network

13.4 Accessing the Values of the Neurons

13.4.1 The Neurons of a Feedforward Network

13.4.2 The Basis Functions of an RBF Network

