1. Introduction to knowledge-based intelligent systems (summary / questions for review / references)
2. Rule-based expert systems
3. Uncertainty management in rule-based expert systems
4. Fuzzy expert systems
5. Frame-based expert systems
6. Artificial neural networks
7. Evolutionary computation
8. Hybrid intelligent systems
9. Knowledge engineering and data mining
6. Artificial neural networks
Machine learning involves adaptive mechanisms that enable computers to learn from experience, learn by example and learn by analogy. Learning capabilities can improve the performance of an intelligent system over time. The most popular approaches to machine learning are artificial neural networks and genetic algorithms. This chapter is dedicated to neural networks.
An artificial neural network (ANN) consists of a number of very simple and highly interconnected processors, called neurons, which are analogous to the biological neurons in the brain. The neurons are connected by weighted links that pass signals from one neuron to another. Each link has a numerical weight associated with it. Weights are the basic means of long-term memory in ANNs; they express the strength, or importance, of each neuron input. A neural network "learns" through repeated adjustments of these weights.
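As a rough illustration of weighted links, the minimal Python sketch below (not from the book) computes a neuron's net input as the weighted sum of its input signals and then nudges the weights toward a desired output, in the spirit of "learning through repeated weight adjustments". The function names, the input values and the learning rate are assumptions for illustration only.

```python
# Minimal sketch of a neuron's weighted links and weight adjustment.
# Names, values and the learning rate are illustrative assumptions.

inputs  = [0.5, -1.0, 0.2]   # signals arriving on the input links
weights = [0.4,  0.7, -0.3]  # numerical weight on each link (long-term memory)

def compute_net_input(inputs, weights):
    """Weighted sum of the input signals: net = sum(x_i * w_i)."""
    return sum(x * w for x, w in zip(inputs, weights))

def adjust_weights(inputs, weights, error, learning_rate=0.1):
    """Nudge each weight in proportion to its input and the output error."""
    return [w + learning_rate * error * x for x, w in zip(inputs, weights)]

net = compute_net_input(inputs, weights)
desired = 1.0                      # target output (assumed)
error = desired - net              # difference between desired and actual output
weights = adjust_weights(inputs, weights, error)
print(net, weights)
```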
A neuron is the simplest computing element. Figure 6.3 shows the diagram of a neuron. The input signals can be raw data or the outputs of other neurons; the output signal can be either a final solution to the problem or an input to other neurons. The output is determined by applying a transfer, or activation, function to the neuron's inputs. The most common choices for the activation function are the step, sign, linear and sigmoid functions, illustrated in Figure 6.4. The step and sign activation functions, also called hard limit functions, are often used in decision-making neurons for classification and pattern-recognition tasks. Neurons with the linear activation function are often used for linear approximation. Neurons with the sigmoid function are often used in back-propagation networks.
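To make the four activation functions concrete, the sketch below implements them in plain Python and applies each to a neuron's net input minus a threshold, following the common form Y = f(X - theta). The specific input values, weights and threshold are made up for illustration; only the four function shapes come from the text.

```python
import math

# The four common activation functions mentioned above, applied to the
# neuron's net weighted input minus a threshold (values are illustrative).

def step(x):
    return 1 if x >= 0 else 0          # hard limiter: output 1 or 0

def sign(x):
    return 1 if x >= 0 else -1         # hard limiter: output +1 or -1

def linear(x):
    return x                           # output equals the net input

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))  # smooth squashing into (0, 1)

inputs  = [0.5, -1.0, 0.2]             # assumed input signals
weights = [0.4,  0.7, -0.3]            # assumed link weights
theta   = 0.1                          # assumed threshold

X = sum(x * w for x, w in zip(inputs, weights))  # net weighted input
for f in (step, sign, linear, sigmoid):
    print(f.__name__, f(X - theta))
```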

This article is a set of reading notes on Chapter 6 of "Artificial Intelligence: A Guide to Intelligent Systems", which is devoted to artificial neural networks: their adaptive mechanisms, the single-layer perceptron, multilayer feed-forward networks, the back-propagation learning algorithm, and unsupervised learning via the Hebbian learning rule and competitive learning. The chapter explains concepts such as neuron structure, weight adjustment and error back-propagation, and introduces the associative-memory capabilities of the Hopfield network and BAM.