Many research results have been reported in the fields of neural networks and soft computing. These fields have been widely developed not only for pattern recognition but also for control and forecasting systems. The network structure and learning method of a neural network resemble their biological counterparts. A basic neural network usually consists of three layers, and each layer is composed of input, connecting-weight, summation, threshold and output units. In our previous studies, we designed and proposed a motion detection system and an image processing model using a multi-layered neural network and an artificial retina model.
We have also constructed a pattern recognition machine using variable resistors and operational amplifiers, with CdS cells as the input sensors. However, the resistance values had to be adjusted by hand. Moreover, a capacitor loses its electric charge over time and therefore needs an analog refresh process, which is far more difficult than the digital refresh process of DRAM. In the present study, we propose a neural network using analog multiplier circuits and operational amplifiers. The learning and working times are very short because this system does not depend on a clock frequency as digital processing does. First we designed a neural network circuit in the SPICE simulator. Next we measured the output behavior of the basic neural network with Capture CAD and SPICE.
Capture is the computer-aided design (CAD) front end of the SPICE simulator.
We compared both output results and confirmed EX-OR behavior to some extent [8, 9]. EX-OR behavior is a typical working-confirmation test for a three-layered neural network model: because EX-OR is not linearly separable, it is suitable data for checking a neural network's ability.
Moreover, a model that used capacitors as the connecting weights was proposed earlier. However, it is difficult to adjust such connecting weights. In the present study, we propose a neural network using analog multiplier circuits, in which each connecting weight is represented as a voltage of a multiplier circuit. The connecting weights can therefore be changed easily, and the learning process becomes quicker. First we built a neural network as a computer program and a neural circuit in the SPICE simulation; SPICE is the electric-circuit simulator described in the next chapter. Next we compared the behavior of the computer calculation with the SPICE simulation, and confirmed EX-OR behavior to some extent [10].
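Why EX-OR is the standard check can be sketched in a few lines of software. A single threshold unit can realize only linearly separable functions, which EX-OR is not, but a three-layer network reproduces it. The weights below are illustrative hand-wired values, not ones taken from the circuits in this paper.

```python
# A hand-wired 2-2-1 three-layer network that computes EX-OR.
# A single step unit step(w1*x1 + w2*x2 - theta) cannot do this,
# which is why EX-OR is used to confirm three-layer behavior.

def step(v):
    return 1 if v >= 0 else 0

def xor_net(x1, x2):
    # Hidden layer: h1 fires for "at least one input on", h2 for "both on".
    h1 = step(x1 + x2 - 0.5)
    h2 = step(x1 + x2 - 1.5)
    # Output: "at least one, but not both".
    return step(h1 - h2 - 0.5)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", xor_net(x1, x2))
```

Running the loop prints the full EX-OR truth table, which is exactly the working confirmation the paper performs on its circuits.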
2 Neural Network Using Multiplier Circuits
In our previous study, we used multiplier circuits to realize the analog neural network.
In the SPICE simulation, the circuit is drawn with the CAD tool called Capture. After setting the input voltage or frequency, SPICE offers several analysis functions: AC, DC and transient. First, we made differential amplifier circuits and Gilbert multiplier circuits as building blocks of the analog neural network. We show the differential circuit in Fig. 1. Many circuits take an input signal represented as a difference between two voltages; these circuits all use some variant of the differential pair. Figure 1 shows the schematic diagram of the simple transconductance amplifier with a differential pair. The current mirror formed by Q3 and Q4 produces the output current, which is equal to I1 − I2. The differential circuit can be extended to a two-quadrant multiplier.
Its output current can be either positive or negative, but the bias current Ib can only be positive. Vb, which controls this current, can only be a positive voltage. So the circuit multiplies the positive current Ib by the tanh of (V1 − V2). If we plot V1 − V2 horizontally and I vertically, this circuit can work only in the first and second quadrants.
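A minimal behavioral model of this transconductance amplifier can be written directly from the tanh relation above. The thermal voltage value and the ideal (slope-factor-free) form are assumptions of this sketch, not measurements from the circuit.

```python
import math

# Idealized behavioral model of the simple transconductance amplifier of
# Fig. 1: I_out = I_b * tanh((V1 - V2) / (2 * V_T)).  Because I_b is
# restricted to positive values, the circuit acts as a two-quadrant
# multiplier of I_b and tanh(V1 - V2).
V_T = 0.026  # thermal voltage at room temperature, in volts (assumed)

def transamp(v1, v2, i_b):
    assert i_b >= 0, "bias current Ib can only be positive"
    return i_b * math.tanh((v1 - v2) / (2 * V_T))

# Near v1 == v2 the tanh is roughly linear; for large differential
# inputs the output saturates at +/- I_b.
print(transamp(0.010, 0.0, 1e-6))  # small input: approximately linear region
print(transamp(0.300, 0.0, 1e-6))  # large input: saturated near +I_b
```

The saturation at ±Ib is what limits this stage to two quadrants and motivates the Gilbert arrangement described next.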
We show the Gilbert multiplier circuit in Fig. 2. To multiply a signal of either sign by another signal of either sign, we need a four-quadrant multiplier. We can achieve all four quadrants of multiplication by using each of the output currents from the differential pair (I1 or I2) as the bias source for another differential pair. Figure 2 shows the schematic of the Gilbert multiplier. In the range where tanh x is approximately equal to x, this circuit multiplies V1 − V2 by V3 − V4, and we confirmed the voltage range over which it operates well. One neuron is composed of connecting weights, summation and a threshold function. The product of the input signal and the connecting weights is realized by multiplier circuits. Summation and the threshold function
Fig. 1 Differential circuit
Fig. 2 Gilbert multiplier circuit
are realized by adder and difference circuits. In a previous hardware model of a neural network that used fixed resistance elements, the resistors and their complex wiring had to be changed at each step of the learning process. With variable resistors, the resistance values have to be adjusted by hand.
Figure 3 shows the one-neuron circuit, built from multiplier circuits and an op-amp adder circuit. Each multiplier circuit calculates the product of its two input values, the input signal and the connecting weight. There are three multiplier circuits: two handle the two input signals and their connecting weights, and the third realizes the threshold part of the basic neuron. In the threshold part, the input signal is −1, so the multiplier outputs the product of −1 and a connecting weight; this output is the threshold of the neuron.
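The signal flow of this one-neuron circuit can be sketched as follows. Using tanh as the saturating output stage is an assumption of this sketch (it matches the multiplier analysis above but is not a measured curve from Fig. 3), and the weight values are illustrative.

```python
import math

# Sketch of the one-neuron circuit of Fig. 3: two multipliers form
# w_i * x_i, a third multiplier with fixed input -1 supplies the
# threshold, and an op-amp adder sums all three products.

def neuron(x1, x2, w1, w2, w_th):
    # Threshold realized as a connecting weight on the constant input -1.
    s = w1 * x1 + w2 * x2 + w_th * (-1.0)
    return math.tanh(s)  # assumed saturating output stage

print(neuron(1.0, 0.0, w1=0.8, w2=0.8, w_th=0.5))  # above threshold: positive
print(neuron(0.0, 0.0, w1=0.8, w2=0.8, w_th=0.5))  # below threshold: negative
```

Treating the threshold as just another weight on a −1 input is what lets the same multiplier circuit serve all three branches.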
3 Perceptron Feedback Network by Analog Circuits
In Fig. 4, we show the architecture of the perceptron neural network. This is a basic learning network that uses a teaching signal: 'y' is the output signal of the network and 't' is the teaching signal. The error value 't − y' is calculated by the subtract circuit. After the error value is calculated, the product of the error value and the input signal forms a feedback signal. The inputs of the subtract circuit on the feedback line are the feedback signal and the original connecting weight, and this subtract circuit calculates the new connecting weight. Once the product with the new connecting weight has been calculated, the next learning step starts.
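The feedback loop just described is the classical perceptron rule, which can be sketched in software. The learning rate `eta` and the epoch count are illustrative software values, not circuit constants from the paper.

```python
# Software sketch of the perceptron feedback loop: the subtract circuit
# yields the error t - y, a multiplier forms (error * input), and the
# feedback-line subtract circuit produces the new connecting weight.

def step(v):
    return 1 if v >= 0 else 0

def train_perceptron(samples, eta=0.2, epochs=20):
    w1 = w2 = w0 = 0.0
    for _ in range(epochs):
        for x1, x2, t in samples:
            y = step(w1 * x1 + w2 * x2 - w0)  # feed-forward line
            err = t - y                        # subtract circuit: t - y
            w1 += eta * err * x1               # feedback: error * input
            w2 += eta * err * x2
            w0 -= eta * err                    # threshold as weight on -1
    return w1, w2, w0

# Logical AND is linearly separable, so a single perceptron converges on it.
w1, w2, w0 = train_perceptron([(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)])
for x1, x2, t in [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]:
    assert step(w1 * x1 + w2 * x2 - w0) == t
```

AND is used here rather than EX-OR because a single perceptron, like the single-layer circuit of Fig. 5, can only learn linearly separable patterns.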
[Fig. 3 schematic labels omitted: uA741 op-amp stages (U1–U4), DN5567 JFET pairs, ±12 V supplies, and 100 Ω / 0.1 kΩ resistors]
Fig. 3 Neural circuit in Capture CAD
Fig. 4 The architecture of perceptron
Figure 5 shows the perceptron circuit, with two inputs and one output. There are multiplier circuits and adder circuits in the feed-forward line. The error value between the original output and the teaching signal is calculated by a subtract circuit, and there are multiplier circuits and an adder circuit in the feedback lines. In the experimental result for this perceptron, the learning time is about 900 µs, as shown in Fig. 6 [11]. Figure 7 shows the architecture of the three-layer neural circuits. In Fig. 8, we show the learning neural circuit in Capture CAD by SPICE.
Fig. 5 The circuit of perceptron
Fig. 6 The convergence output of perceptron
Fig. 7 The diagram of neural circuits with threshold
Fig. 8 The learning feedback neural circuit
4 Neural Circuit on Alternating Current Behavior
We proposed an analog neural network using multiplier circuits in our previous research.
However, when constructing a network, one disadvantage is that the input and output range is limited. Furthermore, the circuit operation becomes unstable because of the characteristics of the semiconductor multiplier circuit; these problems are called 'circuit limitations'. One cause is transistor mismatch: not all transistors are created equal. Another cause is the output-voltage limitation. We therefore tried to use alternating current as the transmission signal in the analog neural network, as shown in Fig. 9.
The original input signal is direct current. We used a voltage-frequency converter unit to generate the connecting weight. The input signal and connecting weight generate an alternating current through the amplifier circuit. Two alternating currents are added by an adder circuit, and the output of the adder circuit is a modulated wave. This modulated wave is the first output signal of this neural network.
Figure 10 shows the RMS value of the AC voltage output by the neural circuit. In this network, two alternating currents are added by an adder circuit, and the output of the adder circuit is a modulated wave; Figure 10 shows the RMS value of this modulated wave. The circuit operates satisfactorily because the output voltage increases monotonically in the general-purpose frequency range. Figure 11 shows the output of the neural circuit as a two-dimensional graph. We confirmed that the RMS value of the output voltage takes appropriate values over the two-dimensional area.
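The RMS readout of the modulated wave can be checked numerically. For sinusoids at different frequencies the powers add, so the RMS of the sum is sqrt(a1²/2 + a2²/2). The amplitudes and frequencies below are illustrative values, not the paper's measured ones.

```python
import math

# Numerical RMS of a modulated wave formed by adding two AC carriers,
# as the adder circuit does in Fig. 9 / Fig. 10.
def rms_of_sum(a1, f1, a2, f2, duration=0.1, n=100_000):
    dt = duration / n
    acc = 0.0
    for k in range(n):
        t = k * dt
        v = (a1 * math.sin(2 * math.pi * f1 * t)
             + a2 * math.sin(2 * math.pi * f2 * t))
        acc += v * v
    return math.sqrt(acc / n)

measured = rms_of_sum(1.0, 4000.0, 0.5, 1000.0)
# Analytically, powers of different-frequency sinusoids add:
analytic = math.sqrt(1.0**2 / 2 + 0.5**2 / 2)
print(measured, analytic)  # the two values agree closely
```

Because the cross term between the two carriers averages to zero, the RMS value is a monotonic function of each amplitude, which is what makes it a usable readout for the network's output.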
Fig. 9 AC operation neural circuit

Fig. 10 The output RMS value of the neural circuit

Fig. 11 The output behavior of the AC operation neural circuit

When we construct a learning AC neural circuit, we have to convert the feedback modulated-current signal into a connecting weight with a frequency. The correction error signal is calculated as the product of the difference signal and the input signal, where the difference signal is the difference between the output value and the teaching signal. Figure 12 shows the convergence result of the learning experiment; it shows that the learning process succeeds in a very short time. Figure 13 shows the basic AC-operation learning neural network model. This circuit is composed of a rectifier circuit, a voltage-frequency converter, an amplifier, a subtract circuit, an adder circuit and an inverter. The input signal is direct current. The initial value of the connecting weight is also direct current. This direct current is converted to a frequency by a voltage-frequency converter circuit. The input signal and connecting weight generate the alternating current through the amplifier circuit.
Figure 14 shows the relationship between the number of learning iterations and the output frequency. Frequency f1 converges to 4 kHz and frequency f2 converges to 1 kHz. The number of learning iterations is very small, so the learning speed of this AC-operation circuit is very fast. Figure 15 shows the whole circuit of the AC-operation learning neural network.
Fig. 12 The convergence result of learning experiment
Fig. 13 Basic AC operation learning neural network model
Two alternating currents are added by an adder circuit, and the output of the adder circuit is a modulated wave. This modulated wave is phase-inverted by an inverter circuit, and the phase-inverted wave is amplified; the amplification factor is the value of the teaching signal. This amplified signal and the modulated wave are added by an adder circuit. The output of this adder circuit is the error value, the difference between the output and the teacher signal. Thus, we do not have to use a subtract circuit to calculate the error value (Figs. 17 and 18).
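The trick of replacing the subtract circuit with an inverter and an adder rests on the fact that adding a 180°-inverted wave is the same as subtracting it. The carrier frequency and amplitudes below are illustrative assumptions.

```python
import math

# Subtraction by phase inversion: instead of a subtract circuit, the
# teacher term is realized by inverting the carrier, scaling it by the
# teaching value, and adding it to the network's modulated output wave.
f = 1000.0               # carrier frequency in Hz (illustrative)
y_amp, t_amp = 0.8, 0.5  # output and teaching amplitudes (illustrative)

def error_wave(time):
    out = y_amp * math.sin(2 * math.pi * f * time)                # output wave
    inv_teacher = -t_amp * math.sin(2 * math.pi * f * time)       # inverted, amplified
    return out + inv_teacher                                       # adder circuit

# Near the carrier peak the error amplitude is y_amp - t_amp = 0.3,
# i.e. the adder output carries the error y - t directly.
print(error_wave(0.00025))
```

Since both waves share the same carrier, the adder output is itself a sine whose amplitude is the error, ready for the rectifier stage described next.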
Fig. 14 The number of learning iterations and the output frequency
Fig. 15 The AC operation learning neural circuit
Fig. 16 The simulation results of AC feed-back neural model
The output of the adder circuit is converted from alternating current to direct current by the rectifier circuit. This direct current is the correction signal for the connecting weights. The new connecting weight is calculated by a subtract circuit, which combines the original connecting weight and the correction signal. The output of the subtract circuit is converted to a frequency signal by a voltage-frequency converter circuit. In other words, in the AC feedback circuit for the BP learning process, after obtaining DC by the rectifier circuit, we have to convert the DC voltage back into an AC current with a frequency. Finally, the alternating current is generated by the amplifier circuit, whose amplification factor is the value of the input signal.
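The feedback chain for one connecting weight can be sketched stage by stage. The amplifier gain `eta` and the volts-to-hertz constant of the V-F converter are illustrative assumptions, as is the ideal full-wave rectifier model.

```python
# Stage-by-stage sketch of the AC feedback chain for one weight:
# rectify the error wave to DC, form the correction, subtract it from
# the old weight, then convert the new weight voltage to a frequency.

def rectify(peak_amplitude):
    # Ideal full-wave rectifier + filter: a sine of amplitude A
    # averages to 2A/pi (assumed ideal behavior).
    return 2.0 * abs(peak_amplitude) / 3.141592653589793

def update_weight(w_old, error_amplitude, input_signal, eta=0.5):
    correction = eta * rectify(error_amplitude) * input_signal  # amplifier stage
    return w_old - correction                                   # subtract circuit

def v_to_f(voltage, hz_per_volt=1000.0):
    # Voltage-frequency converter (illustrative linear constant).
    return voltage * hz_per_volt

w_new = update_weight(w_old=4.2, error_amplitude=0.3, input_signal=1.0)
print(v_to_f(w_new))  # new connecting weight expressed as a frequency in Hz
```

Each function corresponds to one block of Fig. 15: rectifier, amplifier plus subtract circuit, and voltage-frequency converter.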
Figure 16 shows the simulation results of the AC feedback neural model: the two input signals, the connecting weights, and the wave after rectification.
5 Deep Learning Model
Recently, deep learning models have been proposed and developed for many applications such as image recognition and artificial intelligence. Deep learning is a class of machine learning algorithms developed in recent research, and its recognition ability continues to improve. Deep learning models are in practical use in many fields, not only pattern recognition but also image and speech recognition, and they are expected to advance robotics, conversation systems and artificial intelligence.
Fig. 17 The structure of 2-patterns analog AC operation neural network with V-F conversion circuit