MODELLING OF A NONLINEAR SWITCHED RELUCTANCE DRIVE BASED ON
ARTIFICIAL NEURAL NETWORKS
Ç Elmas*, Ş Sağıroğlu+, İ.Çolak*, G Bal*
* Technical Education Faculty, Gazi University, Ankara, Turkey
+ Engineering Faculty, Erciyes University, Kayseri, Turkey
Abstract - Switched Reluctance Motors (SRMs) are increasingly popular machines in electrical drives, whose performance is directly related to their operating condition. Their dynamic characteristics vary as conditions change. Recently, several methods of modelling the magnetic saturation of SRMs have been proposed. However, the SRM is nonlinear and cannot be adequately described by such models. Artificial Neural Networks (ANNs) may be used to overcome this problem. This paper presents a method which uses the backpropagation algorithm to handle one of the modelling problems in a switched reluctance motor. The simulated waveforms of a phase current are compared with those obtained from a commercial switched reluctance motor. Experimental results have validated the applicability of the proposed method.
INTRODUCTION
An important characteristic of the SRM drive is its inherent nonlinearity. The inductance of the magnetic circuit is a nonlinear function of both phase current and rotor position. In addition, the system handles energy most efficiently when the energy conversion cycles are made as square as possible, maximising the ratio of energy converted to energy input [12, 13]. This poses a particularly difficult problem because of the complicated magnetic circuit, which operates at varying levels of saturation under operating conditions. Square energy conversion cycles are created by driving the motor into magnetic saturation, bringing the energy handling requirements of the inverter into closer alignment with the energy conversion characteristics of the motor [12, 13]. This can result in reduced switch requirements and energy savings. The recirculated energy in a drive with an applied voltage requires current flow and acts to increase the inverter and motor losses that accompany the current flow. Stephenson and Corda [1] proposed a quite successful method to model the flux linkage as a function of current and rotor position. This method has been modified by several others [2, 3, 4]. Torrey and Lang [5] have also proposed a method to provide analytical expressions for the flux linkage and current for every rotor position within a single summary equation. In contrast to the above methods, there have been many attempts to generate the necessary static magnetisation curves by Finite Element Analysis (FEA) [6]. Recently, the authors have reported an application of ANNs for modelling the magnetic nonlinearity of the magnetisation curves [14].
Artificial Neural Network (ANN) techniques have grown rapidly in recent years, and extensive research has been carried out on their application. ANN technology has the potential to provide an improved method of determining nonlinear models which is complementary to conventional techniques. ANNs are inherently nonlinear, and instead of an explicit algorithm only a relevant set of training examples is required, which can be derived from operating plant data.
This paper investigates the use of ANNs for the modelling of the magnetic nonlinearity of the SRM. Since this method does not require any prior information regarding the SRM system apart from the input and output signals, it is quite simple and cost effective. The modelling method in this paper departs significantly from the previous modelling method by the authors, in that the magnetisation curves are represented by functions of flux linkage against rotor position, rather than against current. In the paper, the magnetic nonlinearity of the SRM is presented first, followed by the ANN approach to the modelling of the SRM. ANN training requirements are discussed next and, finally, the models are verified through comparisons with experimentally measured results.
MAGNETIC NONLINEARITIES OF THE SRM
The first step in modelling the nonlinearities of the SRM is predicting the (ψ/θ/i) curves for a given motor. For the experimental motor, these curves are shown in Fig 1. Although the construction of the SRM is quite simple, it is very difficult to derive a comprehensive mathematical model for the behaviour of the machine, and many attempts have been made by different researchers to overcome this problem. In the Stephenson and Corda method, flux linkage is modelled as a function of current, with rotor position as a parameter. The method is based on storing the (ψ/θ/i) information in a look-up table. Elmas and Zelaya de la Parra [4] described a similar method which applies Least Squares curve fitting to produce a representation of the measured magnetic data as a series of polynomials. Pulle [3] has investigated the merits of representing the magnetising curve of an SRM by customised cubic splines and storing the coefficients in a new database, with the aim of improving the method suggested by Stephenson and Corda. Miller and McGilp [2] have also adapted the Stephenson and Corda method. Their aim was to represent flux linkage as a function of rotor position, with current as an undetermined parameter rather than position. In addition, an original work was published by Torrey and Lang [5]. The goal of their method was to provide complete analytic expressions of the flux linkage/current information for every rotor position within a single summary equation.
The general equation of the SRM for one phase only is as follows:

V = R i + dψ(i,θ)/dt   (1)

where V is the applied phase voltage, R the phase resistance, i the phase current and ψ the flux linkage.
For the solution of Eq (1), it is necessary to model the magnetic nonlinearities in the form i(ψ,θ) rather than in the ψ(i,θ) form shown in Fig 1. This is because after each integration step the solution of Eq (1) yields a value for the flux, which can then be used to find the corresponding current value for the next integration step. The variation of flux-linkage with current is the same for the remaining phases except for the angular dependence, which takes into account the physical interpolar spacing. The experimental motor has a total of eight symmetrically located stator poles used by a total of four phases (two poles per phase).

Fig 1 The variation of flux-linkage with current (curve labels: 1 A, 2 A, 4 A, 5 A)
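The integration scheme described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `current_model` is a hypothetical analytic stand-in for the trained i(ψ,θ) network, a simple Euler step replaces the Runge-Kutta method used in the paper, and all numerical values are placeholders.

```python
import numpy as np

# Hypothetical stand-in for the trained (psi/theta/i) model: given rotor
# position (rad) and flux linkage (Wb), return phase current (A). A crude
# saturating analytic curve is used purely for illustration.
def current_model(theta, psi):
    L = 0.01 + 0.04 * (1 + np.cos(theta)) / 2   # position-dependent inductance (H)
    return np.sinh(psi / 0.05) * 0.05 / L        # mildly saturating i(psi) shape

def integrate_flux(v_dc, resistance, theta_of_t, dt, steps):
    """Euler integration of d(psi)/dt = V - R*i, querying i(psi, theta)
    after each step -- the reason the model must be in i(psi, theta) form."""
    psi, i = 0.0, 0.0
    history = []
    for n in range(steps):
        psi += dt * (v_dc - resistance * i)      # Eq (1) rearranged for d(psi)/dt
        i = current_model(theta_of_t(n * dt), psi)
        history.append((psi, i))
    return history

hist = integrate_flux(v_dc=200.0, resistance=2.56,
                      theta_of_t=lambda t: 60.0 * t, dt=1e-5, steps=100)
```

Each step first advances the flux from the voltage equation and only then inverts the magnetisation data for the current, which is exactly why the i(ψ,θ) form is needed.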
NEURAL NETWORK MODELLING OF THE SRM
The ability to learn and adapt, generalisation, modest information requirements, fast real-time operation and ease of implementation have made ANNs popular in the last few years. ANNs have been applied in many areas [7, 11]. Dynamic system modelling, identification and control using ANNs are particularly promising [7, 8, 10, 11]. As a result, the modelling of the SRM has been carried out using Backpropagation (BP) [9], which is the most popular algorithm in the arena of neural networks.
Backpropagation The standard backpropagation of Rumelhart and McClelland has been demonstrated on various problems [7, 8, 9, 10, 11]. The reasons for using this algorithm are that its structure is well understood and its recent successful applications are encouraging. The algorithm consists of a number of processing elements (PEs), a transfer function for each PE in the layers, a number of connections between the layers (at least three layers: an input, a hidden and an output layer) and a learning rule, which is the generalised delta rule.

This rule is simple and gives a prescription for changing the weights (wij) in any feedforward network to map the input-output pairs. The change is based on gradient descent and relies on propagating an error occurring at an output PE backwards, towards the input layer, through the PEs in the hidden layers according to their errors. The output is calculated when a set of inputs is entered to the input layer; the calculation direction is from the input layer towards the output layer via the hidden layers.

The number of PEs in the input layer is equal to the number of inputs in each input pattern, and each of these PEs receives one of the inputs. The number of PEs in the output layer is equal to the number of outputs of the network. The number of PEs in the hidden layer is at the discretion of the network designer; there is no clear rule for choosing it.

Fig 2 shows the topology of the NN with biases. Generally, the weights between the layers are initialised with small random values. This ensures that the network trains and functions more easily. It is important that the weights do not all start with the same value, so that nonsymmetric weights can be obtained for the internal representations. During training, the feedforward computation and the adjustments to the weights based upon the error are determined. During recall, only the feedforward computation takes place.
Fig 2 Topology of the backpropagation neural network (input layer, hidden layer(s) and output layer, with bias units and weighted connections between layers)
A simple training and recall chart is given in Fig 3. It shows the sequence of the training and recall procedure of backpropagation. For some applications, more than one hidden layer is used.
Fig 3 Training and recall flow chart of the backpropagation algorithm (training: initialise the weights randomly, present the input set to the input layer, calculate the output through the PEs and adjust the weights using gradient descent until the error is acceptable; testing: present either the training or the testing set to the network and calculate the output through the PEs until the set is completed)
In the feedforward computation, an input set passes from the input layer onto the hidden layer. Each PE in the layer calculates a weighted sum of its inputs, passes the sum through its activation function and presents the activation value to the following layer; this simply describes how a PE works. At this stage of training, X represents the input vector (Position and Flux) and C represents the desired output vector (Current). BP is briefly reviewed here. If the network contains n inputs and m outputs, X and C are given by:

X = (x1, x2, ..., xn),  C = (c1, c2, ..., cm)   (2)

If Cnet is the output vector of the neural network, the aim of BP is to minimise the error between the output of the system (C) and the output of the network (Cnet). This error is considered as a function of the connection weights.
When the examples X and C are presented to the net, the output of the j-th PE in the k-th layer is calculated as follows. First, the inputs are multiplied by the related weights and summed:

net_j^k = Σ_{i=0}^{n} w_ji^k c_i^(k-1)   (3)

Second, the output of the j-th PE in the k-th layer is calculated as a function of net_j:

c_j^k = f^k(net_j^k)   (4)

where f is a transfer or threshold function. The transfer function used in the training of this work is the hyperbolic tangent:

f(net_j^k) = (e^(net_j^k) - e^(-net_j^k)) / (e^(net_j^k) + e^(-net_j^k))   (5)

where net_j^k is the sum of the weighted inputs for the j-th PE in the k-th layer, with any threshold absorbed into the i = 0 (bias) term of Eq (3). The activation function is a 'smoothed' form of the threshold function; the function used in a backpropagation network should be monotonically increasing and continuously differentiable, such as the hyperbolic tangent. It should be noted that not all layers use the hyperbolic tangent given in Eq (5): the input and output layers use a linear activation function.
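The per-PE computation of Eqs (3)-(5) can be sketched in a few lines. This is an illustrative snippet only; the weight values are arbitrary and the bias is carried by a fixed +1 input in position 0, as in Fig 2.

```python
import numpy as np

# Minimal sketch of Eqs (3)-(5) for one processing element (PE):
# a weighted sum of inputs followed by a hyperbolic-tangent transfer
# function. The i = 0 term carries the bias (input fixed at +1).
def pe_output(weights, inputs):
    x = np.concatenate(([1.0], inputs))   # prepend the bias input
    net = np.dot(weights, x)              # Eq (3): net_j = sum_i w_ji * c_i
    return np.tanh(net)                   # Eqs (4)-(5): c_j = f(net_j)

out = pe_output(np.array([0.1, 0.5, -0.3]), np.array([0.2, 0.4]))
```

An input or output PE would simply return `net` unchanged, matching the linear activation noted above.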
The output obtained is used to feed the PEs in the next layer as their inputs. This process continues until the output layer is reached. When the feedforward process has completed, the backpropagation starts. A backpropagation net learns by making changes in its weights in a direction that minimises the error between a desired value and its prediction. These changes are made using steepest descent, the generalised delta rule. Assume that there are s input/output pairs, x and c, available for training the network. After presentation of a pair of s, the weights are changed as follows:
ω_ji^k(s) = ω_ji^k(s-1) + Δω_ji^k(s)   (6)

with Δω_ji^k(s) given by three equations when two hidden layers are considered.

Hidden-to-output weights:

Δω_ji^k(s) = η c_i^(k-1)(s) f'(net_j^k(s)) [c_j(s) - c_j^net(s)] + α Δω_ji^k(s-1)   (7)

where j is the index of a PE in the output layer and i the index of a PE in the second hidden layer.

Hidden-to-hidden weights:

Δω_ji^(k-1)(s) = η c_i^(k-2)(s) δ_j^(k-1)(s) + α Δω_ji^(k-1)(s-1)   (8)

where

δ_j^(k-1)(s) = f'(net_j^(k-1)(s)) Σ_{m=1}^{M} f'(net_m^k(s)) [c_m(s) - c_m^net(s)] ω_mj^k(s)   (9)

where j is the index of a PE in the second hidden layer, i the index of a PE in the first hidden layer and M is equal to the number of PEs in the output layer.

Hidden-to-input weights:

Δω_ji^(k-2)(s) = η c_i^(k-3)(s) δ_j^(k-2)(s) + α Δω_ji^(k-2)(s-1),
δ_j^(k-2)(s) = f'(net_j^(k-2)(s)) Σ_{m=1}^{M'} δ_m^(k-1)(s) ω_mj^(k-1)(s)   (10)

where j is the index of a PE in the first hidden layer, i the index of a PE in the input layer and M' is equal to the number of PEs in the second hidden layer, and where:

η : learning coefficient,
α : momentum coefficient,
Δω_ji^k(s) : delta weight from the i-th to the j-th unit in the k-th layer,
Δω_ji^k(s-1) : previous delta weight of the k-th layer.
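The update rules above can be sketched end to end. This is a hedged reconstruction, not the authors' code: the 2x8x8x1 layer sizes, η = α = 0.018 and the uniform [-0.1, 0.1] initialisation with seed 1 follow the text, but the training pairs here are synthetic placeholders rather than the measured SRM data.

```python
import numpy as np

# Sketch of the generalised delta rule with momentum, Eqs (6)-(10), for a
# 2x8x8x1 network (inputs: position and flux; output: current).
rng = np.random.default_rng(1)
sizes = [2, 8, 8, 1]
eta, alpha = 0.018, 0.018

# Uniform initial weights in [-0.1, 0.1]; column 0 of each matrix is the bias.
W = [rng.uniform(-0.1, 0.1, (sizes[k + 1], sizes[k] + 1)) for k in range(3)]
dW_prev = [np.zeros_like(w) for w in W]

def forward(x):
    acts = [x]
    for k, w in enumerate(W):
        net = w @ np.concatenate(([1.0], acts[-1]))
        acts.append(np.tanh(net) if k < 2 else net)   # linear output layer
    return acts

def train_pair(x, c):
    acts = forward(x)
    delta = c - acts[-1]                               # output-layer error
    for k in range(2, -1, -1):                         # backpropagate, Eqs (7)-(10)
        grad = np.outer(delta, np.concatenate(([1.0], acts[k])))
        dW = eta * grad + alpha * dW_prev[k]           # Eq (6) with momentum
        delta = (W[k][:, 1:].T @ delta) * (1.0 - acts[k] ** 2)  # hidden deltas
        W[k] += dW
        dW_prev[k] = dW

# Synthetic placeholder data: learn c ~ psi * (1 + cos(pi * theta)).
X = rng.uniform(0, 1, (200, 2))
C = (X[:, 1] * (1 + np.cos(np.pi * X[:, 0])))[:, None]
for epoch in range(200):
    for x, c in zip(X, C):
        train_pair(x, c)
err = np.mean([(forward(x)[-1] - c) ** 2 for x, c in zip(X, C)])
```

The delta for each hidden layer is computed with the pre-update weights of the layer above, matching the layer-by-layer form of Eqs (7)-(10).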
After the first pair, the rest of the input set is applied to the network. The weights of the network were initialised with values distributed uniformly between -0.1 and +0.1; the selected random seed was 1.
Training and testing of the neural network The first and usually longest step in this work was to collect data from the system. The data set used in training was obtained experimentally from the SRM. Obtaining accurate data is important for training the networks accurately and is generally the most critical factor for success. It must be possible to gather an adequate sample of characteristic data so that the networks can learn efficiently; otherwise, it may be hard or infeasible to train a neural network.

During training, a reasonable strategy is to start with a few hidden nodes and increase their number while monitoring generalisation by testing at each epoch. The most common index of generalisation for BP is the mean squared error, calculated by squaring each error, summing the squares, then averaging the sum over the number of outputs and data patterns. A good technique for preventing overtraining is to stop training when the improvement of the mean squared error stops. After a successful training, the neural network model replaces the SRM system. The neural network is here a part of a larger application, within which it acts like a callable function: the application passes a set of input values to the neural network model and the model produces the phase current.
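The stopping rule just described can be sketched as follows. `train_epoch` and `mse_on_test` are hypothetical callables standing in for the actual training and evaluation routines; the tolerance and epoch limit are illustrative.

```python
# Track the mean squared error over the test set each epoch and stop
# training once the improvement falls below a small tolerance.
def train_with_early_stop(train_epoch, mse_on_test, max_epochs=2000, tol=1e-6):
    best = float("inf")
    for epoch in range(max_epochs):
        train_epoch()
        mse = mse_on_test()
        if best - mse < tol:          # improvement has stopped
            return epoch, best
        best = mse
    return max_epochs, best

# Toy demonstration: an "MSE" sequence that decays and then flattens.
errors = iter([1.0, 0.5, 0.25, 0.2499999, 0.2499998])
epoch, best = train_with_early_stop(lambda: None, lambda: next(errors))
```

In the toy run the loop halts at the first epoch whose improvement is below `tol`, returning the best error seen so far.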
Backpropagation network used in modelling The backpropagation network used in modelling is shown in Fig 4 as a block diagram. This structure was used for the training and testing processes. After several training runs, it was found that a network with two hidden layers achieved the mapping task with high accuracy. Both the learning and momentum coefficients were 0.018 and the number of epochs was 2000 for training. The most suitable network configuration found was 2x8x8x1.
Fig 4 Modelling the SRM using an artificial neural network (inputs: Position(s) and Flux(s); output: Current(s))
Calculation methods The analysis now proceeds by solving the characteristic differential equations for each topological mode, using the more accurate model for the (ψ/θ/i) variations discussed above. Since the four phases of the SRM are identical to each other, only one model for (ψ/θ/i) is sufficient and is used for the other phases. Before going deeper into the calculation methods, it is necessary to explain the torque production mechanism of the SRM. The torque produced by a switched reluctance motor is proportional to the rate of change of coenergy as the rotor moves from one position to another. The most general expression for the instantaneous torque for one phase is:
Te = ∂W'/∂θ |_(i = const)   (11)

where Te is the torque, θ is the rotor position and W' is the coenergy, given by

W' = ∫_0^i ψ(i,θ) di   (12)

The coenergy is a function of both rotor position and excitation current; hence, when evaluating the partial derivative, it is necessary to keep the indicated variable constant.
The method of calculation is as follows. Initially, the values of the phase flux (ψ), the operating phase pole position (θ) and the phase current (i) are set to zero for the operating phase. Eq (1) is then solved numerically using the Runge-Kutta numerical integration method. This yields a new value for ψ. The program then refers to the derived (ψ/θ/i) NN algorithm to find a value for the phase current (i). Since steady state conditions are assumed, the speed is constant. As the time constant of the mechanical system is much larger than the electrical time constant, the phase current and the phase flux can be accepted as constant between two integration intervals. Since the speed is constant at a given value, the rotor position (θ) can replace time as the independent variable.

The whole process is then repeated for the new values of θ and ψ. The accuracy of the results thus depends on the modelling of the flux-linkage/current relationship.
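The torque calculation of Eqs (11) and (12) can be sketched numerically. This is an illustrative approximation only: `flux_model` is a hypothetical linear (unsaturated) stand-in for the trained NN, the coenergy integral is evaluated by the trapezoidal rule and the position derivative by a central difference.

```python
import numpy as np

# Hypothetical analytic stand-in for the trained flux-linkage model.
def flux_model(i, theta):
    L = 0.01 + 0.04 * (1 + np.cos(theta)) / 2   # position-dependent inductance (H)
    return L * i                                 # linear (unsaturated) for clarity

def coenergy(i, theta, n=200):
    # Eq (12): W' = integral of psi over current at fixed position
    ii = np.linspace(0.0, i, n)
    psi = flux_model(ii, theta)
    return float(np.sum(0.5 * (psi[1:] + psi[:-1]) * np.diff(ii)))

def torque(i, theta, dtheta=1e-5):
    # Eq (11): Te = dW'/dtheta at constant current (central difference)
    return (coenergy(i, theta + dtheta) - coenergy(i, theta - dtheta)) / (2 * dtheta)

# For this linear model W' = 0.5 * L(theta) * i^2, so analytically
# Te = 0.5 * dL/dtheta * i^2 = -0.5 * 0.02 * sin(theta) * i^2.
t = torque(5.0, np.pi / 2)   # analytic value: -0.25 N·m
```

With the NN model in place of `flux_model`, the same two numerical steps recover the torque from the predicted flux-linkage/current relationship.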
COMPARISON BETWEEN SIMULATION AND
EXPERIMENTAL RESULTS
To explore the effectiveness of this technique, both computer simulation and practical experimental work have been carried out. As indicated by Eqs (11) and (12), the torque produced is based solely on the flux linkage/current relationship. This suggests that if the phase current is predicted correctly then the torque is also known. Thus, a comparison between the current waveforms from simulation and experiment should give sufficient evidence.

Fig 5 shows the variation of flux-linkage with current along with the NN results. These results demonstrate the strong potential of the NN applied to the SRM. Fig 6 illustrates a simulation result and an actual measurement obtained with a data acquisition board. As seen from Fig 6, there is generally good agreement between the simulation and experimental results.
Fig 5 The variation of flux-linkage with current along with NN results (training data and NN results plotted against rotor position in degrees)
Fig 6 Current waveforms from simulation and experimental measurement: a) low speed, b) high speed
RESULTS
Figs 7-10 show the simulation results for the SRM obtained by the proposed method. The motor was excited by a split DC source converter (SDCSC). The following values were used for the simulations: a DC link voltage of 200 V, a per-phase resistance of 2.56 Ω, an input filter inductance of 240 µH and two input filter capacitors of 1000 µF (each).
Fig 7 Phase current, flux and torque waveforms at 60 rad/s motor speed
Fig 8 Coenergy at 60 rad/s motor speed
Fig 9 Phase current, flux and torque waveforms at 120 rad/s motor speed
Fig 10 Coenergy at 120 rad/s motor speed
CONCLUSIONS
The simulation results were verified through experimental results, and the ANN model was shown to be reasonably accurate. The advantages of the model developed here are that no a priori knowledge (model or equation) is required, reduced mathematical complexity, and fast operation after training. However, it should be emphasised that, during development, the collection of a data set from which the network can learn efficiently is critical. The training period usually takes a long time.
REFERENCES
[1] Stephenson, J. M. and Corda, J., 1979, Proc. IEE, Vol. 126, No. 5.
[2] Miller, T. J. E. and McGilp, M., 1990, Proc. IEE, Vol. 137, Pt. B, No. 6.
[3] Pulle, D. W. J., 1991, Proc. IEE, Vol. 138, Pt. B, No. 6.
[4] Elmas, Ç. and Zelaya de la Parra, H., 1992, PESC'92, Vol. 2, 844-849.
[5] Torrey, D. A. and Lang, J. H., 1990, Proc. IEE, Vol. 137, Pt. B, No. 5.
[6] Lindsay, J. F., Arumugam, R. and Krishnan, R., 1986, Proc. IEE, Vol. 133, Pt. B, No. 6.
[7] Pham, D. T. and Sağıroğlu, Ş., 1992, The First Turkish Symposium on Artificial Intelligence and Neural Networks, Ankara, Turkey.
[8] Narendra, K. S. and Parthasarathy, K., 1990, IEEE Trans. on Neural Networks, 1(1), 4-27.
[9] Rumelhart, D. E. and McClelland, J. L., 1986, "Parallel Distributed Processing", Vol. 1, The MIT Press, Cambridge.
[10] Chen, S., Billings, S. A. and Grant, P. M., 1990, Int. J. Control, Vol. 51, No. 6, 230-241.
[11] Maren, A., Harston, C. and Pap, R., 1990, "Handbook of Neural Computing Applications", Academic Press, London, ISBN 0-12-471260-6.
[12] Miller, T. J. E., 1990, IEEE Trans. on Ind. Appl., Vol. IA-21, No. 5.
[13] Stephenson, J. M. and El-Khazendar, M. A., 1989, Proc. IEE, Vol. 136, Pt. B, No. 1.
[14] Elmas, Ç., Sağıroğlu, Ş., Çolak, İ. and Bal, G., 1994, MELECON'94, Part 2, 809-812.