I. Introduction to the BP neural network prediction algorithm
Note: Section 1.1 summarizes the training principle of a conventional BP neural network model driven by influencing factors; readers already familiar with BP can skip it. Section 1.2 introduces the BP neural network prediction model driven by historical values.
When a BP neural network is used for prediction, there are two main types of model, distinguished by their input indicators:
1.1 Principle of the BP neural network algorithm driven by related indicators
As shown in Figure 1, a BP network trained with MATLAB's newff function is in most cases a three-layer neural network (input layer, hidden layer and output layer). The following analogies help explain how such a network works:
1) Input layer: analogous to the human senses. The senses receive external information; correspondingly, the input layer receives the data fed into the neural network model.
2) Hidden layer: analogous to the human brain, which analyzes the data passed in by the senses. The hidden layer maps the data x received from the input layer, which can be summarized by the formula hiddenLayer_output = F(w*x + b). Here w and b are the weight and threshold (bias) parameters, F() is the mapping rule, also known as the activation function, and hiddenLayer_output is the hidden layer's output for the incoming data. In other words, the hidden layer maps the influencing-factor data x into a new representation.
3) Output layer: analogous to the human limbs. After the brain has processed the sensory information (the hidden-layer mapping), it directs the limbs to act (respond to the outside world). Similarly, the output layer maps hiddenLayer_output once more: outputLayer_output = w*hiddenLayer_output + b, where w and b are again weight and threshold parameters, and outputLayer_output is the output value of the network (also called the simulated or predicted value) — think of it as the action the brain commands, such as a baby slapping a table.
4) Gradient descent: the algorithm computes the deviation between outputLayer_output and the target value y supplied to the model, and adjusts the weights, thresholds and other parameters accordingly. Picture the baby slapping at the table and missing: it corrects its body according to how far it missed, so that each new swing of the arm lands closer to the table until it finally hits the target.
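The four steps above can be condensed into a short, self-contained sketch — in Python/NumPy rather than the article's MATLAB, with network sizes, data and learning rate chosen arbitrarily for illustration:

```python
import numpy as np

# Minimal sketch of the three-layer BP pass described above.
# Sizes, data and the learning rate are arbitrary illustrations.
rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 3, 5, 1
W1 = rng.normal(size=(n_hidden, n_in));  b1 = np.zeros((n_hidden, 1))
W2 = rng.normal(size=(n_out, n_hidden)); b2 = np.zeros((n_out, 1))

def sigmoid(z):                          # the activation function F()
    return 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=(n_in, 1))           # influencing-factor input
y = np.array([[0.5]])                    # target value

lr = 0.1
for _ in range(500):
    # forward: hiddenLayer_output = F(W1*x + b1)
    h = sigmoid(W1 @ x + b1)
    # outputLayer_output = W2*h + b2 (linear output layer)
    out = W2 @ h + b2
    err = out - y                        # deviation from the target

    # backward: gradient descent on the weights and thresholds
    dW2 = err @ h.T;                db2 = err
    dh  = W2.T @ err * h * (1 - h)       # sigmoid derivative h*(1-h)
    dW1 = dh @ x.T;                 db1 = dh
    W2 -= lr * dW2;  b2 -= lr * db2
    W1 -= lr * dW1;  b1 -= lr * db1

final_err = abs((W2 @ sigmoid(W1 @ x + b1) + b2 - y).item())
print(final_err)                         # shrinks toward 0 as training proceeds
```

With each iteration the "arm" lands closer to the "table": the output drifts toward the target as w and b are updated.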
Another example to deepen understanding:
The BP neural network shown in Figure 1 has an input layer, a hidden layer and an output layer. How does this three-layer structure make the output value outputLayer_output approach the given y values ever more closely, so that training yields an accurate model?
The station-by-station structure in the figure suggests a subway analogy: picture Figure 1 as a subway line. One day Wang takes the subway home: he boards at the starting station (the input), passes through several intermediate stations (the hidden layers), and then finds he has ridden too far (outputLayer_output corresponds to his current position). Based on the distance between his current position and home (the error), Wang goes back to an intermediate station and rides again (the error is propagated backwards, and gradient descent updates w and b). If he overshoots again, the adjustment is repeated.
Both examples — the baby slapping the table and Wang riding the subway — illustrate the complete training loop of BP: input the data, map it through the hidden layer, obtain the simulated value at the output layer, and adjust the parameters according to the error between the simulated value and the target value, so that the simulated value keeps approaching the target. For instance: (1) the baby responds to external factors (x), and the brain keeps adjusting the arms to control the accuracy of the limbs (y, the target); (2) Wang starts from the boarding point (x), rides past his stop (the prediction), and repeatedly returns to intermediate stations to adjust his position until he gets home (y, the target).
These steps involve the influencing-factor data x and the target data y. The job of the BP algorithm is to find, from x and y, the law relating them — a mapping from x that approximates y. Everything described so far is model training. But even when training converges, is the resulting BP network accurate and reliable? To find out, we feed a held-out input x1 into the trained network to obtain the corresponding output (predicted value) predict1, then compare predict1 with the true value y1 by plotting them and computing indicators such as MSE, MAPE and R. This is the testing stage of a BP model: predict on new data and compare against the actual values to check whether the predictions are accurate.
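The testing-stage comparison can be sketched as follows; the metric helpers and the y1/predict1 arrays below are made-up illustrations, not values from the article:

```python
import numpy as np

# Compare a model's predictions predict1 against held-out targets y1
# using the three indicators mentioned above: MSE, MAPE and R.
def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

def mape(y_true, y_pred):            # percent error; assumes y_true has no zeros
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

def pearson_r(y_true, y_pred):       # linear correlation between truth and prediction
    return np.corrcoef(y_true, y_pred)[0, 1]

y1       = np.array([10.0, 12.0, 11.0, 13.0])   # illustrative true values
predict1 = np.array([ 9.8, 12.3, 11.1, 12.7])   # illustrative BP outputs

print(mse(y1, predict1), mape(y1, predict1), pearson_r(y1, predict1))
```

Small MSE/MAPE and an R close to 1 indicate that predict1 tracks y1 closely, i.e. the trained model generalizes.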
Fig. 1 Three-layer BP neural network structure
1.2 BP neural network driven by historical values
Taking power load forecasting as an example, the two model types can be distinguished as follows. When forecasting the power load over a certain period:
One approach is to predict the load value at time t from climate indicators at time t, such as the air humidity x1, temperature x2 and holiday indicator x3 at that moment. This is the model described in Section 1.1 above.
The other approach assumes that the load value changes with time: for example, that the load values at times t-1, t-2 and t-3 are related to the load at time t, i.e. they satisfy y(t) = F(y(t-1), y(t-2), y(t-3)). When a BP neural network is trained for this model, the influencing factors fed to the network are the historical load values y(t-1), y(t-2), y(t-3); here 3 is called the autoregressive order or delay. The target output of the network is the value y(t).
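Building training samples for this history-driven model amounts to sliding a window of length 3 (the delay) over the load series. A minimal sketch, with a made-up series and a hypothetical helper `make_lagged`:

```python
import numpy as np

# Turn a load series into (X, y) pairs for the model
# y(t) = F(y(t-1), y(t-2), y(t-3)).  The series values are made up.
def make_lagged(series, delay=3):
    X, y = [], []
    for t in range(delay, len(series)):
        X.append(series[t - delay:t])   # [y(t-3), y(t-2), y(t-1)]
        y.append(series[t])             # target y(t)
    return np.array(X), np.array(y)

load = np.array([4.1, 4.3, 4.0, 4.5, 4.7, 4.4, 4.8])  # illustrative load series
X, y = make_lagged(load, delay=3)
print(X.shape, y.shape)   # 7 points with delay 3 -> 4 samples of 3 lags each
```

Each row of X is then an "influencing factor" vector for the BP network, exactly as the climate indicators were in the Section 1.1 model.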
II. Harris hawks optimization (HHO) algorithm
III. Partial code
```matlab
function [TrainingTime, TestingTime, TrainingAccuracy, TestingAccuracy] = elm_kernel(TrainingData, TestingData, Elm_Type, Regularization_coefficient, Kernel_type, Kernel_para)
% Usage: elm(TrainingData_File, TestingData_File, Elm_Type, NumberofHiddenNeurons, ActivationFunction)
% OR:    [TrainingTime, TestingTime, TrainingAccuracy, TestingAccuracy] = elm(TrainingData_File, TestingData_File, Elm_Type, NumberofHiddenNeurons, ActivationFunction)
%
% Input:
% TrainingData_File - Filename of training data set
%
% (Partial listing: the training phase, which defines P, T, Y, TV and
% OutputWeight, is omitted from this excerpt.)

tic;
Omega_test = kernel_matrix(P', Kernel_type, Kernel_para, TV.P');
TY = (Omega_test' * OutputWeight)';       % TY: the actual output of the testing data
TestingTime = toc

%%%%%%%%%% Calculate training & testing accuracy
if Elm_Type == REGRESSION
    % Training & testing accuracy (RMSE) for the regression case
    TrainingAccuracy = sqrt(mse(T - Y))
    TestingAccuracy = sqrt(mse(TV.T - TY))
end

if Elm_Type == CLASSIFIER
    % Training & testing classification accuracy
    MissClassificationRate_Training = 0;
    MissClassificationRate_Testing = 0;
    for i = 1 : size(T, 2)
        [x, label_index_expected] = max(T(:, i));
        [x, label_index_actual]   = max(Y(:, i));
        if label_index_actual ~= label_index_expected
            MissClassificationRate_Training = MissClassificationRate_Training + 1;
        end
    end
    TrainingAccuracy = 1 - MissClassificationRate_Training / size(T, 2)
    for i = 1 : size(TV.T, 2)
        [x, label_index_expected] = max(TV.T(:, i));
        [x, label_index_actual]   = max(TY(:, i));
        if label_index_actual ~= label_index_expected
            MissClassificationRate_Testing = MissClassificationRate_Testing + 1;
        end
    end
    TestingAccuracy = 1 - MissClassificationRate_Testing / size(TV.T, 2)
end

%%%%%%%%%%%%%%%%%% Kernel Matrix %%%%%%%%%%%%%%%%%%
function omega = kernel_matrix(Xtrain, kernel_type, kernel_pars, Xt)
nb_data = size(Xtrain, 1);
if strcmp(kernel_type, 'RBF_kernel')
    if nargin < 4
        XXh = sum(Xtrain.^2, 2) * ones(1, nb_data);
        omega = XXh + XXh' - 2 * (Xtrain * Xtrain');
        omega = exp(-omega ./ kernel_pars(1));
    else
        XXh1 = sum(Xtrain.^2, 2) * ones(1, size(Xt, 1));
        XXh2 = sum(Xt.^2, 2) * ones(1, nb_data);
        omega = XXh1 + XXh2' - 2 * Xtrain * Xt';
        omega = exp(-omega ./ kernel_pars(1));
    end
elseif strcmp(kernel_type, 'lin_kernel')
    if nargin < 4
        omega = Xtrain * Xtrain';
    else
        omega = Xtrain * Xt';
    end
elseif strcmp(kernel_type, 'poly_kernel')
    if nargin < 4
        omega = (Xtrain * Xtrain' + kernel_pars(1)).^kernel_pars(2);
    else
        omega = (Xtrain * Xt' + kernel_pars(1)).^kernel_pars(2);
    end
elseif strcmp(kernel_type, 'wav_kernel')
    if nargin < 4
        XXh = sum(Xtrain.^2, 2) * ones(1, nb_data);
        omega = XXh + XXh' - 2 * (Xtrain * Xtrain');
        XXh1 = sum(Xtrain, 2) * ones(1, nb_data);
        omega1 = XXh1 - XXh1';
        omega = cos(kernel_pars(3) * omega1 ./ kernel_pars(2)) .* exp(-omega ./ kernel_pars(1));
    else
        XXh1 = sum(Xtrain.^2, 2) * ones(1, size(Xt, 1));
        XXh2 = sum(Xt.^2, 2) * ones(1, nb_data);
        omega = XXh1 + XXh2' - 2 * (Xtrain * Xt');
        XXh11 = sum(Xtrain, 2) * ones(1, size(Xt, 1));
        XXh22 = sum(Xt, 2) * ones(1, nb_data);
        omega1 = XXh11 - XXh22';
        omega = cos(kernel_pars(3) * omega1 ./ kernel_pars(2)) .* exp(-omega ./ kernel_pars(1));
    end
end
```
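The RBF branch of kernel_matrix above relies on the identity ||a-b||² = ||a||² + ||b||² - 2·a·b to compute all pairwise squared distances at once. A NumPy rendering of that branch (gamma stands in for kernel_pars(1), matching the code's exp(-d²/parameter) convention):

```python
import numpy as np

# RBF kernel matrix via the pairwise-distance trick used in kernel_matrix.
def rbf_kernel_matrix(Xtrain, Xt=None, gamma=1.0):
    if Xt is None:                               # nargin < 4 case: K(Xtrain, Xtrain)
        Xt = Xtrain
    sq1 = np.sum(Xtrain ** 2, axis=1)[:, None]   # ||a||^2 as a column
    sq2 = np.sum(Xt ** 2, axis=1)[None, :]       # ||b||^2 as a row
    d2 = sq1 + sq2 - 2.0 * Xtrain @ Xt.T         # all pairwise squared distances
    return np.exp(-d2 / gamma)

X = np.array([[0.0, 0.0],
              [1.0, 0.0]])
K = rbf_kernel_matrix(X, gamma=1.0)
print(K)   # diagonal entries are 1; off-diagonal entries are exp(-1)
```

The broadcasted row/column sums play the same role as the `XXh*ones(1,n)` products in the MATLAB code.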
IV. Simulation results
Fig. 2 Convergence curve of the Harris hawks optimization algorithm
The test statistics are shown in the table below:
| Model | Test set accuracy | Training set accuracy |
|---|---|---|
| BP neural network | 100% | 95% |
| HHO-BP | 100% | 99.8% |