[Optimal dispatching] Particle swarm optimization to solve the minimum power-purchase problem in optimal dispatching of hydropower plants [Matlab issue 1234]

1, Introduction to particle swarm optimization

1 Introduction
The group behavior of birds and fish in nature has long interested scientists. In 1987, biologist Craig Reynolds proposed a very influential flocking model. In his simulation, each individual follows three rules: avoid collision with neighboring individuals; match the velocity of neighboring individuals; fly toward the center of the flock, with the whole flock flying toward a target. These three simple rules alone were enough to closely reproduce the phenomenon of birds flying in formation. In 1990, biologist Frank Heppner proposed another bird model; the difference is that his birds are attracted to fly to a roost. In his simulation, each bird initially has no specific flight target and uses simple rules to determine its flight direction and speed; when one bird flies to the roost, the birds around it fly there too, and finally the whole flock lands at the roost.
In 1995, American social psychologist James Kennedy and electrical engineer Russell Eberhart jointly proposed particle swarm optimization (PSO), inspired by this research on modeling and simulating bird flocking behavior. Their model and simulation algorithm mainly modify Frank Heppner's model so that the particles fly through the solution space and land on the optimal solution. Because the algorithm is simple and easy to implement, it immediately attracted extensive attention from scholars in the field of evolutionary computation and became a research hotspot. The book Swarm Intelligence, published by J.Kennedy and R.Eberhart in 2001, further expanded the influence of swarm intelligence [], and a large number of research reports and results on particle swarm optimization followed, setting off a research upsurge at home and abroad [2-7].
Particle swarm optimization starts from the regularities of bird flock activity and uses swarm intelligence to build a simplified model. It simulates the foraging behavior of birds: the search space of the problem is compared to the flight space of the flock, and each bird is abstracted as a particle without mass or volume. The process of finding the optimal solution of the problem is regarded as the process of the birds looking for food, and a complex optimization problem is solved in this way. Like other evolutionary algorithms, particle swarm optimization is based on the concepts of "population" and "evolution", and realizes the search for the optimal solution in a complex space through the cooperation and competition of individuals.
Unlike other evolutionary algorithms, however, it does not apply evolutionary operators such as crossover, mutation and selection to individuals. Instead, it regards the individuals in the population as particles without mass or volume in a D-dimensional search space. Each particle moves through the solution space at a certain velocity and is drawn toward its own historical best position pbest and the neighborhood's historical best position gbest, thereby realizing the evolution of candidate solutions. Particle swarm optimization has a sound biological and social background and is easy to understand. With few parameters and a simple implementation, it has strong global search ability on nonlinear, multimodal problems, and has attracted extensive attention in scientific research and engineering practice. The algorithm is now widely used in function optimization, neural network training, pattern classification, fuzzy control and other fields.

2 Particle swarm optimization theory
2.1 Description of the particle swarm optimization algorithm
During predation, members of a bird flock can obtain the discoveries and flight experience of other members through information exchange and sharing among individuals. When food sources are sporadically and unpredictably distributed, the advantage brought by this cooperation mechanism is decisive, far outweighing the disadvantage caused by competition for food. Particle swarm optimization is inspired by and simulates this predation behavior of birds. The search space of the optimization problem is compared to the flight space of the flock; each bird is abstracted as a particle without mass or volume, representing one feasible solution of the problem; and the optimal solution to be found corresponds to the food source the birds seek. Particle swarm optimization gives each particle simple behavior rules resembling bird motion, so that the motion of the whole swarm shows characteristics similar to bird predation, and complex optimization problems can thus be solved.
The information sharing mechanism of particle swarm optimization can be interpreted as a symbiotic, cooperative behavior: each particle searches continuously, and its search behavior is affected to varying degrees by the other individuals in the group [8]. At the same time, each particle remembers the best position it has experienced, meaning its search behavior is guided not only by the other individuals but also by its own experience. Based on this distinctive search mechanism, particle swarm optimization first generates the initial population by randomly initializing the velocities and positions of the particles in the feasible solution space and velocity space, where the position of a particle represents a feasible solution of the problem, and then solves the optimization problem through the cooperation and competition of the individual particles in the population.
2.2 Particle swarm optimization modeling
Particle swarm optimization derives from the study of bird predation behavior: a flock of birds searches randomly for food in an area. All birds know how far their current position is from the food, so the simplest effective strategy is to search the area around the bird currently nearest to the food. Particle swarm optimization draws on this model and applies it to optimization problems. In particle swarm optimization, each potential solution of the optimization problem is a bird in the search space, called a particle. All particles have a fitness value determined by the function being optimized, and each particle has a velocity that determines the direction and distance of its flight. The particles then follow the current optimal particle to search through the solution space [9].

First, particle swarm optimization randomly initializes the particle swarm in a given solution space, whose dimension is determined by the number of variables of the problem to be optimized. Each particle has an initial position and initial velocity, and the swarm is then optimized iteratively. In each iteration, every particle updates its position and velocity in the solution space by tracking two "extreme values": one is the best solution found by the particle itself during the iterations, called the individual extreme value; the other is the best solution found by all particles in the population during the iterations, called the global extreme value. The method just described is the global version of particle swarm optimization. If, instead of all particles in the population, only a subset is used as a particle's neighbors, then the extreme value among all neighbor particles is the local extreme value, and the method is called the local version of particle swarm optimization.
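As an illustration of the local version, here is a minimal MATLAB sketch (all variable names are hypothetical, not taken from the source code below) that picks each particle's neighborhood best from a ring of three neighbors:

N = 10;                                  % swarm size
pvaluer = rand(N, 1);                    % personal-best fitness values
lbest = zeros(N, 1);                     % index of each particle's local best
for i = 1:N
    ring = mod([i-2, i-1, i], N) + 1;    % neighbors i-1, i, i+1 (wrapped)
    [~, k] = min(pvaluer(ring));         % best personal fitness in the ring
    lbest(i) = ring(k);                  % this particle's local extreme value
end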

2.3 Characteristics of the particle swarm optimization algorithm
Particle swarm optimization is essentially a random search algorithm and a new intelligent optimization technique. The algorithm can converge to the global optimal solution with high probability, and practice shows that it is well suited to optimization in dynamic and multi-objective environments. Compared with traditional optimization algorithms, it has faster computing speed and better global search ability.
(1) Particle swarm optimization algorithm is an optimization algorithm based on swarm intelligence theory. It guides the optimization search through the swarm intelligence generated by the cooperation and competition among particles in the swarm. Compared with other algorithms, particle swarm optimization is an efficient parallel search algorithm.
(2) Both particle swarm optimization and the genetic algorithm initialize the population randomly, use fitness to evaluate the quality of individuals, and perform a certain amount of random search. However, particle swarm optimization directs its search according to the particles' own velocities, without the crossover and mutation of the genetic algorithm. Compared with evolutionary algorithms, particle swarm optimization retains the population-based global search strategy, but its velocity-displacement model is simple to operate and avoids complex genetic operations.
(3) Because each particle still maintains its individual extreme value at the end of the algorithm, that is, particle swarm optimization algorithm can not only find the optimal solution of the problem, but also get some better suboptimal solutions. Therefore, applying particle swarm optimization algorithm to scheduling and decision-making problems can give a variety of meaningful schemes.
(4) The unique memory of particle swarm optimization allows it to dynamically track the current search situation and adjust its search strategy. In addition, particle swarm optimization is not sensitive to population size: even when the population shrinks, performance does not degrade much.

3 Types of particle swarm optimization
3.1 Basic particle swarm optimization
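The body of this subsection, including the basic update equations that the text below cites as (6.5), did not survive extraction. The textbook-standard basic particle swarm update, with the numbering matching the later references, is:

$$v_{ij}(t+1) = v_{ij}(t) + c_1 r_1\left[p_{ij}(t) - x_{ij}(t)\right] + c_2 r_2\left[p_{gj}(t) - x_{ij}(t)\right] \tag{6.5}$$

$$x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1) \tag{6.6}$$

where $x_{ij}$ and $v_{ij}$ are the position and velocity of particle $i$ in dimension $j$, $p_{ij}$ is its individual historical best position (pbest), $p_{gj}$ is the global historical best position (gbest), $c_1$ and $c_2$ are the acceleration constants, and $r_1$, $r_2$ are independent uniform random numbers in $[0, 1]$.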

3.2 Standard particle swarm optimization algorithm
Two concepts are often used when studying particle swarm optimization. One is "exploration", which means that particles leave their original search trajectory to some extent and search in new directions, reflecting the ability to explore unknown regions, similar to global search. The other is "exploitation" (also translated as "development"), which means that particles continue to search with finer steps along their original trajectory, mainly referring to searching more finely the regions already found during exploration. Exploration deviates from the original optimization trajectory to find better solutions and represents the global search ability of the algorithm; exploitation uses a good solution and continues along the original trajectory to search for better ones, representing the local search ability of the algorithm. How to balance the local and global search abilities is very important to the solution process. In 1998, Shi Yuhui et al. proposed an improved particle swarm optimization algorithm with an inertia weight [10]. Because this algorithm ensures good convergence, it is regarded by default as the standard particle swarm optimization algorithm. Its evolution process is:
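(The original equation image is missing; the textbook-standard inertia-weight update that the following paragraph analyzes is:)

$$v_{ij}(t+1) = w\,v_{ij}(t) + c_1 r_1\left[p_{ij}(t) - x_{ij}(t)\right] + c_2 r_2\left[p_{gj}(t) - x_{ij}(t)\right] \tag{6.7}$$

with symbols as in equation (6.5) above and $w$ the inertia weight.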

In equation (6.7), the first part represents the particle's previous velocity and serves to guarantee the global convergence performance of the algorithm; the second and third parts give the algorithm its local convergence ability. The inertia weight w in equation (6.7) indicates how much of the original velocity is retained: when w is larger, the global convergence ability is strong and the local convergence ability is weak; when w is smaller, the local convergence ability is strong and the global convergence ability is weak.
When w = 1, equation (6.7) is exactly the same as equation (6.5), showing that the particle swarm optimization algorithm with inertia weight is an extension of the basic algorithm. Experimental results show that PSO converges faster when w is between 0.8 and 1.2, while for w > 1.2 the algorithm easily falls into local extrema.
In addition, w can be adjusted dynamically during the search: at the beginning of the algorithm, w can be given a large positive value and then gradually reduced, linearly, as the search proceeds. This ensures that at the start each particle can explore promising regions across the whole search space with large velocity steps, while in the later stage a smaller w ensures that particles search finely around the extreme points, giving the algorithm a higher probability of converging to the global optimum. Adjusting w in this way balances the global and local search abilities. The most widely used dynamic inertia weight is the linearly decreasing weight strategy proposed by Shi, whose expression is as follows:
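(The formula image is missing; Shi's linearly decreasing weight in its standard form is:)

$$w = w_{\max} - \left(w_{\max} - w_{\min}\right)\frac{t}{T_{\max}}$$

where $w_{\max}$ is the initial weight (typically 0.9), $w_{\min}$ the final weight (typically 0.4), $t$ the current iteration number and $T_{\max}$ the maximum number of iterations.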

3.3 Compression factor particle swarm optimization algorithm
Clerc et al. proposed using a constriction factor to control the final convergence of the system's behavior [11]. This method can effectively search different regions and obtain high-quality solutions. The velocity update formula of the compression factor method is:
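(The formula image is missing; Clerc's update in its standard form is:)

$$v_{ij}(t+1) = \varphi\left\{v_{ij}(t) + c_1 r_1\left[p_{ij}(t) - x_{ij}(t)\right] + c_2 r_2\left[p_{gj}(t) - x_{ij}(t)\right]\right\}$$

$$\varphi = \frac{2}{\left|2 - C - \sqrt{C^2 - 4C}\right|}, \qquad C = c_1 + c_2, \; C > 4$$

A common choice is $c_1 = c_2 = 2.05$, giving $C = 4.1$ and $\varphi \approx 0.729$.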

Experimental results show that, compared with the particle swarm optimization algorithm using an inertia weight, the particle swarm optimization algorithm with a compression factor converges faster.
3.4 Discrete particle swarm optimization
The basic particle swarm optimization algorithm is a powerful tool for searching function extrema over continuous domains. Following it, Kennedy and Eberhart proposed a discrete binary version of particle swarm optimization [12]. In this discrete method, the discrete problem space is mapped onto a continuous particle motion space, and the particle swarm algorithm is modified appropriately to solve the problem. The velocity and position update rules of the classical algorithm are retained in the computation. The values of the particles in state space are restricted to 0 and 1, and each component v_ij of the velocity represents the probability that the corresponding bit x_ij of the position takes the value 1. Therefore the update formula for v_ij in continuous particle swarm optimization remains unchanged, but pbest and gbest take values only in {0, 1}. The position update equation is expressed as follows:
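(The position update equation image is missing; the standard binary PSO rule is:)

$$S(v_{ij}) = \frac{1}{1 + e^{-v_{ij}}}, \qquad x_{ij} = \begin{cases} 1, & r < S(v_{ij}) \\ 0, & \text{otherwise} \end{cases}$$

where $r$ is a uniform random number in $[0, 1]$ drawn independently for each bit.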

4 Particle swarm optimization process
Based on the concepts of "population" and "evolution", particle swarm optimization algorithm realizes the search of optimal solution in complex space through individual cooperation and competition [13]. Its process is as follows:
(1) Initialize the particle swarm, including the population size N and the position x_i and velocity v_i of each particle.
(2) Calculate the fitness value fit[i] of each particle.
(3) For each particle, compare its fitness value fit[i] with its individual extreme value pbest(i); if fit[i] < pbest(i), replace pbest(i) with fit[i].
(4) For each particle, compare its fitness value fit[i] with the global extreme value gbest; if fit[i] < gbest, replace gbest with fit[i].
(5) Iteratively update the velocity v_i and position x_i of each particle.
(6) Handle the boundary conditions.
(7) Judge whether the termination condition of the algorithm is met: if so, end the algorithm and output the optimization result; otherwise, return to step (2).
The operation flow of particle swarm optimization algorithm is shown in Figure 6.1.
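Since the flowchart image is not reproduced here, the following self-contained MATLAB sketch of steps (1) to (7) may help. It minimizes the stand-in objective f(x) = sum(x.^2), not the power-purchase objective of the source code below, and all names and parameter values are illustrative:

N = 30; D = 5;                        % swarm size, problem dimension
wmax = 0.9; wmin = 0.4;               % inertia weight range
c1 = 1.5; c2 = 1.5;                   % acceleration constants
xmax = 10; vmax = 0.2*xmax;           % position and velocity limits
Tmax = 200;                           % maximum number of iterations
f = @(x) sum(x.^2, 2);                % stand-in fitness function

x = xmax*(2*rand(N,D) - 1);           % (1) initialize positions ...
v = vmax*(2*rand(N,D) - 1);           %     ... and velocities
pbest = x; pval = f(x);               % individual extreme values
[gval, g] = min(pval); gbest = pbest(g,:);   % global extreme value

for t = 1:Tmax
    w = wmax - (wmax - wmin)*t/Tmax;  % linearly decreasing weight
    v = w*v + c1*rand(N,D).*(pbest - x) ...          % (5) velocity update
            + c2*rand(N,D).*(repmat(gbest,N,1) - x);
    v = max(min(v, vmax), -vmax);     % (6) boundary handling: clamp velocity
    x = max(min(x + v, xmax), -xmax); %     position update, clamp position
    fit = f(x);                       % (2) fitness of each particle
    upd = fit < pval;                 % (3) update individual extreme values
    pval(upd) = fit(upd); pbest(upd,:) = x(upd,:);
    [gval, g] = min(pval); gbest = pbest(g,:);       % (4) global extreme value
end                                   % (7) stop criterion here: t > Tmax
gbest, gval                           % best position and fitness found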

5 Description of key parameters
In particle swarm optimization, the choice of the control parameters affects the performance and efficiency of the algorithm; how to choose appropriate control parameters to optimize performance is itself a complex optimization problem. In practice, the control parameters are usually selected according to the user's experience.
The control parameters of particle swarm optimization mainly include: the particle population size N, the inertia weight w, the acceleration constants c1 and c2, the maximum velocity Vmax, the stop criterion, the setting of the neighborhood structure, and the boundary condition handling strategy [14].
Particle population size N
The choice of particle population size depends on the specific problem, but the number of particles is generally set to 20 to 50. For most problems, 10 particles are already enough to achieve good results; for difficult problems or problems of particular types, the number of particles can be taken as 100 or 200. The larger the number of particles, the larger the region the algorithm searches and the easier it is to find the global optimal solution; of course, the longer the algorithm then runs.
Inertia weight w
The inertia weight w is a very important control parameter in the standard particle swarm optimization algorithm and can be used to control the exploitation and exploration abilities of the algorithm. Its size determines how much of the particle's current velocity is inherited. When the inertia weight is large, the global optimization ability is strong and the local optimization ability is weak; when it is small, the global optimization ability is weak and the local optimization ability is strong. The inertia weight is usually chosen as either a fixed weight or a time-varying weight. A fixed weight selects a constant as the inertia weight, which remains unchanged during evolution; it generally takes a value in [0.8, 1.2]. A time-varying weight sets a variation interval and gradually decreases the inertia weight in some manner during evolution; its selection involves the variation range and the rate of decrease. A fixed inertia weight keeps the particles' exploration and exploitation abilities constant, while a time-varying weight lets the particles have different exploration and exploitation abilities at different stages of evolution.
Acceleration constants c1 and c2
The acceleration constants c1 and c2 adjust the maximum step lengths in the directions of pbest and gbest respectively. They determine the influence of the particle's individual experience and of the group's experience on the particle's trajectory, and reflect the information exchange within the swarm. If c1 = c2 = 0, particles fly to the boundary at their current speed; they can then search only a limited region, so it is difficult to find the optimal solution. If c1 = 0, we get a "social-only" model: particles lack cognitive ability and have only group experience; convergence is fast, but the algorithm easily falls into local optima. If c2 = 0, we get a "cognition-only" model: there is no socially shared information and no information interaction between individuals, so a swarm of size N is equivalent to N particles each running on its own, and the probability of finding the optimal solution is small. One therefore generally sets c1 = c2, commonly c1 = c2 = 1.5, so that individual experience and group experience carry equally important weight and the final optimal solution is more accurate.
Maximum particle velocity Vmax
The particle velocity is limited on each dimension by a maximum value Vmax, which clamps the velocity to the range [-Vmax, +Vmax] and determines the granularity of the search in problem space. This value is generally set by the user. Vmax is a very important parameter: if it is too large, particles may fly past excellent regions; if it is too small, particles may be unable to explore sufficiently beyond locally optimal regions, getting trapped in local optima and unable to move far enough to reach a better position in space. Researchers have pointed out that setting Vmax is equivalent to adjusting the inertia weight, so Vmax is generally used only when initializing the population, that is, Vmax is set to the variation range of each dimensional variable, rather than being carefully selected and tuned.
Stop criteria
The maximum number of iterations, a required computation accuracy, or a maximum number of stagnation steps Δt of the optimal solution (or an acceptable satisfactory solution) is usually taken as the stop criterion, i.e. the termination condition of the algorithm. The stop criterion must be set according to the specific optimization problem, balancing solution time, optimization quality, search efficiency and other aspects of performance.
Setting of neighborhood structure
The global version of particle swarm optimization takes the whole population as the neighborhood of each particle; it converges fast, but sometimes falls into a local optimum. The local version takes individuals at similar positions as a particle's neighborhood; it converges slowly but does not easily fall into local optima. In practice, the global version can be used first to find the general direction of the optimal solution, i.e. an approximate result, and the local version can then be used to search finely near the best point.
Boundary condition treatment
When a particle's position or velocity exceeds its set limit in one or several dimensions, a boundary-handling strategy can confine the particle's position to the feasible search space. This avoids both expansion and divergence of the population and blind searching over a huge range, and thus improves search efficiency.
There are many concrete methods. For example, given a maximum position limit Xmax and a maximum velocity limit Vmax, when a component exceeds its maximum, it can be replaced by a value randomly generated within the allowed range, or simply set to the limit itself, which is called boundary absorption.
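As a minimal sketch of these two strategies (assuming scalar limits xmin and xmax and an N-by-D position matrix x; all names are illustrative, not from the source code below):

out = x < xmin | x > xmax;                 % components violating the limits
% (a) random replacement: regenerate the offending components within range
xr = xmin + (xmax - xmin)*rand(size(x));
x(out) = xr(out);
% (b) boundary absorption: clip to the violated limit instead
% x = min(max(x, xmin), xmax);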

2, Partial source code

clear
clc
tic  % start timing
baseMVA = 100;

%Input control variable limit

% global PGmin
%  PGmin=[0.2 0.15 0.1 0.1 0.12];
% global PGmax 
% PGmax=[0.8 0.5 0.35 0.3 0.4];

global PGmin
PGmin=[0.2 0.15 0.1 0.1 0.12];
global PGmax 
PGmax=[1.0 0.8 0.55 0.8 0.6];

%Input state variable
%Voltage limits of the PQ nodes (10 PQ nodes)
VPQmin=0.95*ones(1,10);  %Lower limit of node voltage
VPQmax=1.05*ones(1,10);  %Upper limit of node voltage
%Voltage limit of balance node
Vph=1.06;
%Reactive power limit of generator node
QGmin=[-0.4 0 -0.06 -0.06];
QGmax=[0.5 0.4 0.24 0.24];
%Active power output limit of balance node
PGmins=0.5;
PGmaxs=2;

global popsize  %%Particle swarm size
popsize=10;
global dimsize  %%Number of variables (control variables: generator outputs)
dimsize=5;
global wcmax    %%Upper limit of the inertia weight
wcmax=0.9;
global wcmin    %%Lower limit of the inertia weight
wcmin=0.1;
global c1        %Learning factor
global c2
c1=2;
c2=2;
global maxgeneration                      %%Total number of iterations
maxgeneration=50;
global generation
global PBEST
global GBEST
global PVALUER
global GVALUER


trace=zeros(maxgeneration,1);     %Algorithm performance tracking

 generation=1;                          %%first iteration
 pop=initpop(popsize);            %%initialize the population (call the initialization function)
 [pbest,gbest,objvalue,gvaluer]=calobjvalue(pop);%%calobjvalue computes the fitness of the objective function
 PBEST=pbest;
 GBEST=gbest;
 PVALUER=objvalue;
 GVALUER=gvaluer;  %%according to fitness, adjust the individual best position Pbest and the group best position Gbest
 pop=renew(pop,PBEST,GBEST);              %%%update particle velocities and positions
 trace(1,1)=GVALUER;

for generation=2:maxgeneration             %%iteration loop
   [pbest,gbest,objvalue]=calobjvalue(pop);%%calobjvalue computes the fitness of the objective function
   for n=1:popsize
       if objvalue(n)<PVALUER(n)
            PVALUER(n,1)=objvalue(n,1);
            PBEST(n,1:dimsize)=pbest(n,1:dimsize);
       end
   end
  [GVALUER,m]=min(PVALUER);
  GBEST=PBEST(m,1:dimsize);%%according to fitness, adjust the individual best Pbest and the group best Gbest
   pop=renew(pop,PBEST,GBEST);              %%%update particle velocities and positions
    trace(generation,1)=GVALUER;  
end
end
GBEST
GVALUER

baseMVA = 100;
n=30;  % number of nodes
%Active power output limit of balance node
PGmins=0.5;
PGmaxs=2;
%Set initial value of node voltage
e=ones(n,1);

 %First form the admittance matrix
% Branch data columns: 1 branch No., 2 from-bus, 3 to-bus, 4 r, 5 x,
% 6 b/2 (line) or transformer ratio k, 7 branch type (1 = line, 6 = transformer);
% columns 8 and 9 (reserved) would hold the head- and end-switch status, 1 = in operation
branch = [
	1   1	2	0.0192	0.0575	0.0264		1;
	2   1	3	0.0452	0.1852	0.0204		1;
	3   2	4	0.057	0.1737	0.0184		1;
	4   3	4	0.0132	0.0379	0.0042		1;
	5   2	5	0.0472	0.1983	0.0209		1;
	6   2	6	0.0581	0.1763	0.0187		1;
	7   4	6	0.0119	0.0414	0.0045		1;
	8   5	7	0.046	0.116	0.0102		1;
	9   6	7	0.0267	0.082	0.0085		1;
	10  6	8	0.012	0.042	0.0045		1;
	11  6	9	  0	    0.208	0.978	    6;
	12  6	10	  0	    0.556	0.969		6;
	13  9	11	  0	    0.208	0	    	1;
	14  9	10	  0	    0.11	0	    	1;
	15  4	12	  0	    0.256	0.932		6;
	16  12	13	  0	    0.14	0	    	1;
	17  12	14	0.1231	0.2559	0	    	1;
	18  12	15	0.0680	0.1304	0	    	1;
	19  12	16	0.0945	0.1987	0	    	1;
	20  14	15	0.221	0.1997	0	    	1;
	21  16	17	0.0524	0.1923	0	    	1;
	22  15	18	0.1073	0.2185	0	    	1;
	23  18	19	0.0639	0.1292	0	    	1;
	24  19	20	0.034	0.068	0	    	1;
	25  10	20	0.0936	0.209	0	    	1;
	26  10	17	0.0324	0.0845	0	    	1;
	27  10	21	0.0348	0.0749	0	    	1;
	28  10	22	0.0727	0.1499	0	    	1;
	29  21	22	0.0116	0.0236	0	    	1;
	30  15	23	0.1	    0.202	0	    	1;
	31  22	24	0.115	0.179	0	    	1;
	32  23	24	0.132	0.27	0	    	1;
	33  24	25	0.1885	0.3292	0	    	1;
	34  25	26	0.2544	0.38	0	    	1;
	35  25	27	0.1093	0.2087	0	    	1;
	36  28	27	  0	    0.396	0.968	 	6;
	37  27	29	0.2198	0.4153	0	    	1;
	38  27	30	0.3202	0.6027	0	    	1;
	39  29	30	0.2399	0.4533	0	    	1;
	40  8	28	0.0636	0.2	    0.0214	    1;
	41  6	28	0.0169	0.0599	0.0065		1;];
brsize=size(branch);  % rows and columns of the branch matrix
Y=zeros(n);
for i=1:brsize(1)
    for j=1:n        %j node
    %Calculate the self admittance first
         if (branch(i,2)==j|branch(i,3)==j)&branch(i,7)==1
            Y(j,j)=Y(j,j)+1/(branch(i,4)+branch(i,5)*1.0j)+branch(i,6)*1.0j;
        elseif branch(i,2)==j&branch(i,7)==6
            Y(j,j)=Y(j,j)+1/(branch(i,4)+branch(i,5)*1.0j)/branch(i,6)+1/(branch(i,4)+branch(i,5)*1.0j)*(branch(i,6)-1)/branch(i,6);
        elseif branch(i,3)==j&branch(i,7)==6
            Y(j,j)=Y(j,j)+1/(branch(i,4)+branch(i,5)*1.0j)/branch(i,6)+1/(branch(i,4)+branch(i,5)*1.0j)*(1-branch(i,6))/(branch(i,6)*branch(i,6));
        end
    end
end
% %Node admittance correction(capacitor)
% Y(9,9)=real(Y(9,9))+1.0j*(imag(Y(9,9))+0.1*pop(zz,1)/e(9)^2);
%Recalculate mutual admittance
for i=1:brsize(1)
    for j=1:n
        for k=1:n
            if (branch(i,2)==j&branch(i,3)==k)&branch(i,7)==1&j~=k
                Y(j,k)=-1/(branch(i,4)+branch(i,5)*1.0j);
            elseif (branch(i,2)==j&branch(i,3)==k)&branch(i,7)==6&j~=k
                Y(j,k)=-1/(branch(i,4)+branch(i,5)*1.0j)/branch(i,6);
            end
        end
    end
end
%Form a symmetrical admittance matrix
    for i=1:n
        for j=1:i
            if i~=j&Y(i,j)~=0
            Y(j,i)=Y(i,j);
            end
        end
    end
    for i=1:n
        for j=i:n
            if i~=j&Y(i,j)~=0
            Y(j,i)=Y(i,j);
            end
        end
    end
    
 %Start of measurement input
% Measurement type codes: tens digit 1 = node voltage; 2 = node injection current;
% 3 = node injection power; 4 = line current; 5 = line power flow;
% 6 = transformer current; 7 = transformer power.
% Units digit: 1 = real part of the measurement (voltage magnitude, active power);
%              2 = imaginary part (voltage phase angle, reactive power).
% Node type: 1 = PQ node; 2 = PV node; 3 = balance (slack) node.
%      1 No.     2 measured value   3 head node   4 end node   5 meas. type   6 node type
LCSJ= [1          -0.217       2           2       31          2          
       2          -0.127       2           2       32          2
       3          -0.024       3           3       31          1
       4          -0.012       3           3       32          1
       5          -0.076       4           4       31          1
       6          -0.016       4           4       32          1
       7          -0.942       5           5       31          2
       8          -0.19        5           5       32          2
       9          -0.301       7           7       31          1
       10         -0.109       7           7       32          1
       11         -0.300       8           8       31          2
       12         -0.300       8           8       32          2
       13         -0.058       10          10      31          1
       14         -0.02        10          10      32          1
       15         -0.112       12          12      31          1
       16         -0.075       12          12      32          1
       17         -0.062       14          14      31          1
       18         -0.016       14          14      32          1
       19         -0.082       15          15      31          1
       20         -0.025       15          15      32          1
       21         -0.035       16          16      31          1
       22         -0.018       16          16      32          1
       23         -0.09        17          17      31          1
       24         -0.058       17          17      32          1
       25         -0.032       18          18      31          1
       26         -0.009       18          18      32          1
       27         -0.095       19          19      31          1
       28         -0.034       19          19      32          1
       29         -0.022       20          20      31          1
       30         -0.007       20          20      32          1
       31         -0.175       21          21      31          1
       32         -0.112       21          21      32          1
       33         -0.032       23          23      31          1
       34         -0.016       23          23      32          1
       35          -0.087      24          24      31          1
       36          -0.067      24          24      32          1
       37          -0.035      26          26      31          1
       38          -0.023      26          26      32          1
       39          -0.024      29          29      31          1
       40          -0.009      29          29      32          1
       41          -0.106      30          30      31          1
       42          -0.019      30          30      32          1
       43         0.4824      2           2       31          2
       44         1.045        2           2       11          2
       45         0.3881     5           5       31          2
       46         1.01         5           5       11          2
       47         0.3310      8           8       31          2
       48         1.01         8           8       11          2
       49         0.6655     11          11       31          2
       50         1.082       11          11       11          2
       51         0.4564     13          13       31          2
       52         1.071       13          13       11          2
       53         0            1           1       31          3
       54         1.06         1           1       11          3];
   
   Lsize=size(LCSJ);
   %Initialize the voltage phasor
   V=ones(n,1);%o=zeros(n,1);
   PV=[];
   pvnum=0;
   for i=1:Lsize(1)
       if LCSJ(i,5)-10==1&LCSJ(i,6)~=3
           V(LCSJ(i,3))=LCSJ(i,2);
               pvnum=pvnum+1;              %count of PV nodes
               PV=[PV LCSJ(i,3)];          %record the PV node indices
       elseif LCSJ(i,5)-10==1&LCSJ(i,6)==3
           V(LCSJ(i,3))=LCSJ(i,2);
           pqv=LCSJ(i,3);                  %record the balance (slack) node index
       end
   end
G=real(Y);
B=imag(Y);
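The helper functions initpop, calobjvalue and renew called above are not included in this excerpt ("partial source code"). calobjvalue runs the power flow and evaluates the power-purchase cost and cannot be reconstructed here, but the following is a minimal sketch of what initpop and renew could look like, assuming pop stores positions in columns 1:dimsize and velocities in columns dimsize+1:2*dimsize (in Matlab 2014a each function lives in its own .m file):

function pop = initpop(popsize)
% Randomly initialize particle positions within [PGmin, PGmax]
% and give them small random velocities.
global dimsize PGmin PGmax
pos = repmat(PGmin,popsize,1) + ...
      repmat(PGmax - PGmin,popsize,1).*rand(popsize,dimsize);
vel = 0.1*(2*rand(popsize,dimsize) - 1);
pop = [pos vel];
end

function pop = renew(pop, PBEST, GBEST)
% Standard velocity/position update with linearly decreasing inertia
% weight, clamping positions back into [PGmin, PGmax] (boundary absorption).
global popsize dimsize wcmax wcmin c1 c2 generation maxgeneration PGmin PGmax
w = wcmax - (wcmax - wcmin)*generation/maxgeneration;
x = pop(:,1:dimsize);
v = pop(:,dimsize+1:2*dimsize);
v = w*v + c1*rand(popsize,dimsize).*(PBEST - x) ...
        + c2*rand(popsize,dimsize).*(repmat(GBEST,popsize,1) - x);
x = min(max(x + v, repmat(PGmin,popsize,1)), repmat(PGmax,popsize,1));
pop = [x v];
end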

3, Operation results


4, Matlab version and references

1 Matlab version
2014a

2 References
[1] Bao Ziyang, Yu Jizhou, Yang Shan. Intelligent Optimization Algorithms and Their MATLAB Examples (2nd Edition) [M]. Publishing House of Electronics Industry, 2016.
[2] Zhang Yan, Wu Shuigen. MATLAB Optimization Algorithm Source Code [M]. Tsinghua University Press, 2017.

