Matrix fast power
Fast power
Compute a^n.
Obviously, a simple approach is to loop n times, multiplying the result cumulatively; the time complexity is O(n).
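For reference, a minimal sketch of this naive approach (an illustrative helper, naive_pow, not part of the code later in this post):

// naive power: multiply base into the answer n times, O(n)
long long naive_pow(long long base, long long n){
    long long ret = 1;
    for (long long i = 0; i < n; i ++){
        ret *= base;
    }
    return ret;
}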
Consider expressing n in binary form:
n = a_0\cdot 2^0 + a_1\cdot 2^1 + \ldots + a_{n-1}\cdot 2^{n-1} + a_n\cdot 2^n, \quad a_i\in\{0, 1\}
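For example, take n = 13: 13 = 1\cdot 2^0 + 0\cdot 2^1 + 1\cdot 2^2 + 1\cdot 2^3, i.e. a_3a_2a_1a_0 = 1101 (all higher bits are 0).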
Obviously, the sequence a_na_{n-1}\ldots a_1a_0 is the binary representation of n.
So,
a^n = a^{a_0\cdot 2^0 + a_1\cdot 2^1 + \ldots + a_{n-1}\cdot 2^{n-1} + a_n\cdot 2^n}
It can be written as:
a^n = \prod_{i=0}^{n} a^{a_i\cdot 2^i}
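Taking the n = 13 example again: a^{13} = a^{2^0}\cdot a^{2^2}\cdot a^{2^3} = a\cdot a^4\cdot a^8.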
When a_i = 0, we have a^{a_i\cdot 2^i} = 1, so this term contributes nothing to the product.
When a_i = 1, we have a^{a_i\cdot 2^i} = a^{2^i}. Obviously 2^i can be obtained from 2^{i-1} by doubling, so
a^{2^i} = a^{2^{i-1}} * a^{2^{i-1}}
So we just need to scan the bits of n. Let ret hold the final result: whenever the current bit is 1, update ret = ret * a^{2^i}. Keep a^{2^{i-1}} in a variable and square it at each step, so that it always holds the current a^{2^i}.
Code implementation
long long quick_pow(long long base, long long n){
    long long ret = 1;
    while (n){
        if (n & 1){          // the current bit of n is 1: multiply this power into the answer
            ret *= base;
        }
        base *= base;        // base goes from a^(2^i) to a^(2^(i+1))
        n >>= 1;             // move on to the next bit
    }
    return ret;
}
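For example, here is how the loop runs for base = 3 and n = 13 (13 is 1101 in binary), as an illustrative trace:

// step | bit of n | ret after the step     | base after squaring
//   1  |    1     | 1 * 3       = 3        | 3^2  = 9
//   2  |    0     | 3                      | 3^4  = 81
//   3  |    1     | 3 * 81      = 243      | 3^8  = 6561
//   4  |    1     | 243 * 6561  = 1594323  | 3^16 = 43046721
long long x = quick_pow(3, 13);   // x == 1594323 == 3^13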
Time complexity: O(log n)
Space complexity: O(1)
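A practical side note: plain a^n overflows long long very quickly, so in practice fast power is usually combined with taking the result modulo some number. A minimal sketch under that assumption (MOD is a hypothetical modulus chosen only for illustration; it is not part of the code above):

const long long MOD = 1000000007;   // hypothetical modulus for illustration

long long quick_pow_mod(long long base, long long n){
    long long ret = 1;
    base %= MOD;                    // keep every operand below MOD
    while (n){
        if (n & 1){
            ret = ret * base % MOD;
        }
        base = base * base % MOD;
        n >>= 1;
    }
    return ret;
}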
Matrix fast power
Compute A^n, where A is a matrix.
If A were a number, this would just be the fast power problem above.
n can still be written in binary. Looking back at the reasoning for fast power, we can treat A as a whole: the only operations needed are ret *= base and base *= base. In plain fast power, base is the number a; in matrix fast power, it is the matrix A. For a number a the operation is ordinary multiplication; for a matrix A, it is obviously matrix multiplication.
We first overload operator * to implement matrix multiplication, and we may as well represent a matrix as a two-dimensional vector.
The code is as follows:
vector<vector<int>> operator * (vector<vector<int>>& m1, vector<vector<int>>& m2){
    vector<vector<int>> ret;
    int m1_n = m1.size(), m1_m = m1[0].size();
    int m2_n = m2.size(), m2_m = m2[0].size();
    if (m1_m != m2_n){           // columns of m1 must equal rows of m2; otherwise return an empty matrix
        return ret;
    }
    ret.assign(m1_n, vector<int>(m2_m, 0));
    for (int i = 0; i < m1_n; i ++){
        for (int j = 0; j < m2_m; j ++){
            for (int k = 0; k < m1_m; k ++){
                ret[i][j] += m1[i][k] * m2[k][j];
            }
        }
    }
    return ret;
}
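A quick illustrative check of the operator (the same matrices appear again in the overall code at the end):

vector<vector<int>> m1 = {{1, 2}, {3, 4}};
vector<vector<int>> m2 = {{4}, {5}};   // a 2x2 matrix times a 2x1 matrix gives a 2x1 matrix
auto prod = m1 * m2;                   // prod == {{1*4 + 2*5}, {3*4 + 4*5}} == {{14}, {32}}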
Then we replace the a in fast power with A, that is, replace the base with vector<vector<int>>.
The matrix fast power code is as follows:
vector<vector<int>> quick_matrix_pow(vector<vector<int>> m, int n){
    vector<vector<int>> ret;
    if (m.size() != m[0].size()){       // only a square matrix can be raised to a power
        return ret;
    }
    ret.resize(m.size(), vector<int>(m.size()));
    for (int i = 0; i < m.size(); i ++){
        ret[i][i] = 1;                  // initialize ret to the identity matrix
    }
    while (n){
        if (n & 1){
            ret = ret * m;
        }
        m = m * m;
        n >>= 1;
    }
    return ret;
}
Time complexity: O(log n) * O(one matrix multiplication)
Space complexity: O((size of the square matrix)^2)
Note: in the matrix multiplication implementation, m1 and m2 need not be square matrices; the matrix in matrix fast power, however, must be square.
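As a classic application example: Fibonacci numbers can be computed in O(log n) time with matrix fast power, since [[1,1],[1,0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]]. A minimal sketch using the functions above:

// illustrative application: the 10th Fibonacci number via matrix fast power
vector<vector<int>> fib_base = {{1, 1}, {1, 0}};
auto p = quick_matrix_pow(fib_base, 10);
cout << p[0][1] << endl;   // prints F(10) = 55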
Overall code
#include<bits/stdc++.h>
using namespace std;

long long quick_pow(long long base, long long n){
    long long ret = 1;
    while (n){
        if (n & 1){
            ret *= base;
        }
        base *= base;
        n >>= 1;
    }
    return ret;
}

vector<vector<int>> operator * (vector<vector<int>>& m1, vector<vector<int>>& m2){
    vector<vector<int>> ret;
    int m1_n = m1.size(), m1_m = m1[0].size();
    int m2_n = m2.size(), m2_m = m2[0].size();
    if (m1_m != m2_n){
        return ret;
    }
    ret.assign(m1_n, vector<int>(m2_m, 0));
    for (int i = 0; i < m1_n; i ++){
        for (int j = 0; j < m2_m; j ++){
            for (int k = 0; k < m1_m; k ++){
                ret[i][j] += m1[i][k] * m2[k][j];
            }
        }
    }
    return ret;
}

vector<vector<int>> quick_matrix_pow(vector<vector<int>> m, int n){
    vector<vector<int>> ret;
    if (m.size() != m[0].size()){
        return ret;
    }
    ret.resize(m.size(), vector<int>(m.size()));
    for (int i = 0; i < m.size(); i ++){
        ret[i][i] = 1;
    }
    while (n){
        if (n & 1){
            ret = ret * m;
        }
        m = m * m;
        n >>= 1;
    }
    return ret;
}

int main(){
    cout << quick_pow(2, 31) << endl;
    cout << "------------" << endl;
    vector<vector<int>> m1 = {{1, 2}, {3, 4}};
    vector<vector<int>> m2 = {{4}, {5}};
    auto ret = m1 * m2;
    for (auto item : ret){
        for (int i : item){
            cout << i << " ";
        }
        cout << endl;
    }
    cout << "------------" << endl;
    ret = quick_matrix_pow(m1, 8);
    for (auto item : ret){
        for (int i : item){
            cout << i << " ";
        }
        cout << endl;
    }
    return 0;
}
2021.08.20 9:41
Friday of the third week of my temporary work placement in the Yuquan community.
These days I'm following along with the PyTorch version of Li Mu's "Dive into Deep Learning".
I haven't written a blog post in a long time, and I still haven't written much documentation either; I feel my procrastination has reached an advanced stage.
I've also been learning how to use Appium these days; it feels similar to Selenium. I picked up Appium because the location check of the West Lake Pioneer app doesn't work with phone emulators, so I can only check in on a real device.
Wasting time on repetitive check-ins is not a wise choice; fortunately, I know some programming.
It smells good.
By the way, I'm still following "Crime Crackdown", a very good web series. I've always liked Sun Honglei.
Still a single dog with no girlfriend. I have very little money, and I'd rather spend it on myself anyway.
Fast power, and fast-improve yourself too.