
Introduction | After studying GORM and XORM in depth, I found them neither concise nor elegant: somewhat cumbersome, and with a real learning cost. As a study exercise, I implemented a simple and elegant Go-language ORM. This article introduces the basic principles, implements them step by step, and ends with a complete, simple and elegant MySQL ORM.
1, Preliminaries
(1) Why use ORM
Whatever language we build features in, whether PHP, Golang or C++, we have probably used an ORM to talk to the database. Some of these ORMs are built in-house by the project team, and some are open-source components. Especially in a new project, we reach for an ORM framework to connect to the database instead of writing SQL by hand with native code. There are many reasons, including security and performance, but I think the biggest one is laziness and development efficiency: some SQL is very complex and tedious to write. Querying a list, for example, means paging, handling the result set, and judging and looping with Next() over each row; it is genuinely tiring and slow. With an ORM, you configure the database connection, chain a few method calls, and the results come out.
Therefore, an ORM is the middleware between us and the database. We interact with the database through the convenient methods the ORM provides, and implement features more conveniently and efficiently.
To sum up what an ORM is in one sentence: it provides more convenient and fast CRUD methods for interacting with the database.
(2) How does Golang connect to MySQL natively
Having covered what an ORM is and why we use one, let's look at how Golang connects to MySQL natively, which is very helpful for developing an ORM. Only by understanding how the two interact can we build on top of them.
Connecting to MySQL with native code generally takes the following steps.
First, import the sql engine and the mysql driver:
import (
	"database/sql"
	_ "github.com/go-sql-driver/mysql"
)
Connect to MySQL:
db, err := sql.Open("mysql", "root:123456@tcp(127.0.0.1:3306)/ApiDB?charset=utf8") //The first parameter is the driver name
if err != nil {
	panic(err.Error())
}
Then, let's quickly check how to add, delete, modify and query:
Add:
//Method 1:
result, err := db.Exec("INSERT INTO userinfo (username, departname, created) VALUES (?, ?, ?)", "lisi", "dev", "2020-08-04")
//Method 2:
stmt, err := db.Prepare("INSERT INTO userinfo (username, departname, created) VALUES (?, ?, ?)")
result2, err := stmt.Exec("zhangsan", "pro", time.Now().Format("2006-01-02"))
Delete:
//Method 1:
result, err := db.Exec("delete from userinfo where uid=?", 10795)
//Method 2:
stmt, err := db.Prepare("delete from userinfo where uid=?")
result3, err := stmt.Exec("10795")
Change:
//Method 1:
result, err := db.Exec("update userinfo set username=? where uid=?", "lisi", 2)
//Method 2:
stmt, err := db.Prepare("update userinfo set username=? where uid=?")
result, err := stmt.Exec("lisi", 2)
Check:
//Single row:
var username, departname, status string
err := db.QueryRow("select username, departname, status from userinfo where uid=?", 4).Scan(&username, &departname, &status)
if err != nil {
	fmt.Println("QueryRow error :", err.Error())
}
fmt.Println("username: ", username, "departname: ", departname, "status: ", status)

//Multiple rows:
rows, err := db.Query("select username, departname, status from userinfo where username=?", "yang")
if err != nil {
	fmt.Println("Query error :", err.Error())
}
//Define a structure to store the data model
type UserInfo struct {
	Username   string `json:"username"`
	Departname string `json:"departname"`
	Status     string `json:"status"`
}
//Initialize
var user []UserInfo
for rows.Next() {
	var username1, departname1, status1 string
	if err := rows.Scan(&username1, &departname1, &status1); err != nil {
		fmt.Println("Query error :", err.Error())
	}
	user = append(user, UserInfo{Username: username1, Departname: departname1, Status: status1})
}
For more detailed courses and instructions, please refer to this article I wrote: https://blog.csdn.net/think2me/article/details/108317492
So, to sum up: connecting to MySQL natively in Golang is very simple, you just write SQL directly. For quick-and-dirty cases you can call Exec directly; for complex cases, where efficiency matters more, you can Prepare first and then Exec. Overall the learning cost is very low. The biggest problems are the tedium and the low development efficiency.
So I was thinking: can I develop an ORM myself on top of the strengths of the native library? First, it could provide various methods to improve development efficiency. Second, the bottom layer could splice everything into final SQL and call the native components to interact with MySQL. That would kill two birds with one stone: better development efficiency while staying efficient and simple. Perfect!
(3) ORM framework concept
The principle of this ORM library is simple SQL splicing. It exposes various CRUD methods, splices their arguments into the Prepare and Exec placeholder parts in the underlying logic, and then calls the github.com/go-sql-driver/mysql driver to interact with the database.
First of all, let's give it a great name: smallorm, well, it's OK!
Then, the whole calling process adopts the chain style, which is more convenient, like this:
db.Where().Where().Order().Limit().Select()
Secondly, the exposed CRUD methods should be simple to use, with clear and unambiguous names, and without lots of complex indirect calls.
OK, let's sort out the CRUD methods commonly used in SQL, organize them into ORM methods, and implement them step by step, as follows:
- Connect: Connect
- Set table name: Table
- Add / replace: Insert/Replace
- Condition: Where
- Delete: Delete
- Modify: Update
- Query: Select
- Execute native SQL: Exec/Query
- Set query fields: Field
- Set size: Limit
- Aggregate queries: Count/Max/Min/Avg/Sum
- Sort: Order
- Group: Group
- Filter after grouping: Having
- Get the complete SQL that was built and executed: GetLastSql
- Transactions: Begin/Commit/Rollback
Insert/Replace/Delete/Select/Update are the last step of the whole chain; they are the methods that actually interact with MySQL, so no other methods can be chained after them.
Now let's preview how the completed ORM is called:
Add:
type User1 struct {
	Username   string `sql:"username"`
	Departname string `sql:"departname"`
	Status     int64  `sql:"status"`
}
user2 := User1{
	Username:   "EE",
	Departname: "22",
	Status:     1,
}
// insert into userinfo (username,departname,status) values ('EE', '22', 1)
id, err := e.Table("userinfo").Insert(user2)
Delete:
// delete from userinfo where (uid = 10805)
result1, err := e.Table("userinfo").Where("uid", "=", 10805).Delete()
Change:
// update userinfo set departname=110 where (uid = 10805)
result1, err := e.Table("userinfo").Where("uid", "=", 10805).Update("departname", 110)
Check:
// select uid, status from userinfo where (departname like '%2') or (status=1) order by uid desc limit 1
result, err := e.Table("userinfo").Where("departname", "like", "%2").OrWhere("status", 1).Order("uid", "desc").Limit(1).Field("uid, status").Select()

// select uid, status from userinfo where (uid in (1,2,3,4,5)) or (status=1) order by uid desc limit 1
result, err := e.Table("userinfo").Where("uid", "in", []int{1, 2, 3, 4, 5}).OrWhere("status", 1).Order("uid", "desc").Limit(1).Field("uid, status").SelectOne()

type User1 struct {
	Username   string `sql:"username"`
	Departname string `sql:"departname"`
	Status     int64  `sql:"status"`
}
user2 := User1{
	Username:   "EE",
	Departname: "22",
	Status:     1,
}
user3 := User1{
	Username:   "EE",
	Departname: "22",
	Status:     2,
}
// select * from userinfo where (Username='EE' and Departname='22' and Status=1) or (Username='EE' and Departname='22' and Status=2) limit 1
id, err := e.Table("userinfo").Where(user2).OrWhere(user3).SelectOne()
2, Start building
(1) Connect
Connecting to MySQL is easy: we can simply wrap the native sql.Open("mysql", dsn) call in a function. A production version would also have to consider goroutine safety, long-connection keep-alive and ping failures, but we won't consider those in the first version.
The first step is to define an engine struct, SmallormEngine, used to store various state. All the other externally exposed CRUD methods hang off this struct.
type SmallormEngine struct {
	Db           *sql.DB
	TableName    string
	Prepare      string
	AllExec      []interface{}
	Sql          string
	WhereParam   string
	LimitParam   string
	OrderParam   string
	OrWhereParam string
	WhereExec    []interface{}
	UpdateParam  string
	UpdateExec   []interface{}
	FieldParam   string
	TransStatus  int
	Tx           *sql.Tx
	GroupParam   string
	HavingParam  string
}
Because the underlying essence of our ORM is SQL splicing, we need to save the data generated by various operation methods to each variable of this structure to facilitate the final step of generating SQL.
Two fields need a brief explanation: Db is of type *sql.DB and is used directly for CRUD operations. Tx is of type *sql.Tx, the database transaction handle, used for rollback and commit. This will be described in detail later; for now it is just a general concept.
Next, you can write the connection:
//Create a new MySQL connection
func NewMysql(username string, password string, address string, dbname string) (*SmallormEngine, error) {
	dsn := username + ":" + password + "@tcp(" + address + ")/" + dbname + "?charset=utf8&timeout=5s&readTimeout=6s"
	db, err := sql.Open("mysql", dsn)
	if err != nil {
		return nil, err
	}
	//Configuration such as the maximum number of connections, left as placeholders
	//db.SetMaxOpenConns(3)
	//db.SetMaxIdleConns(3)
	return &SmallormEngine{
		Db:         db,
		FieldParam: "*",
	}, nil
}
A method NewMysql is created to make a new connection. The parameters are (username, password, ip:port, database name). The reasons for using this name are:
- version 2.0 may support other databases;
- a connection pool may be added later.
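To make the DSN splicing in NewMysql concrete, here is a minimal runnable sketch. buildDSN is a hypothetical helper name used only for illustration; it mirrors the string concatenation NewMysql performs:

```go
package main

import "fmt"

// buildDSN assembles the go-sql-driver/mysql DSN string the same way
// NewMysql splices it. The helper name is hypothetical, for illustration.
func buildDSN(username, password, address, dbname string) string {
	return username + ":" + password + "@tcp(" + address + ")/" + dbname +
		"?charset=utf8&timeout=5s&readTimeout=6s"
}

func main() {
	fmt.Println(buildDSN("root", "123456", "127.0.0.1:3306", "ApiDB"))
	// prints: root:123456@tcp(127.0.0.1:3306)/ApiDB?charset=utf8&timeout=5s&readTimeout=6s
}
```

Keeping the DSN in one place like this makes it easy to extend later, for example when adding pool options.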
Secondly, how do we implement chained calls? Each method only needs to return the engine instance itself, for example:
func (e *SmallormEngine) Where(name string) *SmallormEngine {
	return e
}

func (e *SmallormEngine) Limit(name string) *SmallormEngine {
	return e
}
In this way, we can call in a chain:
e.Where().Where().Limit()
(2) Set / read table name Table/GetTable
We need methods to set and read the database table name, because all our CRUD operations are based on a table:
//Set table name
func (e *SmallormEngine) Table(name string) *SmallormEngine {
	e.TableName = name
	//Reset engine
	e.resetSmallormEngine()
	return e
}

//Get table name
func (e *SmallormEngine) GetTable() string {
	return e.TableName
}
In this way, every call to Table() sets the table name for the current execution, and all the intermediate data mounted on the SmallormEngine is cleared.
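Table() calls a reset helper, resetSmallormEngine(), whose body the article does not show. Here is a minimal sketch of what it might look like, assuming it simply clears the per-query fields and restores FieldParam to its default "*" (the exact field list is an assumption):

```go
package main

import (
	"database/sql"
	"fmt"
)

// A trimmed-down SmallormEngine with the fields the reset touches.
type SmallormEngine struct {
	Db           *sql.DB
	TableName    string
	Prepare      string
	AllExec      []interface{}
	WhereParam   string
	OrWhereParam string
	WhereExec    []interface{}
	LimitParam   string
	OrderParam   string
	FieldParam   string
	GroupParam   string
	HavingParam  string
}

// resetSmallormEngine clears the per-query state so the next chain
// starts fresh. Sketch only: the real helper is not shown in the article.
func (e *SmallormEngine) resetSmallormEngine() *SmallormEngine {
	e.Prepare = ""
	e.AllExec = nil
	e.WhereParam = ""
	e.OrWhereParam = ""
	e.WhereExec = nil
	e.LimitParam = ""
	e.OrderParam = ""
	e.FieldParam = "*" //default: select all columns
	e.GroupParam = ""
	e.HavingParam = ""
	return e
}

func main() {
	e := &SmallormEngine{TableName: "userinfo", WhereParam: "(uid=?) ", FieldParam: "uid"}
	e.resetSmallormEngine()
	fmt.Println(e.WhereParam == "", e.FieldParam) // prints: true *
}
```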
(3) Add / replace Insert/Replace
- Single data insertion
Now for the first real challenge of this ORM: how do we insert data into the database? Before implementing this with the ORM, let's recall how the native code above does an insert:
We use Prepare before Exec, which is efficient and safe:
stmt, err := db.Prepare("INSERT INTO userinfo (username, departname, created) VALUES (?, ?, ?)")
result2, err := stmt.Exec("zhangsan", "pro", time.Now().Format("2006-01-02"))
Let's analyze what it does:
- First, in Prepare, each inserted value is replaced with a ? placeholder; there are as many ? marks as there are values.
- Then, in Exec, the actual values are passed in, matching the number of ? placeholders exactly.
ok, I see. Then we can split the data according to these two parts.
For convenience, when we call this Insert method, the argument is a k-v key-value collection such as [field1:value1, field2:value2, field3:value3], where field is a table column and value is that column's value. In Go such a type can be a Map or a Struct, but a map's values must all share one type, which clearly doesn't match a table whose columns may have different types. Therefore we choose the Struct, whose fields can have different data types, exactly matching our situation.
Since the data in go must be defined first and then initialized with a value, the general calling process is as follows:
type User struct {
	Username   string `sql:"username"`
	Departname string `sql:"departname"`
	Status     int64  `sql:"status"`
}
user2 := User{
	Username:   "EE",
	Departname: "22",
	Status:     1,
}
id, err := e.Table("userinfo").Insert(user2)
Note that each field of the User struct is followed by sql:"xxx", which is called a Tag. What is it for? In Go, an externally visible (exported) field must start with an uppercase letter, while column names in SQL tables usually start with a lowercase letter. The Tag exists to bridge this mismatch, for conversion and matching. If your table's column names also start with uppercase letters, you don't need this tag. Next, we'll talk about how the matching conversion works.
Therefore, the next difficulty is to parse user2 in two steps:
Step 1: parse the sql:"xxx" tags, convert them to the lowercase column names (username, departname, status), and generate the matching number of ? placeholders:
stmt, err := db.Prepare("INSERT INTO userinfo (username, departname, status) VALUES (?, ?, ?)")
Step 2: take the values of user2's fields and pass them to Exec.
result2, err := stmt.Exec("EE", "22", 1)
So how are the three fields in user2 resolved to (username, departname, status)? Since this is a generic method, Golang cannot know which fields and values the passed-in struct contains just by looping over it directly. What can we do? This is where the famous reflection comes in handy: through reflection we can discover each field's name, value, type, and tag of the incoming struct variable.
Let's now try two of these functions, reflect.TypeOf and reflect.ValueOf:
type User struct {
	Username   string `sql:"username"`
	Departname string `sql:"departname"`
	Status     int64  `sql:"status"`
}
user2 := User{
	Username:   "EE",
	Departname: "22",
	Status:     1,
}
//Reflect the type of this struct variable
t := reflect.TypeOf(user2)
//Reflect the value of this struct variable
v := reflect.ValueOf(user2)
fmt.Printf("==== print type ====\n%+v\n", t)
fmt.Printf("==== print value ====\n%+v\n", v)
Let's print it. What's the result?
==== print type ====
main.User
==== print value ====
{Username:EE Departname:22 Status:1}
From the printout we can see that its type is User and the value is exactly the value we want. OK, the first step is done. Next we use a for loop over t.NumField() and t.Field(i) to split out the pieces:
//Reflection type and value
t := reflect.TypeOf(user2)
v := reflect.ValueOf(user2)
//Column names
var fieldName []string
//Question mark ? placeholders
var placeholder []string
//Loop over the fields
for i := 0; i < t.NumField(); i++ {
	//Fields starting with lowercase cannot be reflected, skip
	if !v.Field(i).CanInterface() {
		continue
	}
	//Parse the tag and find the real SQL column name
	sqlTag := t.Field(i).Tag.Get("sql")
	if sqlTag != "" {
		//Skip auto-increment fields
		if strings.Contains(strings.ToLower(sqlTag), "auto_increment") {
			continue
		} else {
			fieldName = append(fieldName, strings.Split(sqlTag, ",")[0])
			placeholder = append(placeholder, "?")
		}
	} else {
		fieldName = append(fieldName, t.Field(i).Name)
		placeholder = append(placeholder, "?")
	}
	//Field value
	e.AllExec = append(e.AllExec, v.Field(i).Interface())
}
//Splice table, column names, placeholders
e.Prepare = "insert into " + e.GetTable() + " (" + strings.Join(fieldName, ",") + ") values (" + strings.Join(placeholder, ",") + ")"
As shown above: t.NumField() gives the number of fields in the struct for the loop; t.Field(i).Tag.Get("sql") gives the value of the sql:"xxx" tag, which we use to match and substitute the column name; t.Field(i).Name gives the field's name; v.Field(i).Interface() gives the field's value; and e.GetTable() gives the table name we set. With the slightly complex reflection and splicing above, we have completed the db.Prepare part:
e.Prepare = "INSERT INTO userinfo (username, departname, status) VALUES (?, ?, ?)"
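The reflection walk above can be packaged as a small self-contained program. buildInsertPrepare is a hypothetical standalone helper (not an actual smallorm method) that returns the Prepare string plus the values destined for Exec:

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

type User struct {
	Username   string `sql:"username"`
	Departname string `sql:"departname"`
	Status     int64  `sql:"status"`
}

// buildInsertPrepare applies the reflection walk described above to any
// struct: tags become column names, each field becomes a ? placeholder,
// and the values are collected for Exec. Hypothetical helper name.
func buildInsertPrepare(table string, data interface{}) (string, []interface{}) {
	t := reflect.TypeOf(data)
	v := reflect.ValueOf(data)
	var fieldName, placeholder []string
	var allExec []interface{}
	for i := 0; i < t.NumField(); i++ {
		if !v.Field(i).CanInterface() {
			continue //unexported fields cannot be read via reflection
		}
		sqlTag := t.Field(i).Tag.Get("sql")
		if sqlTag != "" {
			if strings.Contains(strings.ToLower(sqlTag), "auto_increment") {
				continue //skip auto-increment columns
			}
			fieldName = append(fieldName, strings.Split(sqlTag, ",")[0])
		} else {
			fieldName = append(fieldName, t.Field(i).Name)
		}
		placeholder = append(placeholder, "?")
		allExec = append(allExec, v.Field(i).Interface())
	}
	prepare := "insert into " + table + " (" + strings.Join(fieldName, ",") +
		") values (" + strings.Join(placeholder, ",") + ")"
	return prepare, allExec
}

func main() {
	p, vals := buildInsertPrepare("userinfo", User{Username: "EE", Departname: "22", Status: 1})
	fmt.Println(p)    // insert into userinfo (username,departname,status) values (?,?,?)
	fmt.Println(vals) // [EE 22 1]
}
```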
Next we handle the stmt.Exec value part: we put all the values into the attribute e.AllExec. It uses the interface{} type because the value types in the struct vary; they may be int or string.
//Declare stmt
var stmt *sql.Stmt
//Step 1: db.Prepare
stmt, err = e.Db.Prepare(e.Prepare)
//Step 2: execute. Note that this is stmt.Exec
result, err := stmt.Exec(e.AllExec...)
if err != nil {
	//TODO
}
//Get the auto-increment id
id, _ := result.LastInsertId()
- Batch insertion
1. For batch insertion, the incoming data is a slice, i.e. a []struct type.
2. We use reflection to figure out how many elements the slice holds; that is exactly how many () placeholder groups follow VALUES.
3. Two nested for loops: the outer loop gets each element's type and value; the inner loop, just like the reflection for a single insert, counts each element's fields, reflects the column names, and determines how many ? placeholders go inside each () group.
4. The two nested loops put every field value of every element into the single unified AllExec.
OK, go directly to the code:
//Batch insert
func (e *SmallormEngine) BatchInsert(data interface{}) (int64, error) {
	return e.batchInsertData(data, "insert")
}

//Batch replace insert
func (e *SmallormEngine) BatchReplace(data interface{}) (int64, error) {
	return e.batchInsertData(data, "replace")
}

//Batch insert
func (e *SmallormEngine) batchInsertData(batchData interface{}, insertType string) (int64, error) {
	//Reflection resolution
	getValue := reflect.ValueOf(batchData)
	//Slice size
	l := getValue.Len()
	//Column names
	var fieldName []string
	//Placeholder groups
	var placeholderString []string
	//Loop over the slice
	for i := 0; i < l; i++ {
		//Value of the i-th item
		value := getValue.Index(i)
		//Type of the i-th item
		typed := value.Type()
		if typed.Kind() != reflect.Struct {
			panic("the elements inserted in batch must be of struct type")
		}
		num := value.NumField()
		//Placeholders of one element
		var placeholder []string
		//Loop over the element's fields
		for j := 0; j < num; j++ {
			//Fields starting with lowercase cannot be reflected, skip
			if !value.Field(j).CanInterface() {
				continue
			}
			//Parse the tag and find the real SQL column name
			sqlTag := typed.Field(j).Tag.Get("sql")
			if sqlTag != "" {
				//Skip auto-increment fields
				if strings.Contains(strings.ToLower(sqlTag), "auto_increment") {
					continue
				} else {
					//Record the column names only for the first element
					if i == 0 {
						fieldName = append(fieldName, strings.Split(sqlTag, ",")[0])
					}
					placeholder = append(placeholder, "?")
				}
			} else {
				//Record the column names only for the first element
				if i == 0 {
					fieldName = append(fieldName, typed.Field(j).Name)
				}
				placeholder = append(placeholder, "?")
			}
			//Field value
			e.AllExec = append(e.AllExec, value.Field(j).Interface())
		}
		//Each element's placeholders are spliced into their own () group
		placeholderString = append(placeholderString, "("+strings.Join(placeholder, ",")+")")
	}
	//Splice table, column names, placeholders
	e.Prepare = insertType + " into " + e.GetTable() + " (" + strings.Join(fieldName, ",") + ") values " + strings.Join(placeholderString, ",")

	//prepare
	var stmt *sql.Stmt
	var err error
	stmt, err = e.Db.Prepare(e.Prepare)
	if err != nil {
		return 0, e.setErrorInfo(err)
	}
	//Execute. Note that this is stmt.Exec
	result, err := stmt.Exec(e.AllExec...)
	if err != nil {
		return 0, e.setErrorInfo(err)
	}
	//Get the auto-increment id
	id, _ := result.LastInsertId()
	return id, nil
}

//Custom error format
func (e *SmallormEngine) setErrorInfo(err error) error {
	_, file, line, _ := runtime.Caller(1)
	return errors.New("file: " + file + ":" + strconv.Itoa(line) + ", " + err.Error())
}
Let's summarize the key points above. First, getting the size of the slice for the outer for loop takes these 2 lines of code:
//Reflection resolution
getValue := reflect.ValueOf(batchData)
//Slice size
l := getValue.Len()
Secondly, inside the outer for loop, value := getValue.Index(i) gets the value of the i-th element in the slice, similar to reflecting the struct value in the single-insert case above: v := reflect.ValueOf(data).
Then, typed := value.Type() gets the type of the i-th element, similar to reflecting the struct type in the single-insert case above: t := reflect.TypeOf(data). We reflect this mainly to get the tags.
The reflection logic in the inner for loop is basically the same as for a single insert. The only thing to note is fieldName: since we only need the column names once, we guard the append with a check on i so they are added only for the first element.
Another point is the variable placeholderString: to produce the effect of multiple () groups, we use a second slice.
In this way, the logic of batch insertion and batch replacement insertion is completed.
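The placeholderString idea can be isolated in a tiny sketch: one () group per row, one ? per column. batchPlaceholders is a hypothetical helper used only to illustrate the splicing:

```go
package main

import (
	"fmt"
	"strings"
)

// batchPlaceholders builds the "(?,?,?),(?,?,?)" part after VALUES:
// one () group per row, one ? per column. Hypothetical helper name.
func batchPlaceholders(rows, cols int) string {
	ph := make([]string, cols)
	for j := range ph {
		ph[j] = "?"
	}
	group := "(" + strings.Join(ph, ",") + ")"
	groups := make([]string, rows)
	for i := range groups {
		groups[i] = group
	}
	return strings.Join(groups, ",")
}

func main() {
	fmt.Println(batchPlaceholders(2, 3)) // prints: (?,?,?),(?,?,?)
}
```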
- Single and batch integration
To keep our ORM elegant and simple enough, we can expose single insertion and batch insertion as one method. How do we identify whether the incoming data is a single struct or a slice of structs? Reflection again:
reflect.ValueOf(data).Kind()
It gives us the answer: for a single struct the Kind is Struct; for a slice or array it is Slice or Array. So we can do it easily with a small judgment:
//Insert
func (e *SmallormEngine) Insert(data interface{}) (int64, error) {
	//Determine whether this is a batch or single insert
	getValue := reflect.ValueOf(data).Kind()
	if getValue == reflect.Struct {
		return e.insertData(data, "insert")
	} else if getValue == reflect.Slice || getValue == reflect.Array {
		return e.batchInsertData(data, "insert")
	} else {
		return 0, errors.New("the inserted data format is incorrect; the single insertion format is struct and the batch insertion format is []struct")
	}
}

//Replace insert
func (e *SmallormEngine) Replace(data interface{}) (int64, error) {
	//Determine whether this is a batch or single insert
	getValue := reflect.ValueOf(data).Kind()
	if getValue == reflect.Struct {
		return e.insertData(data, "replace")
	} else if getValue == reflect.Slice || getValue == reflect.Array {
		return e.batchInsertData(data, "replace")
	} else {
		return 0, errors.New("the inserted data format is incorrect; the single insertion format is struct and the batch insertion format is []struct")
	}
}
OK, done.
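The Kind() dispatch can also be sketched on its own. insertMode is a hypothetical helper that only names the branch Insert would take for a given argument:

```go
package main

import (
	"fmt"
	"reflect"
)

type User struct {
	Username string `sql:"username"`
}

// insertMode mirrors the dispatch in Insert: "single" for a struct,
// "batch" for a slice/array, "" for anything else. Hypothetical helper.
func insertMode(data interface{}) string {
	switch reflect.ValueOf(data).Kind() {
	case reflect.Struct:
		return "single"
	case reflect.Slice, reflect.Array:
		return "batch"
	default:
		return ""
	}
}

func main() {
	fmt.Println(insertMode(User{Username: "EE"}))    // single
	fmt.Println(insertMode([]User{{Username: "EE"}})) // batch
	fmt.Println(insertMode(123))                      // prints an empty line (rejected)
}
```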
(4) Condition Where
- Calling with a struct parameter
Next we implement the Where method, which replaces the logic after the where keyword in a SQL statement. where appears in many SQL statements, for example in native SQL:
select * from userinfo where status = 1
delete from userinfo where status = 1 or departname != "aa"
update userinfo set departname = "bb" where status = 1 and departname = "aa"
Therefore, the conditions after where need to be separated out into a Where method. Most ORMs do the same.
Observing the three SQL statements above, we can see the basic where structure. Either there is only one condition, whose comparison operator can be rich: =, !=, like, <, >, etc. Or there are multiple conditions, separated by and / or, representing the and / or relationships between them.
From the native code at the top, we can see that the where part follows the same pattern: first generate question-mark placeholders with Prepare, then substitute the values with Exec.
stmt, err := db.Prepare("delete from userinfo where uid=?")
result3, err := stmt.Exec("10795")

stmt, err := db.Prepare("update userinfo set username=? where uid=?")
result, err := stmt.Exec("lisi", 2)
Therefore, the where part is also split in two, with the same two-step logic as insertion. The general calling flow is:
type User struct {
	Username   string `sql:"username"`
	Departname string `sql:"departname"`
	Status     int64  `sql:"status"`
}
user2 := User{
	Username:   "EE",
	Departname: "22",
	Status:     1,
}
result1, err1 := e.Table("userinfo").Where(user2).Delete()
result2, err2 := e.Table("userinfo").Where(user2).Select()
This time we implement the where part. Where is a middle layer: it does not execute anything itself. What it does is split the data and stash it in two new fields, WhereParam and WhereExec, for the final CRUD method to use.
Let's start writing the code; it is almost the same as the reflection logic of the Insert method.
func (e *SmallormEngine) Where(data interface{}) *SmallormEngine {
	//Reflection type and value
	t := reflect.TypeOf(data)
	v := reflect.ValueOf(data)
	//Column names
	var fieldNameArray []string
	//Loop over the fields
	for i := 0; i < t.NumField(); i++ {
		//Fields starting with lowercase cannot be reflected, skip
		if !v.Field(i).CanInterface() {
			continue
		}
		//Parse the tag and find the real SQL column name
		sqlTag := t.Field(i).Tag.Get("sql")
		if sqlTag != "" {
			fieldNameArray = append(fieldNameArray, strings.Split(sqlTag, ",")[0]+"=?")
		} else {
			fieldNameArray = append(fieldNameArray, t.Field(i).Name+"=?")
		}
		//Reflect the value for Exec
		e.WhereExec = append(e.WhereExec, v.Field(i).Interface())
	}
	//Splice
	e.WhereParam += strings.Join(fieldNameArray, " and ")
	return e
}
In this way, calling Where() fills the two temporary variables. Let's print them to see:
WhereParam = "username=? and departname=? and status=?"
WhereExec = []interface{}{"EE", "22", 1}
Because Where() is an intermediate method, it can be called multiple times, and each call is an and relationship. For example:
e.Table("userinfo").Where(user2).Where(user3).XXX
Therefore, we have to adjust e.WhereParam so that it splices onto the data generated by the previous call.
First judge whether it is empty. If it is not empty, this is at least the second call, and we use " and (" as the separator.
//Judge for multiple calls
if e.WhereParam != "" {
	e.WhereParam += " and ("
} else {
	e.WhereParam += "("
}
//At the end of the splicing, append the closing parenthesis ") "
e.WhereParam += strings.Join(fieldNameArray, " and ") + ") "
In this way, we have achieved our goal. Let's look at the print results after multiple calls:
WhereParam = "(username=? and departname=? and status=?) and (username=? and departname=? and status=?)"
WhereExec = []interface{}{"EE", "22", 1, "FF", "33", 0}
Note that for this calling style, in order to keep the call structure simple and clear, the relationship between each field and its value defaults to =. If you need other comparison operators, use the following style.
- Calling with string parameters
The Where method above takes a struct, like Insert. But a struct has to be defined first and then instantiated, which is troublesome, especially when we only need to query by a single field; defining yet another struct just for that is overkill. Therefore our ORM must also provide quick calls such as:
Where("uid", "=", 1234)
Where("uid", ">=", 1234)
Where("uid", "in", []int{2, 3, 4})
In this way, we can also use comparison expressions other than =, such as !=, like, not in, in, etc.
OK, let's write it. How is this handled? It is simpler than the struct version: the method takes three parameters, the first is the field to query, the second is the comparison operator, and the third is the value.
func (e *SmallormEngine) Where(fieldName string, opt string, fieldValue interface{}) *SmallormEngine {
	//Normalize the operator to handle case differences of in
	opt2 := strings.Trim(strings.ToLower(opt), " ")
	if opt2 == "in" || opt2 == "not in" {
		//Judge whether a slice/array was passed in
		reType := reflect.TypeOf(fieldValue).Kind()
		if reType != reflect.Slice && reType != reflect.Array {
			panic("the data passed to an in / not in operation must be a slice or array")
		}
		//Reflect the value
		v := reflect.ValueOf(fieldValue)
		//Array/slice length
		dataNum := v.Len()
		//Placeholders
		ps := make([]string, dataNum)
		for i := 0; i < dataNum; i++ {
			ps[i] = "?"
			e.WhereExec = append(e.WhereExec, v.Index(i).Interface())
		}
		//Splice
		e.WhereParam += fieldName + " " + opt2 + " (" + strings.Join(ps, ",") + ")) "
	} else {
		e.WhereParam += fieldName + " " + opt + " ?) "
		e.WhereExec = append(e.WhereExec, fieldValue)
	}
	return e
}
The only thing to note in the code above is: if the second parameter is the in operator and the third parameter is a slice, we must reflect over it and expand it into the in (?,?,?) form.
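The in expansion step can be shown in isolation. expandIn is a hypothetical helper that reflects over any slice or array, producing the (?,?,?) clause and the flattened values destined for Exec:

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

// expandIn reflects over a slice/array value and produces the placeholder
// list for an `in (...)` clause plus the flattened values for Exec.
// Hypothetical helper isolating the reflection step described above.
func expandIn(fieldValue interface{}) (string, []interface{}) {
	v := reflect.ValueOf(fieldValue)
	ps := make([]string, v.Len())
	vals := make([]interface{}, v.Len())
	for i := 0; i < v.Len(); i++ {
		ps[i] = "?"
		vals[i] = v.Index(i).Interface()
	}
	return "(" + strings.Join(ps, ",") + ")", vals
}

func main() {
	clause, vals := expandIn([]int{2, 3, 4})
	fmt.Println(clause, vals) // prints: (?,?,?) [2 3 4]
}
```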
Therefore, we can merge the two calling styles into one method that distinguishes them automatically. Here is the complete code:
//Where condition, variadic
func (e *SmallormEngine) Where(data ...interface{}) *SmallormEngine {
	//Judge whether it is a struct or several strings
	var dataType int
	if len(data) == 1 {
		dataType = 1
	} else if len(data) == 2 {
		dataType = 2
	} else if len(data) == 3 {
		dataType = 3
	} else {
		panic("wrong number of parameters")
	}

	//Judge for multiple calls
	if e.WhereParam != "" {
		e.WhereParam += " and ("
	} else {
		e.WhereParam += "("
	}

	//If it is a struct
	if dataType == 1 {
		t := reflect.TypeOf(data[0])
		v := reflect.ValueOf(data[0])
		//Column names
		var fieldNameArray []string
		//Loop over the fields
		for i := 0; i < t.NumField(); i++ {
			//Fields starting with lowercase cannot be reflected, skip
			if !v.Field(i).CanInterface() {
				continue
			}
			//Parse the tag and find the real SQL column name
			sqlTag := t.Field(i).Tag.Get("sql")
			if sqlTag != "" {
				fieldNameArray = append(fieldNameArray, strings.Split(sqlTag, ",")[0]+"=?")
			} else {
				fieldNameArray = append(fieldNameArray, t.Field(i).Name+"=?")
			}
			e.WhereExec = append(e.WhereExec, v.Field(i).Interface())
		}
		//Splice
		e.WhereParam += strings.Join(fieldNameArray, " and ") + ") "
	} else if dataType == 2 {
		//Direct =
		e.WhereParam += data[0].(string) + "=?) "
		e.WhereExec = append(e.WhereExec, data[1])
	} else if dataType == 3 {
		//3 parameters; normalize the operator to handle case differences of in
		data2 := strings.Trim(strings.ToLower(data[1].(string)), " ")
		if data2 == "in" || data2 == "not in" {
			//Judge whether a slice/array was passed in
			reType := reflect.TypeOf(data[2]).Kind()
			if reType != reflect.Slice && reType != reflect.Array {
				panic("the data passed to an in / not in operation must be a slice or array")
			}
			//Reflect the value
			v := reflect.ValueOf(data[2])
			//Array/slice length
			dataNum := v.Len()
			//Placeholders
			ps := make([]string, dataNum)
			for i := 0; i < dataNum; i++ {
				ps[i] = "?"
				e.WhereExec = append(e.WhereExec, v.Index(i).Interface())
			}
			//Splice
			e.WhereParam += data[0].(string) + " " + data2 + " (" + strings.Join(ps, ",") + ")) "
		} else {
			e.WhereParam += data[0].(string) + " " + data[1].(string) + " ?) "
			e.WhereExec = append(e.WhereExec, data[2])
		}
	}
	return e
}
In this version the parameter list becomes a single data ...interface{} variadic parameter, so the caller may pass 1, 2 or 3 arguments. Inside the method we receive a slice, use the len() function to determine how many elements it has, and dispatch to the corresponding branch. Note that when a struct is passed in, we must access it as data[0].
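The len(data) dispatch can be sketched with a tiny helper. whereArity is hypothetical and only names the branch the variadic Where would take:

```go
package main

import "fmt"

// whereArity mirrors the len(data) dispatch in the variadic Where:
// 1 argument means a struct, 2 means field + value with an implied "=",
// 3 means field + operator + value. Hypothetical helper for illustration.
func whereArity(data ...interface{}) string {
	switch len(data) {
	case 1:
		return "struct"
	case 2:
		return "field=value"
	case 3:
		return "field op value"
	default:
		return "invalid"
	}
}

func main() {
	fmt.Println(whereArity("uid", 123))       // field=value
	fmt.Println(whereArity("uid", ">=", 123)) // field op value
}
```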
In this way, we can use the Where method to call quickly and happily:
// where uid = 123
e.Table("userinfo").Where("uid", 123)

// where uid not in (2,3,4)
e.Table("userinfo").Where("uid", "not in", []int{2, 3, 4})

// where uid in (2,3,4)
e.Table("userinfo").Where("uid", "in", []int{2, 3, 4})

// where uid like '%2%'
e.Table("userinfo").Where("uid", "like", "%2%")

// where uid >= 123
e.Table("userinfo").Where("uid", ">=", 123)

// where (uid >= 123) and (name = 'vv')
e.Table("userinfo").Where("uid", ">=", 123).Where("name", "vv")
(5) Condition OrWhere
The data blocks generated by the above Where method are and related. In fact, we have some sql that need or relationships, such as:
where (uid >= 123) or (name = 'vv')
where (uid = 123 and name = 'vv') or (uid = 456 and name = 'bb')
This situation also needs to be handled, and it is very simple to write. You only need to add a new OrWhereParam field and substitute it for WhereParam in the Where method above; WhereExec needs no change. Then change the joining relationship from and to or, and the rest of the code stays the same:
func (e *SmallormEngine) OrWhere(data ...interface{}) *SmallormEngine {

    ...

    //Judge the call order
    if e.WhereParam == "" {
        panic("OrWhere must be called after Where")
    }

    //WhereOr condition
    e.OrWhereParam += " or ("

    ...
    return e
}
It should be noted that OrWhere may only be called after Where has been called, because an or clause only makes sense when there is a preceding where condition in front of it.
Similarly, there are three calling methods:
OrWhere("uid", 1234)       //Defaults to equality
OrWhere("uid", ">=", 1234) //With an explicit operator
OrWhere(uidStruct)         //A structure is passed in; its fields are joined with and
Let's see the effect:
// where (uid = 123) or (name = "vv")
e.Table("userinfo").Where("uid", 123).OrWhere("name", "vv")

// where (uid not in (2,3,4)) or (uid not in (5,6,7))
e.Table("userinfo").Where("uid", "not in", []int{2, 3, 4}).OrWhere("uid", "not in", []int{5, 6, 7})

// where (uid like '%2') or (uid like '%5%')
e.Table("userinfo").Where("uid", "like", "%2").OrWhere("uid", "like", "%5%")

// where (uid >= 123) or (uid <= 454)
e.Table("userinfo").Where("uid", ">=", 123).OrWhere("uid", "<=", 454)

// where (username = "EE" and departname = "22" and status = 1) or (name = 'vv') or (status = 1)
type User struct {
    Username   string `sql:"username"`
    Departname string `sql:"departname"`
    Status     int64  `sql:"status"`
}

user2 := User{
    Username:   "EE",
    Departname: "22",
    Status:     1,
}

e.Table("userinfo").Where(user2).OrWhere("name", "vv").OrWhere("status", 1)
To keep this method easy to use, the or relationship implemented here essentially applies between multiple Where/OrWhere calls. It does not support an or relationship between the fields inside a single Where. If you need that, you can call:
// where (username = "EE") or (departname = "22") or (status = 1)
e.Table("userinfo").Where("username", "EE").OrWhere("departname", "22").OrWhere("status", 1)
(6) Delete Delete
Delete is also one of the most common operations in sql logic. After completing the data binding of Where and OrWhere, delete is the simplest to write. Why? Because the Delete method is the final step of CRUD and interacts directly with the database, we don't need to reflect over any data for binding. We just need to feed the two values bound in Where into Prepare and Exec.
Let's see how it is written:
//Delete
func (e *SmallormEngine) Delete() (int64, error) {

    //Splice the delete SQL
    e.Prepare = "delete from " + e.GetTable()

    //Where is not empty
    if e.WhereParam != "" || e.OrWhereParam != "" {
        e.Prepare += " where " + e.WhereParam + e.OrWhereParam
    }

    //Limit is not empty
    if e.LimitParam != "" {
        e.Prepare += " limit " + e.LimitParam
    }

    //Step 1: prepare
    var stmt *sql.Stmt
    var err error
    stmt, err = e.Db.Prepare(e.Prepare)
    if err != nil {
        return 0, err
    }

    e.AllExec = e.WhereExec

    //Step 2: execute; note that this is stmt.Exec
    result, err := stmt.Exec(e.AllExec...)
    if err != nil {
        return 0, e.setErrorInfo(err)
    }

    //Number of rows affected
    rowsAffected, err := result.RowsAffected()
    if err != nil {
        return 0, e.setErrorInfo(err)
    }
    return rowsAffected, nil
}
Does it look familiar? The logic is almost identical to the Insert method; only the sql statement in e.Prepare differs.
Look at the calling method and result as follows:
// delete from userinfo where (uid >= 123) or (uid <= 454)
rowsAffected, err := e.Table("userinfo").Where("uid", ">=", 123).OrWhere("uid", "<=", 454).Delete()
(7) Modify Update
Modifying data is likewise a final step of CRUD, but unlike Delete it has two sets of data to bind: the where data bound through the Where method, and the data to be updated, which we haven't handled yet.
update userinfo set status = 1 where (uid >= 123) or (uid <= 454)
We also need to refine the data of status=1 into a method of external exposure. Therefore, the final calling method will be as follows:
e.Table("userinfo").Where("uid", 123).Update("status", 1)
e.Table("userinfo").Where("uid", 123).Update(user2)
Similar to the variable parameters of Where, we also provide two parameter transfer methods. You can pass in either a structure variable or only a single updated variable, which will be more convenient and flexible.
In fact, as you can see, the way Update collects its data is almost exactly the same as the way Insert handles a single row.
Go directly to the code:
//Update
func (e *SmallormEngine) Update(data ...interface{}) (int64, error) {

    //Judge whether it is a structure or multiple strings
    var dataType int
    if len(data) == 1 {
        dataType = 1
    } else if len(data) == 2 {
        dataType = 2
    } else {
        return 0, errors.New("wrong number of parameters")
    }

    //If it is a structure
    if dataType == 1 {
        t := reflect.TypeOf(data[0])
        v := reflect.ValueOf(data[0])

        var fieldNameArray []string
        for i := 0; i < t.NumField(); i++ {

            //Skip unexported (lowercase) fields, which reflection cannot read
            if !v.Field(i).CanInterface() {
                continue
            }

            //Parse the tag to find the real SQL field name
            sqlTag := t.Field(i).Tag.Get("sql")
            if sqlTag != "" {
                fieldNameArray = append(fieldNameArray, strings.Split(sqlTag, ",")[0]+"=?")
            } else {
                fieldNameArray = append(fieldNameArray, t.Field(i).Name+"=?")
            }

            e.UpdateExec = append(e.UpdateExec, v.Field(i).Interface())
        }
        e.UpdateParam += strings.Join(fieldNameArray, ",")
    } else if dataType == 2 {
        e.UpdateParam += data[0].(string) + "=?"
        e.UpdateExec = append(e.UpdateExec, data[1])
    }

    //Splice the SQL
    e.Prepare = "update " + e.GetTable() + " set " + e.UpdateParam

    //Where is not empty
    if e.WhereParam != "" || e.OrWhereParam != "" {
        e.Prepare += " where " + e.WhereParam + e.OrWhereParam
    }

    //Limit is not empty
    if e.LimitParam != "" {
        e.Prepare += " limit " + e.LimitParam
    }

    //prepare
    var stmt *sql.Stmt
    var err error
    stmt, err = e.Db.Prepare(e.Prepare)
    if err != nil {
        return 0, e.setErrorInfo(err)
    }

    //Merge UpdateExec and WhereExec
    if e.WhereExec != nil {
        e.AllExec = append(e.UpdateExec, e.WhereExec...)
    }

    //Execute exec; note that this is stmt.Exec
    result, err := stmt.Exec(e.AllExec...)
    if err != nil {
        return 0, e.setErrorInfo(err)
    }

    //Number of rows affected
    id, err := result.RowsAffected()
    if err != nil {
        return 0, e.setErrorInfo(err)
    }
    return id, nil
}
One thing to note when merging UpdateExec and WhereExec: you must add ... after e.WhereExec. Its purpose is to expand the slice into individual variadic arguments so they can be appended to the UpdateExec slice one by one. If you omit it, you get a compile error:
cannot use []interface{} literal (type []interface{}) as type interface{} in append
In golang, there seems to be no built-in function that directly merges two slices, similar to PHP's array_merge; or perhaps I just haven't found it yet.
$a1 = array("red", "green");
$a2 = array("blue", "yellow");
print_r(array_merge($a1, $a2));
// Array ( [0] => red [1] => green [2] => blue [3] => yellow )
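For comparison, the idiomatic Go equivalent is append with the ... expansion. The small helper below is a sketch (the function name merge is mine, not the ORM's); it copies into a fresh slice so the inputs are left untouched:

```go
package main

import "fmt"

// merge mimics PHP's array_merge using append with the ... expansion;
// it builds a new slice rather than mutating its inputs
func merge(a, b []interface{}) []interface{} {
	out := make([]interface{}, 0, len(a)+len(b))
	out = append(out, a...)
	return append(out, b...)
}

func main() {
	a1 := []interface{}{"red", "green"}
	a2 := []interface{}{"blue", "yellow"}
	fmt.Println(merge(a1, a2)) // [red green blue yellow]
}
```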
(8) Query
Querying data is a constant need in sql. Having implemented the methods above, we are now familiar with insert, delete and update. It is worth noting, however, that queries are written differently in native go code: instead of Prepare and Exec, data is fetched through the QueryRow and Query methods, as you can see from the native golang query at the beginning of this article.
For example, to query a single row, we must first define variables for the query fields and then bind and assign them with Scan(). This writing style feels far too cumbersome; a PHP programmer would be stunned.
//Single row
var username, departname, status string
err := db.QueryRow("select username, departname, status from userinfo where uid=?", 4).
    Scan(&username, &departname, &status)
if err != nil {
    fmt.Println("QueryRow error :", err.Error())
}
fmt.Println("username: ", username, "departname: ", departname, "status: ", status)
Looking at multi-row queries, the first step is to define the data structure of the result, then instantiate a slice of it, and then assign values to the slice through a for loop. It is worth noting that the number of fields in the data structure must match the number of fields selected, otherwise the binding fails. The PHP programmer is stunned again.
//Multiple rows
rows, err := db.Query("select username, departname, created from userinfo where username=?", "yang")
if err != nil {
    fmt.Println("Query error :", err.Error())
}

//Define a structure to hold the data model
type UserInfo struct {
    Username   string `json:"username"`
    Departname string `json:"departname"`
    Created    string `json:"created"`
}

//Initialize
var user []UserInfo
for rows.Next() {
    var username1, departname1, created1 string
    if err := rows.Scan(&username1, &departname1, &created1); err != nil {
        fmt.Println("Query error :", err.Error())
    }
    user = append(user, UserInfo{Username: username1, Departname: departname1, Created: created1})
}
Troublesome as it is, we still have to peel it apart layer by layer, find the underlying rules, and produce data in this format through our own methods. Query will therefore be the next difficulty and challenge.
To reduce the complexity of the internal query logic, we abandon QueryRow for single-row queries and use Query plus for rows.Next() throughout. For a single row, we simply add a limit 1 internally to restrict the count, which satisfies the requirement.
Let's start.
- Query multiple Select() and the return value is map slice
Considering that it is too troublesome to define a data structure in advance and initialize it into an array, I wonder if I can pass nothing? Directly output a map slice with the same name to me according to the field name in the data table? Give it a try.
For example, the userinfo table has four fields: uid, username, departname, status. We can query as follows and get back a slice of maps. Isn't that nice?
result, err := e.Table("userinfo").Where("status", 1).Select()
Return to:
//type: []map[string]string
//value: [map[departname:v status:1 uid:123 username:yang] map[departname:n status:0 uid:456 username:small]]
The premise of this implementation is that we can obtain the table's column names, which we can then turn into map keys. That is easy: the rows value returned by db.Query provides a Columns() method, which returns the column names of the current query.
For example:
rows, err := db.Query("select uid, username, departname, status from userinfo where username=?", "yang")
if err != nil {
    fmt.Println("Query error :", err.Error())
}

column, err := rows.Columns()
if err != nil {
    fmt.Println("rows.Columns error :", err.Error())
}

fmt.Println(column)
Let's look at the return value:
[uid username departname status]
If we can get the column names, we are half done. The next problem is data binding with rows.Scan(). Since we have no predefined data type to bind to, the receiving variables can only be generated dynamically. Let's first look at how the native Scan() is called.
In each iteration of the for loop, four variables with empty initial values are temporarily created, and their addresses are passed to Scan(), which assigns values through those addresses. The four names are therefore unimportant; pick any names you like, since only their addresses matter in the end.
for rows.Next() {
    var uid1, username1, departname1, status1 string
    rows.Scan(&uid1, &username1, &departname1, &status1)
    fmt.Println(uid1, username1, departname1, status1)
}
In this way, we print these four variables, and they all have values:
1 yang v 0
12 yi b 1
...
By taking advantage of this, we can dynamically generate two slices according to the number of fields returned by Columns to solve the mapping problem:
//Read out the column names of the query
column, err := rows.Columns()
if err != nil {
    return nil, e.setErrorInfo(err)
}

//values holds each column's value, obtained here as bytes
values := make([][]byte, len(column))

//Because the number of columns varies per query, use len(column) to fix the length for this query
scans := make([]interface{}, len(column))
for i := range values {
    scans[i] = &values[i]
}
We have created two new slices. The first is values, whose elements start out empty; the second is scans, a slice of the empty interface type. Through a for loop, each element of scans is set to the address of the corresponding element in values. The two are thereby deeply bound.
One to one correspondence:
// Print the value of column
[uid username departname status]

// Print the value of values
[[] [] [] []]

// Print the value of scans
[0xc00056180 0xc00056198 0xc000561b0 0xc000561c8]
What is the benefit? The Scan() method needs addresses to write into. We can now do this:
for rows.Next() { rows.Scan(scans[0], scans[1],scans[2], scans[3]) }
Thus scans[0] corresponds to uid1 in the example above, and scans[3] to status1. Because scans[0] holds the address of values[0], once Scan writes through it, values[0] takes on the real value. When this operation completes, the contents of values change:
// Print the value of column
[uid username departname status]

// Print the value of scans
[0xc00056180 0xc00056198 0xc000561b0 0xc000561c8]

// Print the value of values
[1 yang v 0]
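The binding mechanics can be demonstrated without a database at all. In this sketch (the helper name bindAndFill is mine), the write through the pointers stored in scans stands in for what rows.Scan does internally:

```go
package main

import "fmt"

// bindAndFill simulates the values/scans trick without a database:
// scans holds pointers into values, so writing through scans
// (as rows.Scan would) changes values directly
func bindAndFill(row []string) [][]byte {
	values := make([][]byte, len(row))
	scans := make([]interface{}, len(row))
	for i := range values {
		scans[i] = &values[i]
	}
	// stand-in for rows.Scan(scans...): write through each pointer
	for i, s := range scans {
		*(s.(*[]byte)) = []byte(row[i])
	}
	return values
}

func main() {
	values := bindAndFill([]string{"1", "yang", "v", "0"})
	fmt.Println(string(values[1])) // yang
}
```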
Then, by mapping the subscripts of the three slices, we can match the table fields and values and splice them into a map.
Now there is a problem. If scans contains ten or even dozens of elements, do we still write them out one by one as scans[0], scans[1] ... scans[n]? That would be no different from writing the native code by hand. Is there a way to handle an uncertain number of parameters? Of course; look at the code:
results := make([]map[string]string, 0)

for rows.Next() {
    if err := rows.Scan(scans...); err != nil {
        return nil, e.setErrorInfo(err)
    }

    //One row of data
    row := make(map[string]string)

    //Loop over values; use the shared index to fetch the column name and build a new map
    for k, v := range values {
        key := column[k]
        row[key] = string(v)
    }

    //Add to the map slice
    results = append(results, row)
}
In this way, we have solved the most critical and central problem: mapping columns to values. By the way, rows.Scan(scans...) is the most ingenious piece, arguably the heart of this method. It expands the slice we pass in into individual variadic arguments, which solves the problem of an unknown number of table columns in our general-purpose function.
rows.Scan(scans[0], scans[1], scans[2], scans[3])
                    ↓↓↓
rows.Scan(scans...)
This way, even if there are 100 entries in scans, it doesn't matter; Scan will handle them all.
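The ... expansion is a general Go feature, not something special to Scan. A minimal illustration (concat is a made-up helper) of one slice standing in for any number of arguments:

```go
package main

import "fmt"

// concat takes a variable number of arguments, like Scan does;
// the ... expansion lets a single slice supply them all
func concat(parts ...string) string {
	out := ""
	for _, p := range parts {
		out += p
	}
	return out
}

func main() {
	args := []string{"a", "b", "c"}
	fmt.Println(concat(args...)) // abc
}
```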
OK, let's look at the complete code of this method:
//Query multiple rows; the return value is a map slice
func (e *SmallormEngine) Select() ([]map[string]string, error) {

    //Splice the SQL
    e.Prepare = "select * from " + e.GetTable()

    //Where is not empty
    if e.WhereParam != "" || e.OrWhereParam != "" {
        e.Prepare += " where " + e.WhereParam + e.OrWhereParam
    }

    e.AllExec = e.WhereExec

    //query
    rows, err := e.Db.Query(e.Prepare, e.AllExec...)
    if err != nil {
        return nil, e.setErrorInfo(err)
    }

    //Read out the column names of the query
    column, err := rows.Columns()
    if err != nil {
        return nil, e.setErrorInfo(err)
    }

    //values holds each column's value, obtained here as bytes
    values := make([][]byte, len(column))

    //Because the number of columns varies per query, use len(column) to fix the length for this query
    scans := make([]interface{}, len(column))
    for i := range values {
        scans[i] = &values[i]
    }

    results := make([]map[string]string, 0)
    for rows.Next() {
        //Scan writes each row into values through the pointers in scans
        if err := rows.Scan(scans...); err != nil {
            return nil, e.setErrorInfo(err)
        }

        //One row of data
        row := make(map[string]string)

        //Loop over values; use the shared index to fetch the column name and build a new map
        for k, v := range values {
            key := column[k]
            row[key] = string(v)
        }

        //Add to the map slice
        results = append(results, row)
    }
    return results, nil
}
In this way, we can query data very easily, though this method has two small side effects:
- In the returned map slice, the keys are the database field names (which may all be lowercase). If we want to map them onto a structure with capitalized field names, we have to write our own method.
- It converts all field types in the table to string, which in practice has little impact.
- Query a single SelectOne(), and the return value is map
With the above theoretical knowledge base of querying multiple items, querying a single item becomes extremely simple. You only need to add a limit 1 to the last part of the sql execution, and take the 0th data in the returned map slice.
//Query a single row
func (e *SmallormEngine) SelectOne() (map[string]string, error) {

    //limit 1 for a single-row query
    results, err := e.Limit(1).Select()
    if err != nil {
        return nil, e.setErrorInfo(err)
    }

    //Judge whether it is empty
    if len(results) == 0 {
        return nil, nil
    }
    return results[0], nil
}
The function of the Limit() method is to splice limit 1 at the back of sql, which will be described in detail in the following chapter. In this way, we can get a single map data through the SelectOne method.
In this way, we can easily query a single piece of data:
result, err := e.Table("userinfo").Where("status", 1).SelectOne()
Return to:
//type: map[string]string
//value: map[departname:v status:1 uid:123 username:yang]
- Query multiple Find(), and the return value is the reference structure slice
In fact, this method is a simple wrapper over the native go query. After all, many people prefer to define the data structure first and then assign values through a reference, and this is also how most go ORMs implement query operations.
//Define the structure
type User struct {
    Uid        int    `sql:"uid,auto_increment"`
    Username   string `sql:"username"`
    Departname string `sql:"departname"`
    Status     int64  `sql:"status"`
}

//Instantiate a slice
var user1 []User

// select * from userinfo where status=2
err := e.Table("userinfo").Where("status", 2).Find(&user1)
if err != nil {
    fmt.Println(err.Error())
} else {
    fmt.Printf("%#v", user1)
}
Look at the printed data
[]smallorm.User{smallorm.User{Uid:131733, Username:"EE2", Departname:"223", Status:2}, smallorm.User{Uid:131734, Username:"EE2", Departname:"223", Status:2}, smallorm.User{Uid:131735, Username:"EE2", Departname:"223", Status:2}}
Let's first sort out a general call and logical processing process in our mind:
- First, define a structure, and the fields in it are associated with the fields of the table through the tag tag
- Initialize an empty structure slice, and then pass it to the Find() method by & taking the address character
- The Find() method first obtains the column names of the table, then binds the query results to the incoming structure slice through the tag association and various reflection tools, and assigns the values into it.
So, step 3 is the most complex. It needs to get every value in the incoming structure slice, and assign all the query results to it. God, it feels so difficult!!! I can't do this problem.
Later, after reading a lot of GORM's source code and studying go's reflection documentation, I gradually found the thread. This problem is simple after all!
First of all, like the Select method, we need to parse the field names of the table, because this needs to be one-to-one corresponding to tag:sql: "xx".
//Read out the column names of the query
column, err := rows.Columns()
if err != nil {
    return e.setErrorInfo(err)
}

//values holds each column's value, obtained here as bytes
values := make([][]byte, len(column))

//Because the number of columns varies per query, use len(column) to fix the length for this query
scans := make([]interface{}, len(column))
for i := range values {
    scans[i] = &values[i]
}
The steps above are the same as before: the queried data ends up in values, so I won't repeat them. The following is the most critical step:
//Value of the original struct slice
destSlice := reflect.ValueOf(result).Elem()

//Type of a single original struct
destType := destSlice.Type().Elem()
Through these two magical (borderline perverse) go reflection calls, we obtain the type and value of the incoming User structure slice. Print them and see:
fmt.Printf("%+v\n", destSlice) // []
fmt.Printf("%+v", destType)    // main.User
ok, we have successfully resolved what the incoming structure looks like, and then we can continue according to a series of for loops and various magical go reflection methods:
destType.NumField()              //Number of fields in the User structure; here it returns: 4
destType.Field(i).Tag.Get("sql") //Tag value of the ith field of the User structure, e.g. returns: `username`
destType.Field(i).Name           //Name of the ith field of the User structure, e.g. returns: `Username`
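These three calls can be tried out in isolation. Below is a minimal, runnable sketch using a trimmed two-field User (the helper names sqlTag and fieldName are mine, added only to exercise the reflection API):

```go
package main

import (
	"fmt"
	"reflect"
)

// User is a trimmed version of the tagged structure used in this section
type User struct {
	Uid      int    `sql:"uid,auto_increment"`
	Username string `sql:"username"`
}

// sqlTag returns the sql tag of the ith field of User
func sqlTag(i int) string {
	return reflect.TypeOf(User{}).Field(i).Tag.Get("sql")
}

// fieldName returns the Go name of the ith field of User
func fieldName(i int) string {
	return reflect.TypeOf(User{}).Field(i).Name
}

func main() {
	fmt.Println(reflect.TypeOf(User{}).NumField()) // 2
	fmt.Println(sqlTag(1))                         // username
	fmt.Println(fieldName(1))                      // Username
}
```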
Then assign values to them through these reflections:
dest := reflect.New(destType).Elem()           // Generate a new zero value of the type; returns: {Uid:0 Username: Departname: Status:0}
dest.Field(i).SetString(value)                 // Set the ith field; here the type is string
destSlice.Set(reflect.Append(destSlice, dest)) // Append dest to destSlice, then assign the resulting slice back to itself
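Putting those three reflection calls together, the append step can be tested without any database. The helper appendRow below is a sketch of Find's inner loop under an assumed two-field struct (both names are mine, not the ORM's):

```go
package main

import (
	"fmt"
	"reflect"
)

type Person struct {
	Uid      int64
	Username string
}

// appendRow mimics Find's inner loop: given a pointer to a slice,
// it builds one new element with reflection and appends it
func appendRow(result interface{}, uid int64, name string) {
	destSlice := reflect.ValueOf(result).Elem() // the []Person value
	destType := destSlice.Type().Elem()         // the Person type

	dest := reflect.New(destType).Elem() // {Uid:0 Username:}
	dest.Field(0).SetInt(uid)
	dest.Field(1).SetString(name)
	destSlice.Set(reflect.Append(destSlice, dest))
}

func main() {
	var people []Person
	appendRow(&people, 123, "yang")
	fmt.Printf("%+v\n", people) // [{Uid:123 Username:yang}]
}
```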
Maybe this reflection operation has knocked you out. To be honest, I'm also dizzy. Now look at the complete function:
//Query multiple rows; the return value is a struct slice
func (e *SmallormEngine) Find(result interface{}) error {

    if reflect.ValueOf(result).Kind() != reflect.Ptr {
        return e.setErrorInfo(errors.New("Parameter please pass pointer variable!"))
    }

    if reflect.ValueOf(result).IsNil() {
        return e.setErrorInfo(errors.New("Parameter cannot be null pointer!"))
    }

    //Splice the SQL
    e.Prepare = "select * from " + e.GetTable()
    e.AllExec = e.WhereExec

    //query
    rows, err := e.Db.Query(e.Prepare, e.AllExec...)
    if err != nil {
        return e.setErrorInfo(err)
    }

    //Read out the column names of the query
    column, err := rows.Columns()
    if err != nil {
        return e.setErrorInfo(err)
    }

    //values holds each column's value, obtained here as bytes
    values := make([][]byte, len(column))

    //Because the number of columns varies per query, use len(column) to fix the length for this query
    scans := make([]interface{}, len(column))

    //Value of the original struct slice
    destSlice := reflect.ValueOf(result).Elem()

    //Type of a single original struct
    destType := destSlice.Type().Elem()

    for i := range values {
        scans[i] = &values[i]
    }

    //Loop over the rows
    for rows.Next() {

        dest := reflect.New(destType).Elem()

        //Scan writes each row into values through the pointers in scans
        if err := rows.Scan(scans...); err != nil {
            return e.setErrorInfo(err)
        }

        //Traverse the columns of one row of data
        for k, v := range values {
            key := column[k]
            value := string(v)

            //Traverse the structure fields
            for i := 0; i < destType.NumField(); i++ {

                //Check whether there is an SQL alias
                sqlTag := destType.Field(i).Tag.Get("sql")
                var fieldName string
                if sqlTag != "" {
                    fieldName = strings.Split(sqlTag, ",")[0]
                } else {
                    fieldName = destType.Field(i).Name
                }

                //The struct has no field for this key
                if key != fieldName {
                    continue
                }

                //Reflection assignment
                if err := e.reflectSet(dest, i, value); err != nil {
                    return err
                }
            }
        }

        //Append the row to the slice
        destSlice.Set(reflect.Append(destSlice, dest))
    }
    return nil
}
We added several parameter checks at the front of the method, also based on reflection, to verify that the passed value is a pointer. For the reflection assignment I wrote a general method, reflectSet, to match field types: it enumerates the possible types of each field in the result set and converts each value into the type declared in the actual structure. Because go is strictly typed, reflection assignment must follow the concrete type of each structure field.
//Reflection assignment
func (e *SmallormEngine) reflectSet(dest reflect.Value, i int, value string) error {

    switch dest.Field(i).Kind() {
    case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
        res, err := strconv.ParseInt(value, 10, 64)
        if err != nil {
            return e.setErrorInfo(err)
        }
        dest.Field(i).SetInt(res)
    case reflect.String:
        dest.Field(i).SetString(value)
    case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
        res, err := strconv.ParseUint(value, 10, 64)
        if err != nil {
            return e.setErrorInfo(err)
        }
        dest.Field(i).SetUint(res)
    case reflect.Float32:
        res, err := strconv.ParseFloat(value, 32)
        if err != nil {
            return e.setErrorInfo(err)
        }
        dest.Field(i).SetFloat(res)
    case reflect.Float64:
        res, err := strconv.ParseFloat(value, 64)
        if err != nil {
            return e.setErrorInfo(err)
        }
        dest.Field(i).SetFloat(res)
    case reflect.Bool:
        res, err := strconv.ParseBool(value)
        if err != nil {
            return e.setErrorInfo(err)
        }
        dest.Field(i).SetBool(res)
    }
    return nil
}
The switch on dest.Field(i).Kind() matches each structure field's type one by one; strconv.ParseXxx() then converts the string read from the database into the corresponding type, and finally SetXxx() writes it into the field.
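A trimmed, database-free version of this Kind-switched assignment can be run directly. The helpers setField and fillRow below are sketches of the same idea on an assumed two-field struct (int64 and string cases only):

```go
package main

import (
	"fmt"
	"reflect"
	"strconv"
)

type Row struct {
	Status int64
	Name   string
}

// setField picks the conversion by the field's Kind(), like reflectSet
func setField(dest reflect.Value, i int, value string) error {
	switch dest.Field(i).Kind() {
	case reflect.Int64:
		n, err := strconv.ParseInt(value, 10, 64)
		if err != nil {
			return err
		}
		dest.Field(i).SetInt(n)
	case reflect.String:
		dest.Field(i).SetString(value)
	}
	return nil
}

// fillRow builds a Row from two string column values
func fillRow(status, name string) (Row, error) {
	var r Row
	v := reflect.ValueOf(&r).Elem()
	if err := setField(v, 0, status); err != nil {
		return r, err
	}
	err := setField(v, 1, name)
	return r, err
}

func main() {
	r, _ := fillRow("5", "yang")
	fmt.Printf("%+v\n", r) // {Status:5 Name:yang}
}
```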
- Query a single FindOne(), and the return value is the reference structure
The logic of multiple items is solved, and a single item is very simple. It is done in two steps: the first step is to set Limit 1, and the second step is to return the 0th data of the structure.
//Query a single row; the return value is a struct
func (e *SmallormEngine) FindOne(result interface{}) error {

    //The original value
    dest := reflect.Indirect(reflect.ValueOf(result))

    //New a slice of the same type
    destSlice := reflect.New(reflect.SliceOf(dest.Type())).Elem()

    //Call Find with limit 1
    if err := e.Limit(1).Find(destSlice.Addr().Interface()); err != nil {
        return err
    }

    //Judge the length of the return value
    if destSlice.Len() == 0 {
        return e.setErrorInfo(errors.New("NOT FOUND"))
    }

    //Take the 0th element of the slice and copy it into the original struct
    dest.Set(destSlice.Index(0))
    return nil
}
However, the actual process takes a few more steps than expected, and it is another piece of reflection logic. Look closely at the parameter of the Find() method: it is a pointer to a slice, i.e. its underlying value is a slice. This time, FindOne() is passed a structure pointer, a single piece of data rather than a slice. That is a problem, because the types don't match and can't be passed through. So what do we do?
Omnipotent reflection must be the solution. After reading countless documents and manuals, I finally found it: based on the incoming single structure, dynamically generate a slice parameter through reflection and pass that to Find().
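The slice-building trick can be verified in isolation. The helper firstInto below is a sketch: it builds a slice of the struct's type via reflection, fills it with fake rows in place of a real Find call, and copies element 0 back (the names firstInto and Account are mine):

```go
package main

import (
	"fmt"
	"reflect"
)

type Account struct{ Uid int }

// firstInto sketches FindOne's trick: from a pointer to one struct,
// build a slice of that struct type dynamically, fill it (here with
// fake rows instead of a database query), and copy element 0 back
func firstInto(result interface{}, rows ...interface{}) bool {
	dest := reflect.Indirect(reflect.ValueOf(result))
	destSlice := reflect.New(reflect.SliceOf(dest.Type())).Elem()
	for _, r := range rows {
		destSlice = reflect.Append(destSlice, reflect.ValueOf(r))
	}
	if destSlice.Len() == 0 {
		return false // NOT FOUND
	}
	dest.Set(destSlice.Index(0))
	return true
}

func main() {
	var a Account
	firstInto(&a, Account{Uid: 7}, Account{Uid: 8})
	fmt.Println(a.Uid) // 7
}
```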
OK, let's call:
//Define the structure
type User struct {
    Uid        int    `sql:"uid,auto_increment"`
    Username   string `sql:"username"`
    Departname string `sql:"departname"`
    Status     int64  `sql:"status"`
}

//Instantiate the data
var user1 User

// select * from userinfo where status=2
err := e.Table("userinfo").Where("status", 2).FindOne(&user1)
if err != nil {
    fmt.Println(err.Error())
} else {
    fmt.Printf("%#v", user1)
}
Look at the printed data
smallorm.User{Uid:131733, Username:"EE2", Departname:"223", Status:2}
(9) Set query field Field
Setting query fields is a very basic and important function, because we usually like to use select * when querying data. It will read out all the fields of the table. In a scenario with a large amount of data, it is actually very inefficient and wasteful. This ORM also uses this method to specify the query field, which can be called as follows:
e.Table("userinfo").Where("status", 2).Field("uid,status").Select()
Because it is called in a chain mode and has no data attributes, it can be placed anywhere in the middle part:
e.Table("userinfo").Field("uid,status").Where("status", 2).Select()
The implementation logic is also very simple. Just assign a value to the FieldParam of SmallormEngine:
//Set the query fields
func (e *SmallormEngine) Field(field string) *SmallormEngine {
    e.FieldParam = field
    return e
}
Then, in the relevant query methods Select/Find, we can splice sql in this way:
e.Prepare = "select " + e.FieldParam + " from " + e.GetTable()
The initial value of e.FieldParam is "*", set in NewMysql. So even if Field() is never called, Prepare starts with select *, and the logic stays intact.
*It is worth noting that the incoming fields are passed straight through without any checking or validation; this will be optimized in the second version.
(10) Set size Limit
Setting a limit controls the amount of data fetched. It is generally used either to query a single row, as in limit 1, or more often for paging: with 10 per page, the first page is limit 0, 9 and the second is limit 10, 19. limit therefore has two usages, so our ORM must accept one or two parameters, for example:
e.Table("userinfo").Where("status", 2).Limit(1).Select()
e.Table("userinfo").Where("status", 2).Limit(0, 9).Select()
Let's see how these two calling modes are implemented:
//Limit paging
func (e *SmallormEngine) Limit(limit ...int64) *SmallormEngine {
    if len(limit) == 1 {
        e.LimitParam = strconv.Itoa(int(limit[0]))
    } else if len(limit) == 2 {
        e.LimitParam = strconv.Itoa(int(limit[0])) + "," + strconv.Itoa(int(limit[1]))
    } else {
        panic("wrong number of parameters")
    }
    return e
}
We use variadic parameters here so the caller can pass either one or two values, and by checking the length we enforce exactly one or two, otherwise we panic. The assembled value is stored in the LimitParam field, so that during Find/Select we can check whether it is empty and append the limit clause to the sql:
//Limit is not empty
if e.LimitParam != "" {
    e.Prepare += " limit " + e.LimitParam
}
In this way, we add the limit statement to prepare.
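The parameter formatting can be pulled out into a stand-alone sketch and tested without the engine. The function limitParam below is hypothetical (the real Limit stores the result on the engine and panics instead of returning an error):

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// limitParam is a stand-alone sketch of Limit's parameter formatting:
// one value gives "n", two values give "offset,count"
func limitParam(limit ...int64) (string, error) {
	switch len(limit) {
	case 1:
		return strconv.FormatInt(limit[0], 10), nil
	case 2:
		return strconv.FormatInt(limit[0], 10) + "," + strconv.FormatInt(limit[1], 10), nil
	}
	return "", errors.New("wrong number of parameters")
}

func main() {
	p, _ := limitParam(0, 9)
	fmt.Println("limit " + p) // limit 0,9
}
```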
(11) Aggregate query Count/Max/Min/Avg/Sum
- Count() //get the total number
- Max() //get the maximum value
- Min() //get the minimum value
- Avg() //get the average value
- Sum() //get the sum
We use aggregate queries a lot. Their implementations are very similar in terms of SQL splicing: they replace the original select * with select xxx(field). Also, they produce only a single row, so we can finally use the db.QueryRow() method we skipped earlier; it needs no for loop over rows and exactly matches what these methods return.
Let's see how to write it. First, set two parameters corresponding to the specific aggregation function and the field name to be aggregated.
name corresponds to a specific aggregate function, and param corresponds to a specific field:
func (e *SmallormEngine) aggregateQuery(name, param string) (interface{}, error) {
    e.Prepare = "select " + name + "(" + param + ") as cnt from " + e.GetTable()
    ...
}
In this way, the main body of our general method is completed. If we want to realize the corresponding aggregation query function, we only need to pass two parameters. Next, let's look at the query section:
//Execute and bind
var cnt interface{}

//QueryRow returns a single row
err := e.Db.QueryRow(e.Prepare, e.AllExec...).Scan(&cnt)
if err != nil {
	return nil, e.setErrorInfo(err)
}
We declare a variable cnt of type interface{} to receive the final aggregation result. We use the interface type because the result type is uncertain: it may be an integer, a decimal or a floating-point value, such as an average. Here is the complete code:
//Aggregate query
func (e *SmallormEngine) aggregateQuery(name, param string) (interface{}, error) {
	//Splice SQL
	e.Prepare = "select " + name + "(" + param + ") as cnt from " + e.GetTable()

	//Where is not empty
	if e.WhereParam != "" || e.OrWhereParam != "" {
		e.Prepare += " where " + e.WhereParam + e.OrWhereParam
	}

	//Limit is not empty
	if e.LimitParam != "" {
		e.Prepare += " limit " + e.LimitParam
	}

	e.AllExec = e.WhereExec

	//Generate the final SQL
	e.generateSql()

	//Execute and bind
	var cnt interface{}
	err := e.Db.QueryRow(e.Prepare, e.AllExec...).Scan(&cnt)
	if err != nil {
		return nil, e.setErrorInfo(err)
	}
	return cnt, err
}
OK, in this way, we have completed the general body part of the aggregate function, and then the differences.
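The SQL-splicing part of the general method can be sketched on its own. buildAggregateSql below is a hypothetical helper that only reproduces the string assembly, without the engine state:

```go
package main

import "fmt"

// buildAggregateSql is a hypothetical sketch of the splicing done by
// aggregateQuery above: "select name(param) as cnt from table [where ...]".
func buildAggregateSql(name, param, table, where string) string {
	sql := "select " + name + "(" + param + ") as cnt from " + table
	if where != "" {
		sql += " where " + where
	}
	return sql
}

func main() {
	fmt.Println(buildAggregateSql("count", "*", "userinfo", "(uid >= ?)"))
	// select count(*) as cnt from userinfo where (uid >= ?)
}
```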
- Get total Count
You can use the Count() method to get the total number of rows. The returned total is of type int64, and this is the last operation in the chain. We pass "count" as the first parameter; since taking a total is usually written count(*) or count(1), we pass "*" as the second parameter here.
//Count: get the total number
func (e *SmallormEngine) Count() (int64, error) {
	count, err := e.aggregateQuery("count", "*")
	if err != nil {
		return 0, e.setErrorInfo(err)
	}
	return count.(int64), err
}
For the final return value, we use the type assertion count.(int64) to convert the format.
- Get Max
You can use the Max() method to get the maximum value of a field. The type of the total returned is string, which is the last operation of the chain structure. The first parameter is passed to max, and the second parameter is passed to a table field.
//Max: get the maximum value
func (e *SmallormEngine) Max(param string) (string, error) {
	max, err := e.aggregateQuery("max", param)
	if err != nil {
		return "0", e.setErrorInfo(err)
	}
	return string(max.([]byte)), nil
}
The return value is of string type because the maximum is not limited to int table fields; sometimes it is the maximum of a time field, for example, so returning a string is the most appropriate choice.
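The conversion itself relies on the fact that the MySQL driver typically scans an unknown column into []byte. The helper below is a hypothetical sketch of that assertion-and-convert step, not engine code:

```go
package main

import "fmt"

// toString is a hypothetical sketch of the conversion used by Max/Min/Avg/Sum
// above: the scanned interface{} usually holds []byte, which is converted to
// string; an int64 (as Count sees) is formatted for completeness.
func toString(v interface{}) string {
	switch val := v.(type) {
	case []byte:
		return string(val)
	case int64:
		return fmt.Sprintf("%d", val)
	default:
		return fmt.Sprintf("%v", val)
	}
}

func main() {
	fmt.Println(toString([]byte("2021-11-04"))) // 2021-11-04
	fmt.Println(toString(int64(42)))            // 42
}
```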
- Get minimum Min
You can use the Min() method to get the minimum value of a field. The type of total returned is string, which is the last operation of the chain structure. The first parameter is passed to min, and the second parameter is passed to a table field.
//Min: get the minimum value
func (e *SmallormEngine) Min(param string) (string, error) {
	min, err := e.aggregateQuery("min", param)
	if err != nil {
		return "0", e.setErrorInfo(err)
	}
	return string(min.([]byte)), nil
}
- Get average Avg
Avg() method can be used to get the average value of a field. The type of returned total is string, which is the last operation of chain structure. The first parameter is passed to avg, and the second parameter is passed to a table field.
//Avg: get the average value
func (e *SmallormEngine) Avg(param string) (string, error) {
	avg, err := e.aggregateQuery("avg", param)
	if err != nil {
		return "0", e.setErrorInfo(err)
	}
	return string(avg.([]byte)), nil
}
- Get Sum
You can use the Sum() method to get the sum of a certain field. The type of the total returned is string, which is the last operation of the chain structure. The first parameter is passed to sum, and the second parameter is passed to a table field.
//Sum: get the sum
func (e *SmallormEngine) Sum(param string) (string, error) {
	sum, err := e.aggregateQuery("sum", param)
	if err != nil {
		return "0", e.setErrorInfo(err)
	}
	return string(sum.([]byte)), nil
}
Next, let's quickly call:
//select count(*) as cnt from userinfo where (uid >= 10805)
cnt, err := e.Table("userinfo").Where("uid", ">=", 10805).Count()

//select max(uid) as cnt from userinfo where (uid >= 10805)
max, err := e.Table("userinfo").Where("uid", ">=", 10805).Max("uid")

//select min(uid) as cnt from userinfo where (uid >= 10805)
min, err := e.Table("userinfo").Where("uid", ">=", 10805).Min("uid")

//select avg(uid) as cnt from userinfo where (uid >= 10805)
avg, err := e.Table("userinfo").Where("uid", ">=", 10805).Avg("uid")

//select sum(uid) as cnt from userinfo where (uid >= 10805)
sum, err := e.Table("userinfo").Where("uid", ">=", 10805).Sum("uid")
(12) Sort Order
Sorting is also among the most commonly used features in ordinary sql statements. It sorts the query results by a field: asc for ascending order (small to large) and desc for descending order (large to small). The syntax is as follows:
//Query results in descending uid order
select * from userinfo where (uid >= 10805) order by uid desc

//Query results in ascending uid order
select * from userinfo where (uid >= 10805) order by uid asc

//Query results first in ascending uid order, then in descending status order
select * from userinfo where (uid >= 10805) order by uid asc, status desc
Therefore, we also expose this operation in a separate method to facilitate sorting. The calling method is as follows:
sum, err := e.Table("userinfo").Where("uid", ">=", 10805).Order("uid", "desc").Select()
sum, err := e.Table("userinfo").Where("uid", ">=", 10805).Order("uid", "asc", "status", "desc").Select()
Looking at the number of parameters, we immediately see that this method is also variadic. The idea is clear: turn the incoming parameters into the field/direction pairs behind order by, store them in the variable e.OrderParam, and splice them in directly when Find/Select runs the query.
Let's see how it is realized:
//Order sorting
func (e *SmallormEngine) Order(order ...string) *SmallormEngine {
	orderLen := len(order)
	if orderLen%2 != 0 {
		panic("order by parameter error, please ensure the number is even")
	}

	//Number of sort rules
	orderNum := orderLen / 2

	//If called multiple times, join with a comma
	if e.OrderParam != "" {
		e.OrderParam += ","
	}

	for i := 0; i < orderNum; i++ {
		keyString := strings.ToLower(order[i*2+1])
		if keyString != "desc" && keyString != "asc" {
			panic("the sorting keywords are: desc and asc")
		}
		if i < orderNum-1 {
			e.OrderParam += order[i*2] + " " + order[i*2+1] + ","
		} else {
			e.OrderParam += order[i*2] + " " + order[i*2+1]
		}
	}
	return e
}
The only complication is checking that the number of parameters is even, and then splicing multiple sorting rules by walking the parameters in (field, direction) pairs. Other splicing approaches would work here too.
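The pairwise walk can be isolated as follows; buildOrder is a hypothetical standalone helper with the same validation and splicing as Order above:

```go
package main

import (
	"fmt"
	"strings"
)

// buildOrder is a hypothetical sketch of the pairwise walk in Order above:
// parameters come in (field, direction) pairs and are joined with commas.
func buildOrder(order ...string) string {
	if len(order)%2 != 0 {
		panic("order by parameter error, please ensure the number is even")
	}
	parts := make([]string, 0, len(order)/2)
	for i := 0; i < len(order)/2; i++ {
		dir := strings.ToLower(order[i*2+1])
		if dir != "asc" && dir != "desc" {
			panic("the sorting keywords are: desc and asc")
		}
		parts = append(parts, order[i*2]+" "+dir)
	}
	return strings.Join(parts, ",")
}

func main() {
	fmt.Println(buildOrder("uid", "desc"))                  // uid desc
	fmt.Println(buildOrder("uid", "asc", "status", "desc")) // uid asc,status desc
}
```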
Then, you can judge during Find/Select query and add it to e.Prepare:
//Order by is not empty
if e.OrderParam != "" {
	e.Prepare += " order by " + e.OrderParam
}
(13) Group group
We often use grouping. It groups by one or several fields and then queries the grouped data. The implementation is very simple, so here is the code directly:
//Group grouping
func (e *SmallormEngine) Group(group ...string) *SmallormEngine {
	if len(group) != 0 {
		e.GroupParam = strings.Join(group, ",")
	}
	return e
}
The parameters are also variadic because we can group by multiple fields. Sometimes you need to use Field("count(*) as c") to implement more detailed grouping queries:
result,err := e.Table("userinfo").Where("departname", "like", "2%").Field("departname, count(*) as c").Group("departname", "status").Select()
In this way, we can make a judgment during Find/Select query and add it to e.Prepare:
//Group is not empty
if e.GroupParam != "" {
	e.Prepare += " group by " + e.GroupParam
}
(14) Judging Having after grouping
Having is used together with Group. Its function is almost exactly the same as Where: both filter. The difference is that Having can only be used after group and filters the computed columns behind select, as in this sql:
We want to query the data whose total uid is greater than 5 after grouping by status:
select status, count(uid) as c from userinfo where (uid >= 10805) group by status having c >= 5
So, since it works almost exactly like Where, let's see how to call it:
result, err := e.Table("userinfo").Where("departname", "like", "2%").Field("status, count(uid) as c").Group("status").Having("c", ">=", 5).Select()

result, err := e.Table("userinfo").Where("departname", "like", "2%").Field("status, count(uid) as c").Group("status").Having("c", 5).Select()

type User struct {
	Status int64 `sql:"status"`
}
user2 := User{
	Status: 1,
}
result, err := e.Table("userinfo").Where("departname", "like", "2%").Field("status, count(uid) as c").Group("status").Having(user2).Select()
Since the implementation method is almost the same as that of the Where method, let's take a quick look at the implementation process of this method:
//Having filter
func (e *SmallormEngine) Having(having ...interface{}) *SmallormEngine {
	//Judge whether it is a struct or multiple strings
	var dataType int
	if len(having) == 1 {
		dataType = 1
	} else if len(having) == 2 {
		dataType = 2
	} else if len(having) == 3 {
		dataType = 3
	} else {
		panic("wrong number of having parameters")
	}

	//If called multiple times, join with "and"
	if e.HavingParam != "" {
		e.HavingParam += "and ("
	} else {
		e.HavingParam += "("
	}

	//If it is a struct
	if dataType == 1 {
		t := reflect.TypeOf(having[0])
		v := reflect.ValueOf(having[0])

		var fieldNameArray []string
		for i := 0; i < t.NumField(); i++ {
			//Fields starting with a lowercase letter cannot be reflected, skip them
			if !v.Field(i).CanInterface() {
				continue
			}

			//Parse the tag to find the real SQL field name
			sqlTag := t.Field(i).Tag.Get("sql")
			if sqlTag != "" {
				fieldNameArray = append(fieldNameArray, strings.Split(sqlTag, ",")[0]+"=?")
			} else {
				fieldNameArray = append(fieldNameArray, t.Field(i).Name+"=?")
			}
			e.WhereExec = append(e.WhereExec, v.Field(i).Interface())
		}
		e.HavingParam += strings.Join(fieldNameArray, " and ") + ") "
	} else if dataType == 2 {
		//2 parameters: direct =
		e.HavingParam += having[0].(string) + "=?) "
		e.WhereExec = append(e.WhereExec, having[1])
	} else if dataType == 3 {
		//3 parameters
		e.HavingParam += having[0].(string) + " " + having[1].(string) + " ?) "
		e.WhereExec = append(e.WhereExec, having[2])
	}
	return e
}
A dedicated HavingParam stores the placeholders, while the value part is still stored in WhereExec.
Then, like others, you can make a judgment during Find/Select query and add it to e.Prepare:
//Having is not empty
if e.HavingParam != "" {
	e.Prepare += " having " + e.HavingParam
}
OK, let's try how to call:
//select uid, status, count(uid) as b from userinfo where (departname like '2%') group by uid,status having (status=1) order by uid desc,status asc
result, err := e.Table("userinfo").Where("departname", "like", "2%").Order("uid", "desc", "status", "asc").Field("uid, status, count(uid) as b").Group("uid", "status").Having("status", 1).Select()
if err != nil {
	fmt.Println(err.Error())
	return
}
fmt.Println("result is :", result)
(15) Get the last generated SQL: GetLastSql
In fact, all the methods above assemble native sql syntax. Sometimes we want to know what the last generated sql was, or a query reports an error and we want to check whether the generated sql has a syntax problem. The ORM therefore also provides a method to retrieve the sql statement generated by the last execution.
The implementation is actually very simple: we have already generated the first half of the sql statement, e.Prepare. We only need to replace the question-mark placeholders in e.Prepare with the concrete values in e.AllExec, in order, since that is the order in which the values were bound to placeholders. It is essentially sql generation in reverse.
//Generate the completed SQL statement
func (e *SmallormEngine) generateSql() {
	e.Sql = e.Prepare
	for _, i2 := range e.AllExec {
		switch i2.(type) {
		case int:
			e.Sql = strings.Replace(e.Sql, "?", strconv.Itoa(i2.(int)), 1)
		case int64:
			e.Sql = strings.Replace(e.Sql, "?", strconv.FormatInt(i2.(int64), 10), 1)
		case bool:
			e.Sql = strings.Replace(e.Sql, "?", strconv.FormatBool(i2.(bool)), 1)
		default:
			e.Sql = strings.Replace(e.Sql, "?", "'"+i2.(string)+"'", 1)
		}
	}
}
This replacement is relatively simple: only the basic int, int64 and bool types are converted specially; everything else is treated as a string in sql and needs single quotes.
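The replacement can be demonstrated in isolation. fillPlaceholders is a hypothetical standalone version of the generateSql logic, taking the prepared sql and the bound values explicitly:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// fillPlaceholders is a hypothetical sketch of generateSql above: each "?"
// is replaced in order by the bound value, quoting strings in single quotes.
func fillPlaceholders(prepare string, args []interface{}) string {
	sql := prepare
	for _, a := range args {
		switch v := a.(type) {
		case int:
			sql = strings.Replace(sql, "?", strconv.Itoa(v), 1)
		case int64:
			sql = strings.Replace(sql, "?", strconv.FormatInt(v, 10), 1)
		case bool:
			sql = strings.Replace(sql, "?", strconv.FormatBool(v), 1)
		default:
			sql = strings.Replace(sql, "?", "'"+v.(string)+"'", 1)
		}
	}
	return sql
}

func main() {
	sql := fillPlaceholders("select * from userinfo where (uid >= ? and username = ?)",
		[]interface{}{10805, "lisi"})
	fmt.Println(sql)
	// select * from userinfo where (uid >= 10805 and username = 'lisi')
}
```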
Then, we call this method in the related method of executing SQL in the last step of chain call. The final SQL statement can be generated and stored in the e.Sql attribute. Call GetLastSql to print out the last generated SQL statement.
//Get the sql generated by the last execution
func (e *SmallormEngine) GetLastSql() string {
	return e.Sql
}
It is worth noting that this is only the last sql generated. If you perform multiple CRUD operations, remember to call it after each one:
sum, err := e.Table("userinfo").Where("uid", ">=", 10805).Order("uid", "desc").Select()
fmt.Println(e.GetLastSql())
//select * from userinfo where (uid >= 10805) order by uid desc

sum, err := e.Table("userinfo").Where("uid", ">=", 10805).Order("uid", "asc", "status", "desc").Select()
fmt.Println(e.GetLastSql())
//select * from userinfo where (uid >= 10805) order by uid asc,status desc
(16) Execute native SQL: Exec/Query
This ORM also provides methods for executing raw sql directly. Although this is not recommended, it is sometimes needed in scenarios with such requirements.
- Execute native insert/update/delete sql: Exec
go's native sql code provides the Exec method for adding, deleting and modifying, which is the first method in the native demo at the beginning of this article:
result, err := db.Exec("INSERT INTO userinfo (username, departname, created) VALUES (?, ?, ?)","lisi","dev","2020-08-04")
In fact, you can pass the first parameter directly to the complete sql without passing the following parameters and using the question mark placeholder, like this:
result, err := db.Exec("INSERT INTO userinfo (username, departname, created) VALUES ('lisi', 'dev', '2021-11-04')")
Therefore, this ORM uses this feature to simply encapsulate the Exec method. The code is as follows:
//Execute native insert/update/delete sql
func (e *SmallormEngine) Exec(sql string) (int64, error) {
	result, err := e.Db.Exec(sql)
	e.Sql = sql
	if err != nil {
		return 0, e.setErrorInfo(err)
	}

	//Distinguish insert from the others (update, delete)
	if strings.Contains(sql, "insert") {
		lastInsertId, _ := result.LastInsertId()
		return lastInsertId, nil
	} else {
		rowsAffected, _ := result.RowsAffected()
		return rowsAffected, nil
	}
}
We judge whether the sql statement is an insert or something else, because an insert generally returns the auto-increment ID, while the other cases return the number of affected rows. In this way, we can easily run native sql statements:
//result, err := e.Exec("insert into userinfo(username,departname,created,status) values('dd', '31','2020-10-02',1)")
//result, err := e.Exec("delete from userinfo where username='dd'")
result, err := e.Exec("update userinfo set username='dd' where uid = 132733")
fmt.Println(err)
fmt.Println(result)
fmt.Println(e.GetLastSql())
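The insert-versus-other branch can be sketched on its own. execKind is a hypothetical helper, not engine code; note that the real code above matches "insert" case-sensitively, while lowering the sql first, as here, would also catch "INSERT":

```go
package main

import (
	"fmt"
	"strings"
)

// execKind is a hypothetical helper isolating the branch in Exec above:
// insert statements report LastInsertId, everything else RowsAffected.
func execKind(sql string) string {
	if strings.Contains(strings.ToLower(sql), "insert") {
		return "LastInsertId"
	}
	return "RowsAffected"
}

func main() {
	fmt.Println(execKind("insert into userinfo(username) values('dd')")) // LastInsertId
	fmt.Println(execKind("update userinfo set username='dd'"))           // RowsAffected
}
```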
- Execute Query operation of native sql
The Query method in the native go code is used for Query operations. It also supports direct transmission of native sql statements without using placeholders:
result, err := db.Query("SELECT * FROM userinfo limit 1")
The wrapper is almost identical: we only need to slightly transform the Select method in the ORM, because the second half, fetching the data, is exactly the same.
//Directly execute a native query sql
func (e *SmallormEngine) Query(sql string) ([]map[string]string, error) {
	rows, err := e.Db.Query(sql)
	e.Sql = sql
	if err != nil {
		return nil, e.setErrorInfo(err)
	}

	//Read out the queried column names
	column, err := rows.Columns()
	if err != nil {
		return nil, e.setErrorInfo(err)
	}

	//values holds the raw bytes of each column
	values := make([][]byte, len(column))

	//Because the number of columns varies per query, fix the length with len(column)
	scans := make([]interface{}, len(column))
	for i := range values {
		scans[i] = &values[i]
	}

	//The final result set
	results := make([]map[string]string, 0)
	for rows.Next() {
		//rows.Scan writes each column through the pointers in scans, i.e. into values
		if err := rows.Scan(scans...); err != nil {
			return nil, e.setErrorInfo(err)
		}

		//Move each row of data from values into a map keyed by column name
		row := make(map[string]string)
		for k, v := range values {
			key := column[k]
			row[key] = string(v)
		}
		results = append(results, row)
	}
	return results, nil
}
OK, we can call:
result, err := e.Query("SELECT * FROM userinfo limit 1")
fmt.Println(err)
fmt.Println(result)
fmt.Println(e.GetLastSql())
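The scans/values pointer trick inside Query can be shown without a database. scanRow is a hypothetical standalone sketch that simulates what rows.Scan does, writing each column through the interface{} pointers:

```go
package main

import "fmt"

// scanRow is a hypothetical sketch of the pointer trick in Query above:
// scans holds pointers into values, so writing through the interface{}
// pointers (as rows.Scan would) fills values directly.
func scanRow(column []string, raw [][]byte) map[string]string {
	values := make([][]byte, len(column))
	scans := make([]interface{}, len(column))
	for i := range values {
		scans[i] = &values[i]
	}

	//Simulate what rows.Scan does: write each column through its pointer
	for i, p := range scans {
		*p.(*[]byte) = raw[i]
	}

	//Move the row from values into a map keyed by column name
	row := make(map[string]string)
	for k, v := range values {
		row[column[k]] = string(v)
	}
	return row
}

func main() {
	row := scanRow([]string{"uid", "username"}, [][]byte{[]byte("10805"), []byte("lisi")})
	fmt.Println(row["uid"], row["username"]) // 10805 lisi
}
```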
(17) Transaction Begin/Commit/Rollback
Transaction operations in sql are also common in everyday business. When several insert/update/delete operations are performed together and one of them fails, a transaction lets us roll the data back, ensuring consistency. This ORM provides the corresponding methods by wrapping the transaction facilities in go's native sql package. Three methods are involved:
- Begin() // start a transaction
- Rollback() // roll back
- Commit() // confirm and commit
- Start transaction Begin
Enabling transactions is relatively simple: obtain a transaction handle and set a flag:
//Start a transaction
func (e *SmallormEngine) Begin() error {
	//Call the native transaction method
	tx, err := e.Db.Begin()
	if err != nil {
		return e.setErrorInfo(err)
	}
	e.TransStatus = 1
	e.Tx = tx
	return nil
}
In this method, we call the native db.Begin() method to obtain a tx resource handle, which is used specifically for executing transaction operations, and we set e.TransStatus = 1 to mark that a transaction has started.
Next, in the specific methods for adding, deleting, modifying and querying, we use this flag to judge whether we are currently inside a transaction:
//Determine whether we are in a transaction
var stmt *sql.Stmt
var err error
if e.TransStatus == 1 {
	stmt, err = e.Tx.Prepare(e.Prepare)
} else {
	stmt, err = e.Db.Prepare(e.Prepare)
}
...
result, err := stmt.Exec(e.AllExec...)
As you can see, the check is very simple: the type of the resulting stmt variable is the same whether or not we are in a transaction, so the second half of the operation needs no changes. In this way, we easily enable the transaction function.
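The reason the second half needs no changes is that *sql.DB and *sql.Tx expose Prepare with the same signature. The interface below is a hypothetical illustration of that shared shape, not part of the engine:

```go
package main

import (
	"database/sql"
	"fmt"
)

// preparer is a hypothetical interface capturing the shared Prepare
// signature of *sql.DB and *sql.Tx, which is why the branch above can
// hand either handle's stmt to identical follow-up code.
type preparer interface {
	Prepare(query string) (*sql.Stmt, error)
}

// bothImplement checks at runtime that both handle types satisfy preparer.
func bothImplement() bool {
	_, okDB := interface{}((*sql.DB)(nil)).(preparer)
	_, okTx := interface{}((*sql.Tx)(nil)).(preparer)
	return okDB && okTx
}

func main() {
	fmt.Println(bothImplement()) // true
}
```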
- Rollback
Rolling back means that after a problem occurs during execution, we send a rollback instruction to the mysql server, which discards the effects of the sql statements executed in this transaction. The implementation is simple; we call the native method directly:
//Transaction rollback
func (e *SmallormEngine) Rollback() error {
	e.TransStatus = 0
	return e.Tx.Rollback()
}
- Confirm Commit
Committing means that everything executed OK. We issue a commit instruction to the mysql server, and only then are the sql statements really applied. If this instruction is never executed, the data changes never take effect, so never forget to commit at the end. The implementation is also very simple; call the native method directly:
//Transaction commit
func (e *SmallormEngine) Commit() error {
	e.TransStatus = 0
	return e.Tx.Commit()
}
Let's look at a complete transaction call example:
err0 := e.Begin()
isCommit := true
if err0 != nil {
	fmt.Println(err0.Error())
	os.Exit(1)
}

result1, err1 := e.Table("userinfo").Where("uid", "=", 10803).Update("departname", 110)
if err1 != nil {
	isCommit = false
	fmt.Println(err1.Error())
}
//Not found, update failed
if result1 <= 0 {
	isCommit = false
	fmt.Println("update 0")
}
fmt.Println("result1 is :", result1)
fmt.Println("sql is :", e.GetLastSql())

result2, err2 := e.Table("userinfo").Where("uid", "=", 10802).Delete()
if err2 != nil {
	isCommit = false
	fmt.Println(err2.Error())
}
if result2 <= 0 {
	isCommit = false
	fmt.Println("delete 0")
}
fmt.Println("result2 is :", result2)
fmt.Println("sql is :", e.GetLastSql())

user1 := User{
	Username:   "EE",
	Departname: "22",
	Created:    "2012-12-12",
	Status:     1,
}
id, err3 := e.Table("userinfo").Insert(user1)
if err3 != nil {
	isCommit = false
	fmt.Println(err3.Error())
}
fmt.Println("id is :", id)
fmt.Println("sql is :", e.GetLastSql())

if isCommit {
	_ = e.Commit()
	fmt.Println("ok")
} else {
	_ = e.Rollback()
	fmt.Println("error")
}
We check the execution results of each step, set isCommit to false for any failure, and finally roll back and confirm the submission by judging the status of this value.
So far, we have implemented more than 90% of the functions of an ORM: a small, beautiful, elegant and simple ORM framework.
3, Function test and performance test
Functional testing is essential, and go provides a very simple testing facility for it; that can be improved gradually. Let's first look at performance and run a benchmark against GORM.
The structure of the database is as follows; the table contains about 2.09 million rows:
CREATE DATABASE `ApiDB`;
USE ApiDB;
CREATE TABLE `userinfo` (
  `uid` int NOT NULL AUTO_INCREMENT,
  `username` varchar(64) DEFAULT NULL,
  `departname` varchar(64) DEFAULT NULL,
  `created` date DEFAULT NULL,
  `status` int NOT NULL,
  PRIMARY KEY (`uid`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4;
Write 2 simple benchmarks to test Select and Update:
package smallorm

import (
	"gorm.io/driver/mysql"
	"gorm.io/gorm"
	"testing"
)

func BenchmarkSmallormSelect(b *testing.B) {
	e, _ := NewMysql("root", "123456", "127.0.0.1:3306", "ApiDB")
	type User struct {
		Username   string `gorm:"username"`
		Departname string `gorm:"departname"`
		Created    string `gorm:"created"`
		Status     int64  `gorm:"status"`
	}
	var users []User
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_ = e.Table("userinfo").Where("uid", ">=", 50).Limit(100).Find(&users)
	}
	b.StopTimer()
}

func BenchmarkGormSelect(b *testing.B) {
	dsn := "root:123456@tcp(127.0.0.1:3306)/ApiDB?charset=utf8mb4&parseTime=True&loc=Local"
	db, _ := gorm.Open(mysql.Open(dsn), &gorm.Config{})
	type User struct {
		Username   string `gorm:"username"`
		Departname string `gorm:"departname"`
		Created    string `gorm:"created"`
		Status     int64  `gorm:"status"`
	}
	var users []User
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		db.Table("userinfo").Where("uid >= ?", "50").Limit(50).Find(&users)
	}
	b.StopTimer()
}

func BenchmarkSmallormUpdate(b *testing.B) {
	e, _ := NewMysql("root", "123456", "127.0.0.1:3306", "ApiDB")
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_, _ = e.Table("userinfo").Where("uid", "=", 15).Update("status", 0)
	}
	b.StopTimer()
}

func BenchmarkGormUpdate(b *testing.B) {
	dsn := "root:123456@tcp(127.0.0.1:3306)/ApiDB?charset=utf8mb4&parseTime=True&loc=Local"
	db, _ := gorm.Open(mysql.Open(dsn), &gorm.Config{})
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		db.Table("userinfo").Where("uid = ?", "15").Update("status", 1)
	}
	b.StopTimer()
}
Run it and look at the benchmark results:
go test -bench=. -benchmem
goos: darwin
goarch: amd64
pkg: smallorm
cpu: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
BenchmarkSmallormSelect-12    1296    843769 ns/op      911 B/op     25 allocs/op
BenchmarkGormSelect-12         598   1998827 ns/op    29250 B/op   1058 allocs/op
BenchmarkSmallormUpdate-12    1197    864404 ns/op      727 B/op     21 allocs/op
BenchmarkGormUpdate-12         314   4216470 ns/op     6246 B/op     76 allocs/op
PASS
ok      smallorm    6.880s
You can have a look at this score.
4, Functions to be realized
- Multi table joint query
- Quick hash table
- Optimization of log, performance, structure and security