You may also want to have your own AI model file format

        Following on from the previous article, if you want to review its content, you can use the link below:

You may also want to have your own AI model file format (1): https://blog.csdn.net/Pengcode/article/details/121754272?spm=1001.2014.3001.5502

        The main task this time is to build the soul of our custom AI model: writing the description of how the model's information is composed. Generally speaking, this means defining a data protocol. The protocol contains the model's data structures, and these data structures are the building blocks from which the whole model is assembled; in other words, they define what the whole model is.

        Following the steps in the previous article, we have set up the environment and installed the corresponding tools. The tool I use this time is FlatBuffers, and next I will use it to produce the data protocol of our custom AI model.

1. The workflow for using FlatBuffers

1.1 Prepare a schema (.fbs) data exchange protocol file

        Take the example below from the official website; the file is named monster.fbs:

// Example IDL file for our monster's schema.
 
namespace MyGame.Sample;
 
enum Color:byte { Red = 0, Green, Blue = 2 }
 
union Equipment { Weapon } // Optionally add more tables.
 
struct Vec3 {
  x:float;
  y:float;
  z:float;
}
 
table Monster {
  pos:Vec3; // Struct.
  mana:short = 150;
  hp:short = 100;
  name:string;
  friendly:bool = false (deprecated);
  inventory:[ubyte];  // Vector of scalars.
  color:Color = Blue; // Enum.
  weapons:[Weapon];   // Vector of tables.
  equipped:Equipment; // Union.
  path:[Vec3];        // Vector of structs.
}
 
table Weapon {
  name:string;
  damage:short;
}
 
root_type Monster;

        If you know the C language and JSON files, the syntax above may feel like a mixture of the two: strange yet familiar. In fact, schema .fbs files are written in an IDL (Interface Description Language) defined by FlatBuffers. IDL is a general term for a class of languages used to describe programming APIs. Given data generated according to an IDL file, any language, whether C++, Java, or anything else, can interpret that data according to the same IDL file. This typically happens when a server and a client exchange data and need to parse it, so IDLs are most often seen in server scenarios.

         The schema of FlatBuffers is therefore an IDL. To explain it better, I will annotate it according to the actual meaning of the data. Since the usage scenario of monster.fbs is a game and it mainly describes the information of a complete monster, I interpret it as follows:

namespace MyGame.Sample;
// Color enumeration, used to describe the monster's color
enum Color:byte { Red = 0, Green, Blue = 2 }
// Equipment is a union; currently a Weapon is the only kind of equipment
union Equipment { Weapon }
// 3D data
struct Vec3 {
  x:float;
  y:float;
  z:float;
}
// monster
table Monster {
  pos:Vec3;           // The monster's position, described with the 3D struct above
  mana:short = 150;   // Mana, i.e. magic points
  hp:short = 100;     // HP, i.e. health points
  name:string;        // Name
  friendly:bool = false (deprecated); // Whether it is friendly to the player, i.e. whether it attacks on sight
  inventory:[ubyte];  // Items it carries
  color:Color = Blue; // Color
  weapons:[Weapon];   // Weapons it carries
  equipped:Equipment; // The equipment currently in use (a union, so only one at a time)
  path:[Vec3];        // The path the monster walks along
}
table Weapon {
  name:string;
  damage:short;
}
// The keyword root_type declares Monster as the root type, i.e. the top-level type for serialization;
// this is particularly important when parsing JSON files.
root_type Monster;

         Next, I will explain the internal syntax and meaning of monster.fbs. To keep the article short, I write the explanation directly below in the form of comments:

// First there is the keyword namespace, which declares a namespace, mainly for the convenience of languages such as C++
// In C++, the API generated from this IDL is accessed through the namespace MyGame::Sample
namespace MyGame.Sample;
// The keyword enum indicates that Color is an enumeration type, and byte indicates that the underlying type of its values is byte
// Note that the underlying type of an enum can only be an integer type: byte, ubyte, short, ushort, int, uint,
// long, ulong
enum Color:byte { Red = 0, Green, Blue = 2 }
// The keyword union declares Equipment as a union, the same concept as a union in the C language
union Equipment { Weapon }
// The keyword struct indicates that Vec3 is a structure. The differences between struct and table are:
// 1. A struct generally only contains other structs or built-in data types
// 2. Default values cannot be set
// 3. Faster and smaller in storage than a table
// 4. Fields cannot be added later or marked as deprecated
struct Vec3 {
  x:float;
  y:float;
  z:float;
}
// The keyword table is the main data structure in a schema and has rich features and interfaces
// Its members can be of any type, and default values can be set;
// in addition, vector types can be declared, such as [ubyte], [Weapon] and [Vec3] below
table Monster {
  pos:Vec3; // Struct.
  mana:short = 150; // The default setting is 150
  hp:short = 100;
  name:string;
  friendly:bool = false (deprecated);
  inventory:[ubyte];  // Vector of scalars.
  color:Color = Blue; // Enum.
  weapons:[Weapon];   // Vector of tables.
  equipped:Equipment; // Union.
  path:[Vec3];        // Vector of structs.
}
table Weapon {
  name:string;
  damage:short;
}
// The keyword root_type declares Monster as the root type, i.e. the top-level type for serialization;
// this is particularly important when parsing JSON files.
root_type Monster;

         The above covers the schema writing rules of FlatBuffers. For more details, please see the official tutorial on writing a schema: https://google.github.io/flatbuffers/flatbuffers_guide_writing_schema.html
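
        To make the root_type remark concrete, here is a small, made-up piece of monster data written as JSON; the field names come from the schema, while the values are invented for illustration. Because root_type is Monster, flatc knows how to turn such a JSON file directly into a FlatBuffers binary, which will come in handy in the next section:

{
  pos: { x: 1.0, y: 2.0, z: 3.0 },
  hp: 300,
  name: "Orc",
  inventory: [0, 1, 2, 3],
  color: "Red",
  weapons: [
    { name: "Sword", damage: 3 },
    { name: "Axe", damage: 5 }
  ]
}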

1.2 Use flatc to compile the monster.fbs file written above

        flatc becomes available once FlatBuffers is installed. It is a compiler, though I prefer to call it a code generator, because its main job is to generate code for a target platform. From a schema .fbs file you can generate code for C++, Java, JavaScript, Python and other target programming languages; the generated code is the API for the corresponding data structures.

        The following is the process of compiling the monster.fbs file above using flatc:

$ gedit monster.fbs
$ ls
monster.fbs
$ flatc -c -o ./ monster.fbs 
$ ls
monster.fbs  monster_generated.h
$ cat monster_generated.h 
// The following is the C++ code generated from monster.fbs
// automatically generated by the FlatBuffers compiler, do not modify
#ifndef FLATBUFFERS_GENERATED_MONSTER_MYGAME_SAMPLE_H_
#define FLATBUFFERS_GENERATED_MONSTER_MYGAME_SAMPLE_H_

#include "flatbuffers/flatbuffers.h"

namespace MyGame {
namespace Sample {

struct Vec3;

struct Monster;
struct MonsterBuilder;

struct Weapon;
struct WeaponBuilder;
// It's too long, so only the first part is shown
..........
}
}
$
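
        Incidentally, the same schema can generate code for other target languages, and flatc can also use the root_type to convert JSON data straight into a FlatBuffers binary. Assuming the sample JSON from section 1.1 is saved as monsterdata.json (a file name of my own choosing), the standard flags look like this:

$ flatc --java -o ./ monster.fbs          # generate Java classes
$ flatc --python -o ./ monster.fbs        # generate a Python module
$ flatc -b monster.fbs monsterdata.json   # produce a binary (.bin) from the JSON data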

1.3 Use the generated code on the corresponding target platform

        This part is too long to cover in this article; it will be covered in subsequent chapters of this series. If you want to learn how to use it right away, please see the official tutorial: https://google.github.io/flatbuffers/flatbuffers_guide_tutorial.html
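        Just to give a quick taste before that chapter arrives, below is a minimal sketch of what using the generated monster_generated.h looks like, closely following the official tutorial; treat it as an illustration rather than the code this series will eventually build:

#include <cstdio>
#include <vector>
#include "monster_generated.h"  // produced by flatc in section 1.2

using namespace MyGame::Sample;

int main() {
  flatbuffers::FlatBufferBuilder builder;

  // Serialize: build a Weapon table, then a Monster that references it.
  auto sword = CreateWeaponDirect(builder, "Sword", 3);
  std::vector<flatbuffers::Offset<Weapon>> weapons{sword};
  auto orc = CreateMonsterDirect(builder, /*pos=*/nullptr, /*mana=*/150,
                                 /*hp=*/300, /*name=*/"Orc",
                                 /*inventory=*/nullptr, Color_Red, &weapons);
  builder.Finish(orc);

  // Deserialize: read the monster straight back out of the buffer.
  auto monster = GetMonster(builder.GetBufferPointer());
  printf("%s has %d hp\n", monster->name()->c_str(), monster->hp());
  return 0;
}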

2. Write a schema file dedicated to the AI model

2.1 Write down your requirements

        Before writing a schema file, work out your expectations for the custom AI model. For example, I want something large and complete, with good extensibility, that can be done once and for all; you may want something small and elegant, with no great need for extensibility. Everyone's requirements differ, so writing down the requirements and expectations before writing the schema file makes the work efficient and accurate.

        Below, I list my expectations for the custom AI model to guide the writing of the schema file. My requirements are: (1) good extensibility, i.e. the parameter definitions of network layers are not hard-coded;

(2) operators can be added at will, especially on the code side;

(3) once the schema is written, any operator and any model can be represented without further changes to it.

2.2 Writing the schema

        To keep the article short, I will not explain it line by line; instead, as with monster.fbs, I annotate it with comments describing the actual meaning of each part. If you are interested, feel free to check whether anything is missing or should be changed, and please correct me.

namespace PzkModel;

attribute "priority";
file_extension "pmodelmeta";
file_identifier "PZKM";

// Structure used to represent a point in time
table time{
    year:uint32 = 2099;
    month:uint8 = 12;
    day:uint8 = 29;
    hour:uint8 = 6;
    min:uint8 = 6;
    sec:uint8 = 6;
}

// It contains most data types, from bool to int32
enum DataType: byte {
    INT32 = 0,
    BOOL = 1,
    INT4 = 2,
    UINT4 = 3,
    INT8 = 4,
    UINT8 = 5,
    INT16 = 6,
    UINT16 = 7,
    FP16 = 8,
    FP32 = 9,
    QSYMMEINT4 = 10, //quantize symmetry int4
    QSYMMEINT8 = 11, //quantize symmetry int8
    QASYMMEUINT4 = 12, //quantize asymmetry uint4
    QASYMMEUINT8 = 13, // quantize asymmetry uint8
    UINT32 = 14,
    CHAR = 15,
}
// Indicates whether the tensor is constant at run time, i.e. whether it holds weights or real-time data
enum TensorType: byte {
    CONST = 0,
    DYNAMIC = 1,
}
// At present, there are four data layout methods
enum DataLayout: byte {
    NCHW = 0,
    NHWC = 1,
    ND = 2,
    NCD = 3,
}
// Dimension information of tensor
table TensorShape{
    dimsize:ubyte;
    dims:[uint32];
}
// Weight data
table Weights{
    ele_bytes:ubyte=0;
    ele_num:uint64=0;
    buffer:[ubyte];
}
// Tensor; its members use the types defined above
table Tensor{
    id:uint32;//Tensor id, which is the unique identifier
    name:string;
    tesor_type:TensorType;
    data_type:DataType;
    data_layout:DataLayout;
    shape:TensorShape;
    weights:Weights;
}
// A single attribute, used to describe information such as a convolution's kernel size or stride
table AttrMeta{
    key:string;
    require:bool = false;
    buffer_data:DataType;
    buffer_ele_num:uint32;
    buffer:[ubyte];
}
// The attribute set describes all attributes of a layer; for a convolution, for example, this includes kernel size, stride, group information and so on.
table Attributes{
    type:string;
    meta_num:uint32;
    meta_require_num:uint32;
    buffer:[AttrMeta];
}
// Connection information is used to describe the connection relationship between layers and tensors.
table Connect{
    name:string;
    necessary:bool = false;
    tensor_id:uint32;
}
// Layer description, including the necessary information above
table Layer{
    id:uint32;
    name:string;
    type:string;
    input_num:ubyte;//Number of input tensors
    output_num:ubyte;//Number of output tensors
    input_id:[Connect];
    output_id:[Connect];
    require_attrs:bool = false;
    attrs:Attributes;//Layer attribute collection
}


// Model description is the most important data structure, including all tensor descriptions, all layer descriptions, connection relationships, and ancillary information
table PModel{
    author:string;//Model author
    create_time:time;//Model creation time
    version:string;//Model version number
    model_name:string;//Model name
    model_runtime_input_num:uint32;//Number of model inputs
    model_runtime_output_num:uint32;//Number of model outputs
    model_runtime_input_id:[uint32];
    model_runtime_output_id:[uint32];
    all_tensor_num:uint32;
    tensor_buffer:[Tensor];//All tensors
    layer_num:uint32;
    layer_buffer:[Layer];//All layers
}

root_type PModel;
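
        Once this schema is saved as a file, say pzkmodel.fbs (the file name here is my own choice), the flatc workflow from section 1.2 applies unchanged; for example:

$ flatc -c -o ./ pzkmodel.fbs
$ ls
pzkmodel.fbs  pzkmodel_generated.h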

2.3 Why the schema file is written as above

        Given my requirements in 2.1, I cannot directly write a convolution description, with all of its parameters, into the schema. That would only trap me in a table description specific to convolution, make my custom AI model lose its extensibility, and increase the workload; worse, new attributes may well appear soon, forcing the schema file to be modified again, which runs counter to my expectation of "once and for all".

        Therefore, what the schema file describes is a generic Layer description and Tensor description: any network layer can be expressed with the table Layer in the schema above. Below, as an example, I use my Layer definition to describe a convolution layer, to illustrate how general this schema is:

    {
      id: 0,
      name: "conv2d-index-1",
      type: "\"Convolution2dLayer\"",
      input_num: 3,
      input_id: [
        {
          name: "\"input\"",
          necessary: true
        },
        {
          name: "\"weights\"",
          necessary: true,
          tensor_id: 1
        },
        {
          name: "\"biases\"",
          necessary: true,
          tensor_id: 2
        }
      ],
      output_num: 1,
      output_id: [
        {
          name: "\"conv2d-output\"",
          necessary: true,
          tensor_id: 3
        }
      ],
      require_attrs: true,
      attrs: {
        type: "\"Convolution2dLayer\"-Attrs",
        meta_num: 2,
        meta_require_num: 2,
        buffer: [
          {
            key: "\"kernel_size\"",
            require: true,
            buffer_data: "CHAR",
            buffer: [12]
          },
          {
            key: "\"pad\"",
            require: true,
            buffer_data: "CHAR",
            buffer: [1]
          }
        ]
      }
    }

        The above is a convolution layer described with my table Layer; in the same way, any layer can be represented. Therefore, this schema basically meets my needs and goals.
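
        To convince myself that such a description is also easy to consume, below is a rough sketch of how C++ code might walk the layers and attributes after running flatc on this schema. The accessor names follow flatc's default generation rules for the tables above, and pzkmodel_generated.h is the hypothetical header from section 2.2; take it as a sanity check rather than final code:

#include <cstdio>
#include "pzkmodel_generated.h"  // hypothetical output of: flatc -c pzkmodel.fbs

// Walk every layer of a serialized PModel and print its attributes.
void DumpModel(const void *buf) {
  auto model = PzkModel::GetPModel(buf);
  printf("model %s by %s\n", model->model_name()->c_str(), model->author()->c_str());
  for (auto layer : *model->layer_buffer()) {
    printf("layer %u: %s (%s)\n", layer->id(), layer->name()->c_str(), layer->type()->c_str());
    if (!layer->require_attrs()) continue;
    for (auto attr : *layer->attrs()->buffer()) {
      // Each attribute is just a typed byte buffer, so the reader
      // reinterprets attr->buffer() according to attr->buffer_data().
      printf("  attr %s: %u bytes, data type %d\n", attr->key()->c_str(),
             attr->buffer()->size(), static_cast<int>(attr->buffer_data()));
    }
  }
}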

        Follow-up: how to generate your first model file in this custom AI model format. Stay tuned.
