Learning Deep Learning Compilers from Scratch

0x0. Preface The previous articles in this series took a surface-level look at the components of MLIR. This article will not continue discussing MLIR's architecture; instead, from a practical point of view, it shows readers what MLIR has done for me, again taking the OneFlow Dialect as the example. In Interpretation of ...

Added by nicandre on Thu, 10 Mar 2022 17:35:51 +0200

Notes on Web Security Deep Learning in Practice: Chapter 8, Spam SMS Recognition

This chapter takes the SMS Spam Collection dataset as an example to introduce techniques for recognizing spam SMS messages. This section explains in detail how to extract spam-SMS features with Word2Vec. Word2Vec model 1. Principle: Word2Vec is an efficient tool that Google open-sourced in 2013 for representing words as real-valued v ...
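As a concrete illustration of the feature-extraction step described above, here is a minimal sketch using gensim's Word2Vec (gensim 4.x API). The tokenization, the 100-dimension setting, and the vector-averaging strategy are my assumptions for illustration, not necessarily the book's exact code.

```python
# Minimal sketch: turn each SMS into a fixed-size feature vector by averaging
# its Word2Vec word vectors (a common approach; assumed here, not the book's code).
import numpy as np
from gensim.models import Word2Vec

sms = ["free entry in a weekly competition", "are we still meeting for lunch"]
tokens = [s.split() for s in sms]  # naive whitespace tokenization (assumption)

model = Word2Vec(sentences=tokens, vector_size=100, window=5, min_count=1)

def sms_vector(words, model):
    """Average the word vectors of one message to get a fixed-size feature."""
    vecs = [model.wv[w] for w in words if w in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

features = np.stack([sms_vector(t, model) for t in tokens])  # shape (n_messages, 100)
```

The resulting feature matrix can then be fed to any downstream classifier for the spam/ham decision.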

Added by Maracles on Thu, 10 Mar 2022 13:51:35 +0200

DBMTL: introduction and implementation of the multi-task learning model

This paper introduces a multi-task learning algorithm published by Alibaba in 2019. The model expresses the Bayesian-network causality between targets, jointly models the complex causal network between the features and the multiple targets, and removes the strong independence assumptions made in typical MTL models. Since there is no specific assu ...
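A minimal PyTorch sketch of the chained-tower idea the excerpt describes: each target's tower also consumes the hidden state of the preceding target, so the network models p(t1|x)·p(t2|t1,x) instead of assuming the targets are independent. The two-target setup and all layer sizes are my own assumptions, not the paper's configuration.

```python
# Sketch of the DBMTL-style dependence between targets: tower2 conditions on
# tower1's hidden state, encoding the Bayesian-network edge t1 -> t2.
import torch
import torch.nn as nn

class DBMTLSketch(nn.Module):
    def __init__(self, in_dim=64, shared_dim=32, hidden=16):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, shared_dim), nn.ReLU())
        self.tower1 = nn.Sequential(nn.Linear(shared_dim, hidden), nn.ReLU())
        # tower2 sees both the shared features and tower1's hidden state
        self.tower2 = nn.Sequential(nn.Linear(shared_dim + hidden, hidden), nn.ReLU())
        self.head1 = nn.Linear(hidden, 1)
        self.head2 = nn.Linear(hidden, 1)

    def forward(self, x):
        s = self.shared(x)
        h1 = self.tower1(s)
        h2 = self.tower2(torch.cat([s, h1], dim=-1))  # conditional dependence t2 <- t1
        return torch.sigmoid(self.head1(h1)), torch.sigmoid(self.head2(h2))

p1, p2 = DBMTLSketch()(torch.randn(8, 64))  # one prediction per target
```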

Added by rami on Thu, 10 Mar 2022 10:08:55 +0200

Notes on understanding the residual network (ResNet) structure and its code implementation (PyTorch)

Among deep learning networks, I think the most fundamental is the residual network. What I share today is not the theoretical side of residual networks; just remember that the residual idea runs through many of the network structures that came after it. Once you understand the residual-network structure, some of the advanced network structures that follow a ...
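To make the "residual idea" concrete, here is a minimal PyTorch sketch of a basic residual block (my own illustration, not the article's code): the block outputs F(x) + x, so it only has to learn the residual F(x).

```python
# Basic residual block: two 3x3 convs plus a skip connection that adds the
# input back before the final activation.
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # the skip connection: add the input back

y = BasicBlock(64)(torch.randn(1, 64, 32, 32))  # shape preserved: (1, 64, 32, 32)
```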

Added by cpharry on Thu, 10 Mar 2022 02:06:36 +0200

PyTorch neural network practical learning notes_12 (example): predicting the survival of passengers on the Titanic

1 Sample processing 1.1 Load sample code --- Titanic forecast.py (Part 1)

```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy import stats
import pandas as pd
import matplotlib.pyplot as plt
import os

os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

def moving_average(a, w=10):  # Define a function to calcu ...
```
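The excerpt cuts off in the middle of moving_average. A hedged reconstruction of how such a smoothing helper is typically completed in this kind of tutorial, under the assumption that it averages a loss curve over a window of w points:

```python
# Assumed completion (my reconstruction, not necessarily the book's exact code):
# each point beyond the first w is replaced by the mean of the previous w values.
import numpy as np

def moving_average(a, w=10):
    """Smooth a sequence by averaging over a sliding window of size w."""
    if len(a) < w:
        return a[:]
    return [val if idx < w else np.mean(a[idx - w:idx]) for idx, val in enumerate(a)]

smoothed = moving_average(list(range(25)), w=5)  # e.g. for plotting a loss curve
```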

Added by skiingguru1611 on Wed, 09 Mar 2022 09:35:34 +0200

Focal Loss upgraded | E-Focal Loss makes Focal Loss dynamic, easily handling extreme class imbalance

Despite the recent success of long-tail object detection, almost all long-tail object detectors have been developed on the two-stage paradigm. In practice, one-stage detectors are more common in industry because they have a simple, fast pipeline that is easy to deploy. In the long-tail setting, however, this direction has not been explore ...
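For context, here is a minimal PyTorch sketch of the standard binary focal loss that E-Focal Loss builds on; per the excerpt's premise, EFL then makes the focusing parameter dynamic (category-dependent). The gamma=2.0 and alpha=0.25 defaults are the usual ones from the Focal Loss paper; the sketch is my illustration, not the paper's code.

```python
# Standard binary focal loss: (1 - p_t)^gamma down-weights easy examples so
# training focuses on hard ones; alpha balances positives vs. negatives.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)        # prob of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

loss = focal_loss(torch.randn(4, 10), torch.randint(0, 2, (4, 10)).float())
```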

Added by fabby on Wed, 09 Mar 2022 04:11:09 +0200

Prevent overfitting

Get more training data (data augmentation). Data augmentation with geometric transformations: flips, crops, rotations, and translations are commonly used augmentation techniques (see the sketch below). GAN-based data augmentation. Reduce network capacity: the simplest way to prevent overfitting is to reduce the size of the mod ...
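A minimal sketch of the geometric augmentations mentioned above, using torchvision (my own example; the note names no library). Each transform is applied randomly every time an image is loaded, so the model rarely sees the exact same input twice.

```python
# Random flip, crop, rotation, and translation as a torchvision pipeline.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomRotation(degrees=15),
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1)),  # small shifts
    transforms.ToTensor(),
])
# e.g. pass transform=augment to a torchvision dataset's constructor
```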

Added by naveendk.55 on Tue, 08 Mar 2022 11:52:36 +0200

XCiT reproduction champion code - accuracy exceeds the official result_Copy 1

Reprinted from AI Studio. Project link: https://aistudio.baidu.com/aistudio/projectdetail/3449604 XCiT: Cross-Covariance Image Transformers. This is the champion code of the fifth PaddlePaddle paper reproduction challenge for "XCiT: Cross-Covariance Image Transformers"; the GitHub repository is https://github.com/BrilliantYuKaimin/XCiT-PaddlePa ...
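A simplified, single-head sketch of the cross-covariance attention (XCA) that gives XCiT its name: the attention map is d×d over feature channels rather than N×N over tokens, so its cost grows linearly with the number of tokens. This is my PyTorch illustration of the idea, not the champion repository's Paddle code; in the real model, q, k, v come from learned linear projections.

```python
# Cross-covariance attention: transpose so attention acts on channels, with
# q and k L2-normalized along the token dimension.
import torch
import torch.nn.functional as F

def xca(x, temperature=1.0):
    """x: (batch, tokens N, channels d) -> (batch, N, d)."""
    q = F.normalize(x.transpose(-2, -1), dim=-1)   # (B, d, N), unit norm over tokens
    k = F.normalize(x.transpose(-2, -1), dim=-1)
    v = x.transpose(-2, -1)                        # (B, d, N)
    attn = (q @ k.transpose(-2, -1)) * temperature # (B, d, d) cross-covariance map
    attn = attn.softmax(dim=-1)
    return (attn @ v).transpose(-2, -1)            # back to (B, N, d)

out = xca(torch.randn(2, 196, 64))  # 196 tokens, 64 channels
```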

Added by jumphopper on Tue, 08 Mar 2022 03:47:55 +0200

Effect of a shallow autoencoder stacked with MemAE on fabric defect detection

First the network model is introduced, and then the experimental results on SL1 and SP5. 1 Introduction to the shallow autoencoder used 1.1 Shallow autoencoder model (AE): the encoder and decoder each use three convolutional layers with stride 2, and the encoder's feature channels change as 3-64-128-25 ...
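A minimal PyTorch sketch of the shallow convolutional autoencoder the excerpt describes: three stride-2 convolutions in the encoder and three mirrored transposed convolutions in the decoder. The excerpt's channel sequence is truncated (3-64-128-25...), so the final value of 256 below is an assumption, as are the kernel sizes.

```python
# Shallow conv autoencoder: encoder downsamples 3x by stride-2 convs,
# decoder mirrors it with transposed convs. MemAE would insert a memory
# module between encoder and decoder.
import torch
import torch.nn as nn

class ShallowAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

recon = ShallowAE()(torch.randn(1, 3, 256, 256))  # same spatial size out
```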

Added by illushinz on Tue, 08 Mar 2022 03:32:11 +0200

Training your first linear regression

You can run the program if you want; see the reference here. For ndarray and autograd, please refer to the previous blog posts. Preface: suppose there is a function y = w*x + b where w and b are known; then given an x, you can find the corresponding y. But when w and b are unknown and we are only given pairs of x and y, the w and b we obtain may on ...
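A minimal sketch of recovering w and b from (x, y) pairs by gradient descent on the mean squared error. The blog works with ndarray and autograd, but plain NumPy is enough to show the idea; the true values w=2, b=1 below are made up for illustration.

```python
# Fit y = w*x + b by hand-written gradient descent on mean squared error.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.05, size=100)  # noisy data from y = w*x + b

w, b, lr = 0.0, 0.0, 0.1
for _ in range(200):
    err = w * x + b - y             # prediction error
    w -= lr * 2 * np.mean(err * x)  # d/dw of the mean squared error
    b -= lr * 2 * np.mean(err)      # d/db of the mean squared error

print(w, b)  # should approach 2 and 1
```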

Added by OilSheikh on Tue, 08 Mar 2022 03:01:53 +0200