Week 5 assignment: convolutional neural network (Part 3)
This summary is based on Mr. Gao's GitHub notebooks:
https://github.com/OUCTheoryGroup/colab_demo/blob/master/202003_models/MobileNetV1_CIFAR10.ipynb
https://github.com/OUCTheoryGroup/colab_demo/blob/master/202003_models/MobileNetV2_CIFAR10.ipynb
MobileNet v1
The core of MobileNet v1 is to split a standard convolution into two parts: Depthwise + Pointwise.
Dept ...
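This is not taken from the linked notebooks; below is a minimal PyTorch sketch of the Depthwise + Pointwise split described above. The 3x3 depthwise kernel, the BatchNorm/ReLU placement, and the channel sizes are assumptions for illustration.

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    # Depthwise (one 3x3 filter per input channel) followed by pointwise (1x1) convolution
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # groups=in_ch makes the convolution operate on each channel separately (depthwise)
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.bn1 = nn.BatchNorm2d(in_ch)
        # 1x1 convolution mixes information across channels (pointwise)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.bn1(self.depthwise(x)))
        x = self.relu(self.bn2(self.pointwise(x)))
        return x

block = DepthwiseSeparableConv(32, 64)
print(block(torch.randn(1, 32, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])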
Added by itpvision on Sun, 03 Oct 2021 22:01:56 +0300
NNLM feedforward neural network model learning notes
The traditional statistical language model is a non-parametric model: the conditional probabilities are estimated directly from counts. The main disadvantages of this non-parametric approach are poor generalization and the inability to make full use of similar contexts.
The ...
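As a toy illustration of the count-based estimate and why unseen contexts hurt its generalization, here is a small sketch; the corpus and the cond_prob helper are invented for this example and are not from the article.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count-based (non-parametric) bigram estimate: P(w | prev) = count(prev, w) / count(prev)
bigram_counts = defaultdict(Counter)
for prev, w in zip(corpus, corpus[1:]):
    bigram_counts[prev][w] += 1

def cond_prob(w, prev):
    total = sum(bigram_counts[prev].values())
    return bigram_counts[prev][w] / total if total else 0.0

print(cond_prob("cat", "the"))  # 2/3: "the" is followed by "cat" twice and "mat" once
print(cond_prob("dog", "the"))  # 0.0: an unseen pair gets zero probability (poor generalization)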
Added by TheFreak on Sun, 03 Oct 2021 21:27:12 +0300
TI deep learning (TIDL) -- 3
1.4. Training
As long as the layers are supported and the parameter constraints are met, existing Caffe and TF-Slim models can be imported. However, these models usually contain dense weight matrices. To take advantage of the sparsity features of the TIDL library and obtain a 3x-4x performance improvement (for convolution layers), it is necessary to use caffe J ...
Added by mikeatrpi on Sun, 03 Oct 2021 02:06:04 +0300
Convolutional neural network convolution layer implementation for deep learning
In TensorFlow, you can not only build neural networks at a low level with custom weights, but also directly call the high-level implementations of the existing convolution layer classes to quickly build complex networks. We mainly take 2D convolution as an example to show how to implement a convolutional neural network lay ...
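A minimal sketch of the two approaches mentioned above, contrasting the low-level tf.nn.conv2d call (where you manage the kernel variable yourself) with the high-level tf.keras.layers.Conv2D layer class; shapes and filter counts are illustrative only.

import tensorflow as tf

x = tf.random.normal([4, 32, 32, 3])  # a batch of 4 RGB images, NHWC layout

# Low level: create and manage the kernel yourself, then call tf.nn.conv2d
w = tf.Variable(tf.random.normal([3, 3, 3, 16]))  # [kh, kw, in_channels, out_channels]
out_low = tf.nn.conv2d(x, w, strides=1, padding='SAME')

# High level: the layer class creates and tracks its own weights
layer = tf.keras.layers.Conv2D(filters=16, kernel_size=3, strides=1, padding='same')
out_high = layer(x)

print(out_low.shape, out_high.shape)  # (4, 32, 32, 16) (4, 32, 32, 16)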
Added by asparagus on Sat, 02 Oct 2021 04:46:05 +0300
Reproducing PointRCNN + Ubuntu 16.04 + 3080 graphics card + PyTorch 1.7.1+cu110
Reproducing PointRCNN
In the process of reproducing PointRCNN, the step most likely to report errors is compiling the CUDA code. Most of the GitHub issues emphasize version problems with gcc and PyTorch, but I use a 3080 graphics card, which only supports >= CUDA 11.0, so installing a lower versi ...
Added by Mike-2003 on Fri, 01 Oct 2021 23:11:52 +0300
Deep learning VGG16 network based on TensorFlow 2.0
The VGG series increases network depth compared with earlier networks; VGG16 and VGG19 are its representative models. This time, the VGG16 network is implemented with TensorFlow 2.0.
1. Introduction to the VGG16 network
The VGG16 network model stood out in the 2014 ImageNet competition, ranking second in the classification task and first ...
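The article presumably builds the 13 convolution and 3 fully connected layers by hand; purely as an orientation, here is a sketch that instead reuses the ready-made tf.keras.applications.VGG16 backbone. The 10-class head, the 224x224 input size, and the optimizer choice are assumptions, not taken from the article.

import tensorflow as tf

# VGG16 convolutional backbone without the original 1000-class classifier head
base = tf.keras.applications.VGG16(include_top=False, weights=None,
                                   input_shape=(224, 224, 3))

# Attach a small fully connected head for a hypothetical 10-class task
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(4096, activation='relu'),
    tf.keras.layers.Dense(4096, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()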
Added by aidema on Fri, 01 Oct 2021 22:52:04 +0300
ONNX to TensorRT for accelerated model inference
Preface
TensorRT is an efficient deep learning inference framework released by NVIDIA. It includes a deep learning inference optimizer and a runtime, which give inference applications the advantages of low latency and high throughput. In essence, it accelerates inference over the whole network by fusing s ...
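Not from the article; a sketch of one common workflow, assuming a PyTorch model exported to ONNX with torch.onnx.export and then converted with the trtexec command-line tool that ships with TensorRT. The model choice, file names, and opset version are placeholders.

import torch
import torchvision

# Export a PyTorch model to ONNX so TensorRT can consume it
model = torchvision.models.resnet18().eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "resnet18.onnx",
                  input_names=["input"], output_names=["output"],
                  opset_version=11)

# Then build a serialized TensorRT engine from the ONNX file, for example:
#   trtexec --onnx=resnet18.onnx --saveEngine=resnet18.engine --fp16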
Added by rolwong on Thu, 30 Sep 2021 23:51:37 +0300
DDPG code implementation
Code and explanation
1. Hyperparameter setting
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--train', dest='train', default=True)
parser.add_argument('--random_seed', type=int, default=0)
# Whether to render during training
parser.add_argument('--render', type=bool, defa ...
Added by tensionx on Thu, 30 Sep 2021 01:43:44 +0300
Classic network architecture of deep learning: AlexNet
1. Introduction
AlexNet, the winner of the 2012 ImageNet competition, was designed by Hinton and his student Alex Krizhevsky. After that year, more and deeper neural networks were proposed, such as the excellent VGG and GoogLeNet. According to the official figures, the model's top-1 accuracy is 57.1% and its top-5 accuracy is 80.2%, which is quite excellent for th ...
Added by intermediate on Thu, 30 Sep 2021 01:30:52 +0300
Autoregressive model - PixelCNN
Introduction
Generative models are an important class of models in unsupervised learning and have attracted extensive attention in recent years. They can be defined as models whose goal is to learn how to generate new samples drawn from the same distribution as the training data. In the training phase, the generative model attempts to solve the core task o ...
Added by rnintulsa on Wed, 29 Sep 2021 03:50:58 +0300