PointNet++ upsampling (Feature Propagation)
When handling the segmentation task, PointNet++ needs to restore the downsampled points to the same number of points as the input, so that a prediction can be made for every point. The paper only gives a brief description and formula, which is not easy to follow, so I record my understanding of the process here.
1. Purpose of F ...
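For reference, the Feature Propagation step in the PointNet++ paper interpolates features from the sparser (downsampled) level back to the denser level with inverse-distance weighting over the k nearest known points (k = 3, p = 2 by default):

f^{(j)}(x) = \frac{\sum_{i=1}^{k} w_i(x)\, f_i^{(j)}}{\sum_{i=1}^{k} w_i(x)}, \qquad w_i(x) = \frac{1}{d(x, x_i)^p}

The interpolated features are then concatenated with the skip-linked features from the matching set-abstraction level and passed through a "unit PointNet" (a shared MLP), and this is repeated until the original number of points is restored.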
Added by Donny Bahama on Mon, 08 Nov 2021 15:14:55 +0200
A fully connected neural network implemented in C
About parameter acquisition: this was explained in the previous blog post; please refer to that post for the relevant links.
1. Analyzing the input and output
1. The handwritten input is a 28x28 black-and-white image, so there are 784 inputs.
2. The output is the probability of each digit 0-9, so there are 10 outputs.
3. The input can only ...
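The post implements this in C; purely as a minimal illustration of the shapes described above (784 inputs, 10 output probabilities), here is a NumPy sketch. The single fully connected layer and the random initialization are placeholders, not the post's actual network.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

image = np.random.rand(28, 28)        # placeholder black-and-white input
x = image.reshape(784)                # flatten to the 784 input units
W = np.random.randn(10, 784) * 0.01   # weights: 10 outputs x 784 inputs
b = np.zeros(10)                      # one bias per output unit
y = sigmoid(W @ x + b)                # 10 values read as digit probabilities
print(y.shape)                        # (10,)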
Added by deljhp on Fri, 29 Oct 2021 04:57:45 +0300
Deep learning notes:
The previous article explained the logistic regression model for deep learning. This article covers the vectorization of logistic regression and the basic code needed to implement it.
1. The sigmoid function:
The sigmoid function can be written with Python's math library ...
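A minimal sketch of both forms, assuming the usual definition sigmoid(z) = 1 / (1 + e^(-z)): a scalar version via Python's math library, and the vectorized version via NumPy that the vectorization discussion leads to.

import math
import numpy as np

def sigmoid_scalar(z):
    # handles one value at a time
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_vectorized(z):
    # np.exp is element-wise, so a whole array is handled in one call
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid_scalar(0.0))                               # 0.5
print(sigmoid_vectorized(np.array([-1.0, 0.0, 1.0])))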
Added by lalov1 on Wed, 20 Oct 2021 09:40:55 +0300
An easy explanation of torch.utils.data.DataLoader (for beginners)
Official explanation: DataLoader combines a dataset and a sampler to provide an iterable over the data.
Main parameters:
1. dataset: must be torch.utils.data.Dataset itself or a class that inherits from it.
Its main method is __getitem__(self, index), used to retrieve a sample according to the index.
2. batch_size: how many pieces of data sh ...
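A minimal sketch of the two parameters described above, using a toy Dataset subclass (the class, data, and sizes here are illustrative, not from the post):

import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    """A toy dataset of 100 (feature, label) pairs."""
    def __init__(self):
        self.data = torch.randn(100, 3)
        self.labels = torch.randint(0, 2, (100,))

    def __getitem__(self, index):
        # retrieve one sample according to the index
        return self.data[index], self.labels[index]

    def __len__(self):
        return len(self.data)

loader = DataLoader(ToyDataset(), batch_size=16, shuffle=True)
for features, labels in loader:
    print(features.shape, labels.shape)  # torch.Size([16, 3]) torch.Size([16])
    break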
Added by nedpwolf on Fri, 15 Oct 2021 06:27:59 +0300
A detailed explanation of object detection with PaddleX
Preface
Using Baidu's open-source PaddleX tool, we can quickly and easily train deep network models for object detection, image classification, instance segmentation and semantic segmentation on our own labeled data. This post mainly records the whole process of using PaddleX to train a simple ppyolo_tiny model for detecting cats and dogs ...
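A rough sketch of what such a training script can look like, assuming the PaddleX 2.x detection API (pdx.datasets.VOCDetection, pdx.det.PPYOLOTiny) and placeholder dataset paths and hyperparameters; exact parameter names may differ between PaddleX versions.

import paddlex as pdx
from paddlex import transforms as T

# Placeholder transforms and dataset paths (illustrative, not from the post)
train_transforms = T.Compose([T.Resize(target_size=320), T.Normalize()])
train_dataset = pdx.datasets.VOCDetection(
    data_dir='dataset',
    file_list='dataset/train_list.txt',
    label_list='dataset/labels.txt',
    transforms=train_transforms)

# Two classes: cat and dog
model = pdx.det.PPYOLOTiny(num_classes=2)
model.train(
    num_epochs=100,
    train_dataset=train_dataset,
    train_batch_size=8,
    save_dir='output/ppyolo_tiny')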
Added by 01chris on Wed, 13 Oct 2021 21:42:36 +0300
Andrew Ng's deep learning programming assignment, part 1 - Convolution model step by step v1/v2
1. Building a convolutional network step by step. The basic network architecture built this time: Note: every forward propagation operation has a corresponding backward propagation step. The parameters from forward propagation are cached and later used to compute the gradients during backward propagation. 2. Conv ...
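A sketch of the single convolution step from that assignment, assuming the usual NumPy formulation: multiply a slice of the previous activation element-wise with the filter W, sum over all entries, and add the bias b.

import numpy as np

def conv_single_step(a_slice_prev, W, b):
    # a_slice_prev: slice of the previous activation, shape (f, f, n_C_prev)
    # W: filter weights, shape (f, f, n_C_prev); b: bias, shape (1, 1, 1)
    s = a_slice_prev * W      # element-wise product
    Z = np.sum(s)             # sum over all entries
    return Z + b.item()       # add the bias as a scalar

a_slice = np.random.randn(3, 3, 4)
W = np.random.randn(3, 3, 4)
b = np.random.randn(1, 1, 1)
print(conv_single_step(a_slice, W, b))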
Added by PHPBewildered on Tue, 12 Oct 2021 01:33:38 +0300
[PyTorch series - 24]: Neural network basics - simple linear regression with a single neuron and no activation function - 2
Author's home page (Silicon-based Workshop of Slow-fire Rock Sugar): the CSDN blog of Slow-fire Rock Sugar (Wang Wenbing)
URL of this article: https://blog.csdn.net/HiWangWenBing/article/details/120600611
Contents
Introduction: the deep learning model framework
Chapter 1: Business area analy ...
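A minimal sketch of the model named in the title: a single neuron (nn.Linear(1, 1)) with no activation function, fitted to a toy linear relation. The data and hyperparameters below are illustrative, not from the post.

import torch
from torch import nn

# Toy data: y = 2x + 1 plus a little noise (illustrative only)
x = torch.linspace(-1, 1, 100).unsqueeze(1)
y = 2 * x + 1 + 0.05 * torch.randn_like(x)

model = nn.Linear(1, 1)            # one neuron, no activation
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print(model.weight.item(), model.bias.item())  # should approach 2 and 1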
Added by ftrudeau on Sun, 03 Oct 2021 23:39:05 +0300
Week 5 assignment: convolutional neural networks (Part 3)
The summary is taken from Mr. Gao's GitHub:
https://github.com/OUCTheoryGroup/colab_demo/blob/master/202003_models/MobileNetV1_CIFAR10.ipynb
https://github.com/OUCTheoryGroup/colab_demo/blob/master/202003_models/MobileNetV2_CIFAR10.ipynb
MobileNet v1
The core of MobileNet v1 is to split the convolution into two parts: Depthwise + Pointwise.
Dept ...
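A minimal sketch of that split in PyTorch: a 3x3 depthwise convolution (groups equal to the number of input channels, so each channel gets its own filter) followed by a 1x1 pointwise convolution that mixes channels. The channel counts and input size are illustrative.

import torch
from torch import nn

def depthwise_separable(in_ch, out_ch, stride=1):
    return nn.Sequential(
        # Depthwise: one 3x3 filter per input channel (groups=in_ch)
        nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                  padding=1, groups=in_ch, bias=False),
        nn.BatchNorm2d(in_ch),
        nn.ReLU(inplace=True),
        # Pointwise: 1x1 convolution mixes information across channels
        nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

block = depthwise_separable(32, 64)
print(block(torch.randn(1, 32, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])

Compared with a standard 3x3 convolution, this factorization cuts the computation by roughly a factor of 1/out_ch + 1/9, which is the main source of MobileNet v1's efficiency.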
Added by itpvision on Sun, 03 Oct 2021 22:01:56 +0300
NNLM feedforward neural network model learning notes
The traditional statistical language model is a nonparametric model: the conditional probabilities are estimated directly by counting. The main drawback of such a nonparametric model is poor generalization; it cannot make full use of similar contexts.
The ...
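For reference, the feedforward NNLM of Bengio et al. (2003) replaces counting with a parametric model: the embeddings of the previous n-1 words are concatenated into a vector x, and the model computes

y = b + Wx + U\tanh(d + Hx), \qquad P(w_t \mid w_{t-n+1}, \dots, w_{t-1}) = \operatorname{softmax}(y)

Because similar words receive similar embeddings, the model generalizes to contexts whose exact counts it has never seen, which is precisely the weakness of the nonparametric model described above.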
Added by TheFreak on Sun, 03 Oct 2021 21:27:12 +0300
TI deep learning (TIDL) -- 3
1.4. Training: As long as the layers are supported and the parameter constraints are met, existing Caffe and TF-Slim models can be imported. However, these models usually contain dense weight matrices. To take advantage of the TIDL library and obtain a 3x-4x performance improvement (for convolution layers), it is necessary to use caffe J ...
Added by mikeatrpi on Sun, 03 Oct 2021 02:06:04 +0300