Explanation of BiLSTM-CRF code
BiLSTM-CRF code
Code from "Named entity recognition (NER): an introduction to the principle of BiLSTM-CRF + PyTorch tutorial code parsing". Part I: importing packages. 1. The torch.nn package mainly contains the Modules used to build each layer, such as fully connected layers, 2D convolution, pooling, etc.; the torch.nn package also contains a series of useful ...
Added by msurabbott on Fri, 15 Oct 2021 23:47:19 +0300
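The excerpt above lists the kinds of building-block Modules found in torch.nn. Here is a minimal sketch (not taken from the tutorial itself; all sizes are illustrative) of a fully connected layer, a 2D convolution and a pooling layer:

```python
import torch
import torch.nn as nn

# Building-block Modules from torch.nn mentioned above.
fc = nn.Linear(in_features=128, out_features=10)                             # fully connected layer
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)   # 2D convolution
pool = nn.MaxPool2d(kernel_size=2)                                           # 2x2 max pooling

x_img = torch.randn(1, 3, 32, 32)         # (batch, channels, height, width)
feat = pool(conv(x_img))                  # -> torch.Size([1, 16, 16, 16])
logits = fc(torch.randn(1, 128))          # -> torch.Size([1, 10])
print(feat.shape, logits.shape)
```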
Text classification based on MLP
Recently, I have been studying MLP, CNN and RNN network models under the PyTorch framework, and I ran text classification experiments on product review data obtained from GitHub. This article describes how to build an MLP under the PyTorch framework to classify that data. The data set looks roughly as follows:
1. Import module
import pandas ...
Added by pagod on Thu, 14 Oct 2021 22:33:17 +0300
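Since the article itself is truncated here, the following is only a generic sketch of an MLP text classifier in PyTorch, assuming bag-of-words input features; the vocabulary size, hidden size and the two classes are assumptions, not values from the article:

```python
import torch
import torch.nn as nn

# A hypothetical MLP for text classification over bag-of-words vectors.
class MLPClassifier(nn.Module):
    def __init__(self, vocab_size=5000, hidden=256, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vocab_size, hidden),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x):                 # x: (batch, vocab_size)
        return self.net(x)

model = MLPClassifier()
x = torch.rand(8, 5000)                   # a fake batch of 8 reviews
print(model(x).shape)                     # torch.Size([8, 2])
```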
[RNN architecture analysis] LSTM model
preface
Understand the internal structure and calculation formulas of LSTM
Master the use of the LSTM tools in PyTorch
Understand the advantages and disadvantages of LSTM
LSTM (Long Short-Term Memory), also known as the long short-term memory structure, is a variant of the traditional RNN. Compared with the classical RNN, it can more effectively capture the semantic ...
Added by tonga on Thu, 07 Oct 2021 08:00:49 +0300
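To illustrate the second objective above (using the LSTM tools in PyTorch), here is a minimal sketch with nn.LSTM; the tensor sizes are illustrative, not from the article:

```python
import torch
import torch.nn as nn

# A single-layer LSTM; batch_first=True means input is (batch, seq_len, features).
lstm = nn.LSTM(input_size=100, hidden_size=64, num_layers=1, batch_first=True)

x = torch.randn(4, 20, 100)               # (batch, seq_len, input_size)
h0 = torch.zeros(1, 4, 64)                # (num_layers, batch, hidden_size)
c0 = torch.zeros(1, 4, 64)                # initial cell state
output, (hn, cn) = lstm(x, (h0, c0))
print(output.shape)                        # torch.Size([4, 20, 64])
print(hn.shape, cn.shape)                  # torch.Size([1, 4, 64]) each
```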
[NLP] news topic classification task
preface
Learning objectives
Learn about news topic classification and the relevant data
Master the process of building a news topic classifier with a shallow network
About the news topic classification task:
Taking the text description of a news report as input, the model helps us judge which type of news ...
Added by ragear on Tue, 05 Oct 2021 22:26:59 +0300
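A common shallow architecture for this kind of news topic classifier is an EmbeddingBag that averages word embeddings, followed by a single linear layer. The sketch below assumes that architecture; the vocabulary size, embedding dimension and the four classes are assumptions, not details confirmed by the article:

```python
import torch
import torch.nn as nn

# A shallow news-topic classifier: averaged word embeddings + one linear layer.
class NewsClassifier(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=64, num_classes=4):
        super().__init__()
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim)
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, text, offsets):
        # text: 1-D tensor of token ids for the whole batch
        # offsets: start index of each sample inside `text`
        return self.fc(self.embedding(text, offsets))

model = NewsClassifier()
text = torch.randint(0, 20000, (30,))      # 30 token ids covering 3 news items
offsets = torch.tensor([0, 10, 18])        # items of length 10, 8 and 12
print(model(text, offsets).shape)          # torch.Size([3, 4])
```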
NNLM feedforward neural network model learning notes
The traditional statistical language model is a non-parametric model: the conditional probabilities are estimated directly from counts. The main disadvantages of such a non-parametric model are poor generalization and an inability to make full use of similar contexts
The ...
Added by TheFreak on Sun, 03 Oct 2021 21:27:12 +0300
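For contrast with the counting-based model described above, here is a sketch of the classic feedforward NNLM: the embeddings of the previous context words are concatenated, passed through a tanh hidden layer, and projected to a distribution over the vocabulary. All sizes are illustrative:

```python
import torch
import torch.nn as nn

# Feedforward NNLM: predict the next word from the previous `n_context` words.
class NNLM(nn.Module):
    def __init__(self, vocab_size=10000, n_context=3, embed_dim=50, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.hidden = nn.Linear(n_context * embed_dim, hidden)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, context):              # context: (batch, n_context) word ids
        e = self.embed(context).flatten(1)    # (batch, n_context * embed_dim)
        h = torch.tanh(self.hidden(e))
        return self.out(h)                    # logits over the vocabulary

model = NNLM()
context = torch.randint(0, 10000, (8, 3))     # 8 contexts of 3 previous words
print(model(context).shape)                    # torch.Size([8, 10000])
```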
Visualizing GloVe vectors using t-SNE
1. Introduction to the GloVe word vectors
GloVe: the full name is Global Vectors for Word Representation. Its paper [2] was presented at the EMNLP conference in 2014. It combines the ideas of word vectors and matrix factorization to pre-train on a raw corpus and obtain a low-dimensional, continuous and dense representation. Visualizing the pre-trained ...
Added by Todd_Z on Fri, 01 Oct 2021 19:12:45 +0300
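A sketch of the visualization step with scikit-learn's t-SNE; the file name glove.6B.50d.txt and the choice of the first 200 words are assumptions, not details from the article:

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Load the first 200 pre-trained GloVe vectors (each line: word followed by floats).
words, vecs = [], []
with open("glove.6B.50d.txt", encoding="utf-8") as f:   # hypothetical file path
    for i, line in enumerate(f):
        if i >= 200:
            break
        parts = line.rstrip().split(" ")
        words.append(parts[0])
        vecs.append(np.asarray(parts[1:], dtype=np.float32))

# Project the 50-dimensional vectors down to 2D with t-SNE and plot them.
emb_2d = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(np.stack(vecs))

plt.figure(figsize=(10, 10))
plt.scatter(emb_2d[:, 0], emb_2d[:, 1], s=5)
for word, (x, y) in zip(words, emb_2d):
    plt.annotate(word, (x, y), fontsize=8)
plt.show()
```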
Data structures and algorithms
Data structure and algorithm introduction
Logical structure
Physical structure
Talking about algorithms
Algorithm time complexity
In short, pay attention to the highest-order term, ignore constant factors and constant multiples of that term, and consider sufficiently large inputs. The number of executions is ...
Added by Ryan0r on Sun, 19 Sep 2021 12:49:19 +0300
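The rule above ("keep the highest-order term, drop the constants") can be made concrete with two toy functions, which are not from the article:

```python
# Operation counts grow like n*n for the first function and like n for the second.
def sum_pairs(nums):
    """Every ordered pair is visited: about n*n additions -> O(n^2)."""
    total = 0
    for a in nums:            # n iterations
        for b in nums:        # n iterations each
            total += a + b    # executed n*n times
    return total

def sum_once(nums):
    """Single pass: about 3*n + 2 steps; constants are dropped -> O(n)."""
    total = 0
    for a in nums:
        total += a
    return total

# For large enough n, n^2 dominates any constant multiple of n,
# which is why only the highest-order term matters.
print(sum_pairs(range(100)), sum_once(range(100)))
```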
Natural language processing learning road 01: search engines and a simple implementation
This article is mainly based on the teacher's lectures; it records my study, recollection and notes of the teacher's course. Original link
There is a path up the mountain of books, and diligence is the way; the sea of learning is boundless, and hard work is the boat.
Zero. When drinking water, do not forget those who dug the well
Please support the teacher's original article: Original link ...
Added by cunoodle2 on Sun, 19 Sep 2021 06:37:51 +0300
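Since the teacher's original code is only linked above, not quoted, here is a generic sketch of the core data structure of a simple search engine, an inverted index; the example documents are made up:

```python
from collections import defaultdict

# Each word maps to the set of documents containing it; a query intersects those sets.
docs = {
    0: "natural language processing with python",
    1: "search engine implementation in python",
    2: "deep learning for natural language",
}

inverted_index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        inverted_index[word].add(doc_id)

def search(query):
    result = None
    for word in query.split():
        hits = inverted_index.get(word, set())
        result = hits if result is None else result & hits
    return sorted(result or [])

print(search("natural language"))   # [0, 2]
print(search("python"))             # [0, 1]
```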
NLP star sky intelligent dialogue robot series for natural language processing: an in-depth understanding of the Transformer's multi-head attention architecture - 1
NLP star sky intelligent dialogue robot series for natural language processing: an in-depth understanding of the Transformer's multi-head attention architecture for natural language processing. This article starts with the architecture of the Transformer's multi-head attention layer, and then uses a Python coding example to help readers understand the multi-head ...
Added by R_P on Sat, 11 Sep 2021 04:18:23 +0300
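As a companion to the series' coding example (which is truncated here), a minimal sketch of multi-head attention using PyTorch's built-in module; embed_dim=512 and num_heads=8 follow the original Transformer paper, while the batch and sequence sizes are illustrative:

```python
import torch
import torch.nn as nn

# Multi-head self-attention: queries, keys and values all come from the same input.
mha = nn.MultiheadAttention(embed_dim=512, num_heads=8, batch_first=True)

x = torch.randn(2, 10, 512)               # (batch, seq_len, embed_dim)
attn_output, attn_weights = mha(x, x, x)  # self-attention: q = k = v = x
print(attn_output.shape)                  # torch.Size([2, 10, 512])
print(attn_weights.shape)                 # torch.Size([2, 10, 10]), averaged over heads
```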