14 June - 27 June 2019
2130 new papers
The Etymo Newsletter provides the latest developments in machine learning research,
including the most popular datasets and the most trending papers of the past two weeks.
If you like this newsletter, you can subscribe to our fortnightly newsletter here.
Fortnight Summary
Popular datasets include MNIST, ImageNet, CIFAR-10, CelebA, and COCO.
Our trending phrases are Explicit Attention, Local Regression, and Online Feature.
In our trending papers section, we include a paper on XLNet: Generalized Autoregressive Pretraining
for Language Understanding, a paper introducing a family of exchangeable stochastic processes
called Functional Neural Processes (FNPs), and a paper on an unsupervised version of capsule networks.
Popular Datasets
Here are the most mentioned datasets over the last two weeks; a short loading sketch follows the table.
Name | Type | Number of Papers
MNIST | Handwritten Digits | 75
ImageNet | Image Dataset | 45
CIFAR-10 | Tiny Image Dataset in 10 Classes | 44
CelebA | Large-scale CelebFaces Attributes (CelebA) Dataset | 12
COCO | Common Objects in Context | 12
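For readers who want to experiment with these datasets, here is a minimal loading sketch using torchvision (our choice of library here; the ./data directory and the ToTensor transform are illustrative, not prescribed by any of the papers above).

# A minimal sketch: loading two of the most-mentioned datasets with torchvision.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

# Both calls download the data into ./data on first use.
mnist_train = datasets.MNIST(root="./data", train=True, download=True, transform=to_tensor)
cifar_train = datasets.CIFAR10(root="./data", train=True, download=True, transform=to_tensor)

print(len(mnist_train))  # 60,000 training images of handwritten digits
print(len(cifar_train))  # 50,000 training images across 10 classes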
Trending Phrases
In this section, we present phrases that appeared significantly more often in this newsletter than in previous newsletters. This fortnight's trending phrases are Explicit Attention, Local Regression, and Online Feature.
Etymo Trending
Presented below is a list of the most trending papers added in the last two weeks.
- XLNet: Generalized Autoregressive Pretraining for Language Understanding:
XLNet is a generalized autoregressive pretraining method that learns
bidirectional contexts by maximizing the expected likelihood over all
permutations of the factorization order; thanks to its autoregressive
formulation, it overcomes the limitations of BERT. XLNet also integrates
ideas from Transformer-XL, the state-of-the-art autoregressive model,
into pretraining. Its permutation objective is sketched after this list.
- The Functional Neural Process:
Functional Neural Processes (FNPs) are a new family of exchangeable
stochastic processes. FNPs model distributions over functions
by learning a graph of dependencies on top of latent representations
of the points in the given dataset. They scale to large datasets
through mini-batch optimization and make predictions for new points
via their posterior predictive distribution.
- Stacked Capsule Autoencoders:
The authors describe an unsupervised version of capsule networks,
in which a neural encoder, which looks at all of the parts, is
used to infer the presence and poses of object capsules. They
learn object capsules and their part capsules on unlabeled data,
and then cluster the presence vectors of the object capsules.
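For reference, the permutation language modeling idea behind XLNet can be written as the following objective (our paraphrase in LaTeX notation, where Z_T is the set of all permutations of the index sequence [1, ..., T] and z_{<t} denotes the first t-1 elements of a permutation z):

\max_{\theta} \; \mathbb{E}_{\mathbf{z} \sim \mathcal{Z}_T} \left[ \sum_{t=1}^{T} \log p_{\theta}\!\left( x_{z_t} \mid \mathbf{x}_{\mathbf{z}_{<t}} \right) \right]

Because the expectation ranges over all factorization orders, each token is trained to be predicted from every possible surrounding context, which is how the model captures bidirectional context while remaining autoregressive.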
We hope you have enjoyed this newsletter! If you have any comments or suggestions, please email ernest@etymo.io or steven@etymo.io.