19th October - 1st November 2018
1869 new papers
In this newsletter from Etymo, you can find the latest developments in machine learning research, including the most popular datasets used, the most frequently appearing keywords, the important research papers associated with those keywords, and the most trending papers of the past two weeks.
If you and your friends like this newsletter, you can subscribe to our fortnightly newsletters here.
Fortnight Summary
There was still a strong focus on computer vision (CV) in the papers published in the last two weeks, as reflected in the popularity of the CV datasets used. The ranking of the datasets appearing in research papers stayed almost the same compared to the last few newsletters. The only non-image dataset, Twitter, reappeared near the top after dropping out last time.
We present the emerging research interests under the "Trending Phrases" section. Research on expression recognition is going strong, ranging from medical applications of the technology (Alzheimer's Disease Diagnosis Based on Cognitive Methods in Virtual Environments and Emotions Analysis) to new methods for expression recognition and classification (Deep generative-contrastive networks for facial expression recognition, Classifying and Visualizing Emotions with Emotional DAN, and Automatic Analysis of Facial Expressions Based on Deep Covariance Trajectories). There is also an emerging interest in risk measures, such as Recursive Optimization of Convex Risk Measures: Mean-Semideviation Models and An Approximation Algorithm for Risk-averse Submodular Optimization.
The trending papers of the last two weeks included a new proposal to save memory using reversible recurrent neural networks (Reversible Recurrent Neural Networks), some cautions against the conventional thinking about the density estimates from deep generative models (Do Deep Generative Models Know What They Don't Know?), and a new algorithm using a random network distillation bonus with increased flexibility to combine intrinsic and extrinsic rewards, achieving state-of-the-art performance on Montezuma's Revenge, a game famously difficult for deep reinforcement learning methods (Exploration by Random Network Distillation).
In other areas of machine learning, reviews and summaries of the current state of machine learning techniques are again very popular, including Model Selection Techniques -- An Overview, One Deep Music Representation to Rule Them All?: A comparative analysis of different representation learning strategies, Taking Human out of Learning Applications: A Survey on Automated Machine Learning, and Everything you always wanted to know about a dataset: studies in data summarisation. There are also new developments in continuous mobile vision (NestDNN: Resource-Aware Multi-Tenant On-Device Deep Learning for Continuous Mobile Vision), and a new algorithm for open set recognition problems (Collective decision for open set recognition).
Popular Datasets
Computer vision is still the main focus area of research. The Twitter dataset is near the top again after dropping out of the last newsletter.
| Name | Type | Number of Papers |
| --- | --- | --- |
| MNIST | Handwritten Digits | 63 |
| ImageNet | Image Dataset | 51 |
| CIFAR-10 | Tiny Image Dataset in 10 Classes | 34 |
| COCO | Common Objects in Context | 16 |
| KITTI | Autonomous Driving | 13 |
| CelebA | Large-scale CelebFaces Attributes | 13 |
| Cityscapes | Urban Street Scenes | 11 |
| Twitter | Tweets | 7 |
Trending Phrases
In this section, we present a list of words/phrases that appeared significantly more often in this newsletter than in previous newsletters.
Etymo Trending
Presented below is a list of the most trending papers added in the last two weeks.
- Reversible Recurrent Neural Networks: RNN models are usually memory-intensive. The authors present reversible RNNs, in which the hidden-to-hidden transitions can be recomputed during backpropagation instead of storing the hidden states. Their method achieves performance comparable to traditional models while reducing the activation memory cost by a factor of 10-15 (see the coupling sketch after this list).
- Do Deep Generative Models Know What They Don't Know?: The authors challenge the widely accepted view that deep generative models can reliably detect novel, out-of-distribution inputs by modeling the density of the input features. They demonstrate that generative models can still assign high density to inputs drawn from a different distribution than the training data, and caution against using density estimates from deep generative models to identify inputs similar to the training distribution until their behavior on out-of-distribution inputs is better understood (see the thresholding sketch after this list).
- Exploration by Random Network Distillation: This 17-page paper presents a combination of a random network distillation bonus and a method to flexibly combine intrinsic and extrinsic rewards, which achieves state-of-the-art performance on Montezuma's Revenge, a game famously difficult for deep reinforcement learning methods (see the bonus sketch after this list).
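For readers curious about the mechanics behind Reversible Recurrent Neural Networks, here is a minimal NumPy sketch of the kind of additive-coupling update that makes a recurrent step exactly invertible. The `f` update function, the tanh nonlinearity, and the weight shapes are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

# Sketch of a reversible RNN step via additive coupling (assumed form):
# the hidden state is split into halves (h1, h2) so the previous state
# can be reconstructed exactly during backprop instead of being stored.

def f(h, x, W):
    # Hypothetical update function: a single tanh layer.
    return np.tanh(W @ np.concatenate([h, x]))

def forward_step(h1, h2, x, Wf, Wg):
    h1_new = h1 + f(h2, x, Wf)      # update first half from second half
    h2_new = h2 + f(h1_new, x, Wg)  # update second half from new first half
    return h1_new, h2_new

def reverse_step(h1_new, h2_new, x, Wf, Wg):
    # Invert the coupling: recompute the previous hidden state exactly,
    # so activations need not be cached for backpropagation.
    h2 = h2_new - f(h1_new, x, Wg)
    h1 = h1_new - f(h2, x, Wf)
    return h1, h2

rng = np.random.default_rng(0)
d, dx = 4, 3
Wf = rng.normal(size=(d, d + dx))
Wg = rng.normal(size=(d, d + dx))
h1, h2, x = rng.normal(size=d), rng.normal(size=d), rng.normal(size=dx)

h1n, h2n = forward_step(h1, h2, x, Wf, Wg)
r1, r2 = reverse_step(h1n, h2n, x, Wf, Wg)
assert np.allclose(r1, h1) and np.allclose(r2, h2)  # exact reconstruction
```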
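The density-thresholding heuristic that Do Deep Generative Models Know What They Don't Know? cautions against can be illustrated with a toy sketch. Here a fitted Gaussian stands in for a deep generative model, and the synthetic data and 1% threshold are assumptions chosen for illustration; the paper's point is that deep models can fail this test by assigning high density to out-of-distribution inputs.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Fit a density model to training data, then flag low-likelihood inputs
# as out-of-distribution. A Gaussian stands in for a deep generative model.
rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))  # in-distribution
ood = rng.normal(loc=5.0, scale=1.0, size=(10, 2))      # out-of-distribution

model = multivariate_normal(mean=train.mean(axis=0),
                            cov=np.cov(train, rowvar=False))

# Threshold at the 1st percentile of training log-likelihoods.
threshold = np.quantile(model.logpdf(train), 0.01)

def is_ood(x):
    # The paper shows this test can fail for deep generative models:
    # OOD inputs may receive *higher* density than the training data.
    return model.logpdf(x) < threshold

print(is_ood(ood))  # mostly True for this toy Gaussian model
```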
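Finally, a rough sketch of the random network distillation bonus from Exploration by Random Network Distillation: a fixed, randomly initialized target network embeds each observation, a predictor network is trained to match it, and the prediction error serves as an intrinsic reward that is high on novel states. The tiny linear networks, learning rate, and update loop below are hypothetical stand-ins for the paper's networks.

```python
import numpy as np

rng = np.random.default_rng(0)
obs_dim, emb_dim, lr = 8, 4, 1e-2

W_target = rng.normal(size=(emb_dim, obs_dim))  # fixed random network
W_pred = rng.normal(size=(emb_dim, obs_dim))    # trained predictor

def intrinsic_reward(obs):
    err = W_target @ obs - W_pred @ obs
    return float(err @ err)  # squared prediction error = novelty bonus

def train_predictor(obs):
    global W_pred
    err = W_pred @ obs - W_target @ obs
    W_pred -= lr * 2.0 * np.outer(err, obs)  # gradient step on the MSE

# The agent would combine this bonus with the environment reward,
# e.g. total = extrinsic + beta * intrinsic (beta is a tunable weight).
obs = rng.normal(size=obs_dim)
before = intrinsic_reward(obs)
for _ in range(200):
    train_predictor(obs)
print(before, intrinsic_reward(obs))  # bonus shrinks as the state becomes familiar
```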
Frequent Words
"Learning", "Model", "Data" and "Set" are the most frequent words. The top two papers associated with each of the key words are:
We hope you have enjoyed this newsletter! If you have any comments or suggestions, please email ernest@etymo.io or steven@etymo.io.