newsletter.etymo

31st December 2018 - 10th January 2019

960 new papers

In this newsletter from Etymo, you can find out about the latest developments in machine learning research, including the most popular datasets, the most frequently appearing keywords, important research papers associated with those keywords, and the most trending papers of the past two weeks.

If you and your friends like this newsletter, you can subscribe to our fortnightly newsletters here.

Fortnight Summary

The new year 2019 has gotten off to a slow start in machine learning research, as the number of papers published in the past two weeks is significantly smaller than usual. Nevertheless, computer vision (CV) remains a main research area, as reflected in the popularity of the CV datasets and the most trending papers.

We present emerging research interests under the "Trending Phrases" section. The papers in that section show some cutting-edge results. There are three good papers related to differential evolution, a family of optimization algorithms applicable to problems that are non-continuous or noisy (a minimal sketch of the basic scheme follows below). There is also notable progress on foreground and background estimation using PCA for moving cameras. Please read the "Trending Phrases" section for more details.
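As background, here is a minimal sketch of the classic DE/rand/1/bin differential evolution scheme. It is illustrative only and is not taken from any of the papers above; the parameter names and defaults (pop_size, F, CR) are conventional choices, not settings from those papers.

    # Minimal sketch of DE/rand/1/bin; illustrative, not from the papers above.
    import numpy as np

    def differential_evolution(objective, bounds, pop_size=20, F=0.8, CR=0.9,
                               iters=200, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds, dtype=float).T
        dim = len(bounds)
        pop = rng.uniform(lo, hi, size=(pop_size, dim))
        fitness = np.array([objective(x) for x in pop])
        for _ in range(iters):
            for i in range(pop_size):
                others = [j for j in range(pop_size) if j != i]
                a, b, c = pop[rng.choice(others, size=3, replace=False)]
                mutant = np.clip(a + F * (b - c), lo, hi)  # mutation (rand/1)
                mask = rng.random(dim) < CR                # binomial crossover
                mask[rng.integers(dim)] = True             # keep at least one mutated gene
                trial = np.where(mask, mutant, pop[i])
                f_trial = objective(trial)
                if f_trial <= fitness[i]:                  # greedy one-to-one selection
                    pop[i], fitness[i] = trial, f_trial
        best = int(np.argmin(fitness))
        return pop[best], fitness[best]

    # Works on noisy, non-smooth objectives where gradients are unavailable:
    x_best, f_best = differential_evolution(
        lambda x: np.abs(x).sum() + 0.01 * np.random.randn(), [(-5, 5)] * 3)

Note that no gradient of the objective is ever required, which is why the method suits non-continuous and noisy problems.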

Other notable developments in research include the following:

  • More sample-efficient learning of active tasks using mid-level visual representations than with the from-scratch learning approach: Mid-Level Visual Representations Improve Generalization and Sample Efficiency for Learning Active Tasks
  • A new model-sequence approach to choosing the model with the best predictive accuracy under a given budget: Cost-sensitive Selection of Variables by Ensemble of Model Sequences
  • A solution that determines layer-wise parallelism for deep neural network training with an array of DNN accelerators: HyPar: Towards Hybrid Parallelism for Deep Learning Accelerator Array
  • An analysis that establishes the fundamental limits of learning in deep neural networks by characterizing what is possible when no constraints are imposed on the learning algorithm or the amount of training data: Deep Neural Network Approximation Theory
  • A deep network embedding model that learns low-dimensional node vector representations while preserving structural balance in signed networks: Deep Network Embedding for Graph Representation Learning in Signed Networks

Some of the notable review papers include:
  • A Comprehensive Survey on Graph Neural Networks
  • An introduction to domain adaptation and transfer learning
  • Coevolution spreading in complex networks
  • FPGA-based Accelerators of Deep Learning Networks for Learning and Classification: A Review
  • A Survey on Multi-output Learning
Popular Datasets

Computer vision is still the main focus area of research.

Name         Type                               Number of Papers
MNIST        Handwritten Digits                 41
CIFAR-10     Tiny Image Dataset in 10 Classes   28
ImageNet     Image Dataset                      23
KITTI        Autonomous Driving                 16
COCO         Common Objects in Context          9
Cityscapes   Images from 50 Different Cities    9

Trending Phrases

In this section, we present a list of phrases that appeared significantly more often in this newsletter than in previous ones. A sketch of one simple way such a trend score could be computed appears below.
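As an illustration only (Etymo's actual scoring method is not described here), one simple way to score trending phrases is a smoothed frequency ratio between the current and previous batches of papers:

    # Hypothetical trend score: smoothed frequency ratio, current vs. past batches.
    from collections import Counter

    def trend_scores(current_phrases, past_phrases):
        cur, past = Counter(current_phrases), Counter(past_phrases)
        n_cur = sum(cur.values()) or 1
        n_past = sum(past.values()) or 1
        # Add-one smoothing; a score above 1 means the phrase is gaining ground.
        return {p: ((cur[p] + 1) / n_cur) / ((past[p] + 1) / n_past)
                for p in cur}

    scores = trend_scores(
        ["differential evolution"] * 5 + ["neural network"] * 3,
        ["neural network"] * 4)
    top = sorted(scores, key=scores.get, reverse=True)  # "differential evolution" first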

Etymo Trending

Presented below is a list of the most trending papers added in the last two weeks.

  • A Comprehensive Survey on Graph Neural Networks:
    In this 22-page review, the authors give a comprehensive overview of graph neural networks (GNNs) in the data mining and machine learning fields. They propose a new taxonomy that divides state-of-the-art graph neural networks into different categories. With a focus on graph convolutional networks, they review recently developed alternative architectures and discuss the applications of graph neural networks across various domains. They also summarize the open-source code and benchmarks of existing algorithms on different learning tasks.

  • An introduction to domain adaptation and transfer learning:
    The author introduces domain adaptation and transfer learning, guided by the aim of appropriately generalizing a classifier from a source domain to a target domain. The author starts with simpler dataset shifts, namely prior, covariate and concept shift, and then moves on to methods for more complex domain shifts, including importance weighting, subspace mappings, domain-invariant spaces, feature augmentation, minimax estimators and robust algorithms. A minimal sketch of importance weighting under covariate shift appears after this list.

  • Mid-Level Visual Representations Improve Generalization and Sample Efficiency for Learning Active Tasks:
    One of the ultimate objectives of computer vision is to help robotic agents perform active tasks. The conventional approach, using deep reinforcement learning, needs to learn active tasks from scratch using raw images as input. The authors show that proper use of mid-level perception confers significant advantages over training from scratch. They implement a perception module as a set of mid-level visual representations and demonstrate that learning active tasks with mid-level features is significantly more sample-efficient than learning from scratch, and that it generalizes in situations where the from-scratch approach fails. However, gaining these advantages requires careful selection of the particular mid-level features. A sketch of the general frozen-feature idea appears after this list.
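As promised above, here is a minimal sketch of importance weighting under covariate shift, one of the techniques the domain adaptation review covers. The density ratio p_target(x) / p_source(x) is estimated with a domain classifier; the function name and the use of logistic regression are illustrative assumptions, not the paper's code.

    # Hypothetical helper: estimate importance weights with a domain classifier.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def importance_weights(X_source, X_target):
        # Label source samples 0 and target samples 1, then train a classifier;
        # its odds ratio estimates the density ratio p_target(x) / p_source(x).
        X = np.vstack([X_source, X_target])
        d = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])
        p = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X_source)[:, 1]
        p = np.clip(p, 1e-6, 1 - 1e-6)                 # avoid division by zero
        return (p / (1 - p)) * (len(X_source) / len(X_target))

The resulting weights can then be passed to most estimators that accept sample weights, e.g. model.fit(X_source, y_source, sample_weight=importance_weights(X_source, X_target)).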
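And here is a sketch of the general frozen-feature idea behind the mid-level representations paper: feed features from a frozen, pretrained encoder to a small trainable policy head instead of raw pixels. The ResNet-18 backbone and the policy head shown are our illustrative assumptions; the authors use a set of task-specific mid-level representations rather than a single generic encoder.

    # Illustrative frozen-encoder setup; not the authors' actual perception module.
    import torch
    import torch.nn as nn
    from torchvision import models

    encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    encoder.fc = nn.Identity()       # expose the 512-d penultimate features
    encoder.eval()
    for p in encoder.parameters():   # frozen: the perception module is not trained
        p.requires_grad = False

    policy = nn.Sequential(          # small trainable head on top of the features
        nn.Linear(512, 128), nn.ReLU(),
        nn.Linear(128, 4))           # e.g. 4 discrete navigation actions

    frames = torch.randn(8, 3, 224, 224)       # a batch of observations
    with torch.no_grad():
        features = encoder(frames)              # (8, 512) mid-level representation
    action_logits = policy(features)            # only the policy head gets gradients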

Frequent Words

    "Learning", "Model", "Data" and "Training" are the most frequent words. The top two papers associated with each of the key words are:

We hope you have enjoyed this newsletter! If you have any comments or suggestions, please email ernest@etymo.io or steven@etymo.io.