Nov 13 · Issue 131

Hey folks,

This week in deep learning, we bring you a look at PyTorch at Tesla, news of P100s for Colab users, Chinese AI startups on U.S. blacklists, a peek at deep learning experiments from Adobe, and a smartphone app from Microsoft that administers driving tests.

You may also enjoy training TensorFlow.js models in your browser, a deep dive into the neural tangent kernel, a new system for identifying and handling labeling errors, an open source repository for audio source separation, a review of self-supervised learning, and more.

As always, happy reading and hacking. If you have something you think should be in next week's issue, find us on Twitter: @dl_weekly.

Until next week!

Industry

U.S. Blacklists 28 Chinese Entities Over Abuses in Xinjiang

AI giants SenseTime and Megvii are among them.

 

Google launches TensorBoard.dev and TensorFlow Enterprise

The TensorFlow universe continues to expand with enterprise support and a hosted version of TensorBoard for data sharing.

 

Every user on Google Colab gets access to a P100 GPU

The new cards are 4-6X faster than the previous generation of K80s.

 

[Video] PyTorch at Tesla

Andrej Karpathy, Head of AI at Tesla, talks about how the models behind Autopilot and Smart Summon work.

 

Microsoft is testing a smartphone-based AI system for driving license tests in India

Administering driving tests with a dash-mounted smartphone that runs neural networks to assess driver performance.

 

Adobe's Experimental New Features Promise a Future Where Nothing's Real

A look at some experimental, deep learning powered features from Adobe.

Learning

[Google] Teachable Machine

Train image, sound, and pose recognition models directly in-browser with TensorFlow.js.

 

Deploy Machine Learning Models with Django

A detailed tutorial on serving machine learning models with the popular Python-based Django framework.

 

Understanding the Neural Tangent Kernel

A deep dive into the concepts and mathematics behind the Neural Tangent Kernel and infinitely wide networks.
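For a taste of what the deep dive covers, the headline objects can be sketched in standard notation (ours, not necessarily the post's): the NTK is the Gram matrix of parameter gradients, and in the infinite-width limit it stays fixed during training, so gradient descent on squared loss reduces to closed-form kernel regression.

```latex
% Neural tangent kernel of a network f(x; \theta):
\Theta(x, x') = \nabla_\theta f(x;\theta)^\top \, \nabla_\theta f(x';\theta)

% Infinite-width limit: \Theta is constant in t, and for squared loss with
% learning rate \eta, training data X, labels y, the network output evolves as
f_t(x) = f_0(x) + \Theta(x, X)\,\Theta(X, X)^{-1}
         \left(I - e^{-\eta\,\Theta(X, X)\,t}\right)\left(y - f_0(X)\right)
```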

 

An Introduction to Confident Learning: Finding and Learning with Label Errors in Datasets

An overview of a new Python tool called cleanlab that identifies label errors in classification datasets.
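For the real API, see the cleanlab repo; as a rough numpy illustration of the confident-learning idea it implements (function and variable names here are ours, not cleanlab's): compute a per-class confidence threshold from the model's out-of-sample predicted probabilities, then flag examples whose given label the model confidently contradicts.

```python
import numpy as np

def find_label_issues(labels, pred_probs):
    """Flag likely label errors with the core confident-learning heuristic:
    an example is suspect when some *other* class's predicted probability
    exceeds that class's average self-confidence threshold.

    labels: (n,) int array of given labels.
    pred_probs: (n, n_classes) out-of-sample predicted probabilities.
    """
    n_classes = pred_probs.shape[1]
    # Threshold for class j: mean predicted probability of class j
    # over the examples actually labeled j.
    thresholds = np.array(
        [pred_probs[labels == j, j].mean() for j in range(n_classes)]
    )
    issues = []
    for i, (y, p) in enumerate(zip(labels, pred_probs)):
        confident = [j for j in range(n_classes) if p[j] >= thresholds[j]]
        # Suspect: the model is confidently some class other than the label.
        if confident and y not in confident:
            issues.append(i)
    return issues
```

The full method additionally estimates a joint distribution of given vs. true labels and prunes accordingly, but this is the gist of how per-class thresholds turn soft predictions into error candidates.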

 

Deep learning has a size problem

Shifting from state-of-the-art accuracy to state-of-the-art efficiency (disclosure: I wrote this one).
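One of the simplest efficiency levers in this space is magnitude pruning; a minimal numpy sketch (illustrative only, not the article's code):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights -- a simple way
    to trade a little accuracy for a much smaller, sparser model."""
    k = int(weights.size * sparsity)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned
```

In practice pruning is done gradually during training and paired with quantization or distillation, but the one-shot version above shows the basic operation.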

 

Self-Supervised Representation Learning

A fantastic dive into self-supervised representation learning from colorization to autonomous goal generation.
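One classic pretext task the self-supervised literature covers is rotation prediction (RotNet): rotate each image by a random multiple of 90 degrees and train a classifier to recover the rotation, with the labels generated for free. A small numpy sketch of the data side (names are ours):

```python
import numpy as np

def make_rotation_task(images, seed=0):
    """Build a self-supervised batch: rotate each image by a random
    multiple of 90 degrees; the rotation index is the (free) label."""
    rng = np.random.default_rng(seed)
    xs, ys = [], []
    for img in images:
        k = int(rng.integers(0, 4))  # 0, 90, 180, or 270 degrees
        xs.append(np.rot90(img, k))
        ys.append(k)
    return np.stack(xs), np.array(ys)
```

A network trained to predict `ys` from `xs` must learn object structure and orientation, and its features transfer to downstream tasks without any human labels.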

Datasets

[GitHub] ROC-HCI/UR-FUNNY

This repository presents UR-FUNNY, the first dataset for multimodal humor detection.

Libraries & Code

[GitHub] deezer/spleeter

A source separation library with pretrained models, written in Python and built on TensorFlow.
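Spleeter's U-Net models estimate per-source magnitude spectrograms, which are then turned into soft masks applied to the mixture. A numpy sketch of that masking step (our simplification of the idea, not Spleeter's code):

```python
import numpy as np

def apply_soft_masks(mixture_spec, source_specs):
    """Separate a mixture with ratio (Wiener-style) soft masks.

    mixture_spec: complex STFT of the mix, shape (freq, time).
    source_specs: list of estimated magnitude spectrograms, one per source.
    Returns one masked complex spectrogram per source.
    """
    eps = 1e-10
    total = sum(source_specs) + eps
    # Each source's mask is its share of the total estimated energy;
    # masks sum to ~1, so the masked outputs sum back to the mixture.
    masks = [s / total for s in source_specs]
    # Multiplying the complex mixture keeps the mixture's phase.
    return [m * mixture_spec for m in masks]
```

An inverse STFT of each masked spectrogram then yields the separated waveforms; the hard part Spleeter solves is producing good `source_specs` in the first place.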

 

[GitHub] fchollet/ARC

This repository contains the ARC task data, as well as a browser-based interface for humans to try their hand at solving the tasks manually.

 

[GitHub] suinleelab/attributionpriors

Tools for training explainable models using attribution priors.

Papers & Publications

C3DPO: Canonical 3D Pose Networks for Non-Rigid Structure From Motion

Abstract: We propose C3DPO, a method for extracting 3D models of deformable objects from 2D keypoint annotations in unconstrained images. We do so by learning a deep network that reconstructs a 3D object from a single view at a time, accounting for partial occlusions, and explicitly factoring the effects of viewpoint changes and object deformations. In order to achieve this factorization, we introduce a novel regularization technique. We first show that the factorization is successful if, and only if, there exists a certain canonicalization function of the reconstructed shapes. Then, we learn the canonicalization function together with the reconstruction one, which constrains the result to be consistent. We demonstrate state-of-the-art reconstruction results for methods that do not use ground-truth 3D supervision for a number of benchmarks, including Up3D and PASCAL3D+.
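In rough notation (ours, not necessarily the paper's), the canonicalization condition behind that regularizer can be sketched as:

```latex
% Let \Phi reconstruct a 3D shape X from 2D keypoints. The factorization of
% viewpoint from deformation succeeds iff there exists a canonicalization
% map \Psi that undoes any rigid rotation of a reconstructed shape:
\Psi(R\,X) = X \qquad \forall\, R \in SO(3)

% C3DPO learns \Psi jointly with \Phi by sampling random rotations R and
% penalizing the inconsistency
\mathcal{L}_{\text{canon}} = \big\lVert \Psi(R\,X) - X \big\rVert
```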

 

Dancing to Music

Abstract: Dancing to music is an instinctive move by humans. Learning to model the music-to-dance generation process is, however, a challenging problem. It requires significant efforts to measure the correlation between music and dance as one needs to simultaneously consider multiple aspects, such as style and beat of both music and dance. Additionally, dance is inherently multimodal and various following movements of a pose at any moment are equally likely. In this paper, we propose a synthesis-by-analysis learning framework to generate dance from music. In the analysis phase, we decompose a dance into a series of basic dance units, through which the model learns how to move. In the synthesis phase, the model learns how to compose a dance by organizing multiple basic dancing movements seamlessly according to the input music. Experimental qualitative and quantitative results demonstrate that the proposed method can synthesize realistic, diverse, style-consistent, and beat-matching dances from music.

For more deep learning news, tutorials, code, and discussion, join us on Slack, Twitter, and GitHub.
Copyright © 2019 Deep Learning Weekly, All rights reserved.