Oct 14 · Issue 167

Hey folks,

This week in deep learning we bring you Waymo's robotaxi service restart without human safety drivers, making the most of TinyML, and Stanford researchers' use of AI in battery development.

You may also enjoy this tutorial on implementing RNNs using NumPy, this tutorial on how to automatically pixelate faces on iOS, and more!

As always, happy reading and hacking. If you have something you think should be in next week's issue, find us on Twitter: @dl_weekly.

Until next week!

Industry

Waymo Restarts Robotaxi Service Without Human Safety Drivers

Seven months after the coronavirus pandemic halted Waymo’s autonomous ride service in Phoenix, the company is relaunching public operations there and going fully driverless, dispatching robot minivans with no backup human safety driver to pick up riders who use the Waymo One app.

AI Is Throwing Battery Development Into Overdrive

Battery development has long been hampered by slow experimentation and discovery processes. Machine learning is speeding them up by orders of magnitude.

Microsoft wants AI to be more helpful for people who are blind or use wheelchairs

Researchers are building diverse training data sets that include information from people with low vision and individuals living with conditions like ALS.

U.S. Congress calls for antitrust reforms to limit powers of Amazon, Apple, Facebook, and Google

Members of Congress investigating the activity of Amazon, Apple, Facebook, and Google say antitrust law reform is needed to safeguard democracy and “ensure that our economy remains vibrant and open in the digital age.” The findings come from a document released on 10/6 (PDF) that is the culmination of a 16-month investigation carried out by the antitrust subcommittee, part of the House Judiciary Committee.

Mobile + Edge

Automatically Pixelate Faces on iOS using Native Swift Code for Face Detection

Leveraging the native Swift library to perform face detection in an iOS app.
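Once a face bounding box is detected, the pixelation step itself is straightforward: average the region over coarse tiles and write the averages back. The tutorial does this with native Swift APIs; the sketch below only illustrates the blur step in NumPy, with a hypothetical `pixelate` helper and an assumed H×W×C image layout.

```python
import numpy as np

def pixelate(region: np.ndarray, block: int = 8) -> np.ndarray:
    """Pixelate an HxWxC image region by averaging over block x block tiles."""
    h, w = region.shape[:2]
    # Crop to a multiple of the block size for simplicity
    h2, w2 = h - h % block, w - w % block
    out = region.copy()
    # Split the region into (block x block) tiles and average each tile
    tiles = region[:h2, :w2].reshape(h2 // block, block, w2 // block, block, -1)
    means = tiles.mean(axis=(1, 3), keepdims=True)
    out[:h2, :w2] = np.broadcast_to(means, tiles.shape).reshape(h2, w2, -1)
    return out
```

Applied to the cropped face rectangle, this replaces each tile with its mean color, which is exactly the "mosaic" effect the iOS tutorial produces.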

Google Assistant gets deeper app integrations as voice assistant usage skyrockets

Google Assistant can quickly open, search, and interact with some of the most popular Android apps on the Google Play Store.

Supercharging your Mobile Apps with On-Device GPU Accelerated Machine Learning using the Android NDK & Vulkan Kompute

A hands-on tutorial that teaches you how to leverage your on-device phone GPU for accelerated data processing and machine learning. You will learn how to build a simple Android App using the Native Development Kit (NDK) and the Vulkan Kompute framework.

Nest’s new thermostat has a Soli gesture sensor built in

Nest introduced a new thermostat that leverages Google’s Soli technology to recognize gestures behind its mirror-like display.

Making the most of TinyML for your IoT applications

This article digs into how you can run TinyML models on a real-world microcontroller (MCU) architecture, in this case an Azure Sphere MCU.
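A recurring step in fitting TinyML models into MCU memory budgets is quantizing float32 weights to int8. This is a generic affine-quantization sketch (not the Azure Sphere toolchain specifically); the function names and the symmetric [-128, 127] range are illustrative assumptions.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Affine-quantize a float tensor to int8; returns (q, scale, zero_point)."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 or 1.0          # fall back to 1.0 if w is constant
    zero_point = round(-lo / scale) - 128      # maps lo near -128
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Recover approximate float values from int8 codes."""
    return (q.astype(np.float32) - zero_point) * scale
```

The round trip loses at most about one quantization step per weight, while cutting weight storage 4x versus float32.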

Learning

Object Detection from 9 FPS to 650 FPS in 6 Steps

This article is a practical deep dive into making a specific deep learning model (Nvidia’s SSD300) run fast on a powerful GPU server, but the general principles apply to all GPU programming.
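One of the biggest levers behind jumps like 9 FPS to 650 FPS is batching, which amortizes fixed per-inference overhead (kernel launches, host-device transfers) across many images. A back-of-the-envelope model with made-up timings (the 40 ms / 1.5 ms figures are illustrative, not the article's measurements):

```python
def throughput_fps(batch_size: int, overhead_ms: float = 40.0,
                   per_image_ms: float = 1.5) -> float:
    """Images/sec when a fixed per-launch overhead is amortized over a batch."""
    total_ms = overhead_ms + batch_size * per_image_ms
    return 1000.0 * batch_size / total_ms
```

With these assumed timings, batch size 1 yields about 24 FPS while batch size 64 yields about 470 FPS from the exact same per-image compute, which is why batching is usually step one in any GPU inference optimization.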

The Joy of Neural Painting

Learning Neural Painters fast using PyTorch and Fast.ai.

Implementing Recurrent Neural Network using Numpy

A comprehensive tutorial on how to implement recurrent neural networks using NumPy.
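The core of any such implementation is the vanilla RNN recurrence h_t = tanh(x_t W_xh + h_{t-1} W_hh + b_h). A minimal forward-pass sketch in NumPy (function and parameter names are ours, not necessarily the tutorial's):

```python
import numpy as np

def rnn_forward(x_seq, h0, Wxh, Whh, bh):
    """Vanilla RNN forward pass: h_t = tanh(x_t @ Wxh + h_{t-1} @ Whh + bh).

    x_seq: (T, input_dim) sequence; returns (T, hidden_dim) hidden states.
    """
    h = h0
    hs = []
    for x_t in x_seq:                      # step through time
        h = np.tanh(x_t @ Wxh + h @ Whh + bh)
        hs.append(h)
    return np.stack(hs)
```

Training then backpropagates through this loop in reverse time order (BPTT), which the full tutorial walks through.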

Deep Learning vs Puzzle Games

Is deep learning better suited to solving Flow Free than good old brute force techniques?

Libraries & Code

[GitHub] nicolas-chaulet/torch-points3d

A PyTorch framework for deep learning on point clouds.

[GitHub] facebookresearch/BLINK

BLINK is a Python entity-linking library that uses Wikipedia as the target knowledge base.

Datasets

[GitHub] esdurmus/Wikilingua

Multilingual abstractive summarization dataset extracted from WikiHow.

Papers & Publications

Learning the Pareto Front with Hypernetworks

Abstract: Multi-objective optimization problems are prevalent in machine learning. These problems have a set of optimal solutions, called the Pareto front, where each point on the front represents a different trade-off between possibly conflicting objectives. Recent optimization algorithms can target a specific desired ray in loss space, but still face two grave limitations: (i) A separate model has to be learned for each point on the front; and (ii) The exact trade-off must be known prior to the optimization process. Here, we tackle the problem of learning the entire Pareto front, with the capability of selecting a desired operating point on the front after training. We call this new setup Pareto-Front Learning (PFL). We describe an approach to PFL implemented using HyperNetworks, which we term Pareto HyperNetworks (PHNs). A PHN learns the entire Pareto front simultaneously using a single hypernetwork, which receives as input a desired preference vector and returns a Pareto-optimal model whose loss vector is in the desired ray. The unified model is runtime efficient compared to training multiple models, and generalizes to new operating points not used during training. We evaluate our method on a wide set of problems, from multi-task learning, through fairness, to image segmentation with auxiliaries. PHNs learn the full Pareto front in roughly the same time as learning a single point on the front, and also reach a better solution set. PFL opens the door to new applications where models are selected based on preferences that are only available at run time.
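The interface the abstract describes can be sketched in a few lines: a hypernetwork maps a preference vector to target-model weights, and training minimizes a preference-weighted (linearly scalarized) sum of the per-objective losses. The paper uses richer hypernetworks and scalarization schemes; this NumPy sketch with tiny made-up dimensions only illustrates the setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypernetwork parameters: map a 2-d preference ray r to the flattened
# weights of a tiny linear target model (hypothetical sizes, for illustration)
IN_DIM, OUT_DIM = 4, 2
H = rng.normal(scale=0.1, size=(2, IN_DIM * OUT_DIM))

def target_weights(r: np.ndarray) -> np.ndarray:
    """Hypernetwork forward: preference vector -> target-model weight matrix."""
    return (r @ H).reshape(IN_DIM, OUT_DIM)

def scalarized_loss(r: np.ndarray, x: np.ndarray, y: np.ndarray) -> float:
    """Preference-weighted sum of per-objective MSE losses (linear scalarization)."""
    pred = x @ target_weights(r)
    per_objective = ((pred - y) ** 2).mean(axis=0)  # one loss per objective
    return float(r @ per_objective)
```

Training samples a fresh preference vector r from the simplex at each step and descends on `scalarized_loss`; at test time, choosing any r yields the weights for that trade-off without retraining, which is the "select an operating point after training" capability the paper emphasizes.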

What Can We Learn from Collective Human Opinions on Natural Language Inference Data?

Abstract: Despite the subjective nature of many NLP tasks, most NLU evaluations have focused on using the majority label with presumably high agreement as the ground truth. Less attention has been paid to the distribution of human opinions. We collect ChaosNLI, a dataset with a total of 464,500 annotations to study Collective HumAn OpinionS in oft-used NLI evaluation sets. This dataset is created by collecting 100 annotations per example for 3,113 examples in SNLI and MNLI and 1,532 examples in Abductive-NLI. Analysis reveals that: (1) high human disagreement exists in a noticeable amount of examples in these datasets; (2) the state-of-the-art models lack the ability to recover the distribution over human labels; (3) models achieve near-perfect accuracy on the subset of data with a high level of human agreement, whereas they can barely beat a random guess on the data with low levels of human agreement, which compose most of the common errors made by state-of-the-art models on the evaluation sets. This questions the validity of improving model performance on old metrics for the low-agreement part of evaluation datasets. Hence, we argue for a detailed examination of human agreement in future data collection efforts, and evaluating model outputs against the distribution over collective human opinions. The ChaosNLI dataset and experimental scripts are available at this https URL.
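Evaluating a model against the distribution over human opinions, rather than the majority label, amounts to comparing two label distributions. One standard choice for this is the Jensen-Shannon divergence, sketched below (the divergence choice and the example distributions are our illustration, not necessarily the paper's exact metric):

```python
import numpy as np

def js_divergence(p, q, eps: float = 1e-12) -> float:
    """Jensen-Shannon divergence between two label distributions (base 2, in bits)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()           # renormalize after smoothing
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log2(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

For example, if 100 annotators split 55/30/15 over entailment/neutral/contradiction but a model outputs a near-one-hot softmax like [0.98, 0.01, 0.01], the divergence is large even though the argmax matches the majority label, which is precisely the failure mode ChaosNLI is built to surface.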

Curated by Matt Moellman

For more deep learning news, tutorials, code, and discussion, join us on Slack, Twitter, and GitHub.
Copyright © 2020 Deep Learning Weekly, All rights reserved.