Modern neural network training relies on piecewise (sub-)differentiable functions so that backpropagation can be used to update model parameters. In this work, we introduce a method that allows simple non-differentiable functions at intermediate layers of deep neural networks. We do so by training with a differentiable approximation bridge (DAB) neural network, which approximates the non-differentiable forward function and provides gradient updates during backpropagation. We present strong empirical results from over 600 experiments in four different domains: unsupervised (image) representation learning, variational (image) density estimation, image classification, and sequence sorting, demonstrating that the proposed method improves state-of-the-art performance. Training with DAB-aided discrete non-differentiable functions improves image reconstruction quality and posterior linear separability by 10% over the Gumbel-Softmax relaxed estimator, and yields a 9% improvement in the test variational lower bound compared to the state-of-the-art RELAX discrete estimator. We also observe an accuracy improvement of 77% in neural sequence sorting and a 25% improvement over the straight-through estimator in an image classification setting. The DAB network is not used at inference time, and the approach expands the class of functions that are usable in neural networks.
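The core idea, as described above, is that the exact non-differentiable function is applied on the forward pass while a learned approximation supplies gradients on the backward pass, and the approximation network is discarded at inference. The following is a minimal sketch of that mechanism, assuming PyTorch; the hard sign function standing in for the non-differentiable layer, the MLP architecture, and all names (`hard_fn`, `DAB`, `DABBridge`) are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch only: a straight-through-style substitution where the forward value
# comes from a non-differentiable op and the gradient comes from a small
# learned approximator (the "DAB"). Hyperparameters and names are assumptions.
import torch
import torch.nn as nn

def hard_fn(x):
    """Example non-differentiable forward op (hard sign)."""
    return torch.sign(x)

class DAB(nn.Module):
    """Small network trained to approximate hard_fn and provide gradients."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, x):
        return self.net(x)

class DABBridge(nn.Module):
    """Forward pass uses hard_fn; backward pass flows through the DAB."""
    def __init__(self, dim):
        super().__init__()
        self.dab = DAB(dim)

    def forward(self, x):
        hard = hard_fn(x)      # exact, non-differentiable output
        soft = self.dab(x)     # differentiable approximation
        # Value of `hard` on the forward pass, gradient of `soft` on backward.
        out = hard.detach() + soft - soft.detach()
        # Auxiliary regression loss that fits the DAB to the hard function.
        dab_loss = nn.functional.mse_loss(soft, hard.detach())
        return out, dab_loss
```

In this sketch, `dab_loss` would be added to the task loss during training so the approximator tracks the hard function; at inference the DAB is dropped and `hard_fn` is applied directly, consistent with the statement that the DAB network is not used for inference.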

Related readings and updates.

An On-device Deep Neural Network for Face Detection

Apple started using deep learning for face detection in iOS 10. With the release of the Vision framework, developers can now use this technology and many other computer vision algorithms in their apps. We faced significant challenges in developing the framework so that we could preserve user privacy and run efficiently on-device. This article discusses these challenges and describes the face detection algorithm.


Improving the Realism of Synthetic Images

Most successful examples of neural nets today are trained with supervision. However, to achieve high accuracy, the training sets need to be large, diverse, and accurately annotated, which is costly. An alternative to labeling huge amounts of data is to use synthetic images from a simulator. This is cheap because there is no labeling cost, but the synthetic images may not be realistic enough, resulting in poor generalization on real test images. To help close this performance gap, we've developed a method for refining synthetic images to make them look more realistic. We show that training models on these refined images leads to significant improvements in accuracy on various machine learning tasks.
