• Making sense of NIPS 2017
    Billion dollar investments. Top-tier scientists. Flo Rida. NIPS 2017 was a confusing, absurd, and inspirational roller coaster ride. Let's try to understand what happened.
  • Visualizing and Understanding Atari Agents
    Deep RL agents are effective at maximizing rewards but it's often unclear what strategies they use to do so. I'll talk about a paper I just finished, aimed at solving this problem.
  • Parameter bloat
    Do we REALLY need over 100,000 free parameters to build a good MNIST classifier? It turns out that we can eliminate 80-90% of them.
  • Taming wave functions with neural networks
    The wave function is essential to most calculations in quantum mechanics, and yet it's a difficult beast to tame. Can neural networks help?
  • Differentiable memory and the brain
    DeepMind's Differentiable Neural Computer (DNC) represents the state of the art in differentiable memory models. I introduce an analogy between the DNC and human memory, then discuss where it breaks down.
  • Learning the Enigma with Recurrent Neural Networks
    Recurrent Neural Networks (RNNs) are Turing-complete. In other words, given the right weights, they can simulate any Turing machine. As a tip of the hat to Alan Turing, let's see if we can use them to learn the Nazi Enigma cipher.
  • A bird's eye view of synthetic gradients
    Synthetic gradients achieve the perfect balance of crazy and brilliant. In a 100-line Gist I'll introduce this exotic technique and use it to train a neural network.
  • The art of regularization
    Regularization seems fairly insignificant at first glance, but it has a huge impact on deep models. I'll use a one-layer neural network trained on the MNIST dataset to give an intuition for how common regularization techniques affect learning.
  • Scribe: realistic handwriting with TensorFlow
    In this post, I will demonstrate the power of deep learning by using it to generate human-like handwriting. This work is based on Generating Sequences With Recurrent Neural Networks by Alex Graves.
  • What is deep learning?
    After being obsessed with this field for more than a year, I should have a concise and satisfying answer. Strangely, I have three.