Title
1. Deep Learning with Bayesian Principles
2. Imitation Learning and its Application to Natural Language Generation
3. Human Behavior Modeling with Machine Learning: Opportunities and Challenges
4. Coffee Break
5. Machine Learning for Computational Biology and Health
6. Efficient Processing of Deep Neural Network: from Algorithms to Hardware Architectures
7. Interpretable Comparison of Distributions and Models
8. Lunch Break on Your Own
9. Representation Learning and Fairness
10. Synthetic Control
11. Reinforcement Learning: Past, Present, and Future Perspectives
12. Break
13. Opening Remarks
14. How to Know
15. Opening Reception
16. Veridical Data Science
17. Coffee Break
18. Kernel Instrumental Variable Regression
19. Uniform convergence may be unable to explain generalization in deep learning
20. Logarithmic Regret for Online Control
21. Updates of Equilibrium Prop Match Gradients of Backprop Through Time in an RNN with Static Input
22. Legendre Memory Units: Continuous-Time Representation in Recurrent Neural Networks
23. Machine Learning Estimation of Heterogeneous Treatment Effects with Instruments
Attending
Type
Speaker
Description
DateTime Start
DateTime End
Location
Link
Tutorial
Mohammad Emtiyaz Khan
Deep learning and Bayesian learning are considered two entirely different fields often used in complementary settings. It is clear that combining ideas from the two fields would be beneficial, but how can we achieve this given their fundamental differences? This tutorial will introduce modern Bayesian principles to bridge this gap. Using these principles, we can derive a range of learning algorithms as special cases, e.g., from classical algorithms such as linear regression and forward-backward…
12/9/2019
8:30am
12/9/2019
10:30am
West Exhibition Hall A
https://www.nips.cc/Conferences/2019/Schedule?showEvent=13205
Tutorial
Hal Daumé III
Kyunghyun Cho
Imitation learning is a learning paradigm that interpolates between reinforcement learning on one extreme and supervised learning on the other. In the specific case of generating structured outputs--as in natural language generation--imitation learning allows us to train generation policies with neither strong supervision on the detailed generation procedure (as would be required in supervised learning) nor only a sparse reward signal (as in reinforcement learning). Imitation learning accom…
12/9/2019
8:30am
12/9/2019
10:30am
West Exhibition Hall C + B3
https://www.nips.cc/Conferences/2019/Schedule?showEvent=13209
Tutorial
Albert Ali Salah
Nuria M Oliver
Human behavior is complex, multi-level, multimodal, and culturally and contextually shaped. Computer analysis of human behavior across its multiple scales and settings leads to a steady influx of new applications in diverse domains, including human-computer interaction, affective computing, social signal processing and computational social sciences, autonomous systems, smart healthcare, customer behavior analysis, urban computing, and AI for social good. In this tutorial, we will share a proposed taxonomy…
12/9/2019
8:30am
12/9/2019
10:30am
West Ballroom A+B
https://www.nips.cc/Conferences/2019/Schedule?showEvent=13207
Break
12/9/2019
10:30am
12/9/2019
11:15am
None
https://www.nips.cc/Conferences/2019/Schedule?showEvent=14666
Tutorial
Barbara Engelhardt
Anna Goldenberg
Questions in biology and medicine pose big challenges to existing ML methods. Creating ML methods to address these questions may positively impact all of us as patients, as scientists, and as human beings. In this tutorial, we will cover some of the major areas of current biomedical research, including genetics, the microbiome, clinical data, imaging, and drug design. We will focus on progress to date at the intersection of biology, health, and ML. We will also discuss challenges and…
12/9/2019
11:15am
12/9/2019
1:15pm
West Ballroom A+B
https://www.nips.cc/Conferences/2019/Schedule?showEvent=13210
Tutorial
Vivienne Sze
This tutorial describes methods for efficient processing of deep neural networks (DNNs), which are used in many AI applications including computer vision, speech recognition, and robotics. While DNNs deliver best-in-class accuracy and quality of results, this comes at the cost of high computational complexity. Accordingly, designing efficient algorithms and hardware architectures for deep neural networks is an important step toward enabling the wide deployment of DNNs in AI systems (e.g., …)
12/9/2019
11:15am
12/9/2019
1:15pm
West Exhibition Hall C + B3
https://www.nips.cc/Conferences/2019/Schedule?showEvent=13206
Tutorial
Arthur Gretton
Dougal J Sutherland
Wittawat Jitkrittum
Modern machine learning has seen the development of models of increasing complexity for high-dimensional real-world data, such as documents and images. Some of these models are implicit, meaning they generate samples without specifying a probability distribution function (e.g. GANs), and some are explicit, specifying a distribution function – one with a potentially quite complex structure that may not admit efficient sampling or normalization. This tutorial will provide modern nonparametric tools…
12/9/2019
11:15am
12/9/2019
1:15pm
West Exhibition Hall A
https://www.nips.cc/Conferences/2019/Schedule?showEvent=13208
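One standard nonparametric tool in this area is the kernel maximum mean discrepancy (MMD). The tutorial's scope is broader, but as a rough sketch of the idea, here is the unbiased squared-MMD estimator with a Gaussian kernel on toy samples (the function name, kernel bandwidth, and all data below are illustrative, not from the tutorial materials):

```python
import numpy as np

def mmd2_unbiased(X, Y, sigma=1.0):
    """Unbiased estimate of squared MMD between samples X and Y, Gaussian kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    n, m = len(X), len(Y)
    np.fill_diagonal(Kxx, 0)  # unbiased estimator drops the diagonal terms
    np.fill_diagonal(Kyy, 0)
    return Kxx.sum() / (n * (n - 1)) + Kyy.sum() / (m * (m - 1)) - 2 * Kxy.mean()

rng = np.random.default_rng(0)
same = mmd2_unbiased(rng.normal(size=(500, 2)), rng.normal(size=(500, 2)))
shift = mmd2_unbiased(rng.normal(size=(500, 2)), rng.normal(1.0, 1.0, size=(500, 2)))
print(same, shift)  # near zero for same distribution, clearly positive under shift
```

The estimate hovers around zero when both samples come from the same distribution and grows when they differ, which is what makes MMD usable as a two-sample test statistic.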
Break
12/9/2019
1:15pm
12/9/2019
2:45pm
None
https://www.nips.cc/Conferences/2019/Schedule?showEvent=14667
Tutorial
Sanmi Koyejo
Moustapha Cisse
It is increasingly evident that widely deployed machine learning models can lead to discriminatory outcomes and can exacerbate disparities in the training data. With the accelerating adoption of machine learning for real-world decision-making tasks, issues of bias and fairness in machine learning must be addressed. Our motivating thesis is that, among a variety of emerging approaches, representation learning provides a unique toolset for evaluating and potentially mitigating unfairness. This tutorial…
12/9/2019
2:45pm
12/9/2019
4:45pm
West Exhibition Hall A
https://www.nips.cc/Conferences/2019/Schedule?showEvent=13212
Tutorial
Devavrat Shah
Vishal Misra
Alberto Abadie
The synthetic control method, introduced in Abadie and Gardeazabal (2003), has emerged as a popular empirical methodology for estimating causal effects from observational data when the “gold standard” of a randomized controlled trial is not feasible. In a recent survey of causal inference and program evaluation methods in economics, Athey and Imbens (2015) describe the synthetic control method as “arguably the most important innovation in the evaluation literature in the last fifteen years”…
12/9/2019
2:45pm
12/9/2019
4:45pm
West Ballroom A+B
https://www.nips.cc/Conferences/2019/Schedule?showEvent=13213
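The core of the synthetic control method described above is fitting convex weights over untreated "donor" units on pre-treatment data, then using the weighted donors as a counterfactual after treatment. A minimal sketch on simulated panel data (the panel, weights, and the +5 effect are all invented for illustration):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T_pre, T_post, n_donors = 20, 10, 5

# Simulated panel: donor outcomes are random walks; the treated unit is a
# convex mix of donors plus noise, with a +5.0 effect after treatment begins.
Y0 = rng.normal(size=(T_pre + T_post, n_donors)).cumsum(axis=0)
true_w = np.array([0.5, 0.3, 0.2, 0.0, 0.0])
y1 = Y0 @ true_w + rng.normal(scale=0.1, size=T_pre + T_post)
y1[T_pre:] += 5.0  # treatment effect

# Fit convex weights (nonnegative, summing to 1) on pre-treatment data only.
def loss(w):
    return np.sum((y1[:T_pre] - Y0[:T_pre] @ w) ** 2)

cons = {"type": "eq", "fun": lambda w: w.sum() - 1.0}
res = minimize(loss, np.full(n_donors, 1 / n_donors), method="SLSQP",
               bounds=[(0, 1)] * n_donors, constraints=cons)

# The synthetic control Y0 @ res.x serves as the post-treatment counterfactual.
effect = np.mean(y1[T_pre:] - Y0[T_pre:] @ res.x)
print(effect)  # close to the simulated effect of 5.0
```

The convexity constraints are what distinguish this from ordinary regression on donors: they keep the counterfactual an interpolation of observed units rather than an extrapolation.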
Tutorial
Katja Hofmann
Reinforcement learning (RL) is a systematic approach to learning and decision making that has been developed and studied for decades. Recent combinations of RL with modern deep learning have led to impressive demonstrations of the capabilities of today's RL systems and have fuelled an explosion of interest and research activity. Join this tutorial to learn about the foundations of RL - elegant ideas that give rise to agents that can learn extremely complex behaviors in a wide range of settings…
12/9/2019
2:45pm
12/9/2019
4:45pm
West Exhibition Hall C + B3
https://www.nips.cc/Conferences/2019/Schedule?showEvent=13211
Break
12/9/2019
4:45pm
12/9/2019
5:00pm
None
https://www.nips.cc/Conferences/2019/Schedule?showEvent=14668
Break
12/9/2019
5:00pm
12/9/2019
5:45pm
West Exhibition Hall C + B3
https://www.nips.cc/Conferences/2019/Schedule?showEvent=15899
Invited Talk
Celeste Kidd
This talk will discuss Kidd’s research about how people come to know what they know. The world is a sea of information too vast for any one person to acquire entirely. How then do people navigate the information overload, and how do their decisions shape their knowledge and beliefs? In this talk, Kidd will discuss research from her lab about the core cognitive systems people use to guide their learning about the world—including attention, curiosity, and metacognition (thinking about thinking)…
12/9/2019
5:45pm
12/9/2019
6:35pm
West Exhibition Hall C + B3
https://www.nips.cc/Conferences/2019/Schedule?showEvent=15483
Break
12/9/2019
6:35pm
12/9/2019
8:30pm
East Exhibition A, Ballrooms B C
https://www.nips.cc/Conferences/2019/Schedule?showEvent=15900
Invited Talk
Bin Yu
Data science is a field of evidence seeking that combines data with domain information to generate new knowledge. It addresses key considerations in AI regarding when and where data-driven solutions are reliable and appropriate. Such considerations require involvement from humans who collectively understand the domain and the tools used to collect, process, and model data. Throughout the data science life cycle, these humans make judgment calls to extract information from data. Veridical data science…
12/10/2019
8:30am
12/10/2019
9:20am
West Exhibition Hall C + B3
https://www.nips.cc/Conferences/2019/Schedule?showEvent=15484
Break
12/10/2019
9:20am
12/10/2019
10:05am
West and East
https://www.nips.cc/Conferences/2019/Schedule?showEvent=15901
Oral
Arthur Gretton
Maneesh Sahani
Rahul Singh
Instrumental variable (IV) regression is a strategy for learning causal relationships from observational data. If measurements of input X and output Y are confounded, the causal relationship can nonetheless be identified if an instrumental variable Z is available that influences X directly but is conditionally independent of Y given X and the unmeasured confounder. The classic two-stage least squares algorithm (2SLS) simplifies the estimation problem by modeling all relationships as linear functions…
12/10/2019
10:05am
12/10/2019
10:20am
West Ballrooms A + B
https://www.nips.cc/Conferences/2019/Schedule?showEvent=15676
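The classic 2SLS procedure that the abstract above takes as its starting point can be sketched on toy data (the coefficients and simulated data below are invented for illustration, not from the paper): regress the treatment on the instrument, then regress the outcome on the fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
u = rng.normal(size=n)                      # unmeasured confounder
z = rng.normal(size=n)                      # instrument: moves x, not y directly
x = 0.8 * z + u + rng.normal(size=n)
y = 2.0 * x + 3.0 * u + rng.normal(size=n)  # true causal effect of x on y is 2.0

# Naive OLS of y on x is biased, because u drives both x and y.
X = np.column_stack([np.ones(n), x])
naive = np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: regress x on z; keep the fitted values (the part of x moved by z).
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

# Stage 2: regress y on the fitted values.
X_hat = np.column_stack([np.ones(n), x_hat])
beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]

print(naive[1], beta[1])  # naive slope is inflated; 2SLS slope is close to 2.0
```

The kernel IV method in the paper replaces these two linear regressions with nonparametric (kernel ridge) regressions; the sketch above only shows the linear baseline it generalizes.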
Oral
J. Zico Kolter
Vaishnavh Nagarajan
Aimed at explaining the surprisingly good generalization behavior of overparameterized deep networks, recent works have developed a variety of generalization bounds for deep learning, all based on the fundamental learning-theoretic technique of uniform convergence. While it is well known that many of these existing bounds are numerically large, through numerous experiments we bring to light a more concerning aspect of these bounds: in practice, these bounds can increase with the training…
12/10/2019
10:05am
12/10/2019
10:20am
West Exhibition Hall C + B3
https://www.nips.cc/Conferences/2019/Schedule?showEvent=15844
Oral
Karan Singh
Elad Hazan
Naman Agarwal
We study optimal regret bounds for control in linear dynamical systems under adversarially changing strongly convex cost functions, given knowledge of the transition dynamics. This includes several well-studied and influential frameworks, such as the Kalman filter and the linear quadratic regulator. State-of-the-art methods achieve regret that scales as T^0.5, where T is the time horizon. We show that the optimal regret in this fundamental setting can be significantly smaller, scaling as polylog(T)…
12/10/2019
10:05am
12/10/2019
10:20am
West Exhibition Hall A
https://www.nips.cc/Conferences/2019/Schedule?showEvent=15788
Oral
Julie Grollier
Damien Querlioz
Yoshua Bengio
Benjamin Scellier
Maxence Ernoult
Equilibrium Propagation (EP) is a biologically inspired learning algorithm for convergent recurrent neural networks, i.e. RNNs that are fed a static input x and settle to a steady state. Training convergent RNNs consists in adjusting the weights until the steady state of the output neurons coincides with a target y. Convergent RNNs can also be trained with the more conventional Backpropagation Through Time (BPTT) algorithm. In its original formulation, EP was described in the case of real-time neuronal…
12/10/2019
10:05am
12/10/2019
10:20am
West Ballroom C
https://www.nips.cc/Conferences/2019/Schedule?showEvent=15732
Spotlight
Chris Eliasmith
Ivana Kajić
Aaron Voelker
We propose a novel memory cell for recurrent neural networks that dynamically maintains information across long windows of time using relatively few resources. The Legendre Memory Unit (LMU) is mathematically derived to orthogonalize its continuous-time history -- doing so by solving d coupled ordinary differential equations (ODEs) whose phase space linearly maps onto sliding windows of time via the Legendre polynomials up to degree d - 1. Backpropagation across LMUs outperforms equivalently-sized…
12/10/2019
10:20am
12/10/2019
10:25am
West Ballroom C
https://www.nips.cc/Conferences/2019/Schedule?showEvent=15733
Spotlight
Greg Lewis
Keith Battocchi
Maggie Hei
Miruna Oprescu
Victor Lei
Vasilis Syrgkanis
We consider the estimation of heterogeneous treatment effects with arbitrary machine learning methods in the presence of unobserved confounders, with the aid of a valid instrument. Such settings arise in A/B tests with an intent-to-treat structure, where the experimenter randomizes which users receive a recommendation to take an action, and we are interested in the effect of the downstream action. We develop a statistical learning approach to the estimation of heterogeneous effects, reducing…
12/10/2019
10:20am
12/10/2019
10:25am
West Ballrooms A + B
https://www.nips.cc/Conferences/2019/Schedule?showEvent=15677
