Machine Learning Foundations for Natural Systems

Summer Notes:

We start with Kernel Principal Components: review material and codes, and prepare the chapter and examples.

  1. Introduction

  2. For ML to be useful
  3. A Probabilistic Perspective

  4. Localization in Learning

  5. Gaussian Graphical Models
  6. Kernel Principal Components Part 1
  7. Kernel Principal Components Part 2
  8. Perceptrons, Associative Memories, Hopfield Networks
  9. Recurrent and Feedforward Networks
  10. Backpropagation and Multistage Two Point Boundary Value Problems
  11. Adjoint Machinery for Training Neural Networks
  12. Regularization
    1. Overview
    2. Smoothness
    3. Sparsity
    4. Randomization
    5. Entropy
  13. Optimization Approaches in Learning
  14. Bias and Variance, Invariance and Selectivity, Validation and Model Selection
  15. Convolutional Networks and Anisotropies in Convolution
  16. Recurrent Networks
  17. Time and Sequential Data, LSTM
  18. Restricted Boltzmann Machines 
  19. (Learning to) Optimize Neural Structure
  20. Neural Networks as Stochastic Systems
  21. Neural Networks as Chaotic Systems
  22. Neural Model Reduction for Dynamical Systems
  23. Training for Uncertainty in Neural Networks
    1. Learning with the Ensemble Filter and Smoother
    2. Mutual Information and Entropy functionals

CODES 01
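Since the term opens with Kernel Principal Components, here is a minimal sketch of the technique in plain NumPy. This is an illustrative example, not course code: the RBF kernel, bandwidth, and random dataset are assumptions made for the demonstration.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Pairwise squared Euclidean distances -> Gaussian (RBF) kernel matrix.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_pca(X, n_components=2, gamma=1.0):
    """Project X onto the leading principal components in RBF feature space."""
    K = rbf_kernel(X, gamma)
    n = K.shape[0]
    # Center the kernel matrix in feature space: Kc = K - 1K - K1 + 1K1.
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Symmetric eigendecomposition; eigh returns eigenvalues in ascending order.
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Scale eigenvectors so each projected coordinate carries its eigenvalue's variance.
    return vecs * np.sqrt(np.clip(vals, 0.0, None))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))      # 50 samples, 3 features (synthetic)
Z = kernel_pca(X, n_components=2, gamma=0.5)
print(Z.shape)                    # (50, 2)
```

The centering step is the part most often gotten wrong: PCA requires zero-mean data, but in kernel methods the feature-space mean is only available implicitly, hence the double-centering of the Gram matrix.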

Follow the course progress on Stellar by clicking here

12.S592, 54-1623, F 1000-1200, Spring 2018

Sai Ravela (ravela@mit.edu)

This is the first part of an advanced two-semester sequence in Machine Learning that enables participants to design learning-based approaches for Natural Systems Science.

Natural Systems Science:

We are motivated at present by problems in applications across the breadth of Nature, pertaining to Estimation, Prediction, Parameterization, Characterization, Discovery, and Decision Making. There is a focus on Geophysics this term: Data Assimilation, Autonomous Environmental Mapping, Model Reduction, Uncertainty Quantification, Sensor Planning, Prediction and Predictability, Planning for Risk Mitigation, Convective Super-parameterization, Radiative-Convective Equilibrium, Nonlocal Operators, Teleconnections, Particle Detection and Sizing, Species Characterization, Paleoclimate, Event Detection and Tracking (Volcanoes, Earthquakes, Hurricanes, Tsunamis, Storms, and Transits), Super-resolution/Downscaling, Coherent Structures in Turbulence, Seismic Imaging and Geomorphology, Porous Media, Reservoirs, and Exoplanets.

However, other Engineering, Science, and Finance applications may be included depending on participant interest. While our motivation and reach are broad, each term we will drill down to a few core applications set by participant interest.

Participation

Participants will be required to complete monthly assignments and a collaborative project. This course differs from a typical course: it is hands-on, for all of us. Think of it as a "learnathon": we come together to solve problems and applications better than any one of us could alone. My job is to guide you to the fundamentals; your job is to carry them into your application. Prior experience with Probability, Linear Algebra, Statistics, or Data Science is helpful; otherwise, permission of the instructor is required.

Topics

The entire course is organized in three phases: Preparatory, Core, and Advanced Topics.

Preparatory Topics (both terms): Classification, Regression, Clustering, Density Estimation,  Supervised vs. Unsupervised Learning, Loss functions, Training-Calibration-Validation, Sampling and Variational Inference, Nearest Neighbors, Generalization, Bias-Variance Dilemma, Model Selection, Feature Selection, Reproducing Kernel Hilbert Spaces, Regularization: Smoothness, Sparseness, Entropy and Information.

Core Material Term 1: Non-Parametric Bayesian Inference, Graphical Models, Regression Machines, Kernel Machines, Ensemble Learning, Manifold Learning, Transfer Learning, Recurrent and Deep Learning.

Core Material Term 2: Basis and Dictionary Learning, Gaussian Processes, Stochastic Model Reduction, Abstraction, Segmentation and Grouping, Markov Decision Processes, Reinforcement Learning, Incremental Online and Active Learning, Causal Learning, Information Theoretic Learning.

Advanced Topics (both terms): Uncertain Probabilities and Ambiguity, Curse of Dimensionality, Sparse Data vs. Big Data, Rigged Hilbert Spaces, Parsimony Principles, Generic Chaining and Extremes, Predictability and Learnability, Model-based (Physically-based) Learning. As with applications, we will be selective with the Advanced Topics each term.

 
