Generalized CoAST is an approach that dynamically selects, tunes, combines, and adapts theory, analogs (models), experts (oracles), and data (perception) to maximize information gain when optimizing (or improving the robustness, reliability, and resilience of) inference, control, estimation, learning, decisions, and related tasks. Not only does CoAST infer from these sources better than any single source alone; in reverse, it also dynamically adapts the sources themselves.
CoAST derives from a rich history in Optimal and Sequential Experimental Design, Incremental Online and Active Learning, Active Sampling, Expected Informativeness, Relevance Feedback, and Co-Active Learning. Generalized CoAST extends these classical notions, which typically involve only two-way interactions between specific forms, e.g., Co-Active Learning between humans and machines, or between models and instrumentation (e.g., DDDAS). It promotes the "co-evolution of multiple subsystems, each benefiting from the others."
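To make the source-selection idea concrete, here is a minimal, hypothetical sketch (not the CoAST implementation): for a scalar Gaussian belief, each candidate source is scored by the expected information gain of one observation from it, and the most informative source is queried. The source names and noise variances are illustrative assumptions.

```python
import math

def expected_info_gain(prior_var, noise_var):
    """Expected information gain (nats) from one scalar observation
    y = x + e, e ~ N(0, noise_var), under a prior x ~ N(m, prior_var)."""
    return 0.5 * math.log(1.0 + prior_var / noise_var)

def select_source(prior_var, sources):
    """Greedily pick the source (name -> noise variance) whose
    observation is expected to be most informative."""
    return max(sources, key=lambda s: expected_info_gain(prior_var, sources[s]))

# Hypothetical noise variances for the four kinds of sources:
sources = {"theory": 4.0, "model": 1.0, "expert": 0.25, "data": 0.5}
best = select_source(prior_var=1.0, sources=sources)
print(best)  # "expert": the lowest-noise source yields the largest gain
```

In a full co-active system the loop would also run in reverse, retuning the noisy sources themselves after each query.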
CoAST can be viewed at the systems, application, and methodological levels.
- At a systems level, we apply this approach to the design of Co-Active Observing Systems (CAOS) for CLEPS-related applications.
- At an application level, we apply this approach in the Sloop project to recognize individual animals co-actively with humans. Not only are the humans providing relevance feedback to improve system performance through retraining, but the system is also learning to ask them better questions.
- At a methodological level, we apply this approach as Co-Active Learning with Dynamics and Optimization of Learning Systems (DOLS).
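As a toy illustration of the co-active loop in a Sloop-style system (an assumed uncertainty-sampling strategy, not the project's actual code): the machine asks the human about the candidate match it is least certain of, and the relevance feedback becomes a new training label.

```python
def most_informative(scores):
    """Uncertainty sampling: index of the predicted match probability
    closest to 0.5, i.e., the model's most ambiguous comparison."""
    return min(range(len(scores)), key=lambda i: abs(scores[i] - 0.5))

def coactive_round(scores, oracle, labeled):
    """One round: ask the human about the most ambiguous item and
    record the relevance feedback as a new training label."""
    i = most_informative(scores)
    labeled[i] = oracle(i)  # human answers "same individual?" True/False
    return i

# Hypothetical match scores for five candidate image pairs:
scores = [0.95, 0.52, 0.10, 0.40, 0.99]
labeled = {}
i = coactive_round(scores, oracle=lambda i: i == 1, labeled=labeled)
print(i, labeled)  # 1 {1: True} -- the 0.52-score pair was queried
```

Retraining on the accumulated labels, and updating the query strategy itself, would close the loop in which both human and machine improve.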
Informative Optimization, Inference, and Learning:
- Variational Information-theoretic Inference
- Manifold approaches to Quantify Uncertainty and Recursive Estimation (ManiQURE).
- Informative Neural Ensemble Kalman Learning
- Ensemble Learning in Non-Gaussian Data Assimilation
Statistical Transport & Inference for Coherent Structures (stics.mit.edu):
Turbulent fluids exhibit coherent structures. These features have long been used to describe fluids, yet they are rarely used to solve inference problems, and we have wondered why. This area treats the patterns that emerge in coherent flows as a means to efficient inference. Here are some problems we have solved:
- Data Assimilation by Field Alignment
- Scale Cascaded Alignment
- Quantifying Uncertainty for Coherent Structures using Field Coalescence
- Coherent Random Fields and Principal Appearance and Geometry Modes
- Field alignment for Blending Fields (FABle)
- Adaptive Reduced Order Modeling by Alignment (AROMA)
- Dynamically Deformable Sampling Plans
- Field Alignment System and Testbed: Publicly released, patented codes that exploit pattern information in data assimilation.
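A toy, hypothetical illustration of the alignment idea behind several of the projects above (not the released or patented codes): estimate the displacement between two fields by minimizing a mismatch over candidate shifts, so that position error is corrected rather than smeared into amplitude error.

```python
def best_shift(a, b, max_shift):
    """Integer displacement s minimizing the mean squared mismatch
    between field a and field b shifted by s (a toy position-error fit)."""
    n = len(a)
    def mse(s):
        pairs = [(i, i + s) for i in range(n) if 0 <= i + s < n]
        return sum((a[i] - b[j]) ** 2 for i, j in pairs) / len(pairs)
    return min(range(-max_shift, max_shift + 1), key=mse)

# A forecast feature displaced three grid cells from the observed one:
obs = [0.0] * 20
obs[8] = 1.0
fcst = [0.0] * 20
fcst[11] = 1.0
print(best_shift(obs, fcst, max_shift=5))  # 3
```

Real field alignment solves for a smooth displacement field rather than a single integer shift, but the principle (match positions first, then amplitudes) is the same.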
Dynamics and Optimization of Learning Systems (DOLS)
- Dynamic Data Driven Deep Learning
- Informative Neural Ensemble Kalman Learning
- Informative Learning of Dynamics from Data
- Neural Networks and Geometric Chaos
- Stable Hybrid Dynamical Systems
- Stochastic Dynamics of Learning
- Learnability and Predictability
- Learning sparse human relevance feedback strategies for high recall (sloop.mit.edu)
- Learning Machines for System Dynamics: Parameterization, Reduction, Uncertainty Quantification, Upscaling and Downscaling, Simulation
- Neural models of intensification and decay
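As a minimal sketch of learning dynamics from data, here is a scalar least-squares system identification, far simpler than the methods listed above but illustrative of the fit-a-propagator idea:

```python
def fit_linear_dynamics(xs):
    """Least-squares estimate of a in x_{k+1} = a * x_k from one
    trajectory -- a scalar analog of system identification / DMD."""
    num = sum(xs[k] * xs[k + 1] for k in range(len(xs) - 1))
    den = sum(xs[k] ** 2 for k in range(len(xs) - 1))
    return num / den

# Noiseless trajectory of x_{k+1} = 0.9 * x_k with x_0 = 1:
xs = [0.9 ** k for k in range(10)]
print(fit_linear_dynamics(xs))  # ~0.9, the true dynamics coefficient
```

The projects above replace the scalar coefficient with neural or hybrid operators and must additionally handle noise, chaos, and stability constraints.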
Sensing and Imaging
- Tracking and Localization in Distributed Sensor Networks
- Distributed Sensor Network for Flood Monitoring
- Synthetic Aperture Fluid Imaging and Reconstruction
- Autonomous Remote and In-Situ Measurement of Plume Composition using UAS
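A toy stand-in for localization in a distributed sensor network (a hypothetical grid search, not the deployed system): recover a position from range measurements to known anchor nodes by minimizing the squared range residuals.

```python
import math

def localize(anchors, dists):
    """Grid-search position estimate from range measurements to known
    anchor nodes (toy least-squares trilateration on a 0.1 grid)."""
    def cost(p):
        return sum((math.dist(p, a) - d) ** 2 for a, d in zip(anchors, dists))
    grid = [(x / 10, y / 10) for x in range(101) for y in range(101)]
    return min(grid, key=cost)

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [math.dist(true_pos, a) for a in anchors]  # noise-free ranges
print(localize(anchors, dists))  # (3.0, 4.0)
```

A real network would use noisy ranges, a continuous solver, and distributed (per-node) computation instead of a central grid search.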
Planning and Control
- Markerless Augmented Reality
- Visual Servo Control
- View Point Control
- Ensemble Control
- Retrospective Cost Adaptive Control for Autopilots
- Glider Planning
- Decisions with imprecise probabilities
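Decision making when probabilities are only imprecisely known (as in the item above) has a standard formalization in the Gamma-maximin rule of imprecise probability theory. This sketch assumes a toy credal set given as a finite list of candidate probability vectors:

```python
def maximin_action(utilities, prob_sets):
    """Gamma-maximin: choose the action whose worst-case expected
    utility over a set of candidate probability vectors is largest."""
    def worst_eu(a):
        return min(sum(p * u for p, u in zip(ps, utilities[a]))
                   for ps in prob_sets)
    return max(utilities, key=worst_eu)

# Two actions, two states; the state probability is only known
# to lie somewhere in a (toy) credal set:
utilities = {"cautious": [3.0, 3.0], "bold": [10.0, -5.0]}
prob_sets = [(0.3, 0.7), (0.5, 0.5), (0.7, 0.3)]
print(maximin_action(utilities, prob_sets))  # "cautious"
```

Under a single known probability vector the bold action could win; guarding against the whole credal set flips the decision.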
Natural Scales in Hierarchical Uncertainty Quantification and Recursive Estimation:
- Quantifying Uncertainty for Coherent Structures using Field Coalescence
- Learning to Predict Uncertainty using Dynamic Data Driven Deep Learning
- Quantifying Extreme Rare Event Tails (QuERET)
- Decisions with imprecise probabilities
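Rare-event tail quantification in the spirit of QuERET can be illustrated, in a generic textbook way, by importance sampling: draws are taken from a distribution shifted into the tail and reweighted by the likelihood ratio, so that a probability naive Monte Carlo would almost never hit becomes cheaply estimable.

```python
import math, random

def tail_prob_is(t, n, rng):
    """Importance-sampling estimate of P(X > t) for X ~ N(0, 1),
    drawing from N(t, 1) so samples land in the rare-event region."""
    total = 0.0
    for _ in range(n):
        y = rng.gauss(t, 1.0)
        if y > t:
            # likelihood ratio phi(y) / phi(y - t) = exp(-t*y + t*t/2)
            total += math.exp(-t * y + t * t / 2.0)
    return total / n

rng = random.Random(1)
est = tail_prob_is(4.0, 20000, rng)
# exact value is 0.5 * erfc(4 / sqrt(2)), about 3.17e-5; naive Monte
# Carlo at this sample size would expect fewer than one tail hit
```

The methods listed above target far harder settings (dependent fields, unknown tails), but the variance-reduction principle is the same.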