Learning and Inference with Large-Scale Graphical Models and Gaussian Processes

   October 11, 2013, 11 AM

302-308

Abstract:

Probabilistic Graphical Models (PGMs) promise to play a prominent role in many complex real-world systems. Probabilistic Relational Models (PRMs) scale the representation and learning of PGMs. Answering questions with PRMs enables many current and future applications, such as medical informatics, weather event prediction, financial forecasting, and robot localization. Scaling inference algorithms to large models is a key challenge for scaling up current applications and enabling future ones.

I will present new insights on large-scale PRMs. This talk introduces new algorithms that maintain a compact structure while answering questions, i.e., performing inference, about large, continuous models. The first part of the talk presents an efficient estimation algorithm, the Lifted Relational Kalman Filter (LRKF), for large-scale linear dynamic systems. The new relational Kalman filter scales the exact vanilla Kalman filter from thousands of variables to billions.
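
The abstract does not spell out the update equations, so as a point of reference, here is a minimal sketch of one predict/update step of the vanilla (ground, non-lifted) Kalman filter that the LRKF scales up. The code is illustrative rather than from the talk; the function and variable names are my own.

    import numpy as np

    def kalman_step(mu, Sigma, A, Q, H, R, z):
        """One predict/update cycle of the vanilla Kalman filter.

        mu, Sigma : prior state mean and covariance
        A, Q      : linear transition model and process noise
        H, R      : linear observation model and measurement noise
        z         : observation vector
        """
        # Predict: propagate the estimate through the dynamics.
        mu_pred = A @ mu
        Sigma_pred = A @ Sigma @ A.T + Q

        # Update: correct the prediction with the observation.
        S = H @ Sigma_pred @ H.T + R               # innovation covariance
        K = Sigma_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
        mu_new = mu_pred + K @ (z - H @ mu_pred)
        Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma_pred
        return mu_new, Sigma_new

The matrix operations above cost O(n^3) in the number of state variables, which is what makes the ground filter impractical at the scales quoted; lifted inference exploits symmetries of the relational model to avoid paying that cost per variable.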

In the second part of the talk, I will present a new variational framework, Lifted Relational Variational Inference (LRVI). The variational learning algorithm takes large-scale hybrid (continuous-discrete) PRMs and converts them into close-to-optimal variational models with bounded approximation error. Then, a variational belief propagation algorithm solves inference problems on the compact variational models. Experiments on a real-world groundwater model show the efficiency of the two frameworks (the lifted Kalman filter and the lifted variational models).
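
The abstract leaves the variational construction at a high level. Purely as an illustration of one standard building block for approximating hybrid (continuous-discrete) potentials, the sketch below collapses a discrete mixture of Gaussians into a single Gaussian by moment matching, which minimizes KL(p || q) over Gaussian q. This is not the LRVI algorithm itself; all names here are hypothetical.

    import numpy as np

    def collapse_mixture(weights, means, variances):
        """Moment-match a 1-D Gaussian mixture with a single Gaussian."""
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()                        # normalize mixture weights
        m = np.asarray(means, dtype=float)
        v = np.asarray(variances, dtype=float)
        mean = np.dot(w, m)                    # E[x]
        var = np.dot(w, v + m**2) - mean**2    # E[x^2] - E[x]^2
        return mean, var

    # A discrete switch over two Gaussian modes, summarized as one Gaussian.
    print(collapse_mixture([0.3, 0.7], [0.0, 4.0], [1.0, 2.0]))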