Paper
20 February 2006 Dynamic Bayesian learning by expectation propagation
Tao Wei
Proceedings Volume 6041, ICMIT 2005: Information Systems and Signal Processing; 60411N (2006) https://doi.org/10.1117/12.664342
Event: ICMIT 2005: Mechatronics, MEMS, and Smart Materials, 2005, Chongqing, China
Abstract
For modeling time-series data, it is natural to use directed graphical models, since they can capture the flow of time. If all arcs of a graphical model are directed, both within and between time slices, the model is called a dynamic Bayesian network (DBN). Dynamic Bayesian networks are becoming increasingly important for research and applications in machine learning, artificial intelligence, and signal processing. They have several advantages over other data-analysis methods, including rule bases, neural networks, and decision trees. This paper explores dynamic Bayesian learning over DBNs with a deterministic approximate inference method called expectation propagation (EP). EP, developed in the machine-learning community, is an extension of belief propagation. A crucial step of EP is the recycling of likelihood approximations, which enables further improvement over the extended Kalman smoother. This study examines EP solutions to a nonlinear state-space model and compares their performance with other inference methods, such as the particle filter and the extended Kalman filter.
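The likelihood-recycling idea in the abstract can be illustrated with a toy sketch (this is not the paper's model or code): a 1-D nonlinear state-space model in which EP keeps one Gaussian "site" per observation and, over several sweeps, divides each site out of the current smoothed posterior, refits it against the exact likelihood by moment matching, and folds it back in. The specific model (`sin` observation, noise levels, quadrature order, initialization) is an illustrative assumption.

```python
import numpy as np

# Hypothetical toy model (not from the paper):
#   x_t = x_{t-1} + N(0, q),   y_t = sin(x_t) + N(0, r).
# EP approximates each likelihood p(y_t | x_t) by a Gaussian site and
# "recycles" the sites: each sweep revisits every observation with the
# information from all other time steps already incorporated.

rng = np.random.default_rng(0)
T, q, r = 20, 0.1, 0.05
x = np.cumsum(rng.normal(0.0, np.sqrt(q), T))     # latent trajectory
y = np.sin(x) + rng.normal(0.0, np.sqrt(r), T)    # noisy observations

# Gauss-Hermite (probabilists') nodes for 1-D moment-matching integrals.
nodes, weights = np.polynomial.hermite_e.hermegauss(21)
weights = weights / weights.sum()

# Sites in natural parameters: precision and precision-times-mean.
site_prec = np.zeros(T)
site_pm = np.zeros(T)

def smooth(site_prec, site_pm):
    """Kalman filter + RTS smoother on the linear-Gaussian chain,
    with the EP sites standing in for the true likelihoods."""
    fm, fv = np.zeros(T), np.zeros(T)
    m, v = 0.0, 10.0                      # broad prior on x_0
    for t in range(T):
        if t > 0:
            v += q                        # predict step
        prec = 1.0 / v + site_prec[t]     # fold in site t
        m = (m / v + site_pm[t]) / prec
        v = 1.0 / prec
        fm[t], fv[t] = m, v
    sm, sv = fm.copy(), fv.copy()
    for t in range(T - 2, -1, -1):        # backward (RTS) pass
        g = fv[t] / (fv[t] + q)
        sm[t] = fm[t] + g * (sm[t + 1] - fm[t])
        sv[t] = fv[t] + g * g * (sv[t + 1] - (fv[t] + q))
    return sm, sv

for sweep in range(5):                    # EP sweeps: recycle likelihoods
    sm, sv = smooth(site_prec, site_pm)
    for t in range(T):
        # Cavity: remove site t from the smoothed marginal.
        cav_prec = 1.0 / sv[t] - site_prec[t]
        if cav_prec <= 1e-10:
            continue
        cav_m = (sm[t] / sv[t] - site_pm[t]) / cav_prec
        cav_v = 1.0 / cav_prec
        # Moment-match cavity * exact likelihood by quadrature.
        xs = cav_m + np.sqrt(cav_v) * nodes
        lik = np.exp(-0.5 * (y[t] - np.sin(xs)) ** 2 / r)
        z = np.sum(weights * lik)
        if z < 1e-12:
            continue                      # likelihood underflow: skip
        new_m = np.sum(weights * lik * xs) / z
        new_v = np.sum(weights * lik * xs ** 2) / z - new_m ** 2
        new_site_prec = 1.0 / new_v - cav_prec
        if new_site_prec < 0:
            continue                      # keep old site if non-Gaussian fit
        site_prec[t] = new_site_prec
        site_pm[t] = new_m / new_v - cav_m / cav_v

sm, sv = smooth(site_prec, site_pm)
print("smoothed RMSE:", np.sqrt(np.mean((sm - x) ** 2)))
```

A single forward pass of this scheme behaves like an iterated extended Kalman filter; the repeated sweeps are what the abstract calls likelihood recycling, letting later observations sharpen the Gaussian approximation at earlier time steps.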
© (2006) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Tao Wei "Dynamic Bayesian learning by expectation propagation", Proc. SPIE 6041, ICMIT 2005: Information Systems and Signal Processing, 60411N (20 February 2006); https://doi.org/10.1117/12.664342
KEYWORDS
Signal to noise ratio
Filtering (signal processing)
Systems modeling
Data modeling
Dynamical systems
Particle filters
Machine learning