Tuesday, January 29, 2013

1301.6199 (Ayaka Sakata et al.)

Sample Complexity of Bayesian Optimal Dictionary Learning    [PDF]

Ayaka Sakata, Yoshiyuki Kabashima
We consider a learning problem of identifying a dictionary matrix D (M times N dimension) from a sample set of M dimensional vectors Y = N^{-1/2} DX, where X is a sparse matrix (N times P dimension) in which the density of non-zero entries is 0 &lt; rho &lt; 1. In particular, we focus on the minimum sample size P_c (sample complexity) necessary for perfectly identifying D of the optimal learning scheme when D and X are independently generated from certain distributions. By using the replica method of statistical mechanics, we show that P_c ~ O(N) holds as long as alpha = M/N &gt; rho is satisfied in the limit of N to infinity. Our analysis also implies that the posterior distribution given Y is condensed only at the correct dictionary D when the compression rate alpha is greater than a certain critical value alpha_M(rho). This suggests that belief propagation may allow us to learn D with a low computational complexity using O(N) samples.
View original: http://arxiv.org/abs/1301.6199
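The generative model in the abstract is easy to simulate. Below is a minimal NumPy sketch (all names and parameter values are illustrative, not from the paper) that draws a dictionary D and a sparse coefficient matrix X with non-zero density rho, then forms the observations Y = N^{-1/2} D X:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: alpha = M/N = 0.6 > rho, as in the paper's condition.
N, M, P, rho = 100, 60, 200, 0.2

D = rng.standard_normal((M, N))        # dictionary to be identified
mask = rng.random((N, P)) < rho        # Bernoulli(rho) support of X
X = np.where(mask, rng.standard_normal((N, P)), 0.0)  # sparse coefficients
Y = D @ X / np.sqrt(N)                 # observed M-dimensional sample vectors

print(Y.shape)           # (60, 200): P samples of dimension M
print(np.mean(X != 0))   # empirical density of X, close to rho
```

The paper's claim concerns recovering D from Y alone; the sketch above only sets up the forward model that such a learner would have to invert.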
