Comments on Various Consequences: Gaussian Processes for Machine Learning<br /><br />Joshua Stults, 2015-02-07 08:32:<br />The gpml software is very well documented. I like the way its user documentation (doc/index.html in the download) links directly to all of the source scripts it mentions. That is a great way to introduce the reader to the code.<br /><br />Joshua Stults, 2015-01-24 21:17:<br />Well, I didn't read far enough in Chapter 8: they do mention the Improved Fast Gauss Transform method, but then far too quickly dismiss "iterative methods" from their subsequent comparisons.
<br /><br />I think there are plenty of times when an approximate solution to the whole problem is more useful than an exact solution to a partial problem.<br /><br />Joshua Stults, 2015-01-24 21:04:<br />Here's what's missing from the book: iterative methods that avoid the O(N^3) scaling of direct inversion for the linear system solution:<br /> - Improved Fast Gauss Transform <a href="http://www.umiacs.umd.edu/labs/cvl/pirl/vikas/Software/IFGT/IFGT_code.htm" rel="nofollow">code</a>, <a href="http://www.umiacs.umd.edu/labs/cvl/pirl/vikas/Software/IFGT/IFGT_user_manual.pdf" rel="nofollow">user's manual</a>, <a href="http://www.umiacs.umd.edu/labs/cvl/pirl/vikas/publications/IFGT_slides_lean.pdf" rel="nofollow">slides</a>, <a href="http://www.umiacs.umd.edu/labs/cvl/pirl/vikas/publications/raykar_learning_workshop_2007_slides.pdf" rel="nofollow">slides</a><br /> - <a href="http://stanford.edu/~rezab/nips2013workshop/accepted/preconditioned.pdf" rel="nofollow">Preconditioned Krylov Solvers for Kernel Regression</a><br /><br />The strategy is to compute an approximate solution with an iterative method while also approximating the matrix-vector multiply (O(N) or O(N log N) instead of O(N^2)); preconditioning, of course, helps any method that relies on Krylov subspaces. The interesting thing about the matrix-vector multiply approximation is that it can be done with <i>worse</i> accuracy as the solution progresses, further saving wall time.
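That iterative strategy can be sketched as follows. This is a minimal illustration under assumed setup (squared-exponential kernel, synthetic data, hand-rolled conjugate gradients): the solver only ever touches the kernel matrix through matrix-vector products, which is exactly the hook where an Improved Fast Gauss Transform would replace the exact O(N^2) matvec with an O(N) approximation.

```python
import numpy as np

def gauss_kernel_matvec(X, v, h):
    """Exact O(N^2) Gaussian-kernel matvec, (K v)_i = sum_j exp(-|x_i - x_j|^2 / h^2) v_j.
    An IFGT would approximate this product without forming K."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    return np.exp(-d2 / h ** 2) @ v

def conjugate_gradient(matvec, b, tol=1e-8, max_iter=400):
    """Plain CG for a symmetric positive-definite operator given only as a matvec."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        a = rs / (p @ Ap)
        x += a * p
        r -= a * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Illustrative data (an assumption, not from the references above)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
sigma2 = 0.1  # noise variance; regularizes (K + sigma^2 I) to be SPD

# Solve (K + sigma^2 I) alpha = y using only matvecs
matvec = lambda v: gauss_kernel_matvec(X, v, h=1.0) + sigma2 * v
alpha = conjugate_gradient(matvec, y)
```

A preconditioner would slot in as one extra operator application per CG iteration; the loosened-accuracy trick mentioned above corresponds to relaxing the matvec's error tolerance as the CG residual shrinks.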
<br /><br />Another acceleration approach is to fit only a subset of the data, or to do a direct inversion on a reduced-rank approximation of the matrix (this is covered in <a href="http://www.gaussianprocess.org/gpml/chapters/RW8.pdf" rel="nofollow">Chapter 8</a>):<br /> - <a href="http://jmlr.org/papers/volume6/quinonero-candela05a/quinonero-candela05a.pdf" rel="nofollow">A Unifying View of Sparse Approximate Gaussian Process Regression</a><br /><br />The really cool thing is that all of these acceleration approaches can, in concept, be combined. I haven't found a demonstration that actually combines them all, so if you know of someone who has published on that, please share a link!