A Geometric Perspective on Machine Learning (机器学习的几何观点), LAMDA slides

Slide 1: A Geometric Perspective on Machine Learning
何晓飞 (Xiaofei He), 浙江大学计算机学院 (College of Computer Science, Zhejiang University)

Slide 2: Machine Learning: the problem
- Given information (training data), learn a function f: X → Y.
- X and Y are usually considered as Euclidean spaces.

Slide 3: Manifold Learning: geometric perspective
- The data space may not be a Euclidean space, but a nonlinear manifold.
- Euclidean distance: f is defined on a Euclidean space, and the relevant dimension is the ambient dimension.
- Geodesic distance instead: f is defined on a nonlinear manifold, and the relevant dimension is the manifold dimension.

Slide 4: Manifold Learning: the challenges
- The manifold M is unknown! We have only samples!
- How do we know M is a sphere, a torus, or something else?
- How to compute the distance on M?
- This is unknown: the manifold (a sphere? a torus? or else?) versus this is what we have: sample points.
- Relevant tools: topology, geometry, functional analysis.

Slide 5: Manifold Learning: current solution
- Find a Euclidean embedding, and then perform traditional learning algorithms in the Euclidean space.

Slides 6-8: Simplicity / Simplicity / Simplicity is relative
(Illustrations.)

Slide 9: Manifold-based Dimensionality Reduction
- Given high-dimensional data sampled from a low-dimensional manifold, how to compute a faithful embedding?
- How to find the mapping function f?
- How to efficiently find the projective function?

Slide 10: A Good Mapping Function
- If xi and xj are close to each other, we hope f(xi) and f(xj) preserve the local structure (distance, similarity).
- Build a k-nearest-neighbor graph.
- Minimize an objective function over the graph.
- Different algorithms have different concerns.
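The slide names the objective only by its role; a common graph-based choice, and the one the LPP slides below build on, penalizes maps that separate neighbors. With W the similarity matrix of the k-NN graph (notation assumed here, not defined on the slide), it reduces to a quadratic form in the graph Laplacian:

```latex
\min_f \sum_{i,j} W_{ij}\,\bigl(f(x_i)-f(x_j)\bigr)^2
  \;=\; 2\,\mathbf{f}^{\top} L\,\mathbf{f},
\qquad L = D - W,\quad D_{ii} = \textstyle\sum_j W_{ij},
```

where f = (f(x_1), ..., f(x_m))^T collects the mapped values at the samples.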

Slides 11-14: Locality Preserving Projections
- Principle: if xi and xj are close, then their maps yi and yj are also close.
- Mathematical formulation: minimize the integral of the gradient of f, \( \min_f \int_M \|\nabla f\|^2 \).
- Stokes' theorem links this smoothness functional to the Laplace-Beltrami operator (\( \int_M \|\nabla f\|^2 = \int_M f\,\Delta f \) under suitable boundary conditions), which the graph Laplacian approximates on the samples.
- LPP finds a linear approximation to the nonlinear manifold, while preserving the local geometric structure.
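A minimal sketch of LPP as described above, following the published algorithm; the helper `kneighbors_graph`, the heat-kernel weights, and all parameter values are illustrative choices, not taken from the slides. LPP solves the generalized eigenproblem X^T L X a = λ X^T D X a and keeps the eigenvectors with smallest eigenvalues:

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def lpp(X, n_components=2, n_neighbors=5, t=1.0):
    """Locality Preserving Projections (sketch).

    X: (n_samples, n_features) data matrix.
    Returns a projection matrix A of shape (n_features, n_components).
    """
    # 1. k-NN graph with heat-kernel weights W_ij = exp(-||xi - xj||^2 / t)
    dist = kneighbors_graph(X, n_neighbors, mode="distance", include_self=False)
    W = dist.toarray()
    mask = W > 0
    W[mask] = np.exp(-W[mask] ** 2 / t)
    W = np.maximum(W, W.T)            # symmetrize the graph

    # 2. Degree matrix D and graph Laplacian L = D - W
    D = np.diag(W.sum(axis=1))
    L = D - W

    # 3. Generalized eigenproblem  X^T L X a = lam * X^T D X a
    #    (the slides treat samples as columns; rows here, hence X.T on the left)
    M1 = X.T @ L @ X
    M2 = X.T @ D @ X + 1e-9 * np.eye(X.shape[1])  # jitter for numerical stability
    eigvals, eigvecs = eigh(M1, M2)

    # 4. Keep the eigenvectors with the smallest eigenvalues
    return eigvecs[:, :n_components]

# Usage: Y = X @ lpp(X) maps each sample to its low-dimensional coordinates.
```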

Slide 15: Manifold of Face Images
- Expression (sad to happy), pose (right to left). (Image montage.)

Slide 16: Manifold of Handwritten Digits
- Thickness, slant. (Image montage.)

Slide 17: Active and Semi-Supervised Learning: A Geometric Perspective
- Learning target, training examples, linear regression model.

Slide 18: Generalization Error
- Goal of regression: obtain a learned function that minimizes the generalization error (the expected error for unseen test input points).
- Maximum likelihood estimate.

Slides 19-20: Gauss-Markov Theorem
- For a given x, the expected prediction error decomposes into noise, bias, and variance terms (equation on slide; a reconstruction follows below).
- (Figure: two fitted curves over the same axes, one marked "Good!" and one "Bad!".)
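The decomposition itself appears only as an image in the deck; a standard reconstruction of the expected prediction error at a fixed test point x, assuming y = f(x) + ε with E[ε] = 0 and Var(ε) = σ², is:

```latex
\mathbb{E}\bigl[(y - \hat f(x))^2\bigr]
 = \underbrace{\sigma^2}_{\text{noise}}
 + \underbrace{\bigl(f(x) - \mathbb{E}[\hat f(x)]\bigr)^2}_{\text{bias}^2}
 + \underbrace{\mathbb{E}\bigl[(\hat f(x) - \mathbb{E}[\hat f(x)])^2\bigr]}_{\text{variance}}
```

Under the Gauss-Markov assumptions the least-squares estimator is unbiased, so controlling the error amounts to controlling the variance, which is exactly what the experimental-design criteria on the next slide measure.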

Slide 21: Experimental Design Methods
Three most common scalar measures of the size of the parameter (w) covariance matrix:
- A-optimal design: minimize the trace of Cov(w).
- D-optimal design: minimize the determinant of Cov(w).
- E-optimal design: minimize the maximum eigenvalue of Cov(w).
Disadvantage: these methods fail to take into account unmeasured (unlabeled) data points.
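A small sketch of the three criteria, assuming the ordinary least-squares covariance Cov(w) = σ²(ZZ^T)^{-1} for a design matrix Z whose columns are the selected points; the function and variable names are illustrative:

```python
import numpy as np

def design_criteria(Z, sigma2=1.0):
    """Scalar size measures of Cov(w) = sigma2 * (Z Z^T)^{-1}.

    Z: (d, k) matrix whose columns are the k selected points.
    Assumes Z Z^T is nonsingular (k >= d).
    """
    cov = sigma2 * np.linalg.inv(Z @ Z.T)
    return {
        "A-optimal": np.trace(cov),                  # average variance
        "D-optimal": np.linalg.det(cov),             # volume of the confidence ellipsoid
        "E-optimal": np.linalg.eigvalsh(cov).max(),  # worst-direction variance
    }

# An experimental design picks the k points that minimize one of these scores.
```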

Slides 22-24: Manifold Regularization: Semi-Supervised Setting
- Measured (labeled) points: discriminant structure.
- Unmeasured (unlabeled) points: geometrical structure.
- The slides compare three labeling strategies on the same data: random labeling, active learning, and active learning + semi-supervised learning.

Slides 25-31: Unlabeled Data to Estimate Geometry
- Measured (labeled) points: discriminant structure.
- Unmeasured (unlabeled) points: geometrical structure.
- Compute the nearest-neighbor graph G over all points (built up step by step across the slides); a sketch follows below.
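A minimal sketch of the graph-construction step, assuming all m points (labeled and unlabeled) are rows of X; the binary connectivity weights are an assumption, since the slides only name the graph:

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def knn_laplacian(X, n_neighbors=5):
    """Nearest-neighbor graph G over all points and its Laplacian L = D - W."""
    W = kneighbors_graph(X, n_neighbors, mode="connectivity").toarray()
    W = np.maximum(W, W.T)          # make the graph undirected
    L = np.diag(W.sum(axis=1)) - W  # graph Laplacian
    return W, L
```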

Slide 32: Laplacian Regularized Least Squares (Belkin and Niyogi, 2006)
- Linear objective function:
  \( J(w) = \sum_{i=1}^{k} (w^\top z_i - y_i)^2 + \lambda_1\, w^\top X L X^\top w + \lambda_2 \|w\|^2 \)
- Solution:
  \( \hat w = (Z Z^\top + \lambda_1 X L X^\top + \lambda_2 I)^{-1} Z\, y \),
  where Z = (z_1, ..., z_k) holds the labeled points and X = (x_1, ..., x_m) all points; a sketch follows below.
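A direct transcription of that closed-form solution, assuming labeled points are the columns of Z (a subset of X) with labels y, and L comes from the graph above; the λ values are illustrative:

```python
import numpy as np

def laprls_linear(Z, y, X, L, lam1=0.1, lam2=0.01):
    """Closed-form linear LapRLS estimate.

    Z: (d, k) labeled points (columns), y: (k,) labels,
    X: (d, m) all points (columns), L: (m, m) graph Laplacian.
    """
    d = Z.shape[0]
    H = Z @ Z.T + lam1 * (X @ L @ X.T) + lam2 * np.eye(d)
    return np.linalg.solve(H, Z @ y)   # w_hat = H^{-1} Z y

# Prediction at a new point x: f(x) = w_hat @ x
```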

Slide 33: Active Learning
- How to find the most representative points on the manifold?

Slide 34: Active Learning
- Objective: guide the selection of the subset of data points that gives the most amount of information.
- Experimental design: select samples to label.
- Share the same objective function as Laplacian Regularized Least Squares: simultaneously minimize the least-squares error on the measured samples and preserve the local geometrical structure of the data space.

Slide 35: Analysis of Bias and Variance
- Model: \( y = Z^\top w + \epsilon \), \( \mathrm{Cov}(\epsilon) = \sigma^2 I \).
- Estimator: \( \hat w = H^{-1} Z y \) with \( H = Z Z^\top + \lambda_1 X L X^\top + \lambda_2 I \).
- Covariance: \( \mathrm{Cov}(\hat w) = \sigma^2 H^{-1} Z Z^\top H^{-1} = \sigma^2 H^{-1} (H - \lambda_1 X L X^\top - \lambda_2 I) H^{-1} \).
- In order to make the estimator as stable as possible, the size of the covariance matrix should be as small as possible.
- D-optimality: minimize the determinant of the covariance matrix.

Slide 36: Manifold Regularized Experimental Design (the algorithm)
- Select the first data point \( z_1 \) from \( \{x_1, \dots, x_m\} \) such that \( z^\top (\lambda_1 X L X^\top + \lambda_2 I)^{-1} z \) is maximized, and set \( H_1 = z_1 z_1^\top + \lambda_1 X L X^\top + \lambda_2 I \).
- Suppose k points have been selected; choose the (k+1)-th point as \( z_{k+1} = \arg\max_{z} z^\top H_k^{-1} z \), the candidates again drawn from \( \{x_1, \dots, x_m\} \).
- Update \( H_{k+1}^{-1} = H_k^{-1} - \dfrac{H_k^{-1} z_{k+1} z_{k+1}^\top H_k^{-1}}{1 + z_{k+1}^\top H_k^{-1} z_{k+1}} \) (a Sherman-Morrison rank-one update); a sketch follows below.
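A runnable sketch of this greedy selection, reusing H from the LapRLS slide; the candidate set (all columns of X) and the budget parameter are illustrative:

```python
import numpy as np

def manifold_regularized_design(X, L, budget, lam1=0.1, lam2=0.01):
    """Greedy point selection z_{k+1} = argmax_z z^T H_k^{-1} z.

    X: (d, m) candidate points (columns), L: (m, m) graph Laplacian.
    Returns the indices of the selected points.
    """
    d, m = X.shape
    H_inv = np.linalg.inv(lam1 * (X @ L @ X.T) + lam2 * np.eye(d))
    selected = []
    for _ in range(budget):
        # score every candidate: z^T H_k^{-1} z
        scores = np.einsum("ij,jk,ki->i", X.T, H_inv, X)
        scores[selected] = -np.inf          # never pick a point twice
        j = int(np.argmax(scores))
        selected.append(j)
        z = X[:, j:j + 1]
        # rank-one (Sherman-Morrison) update of H^{-1}
        Hz = H_inv @ z
        H_inv -= (Hz @ Hz.T) / (1.0 + float(z.T @ Hz))
    return selected
```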

Slide 37: Nonlinear Generalization in RKHS
- Consider the feature space F induced by some nonlinear mapping \( \phi \), with \( \langle \phi(x_i), \phi(x_j) \rangle = K(x_i, x_j) \).
- \( K(\cdot,\cdot) \): a positive semi-definite kernel function.
- Regression model in RKHS: \( y = \langle \phi(x), w \rangle \), with \( w = \sum_{i=1}^{m} \alpha_i\, \phi(x_i) \).
- Objective function in RKHS:
  \( J_{\mathrm{LapRLS}}(w) = \sum_{i=1}^{k} \bigl(\langle \phi(z_i), w\rangle - y_i\bigr)^2 + \frac{\lambda_1}{2} \sum_{i,j=1}^{m} S_{ij} \bigl(\langle \phi(x_i), w\rangle - \langle \phi(x_j), w\rangle\bigr)^2 + \lambda_2 \|w\|_F^2 \)
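Substituting \( w = \sum_i \alpha_i \phi(x_i) \) into this objective turns every inner product into a kernel evaluation. Writing \( K_{XX} \) for the kernel matrix over all m points and \( K_{ZX} = K_{XZ}^\top \) for its rows at the labeled points (notation assumed here), the objective becomes a finite-dimensional problem in α; this reconstruction is consistent with the covariance on the next slide:

```latex
J(\alpha) = \|K_{ZX}\,\alpha - y\|^2
          + \lambda_1\, \alpha^{\top} K_{XX} L K_{XX}\, \alpha
          + \lambda_2\, \alpha^{\top} K_{XX}\, \alpha,
\qquad
\hat\alpha = \bigl(K_{XZ} K_{ZX} + \lambda_1 K_{XX} L K_{XX} + \lambda_2 K_{XX}\bigr)^{-1} K_{XZ}\, y .
```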

Slide 38: Kernel Graph Regularized Experimental Design
- Let \( v_i = (K(x_1, z_i), \dots, K(x_m, z_i))^\top \) be the kernel vector of a candidate point; the candidates are again drawn from \( \{x_1, \dots, x_m\} \).
- Covariance: \( \mathrm{Cov}(\hat\alpha) = \sigma^2 M^{-1} K_{XZ} K_{ZX} M^{-1} \) with \( M = K_{XZ} K_{ZX} + \lambda_1 K_{XX} L K_{XX} + \lambda_2 K_{XX} \), mirroring the linear case.
- Select the first data point such that \( v^\top (\lambda_1 K_{XX} L K_{XX} + \lambda_2 K_{XX})^{-1} v \) is maximized, and set \( M_1 = v_1 v_1^\top + \lambda_1 K_{XX} L K_{XX} + \lambda_2 K_{XX} \).
- Suppose k points have been selected; choose the (k+1)-th point as \( v_{k+1} = \arg\max_{v} v^\top M_k^{-1} v \).
- Update \( M_{k+1}^{-1} = M_k^{-1} - \dfrac{M_k^{-1} v_{k+1} v_{k+1}^\top M_k^{-1}}{1 + v_{k+1}^\top M_k^{-1} v_{k+1}} \).

Slides 39-40: A Synthetic Example
- A-optimal design versus Laplacian regularized optimal design (selected points shown on a toy dataset).

Slide 41: Application to image/video compression

Slide 42: Video compression

Slide 43: Topology
- Can we always map a manifold to a Euclidean space without changing its topology?

Slide 44: Topology
- Tools: simplicial complex, homology group, Betti numbers, Euler characteristic, good cover, sample points, homotopy.
- Invariants to recover: number of components, dimension, ...

Slide 45: Topology
- The Euler characteristic is a topological invariant: a number that describes one aspect of a topological space's shape or structure.
- (Figure: example values of χ for several shapes, including 1, -2, 0, and 2.)
- The Euler characteristic of Euclidean space is 1!
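As a concrete check of the invariant, χ = V - E + F can be computed from any polyhedral model of a surface; a cube, which is homeomorphic to the sphere, gives

```latex
\chi = V - E + F = 8 - 12 + 6 = 2,
```

the same value as any other triangulation of the sphere, while any triangulation of the torus gives χ = 0.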

Slide 46: Challenges
- Insufficient sample points.
- Choosing a suitable radius.
- How to identify noisy holes (user interaction?).
- (Figure: a noisy hole; homotopy versus homeomorphism.)

Slide 47: Q&A
