A Geometric Perspective on Machine Learning
Xiaofei He (何晓飞)
College of Computer Science, Zhejiang University

Machine Learning: the Problem
- Information (training data) → f : X → Y
- X and Y are usually considered as Euclidean spaces.

Manifold Learning: Geometric Perspective
- The data space may not be a Euclidean space, but a nonlinear manifold.
- If f is defined on a Euclidean space, the natural metric is the Euclidean distance and the relevant dimension is the ambient dimension.
- If f is defined on a nonlinear manifold, the geodesic distance should be used instead, and the relevant dimension is the manifold dimension.

Manifold Learning: the Challenges
- The manifold M is unknown! We have only samples!
- How do we know whether M is a sphere, a torus, or something else?
- How to compute the distance on M?
- This is unknown: M itself (a sphere? a torus? or else?). This is what we have: a finite set of sample points.
- Relevant tools: topology, geometry, functional analysis.

Manifold Learning: Current Solution
- Find a Euclidean embedding, and then perform traditional learning algorithms in the Euclidean space.

Simplicity
[Figure slides: "Simplicity", "Simplicity", "Simplicity is relative".]

Manifold-based Dimensionality Reduction
- Given high-dimensional data sampled from a low-dimensional manifold, how to compute a faithful embedding?
- How to find the mapping function?
- How to efficiently find the projective function?

A Good Mapping Function
- If x_i and x_j are close to each other, we hope that f(x_i) and f(x_j) preserve the local structure (distance, similarity).
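One of the challenges above, computing distances on the unknown manifold M, is commonly approximated by shortest paths on a k-nearest-neighbor graph over the samples (the idea behind Isomap-style methods). A minimal sketch using NumPy and SciPy; the circle data and the choice of k are illustrative assumptions, not from the slides:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import cdist

def geodesic_distances(X, k=10):
    """Approximate geodesic distances on the data manifold.

    Build a k-NN graph weighted by Euclidean edge lengths, then run
    all-pairs shortest paths: path lengths approximate geodesic
    distances when the manifold is well sampled.
    """
    D = cdist(X, X)                        # ambient Euclidean distances
    W = np.full_like(D, np.inf)            # inf = no edge (scipy convention)
    idx = np.argsort(D, axis=1)[:, 1:k+1]  # k nearest neighbors, skip self
    rows = np.repeat(np.arange(len(X)), k)
    W[rows, idx.ravel()] = D[rows, idx.ravel()]
    W = np.minimum(W, W.T)                 # symmetrize the graph
    np.fill_diagonal(W, 0.0)
    return shortest_path(W, method="D")    # Dijkstra

# Points on a circle: the Euclidean distance cuts straight across,
# while the graph distance follows the curve.
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
X = np.c_[np.cos(t), np.sin(t)]
G = geodesic_distances(X, k=4)
```

For two antipodal points the Euclidean distance is 2 (the chord), while the graph distance is close to pi (the arc), illustrating why the metric matters.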
- k-nearest neighbor graph.
- Objective function: different algorithms have different concerns.

Locality Preserving Projections
- Principle: if x_i and x_j are close, then their maps y_i and y_j are also close.
- Mathematical formulation: minimize the integral of the squared gradient of f over the manifold, \min_f \int_M \|\nabla f\|^2.
- Stokes' theorem relates this smoothness functional to the Laplace-Beltrami operator: \int_M \|\nabla f\|^2 = \int_M f \, \Delta f (with the convention \Delta = -\mathrm{div} \circ \nabla).
- On the k-NN graph this is discretized as \min \sum_{ij} (y_i - y_j)^2 W_{ij}, a graph-Laplacian quadratic form.
- LPP finds a linear approximation to the nonlinear manifold, while preserving the local geometric structure.
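The discrete LPP objective reduces to a generalized eigenproblem, X^T L X a = \lambda X^T D X a, whose smallest eigenvectors give the projection. A minimal NumPy/SciPy sketch assuming heat-kernel edge weights; the parameters k and t and the random data are illustrative:

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def lpp(X, n_components=2, k=5, t=1.0):
    """Locality Preserving Projections (He & Niyogi).

    X: (m, d) data matrix, rows are samples.
    Returns a (d, n_components) projection matrix A; embeddings are X @ A.
    """
    m = X.shape[0]
    D2 = cdist(X, X, "sqeuclidean")
    # k-NN adjacency with heat-kernel weights W_ij = exp(-||xi-xj||^2 / t)
    W = np.zeros((m, m))
    nn = np.argsort(D2, axis=1)[:, 1:k+1]
    for i in range(m):
        W[i, nn[i]] = np.exp(-D2[i, nn[i]] / t)
    W = np.maximum(W, W.T)            # symmetrize
    Dg = np.diag(W.sum(axis=1))       # degree matrix
    L = Dg - W                        # graph Laplacian
    # Generalized eigenproblem (rows-as-samples, hence X.T ... X):
    #   X^T L X a = lam X^T D X a
    A_mat = X.T @ L @ X
    B_mat = X.T @ Dg @ X + 1e-9 * np.eye(X.shape[1])  # regularize B
    vals, vecs = eigh(A_mat, B_mat)
    return vecs[:, :n_components]     # smallest eigenvalues preserve locality

X = np.random.RandomState(0).randn(60, 5)
A = lpp(X, n_components=2)
Y = X @ A
```

Note the slides write the data matrix with points as columns (X L X^T); the sketch uses rows-as-samples, so the transposes swap.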
Manifold of Face Images
- Expression (Sad → Happy), Pose (Right → Left).

Manifold of Handwritten Digits
- Thickness, Slant.

Active and Semi-Supervised Learning: A Geometric Perspective
- Learning target.
- Training examples.
- Linear regression model: y = w^T x + \epsilon.

Generalization Error
- Goal of regression: obtain a learned function that minimizes the generalization error (expected error for unseen test input points).
- Maximum likelihood estimate: under Gaussian noise, the maximum likelihood estimate of w coincides with the least squares solution.

Gauss-Markov Theorem
- For a given x, the expected prediction error is
  E[(y - \hat{w}^T x)^2] = \sigma^2 + x^T \mathrm{Cov}(\hat{w}) \, x,
  the irreducible noise variance plus a term governed by the covariance of the estimated parameters.
[Figure: predictive variance for two different designs of measured points.]
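Under the Gaussian-noise assumption above, the maximum likelihood estimate of w is ordinary least squares, solved via the normal equations. A minimal sketch; the toy data are illustrative:

```python
import numpy as np

def ols(X, y):
    """Maximum likelihood estimate of w for y = X w + noise
    (Gaussian noise => the ML estimate equals least squares).
    X: (n, d) design matrix with rows as training examples."""
    # Normal equations: w = (X^T X)^{-1} X^T y
    return np.linalg.solve(X.T @ X, X.T @ y)

rng = np.random.RandomState(0)
w_true = np.array([2.0, -1.0])
X = rng.randn(200, 2)
y = X @ w_true + 0.01 * rng.randn(200)
w_hat = ols(X, y)
```

With 200 samples and small noise, w_hat recovers w_true to within a few hundredths.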
[Figure: a design whose measured points cover the input range gives low predictive variance ("Good!"); a clustered design gives high variance ("Bad!").]

Experimental Design Methods
Three most common scalar measures of the size of the parameter (w) covariance matrix:
- A-optimal design: minimize the trace of Cov(w).
- D-optimal design: minimize the determinant of Cov(w).
- E-optimal design: minimize the maximum eigenvalue of Cov(w).
Disadvantage: these methods fail to take into account unmeasured (unlabeled) data points.
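The three criteria can be compared directly on the covariance matrix of a least squares estimator, Cov(w) = \sigma^2 (X^T X)^{-1} (rows of X are the chosen samples). A sketch with an illustrative 1-D design; the two candidate designs are mine, not from the slides:

```python
import numpy as np

def design_criteria(X, sigma2=1.0):
    """Scalar measures of the size of Cov(w) = sigma^2 (X^T X)^{-1}
    for least squares with design matrix X (rows = chosen samples)."""
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return {
        "A": np.trace(cov),                  # A-optimality: trace
        "D": np.linalg.det(cov),             # D-optimality: determinant
        "E": np.linalg.eigvalsh(cov).max(),  # E-optimality: largest eigenvalue
    }

# A spread-out design beats a clustered one under all three criteria.
spread = np.array([[-2.0], [-1.0], [1.0], [2.0]])
clustered = np.array([[0.1], [0.2], [0.2], [0.3]])
c1, c2 = design_criteria(spread), design_criteria(clustered)
```

In one dimension the three criteria coincide; in higher dimensions they trade off average, volume, and worst-case variance of the estimator.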
Manifold Regularization: Semi-Supervised Setting
- Measured (labeled) points: discriminant structure.
- Unmeasured (unlabeled) points: geometrical structure.
[Figure sequence: random labeling; active learning; active learning + semi-supervised learning.]
Unlabeled Data to Estimate Geometry
- Measured (labeled) points: discriminant structure.
- Unmeasured (unlabeled) points: geometrical structure.
- Compute the nearest neighbor graph G over all points.

Laplacian Regularized Least Squares (Belkin and Niyogi, 2006)
- Linear objective function:
  J(w) = \sum_{i=1}^k (y_i - w^T z_i)^2 + \frac{\lambda_1}{2} \sum_{i,j=1}^m (w^T x_i - w^T x_j)^2 S_{ij} + \lambda_2 \|w\|^2
- Solution:
  \hat{w} = (Z Z^T + \lambda_1 X L X^T + \lambda_2 I)^{-1} Z y,
  where Z = [z_1, ..., z_k] are the labeled points, X = [x_1, ..., x_m] are all points, S is the weight matrix of G, and L = D - S is the graph Laplacian.

Active Learning
- How to find the most representative points on the manifold?
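The closed-form solution above can be implemented directly. A minimal NumPy sketch following the slides' notation (X: d x m matrix of all points as columns, Z: d x k labeled points, L: k-NN graph Laplacian); the toy data, 0/1 edge weights, and lambda values are illustrative assumptions:

```python
import numpy as np
from scipy.spatial.distance import cdist

def knn_laplacian(X_cols, k=5):
    """Unnormalized graph Laplacian L = D - S of a k-NN graph.
    X_cols: (d, m) data matrix with points as columns."""
    m = X_cols.shape[1]
    D2 = cdist(X_cols.T, X_cols.T, "sqeuclidean")
    S = np.zeros((m, m))
    nn = np.argsort(D2, axis=1)[:, 1:k+1]
    for i in range(m):
        S[i, nn[i]] = 1.0              # 0/1 weights for simplicity
    S = np.maximum(S, S.T)             # symmetrize
    return np.diag(S.sum(axis=1)) - S

def lap_rls(X, Z, y, lam1=0.1, lam2=0.01, k=5):
    """Linear Laplacian Regularized Least Squares:
    w = (Z Z^T + lam1 X L X^T + lam2 I)^{-1} Z y."""
    d = X.shape[0]
    L = knn_laplacian(X, k)
    H = Z @ Z.T + lam1 * X @ L @ X.T + lam2 * np.eye(d)
    return np.linalg.solve(H, Z @ y)

rng = np.random.RandomState(0)
X = rng.randn(3, 100)                  # 100 points in R^3 (columns)
w_true = np.array([1.0, -2.0, 0.5])
Z = X[:, :10]                          # 10 labeled points
y = Z.T @ w_true + 0.01 * rng.randn(10)
w_hat = lap_rls(X, Z, y)
w_ridge = lap_rls(X, Z, y, lam1=0.0, lam2=1e-6)
```

Setting lam1 = 0 removes the manifold term and the estimator reduces to (nearly) ordinary least squares on the labeled points.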
- Objective: guide the selection of the subset of data points that gives the most amount of information.
- Experimental design: select samples to label.
- Share the same objective function as Laplacian Regularized Least Squares: simultaneously minimize the least squares error on the measured samples and preserve the local geometrical structure of the data space.

Analysis of Bias and Variance
- Model: y = Z^T w + \epsilon, with \mathrm{Cov}(\epsilon) = \sigma^2 I.
- Estimator: \hat{w} = (Z Z^T + \lambda_1 X L X^T + \lambda_2 I)^{-1} Z y.
- Writing H = Z Z^T + \lambda_1 X L X^T + \lambda_2 I, the covariance of the estimator is
  \mathrm{Cov}(\hat{w}) = \sigma^2 H^{-1} Z Z^T H^{-1} = \sigma^2 H^{-1} (H - \lambda_1 X L X^T - \lambda_2 I) H^{-1}.
- In order to make the estimator as stable as possible, the size of the covariance matrix should be as small as possible.
- D-optimality: minimize the determinant of the covariance matrix.

The Algorithm: Manifold Regularized Experimental Design
The points z_1, ..., z_k are selected from the candidates x_1, ..., x_m:
- Select the first data point z_1 such that z_1^T (\lambda_1 X L X^T + \lambda_2 I)^{-1} z_1 is maximized, and set H_1 = z_1 z_1^T + \lambda_1 X L X^T + \lambda_2 I.
- Suppose k points have been selected; choose the (k+1)-th point such that
  z_{k+1} = \arg\max_{z \in \{x_1,...,x_m\} \setminus \{z_1,...,z_k\}} z^T H_k^{-1} z.
- Update the inverse by the Sherman-Morrison formula:
  H_{k+1}^{-1} = H_k^{-1} - \frac{H_k^{-1} z_{k+1} z_{k+1}^T H_k^{-1}}{1 + z_{k+1}^T H_k^{-1} z_{k+1}}.

Nonlinear Generalization in RKHS
- Consider the feature space F induced by some nonlinear mapping \phi, with \langle \phi(x_i), \phi(x_j) \rangle = K(x_i, x_j).
- K(·,·): positive semi-definite kernel function.
- Regression model in RKHS: y = \langle w, \phi(x) \rangle + \epsilon, with w = \sum_{i=1}^m \alpha_i \phi(x_i).
- Objective function in RKHS:
  J_{LapRLS} = \sum_{i=1}^k (y_i - \langle w, \phi(z_i) \rangle)^2 + \frac{\lambda_1}{2} \sum_{i,j=1}^m (\langle w, \phi(x_i) \rangle - \langle w, \phi(x_j) \rangle)^2 S_{ij} + \lambda_2 \|w\|_F^2.

Kernel Graph Regularized Experimental Design
Let K_X be the kernel matrix over all candidates, v a candidate's kernel column, and v_1, ..., v_k the columns of the selected points:
- Covariance of the kernel coefficients:
  \mathrm{Cov}(\hat{\alpha}) = \sigma^2 (K_{XZ} K_{ZX} + \lambda_1 K_X L K_X + \lambda_2 K_X)^{-1} K_{XZ} K_{ZX} (K_{XZ} K_{ZX} + \lambda_1 K_X L K_X + \lambda_2 K_X)^{-1}.
- Select the first data point such that v_1^T (\lambda_1 K_X L K_X + \lambda_2 K_X)^{-1} v_1 is maximized, and set M_1 = v_1 v_1^T + \lambda_1 K_X L K_X + \lambda_2 K_X.
- Suppose k points have been selected; choose the (k+1)-th point such that v_{k+1} = \arg\max_v v^T M_k^{-1} v.
- Update:
  M_{k+1}^{-1} = M_k^{-1} - \frac{M_k^{-1} v_{k+1} v_{k+1}^T M_k^{-1}}{1 + v_{k+1}^T M_k^{-1} v_{k+1}}.

A Synthetic Example
[Figures: points selected by A-optimal design vs. Laplacian Regularized
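The greedy selection above can be implemented with the rank-one Sherman-Morrison inverse update, so each round only scores candidates against the current H_k^{-1}. A sketch of the linear version; the random data, the identity placeholder for L, and the lambda values are illustrative assumptions:

```python
import numpy as np

def manifold_regularized_design(X, L, n_select, lam1=0.1, lam2=0.01):
    """Greedy point selection: repeatedly pick the candidate z
    maximizing z^T H^{-1} z, then update H <- H + z z^T via the
    Sherman-Morrison formula.
    X: (d, m) candidates as columns; L: (m, m) graph Laplacian."""
    d, m = X.shape
    H_inv = np.linalg.inv(lam1 * X @ L @ X.T + lam2 * np.eye(d))
    selected = []
    for _ in range(n_select):
        # score_i = x_i^T H^{-1} x_i for every candidate at once
        scores = np.einsum("ij,jk,ki->i", X.T, H_inv, X)
        scores[selected] = -np.inf            # never pick a point twice
        j = int(np.argmax(scores))
        selected.append(j)
        z = X[:, j]
        Hz = H_inv @ z
        # Sherman-Morrison: (H + z z^T)^{-1} = H^{-1} - (H^{-1}z z^T H^{-1})/(1 + z^T H^{-1} z)
        H_inv -= np.outer(Hz, Hz) / (1.0 + z @ Hz)
    return selected

rng = np.random.RandomState(0)
X = rng.randn(2, 50)
L = np.eye(50)        # placeholder Laplacian for the demo
picks = manifold_regularized_design(X, L, n_select=5)
```

Each update costs O(d^2) instead of a fresh O(d^3) inversion, which is what makes the greedy scheme practical.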
Optimal Design.]

Application to Image/Video Compression
[Figure slides: image compression and video compression examples.]

Topology
- Can we always map a manifold to a Euclidean space without changing its topology?

Topology: Tools and Concepts
- Simplicial complex, homology group, Betti numbers, Euler characteristic.
- Good cover, sample points, homotopy.
- Number of components, dimension, ...
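For a simplicial complex, the Euler characteristic listed above is the alternating sum of simplex counts, chi = V - E + F - .... A minimal sketch; the tetrahedron-boundary example (a triangulated sphere) is mine, not from the slides:

```python
from itertools import combinations

def euler_characteristic(top_simplices):
    """chi = sum_k (-1)^k * (number of k-simplices), where every face
    of each listed top-level simplex belongs to the complex."""
    faces = set()
    for s in top_simplices:
        for k in range(1, len(s) + 1):
            faces.update(combinations(sorted(s), k))  # all sub-faces
    # a face with n vertices is an (n-1)-simplex
    return sum((-1) ** (len(f) - 1) for f in faces)

# Boundary of a tetrahedron: 4 triangles glued into a sphere.
sphere = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(euler_characteristic(sphere))  # 4 vertices - 6 edges + 4 triangles = 2
```

A single filled triangle (a disk, homotopy-equivalent to a point and to Euclidean space) gives chi = 3 - 3 + 1 = 1.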
Topology: the Euler Characteristic
- The Euler characteristic is a topological invariant: a number that describes one aspect of a topological space's shape or structure.
[Figure: example surfaces with their Euler characteristics.]
- The Euler characteristic of Euclidean space is 1!

Challenges
- Insufficient sample points.
- Choosing a suitable radius.
- How to identify noisy holes (user interaction?).
[Figure: a noisy hole; homotopy vs. homeomorphism.]

Q&A