Artificial Intelligence 09: Bayesian Networks (lecture slides)

Bayesian networks

Frequentist vs. Bayesian: objective vs. subjective
Frequentist: probability is the long-run expected frequency of occurrence. P(A) = n/N, where n is the number of times event A occurs in N opportunities. "The probability that something happens is 0.1" means that 0.1 is the fraction that would be observed in the limit of infinitely many samples. But in many scenarios repeated trials are impossible: what is the probability of a third world war?
Bayesian: degree of belief. It is a measure of the plausibility of an event given incomplete knowledge.

Probability
Probability is a rigorous formalism for uncertain knowledge.
The joint probability distribution specifies the probability of every atomic event, i.e., of every complete assignment to the random variables.
Queries can be answered by summing the entries for the atomic events that correspond to the query proposition.
For nontrivial domains, we must find a way to reduce the size of the joint distribution; independence and conditional independence provide the tools.

Independence / Conditional Independence
A and B are independent iff P(A|B) = P(A), or P(B|A) = P(B), or P(A, B) = P(A) P(B).
A is conditionally independent of B given C iff P(A|B, C) = P(A|C).
In most cases, using conditional independence reduces the representation of the full joint distribution from exponential in n to linear in n. Conditional independence is our most basic and robust form of knowledge about uncertain environments.

Probability Theory
Probability theory can be expressed in terms of two simple equations:
Sum rule: the probability of a variable is obtained by marginalizing, i.e., summing out, the other variables: P(X) = Σ_Y P(X, Y).
Product rule: a joint probability is expressed in terms of a conditional probability: P(X, Y) = P(Y|X) P(X).
All of probabilistic inference and learning amounts to repeated application of the sum and product rules.
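A minimal sketch of both rules on a small joint distribution; the two Boolean variables and all numbers below are made up for illustration, not taken from the slides:

```python
# Sum and product rules on a toy joint distribution P(Cavity, Toothache).
# All numbers are made up for illustration.
joint = {
    (True, True): 0.12, (True, False): 0.08,
    (False, True): 0.08, (False, False): 0.72,
}

# Sum rule: P(Cavity = true) is obtained by summing out Toothache.
p_cavity = sum(p for (cavity, _), p in joint.items() if cavity)

# Product rule, rearranged: P(Toothache | Cavity) = P(Cavity, Toothache) / P(Cavity).
p_toothache_given_cavity = joint[(True, True)] / p_cavity

print(p_cavity)                  # 0.2
print(p_toothache_given_cavity)  # 0.6
```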

Outline
Graphical models (probabilistic graphical models)
Bayesian networks: syntax, semantics, inference in Bayesian networks

What is a graphical model?
A graphical representation of a probability distribution: a marriage of probability theory and graph theory. Also called probabilistic graphical models. They augment analysis instead of using pure algebra.

What is a Graph?
A graph consists of nodes (also called vertices) and links (also called edges or arcs). In a probabilistic graphical model, each node represents a random variable (or a group of random variables), and the links express probabilistic relationships between variables.

Graphical Models in CS
A natural tool for handling uncertainty and complexity, which occur throughout applied mathematics and engineering. The most important idea in graphical models is the notion of modularity: a complex system is built by combining simpler parts.

Why are Graphical Models useful?
Probability theory provides the glue whereby the parts are combined, ensuring that the system as a whole is consistent and providing ways to interface models to data. Graph theory provides an intuitive interface by which humans can model highly interacting sets of variables, as well as a data structure that lends itself naturally to the design of efficient general-purpose algorithms.

Graphical models: a unified framework
Classical multivariate probabilistic systems can be viewed as instances of a common underlying formalism: mixture models, factor analysis, hidden Markov models, Kalman filters, etc. These techniques are used in systems engineering, information theory, pattern recognition, and statistical mechanics. Advantage: specialized techniques developed in one field can be transferred between communities and exploited more widely. The framework also provides a natural way to design new systems.

The role of graphical models in machine learning
1. A simple way to visualize the structure of a probabilistic model.
2. Insight into the properties of the model, e.g., conditional independence properties read off by inspecting the graph.
3. Complex computations required for inference and learning can be expressed as graphical manipulations.

Directed vs. undirected graphs
Directed graphical models: the links have arrows; Bayesian networks; capture causal relationships between random variables; more popular in AI and statistics.
Undirected graphical models: the links have no arrows; Markov random fields; better suited to expressing soft constraints between variables; more popular in vision and physics.

Bayesian networks
A simple, graphical data structure for representing dependencies among variables (conditional independence), giving a concise specification of any full joint probability distribution.
Syntax:
a set of nodes, one per variable
a directed, acyclic graph (links ≈ "direct influences")
a conditional distribution for each node given its parents, P(Xi | Parents(Xi)), quantifying the effect of the parents on the node.
In the simplest case, the conditional distribution is represented as a conditional probability table (CPT) giving the distribution over Xi for each combination of parent values.
Example
The topology of the network encodes conditional independence assertions:
Weather is independent of the other variables.
Toothache and Catch are conditionally independent given Cavity.

Example
I'm at work in the evening when my neighbor John calls to say my alarm is ringing, but my neighbor Mary doesn't call. Sometimes the alarm is set off by minor earthquakes. Has my house really been burgled?
Variables: Burglary, Earthquake, Alarm, JohnCalls, MaryCalls.
The network topology reflects causal knowledge:
A burglar can set the alarm off.
An earthquake can set the alarm off.
The alarm can cause Mary to call.
The alarm can cause John to call.

Example contd. (the complete burglary network with its CPTs)

Compactness
A CPT for a Boolean variable Xi with k Boolean parents has 2^k rows, one for each combination of parent values, i.e., 2^k independently specifiable probabilities. Each row requires one number p for Xi = true (the number for Xi = false is just 1 − p).
If each variable has no more than k parents, the complete network requires O(n · 2^k) numbers, i.e., it grows linearly with n, vs. O(2^n) for the full joint distribution.
For the burglary net: 1 + 1 + 4 + 2 + 2 = 10 numbers (vs. 2^5 − 1 = 31).

Global semantics
The full joint distribution is defined as the product of the local conditional distributions:
P(x1, …, xn) = ∏i P(xi | parents(Xi))
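As a concrete check of the global semantics, here is a sketch that evaluates one full-joint entry of the burglary network; the CPT values below are the usual textbook numbers for this example (the slides show them only in the figure, so treat them as assumed):

```python
# Burglary network CPTs (assumed standard textbook values).
P_B = 0.001                       # P(Burglary = true)
P_E = 0.002                       # P(Earthquake = true)
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}  # P(Alarm = true | B, E)
P_J = {True: 0.90, False: 0.05}   # P(JohnCalls = true | Alarm)
P_M = {True: 0.70, False: 0.01}   # P(MaryCalls = true | Alarm)

# Global semantics: P(j, m, a, ~b, ~e)
#   = P(j | a) P(m | a) P(a | ~b, ~e) P(~b) P(~e)
p = P_J[True] * P_M[True] * P_A[(False, False)] * (1 - P_B) * (1 - P_E)
print(p)  # ~0.000628
```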

Local semantics
Each node is conditionally independent of its nondescendants given its parents.
Theorem: local semantics ⇔ global semantics.

Causal Chains
A basic configuration: X → Y → Z.
Is X independent of Z given Y? Yes: evidence along the chain "blocks" the influence.

Common Cause
Another basic configuration: two effects of the same cause, X ← Y → Z.
Are X and Z independent? No.
Are X and Z independent given Y? Yes: observing the cause blocks the influence between the effects.

Common Effect
The last configuration: two causes of one effect (a v-structure), X → Y ← Z.
Are X and Z independent? Yes: remember the ballgame and the rain causing traffic; on their own they are uncorrelated.
Are X and Z independent given Y? No: remember that seeing traffic put the rain and the ballgame in competition. This is backwards from the other cases: observing the effect enables influence between the causes.
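A small numeric sketch of the common-effect case ("explaining away"); all probabilities below are made up for illustration:

```python
# Explaining away in the v-structure Rain -> Traffic <- Ballgame.
# All numbers are made up for illustration.
P_rain, P_game = 0.1, 0.1
P_traffic = {(True, True): 0.95, (True, False): 0.80,
             (False, True): 0.70, (False, False): 0.10}  # P(T=true | Rain, Game)

def posterior_rain(evidence_game=None):
    """P(Rain=true | Traffic=true [, Game=evidence_game]) by enumeration."""
    num = den = 0.0
    for rain in (True, False):
        for game in (True, False):
            if evidence_game is not None and game != evidence_game:
                continue
            p = ((P_rain if rain else 1 - P_rain)
                 * (P_game if game else 1 - P_game)
                 * P_traffic[(rain, game)])
            den += p
            if rain:
                num += p
    return num / den

print(posterior_rain())      # ~0.361: traffic alone raises belief in rain
print(posterior_rain(True))  # ~0.131: the ballgame "explains away" the traffic
```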

Constructing Bayesian networks
We need a method such that a series of locally testable assertions of conditional independence guarantees the required global semantics.
1. Choose an ordering of variables X1, …, Xn.
2. For i = 1 to n:
   add Xi to the network;
   select parents from X1, …, Xi−1 such that P(Xi | Parents(Xi)) = P(Xi | X1, …, Xi−1).
This choice of parents guarantees the global semantics, as the derivation below shows. Constructing a Bayesian network requires a topology in which the chosen parent set really does capture all direct influences on each variable. The correct order in which to add nodes is to add the "root causes" first, then the variables they influence directly, and so on.
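The guarantee follows from the chain rule (repeated application of the product rule) together with the parent-selection condition above; in LaTeX:

```latex
P(x_1,\ldots,x_n) \;=\; \prod_{i=1}^{n} P(x_i \mid x_1,\ldots,x_{i-1})
                  \;=\; \prod_{i=1}^{n} P\big(x_i \mid \mathrm{parents}(X_i)\big)
```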

Example (a non-causal variable ordering, developed step by step over several slides)

Example contd.
Deciding conditional independence is hard in non-causal directions. (Causal models and conditional independence seem hardwired for humans!)
The resulting network is less compact: 1 + 2 + 4 + 2 + 4 = 13 numbers needed.

Causality?
When a Bayesian network reflects the true causal patterns:
Often simpler (nodes have fewer parents).
Often easier to think about.
Often easier to elicit from experts.
But BNs need not be causal; sometimes no causal network exists over the domain (especially if variables are missing).

In that case the arrows reflect correlation, not causation. What do the arrows really mean? The topology may happen to encode causal structure, but what it really encodes is conditional independence.

Inference in Bayesian networks

Inference tasks
Simple queries: compute the posterior probability P(Xi | E = e), e.g., P(NoGas | Gauge = empty, Lights = on, Starts = false).
Conjunctive queries: P(Xi, Xj | E = e) = P(Xi | E = e) P(Xj | Xi, E = e).
Optimal decisions: decision networks include utility information; probabilistic inference is required for P(outcome | action, evidence).

Inference by enumeration
The previous chapter explained that any conditional probability can be computed by summing certain entries from the full joint distribution. In a Bayesian network, a query can be answered by computing products of the network's conditional probabilities and summing them out over the hidden variables, as in the sketch below.

Evaluation tree
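A minimal enumeration sketch for the classic burglary query P(Burglary | JohnCalls = true, MaryCalls = true), reusing the assumed textbook CPT values from the earlier sketch:

```python
# Inference by enumeration: P(Burglary | JohnCalls=true, MaryCalls=true).
# CPT values as in the earlier sketch (assumed standard textbook numbers).
P_B, P_E = 0.001, 0.002
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}
P_J = {True: 0.90, False: 0.05}
P_M = {True: 0.70, False: 0.01}

def pr(p_true, value):
    """Probability that a Boolean variable takes `value`, given P(true)."""
    return p_true if value else 1 - p_true

dist = {}
for b in (True, False):
    total = 0.0
    for e in (True, False):        # sum out Earthquake
        for a in (True, False):    # sum out Alarm
            total += (pr(P_B, b) * pr(P_E, e) * pr(P_A[(b, e)], a)
                      * P_J[a] * P_M[a])   # evidence: J = true, M = true
    dist[b] = total

print(dist[True] / sum(dist.values()))  # ~0.284
```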

Variable elimination
Variable elimination: carry out the summations right-to-left, storing intermediate results (factors) to avoid recomputation.

Complexity of exact inference
Singly connected networks (or polytrees): any two nodes are connected by at most one (undirected) path. The time and space cost of variable elimination is O(d^k · n), i.e., variable elimination on polytrees is linear in the size of the network.
Multiply connected networks: 3SAT can be reduced to exact inference, so exact inference is NP-hard; it is equivalent to counting 3SAT models, hence #P-complete.

Example: Naive Bayes model
A single parent variable and a collection of child variables; the children are mutually conditionally independent given the parent.

Naive Bayes model
The total number of parameters is linear in n.

Example: spam detection
Imagine trying to detect spam email automatically. A simple scheme is to look only at the subject line and try to recognize spam by checking some simple features of the subject header. We first consider two simple features:
Caps: whether the subject is entirely capitalized.
Free: whether the subject contains the word free, in upper or lower case.
E.g., a message with the subject header "NEW MORTGAGE RATE" is likely to be spam. Similarly for "Money for Free", "FREE lunch", etc.

Example: spam detection
The model is built on three random variables, Caps, Free, and Spam, each of which takes on the value Y (for Yes) or N (for No):
Caps = Y if and only if the subject of the message does not contain lowercase letters.
Free = Y if and only if the word free appears in the subject (letter case is ignored).
Spam = Y if and only if the message is spam.
P(Free, Caps, Spam) = P(Spam) P(Caps | Spam) P(Free | Spam)
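A sketch of posterior computation under this factorization; the CPT values below are placeholders (the slides' actual numbers appear only in the figures):

```python
# Naive Bayes spam model: P(Free, Caps, Spam) = P(Spam) P(Caps|Spam) P(Free|Spam).
# The CPT values are placeholders, not the slides' numbers.
P_spam = 0.4
P_caps = {True: 0.50, False: 0.05}   # P(Caps=Y | Spam=Y) / P(Caps=Y | Spam=N)
P_free = {True: 0.60, False: 0.02}   # P(Free=Y | Spam=Y) / P(Free=Y | Spam=N)

def p_spam_given(caps, free):
    """P(Spam=Y | Caps=caps, Free=free) by Bayes' rule on the factorization."""
    def joint(spam):
        ps = P_spam if spam else 1 - P_spam
        pc = P_caps[spam] if caps else 1 - P_caps[spam]
        pf = P_free[spam] if free else 1 - P_free[spam]
        return ps * pc * pf
    return joint(True) / (joint(True) + joint(False))

print(p_spam_given(caps=True, free=True))  # e.g. a subject like "MONEY FOR FREE"
```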

Example: Learning to classify text documents
Text classification is the task of assigning a given document to one of a fixed set of categories on the basis of the text it contains. Naive Bayes models are often used for this task. In these models, the query variable is the document category, and the "effect" variables are the presence or absence of each word in the language; we assume that words occur independently in documents, with frequencies determined by the document category.
a. Explain precisely how such a model is constructed, given as "training data" a set of documents that have been assigned to categories.
b. Explain precisely how to categorize a new document.
c. Is the independence assumption reasonable? Discuss.

Example: Learning to classify text documents
The model consists of the prior probabilities P(Category) and the conditional probabilities P(word_i | Category).
P(Category = c) is estimated as the fraction of all documents that are of category c.
P(word_i = true | Category = c) is estimated as the fraction of documents of category c that contain word i.
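A compact sketch of those two estimation steps plus classification. The add-one smoothing is an extra refinement (the slides describe plain fractions), and scoring every vocabulary word as present or absent is one common reading of the model:

```python
# Naive Bayes text classification: estimate P(Category) and P(word | Category)
# from training data, then score a new document.
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (set_of_words, category) pairs."""
    cat_counts = Counter(cat for _, cat in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for words, cat in docs:
        vocab |= words
        for w in words:
            word_counts[cat][w] += 1
    priors = {c: n / len(docs) for c, n in cat_counts.items()}
    # Fraction of category-c documents containing each word, with
    # add-one smoothing so unseen words don't yield zero probabilities.
    cond = {c: {w: (word_counts[c][w] + 1) / (cat_counts[c] + 2)
                for w in vocab}
            for c in cat_counts}
    return priors, cond, vocab

def classify(words, priors, cond, vocab):
    """Pick the category maximizing log P(c) + sum of log P(word on/off | c)."""
    def score(c):
        s = math.log(priors[c])
        for w in vocab:
            p = cond[c][w]
            s += math.log(p if w in words else 1 - p)
        return s
    return max(priors, key=score)

docs = [({"free", "money"}, "spam"), ({"meeting", "notes"}, "ham"),
        ({"free", "lunch"}, "spam"), ({"project", "notes"}, "ham")]
priors, cond, vocab = train(docs)
print(classify({"free", "offer"}, priors, cond, vocab))  # spam
```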

Twenty Newsgroups
Given 1000 training documents from each group, learn to classify new documents according to which newsgroup they came from. Naive Bayes achieves 89% classification accuracy.

Learning Curve for 20 Newsgroups

Example: A Digit Recognizer

Naive Bayes for Digits
A simple version: one feature F_ij for each grid position. The possible feature values are on/off, based on whether the intensity of the pixel in the image is more or less than 0.5. Each input image is mapped to a feature vector, e.g., here there are lots of features, each binary.
Naive Bayes model: what do we need to learn?

Examples: CPTs
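A sketch of the feature mapping just described; the 28×28 grid size is an assumption (the slides don't specify it):

```python
# Map a grayscale digit image to binary features F_ij: "on" where the
# pixel intensity exceeds 0.5. The 28x28 grid size is assumed.
import numpy as np

image = np.random.rand(28, 28)   # stand-in for a normalized digit image
features = image > 0.5           # boolean array: one F_ij per grid position

# A Naive Bayes digit model then needs P(Class) plus one CPT entry
# P(F_ij = on | Class) per position and class, so the parameter count
# is linear in the number of features.
print(features.shape, features.dtype)  # (28, 28) bool
```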

Comments on Naive Bayes
Makes probabilistic inference tractable by making a strong assumption of conditional independence.
Tends to work fairly well despite this strong assumption: experiments show it to be quite competitive with other classification methods on standard datasets.
Particularly popular for text categorization, e.g., spam filtering.

Summary
Bayesian networks provide a natural representation for (causally induced) conditional independence.
Topology + CPTs = a compact representation of the joint distribution.
Generally easy for domain experts to construct.
Exact inference by variable elimination: polynomial time on polytrees, NP-hard on general graphs; space cost equals time cost, and both are very sensitive to topology.
Naive Bayes model.

Homework: 14.3 (a, b, c), 14.4, 14.7 (a, b, c) (not to be handed in).
