"An Introduction to Parallel Programming" (《并行程序设计导论》) - Chapter 1 slides

An Introduction to Parallel Programming
Peter Pacheco
Chapter 1: Why Parallel Computing?
Copyright 2010, Elsevier Inc. All rights reserved.

Roadmap
- Why we need ever-increasing performance.
- Why we're building parallel systems.
- Why we need to write parallel programs.
- How do we write parallel programs?
- What we'll be doing.
- Concurrent, parallel, distributed!

Changing times
- From 1986 to 2002, microprocessors were speeding along like a rocket, increasing in performance an average of 50% per year.
- Since then, it's dropped to about a 20% increase per year.

An intelligent solution
- Instead of designing and building faster microprocessors, put multiple processors on a single integrated circuit.

Now it's up to the programmers
- Adding more processors doesn't help much if programmers aren't aware of them, or don't know how to use them.
- Serial programs don't benefit from this approach (in most cases).

Why we need ever-increasing performance
- Computational power is increasing, but so are our computation problems and needs.
- Problems we never dreamed of have been solved because of past increases, such as decoding the human genome.
- More complex problems are still waiting to be solved.

(Image slides: Climate modeling, Protein folding, Drug discovery, Energy research, Data analysis.)

Why we're building parallel systems
- Up to now, performance increases have been attributable to increasing density of transistors.
- But there are inherent problems.

A little physics lesson
- Smaller transistors = faster processors.
- Faster processors = increased power consumption.
- Increased power consumption = increased heat.
- Increased heat = unreliable processors.

Solution
- Move away from single-core systems to multicore processors.
- "core" = central processing unit (CPU).
- Introducing parallelism!

Why we need to write parallel programs
- Running multiple instances of a serial program often isn't very useful.
- Think of running multiple instances of your favorite game.
- What you really want is for it to run faster.

Approaches to the serial problem
- Rewrite serial programs so that they're parallel.
- Write translation programs that automatically convert serial programs into parallel programs.
  - This is very difficult to do.
  - Success has been limited.

More problems
- Some coding constructs can be recognized by an automatic program generator, and converted to a parallel construct.
- However, it's likely that the result will be a very inefficient program.
- Sometimes the best parallel solution is to step back and devise an entirely new algorithm.

Example
- Compute n values and add them together.
- Serial solution: shown as an image on the slide; a sketch follows below.

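The serial code appears only as an image in the original slides. Below is a minimal, compilable C sketch of what the book's pseudocode shows; compute_next_value is a hypothetical stand-in, since the slides leave its arguments and body unspecified.

    #include <stdio.h>

    /* Hypothetical stand-in for the book's Compute_next_value(...);
       the slides leave its arguments and body unspecified. */
    int compute_next_value(int i) {
        return (i * i) % 10;   /* any deterministic value will do here */
    }

    int main(void) {
        const int n = 24;
        int sum = 0;

        for (int i = 0; i < n; i++) {
            int x = compute_next_value(i);
            sum += x;          /* accumulate into a single running total */
        }

        printf("sum = %d\n", sum);
        return 0;
    }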
Example (cont.)
- We have p cores, p much smaller than n.
- Each core performs a partial sum of approximately n/p values.
- Each core uses its own private variables and executes this block of code independently of the other cores (again shown as an image on the slide; a sketch follows below).

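A sketch of the per-core block. The partial_sum wrapper and its my_rank/p parameters are assumptions made so the fragment is self-contained; the runtime that actually supplies a core's rank (MPI, Pthreads, or OpenMP) is introduced in later chapters.

    /* Stand-in from the serial sketch above. */
    int compute_next_value(int i);

    /* Partial sum for one core: a sketch of the slide's private block. */
    int partial_sum(int my_rank, int p, int n) {
        int my_n     = n / p;            /* ~n/p values per core; assumes p
                                            divides n evenly in this sketch */
        int my_first = my_rank * my_n;   /* first index this core owns      */
        int my_last  = my_first + my_n;  /* one past the last index it owns */

        int my_sum = 0;                  /* private variable, one per core  */
        for (int my_i = my_first; my_i < my_last; my_i++)
            my_sum += compute_next_value(my_i);
        return my_sum;
    }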
Example (cont.)
- After each core completes execution of the code, its private variable my_sum contains the sum of the values computed by its calls to Compute_next_value.
- Ex.: 8 cores, n = 24, and the calls to Compute_next_value return:
  1, 4, 3, 9, 2, 8, 5, 1, 1, 5, 2, 7, 2, 5, 0, 4, 1, 8, 6, 5, 1, 2, 3, 9

Example (cont.)
- Once all the cores are done computing their private my_sum, they form a global sum by sending their results to a designated "master" core, which adds the final result. (The slide's code is an image; a sketch follows below.)

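A sketch of this master-core scheme. send_to and receive_from are hypothetical helpers standing in for the slide's send/receive operations; the real message-passing calls come with MPI in later chapters.

    /* Hypothetical message-passing helpers (not a real library API). */
    int  receive_from(int core);        /* block until 'core' sends an int */
    void send_to(int core, int value);  /* send an int to 'core'           */

    /* Naive global sum: every non-master core sends its my_sum to core 0,
       which does all the receiving and adding. */
    int global_sum_master(int my_rank, int p, int my_sum) {
        if (my_rank == 0) {                      /* core 0 is the master     */
            int sum = my_sum;
            for (int core = 1; core < p; core++) {
                int value = receive_from(core);  /* p - 1 receives ...       */
                sum += value;                    /* ... and p - 1 additions  */
            }
            return sum;                          /* only core 0 has the total */
        } else {
            send_to(0, my_sum);                  /* everyone else just sends */
            return 0;
        }
    }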
Example (cont.)

    Core     0   1   2   3   4   5   6   7
    my_sum   8  19   7  15   7  13  12  14

    Global sum: 8 + 19 + 7 + 15 + 7 + 13 + 12 + 14 = 95

    Core     0   1   2   3   4   5   6   7
    my_sum  95  19   7  15   7  13  12  14

But wait! There's a much better way to compute the global sum.

Better parallel algorithm
- Don't make the master core do all the work. Share it among the other cores.
- Pair the cores so that core 0 adds its result with core 1's result.
- Core 2 adds its result with core 3's result, etc.
- Work with odd and even numbered pairs of cores.

Better parallel algorithm (cont.)
- Repeat the process now with only the evenly ranked cores.
- Core 0 adds the result from core 2.
- Core 4 adds the result from core 6, etc.
- Now cores divisible by 4 repeat the process, and so forth, until core 0 has the final result.

Multiple cores forming a global sum
(Image slide: diagram of the pairwise reduction; a sketch follows below.)

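A sketch of the tree-structured sum, using the same hypothetical send_to/receive_from helpers as above and assuming p is a power of two, as in the slide's diagram.

    /* Tree-structured global sum: pair up cores, then halve the number of
       active cores each round until core 0 holds the total. */
    int global_sum_tree(int my_rank, int p, int my_sum) {
        int my_total = my_sum;
        for (int divisor = 2; divisor <= p; divisor *= 2) {
            if (my_rank % divisor == 0) {
                /* This round's receiver: partner is divisor/2 ranks above. */
                my_total += receive_from(my_rank + divisor / 2);
            } else if (my_rank % (divisor / 2) == 0) {
                /* This round's sender: pass the total down and drop out. */
                send_to(my_rank - divisor / 2, my_total);
                return 0;
            }
        }
        return my_total;   /* only core 0 reaches here with the full sum */
    }

With p = 8 the loop runs three rounds (divisor = 2, 4, 8), matching the diagram: core 0 does 3 receives and 3 additions instead of 7.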
Analysis
- In the first example, the master core performs 7 receives and 7 additions.
- In the second example, the master core performs 3 receives and 3 additions.
- The improvement is more than a factor of 2!

Analysis (cont.)
- The difference is more dramatic with a larger number of cores.
- If we have 1000 cores:
  - The first example would require the master to perform 999 receives and 999 additions.
  - The second example would only require 10 receives and 10 additions, since the number of active cores halves each round and ceil(log2(1000)) = 10.
- That's an improvement of almost a factor of 100!

How do we write parallel programs?
- Task parallelism: partition the various tasks carried out in solving the problem among the cores.
- Data parallelism: partition the data used in solving the problem among the cores. Each core carries out similar operations on its part of the data.

Professor P
- 15 questions, 300 exams.

Professor P's grading assistants
- TA#1, TA#2, TA#3.

Division of work - data parallelism
- TA#1: 100 exams. TA#2: 100 exams. TA#3: 100 exams.

Division of work - task parallelism
- TA#1: Questions 1-5. TA#2: Questions 6-10. TA#3: Questions 11-15.
(A sketch contrasting the two divisions follows below.)

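The grading analogy maps directly onto code. A toy sketch, with all names (grade included) hypothetical: data parallelism splits the exams among the TAs, task parallelism splits the questions.

    #define EXAMS     300
    #define QUESTIONS  15
    #define TAS         3

    /* Hypothetical stand-in for grading one question of one exam. */
    void grade(int exam, int question) {
        (void)exam; (void)question;   /* a real grader would do work here */
    }

    /* Data parallelism: each TA does the SAME work (all 15 questions)
       on a DIFFERENT third of the data (100 of the 300 exams). */
    void grade_data_parallel(int ta) {
        int per_ta = EXAMS / TAS;                     /* 100 exams each */
        for (int e = ta * per_ta; e < (ta + 1) * per_ta; e++)
            for (int q = 0; q < QUESTIONS; q++)
                grade(e, q);
    }

    /* Task parallelism: each TA does a DIFFERENT part of the work (5 of
       the 15 questions) on ALL of the data (every exam). */
    void grade_task_parallel(int ta) {
        int per_ta = QUESTIONS / TAS;                 /* 5 questions each */
        for (int e = 0; e < EXAMS; e++)
            for (int q = ta * per_ta; q < (ta + 1) * per_ta; q++)
                grade(e, q);
    }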
Division of work - data parallelism
(Image slide.)

Division of work - task parallelism
- Tasks: 1) Receiving. 2) Addition.

Coordination
- Cores usually need to coordinate their work.
- Communication: one or more cores send their current partial sums to another core.
- Load balancing: share the work evenly among the cores so that one is not heavily loaded.
- Synchronization: because each core works at its own pace, make sure cores do not get too far ahead of the rest. (A minimal sketch follows below.)

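As a concrete preview of synchronization, a minimal OpenMP sketch (OpenMP is one of the three C extensions introduced later): the barrier keeps any thread from racing ahead until all threads have caught up. Compile with a flag such as gcc -fopenmp.

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        #pragma omp parallel
        {
            int my_rank = omp_get_thread_num();

            /* ... each thread computes its share at its own pace ... */
            printf("thread %d finished its share\n", my_rank);

            /* Synchronization: no thread continues past this point
               until every thread has reached it. */
            #pragma omp barrier

            if (my_rank == 0)
                printf("all threads caught up; safe to combine results\n");
        }
        return 0;
    }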
What we'll be doing
- Learning to write programs that are explicitly parallel.
- Using the C language.
- Using three different extensions to C:
  - Message-Passing Interface (MPI)
  - POSIX Threads (Pthreads)
  - OpenMP

Type of parallel systems
- Shared-memory
  - The cores can share access to the computer's memory.
  - Coordinate the cores by having them examine and update shared memory locations.
- Distributed-memory
  - Each core has its own, private memory.
  - The cores must communicate explicitly by sending messages across a network.

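The shared-memory style was previewed with OpenMP above. For the distributed-memory side, a minimal MPI sketch (again, a preview of later chapters): my_sum is a stand-in partial sum, private to each process, and the explicit MPI_Reduce message exchange forms the global sum. Build and run with mpicc and mpiexec.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);

        int my_rank, p;
        MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);  /* which process am I? */
        MPI_Comm_size(MPI_COMM_WORLD, &p);        /* how many processes? */

        /* Each process has its own private memory: my_sum here is not
           visible to any other process. */
        int my_sum = my_rank + 1;                 /* stand-in partial sum */

        /* Explicit communication: every my_sum travels across the network
           (conceptually) and is combined on process 0. */
        int global_sum = 0;
        MPI_Reduce(&my_sum, &global_sum, 1, MPI_INT, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (my_rank == 0)
            printf("global sum = %d\n", global_sum);

        MPI_Finalize();
        return 0;
    }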
Type of parallel systems (cont.)
(Image slide: shared-memory vs. distributed-memory architectures.)

Terminology
- Concurrent computing: a program is one in which multiple tasks can be in progress at any instant.
- Parallel computing: a program is one in which multiple tasks cooperate closely to solve a problem.
- Distributed computing: a program may need to cooperate with other programs to solve a problem.

Concluding Remarks (1)
- The laws of physics have brought us to the doorstep of multicore technology.
- Serial programs typically don't benefit from multiple cores.
- Automatic parallel program generation from serial program code isn't the most efficient approach to get high performance from multicore computers.

Concluding Remarks (2)
- Learning to write parallel programs involves learning how to coordinate the cores.
- Parallel programs are usually very complex and therefore require sound program techniques and development.
