Deep Multi-Agent Reinforcement Learning

Outline
- Introduction
- Single-agent RL
- Multiagent RL
- Our recent works
- Summary

Introduction
- 2015.10 - AlphaGo vs. Fan Hui, the European Go champion: 5 : 0
- 2016.03 - AlphaGo vs. Lee Sedol, the Go master: 4 : 1
- 2017.05 - AlphaGo vs. Ke Jie, the Go world champion and the world's best Go player: 3 : 0
- AlphaGo Zero beat AlphaGo 100 : 0 (2017.10)
- DeepMind's AlphaGo wins the Marvin Minsky Medal 2018

Introduction: Deep Learning vs. Reinforcement Learning
- Deep learning: one-shot decision making
- Reinforcement learning: sequential decision making
- RL is like life - learning from interaction

Outline
- Introduction
- Single-agent RL
- Multiagent RL
- Our recent works
- Summary

Single-agent RL: Types of RL Algorithms
- Value based: learnt value function, implicit policy (e.g. $\epsilon$-greedy)
- Policy based: no value function, learnt policy
- Actor-critic: learnt value function, learnt policy

Single-agent RL: Q-value Definition
- Return: $G_t = \sum_{k=0}^{\infty} \gamma^k r_{t+k+1}$
- The action-value (Q-value) function $Q^\pi(s,a)$ is the expectation of $G_t$ when taking action $a$ and following policy $\pi$ afterwards:
  $Q^\pi(s,a) = \mathbb{E}\left[ G_t \mid S_t = s, A_t = a, \pi \right] = \mathbb{E}\left[ r_{t+1} + \gamma r_{t+2} + \cdots \mid s, a, \pi \right]$
- "How good is action a in state s?"

Single-agent RL: Optimal Q-value Definition
- The optimal Q-value function $Q^*(s,a)$ is the maximum Q-value over all policies: $Q^*(s,a) = \max_\pi Q^\pi(s,a)$
- Theorem: there exists a policy $\pi^*$ such that $Q^{\pi^*}(s,a) = Q^*(s,a)$ for all $s, a$. Thus, it suffices to find $Q^*$.

Single-agent RL: Q-learning
- Bellman optimality equation: $Q^*$ satisfies
  $Q^*(s,a) = R(s,a) + \gamma \sum_{s'} P(s' \mid s, a) \max_{a'} Q^*(s', a')$
- Q-learning: let $a$ be $\epsilon$-greedy w.r.t. $Q$, and $a'$ be greedy (optimal) w.r.t. $Q$. Then $Q$ converges to $Q^*$ if we iteratively apply the following update (a minimal code sketch of this update follows below):
  $Q(s,a) \leftarrow Q(s,a) + \alpha \left( r + \gamma \max_{a'} Q(s', a') - Q(s,a) \right)$

Single-agent RL: Q-learning in Practice
- Tabular Q-learning is impractical: it handles only very limited state/action spaces and cannot generalize to unobserved states.
- Think about the Breakout game: the state is the screen pixels, image size 84*84, 4 consecutive frames, grayscale with 256 gray levels - that is $256^{84 \times 84 \times 4}$ rows in the Q-table!
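To make the update rule concrete, here is a minimal tabular Q-learning loop. It is a sketch rather than code from the slides: it assumes a small discrete environment exposed through the classic OpenAI Gym interface (reset() returning a state index, step() returning (state, reward, done, info)), and the hyperparameter values are placeholders.

```python
import numpy as np

def tabular_q_learning(env, num_episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Minimal tabular Q-learning for a discrete Gym-style environment."""
    Q = np.zeros((env.observation_space.n, env.action_space.n))
    for _ in range(num_episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection w.r.t. the current Q-table
            if np.random.rand() < epsilon:
                a = env.action_space.sample()
            else:
                a = int(np.argmax(Q[s]))
            s_next, r, done, _ = env.step(a)
            # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
            td_target = r + gamma * np.max(Q[s_next]) * (not done)
            Q[s, a] += alpha * (td_target - Q[s, a])
            s = s_next
    return Q
```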

Single-agent RL: Deep Q-Learning (DQN)
- Represent Q-values with a deep neural network as a function approximator.
- Experience replay buffer: removes sample correlation.
- Target network: stabilizes the Q-function update.
- Human-level Control through Deep Reinforcement Learning, Nature, 2015.

Single-agent RL: Deep Q-Learning
- Loss function: $L_i(\theta_i) = \mathbb{E}_{(s,a,r,s') \sim D}\left[ \left( y_i - Q(s,a;\theta_i) \right)^2 \right]$ with the target $y_i = r + \gamma \max_{a'} Q(s',a';\theta^-)$
- Update the Q-network parameters following the gradient:
  $\nabla_{\theta_i} L_i(\theta_i) = \mathbb{E}_{(s,a,r,s') \sim D}\left[ \left( y_i - Q(s,a;\theta_i) \right) \nabla_{\theta_i} Q(s,a;\theta_i) \right]$
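The loss and gradient above translate into a short training step. The following PyTorch sketch is illustrative rather than the reference DQN implementation; `q_net`, `target_net`, and the batch layout are assumed names and shapes, not ones defined in the slides.

```python
import torch
import torch.nn.functional as F

def dqn_update(q_net, target_net, optimizer, batch, gamma=0.99):
    """One DQN gradient step on a minibatch sampled from the replay buffer.

    `batch` is assumed to hold tensors (states, actions, rewards, next_states, dones),
    with `actions` as a LongTensor and `dones` as a float mask.
    """
    states, actions, rewards, next_states, dones = batch

    # Q(s, a; theta) for the actions actually taken
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)

    # Target y = r + gamma * max_a' Q(s', a'; theta^-), from the frozen target network
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
        targets = rewards + gamma * (1.0 - dones) * next_q

    loss = F.mse_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Copying `q_net`'s weights into `target_net` every fixed number of steps provides the frozen target network the slides refer to.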

Single-agent RL: Extensions of DQN
- Double deep Q-learning
- Dueling deep Q-learning
- Prioritized replay
- Distributional deep Q-learning
- Rainbow
- Soft Q-learning
- Rainbow: Combining Improvements in Deep Reinforcement Learning, arXiv:1710.02298, 2017.

Single-agent RL: Policy-based RL
- Advantages: better convergence properties; effective in high-dimensional or continuous action spaces; can learn stochastic policies (rock-paper-scissors example).
- Disadvantages: typically converges to a local rather than a global optimum; evaluating a policy is typically inefficient and of high variance.

Single-agent RL: Policy Gradient Methods
- Optimize the policy $\pi_\theta$ with gradient ascent on the expected return $J(\theta) = \mathbb{E}_{s \sim \rho^\pi,\, a \sim \pi_\theta}\left[ Q^\pi(s,a) \right]$.
- Good when greedification is hard, e.g., continuous actions.
- REINFORCE (Williams 1992): $g(\tau) = \sum_{t=0}^{\infty} G_t \, \nabla_\theta \log \pi_\theta(a_t \mid s_t)$
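A minimal REINFORCE update corresponding to the gradient above might look as follows. This is a sketch: `policy` is assumed to map a state to a `torch.distributions.Categorical`, and the episode format is an assumption, not something specified in the slides.

```python
import torch

def reinforce_update(policy, optimizer, episode, gamma=0.99):
    """One REINFORCE update from a single completed episode.

    `episode` is a list of (state, action, reward) tuples.
    """
    # Compute the return G_t for every time step (discounted reward-to-go)
    returns, G = [], 0.0
    for _, _, r in reversed(episode):
        G = r + gamma * G
        returns.insert(0, G)
    returns = torch.tensor(returns)

    # Policy gradient: sum_t G_t * grad log pi(a_t | s_t), implemented as a loss
    loss = 0.0
    for (s, a, _), G_t in zip(episode, returns):
        log_prob = policy(s).log_prob(torch.as_tensor(a))
        loss = loss - G_t * log_prob

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```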

Single-agent RL: Actor-Critic Methods
- Reduce the variance of $g(\tau)$ by learning a critic $Q_w(s,a)$:
  $g(\tau) = \sum_{t=0}^{\infty} Q_w(s_t, a_t) \, \nabla_\theta \log \pi_\theta(a_t \mid s_t)$
- Policy gradient theorem (Sutton et al. 2000):
  $\nabla_\theta J(\theta) = \mathbb{E}_{s \sim \rho^\pi,\, a \sim \pi_\theta}\left[ \nabla_\theta \log \pi_\theta(a \mid s) \, Q^\pi(s,a) \right]$

Single-agent RL: Actor-Critic Methods
- Further reduce variance with a baseline $b(s)$:
  $g(\tau) = \sum_{t=0}^{\infty} \left( Q(s_t,a_t) - b(s_t) \right) \nabla_\theta \log \pi_\theta(a_t \mid s_t)$
- With $b(s) = V(s)$ we obtain $A(s,a) = Q(s,a) - V(s)$, the advantage function:
  $g(\tau) = \sum_{t=0}^{\infty} A(s_t, a_t) \, \nabla_\theta \log \pi_\theta(a_t \mid s_t)$
- The TD-error $\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)$ is an unbiased estimate of $A(s_t, a_t)$:
  $g(\tau) = \sum_{t=0}^{\infty} \delta_t \, \nabla_\theta \log \pi_\theta(a_t \mid s_t)$

Single-agent RL: Deep Actor-Critic Methods
- Actor and critic are both deep neural networks (convolutional and recurrent layers); actor and critic can share layers.
- Both are trained with stochastic gradient descent: the actor on the policy gradient, the critic on TD($\lambda$) or Sarsa($\lambda$).
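Putting the TD-error-as-advantage idea together with the deep actor-critic setup, a one-step update could be sketched as below. The network interfaces (`actor` returning a categorical distribution, `critic` returning a scalar state value) are illustrative assumptions, not details from the slides.

```python
import torch

def actor_critic_step(actor, critic, actor_opt, critic_opt,
                      s, a, r, s_next, done, gamma=0.99):
    """One online actor-critic update using the TD-error as the advantage estimate."""
    v = critic(s).squeeze()
    with torch.no_grad():
        v_next = critic(s_next).squeeze() * (1.0 - float(done))
        td_target = r + gamma * v_next
    td_error = td_target - v  # delta_t = r + gamma * V(s') - V(s)

    # Critic: regress V(s) toward the TD target
    critic_loss = td_error.pow(2)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Actor: policy gradient with delta_t as an unbiased advantage estimate
    actor_loss = -td_error.detach() * actor(s).log_prob(torch.as_tensor(a))
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()
```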

Single-agent RL: DDPG
- The gradient of the policy's performance:
  $\nabla_{\theta^\mu} J \approx \mathbb{E}_{s \sim D}\left[ \nabla_a Q(s,a \mid \theta^Q) \big|_{a = \mu(s \mid \theta^\mu)} \, \nabla_{\theta^\mu} \mu(s \mid \theta^\mu) \right]$
- Use a replay buffer to address the non-i.i.d. samples problem; both actor and critic are updated by sampling minibatches uniformly from the buffer (off-policy algorithm).
- Use target networks for both actor and critic, updated slowly, to stabilize training and improve the stability of learning.
- Use batch normalization to improve scalability across different tasks.
- Continuous Control with Deep Reinforcement Learning, Lillicrap et al., ICLR 2016.
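A compact sketch of one DDPG update, covering the deterministic policy gradient, the off-policy critic loss, and the slow (soft) target-network updates described above. Network and optimizer names are placeholders, and the soft-update coefficient `tau` is an assumed value, not one taken from the slides.

```python
import torch
import torch.nn.functional as F

def ddpg_update(actor, critic, actor_targ, critic_targ, actor_opt, critic_opt,
                batch, gamma=0.99, tau=0.005):
    """One DDPG update on a minibatch (states s, actions a, rewards r, next states s2, done flags d)."""
    s, a, r, s2, d = batch

    # Critic: minimize (Q(s,a) - y)^2 with y = r + gamma * Q'(s', mu'(s'))
    with torch.no_grad():
        y = r + gamma * (1.0 - d) * critic_targ(s2, actor_targ(s2))
    critic_loss = F.mse_loss(critic(s, a), y)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: deterministic policy gradient, i.e. maximize Q(s, mu(s))
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Slowly track the learned networks with the target networks (soft update)
    with torch.no_grad():
        for p, p_targ in zip(actor.parameters(), actor_targ.parameters()):
            p_targ.mul_(1 - tau).add_(tau * p)
        for p, p_targ in zip(critic.parameters(), critic_targ.parameters()):
            p_targ.mul_(1 - tau).add_(tau * p)
```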

Outline
- Introduction
- Single-agent RL
- Multiagent RL
- Our recent works
- Summary

Multiagent RL: From Single-agent Deep RL to Multiagent Settings
- Traditional reinforcement learning approaches such as Q-learning or policy gradient are poorly suited to multi-agent environments.
- Learning and teaching are inseparable in multiagent settings (multiagent adaptation).
- The key feature for stabilizing DQN - the use of experience replay - may confuse an agent's perception in a multiagent learning setting.
- An effective multiagent deep RL framework is required.

Multiagent RL: Independent Multiagent Actor-Critic
- Inspired by independent Q-learning (Tan 1993).
- Each agent learns independently with its own actor and critic, treating the other agents as part of the environment.
- Speed up learning with parameter sharing: different inputs, including the agent index, induce different behaviours (see the sketch after this slide).
- Still independent: each critic conditions only on the agent's own observations and actions.
- Limitations: non-stationary learning; hard to learn to coordinate; multi-agent credit assignment problem.
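Parameter sharing with an agent index as extra input might be realised as in the sketch below: a single shared actor network whose input is the local observation concatenated with a one-hot agent id. The architecture and dimensions are illustrative assumptions, not the deck's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedActor(nn.Module):
    """One actor network shared by all agents; a one-hot agent index is appended
    to the local observation so that different agents can learn different behaviours."""

    def __init__(self, obs_dim, n_actions, n_agents, hidden=64):
        super().__init__()
        self.n_agents = n_agents
        self.net = nn.Sequential(
            nn.Linear(obs_dim + n_agents, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs, agent_id):
        # Append the agent's one-hot id to its local observation
        one_hot = F.one_hot(agent_id, self.n_agents).float()
        logits = self.net(torch.cat([obs, one_hot], dim=-1))
        return torch.distributions.Categorical(logits=logits)
```

Every agent queries the same `SharedActor` with its own observation and id (a LongTensor index), so experience from all agents updates one parameter set while the id input still allows specialised behaviours.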

Multiagent RL: Multiagent Actor-Critic Framework (Lowe et al., NIPS 2017)
- Each agent owns an independent actor and critic.
- Each agent's critic is augmented with extra information about the policies of the other agents (centralized training, decentralized execution).
- The reward signal for updating each critic can be the same or different across agents.
- The actor of each agent is trained using only its local observation as input.

Multiagent RL: MADDPG (Lowe et al., NIPS 2017)
- Multiagent actor-critic policy gradient:
  $\nabla_{\theta_i} J(\theta_i) = \mathbb{E}\left[ \nabla_{\theta_i} \log \pi_i(a_i \mid o_i) \, Q_i^{\pi}(x, a_1, \ldots, a_N) \right]$
- MADDPG (deterministic policies $\mu_i$):
  $\nabla_{\theta_i} J(\mu_i) = \mathbb{E}_{x, a \sim D}\left[ \nabla_{\theta_i} \mu_i(o_i) \, \nabla_{a_i} Q_i^{\mu}(x, a_1, \ldots, a_N) \big|_{a_i = \mu_i(o_i)} \right]$
- Centralized critic loss:
  $L(\theta_i) = \mathbb{E}_{x, a, r, x'}\left[ \left( Q_i^{\mu}(x, a_1, \ldots, a_N) - y \right)^2 \right], \quad y = r_i + \gamma \, Q_i^{\mu'}(x', a_1', \ldots, a_N') \big|_{a_j' = \mu_j'(o_j)}$
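Following the MADDPG equations above, one agent's update with a centralized critic and a decentralized actor could be sketched as follows. The data layout and network interfaces are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn.functional as F

def maddpg_agent_update(i, actors, critics, target_actors, target_critics,
                        actor_opts, critic_opts, batch, gamma=0.95):
    """One MADDPG update for agent i.

    `batch` is assumed to hold: obs (list of per-agent observation tensors),
    acts (list of per-agent action tensors), rewards (list of per-agent reward tensors),
    next_obs (list), and dones (float mask). The centralized critic sees all
    observations and all actions, concatenated along the last dimension.
    """
    obs, acts, rewards, next_obs, dones = batch

    # Centralized critic target: y = r_i + gamma * Q_i'(x', a_1', ..., a_N'), a_j' = mu_j'(o_j')
    with torch.no_grad():
        next_acts = [ta(o) for ta, o in zip(target_actors, next_obs)]
        q_next = target_critics[i](torch.cat(next_obs, -1), torch.cat(next_acts, -1))
        y = rewards[i] + gamma * (1.0 - dones) * q_next
    q = critics[i](torch.cat(obs, -1), torch.cat(acts, -1))
    critic_loss = F.mse_loss(q, y)
    critic_opts[i].zero_grad(); critic_loss.backward(); critic_opts[i].step()

    # Decentralized actor: maximize Q_i(x, a_1, ..., a_N) with a_i replaced by mu_i(o_i)
    acts_pg = [a.detach() for a in acts]
    acts_pg[i] = actors[i](obs[i])
    actor_loss = -critics[i](torch.cat(obs, -1), torch.cat(acts_pg, -1)).mean()
    actor_opts[i].zero_grad(); actor_loss.backward(); actor_opts[i].step()
```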

Multiagent RL: MADDPG Performance
- Predator-prey game.
- Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments, NIPS, 2017.

Multiagent RL: Independent Deep Multiagent Learning
- Relax the assumption of centralized training to decentralized training.
- Consider more complex multiagent coordination (predator-prey game).

Our recent works: Decentralized Training / Independent Learning
- Continuous states and actions.
- Generative Adversarial Self-Imitation Learning, evaluated on the predator-prey game and the StarCraft game.

Our recent works: Deep Cooperative Multiagent Reinforcement Learning (Centralized Training)
- Deep recurrent MARL in the smart grid.
- Two-level MARL for advertising exposure (Alimama).
- Hierarchical deep MARL in games (NetEase).

Our recent works: Recurrent Deep Multiagent Q-Learning for Autonomous Brokers in Smart Grid (Yang et al., IJCAI'18)
- Retail brokers offer tariffs to both local consumers and small-scale producers; they develop pricing strategies that adjust tariff prices so as to make profits while balancing demand and supply in the local tariff market.
- (Market diagram: retail brokers publish tariffs to distributed consumers and new-energy producers, subscribe to the operator's service, and incur a punishment factor on the imbalance part.)
- Approach: cluster customers; multiagent recurrent DQN (RDQN) learning; reward shaping.
- Results: RDMRL vs. single-agent RDQN.

Our recent works: Hierarchical Deep MARL for Advertising Display Optimization
- Recommendation and advertising products are displayed in a mixed list.
- Constrained advertising exposure optimization problem (per-query and per-day constraints).
- How to adaptively adjust each ad's score for different customers to maximize the long-term advertising revenue?
- Model it as a constrained MDP with per-state constraints.
- Propose a two-level multiagent reinforcement learning framework.
- Propose a constrained hindsight experience replay (CHER) mechanism to facilitate the training process.

Our recent works: Hierarchical Deep MARL for Advertising Display Optimization
- Results: DDPG with CHER compared with DDPG without CHER.

Our recent works: Hierarchical Deep Multiagent Learning in Games
- Sparse and delayed reward problem.
- Introduce a task hierarchy into the multiagent learning framework design.
- Architectures: Ind-hDQN, hCom, hQmix.
- Augmented concurrent experience replay: coordinates the agents' policy updates and improves the high-level sparse experience.
- Low-level parameter sharing: supports the learning of specialized skills, improves sample efficiency, and facilitates the training process.

Our recent works: Competitive Multiagent Environments
- Cooperative settings: centralized training; centralized or decentralized implementation.
- Competitive / open settings: how to efficiently play against non-stationary opponents online? (Zheng et al., NeurIPS, 2018)

Our recent works: Competitive Multiagent Environments - Policy Distillation
- Given a class of teacher policies (represented as Q-networks), generate a generalized student policy that performs well across all teacher tasks.
- Can accelerate policy learning when faced with new tasks (reduces bootstrapping time).
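In the spirit of policy distillation, a student policy can be fitted to several teacher Q-networks by minimizing a KL divergence between each softened teacher policy and the student's softmax policy. The sketch below is generic: the temperature `tau` and the network interfaces are illustrative assumptions, not details given in the slides.

```python
import torch
import torch.nn.functional as F

def distill_step(student, teachers, optimizer, states, tau=0.01):
    """One distillation step: match the student's softmax policy to each teacher's
    softened Q-values via KL divergence, averaged over teachers and states."""
    loss = 0.0
    for teacher in teachers:
        with torch.no_grad():
            # Soften the teacher's Q-values into a target action distribution
            teacher_probs = F.softmax(teacher(states) / tau, dim=-1)
        student_log_probs = F.log_softmax(student(states), dim=-1)
        # KL(teacher || student), averaged over the batch
        loss = loss + F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    loss = loss / len(teachers)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```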

Our recent works: Competitive Multiagent Environments - Opponent Policy Detection
- Estimate the opponent's policy.
- Predict the opponent's type using the KL divergence.
- Opponent prediction model combining belief and opponent models.
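One very simple way to realise opponent-type prediction with a KL divergence is to compare the empirical distribution of the opponent's recent actions against the action distribution of each known opponent type, as in the sketch below. This is an illustrative stand-in, not the model of Zheng et al.

```python
import numpy as np

def detect_opponent_type(observed_counts, type_policies, eps=1e-8):
    """Pick the opponent type whose known policy is closest (in KL divergence)
    to the empirical action distribution observed in recent play.

    observed_counts: array of action counts observed for the opponent.
    type_policies: dict mapping type name -> action-probability vector.
    """
    empirical = observed_counts / max(observed_counts.sum(), 1)
    scores = {}
    for name, policy in type_policies.items():
        p = empirical + eps
        q = np.asarray(policy) + eps
        scores[name] = float(np.sum(p * np.log(p / q)))  # KL(empirical || type policy)
    return min(scores, key=scores.get)
```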

Summary
- Deep MARL in cooperative vs. competitive multiagent settings.
- Other (multiagent) RL application scenarios: autonomous-driving scenarios; software testing (e.g., fuzzing, code summarization); data mining (feature engineering); cyber-physical systems (security checking); NLP (image captioning, dialogue generation); multi-robotic systems; military scenarios.

Thank You! Q&A
