Big Data Technology Fundamentals Training: GraphLab Technical Training
Big Data Technology Fundamentals Training

GraphLab Technical Training

Social Media
Graphs encode and describe knowledge and relationships among people, facts, products, interests, and ideas.
Big: billions of vertices and edges with rich metadata.
Applications: advertising, science, the Web.

Graphs Are a Key Foundation of Machine Learning and Data Mining
- Find influential people and information
- Discover communities, social circles, and groups
- Targeted advertising and product placement
- Model complex dependencies in data and knowledge

Properties of Graph-Parallel Algorithms
- Dependency graph: what I like depends on what my friends like
- Iterative computation
- Factored computation

Why MapReduce Is a Poor Fit (1)
MapReduce cannot efficiently represent dependency graphs: it assumes independent data rows, the user must code substantial data transformations, and data replication is costly and slow.

Why MapReduce Is a Poor Fit (2)
MapReduce cannot efficiently express iterative algorithms: each iteration redistributes all of the data across CPUs and ends at a global barrier.

Why MapReduce Is a Poor Fit (3)
Graph algorithms usually need to recompute only part of the data (a subgraph), yet MapReduce reprocesses all of it in every iteration.

Why MapReduce Is a Poor Fit (4)
Hadoop MapReduce is not optimized for iterative algorithms: every iteration pays a disk penalty and a job-startup penalty.

The Graph-Parallel Abstraction
A user-defined vertex-program runs on each vertex, and the graph constrains interaction along the edges, either using messages (e.g., Pregel, PODC '09, SIGMOD '10) or through shared state (e.g., GraphLab, UAI '10, VLDB '12). Parallelism comes from running many vertex-programs simultaneously.

Example
What is the popularity of this user? Whether a user is popular depends on the popularity of her followers, which in turn depends on the popularity of their followers.

PageRank Algorithm
Update ranks in parallel and iterate until convergence. The rank of user i is a base value plus the weighted sum of her neighbors' ranks: R[i] = 0.15 + sum over in-neighbors j of w[j,i] * R[j].

Pregel (Giraph)
Bulk Synchronous Parallel model: compute, communicate, barrier.

The Pregel Abstraction
Vertex-programs interact by sending messages.
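As a concrete illustration, the parallel-update, iterate-until-convergence PageRank above can be sketched in plain Python. This is a synchronous single-machine sketch, not GraphLab or Pregel code; the weighting scheme (the usual 0.85 damping folded into w[j,i]) and the tolerance are my assumptions, since the deck leaves w unspecified.

```python
def pagerank(out_edges, tol=1e-6, max_iters=100):
    # out_edges: {vertex: [out-neighbors]}. The deck's update is
    # R[i] = 0.15 + sum_j w[j,i] * R[j]; here the standard 0.85 damping
    # is folded into the edge weight w[j,i] = 0.85 / out_degree(j).
    vertices = set(out_edges) | {v for ns in out_edges.values() for v in ns}
    in_edges = {v: [] for v in vertices}
    for u, ns in out_edges.items():
        for v in ns:
            in_edges[v].append((u, 0.85 / len(ns)))
    rank = {v: 1.0 for v in vertices}
    for _ in range(max_iters):
        # All vertices are updated from the previous iteration's ranks,
        # so the updates are independent and could run in parallel.
        new_rank = {v: 0.15 + sum(w * rank[u] for u, w in in_edges[v])
                    for v in vertices}
        if max(abs(new_rank[v] - rank[v]) for v in vertices) < tol:
            return new_rank
        rank = new_rank
    return rank
```

Because the damping factor keeps the update a contraction, the iteration converges regardless of the starting ranks.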

7、 messages.iPregel_PageRank(i, messages) : / Receive all the messages total = 0 foreach( msg in messages) : total = total + msg / Update the rank of this vertex Ri = 0.15 + total / Send new messages to neighbors foreach(j in out_neighborsi) : Send msg(Ri * wij) to vertex jMalewicz et al. PODC09, SIGM

8、OD1015The GraphLab AbstractionVertex-Programs directly read the neighbors stateiGraphLab_PageRank(i) / Compute sum over neighbors total = 0 foreach( j in in_neighbors(i): total = total + Rj * wji / Update the PageRank Ri = 0.15 + total / Trigger neighbors to run again if Ri not converged then foreac

9、h( j in out_neighbors(i): signal vertex-program on jLow et al. UAI10, VLDB12自然图16Power-Law Degree Distribution幂律分布 长尾分布17Power-Law Degree DistributionTop 1% of vertices are adjacent to50% of the edges!High-Degree VerticesNumber of VerticesAltaVista WebGraph1.4B Vertices, 6.6B EdgesDegreeMore than 10

10、8 vertices have one neighbor.Asynchronous Executionrequires heavy locking (GraphLab)18Challenges of High-Degree VerticesTouches a largefraction of graph(GraphLab)Sequentially processedgesSends manymessages(Pregel)Edge meta-datatoo large for singlemachineSynchronous Executionprone to stragglers (Preg

11、el)19Power-Law Degree Distribution“Star Like” MotifPresidentObamaFollowers20Power-Law Graphs are Difficult to PartitionPower-Law graphs do not have low-cost balanced cuts Leskovec et al. 08, Lang 04Traditional graph-partitioning algorithms perform poorly on Power-Law Graphs.Abou-Rjeili et al. 06CPU

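The GraphLab-style program above recomputes a vertex only when a neighbor signals it, instead of sweeping all vertices every round. A minimal sketch of that signal-driven scheduling in Python follows; the worklist scheduler, graph encoding, and tolerance are my own illustration, not the GraphLab API, and damping is again folded into the edge weights.

```python
from collections import deque

def dynamic_pagerank(out_edges, tol=1e-4):
    # Shared state read directly from neighbors, GraphLab-style.
    vertices = set(out_edges) | {v for ns in out_edges.values() for v in ns}
    in_edges = {v: [] for v in vertices}
    for u, ns in out_edges.items():
        for v in ns:
            in_edges[v].append((u, 0.85 / len(ns)))
    rank = {v: 0.15 for v in vertices}
    active, queued = deque(vertices), set(vertices)
    while active:
        i = active.popleft()
        queued.discard(i)
        new = 0.15 + sum(w * rank[j] for j, w in in_edges[i])
        changed = abs(new - rank[i]) > tol
        rank[i] = new
        if changed:
            # Signal out-neighbors to run again, as in the pseudocode above.
            for j in out_edges.get(i, []):
                if j not in queued:
                    queued.add(j)
                    active.append(j)
    return rank
```

Once a vertex's rank stops changing by more than the tolerance, it stops signaling, so only the still-converging subgraph keeps computing.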
Split High-Degree Vertices
PowerGraph splits high-degree vertices across machines and provides a new abstraction that is equivalent on the split vertices: you program for one logical vertex, and it runs on several machines. Each step gathers information about the neighborhood, updates the vertex, then signals neighbors and modifies edge data.

A Common Pattern for Vertex-Programs

    GraphLab_PageRank(i):
        # gather: compute the sum over in-neighbors
        total = 0
        foreach (j in in_neighbors(i)):
            total = total + R[j] * w[j,i]
        # apply: update the PageRank
        R[i] = 0.15 + total
        # scatter: trigger neighbors to run again
        if R[i] has not converged:
            foreach (j in out_neighbors(i)):
                signal vertex-program on j

GAS Decomposition
- Gather (reduce): accumulate information about the neighborhood; the user-defined gather results are combined by a user-defined sum, so the reduction can run in parallel
- Apply: apply the accumulated value to the center vertex (user-defined)
- Scatter: update adjacent edges and vertices, and activate neighbors (user-defined)

PageRank in PowerGraph

    PowerGraph_PageRank(i):
        Gather(j -> i): return w[j,i] * R[j]
        sum(a, b): return a + b
        Apply(i, total): R[i] = 0.15 + total
        Scatter(i -> j): if R[i] changed then trigger j to be recomputed

Distributed Execution of a PowerGraph Vertex-Program
A split vertex has one master and several mirrors, one per machine it spans. The mirrors gather over their local edges and send partial sums to the master; the master applies the update and pushes the new vertex value back to the mirrors, which then scatter over their local edges.

Minimizing Communication
- Communication is linear in the number of machines each vertex spans
- A vertex-cut minimizes the number of machines each vertex spans
- Percolation theory suggests that power-law graphs have good vertex cuts (Albert et al., 2000)

New Approach to Partitioning
Rather than cutting edges, cut vertices. An edge-cut must synchronize many edges.

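The gather/apply/scatter decomposition above can be sketched as three user-defined functions plus a driver. The toy single-machine engine, function names, and thresholds below are illustrative, not the PowerGraph C++ API; in PowerGraph the gather runs as a parallel reduction across the mirrors of a split vertex.

```python
def gather(R, w, j, i):
    # User-defined gather: the contribution of edge j -> i.
    return w[(j, i)] * R[j]

def apply_update(R, i, total):
    # User-defined apply: update the center vertex; report whether it changed.
    old = R[i]
    R[i] = 0.15 + total
    return abs(R[i] - old) > 1e-4

def gas_pagerank(edges):
    # edges: list of (src, dst); damping folded into the weights as before.
    vertices = {v for e in edges for v in e}
    outdeg = {v: 0 for v in vertices}
    for s, _ in edges:
        outdeg[s] += 1
    w = {(s, d): 0.85 / outdeg[s] for s, d in edges}
    in_nbrs = {v: [s for s, d in edges if d == v] for v in vertices}
    out_nbrs = {v: [d for s, d in edges if s == v] for v in vertices}
    R = {v: 0.15 for v in vertices}
    active = set(vertices)
    while active:
        i = active.pop()
        total = sum(gather(R, w, j, i) for j in in_nbrs[i])  # Gather (reduce)
        if apply_update(R, i, total):                        # Apply
            active.update(out_nbrs[i])                       # Scatter: signal
    return R
```

Note how the scatter phase carries the scheduling decision: a vertex activates its out-neighbors only when its own value actually changed.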
A vertex-cut must synchronize only a single vertex.
New theorem: for any edge-cut we can directly construct a vertex-cut that requires strictly less communication and storage.

System Design
- Implemented as a C++ API
- Uses HDFS for graph input and output
- Fault tolerance is achieved by check-pointing; snapshot time is about 5 seconds for the Twitter network
- Stack: EC2 HPC nodes; MPI/TCP-IP; PThreads; HDFS; the PowerGraph (GraphLab 2) system on top

Implemented Many Algorithms
- Collaborative filtering: alternating least squares, stochastic gradient descent, SVD, non-negative matrix factorization
- Statistical inference: loopy belief propagation, max-product linear programs, Gibbs sampling
- Graph analytics
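Since communication is linear in the number of machines each vertex spans, the quality of a vertex-cut is commonly summarized by its replication factor: the average number of machines holding a copy of each vertex. A small sketch of that measurement follows; the edge-to-machine assignment format is my own illustration.

```python
def replication_factor(edge_assignment):
    # edge_assignment: {(src, dst): machine}. In a vertex-cut, a vertex is
    # replicated on every machine that holds one of its edges, and its
    # communication cost is proportional to the machines it spans.
    spans = {}
    for (s, d), m in edge_assignment.items():
        spans.setdefault(s, set()).add(m)
        spans.setdefault(d, set()).add(m)
    return sum(len(ms) for ms in spans.values()) / len(spans)
```

For a star graph around a hub, splitting the hub's edges across two machines replicates only the hub, so the average stays close to 1 even though the hub has high degree.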
