Introduction of InfiniBand

InfiniBand (abbreviated IB) is a computer networking communications standard used in high-performance computing. It features very high throughput and very low latency, and it is used for data interconnect both among and within computers. InfiniBand is mainly utilized as either a direct or switched interconnect between servers and storage systems, as well as an interconnect between storage systems. The standard encompasses new protocols, hardware interface cards, switches, and software.

InfiniBand originated in 1999 from the merger of two competing designs: Future I/O and Next Generation I/O.

The merger led to the formation of the InfiniBand Trade Association, whose members included Compaq, Dell, Hewlett-Packard, IBM, Intel, Microsoft, and Sun. Key milestones:

2001 - Mellanox ships InfiniBridge 10 Gbit/s devices and ships over 10,000 InfiniBand ports.
2003 - Virginia Tech builds an InfiniBand cluster ranked number three on the Top500 at the time.
2004 - The OpenFabrics Alliance develops a standardized, Linux-based InfiniBand software stack.
2005 - Linux adds IB support.
2009 - Of the top 500 supercomputers in the world, Gigabit Ethernet is the internal interconnect technology in 259 installations, compared with 181 using InfiniBand.
2010 - Oracle makes a major investment in Mellanox.
2014 - Of the top 500 supercomputers in the world, InfiniBand is the internal interconnect technology in 224 installations, compared with 100 using Gigabit Ethernet and 88 using 10 Gigabit Ethernet.

As CPUs grew more and more powerful, I/O gradually became the bottleneck of the server, and attention turned back to the PCI architecture issued years earlier. Over that period PCI bandwidth was improved from 8/16 bits to 32 bits and even 64 bits, but the bus still had defects stemming from its original design. In one word, processing power was substantially outstripping the capabilities of industry-standard I/O systems built on buses:

1. Buses are inherently shared and require arbitration protocols on each use, so only one PCI device can use the bus at any time.
2. Buses do not provide the level of reliability and availability now required of server systems; even Fibre Channel attaches through an industry-standard bus, making it impossible to avoid the bottlenecks and low availability characteristic of standard I/O buses.
3. As the clock speed of a bus increases, interference between the signal lines becomes more and more serious, which makes it hard to lay out multiple buses on one motherboard.
4. Because PCI devices are connected to memory by MMIO, hot-plugging a PCI device is hard.

[Figure: the traditional TCP/IP data path. On both client and server, the user application sits above the kernel stack, the channel adapter (CA), and the wire. Setup uses bind, listen, and accept on the server and connect on the client; data moves through send and recv, with a data copy on each side between user and kernel buffers. Blue lines: control information; red lines: user data; green lines: control and data transfer.]

The traditional TCP/IP protocol has some defects that cause low speed and high latency between two nodes:

1. Data is copied between the user-space application and kernel space.
2. The operating system is involved during data transfers.
3. Threads are blocked during I/O transfers.
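As a concrete illustration of this path, here is a minimal sketch of the server side of the socket flow in the figure (C; error handling is omitted, and the port number 7471 is an arbitrary choice for this example). The recv() call blocks the thread, involves the kernel stack, and copies the payload from a kernel buffer into the user buffer, which are exactly the three defects listed above.

    #include <stdio.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        char buf[64];
        struct sockaddr_in addr = { 0 };

        /* setup: socket + bind + listen + accept (blue control lines) */
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(7471);
        bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
        listen(lfd, 1);
        int cfd = accept(lfd, NULL, NULL);

        /* data: recv() blocks until the kernel has received the payload
         * and copied it from its own buffer into buf (red user-data line) */
        ssize_t n = recv(cfd, buf, sizeof(buf) - 1, 0);
        if (n > 0) {
            buf[n] = '\0';
            printf("received: %s\n", buf);
        }

        close(cfd);
        close(lfd);
        return 0;
    }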

[Figure: the InfiniBand/RDMA data path. Setup uses rdma_bind, rdma_listen, and rdma_accept on the server and rdma_connect on the client; data moves through rdma_post_send and rdma_post_recv with no copy into kernel buffers. Blue lines: control information; red lines: user data; green lines: control and data transfer.]

Compared with a legacy node, the InfiniBand node does not use the kernel buffer: the channel adapter moves user data directly to and from registered application memory.
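To make the contrast concrete, here is a minimal sketch of the server side using the librdmacm API, whose calls correspond to the names in the figure (rdma_post_send and rdma_post_recv are librdmacm functions of the same name; rdma_create_ep stands in here for the figure's rdma_bind step). The port 7471 and the buffer size are arbitrary assumptions for this sketch, and error handling and cleanup are omitted. The receive buffer is registered once, and the adapter writes into it directly with no kernel copy.

    #include <stdio.h>
    #include <rdma/rdma_cma.h>
    #include <rdma/rdma_verbs.h>

    int main(void)
    {
        struct rdma_addrinfo hints = { 0 }, *res;
        struct ibv_qp_init_attr attr = { 0 };
        struct rdma_cm_id *listen_id, *id;
        struct ibv_mr *mr;
        struct ibv_wc wc;
        char buf[64];

        /* setup: resolve a local address and create a listening endpoint */
        hints.ai_flags = RAI_PASSIVE;
        hints.ai_port_space = RDMA_PS_TCP;
        rdma_getaddrinfo(NULL, "7471", &hints, &res);

        attr.cap.max_send_wr = attr.cap.max_recv_wr = 1;
        attr.cap.max_send_sge = attr.cap.max_recv_sge = 1;
        attr.sq_sig_all = 1;
        rdma_create_ep(&listen_id, res, NULL, &attr);
        rdma_listen(listen_id, 0);             /* like listen() */
        rdma_get_request(listen_id, &id);      /* wait for an rdma_connect */

        /* register user memory and pre-post a receive before accepting,
         * so the adapter can place incoming data straight into buf */
        mr = rdma_reg_msgs(id, buf, sizeof(buf));
        rdma_post_recv(id, NULL, buf, sizeof(buf), mr);
        rdma_accept(id, NULL);                 /* like accept() */

        /* data: wait for the receive completion; the payload is already
         * in buf, with no kernel buffer and no extra copy */
        while (rdma_get_recv_comp(id, &wc) == 0)
            ;
        printf("received: %s\n", buf);

        rdma_disconnect(id);
        return 0;
    }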

1. Protocol stack of InfiniBand
   - HCA (Host Channel Adapter): connects the memory controller to the TCA.
   - TCA (Target Channel Adapter): connects the I/O device to the HCA.

2. Topological structure of InfiniBand [figure]

3. Data transmission
   - Step one: the TCA packs the binary data from the I/O device into a message and divides the message into packet cells, each of which carries an ID for routing.
   - Step two: the HCA gets the packets from the TCA, unpacks the packet cells, re-encapsulates the data, and passes it to the HCA driver, which places it directly into application user space.
   - Step three: the HCA driver integrates the bus driver and the hardware driver, so there is no waiting on the PCI bus and no operating-system data buffer.

In a word, InfiniBand separates the I/O system from the PCI bus by using a protocol.
