InfiniBand is a key standard protocol for storage networking. In China, storage networking got a relatively late start, the related protocols are fairly closed, and technical material on the subject is scarce. As a leading Chinese manufacturer of Twinax and InfiniBand cables, 10Gtek (万兆通) is one of the few suppliers worldwide offering SFP+, QSFP+, and InfiniBand cable assemblies. We have compiled some material on InfiniBand for reference; for related inquiries, please contact info@10gtek.com.
What is InfiniBand?
InfiniBand is a switched fabric communications link used in high-performance computing and enterprise data centers.
Its features include high throughput, low latency, quality of service, and failover, and it is designed to be scalable. The InfiniBand architecture specification defines a connection between processor nodes and high-performance I/O nodes such as storage devices. InfiniBand host channel adapters (HCAs) and network switches are manufactured by Mellanox and Intel (which acquired QLogic's InfiniBand business in January 2012).
InfiniBand forms a superset of the Virtual Interface Architecture (VIA).
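On Linux hosts, the host channel adapters described above are typically exposed to applications through the verbs library (libibverbs). The sketch below is a minimal illustration, assuming a Linux node with libibverbs installed and the program linked with -libverbs; it simply lists the HCAs the library can see and is not part of the InfiniBand specification itself.

    /* Minimal sketch, assuming a Linux node with libibverbs installed.
     * Enumerates the InfiniBand HCAs visible to the verbs library.
     * Assumed build command: gcc list_hcas.c -libverbs */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devices = ibv_get_device_list(&num_devices);
        if (!devices) {
            perror("ibv_get_device_list");
            return 1;
        }

        for (int i = 0; i < num_devices; i++)
            printf("HCA %d: %s (node GUID 0x%016llx, network byte order)\n",
                   i, ibv_get_device_name(devices[i]),
                   (unsigned long long)ibv_get_device_guid(devices[i]));

        ibv_free_device_list(devices);
        return 0;
    }

On a node with a Mellanox or Intel adapter installed, this prints one line per HCA; an empty list usually just means no RDMA-capable device or driver is present.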
Description of InfiniBand:
Like Fibre Channel, PCI Express, Serial ATA, and many other modern interconnects, InfiniBand offers point-to-point bidirectional serial links intended for the connection of processors with high-speed peripherals such as disks. On top of the point-to-point capabilities, InfiniBand also offers multicast operations. It supports several signaling rates and, as with PCI Express, links can be bonded together for additional throughput.
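As a rough illustration of the signaling-rate and bonding figures: a 4X link aggregates four lanes, so at QDR signaling (10 Gbit/s per lane) the raw rate is 40 Gbit/s, of which about 32 Gbit/s carries data after 8b/10b encoding. On Linux, the negotiated width and speed of a port can be read through libibverbs; the sketch below assumes an already-opened struct ibv_context for an HCA, and print_port_info is a hypothetical helper name used only for this example.

    /* Sketch: report an InfiniBand port's negotiated link width and speed.
     * Assumes Linux with libibverbs and an already-opened ibv_context. */
    #include <stdio.h>
    #include <stdint.h>
    #include <infiniband/verbs.h>

    static const char *width_str(uint8_t w)
    {
        /* active_width bit codes defined by the verbs API */
        switch (w) {
        case 1:  return "1X";
        case 2:  return "4X";
        case 4:  return "8X";
        case 8:  return "12X";
        default: return "unknown";
        }
    }

    int print_port_info(struct ibv_context *ctx)
    {
        struct ibv_port_attr attr;

        /* InfiniBand port numbering starts at 1 */
        if (ibv_query_port(ctx, 1, &attr))
            return -1;

        printf("state %d, link width %s, active speed code %u\n",
               attr.state, width_str(attr.active_width), attr.active_speed);
        return 0;
    }

The reported speed code maps to the per-lane signaling rate (for example 1 = SDR at 2.5 Gbit/s, 2 = DDR at 5 Gbit/s, 4 = QDR at 10 Gbit/s), so raw link throughput is the number of lanes times the per-lane rate.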
Applications: InfiniBand has been adopted in enterprise data centers, for example the Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster, in the financial sector, in cloud computing (an InfiniBand-based system won Best of VMworld for Cloud Computing), and more. InfiniBand has mostly been used in high-performance computer cluster applications. A number of the TOP500 supercomputers have used InfiniBand, including the former reigning fastest supercomputer, the IBM Roadrunner.
SGI, LSI, DDN, Oracle, and Rorke Data, among others, have also released storage utilizing InfiniBand "target adapters". These products essentially compete with architectures such as Fibre Channel, SCSI, and other more traditional connectivity methods. Such target-adapter-based disks can become part of the fabric of a given network, in a fashion similar to DEC VMS clustering. The advantage of this configuration is lower latency and higher availability to nodes on the network (because of the fabric nature of the network). In 2009, the Oak Ridge National Laboratory Spider storage system used this type of InfiniBand-attached storage to deliver over 240 gigabytes per second of bandwidth.
What is an InfiniBand Cable?
InfiniBand cables are factory-terminated copper cable assemblies built from high-speed shielded twinaxial cable with a 4X MicroGigaCN™ connector on each end. The cables are designed for insertion into standard 4X receptacles.
InfiniBand cables are generally thicker, bulkier, and heavier than traditional Category 5e and Category 6 UTP cabling. These cables are also sensitive to bend radius, so care should be taken during installation, with attention to proper strain relief, to ensure a reliable connection over time. Passive InfiniBand copper cables do have reach limitations, which constrain the size of the cluster that can be built with copper. Generally, DDR clusters of about 500 nodes can readily be realized with passive copper cables, and even larger clusters can be achieved with optimized layouts.