Published on Jun 05, 2023



Abstract

InfiniBand is a powerful new architecture designed to support I/O connectivity for the Internet infrastructure. InfiniBand is supported by all the major OEM server vendors as a means to move beyond today's server I/O and to establish the next-generation I/O interconnect standard. For the first time, a high-volume, industry-standard I/O interconnect extends the role of traditional "in the box" busses. InfiniBand is unique in providing both an "in the box" backplane solution and an external interconnect, delivering "bandwidth out of the box" and thus providing connectivity in a way previously reserved for traditional networking interconnects.

Description of InfiniBand

This unification of I/O and system area networking requires a new architecture that supports the needs of these two previously separate domains. Underlying this major I/O transition is InfiniBand's ability to support the Internet's requirement for RAS: reliability, availability, and serviceability. This white paper discusses the features and capabilities that demonstrate InfiniBand's superior ability to support RAS relative to the legacy PCI bus and to other proprietary switch-fabric and I/O solutions. Further, it provides an overview of how the InfiniBand architecture supports a comprehensive silicon, software, and system solution.

The comprehensive nature of the architecture is illustrated by an overview of the major sections of the InfiniBand 1.0 specification, whose scope ranges from industry-standard electrical interfaces and mechanical connectors to well-defined software and management interfaces.

Amdahl's Law is one of the fundamental principles of computer science; it states that an efficient system must balance CPU performance, memory bandwidth, and I/O performance. At odds with this is Moore's Law, which has accurately predicted that semiconductor performance roughly doubles every 18 months.
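As a rough illustration of the tension between the two laws (the figures below are hypothetical and not taken from the specification), the following sketch applies the standard Amdahl's Law formula: if only the CPU-bound share of a workload keeps getting faster, the I/O-bound remainder caps the overall gain.

    # Illustrative sketch: Amdahl's Law limits overall speedup to the share of
    # work that actually gets faster. Here the untouched fraction stands for I/O.
    def amdahl_speedup(accelerated_fraction, factor):
        """Overall speedup when 'accelerated_fraction' of the work is sped up by 'factor'."""
        return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / factor)

    # Assume 80% of the workload is CPU-bound and CPU performance doubles every
    # 18 months per Moore's Law. After three doublings (an 8x faster CPU) the
    # system as a whole is still only ~3.3x faster, because I/O has not moved.
    for doublings in range(4):
        cpu_factor = 2 ** doublings
        print(f"CPU {cpu_factor:2d}x faster -> system {amdahl_speedup(0.8, cpu_factor):.2f}x faster")

This is the imbalance the paragraph above describes: CPU and memory keep improving on Moore's Law cadence, while a shared I/O bus does not, so I/O increasingly dominates total time.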

Application Clustering

The Internet today has evolved into a global infrastructure supporting applications such as streaming media, business-to-business solutions, e-commerce, and interactive portal sites. Each of these applications must support an ever-increasing volume of data and an ever-higher demand for reliability. Service providers are in turn under tremendous pressure to support these applications: they must route traffic efficiently through increasingly congested communication lines while retaining the opportunity to charge for different QoS and security levels.

Application Service Providers (ASPs) have arisen to support the outsourcing of e-commerce, e-marketing, and other e-business activities to companies specializing in web-based applications. These ASPs must deliver highly reliable services that can scale dramatically in a short period of time to accommodate the explosive growth of the Internet. The cluster has evolved as the preferred mechanism to support these requirements.

Shared Bus Architecture

In a bussed architecture, all communication shares the same bandwidth: the more ports added to the bus, the less bandwidth is available to each peripheral. Shared busses also have severe electrical, mechanical, and power issues. A parallel bus requires many pins for each connection (64-bit PCI requires 90 pins), which makes board layout tricky and consumes precious printed circuit board (PCB) space. At high bus frequencies, each signal is limited to short traces on the PCB. In a slot-based system with multiple card slots, termination is uncontrolled and can cause problems if not designed properly.
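To make the bandwidth-sharing point concrete, here is a back-of-the-envelope sketch (the peak rating is the commonly quoted figure for 64-bit, 66 MHz PCI and is used purely for illustration; real busses lose further bandwidth to arbitration and protocol overhead):

    # Back-of-the-envelope sketch: on a shared bus every device draws from one
    # fixed pool of bandwidth, so the per-device ceiling falls as devices are added.
    PCI_64_66_PEAK_MB_S = 8 * 66  # 8-byte (64-bit) transfers at 66 MHz, ~528 MB/s peak

    for devices in (1, 2, 4, 8):
        per_device = PCI_64_66_PEAK_MB_S / devices
        print(f"{devices} device(s) on the bus -> at most {per_device:.0f} MB/s each")

A switched fabric such as InfiniBand avoids this ceiling by giving each link its own dedicated bandwidth rather than a shared pool.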