



Principles of SAN Design, Josh Judd. Second Edition (Russian, v1.0). Copyright © 2005-2008 Brocade Communications Systems.


Appendix A: Background Material

Figure 89 - Brocade 4018 Embedded Switch

The 4018 has four outbound ports (to the SAN) and 14 inbound ports (one to each blade server). All ports are non-blocking and uncongested 4Gbit (8Gbit full-duplex) Fibre Channel fabric U_Ports. This board is typically factory installed, since, unlike other blade switches, it is a daughter board for an already existing controller module.

Brocade 4024 Embedded FC Switch

The Brocade 4024 was designed for the HP c-Class BladeSystem. It is powered by the “Goldeneye” ASIC (p 502) and is a single-stage central memory switch. It has a cross-sectional bandwidth sufficient to support all ports full-speed full-duplex at once.

Fabric OS 5.0.5 or later is required.

The Brocade 4024 (Figure 90) has eight outbound ports (to the SAN) and 16 inbound ports (one to each blade server); all ports are non-blocking and uncongested 4Gbit (8Gbit full-duplex) Fibre Channel fabric U_Ports.

This platform was introduced in 2006 by Brocade and HP.

The 4024 is available with software packages ranging from entry level (“12-port configuration”) up to the full enterprise-class Fabric OS 5.x feature set with all 24 ports enabled via Ports-On-Demand.

Figure 90 - Brocade 4024 Embedded Switch

Brocade 4012 Embedded FC Switch

The Brocade 4012 was introduced in 2005 by Brocade and HP. It was the industry's first 4Gbit switch for the embedded blade server market. The Brocade 4012 was specifically designed for the HP p-Class BladeSystem. It is powered by the “Goldeneye” ASIC and has a cross-sectional bandwidth sufficient to support all ports full-speed full-duplex at once. Fabric OS 5.0.1 or later is required. The Brocade 4012 (Figure 91) has four outbound ports (to the SAN) and eight inbound ports (one to each blade server); all outbound ports are non-blocking and uncongested 4Gbit (8Gbit full-duplex), and the inbound ports are all non-blocking and uncongested 2Gbit (4Gbit full-duplex) FC fabric U_Ports.

Send feedback to bookshelf@brocade.com

Figure 91 - Brocade 4012 Embedded Switch

Brocade iSCSI Gateway

The Brocade iSCSI Gateway is an iSCSI-optimized product, designed to connect enterprise FC fabrics to low-cost “edge” servers (Figure 92).

Figure 92 - Brocade iSCSI Gateway

Because this platform is smaller and offers fewer features than the FC4-16IP (p 419), it can be less expensive, and may be adequate for users who desire an entry point into the iSCSI bridging market. However, there are differences between the platforms besides cost and port count which must be considered when making a selection.

The iSCSI Gateway product is not capable of providing FC fabric switching. It has fewer features and lower performance than the bladed version. The Gigabit Ethernet interfaces on the iSCSI product are low-end copper, whereas the FC4-16IP uses more reliable optical ports capable of spanning greater distances. Because the iSCSI Gateway has RJ45 copper GE interfaces on the gateway itself, rather than just on the iSCSI hosts, users need to make sure that their IT networking group provides the correct interface.

This solution should be considered for customers who need a low-cost entry point into the iSCSI bridging market above all else. Otherwise, a native Fibre Channel solution or the FC4-16IP will likely provide better results.

Classic McDATA Platforms

In 2007, Brocade purchased McDATA, one of its long-time rivals. However, this was not the first time that the two companies had enjoyed a partnership-style relationship. In fact, McDATA was one of Brocade's first customers, having purchased intellectual property from Brocade with which to implement its line of FC directors.

Many McDATA installed-base platforms still run Brocade ASICs and code to this day. In addition, some of the companies that McDATA acquired prior to being purchased by Brocade had equivalently long-term partnerships with Brocade. For example, Brocade had a long-standing relationship with CNT in which CNT resold Brocade switches, and Brocade supported CNT for DR and BC solutions requiring certain distance extension methods.

Upon the close of the acquisition, Brocade announced end of sale for a subset of McDATA products in cases where they directly overlapped with Brocade offerings.

For example, the McDATA “pizzabox” edge switches were superseded by the Brocade 5000. They had no value-added features beyond those available on the Brocade switches, so it was not necessary to continue to ship them for much longer after the close of the acquisition.

Brocade announced that it intended to stop shipping these platforms at the end of 2007.

However, Brocade has a firm commitment to McDATA customers, and has not stopped shipping products such as the 140- or 256-port directors. It is expected that these platforms will converge with the Brocade director strategy at some point, but even when that happens they will be supported in Brocade networks via routed connections and compatible software releases for the foreseeable future. Also, Brocade intends to honor the support lifecycle commitments made by McDATA, which means that even products which Brocade no longer intends to actively sell are still being supported. Typically, support continues for five years after end of sale is announced.

This section discusses a few of the more notable classic McDATA products, and indicates how they may be integrated into a Brocade environment.

Brocade Mi10K Director

The Brocade Mi10K offers up to 256 1-, 2-, and 4Gbit FC ports in a 14u chassis. 10Gbit FC interfaces are also available for DR and BC solutions. It offers exceptional performance and availability. In some cases, it can even outperform the Brocade 48000, although in most deployments the 48000 has 50% more usable bandwidth as well as 50% greater rack density, and much lower power and cooling requirements. (The cases in which the Mi10k can outperform the 48000 are those in which little or no flow locality is achievable, and the host-to-storage port ratio is near 1:1. If either of those statements is false, then the 48000 will outperform the Mi10k by a considerable margin.) Brocade is actively selling the Mi10k platform and has no immediate plans to stop doing so.

While this director is built using somewhat limited technology compared to the Brocade 48000, costs quite a bit more, and requires considerably more power and cooling resources, for Classic McDATA customers who already have extensive Mi10k deployments it is still the best option for transparently growing those environments. It is expected that Brocade will converge the applicable portions of the Mi10k feature set with Brocade “native” director technology at some point in the future.

In the meantime, the Mi10k is still being sold and supported, and can co-exist with Brocade-classic platforms using a number of strategies such as compatible firmware, routers, and storage-centric network topologies.

Brocade M6140 Director

The 140-port Brocade M6140 provides a high-availability, high-performance, flexible building block for large SAN deployments. It is a single-stage, 140-port director supporting 1Gbit to 10Gbit FC interfaces.

It can meet the connectivity demands of both open systems and mainframe FICON environments. Brocade is actively selling this platform and has no immediate plans to stop doing so.

While this director is built using somewhat outdated technology compared to the Brocade 48000, for Classic McDATA customers who already have extensive M6140 or 6064 deployments, the M6140 is still the best option for transparently growing those environments.

Brocade M4400 and M4700 Edge Switches

The M4400 has 16x 4Gbit FC ports in a 1u, half-rack-width form factor. The M4700 has 32x 4Gbit FC ports in 1u, and takes a full rack width. These two platforms are still shipping at the time of this writing. Since the Brocade 5000 offers a superset of their capabilities, Brocade will stop selling the M4400 and M4700 at the end of 2007.

Support is expected to continue for five years after the final shipment date.

Brocade M1620 and M2640 Routers

The M1620 has two GE ports for SAN extension, and two FC ports for local E_Port connectivity. The platform can be deployed to support lower-end DR and BC environments. The M2640 has a similar architecture and use case, but with 12x FC ports and 4x GE ports.

These platforms used the now-defunct iFCP protocol for SAN extension. Since no other vendor ever implemented iFCP besides McDATA, and even McDATA had an FCIP roadmap, the iFCP protocol has been considered a dead end by the industry at large for several years. As a result, Brocade intends to stop selling these two platforms at the end of 2007 in favor of extension solutions using the Brocade 7500 router and FR4-18i blade, which support the FCIP protocol.

Brocade Edge M3000 Gateway

The Edge M3000 interconnects Fibre Channel SAN islands over an IP, ATM, or SONET/SDH infrastructure.

Brocade is actively selling this platform and has no immediate plans to stop doing so.

The M3000 enables many cost-effective, enterprise-strength data replication solutions, including both disk mirroring and remote tape backup/restore, to maximize data availability and business continuity. Its any-to-any connectivity and multi-point SAN routing capability provide a flexible storage infrastructure for remote storage applications.

In most cases, the Edge M3000 has been superseded by the Brocade 7500 router and FR4-18i blade. However, in some cases the M3000 provides a superior fit. For example, depending upon the nature of the payload, the M3000 can compress data by up to 20:1, dramatically reducing bandwidth costs. With this compression technology, customers can achieve gigabit-per-second throughput using existing 100Mb Ethernet infrastructure, at a fraction of the cost. It also implements tape pipelining, which can provide a considerable performance benefit for remote tape vaulting solutions.
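The throughput claim above is simple arithmetic: the effective application-level rate is the WAN link rate multiplied by the compression ratio actually achieved. The sketch below is a back-of-the-envelope check, not vendor data; the function name is ours, and applying the full 20:1 ratio to the whole payload is an assumption (real ratios vary with the data).

```python
def effective_throughput_mbps(link_mbps: float, compression_ratio: float) -> float:
    """Application-visible throughput when the WAN link carries compressed data."""
    return link_mbps * compression_ratio

# A 100 Mbit/s Ethernet link carrying 20:1-compressible payload behaves like a
# 2000 Mbit/s link, comfortably above the "gigabit" mark cited in the text.
print(effective_throughput_mbps(100, 20))  # → 2000
```

A more conservative planning exercise would use the worst-case ratio observed for the actual replication payload rather than the 20:1 best case.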

Of course, not all customers have such highly compressible data, and equivalent features enabled or planned for the Brocade 7500 and FR4-18i may provide equivalent benefits, so the market for the M3000 is considered to be limited where compression and tape pipelining in particular are concerned. But the M3000 does have ATM and SONET/SDH connectivity advantages which are likely to keep it in the product portfolio for quite some time to come.

Brocade USD-X Gateway

The USD-X is a high-performance platform that connects and extends mainframe and open-systems storage-related data replication applications for both disk and tape, along with remote channel networking for a wide range of device types. Brocade is actively selling this platform and has no immediate plans to stop doing so.

While it is possible to use this platform in pure open-systems environments, the primary current use cases for this product are mixed and pure mainframe environments, as other products solve the extension problem more cost-effectively for most open-systems customers.

This multi-protocol gateway and extension platform interconnects host-to-storage and storage-to-storage systems across the enterprise, regardless of distance, to create a high-capacity, high-performance storage network using the latest high-speed interfaces. It supports Fibre Channel, FICON™, ESCON, Bus and Tag, or mixed-environment systems. The intermediate WAN may be ATM, IP, DS3, or many other technologies.

Brocade Installed-Base Platforms

One of the advantages that Brocade has in the SAN marketplace is its large installed base. Brocade has millions of ports running in mission-critical production environments around the world, representing literally billions of hours of production operation to date. Brocade has a policy of prioritizing backwards compatibility with the installed base for new products. This allows customers buying Brocade products to get a long useful life out of them, to achieve high ROI before needing to upgrade.

This subsection describes many of the platforms in the Brocade installed base. SAN designers may encounter any of these products, and must know their capabilities when designing solutions that involve them.

(Some restrictions apply to backwards compatibility, of course. For example, it may be necessary to run certain firmware versions and to design solutions within scalability constraints to fit within a vendor support matrix. Also, it is not possible to continue support for an installed-base platform literally forever. The typical case is to continue support for five years after the last sale date of a product line.)

SilkWorm 1xx0 FC Switches

The first platform group that Brocade shipped was the SilkWorm 1xx0 series (Figure 93 and Figure 94). Shipped first in early 1997, this design was simply called the “SilkWorm switch,” as there were no other Brocade platforms to differentiate between. Over time, other platforms were added. The first 16-port switch became known as the “SilkWorm I,” with its successor being the “SilkWorm II.” In early 1998, a lower-cost 8-port “SilkWorm Express” platform was shipped based on the same architecture, but with half of the ports removed. By the time that the SilkWorm 2000 series shipped, Brocade had enough platforms that the first-generation switches became known as the “SilkWorm 1xx0 series.”

Figure 93 - SilkWorm II (1600) FC Fabric Switch

Figure 94 - SilkWorm Express (800) FC Fabric Switch

These switches could be configured at the time of manufacture to support either FC-AL or FC fabric devices (Flannel or Stitch ASICs respectively, p 503) using combinations of 2-port daughter cards (Figure 95).

Figure 95 - SilkWorm 1xx0 Daughter Card

All SilkWorm 1xx0 switches ran Fabric OS 1.x. The product line consisted of 8- and 16-port FC fabric switches, with all ports running at 1Gbit. (8-port = SilkWorm Express and SilkWorm 800; 16-port = SilkWorm II and SilkWorm 1600.) Ports could accept either optical or copper GBICs. Management tasks could be performed using buttons on the front panel on most models. All models had RJ45 IP/Ethernet and DB9 serial interfaces.

This Brocade platform group is considered to be entirely obsolete. The 1xx0 switches are simply not compatible with many of the new features released by Brocade over the past few years, and the hardware predated some of the FC standards. Brocade recommends that SilkWorm 1xx0 series switches be upgraded to newer Brocade products and technologies in all cases.

SilkWorm 2xx0 FC Switches

The SilkWorm 2xx0 series consisted of several platforms, all using the Loom ASIC (p 504) and running Fabric OS 2.x. The first platforms in this group, the SilkWorm 2400 and 2800, shipped in the middle of 1999. At the time of this writing, the SilkWorm 2xx0 platform group has reached the end of its supportable life.

Most OEMs have declared these switches to be unsupported, and the rest are expected to do so by the end of the year. Users should consider 2xx0 switches to be obsolete, and should plan for upgrading in the near future.

Figure 96 through Figure 99 show the most popular 2xx0 series platforms. All of these products operated at 1Gbit Fibre Channel, and had a single-stage central memory architecture for non-blocking and uncongested operation. All of the switches in this series had an IP/Ethernet management port. Most had a DB9 serial port for initial configuration, emergency access, and out-of-band management, with the 2800 being the exception to that rule. (It had a push-button control panel and screen for initial configuration.)

The 2xx0 series has been superseded by other Brocade products. However, these switches are still widely deployed. Brocade has found that the number of SilkWorm 2800 platforms still in production is close to the number that originally shipped: something on the order of a million ports in production. As a result, Brocade anticipates that many customers will need to perform 1Gbit to 4Gbit migrations over the next year, now that these switches have reached the end of their lifecycle.

SilkWorm 20x0

The entry-level SilkWorm 20x0 (Figure 96) was a 1u 8-port switch, with seven fixed ports (GLMs) and one port with removable media (GBIC).

The platform could be purchased in three varieties, depending on the software keys that were loaded at the factory. The third digit in the platform product ID (20x0) indicated these software options, not any difference in hardware. The 2010 came with support only for QuickLoop, so only FL_Ports could be attached, not F_Port fabric devices or E_Ports. The 2040 supported fabric nodes but only one E_Port, and the 2050 had unlimited fabric support. Both the 2010 and 2040 provided customers with complete investment protection, as either could be upgraded to the full-fabric 2050 with license keys available through all channel partners. Power input was provided by a single fixed supply, and the fans were fixed as well, so the entire platform was considered a FRU.

Figure 96 - SilkWorm 2010/2040/2050

SilkWorm 22x0

The 1.5u 16-port SilkWorm 22x0 (Figure 97) brought higher rack density to the entry-level switch market.

Figure 97 - SilkWorm 2210/2240/2250

It had a single fixed power supply, like the 20x0, and could be purchased with the same three software license variations. Also like the 20x0, the entire platform was considered a single FRU. However, all 16 media on the 22x0 were removable GBICs.

This platform was also used as the basic building block for the SilkWorm 6400, which consisted of a sheet-metal enclosure containing six SilkWorm 2250 switches, configured and wired together at the factory to form a Core/Edge fabric, manageable as a single platform. That arrangement yielded sixty-four usable ports.
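The sixty-four-port figure follows from ISL overhead: in a Core/Edge build, every inter-switch link consumes a port on both of the switches it connects. The sketch below uses an illustrative split of two core and four edge switches with two ISLs per edge-core pair; this is our assumption for the arithmetic, not the 6400's documented wiring, but it reproduces the usable-port count.

```python
def usable_ports(cores: int, edges: int, ports_per_switch: int, isls_per_pair: int) -> int:
    """Device-facing ports left in a Core/Edge fabric after subtracting ISL ports.

    Each edge switch spends isls_per_pair ports per core switch;
    each core switch spends isls_per_pair ports per edge switch.
    """
    edge_free = edges * (ports_per_switch - cores * isls_per_pair)
    core_free = cores * (ports_per_switch - edges * isls_per_pair)
    return edge_free + core_free

# Six 16-port SilkWorm 2250s: 2 cores + 4 edges, 2 ISLs per edge-core pair
print(usable_ports(cores=2, edges=4, ports_per_switch=16, isls_per_pair=2))  # → 64
```

The same function shows the design trade-off: adding ISLs per pair buys fabric bandwidth at the direct cost of usable ports.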

SilkWorm 2400

The SilkWorm 2400 (Figure 98) was targeted at the midrange segment. Like the 20x0, it was an 8-port switch, but had redundant hot-swappable power supplies and fans.

Figure 98 - SilkWorm 2400

SilkWorm 2800

The SilkWorm 2800 (Figure 99) was a 16-port switch like the 22x0, but had enterprise-class RAS features like the 2400. This was by far the most popular of the 2xx0 series. In many environments, the number of 2800 switches installed today still rivals the number of later platforms. This was the only platform in the series that did not have an externally-accessible serial port. Instead, the initial switch configuration could be performed using buttons and a screen built into the cable-side panel.

Figure 99 - SilkWorm 2800

SilkWorm 3200 / 3800 Switches

In 2001, the SilkWorm 2xx0 product family was superseded by the SilkWorm 3200 and 3800 switches. They were both powered by the Bloom ASIC (p 505), which increased the port speed to 2Gbit and added a range of new features including trunking, advanced performance monitoring, and more advanced zoning. Both platforms had IP/Ethernet and DB9 serial management interfaces, and both ran Fabric OS 3.x. Another major difference between these and prior Brocade platforms was that the SilkWorm 3200 and 3800 used SFPs, whereas all prior platforms had used GBICs.

At the time of this writing, the SilkWorm 3200 has been superseded by the SilkWorm 3250 (p 436), and the SilkWorm 3800 has been largely superseded by the SilkWorm 3850. (The SilkWorm 3800 is still shipping, but most users are expected to transition to the 3850 in the near future because of its many improvements.)

SilkWorm 3200

This platform had eight 2Gbit FC ports in a 1u enclosure. It was targeted at the entry market. Like its predecessor, the SilkWorm 20x0, this switch had a single fixed power supply and fixed fans: the entire platform was considered a FRU.

Figure 101 - SilkWorm Коммутаторы SilkWorm 3250 / 3850 FC These platforms represented th e entry level of the Fi bre Channel fabric switchi ng market. They each had non removable power supplies. Both were powered by the “Bloom-II” ASIC (p 503). The ASIC arrangem ent in both platforms yielded a single-stage central m emory switch.

Figure 101 - SilkWorm 3800

SilkWorm 3250 / 3850 FC Switches

These platforms represented the entry level of the Fibre Channel fabric switching market. They each had non-removable power supplies. Both were powered by the “Bloom-II” ASIC (p 503). The ASIC arrangement in both platforms yielded a single-stage central memory switch. They both had a cross-sectional bandwidth sufficient to support all ports full-speed full-duplex at once. Fabric OS 4.2 or later was required. The SilkWorm 3250 (Figure 102) had eight non-blocking and uncongested 2Gbit (4Gbit full-duplex) Fibre Channel fabric U_Ports. The SilkWorm 3850 (Figure 103) had sixteen ports.

(There has been debate in the industry about the definition of “blocking.” When Brocade uses the word, it refers to Head of Line Blocking (HoLB). For example, the SilkWorm 24000 is not subject to HoLB because it uses virtual channels on the backplane. It is therefore “non-blocking.” All ports can run full-speed full-duplex at the same time, which is “uncongested operation.”)
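The “uncongested” claim can be restated as arithmetic: the switch's cross-sectional bandwidth must cover every port sending and receiving at line rate simultaneously. The sketch below derives that minimum from the port counts and rates quoted in the text; the function name and the derived totals are ours, not quoted specifications.

```python
def required_cross_section_gbps(ports: int, rate_gbps: int) -> int:
    """Minimum cross-sectional bandwidth for all ports at full speed, full duplex.

    The factor of 2 accounts for full duplex: each port can transmit and
    receive at line rate at the same time."""
    return ports * rate_gbps * 2

print(required_cross_section_gbps(8, 2))   # SilkWorm 3250, 8x 2Gbit  → 32
print(required_cross_section_gbps(16, 2))  # SilkWorm 3850, 16x 2Gbit → 64
```

A single-stage central memory design meets this bound trivially as long as the shared memory can absorb the aggregate rate; multistage designs must provision internal links to the same total.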

These two platforms were introduced in 2004 to replace the popular SilkWorm 3200 and 3800 switches.

Both were available with software packages ranging from the lowest entry-level (“Value Line”) package up to the full enterprise-class Fabric OS 4.x feature set. (See “Brocade Software” on p 444.) This allowed the platforms to be purchased with the right balance of cost vs. features for a wide range of customers, from small businesses to major enterprises. Regardless of licensed options, both switches had enterprise features such as hot (non-disruptive) code load and activation (HCL/A) and the Fabric OS CLI.

Figure 102 - SilkWorm 3250

Figure 103 - SilkWorm 3850

SilkWorm 3900 and 12000

The SilkWorm 3900 (Figure 104) delivered 32 ports of 2Gbit Fibre Channel in a 1.5u rack-mountable enclosure.

U_Port interfaces automatically detect FC topology to become F_Port, FL_Port, or E_Port as needed.

Figure 104 - SilkWorm 3900

First shipped in 2002, this platform was targeted at the midrange SAN market, but had many features appropriate for the enterprise market as well. In many ways, the SilkWorm 3900 was more like a small director than like a switch. Like the SilkWorm 12000, this platform had an “XY” topology CCMA multistage architecture. (See “Multistage Internal Architectures” on page 511 for more information.) Like the 12000, it supported FICON (a mainframe protocol), had redundant and hot-swappable power and cooling FRUs, and ran Fabric OS 4.x with hot code load and activation.

Typical usage cases for the 3900 included stand-alone applications for small fabrics, edge deployments in small to large Core/Edge (CE) fabrics, and core deployments in small to medium CE fabrics.

The SilkWorm 12000 (Figure 105) was Brocade's first fully-modular 10-slot enterprise-class director. This system first shipped in 2002.

Figure 105 - SilkWorm 12000 Director

The chassis was rack-mountable in 14u, and could be populated with up to eight port blades and two CPs.

Overall, the chassis could be configured with up to 128 2Gbit Fibre Channel ports. Each blade was hot-pluggable, as were the fans and power supplies. The redundant CPs ran Fabric OS 4.x and supported HCL/A. Typical usage cases for the 12000 included stand-alone applications, edge deployments in large CE fabrics, and core deployments in medium to large CE fabrics.

The backplane interconnected the port blades with each other to form two separate 64-port domains. The interconnection employed an “XY” topology CCMA multistage architecture, much like the SilkWorm 3900.

The two 64-port domains were both controlled by the same redundant CP blades, and resided in the same chassis, but had no internal data path between them. They could be used separately in redundant fabrics, or could be used together in the same fabric by connecting them with ISLs.

At the time of this writing, the SilkWorm 3900 has been superseded by the SilkWorm 4100 (p 400), and the SilkWorm 12000 has been superseded, first by the SilkWorm 24000 (p 403), and then by the 48000 (p 403). For the foreseeable future, the older platforms will continue to be supported in networks with more advanced platforms. In addition, the SilkWorm 12000 chassis can be upgraded in the field to become a SilkWorm 24000 or 48000.

SilkWorm 24000 Director

The SilkWorm 24000 (Figure 106) was a fully-modular 10-slot enterprise-class director, and could be populated with up to eight port blades and two Control Processors (CPs). This platform first shipped in early 2004. It could be configured from 32 to 128 ports in a single domain using 16-port 2Gbit Fibre Channel blades.

The platform had industry-leading performance and high-availability characteristics. Each blade was hot-pluggable, as were the fans and power supplies. The chassis had redundant Control Processors (CPs) with redundant active-active uncongested and non-blocking switching elements, which ran Fabric OS 4.2 or higher and supported HCL/A.

(Of course, not all OEMs support this upgrade procedure.)

Figure 106 - SilkWorm 24000 Director

The SilkWorm 24000 was an evolution of the SilkWorm 12000 design. It could use the same chassis as the 12000: the power supplies, fans, backplane, and sheet-metal enclosure were all compatible. As a result, it was possible to upgrade an existing 12000 chassis to the 24000 in the field by replacing just the CP and port blades. Compare Figure 106 with Figure 105 (p 439) and the similarity will be apparent. The chassis can also support 16-port 4Gbit FC Brocade 48000 blades in some combinations with existing SilkWorm 24000 blades.

Even though the chassis were mechanically compatible, there were differences between the SilkWorm 24000 and the SilkWorm 12000.

Some of the differences were minor. For example, the 24000 chassis and blade set had an improved rail glide system that made blade insertion and extraction easier. Larger ejector levers helped by providing greater mechanical advantage. The 24000 CP blades had a blue LED to indicate which CP was active.

There were also more important differences in the underlying technology. For example, the 24000 used the “Bloom-II” ASIC, while the 12000 used the original “Bloom” chipset. (See “Bloom and Bloom-II,” p 505.) The overall chassis power consumption and cooling requirements were lowered by more than 60%, with the result that ongoing operational costs were reduced and MTBF increased by more than 25%. Further improvements in MTBF were achieved through component integration: fewer components means less frequent failures. Performance was improved by changing the multistage chip layout from an “XY” topology to a “CE” arrangement. (See “Multistage Internal Architectures” on page 511 for more information.) This allowed the 24000 to present all of its ports in a single internally-connected domain. The 12000, in contrast, presented two 64-port domains and needed external ISLs if traffic was required to flow between the domains.

The SilkWorm 24000 Fibre Channel Director provided the following features:

- 128 ports per chassis in 16-port increments
- Port blades are 1Gbit/2Gbit Fibre Channel
- Management access via Ethernet and serial ports
- High-availability features include hot-swappable FRUs for port blades, redundant power supplies and fans, and redundant CP blades
- Extensive diagnostics and monitoring for high Reliability, Availability, and Serviceability (RAS)
- Non-disruptive software upgrades (HCL/A)
- 14u rack-mountable enclosure allows up to 384 ports in a single rack
- Non-blocking architecture allows all 128 ports to operate at line rate in full-duplex mode
- Forward and backward compatibility within fabrics with all Brocade 2000-series and later switches
- SilkWorm 12000s are upgradeable to 24000s
- Small Form-Factor Pluggable (SFP) optical transceivers allow any combination of supported Short and Long Wavelength Laser media (SWL, LWL, ELWL), as well as CWDM media
- Cables, blades, and power supplies are serviced from the cable side, and fans from the non-cable side
- Air is pulled into the non-cable side of the chassis and exits cable-side above the port and CP blades and through the power supplies to the right

Embedded Products

SilkWorm 3016 Embedded FC Switch

The SilkWorm 3016 was specifically designed for the IBM eServer BladeCenter. It was powered by the “Bloom-II” ASIC. It had a cross-sectional bandwidth sufficient to support all ports full-speed full-duplex at once.

The SilkWorm 3016 (Figure 107) had two outbound ports (i.e., facing the SAN) and 14 inbound ports (one to each blade server); all were non-blocking and uncongested 2Gbit (4Gbit full-duplex) Fibre Channel fabric U_Ports. This platform was introduced in 2004 by Brocade and IBM.

The 3016 was available with software packages ranging from the entry-level (“Value Line”) package up to the full enterprise-class Fabric OS 4.x feature set.

Figure 107 - SilkWorm 3016 Embedded Switch

SilkWorm 3014 Embedded FC Switch

The SilkWorm 3014 was specifically designed for the Dell PowerEdge blade server. It was powered by the “Bloom-II” ASIC. It had a cross-sectional bandwidth sufficient to support all ports full-speed full-duplex at once.

The SilkWorm 3014 (Figure 108) had four outbound ports (to the SAN) and 10 inbound ports (one to each blade server); all were non-blocking and uncongested 2Gbit (4Gbit full-duplex) Fibre Channel fabric U_Ports.

Figure 108 - SilkWorm 3014 Embedded Switch

This platform was introduced in late 2004 by Brocade and Dell. The 3014 was available with software packages ranging from the entry-level (“Value Line”) package up to the full enterprise-class Fabric OS 4.x feature set.

Licensed Brocade Features

Brocade adds value in its products with both hardware (i.e., ASICs) and software. This subsection describes some of the most popular software features Brocade offers. It covers only features developed internally by Brocade Engineering; it does not, for example, discuss third-party management tools which use one of the supported APIs.

Модель лицензирования Brocade Some features are basic com ponents of the operating system and platform ASICs, such as support for nodes us ing N_Port. (I.e. support for F_Port on a switch.) These generally do not require purch asing a license key, but do add value. Som e Brocade com petitors (i.e. lo op-switch Приложение A Базовые материалы vendors) do not offer products that support F_Port, so even though it seem s like this should be a basic building block of all switches, it is worth calling it out exp licitly to show its value.

Other features, such as the FC-FC Routing Service, require much higher-value enhancements to both ASIC and OS support. Routing features and more advanced fabric service options require the purchase and installation of license keys to enable them. On all platforms, the CLI command licenseShow can be used to determine which keys are installed. If a desired feature is missing, work with the appropriate sales channel to purchase the key, and then use the licenseAdd command to install it on the switch or router.

Fabric Node Connectivity (F_Port)

At the time of this writing, most Fibre Channel nodes (e.g. host and storage devices) use the N_Port topology.

“Node Port” is a set of standards-defined behaviors that allow a node to access a fabric and its services most cleanly. In order to connect an N_Port to a switch, the switch must support the corresponding “Fabric Port,” or F_Port, topology as defined in the standards. Every Brocade platform ever shipped supports F_Port, although on a few of the older platforms (e.g. the SilkWorm 2010) this feature required purchasing a separate license key. This is the preferred method for connecting nodes into fabrics.

(Note: there are also equivalent GUI commands in WEBTOOLS and Fabric Manager. CLI commands are generally used for examples because all platforms include the CLI as part of the base OS, while some do not include the GUI tools.)

Loop Node Connectivity (FL_Port) (QL/FA)

Early in the evolution of Fibre Channel, there was debate about whether or not fabrics were necessary. Some vendors believed that FC-AL hubs and “loop switches” provided sufficient connectivity. The argument went something like, “How many people will ever need more than a dozen or so devices in a SAN? Nobody!” It turned out that the real answer was, “Just about everybody,” so the vastly more scalable and flexible fabric switches rapidly eroded the hub market.

To accomplish this market transition gracefully, it was necessary for nodes designed for FC-AL hubs to attach to fabric switches. The Fibre Channel standards defined a switch port type to accomplish this: the FL_Port (“Fabric Loop” port). This allowed, for example, HBA drivers written for hubs to present NL_Ports (“Node Loop” ports) and plug into switch FL_Ports. Brocade developed the Flannel ASIC (p503) to address this need. Platforms using Flannel needed to be configured with loop ports at the factory, but in subsequent products with more advanced ASICs, any port could support loop nodes. There are some important variables that affect how loop devices connect to a fabric:

- Does the loop device know how to talk to the name server, and does it know how to address devices using all three bytes of the fabric “PID” address? (Public vs. private loop.)
- If the device uses private loop, is it an initiator or a target? Private loop initiators need more help to use fabric services, i.e. the name server.
- Is there just one loop device directly attached to a switch port (like an NL_Port HBA), or are there many loop devices on that port (like a JBOD)?

(Note: throughout the remainder of this subsection, the obsolete SilkWorm 1xx series will not be considered. E.g. statements about “all platforms” may actually refer to “all platforms except the SilkWorm 1xx0.”)

Public loop support for a directly attached NL_Port is the easiest case for a switch to handle. The switch ASIC needs to be able to support FC-AL “loop primitives,” which is the protocol used for loop initialization and control. All ports on all Brocade platforms today have the hardware and software to support this mode of operation as part of the base OS.

Public loop support for multiple nodes on a single switch port is slightly more complex. At the time of this writing, all platforms except the AP7420 Multiprotocol Router support this mode as part of the base OS. The major application for this is JBODs: it is not currently possible to attach a JBOD directly to the AP7420, but JBODs can coexist in a fabric or Meta SAN with that platform.

Private loop storage devices require still more advanced ASIC functionality known as “phantom logic,” and corresponding software enhancements. This allows Network Address Translation (NAT) between the one-byte private loop and three-byte fabric address spaces.

This needs ASIC hardware support because every frame needs to be rewritten without a performance penalty. Trying to implement multi-gigabit NAT in software would not be practical. Brocade began to provide support for private loops with the Flannel ASIC.

Private loop technology has been declining rapidly, so Brocade has not prioritized phantom logic for future platforms. All ASICs through Bloom-II (p505) support this, but subsequent ASICs like FiGeRo (p510) and Condor (p506) do not. Platforms like the Multiprotocol Router and the Brocade 4100 cannot accept direct private storage attachment, but can co-exist seamlessly in networks with private storage attached to Loom, Bloom, and Bloom-II switches. Switches with private storage support include it as part of the base OS.

Private loop initiators (hosts) are the hardest case to solve. Not only do they require loop primitives and phantom logic, but they also require much more advanced fabric services enhancements.

An initiator normally queries the fabric name server for targets, and then sends IO to them. With public initiators talking to private targets, a switch can “notice” the IO from the initiator and automatically set up phantom logic NAT entries as needed. Private initiators do not know how to talk to the name server;

they learn about available targets by probing their loop. They cannot send IO to a target until after NAT has been set up, so the automatic learning mechanism does not work.

The “Quick Loop / Fabric Assist” optionally licensed feature set is designed to address this need. Users explicitly define which devices a private host needs to access using zoning, and the switch creates the required NAT entries on that basis. QL/FA is supported as an optionally licensed feature on the SilkWorm 2xx0 series and the SilkWorm 3200/3800 switches, i.e. all Fabric OS 2.x and 3.x platforms. QL/FA only applies to private initiators, not to any other usage case, and private initiators are the most rapidly declining segment of the SAN market. As a result, Brocade has not prioritized porting the feature to 4.x or beyond, except to support QL/FA on 2.x/3.x switches in the same fabric as 4.x switches. At the time of this writing, even that level of QL/FA support is essentially obsolete.

Multi-Switch Fabrics (E_Port)

The E_Port (Expansion Port) protocol allows switches to be interconnected to form a larger fabric: a single region of connectivity built from multiple discrete switching components. This feature allows SAN solutions to be built using a “pay as you grow” approach, adding switches to a fabric as needed. It also allows much more flexible network designs, including support for geographical separation of components. Without this feature, the maximum scalability of a connectivity model would be limited to the number of ports on a single switch, and the maximum geographical radius of a network would be the distance supported by a node connected to that switch.

Today, the ability to network switches together to form a fabric seems commonplace, but when Brocade started selling switches for production use in 1997, it was a key differentiator. Most competitors could not do this at all, and the few that had the feature had many configuration constraints. Brocade was not just a pioneer in this space; Brocade was the pioneer. This is reflected in the fact that FSPF was authored and given to the standards bodies by Brocade. Without this and other Brocade-authored protocols, it would not be possible, much less commonplace, to form multi-switch fabrics today.

(Note: this also requires the interaction of other fabric services, such as the name server and zoning database processes, but Brocade keys the feature off of E_Port.)

(FSPF is the protocol used by all vendors to determine topology and path selection.)

Virtual Channels

A unique feature available in every Brocade 2Gbit and 4Gbit fabric switch, Brocade Virtual Channel (VC) technology represents an important breakthrough in the design of large SANs. (Actually, even the SilkWorm 1xxx series of switches had a form of VC support, but it was quite different and not particularly relevant to SAN design today. It is interesting to note that Brocade has already gone through four generations of VC development: it is a “well-baked” feature.) To ensure reliable ISL communications, VC technology logically partitions bandwidth within each ISL into many different virtual channels, as shown in Figure 109, and prioritizes traffic to optimize performance and prevent head-of-line blocking.

The Fabric Operating System automatically manages VC configuration, eliminating the need to manually tune links for performance. This technology also works in conjunction with trunking to improve the efficiency of switch-to-switch communications and simplify fabric design.

Figure 109 - VCs Partition ISLs into Logical Sub-Channels. (The figure shows multiple logical Virtual Channels mapped onto a single physical ISL or trunk group between two E_Ports on 2Gbit switches. VCs provide separate queues for different traffic streams, which prevents head-of-line blocking (HoLB) and allows QoS between different classes of traffic.)

In 2Gbit Brocade products, there were a total of eight VCs (0-7) assigned to any link. This could be internal links, ISLs, or trunk groups. Each VC had its own independent flow-control mechanisms and buffering scheme.

In Brocade's 4Gbit products, the Virtual Channel infrastructure has been greatly enhanced, and some of the automatic assignment mechanisms have been improved.

There are now 17 VCs assigned to any given internal link: one for class F traffic and sixteen for data. Each data VC now has 8 sub-lists, or sub-Virtual Channels; each of those has its own credit mechanism and independent flow control. SID/DID pairs are assigned in a round-robin fashion across all the VCs, but with these new enhancements, a better distribution is made. Of course, when connecting 4Gbit switches together with 2Gbit switches, the ISLs and trunk groups still use 8 VCs. This is done to avoid potential backwards-compatibility issues.
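The round-robin idea can be sketched in a few lines. This is purely illustrative: the actual SID/DID-to-VC mapping is implemented in the ASIC and is not public, so the function below is an assumption, not Brocade's algorithm.

```python
# Illustrative sketch only: the real Brocade ASIC assignment logic is not
# public. Model round-robin assignment of SID/DID flows to data VCs.

DATA_VCS_4G = 16  # 4Gbit generation: 16 data VCs (plus one for class F)
DATA_VCS_2G = 8   # 2Gbit generation, also used on mixed 2G/4G links

def assign_vcs(flows, num_vcs):
    """Assign each (SID, DID) flow to a data VC in round-robin order."""
    table = {}
    for i, flow in enumerate(flows):
        table[flow] = i % num_vcs
    return table

flows = [(0x010100, 0x020200), (0x010200, 0x020200), (0x010300, 0x020300)]
print(assign_vcs(flows, DATA_VCS_4G))  # flows spread across VCs 0, 1, 2
```

Each VC (and, on 4Gbit products, each sub-Virtual Channel) then runs its own credit-based flow control independently, which is what prevents one congested flow from blocking the others.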

In the near future, Brocade will be releasing a QoS feature which allows 4Gbit switches to use the increased VC capabilities to prioritize some flows above others in congested networks. As a practical matter, this feature is expected to apply almost exclusively to long-distance connections in DR or BC solutions, since, for local-distance ISLs and IFLs, it is generally better to avoid congestion in the first place than to manage which devices are most harmed by congestion.

Buffer Credits

Buffer-to-buffer (BB) credits are used by switch ports to determine how many frames can be sent to the recipient port, thus preventing a source device from sending more frames than can be received. The BB credit model is the standard method of controlling the flow of traffic within a Fibre Channel fabric.

Like VCs, BB credits are handled automatically by the Fabric Operating System in most cases. For extremely long-distance links, it may be desirable to manually increase the number of credits on a port to maximize performance. (This may require an Extended Fabrics license.) In the context of host or storage connections to a switch, the number of BB credits on a link will be negotiated between the device and the switch at initialization time. For ISL connections, each Virtual Channel will receive its own share of BB credits. In this case, credits are handled the same way whether the port is part of a trunk group or operating independently.
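For long links, the credit requirement can be estimated from the bandwidth-delay product. The sketch below is a back-of-the-envelope model, assuming roughly 5 microseconds per kilometer of propagation delay in fiber and full-size frames; it is not Brocade's exact Extended Fabrics tuning formula.

```python
import math

def bb_credits_needed(distance_km, rate_gbps, frame_bytes=2148):
    """Rough estimate of BB credits needed to keep a long link full.

    Assumes ~5 us/km one-way propagation delay and full-size frames;
    a back-of-the-envelope model, not Brocade's tuning formula.
    """
    tx_time = frame_bytes * 8 / (rate_gbps * 1e9)  # frame serialization, s
    round_trip = 2 * distance_km * 5e-6            # propagation RTT, s
    return math.ceil(round_trip / tx_time)

print(bb_credits_needed(100, 4.0))  # ~233 credits for 100 km at 4Gbit
```

Note how the requirement scales with link speed: doubling the data rate halves the frame serialization time, so roughly twice as many credits are needed to keep the same distance full.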

This topic is discussed in more detail under “” on page 346.

Access Gateway

Access Gateway uses the N_Port ID Virtualization (NPIV) standard to present blade server FC connections as logical nodes to fabrics. This eliminates entire categories of traditional heterogeneous switch-to-switch interoperability challenges. Attaching through NPIV-enabled switches and directors, Access Gateway seamlessly connects server blades to Brocade, classic McDATA, or even other vendors' SAN fabrics.

Traditionally, when blade server chassis have been connected to SANs, each enclosure would add one or two more switch domains to the fabric, which had a potentially disastrous effect on scalability. Increasing the number of blade enclosures also meant additional switch domains to manage, increasing the day-to-day SAN management burden. These additional domains created complexity and could sometimes disrupt fabric operations during the deployment process. Finally, fabrics with large numbers of switch domains created firmware version compatibility management challenges: sometimes it was impossible to find a firmware version which was supported by all devices in the fabric.

To address these challenges, Access Gateway presents blade server NPIV connections rather than switch domains to the fabric. This means that Access Gateway can support a much larger fabric, and that switch firmware on the Access Gateway does not interact with the other switches in the fabric as a switch. Rather, it interacts as a node, which greatly reduces firmware dependencies.

Unlike FC pass-through solutions, it can do all of this without substantially increasing the number of switch ports required.

To enhance availability, Access Gateway can automatically and dynamically fail over the preferred I/O connectivity path in case one or more fabric connections fails. This approach helps ensure that I/O operations finish to completion, even during link failures. Moreover, Access Gateway can automatically fail back to the preferred fabric link after the connection is restored, helping to maximize bandwidth utilization.

Value Line Software

The Value Line software license packages reduce the cost of acquiring and deploying an entry-level SAN, while allowing software-key upgrades to full enterprise-class functionality. Designed for small and medium-sized organizations, the Value Line integrates innovative hardware and software features that make it easy to deploy, manage, and integrate into a wide range of IT environments. These powerful yet flexible capabilities enable organizations to start small and grow their storage networks in a scalable, non-disruptive, and efficient manner.

This is especially beneficial for organizations that need to upgrade their existing SAN environment with minimal disruption. In addition, the Value Line packages simplify administration through embedded Brocade WEBTOOLS software.

The main thing that SAN designers need to be aware of is that a Value Line switch might not have full fabric capabilities. In exchange for a substantially reduced acquisition cost, the buyer of a Value Line switch would give up features such as fabric scalability (number of domains supported) or the number of E_Ports allowed. When deploying a Value Line switch into a larger solution, it might therefore be necessary to upgrade its license key to a full fabric key.

Virtual Fabrics / Administrative Domains

Virtual Fabrics allows the partitioning of one physical fabric into multiple logical fabrics that can be managed by separate Admin Domain administrators. Virtual Fabrics are characterized by hierarchical management; granular and flexible security; and fast, easy reconfiguration to adapt to new infrastructure requirements. They allow IT administrators to manage separate corporate functions separately, use different permission levels for SAN administrators, provide storage for teams in remote offices without compromising local SAN security, and increase levels of data and fault isolation without increasing SAN cost and complexity. Once Fabric OS 5.2.0 or later is installed in the SAN, Virtual Fabrics can be implemented on the fly, with no physical topology changes and no disruption.

The Administrative Domains feature is the key enabler for Virtual Fabrics technology. Admin Domains create partitions in the fabric. Admin Domain membership allows device resources in a fabric to be grouped together into separately managed logical groups. For example, a SAN administrator might have the Admin role within one or more Admin Domains, but be restricted to the Zone Admin role for other Admin Domains.

Although they are part of the same physical fabric, Virtual Fabrics are separate logical entities because they are isolated from each other via several mechanisms, such as:

- Data isolation: Although data can pass from one Virtual Fabric to another using device sharing, and links can be shared among multiple Virtual Fabrics, no data can be unintentionally transferred, even when Virtual Fabrics are not zoned.
- Control isolation: Within Virtual Fabrics, fabric services are independent and are secured from unwanted interaction with other Virtual Fabric services. This includes zoning, RSCNs, and so on.
- Management isolation: Switches in a Virtual Fabric provide independent management partitions. If a switch is a member of more than one Virtual Fabric, it has multiple, independent management entities. Administrators are authenticated to manage one or more Virtual Fabrics, but they cannot access management objects in other, unauthorized Virtual Fabrics.
- Fault isolation: Data, control, or management failures in one Virtual Fabric will not impact any other Virtual Fabric's services.

Admin Domain administrators can manage one or more Admin Domains, while Virtual Fabric administrators have administrative permissions on all Admin Domains.

Separate Admin Domains can be created for different operating systems (FICON®, Z-Series, and open systems FCP, for example).

Devices can easily be shared among different Admin Domains without any special routing requirements.

Admin Domain administrators can configure and manage their own zones; they can configure all rights and devices as long as they have the Admin role for that particular Admin Domain. The Admin Domain feature is backwards compatible with the millions of Brocade SAN ports already deployed, and no new hardware is required.

Implementing Virtual Fabrics is straightforward, and fits into existing SAN management models. The management and best practices used today in a pre-Fabric OS 5.2.0 physical fabric with zoning can be implemented in the same way in a Fabric OS 5.2.0 fabric with Admin Domains and zoning.

FCIP FastWrite and Tape Pipelining

FCIP is a method of transparently tunneling FC ISLs between two geographically distant locations using IP as a transport. Storage is often sensitive to latency, and throughput is a great concern as well. Unfortunately, IP networks tend to have high latency and low throughput compared to native FC solutions. Tape Pipelining and FastWrite are features available on the Brocade router and FR4-18i blade that improve throughput and mitigate the negative effects of IP-related delay.

Tape Pipelining refers to writing to tape over a Wide Area Network (WAN) connection. FastWrite refers to Remote Data Replication (RDR) between two storage subsystems. Tape is serial in nature, meaning that data is steadily streamed byte by byte, one file at a time, onto the tape from the perspective of the host writing the file. Disk data tends to be bursty and random in nature: disk data can be written anywhere on the disk at any time. Because of these differences, tape and disk are handled differently by extension acceleration technologies.

Tape Pipelining accelerates the transport of streaming data by maintaining optimal utilization of the IP WAN. Tape traffic without an accelerator mechanism can result in periods of idle link time, becoming more inefficient as link delay increases.
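The inefficiency is easy to model: with only one outstanding write at a time, each block pays a full WAN round trip before the next block can start. The figures below (a hypothetical 15 MB/s link, 50 ms RTT, 1 MB blocks) are illustrative assumptions, not vendor measurements.

```python
# Back-of-the-envelope model of why un-accelerated tape writes stall:
# each block's transfer must wait a WAN round trip before the next begins.

def effective_throughput_mb_s(block_mb, rtt_s, wan_mb_s):
    """One outstanding write at a time: tx time plus one RTT per block."""
    cycle = block_mb / wan_mb_s + rtt_s
    return block_mb / cycle

# Hypothetical 15 MB/s WAN with 50 ms RTT, 1 MB write blocks:
print(round(effective_throughput_mb_s(1.0, 0.050, 15.0), 1))  # ~8.6 MB/s
```

As the RTT grows, the round-trip term dominates the cycle time and effective throughput collapses, which is exactly the idle-link effect Tape Pipelining is designed to eliminate by keeping data streaming continuously.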

When a host sends a write command, a Brocade 7500/FR4-18i sitting in the data path intercepts the command and responds with a “transfer ready.” The router buffers the incoming data and starts sending that data over the WAN. The data is sent as fast as it can be, limited only by the bandwidth of the link or the committed rate limit.

On the heels of the write command is the write data that was enabled by the proxy target's transfer-ready reply.

After the remote target receives the command, it responds with a transfer ready. The remote router intercepts that transfer ready, acts as a proxy initiator, and starts forwarding the data arriving over the WAN.

The host is on a high-speed FC network, and most often will have completed sending the data to the local router by this time. The local router returns an affirmative response. While the buffers are still transmitting data over the link, the host sends the next write command, and the process is repeated on the host side until the host is ready to write a filemark. This process maintains a balance of data in the remote router's buffers, permitting a constant stream of data to arrive at the tape device.

On the target side, the transfer ready indicates the allowable amount of data that can be received, which is generally less than what the host sent. The transfer ready on the host side, from the proxy target, is for the entire quantity of data advertised in the write command. The transfer ready the proxy target responds with, for the entire amount of data, does not have to be the same as the transfer ready the tape device responds with, which may be for a smaller amount of data: the amount that it was capable of accepting at that time. The proxy initiator parses out the data in sizes acceptable to the target per the transfer ready from the tape device. This may result in additional write commands and transfer readies on the tape side compared to the host side. Buffering on the remote side helps to facilitate this process.

The command to write the filemark is not intercepted by the routers and passes unfettered from end to end.

When the filemark is complete, the target responds with the status. A status of OK indicates to the host that it can move on.

FastWrite works in a somewhat different manner. FastWrite is an algorithm that reduces the number of round trips required to complete a SCSI write operation. FastWrite can maintain throughput levels over links that have significant latency. The Remote Data Replication (RDR) application still experiences latency, but reduced throughput due to that latency is minimized.

There are two steps to a SCSI write:

1. The write command is sent across the WAN to the target. This is essentially asking permission of the storage array to send data. The target responds with an acceptance (FCP_XFR_RDY).

2. The initiator waits until it receives that response from the target before starting the second step, which is sending the actual data (FCP_DATA_OUT).

With the FastWrite algorithm, the local SAN router intercepts the originating write command and responds immediately, requesting the initiator to send the entire data set. This happens in a couple of microseconds. The initiator starts to send the data, which is then buffered by the router. The buffer space in the router includes enough to keep the “pipe” full, plus additional memory to compensate for links with up to 1% packet loss. The Brocade 7500/FR4-18i has a continuous supply of data in its buffers that it can use to completely fill the WAN, driving optimized throughput.

The Brocade 7500/FR4-18i sends data across the link until the committed bandwidth has been consumed. The receiving router acts on behalf of the initiator and opens a write exchange with the target over the local fabric or direct connection. Often, this technology allows a write to complete in a single round trip, speeding up the process considerably and mitigating link latency by 50%.

(If a link has 1% packet loss or more, there are serious network issues that must be resolved prior to a successful implementation of FastWrite.)
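The benefit of cutting a SCSI write from two WAN round trips to one can be sketched with a trivial latency model. The RTT figure below is an illustrative assumption, not a vendor measurement.

```python
# Illustrative model (not vendor code): compare SCSI write completion time
# with and without FastWrite over a high-latency link. Data transmission
# time is ignored; only WAN round trips are counted.

def write_time_ms(rtt_ms, round_trips):
    """Completion time dominated by the number of WAN round trips."""
    return rtt_ms * round_trips

rtt = 50.0                        # hypothetical long-haul IP WAN RTT
normal = write_time_ms(rtt, 2)    # command/xfr_rdy trip, then data/status trip
fast = write_time_ms(rtt, 1)      # local router spoofs the transfer ready
print(normal, fast)               # 100.0 50.0 -> latency impact halved
```

This is the source of the "mitigating link latency by 50%" figure: one of the two mandatory round trips is absorbed locally by the router.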

There is no possibility of undetected data corruption with FastWrite, because the final response (FCP_RSP) is never spoofed, intercepted, or altered in any way. It is this final response that the receiving device sends to indicate that the entire data set has been successfully received and committed. The local router does not generate the final response in an effort to expedite the process, nor does it need to. If any single FC frame were to be corrupted or lost along the way, the target would detect the condition and not send the final response. If the final response is not received within a certain amount of time, the write sequence times out (REC_TOV) and is retransmitted. In any case, the host initiator knows that the write was unsuccessful and recovers accordingly.

FC FastWrite

For native FC links or FC over xWDM, delay and congestion are typically one or more orders of magnitude better than with FCIP. However, the speed of light through glass still creates noticeable latency over long-distance connections. As a result, it is possible for FC links over MAN/WAN distances to benefit from the same algorithms used in FCIP FastWrite. Brocade has added support for this feature to its 4Gbit router portfolio.

For example, it is possible to deploy FR4-18i blades into chassis at each side of a DR or BC solution, and attach storage ports directly to these blades. (This is illustrated in “” starting on page 364.) After configuring appropriate zoning policies, any replication or mirroring traffic between the storage ports will be accelerated using a similar mechanism to the one described in the previous section. This can sometimes result in massive increases in throughput, with the exact improvement depending on the distance, congestion of the network, block size, and the number of devices sharing the inter-site links.

Hot Code Load and Activation

Hot code load and activation supports the stringent availability requirements of mission-critical environments by enabling firmware upgrades to be downloaded and activated without disrupting other operations or data traffic in the SAN. The switch continues to route frames and provide full fabric services while new firmware is loaded onto its non-volatile storage. Once the download is complete, the new image is activated. During the activation process, the switch still continues to route frames, without losing even a single bit of data traffic.

Advanced ISL Trunking (Frame-Level)

Brocade ISL Trunking is ideal for optimizing performance and simplifying the management of a multi-switch SAN fabric containing Brocade switches. When two, three, or four adjacent ISLs are used to connect two Brocade 2Gbit FC switches, the switches automatically group the ISLs into a single logical ISL, or “trunk.” With 4Gbit switches, it is possible to trunk up to eight adjacent links. Traffic will be balanced across these links, while still guaranteeing in-order and on-time delivery.

ISL Trunking is designed to significantly reduce traffic congestion in storage networks. When up to eight 4Gbit ISLs are combined into a single logical ISL, the aggregated link has a total bandwidth of 32 Gbit/sec, which can support a large number of simultaneous full-speed “conversations” between devices.

To balance the workload across all of the ISLs in the trunk, each incoming frame is sent across the first available physical ISL in the trunk. As a result, transient workload peaks for one system or application are much less likely to impact the performance of other parts of the SAN fabric. Because the full bandwidth of each physical link is available, bandwidth is not wasted by inefficient traffic routing. As a result, the entire fabric is utilized more efficiently.
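The first-available-link idea can be sketched as follows. This is a toy model: the real dispatch decision is made in the ASIC per frame, based on link credit state, and its exact logic is not public.

```python
# Toy sketch of frame-level trunk balancing (not ASIC logic): each frame
# goes to the first trunk member that currently has transmit credit.

def dispatch(frames, links):
    """links: dict of link name -> available credits (mutated as we send).

    Returns a mapping of frame -> chosen link.
    """
    out = {}
    for f in frames:
        # pick the first link in trunk order that still has credit
        link = next(name for name, credit in links.items() if credit > 0)
        links[link] -= 1
        out[f] = link
    return out

links = {"isl0": 1, "isl1": 2, "isl2": 2}
print(dispatch(["f1", "f2", "f3"], links))
```

Because the choice is made per frame rather than per flow, a single burst from one host is spread over the whole trunk instead of saturating one member link.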

Dynamic Path Selection (Exchange-Level)

Dynamic Path Selection (DPS) may also be referred to as exchange-level trunking. Like Advanced ISL Trunking, DPS balances traffic across multiple ISLs. Unlike trunking, DPS does not require that the ISLs be adjacent. It uses the industry-standard Fabric Shortest Path First (FSPF) algorithm to select the most efficient route for transferring data in multi-switch environments. Any paths which are deemed by FSPF to have equal cost will be evenly balanced by the DPS software and hardware. This is a particular advantage in core/edge networks with multiple core switches, since DPS can distribute load between different cores while Advanced ISL Trunking cannot do so.

DPS matches or outperforms all similar features from any vendor, except for Brocade Advanced ISL Trunking.

However, because DPS can be combined with frame-level trunking, organizations can achieve both maximum performance and availability.
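Exchange-level balancing can be sketched with a simple deterministic function. The hash below is an assumption made purely for illustration; the actual Brocade hardware mapping of exchanges to equal-cost paths is not public.

```python
# Illustrative sketch of exchange-level balancing (DPS-like). A simple
# hash over the set of equal-cost FSPF paths is assumed here; this is
# not the real Brocade algorithm.

def select_path(sid, did, oxid, equal_cost_paths):
    """Pin each exchange (SID, DID, OXID) to one path deterministically."""
    return equal_cost_paths[(sid ^ did ^ oxid) % len(equal_cost_paths)]

paths = ["core1", "core2"]
# Every frame of one exchange hashes to the same path (preserving in-order
# delivery), while different exchanges can land on different core switches.
a = select_path(0x010100, 0x020100, 0x0001, paths)
b = select_path(0x010100, 0x020100, 0x0002, paths)
print(a, b)
```

The key property is determinism per exchange: frames within an exchange never reorder, yet the overall load from many concurrent exchanges spreads across all equal-cost routes, including routes through different core switches.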

Zoning

Brocade Zoning is a feature of all switch models. Using zoning, organizations can automatically or dynamically arrange fabric-connected devices into logical groups (zones) across the physical configuration of the fabric. It is functionally similar to VLANs from the IP networking world, though considerably more advanced in many ways. In fact, zones could be thought of as a combination of VLAN controls plus firewall-like ACLs.

Providing secure access control over fabric resources, Zoning prevents unauthorized data access, simplifies heterogeneous storage management, segregates storage traffic, maximizes storage capacity, and reduces provisioning time.

The need for this kind of access control relates to the “roots” of SAN technology: the SCSI DAS model. Storage devices directly attached to hosts (DAS) have no need for network-based access control features: access by other hosts is precluded by the limitations of the DAS architecture. In contrast, SANs allow a potentially large number of hosts to access all storage in the network, not just the systems that they are intended to access. If each host is allowed to access every storage array, the potential impact of user error, virus infection, or hacker attacks could be immense. To prevent unintended access, it is necessary to provide access control in the network and/or the storage devices themselves.

There are many mechanisms for solving the SAN-based access control problem. All of them have some form of management interface that allows the creation of an access control policy, and some mechanism for enforcing that policy. Brocade switches and routers use a set of methods collectively referred to as “Brocade Advanced Zoning.” Brocade Advanced Zoning requires a license key on all platforms, but all currently shipping platforms bundle this key with the base OS.

Using this key allows the creation of many zones within a fabric, each of which may comprise many “zone objects,” which are storage or host PIDs or WWNs.

These objects can belong to zero, one, or many zones.

This allows the creation of overlapping zones. Every switch in the fabric then enforces access control for its attached nodes. Zone objects are grouped into zones, and zones are grouped into zone configurations. A fabric can have any number of zone configurations. This provides a comprehensive and secure method for defining exactly which devices should or should not be allowed to communicate.
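The grouping model just described — objects in zones, zones in configurations, overlapping membership allowed, default deny — can be modeled in a few lines. This is a toy illustration, not Fabric OS internals; the zone names and WWNs are invented:

```python
# Toy model of Brocade zoning concepts: zone objects (WWNs) grouped
# into zones, zones grouped into zone configurations. A device may
# belong to several zones, which is how overlapping zones arise.

zones = {
    "host1_array1": {"10:00:00:00:c9:aa:bb:01", "50:06:01:60:10:20:30:40"},
    "host2_array1": {"10:00:00:00:c9:aa:bb:02", "50:06:01:60:10:20:30:40"},
}
# A fabric may hold several configurations; only one is effective at a time.
configs = {"prod_cfg": {"host1_array1", "host2_array1"}}

def can_communicate(cfg: str, wwn_a: str, wwn_b: str) -> bool:
    """Two devices may talk only if they share at least one zone
    in the effective configuration (default deny otherwise)."""
    return any(wwn_a in zones[z] and wwn_b in zones[z]
               for z in configs[cfg])

# The array overlaps both zones, but the two hosts share no zone:
assert can_communicate("prod_cfg",
                       "10:00:00:00:c9:aa:bb:01", "50:06:01:60:10:20:30:40")
assert not can_communicate("prod_cfg",
                           "10:00:00:00:c9:aa:bb:01", "10:00:00:00:c9:aa:bb:02")
```

Each switch enforces exactly this kind of check for its own attached nodes, which is why zoning scales with the fabric rather than with any single enforcement point.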

Fabric OS CLI

All Brocade switches provide a comprehensive Command Line Interface (CLI), which enables manual “lowest common denominator” control, as well as task automation through scripting mechanisms, via the switch serial port or telnet interfaces.

WEBTOOLS

Brocade WEBTOOLS is a web-browser-based Graphical User Interface (GUI) for element and network management of Brocade switches. WEBTOOLS uses a set of processes (e.g. httpd) and web pages that run on all Fabric OS switches in a fabric. Once a switch or router has an IP address configured, it is possible to manage most functions simply by pointing a Java-enabled web browser at that address.

This product simplifies management by enabling administrators to configure, monitor, and manage switch and fabric parameters from a single online access point.

Organizations may configure and administer individual ports or switches as well as small SAN fabrics. User name and password login procedures protect against unauthorized actions by limiting access to configuration features.

Web Tools provides an administrative control point for Brocade Advanced Fabric Services, including Advanced Zoning, ISL Trunking, Advanced Performance Monitoring, Fabric Watch, and Fabric Manager integration. For instance, administrators can utilize timesaving zoning wizards to step them through the zoning process.

While this is technically a licensed feature, like zoning, WEBTOOLS is included with all currently shipping Brocade platforms.

Fabric Manager

Fabric Manager is a flexible and powerful tool that provides rapid access to critical SAN information and configuration functions. It allows administrators to efficiently configure, monitor, provision, and perform other daily management tasks for multiple fabrics or Meta SANs from a single location. Through this single-point SAN management architecture, Fabric Manager lowers the overall cost of SAN ownership. It is tightly integrated with other Brocade SAN management products, such as Web Tools and Fabric Watch, and enables third-party product integration through built-in menu functions and the Brocade SMI Agent. Organizations can use Fabric Manager in conjunction with other leading SAN and storage resource management applications as the drill-down element manager for single or multiple Brocade fabrics, or use Fabric Manager as the primary SAN management interface.

SAN Health

SAN Health is a powerful tool that helps optimize a SAN and track its components in an automated fashion.

The tool greatly increases SAN manager productivity, since it automates many mandatory recurring SAN management tasks. It simplifies the process of data collection for audits and change tracking, uses a client/server “expert systems” approach to identify potential issues, and can be run regularly to monitor fabrics over time. This is especially useful to SAN designers in three ways:

- When designing changes to existing environments, the tool can help to audit the target environment before finalizing a design.
- In any design context, it can help to document a SAN after implementation.
- It can be specified in the SAN project plan as an ongoing proactive maintenance and change-control tool to satisfy manageability requirements.

The tool has two software components: a data capture application and a back-end report processing engine. SAN managers may run the data capture application as often as needed. After SAN Health finishes capturing diagnostic data, the back-end reporting process automatically generates a point-in-time snapshot of the SAN, including a Visio topology diagram and a detailed report on the SAN configuration. This report contains summary information about the entire SAN as well as specific details about fabrics, switches, and individual ports. Other useful items in the report include alerts, historical performance graphs, and any recommended changes based on continually updated best practices.

The SAN Health program is powerful and flexible.

For example, it is possible to configure many different fabrics in a single audit set, and schedule them to run automatically on a recurring basis. These audits can run in “unattended mode,” with automatic e-mailing of captured data to a designated recipient.

The tool also has enhanced change-tracking features to show how a fabric has evolved over time, or to facilitate troubleshooting if something goes wrong. This can be an invaluable addition to the change-tracking process, both for post-mortem analysis and for proactive management. For instance, SAN Health can track traffic pattern changes in weekly or monthly increments. This can help to identify looming performance problems proactively, and take corrective action before end-users are affected.

SAN Health is currently available to SAN end-users and Brocade OEM and reseller channel partners. It can be used with Brocade install-base fabrics, and fabrics using equipment from selected other infrastructure vendors as well. The tool is available for download on the public Brocade web site (www.brocade.com/sanhealth). For partners, Brocade also provides a co-branded version.

Fabric Watch

Brocade Fabric Watch provides advanced monitoring capabilities for Brocade products. Fabric Watch enables real-time proactive awareness of the health, performance, and security of each switch, and automatically alerts network managers to problems in order to avoid costly failures. Monitoring fabric-wide events, ports, and environmental parameters permits early fault detection and isolation as well as performance measurement.

With Fabric Watch, SAN administrators can select custom fabric elements and alert thresholds, or they can choose from a selection of preconfigured settings for gathering valuable health, performance, and security metrics. In addition, it is easy to integrate Fabric Watch with enterprise systems management solutions.

By implementing Fabric Watch, storage and network managers can rapidly improve SAN availability and performance without installing new software or system administration tools.
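The threshold-and-alert model described above can be sketched as follows. The element names and threshold values here are invented for illustration and do not reflect Fabric Watch's actual classes or defaults:

```python
# Hedged sketch of the Fabric Watch idea: compare monitored counters
# against configured thresholds and emit alerts for violations.
# Metric names and limits are hypothetical examples.

thresholds = {
    "port_crc_errors_per_min": 5,
    "switch_temp_celsius": 60,
}

def check(samples: dict) -> list:
    """Return an alert string for every sampled metric above its threshold."""
    return [f"ALERT: {name}={value} exceeds {thresholds[name]}"
            for name, value in samples.items()
            if name in thresholds and value > thresholds[name]]

alerts = check({"port_crc_errors_per_min": 12, "switch_temp_celsius": 41})
assert alerts == ["ALERT: port_crc_errors_per_min=12 exceeds 5"]
```

In practice such alerts would be forwarded to an enterprise management system (e.g. via SNMP traps or e-mail), which is the integration point the text mentions.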

Advanced Performance Monitoring

Brocade Advanced Performance Monitoring is a comprehensive tool for monitoring the performance of networked storage resources. It enables administrators to monitor both “transmit” and “receive” traffic from source devices to destination devices, enabling end-to-end visibility into the fabric. Using this tool, administrators can quickly identify bottlenecks and optimize fabric configuration resources to compensate.

Extended Fabrics

Extended Fabrics software enables native Fibre Channel ISLs to span extremely long distances. Extended Fabrics optimizes switch buffering (BB credits) to ensure the highest possible performance on these long-distance ISLs. When Extended Fabrics is installed on gateway switches, the ISLs (E_Ports) are configured with a large pool of buffer credits. The enhanced switch buffers help ensure that data transfer can occur at full or near-full bandwidth to efficiently utilize the connection over the extended links. As a result, organizations can use Extended Fabrics to implement strategic applications such as wide-area data replication, high-speed remote backup, cost-effective remote storage centralization, and business continuance strategies.
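The reason long links need a larger credit pool is simple arithmetic: to keep the pipe full, enough unacknowledged frames must be in flight to cover the round trip. The sketch below is a back-of-the-envelope estimate, not Brocade's actual provisioning logic; it assumes full-size frames, ~5 µs/km propagation in fiber, and 8b/10b line encoding:

```python
# Rough estimate of BB credits needed to keep a long-distance ISL full:
# credits >= round-trip time / time to transmit one frame.
import math

def min_bb_credits(distance_km: float, line_rate_gbaud: float,
                   frame_bytes: int = 2148) -> int:
    prop_us_per_km = 5.0                       # ~5 us/km in fiber
    rtt_us = 2 * distance_km * prop_us_per_km  # frame out, R_RDY back
    tx_us = frame_bytes * 10 / (line_rate_gbaud * 1000)  # 8b/10b: 10 bits/byte
    return math.ceil(rtt_us / tx_us)

# Matches the classic rule of thumb of ~1 credit per 2 km at 1 Gbit
# (1.0625 Gbaud), scaling linearly with link speed.
assert min_bb_credits(100, 1.0625) == 50
assert min_bb_credits(100, 4.25) == 198   # 4 Gbit needs ~4x the credits
```

Without the extra credits that Extended Fabrics unlocks, the sender stalls waiting for R_RDY primitives and the link runs far below line rate.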

Remote Switch

Remote Switch is a now largely obsolete feature which enabled fabric connectivity of two switches over long distances by supporting external gateways to encapsulate Fibre Channel over ATM. Connecting SAN islands over a Fibre Channel-to-ATM device enabled organizations to extend their solutions over a WAN. This type of configuration could be used for solutions such as remote disk mirroring and remote tape backup. While ATM extension may still be used, this method has largely been superseded by FC over SONET/SDH and native FC links using Extended Fabrics. For all such configurations, Brocade now supports an “Open E_Port” mode for Gateway/Bridge devices. Customers may simply use the portCfgISLMode CLI command, which is now part of the base OS: there is no need for a license anymore.

FICON / CUP

The Brocade directors and selected switches support the FICON protocol for mainframe environments, enabling organizations to utilize a single platform for both open systems and mainframe storage networks. FICON-certified Brocade platforms support the ability to run both open systems Fibre Channel and FICON traffic on a port-by-port basis within a single platform. The Brocade FICON implementation also supports cascaded FICON fabrics at 1 and 2 Gbit/sec FICON speeds.

With Fabric OS version 4.4, Brocade fully supports CUP in-band management functions, which enable mainframe applications to perform configuration, management, monitoring, and error handling for Brocade directors and switches. CUP support also enables advanced fabric statistics reporting to facilitate more efficient network performance tuning.

Fibre Channel Routing

The Brocade FC-FC Routing Service provides connectivity between two or more fabrics without merging them. Any platform it is running on can be referred to as an FC router, or FCR for short. At the time of this writing, the feature is available on the Brocade AP7420, the Brocade 7500, and the FR4-18i blade.

The service allows the creation of Logical Storage Area Networks, or LSANs, which provide connectivity that can span fabrics. It is most useful to think of an LSAN in terms of zoning: an LSAN is a zone that spans fabrics. The fact that an FCR can connect autonomous fabrics without merging them has advantages in terms of change management, network management, scalability, reliability, availability, and serviceability, to name just a few areas.
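The "zone that spans fabrics" idea can be sketched as follows. LSAN zones are conventionally identified by an "LSAN_" name prefix; the router exports only devices named in matching LSAN zones on both sides. This is a simplified illustration of the matching logic (real FCR behavior involves considerably more machinery, such as proxy devices and fabric IDs), and the zone names and WWNs are invented:

```python
# Sketch: an FC router shares across fabrics only those devices that
# appear in identically-named "LSAN_" zones in each fabric's zone DB.

fabric_a_zones = {
    "LSAN_backup": {"10:00:00:00:c9:00:00:01"},   # host in fabric A
    "local_only":  {"10:00:00:00:c9:00:00:02"},   # never exported
}
fabric_b_zones = {
    "LSAN_backup": {"50:06:0b:00:00:00:00:09"},   # tape/storage in fabric B
}

def lsan_exported_pairs(zdb_a: dict, zdb_b: dict) -> set:
    """Device pairs allowed to communicate across the FCR: members of
    LSAN zones with the same name on both sides."""
    shared = {n for n in zdb_a if n.upper().startswith("LSAN_")} & set(zdb_b)
    return {(a, b) for n in shared for a in zdb_a[n] for b in zdb_b[n]}

pairs = lsan_exported_pairs(fabric_a_zones, fabric_b_zones)
assert pairs == {("10:00:00:00:c9:00:00:01", "50:06:0b:00:00:00:00:09")}
```

Everything outside an LSAN zone stays strictly local, which is what keeps the fabrics autonomous while still permitting selective inter-fabric access.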

The customer needs for this product are similar to those that brought first routers and then Layer 3 switches to the data networking world. An FC router is to an FC fabric as an IP router is to an Ethernet subnet.

Early efforts were made to create large, flat Ethernet LANs without routers. These efforts hit a ceiling beyond which they could not grow effectively. In many cases, Ethernet broadcast storms would create reliability issues, or it would become impossible to resolve dependencies for change control. Perhaps merging Ethernet networks that grew independently would involve too much effort and risk. An analogous situation exists today with flat Fibre Channel fabrics. Using an FCR with LSANs solves that problem, while other proposed solutions – such as VSANs – just move the problem around in a shell-game effort to confuse users.

For more information about this feature, see the book Multiprotocol Routing for SANs by Josh Judd.

FCIP

Fibre Channel over IP (Internet standard RFC 3821) is one of several mechanisms available to extend FC SANs across long distances. FCIP transparently tunnels FC ISLs across an intermediate IP network, making the entire IP MAN or WAN appear to be an ISL from the viewpoint of the fabric. This is available as a fully-integrated feature on the Brocade AP7420 Multiprotocol Router, the Brocade 7500 router, and the FR4-18i blade.

It is important to note that FCIP is neither the only nor always the best approach to distance extension. The major advantages of FCIP are cost and the ubiquitous availability of IP MAN and WAN services. However, for users interested in reliability and performance, it is theoretically impossible for FCIP – or any other IP SAN technology, for that matter – to match native FC solutions. Generally speaking, SAN designers prefer distance extension solutions in the following order:

1. Native FC over dark fiber or xWDM
2. FC over SONET/SDH
3. FC over ATM
4. FC over IP

Many of the shortcomings of FCIP can be mitigated – though not eliminated – by using FastWrite and/or Tape Pipelining. (p 456) In fact, before the advent of FC FastWrite, it was sometimes even possible to achieve better performance on a 1Gbit FCIP link than on a 4Gbit FC link.

FCIP should therefore almost always be used in combination with some form of write acceleration technology.
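The reason write acceleration matters so much over a WAN is round-trip latency, not bandwidth. A SCSI write normally costs two WAN round trips (command and XFER_RDY, then data and status); FastWrite-style spoofing answers XFER_RDY locally, cutting that to roughly one. The sketch below is illustrative arithmetic only, with an assumed RTT value:

```python
# Rough model: serialized write throughput is bounded by WAN round
# trips per operation, independent of link bandwidth.

def writes_per_second(rtt_ms: float, round_trips_per_write: int) -> float:
    """Upper bound on serialized write ops/s, ignoring bandwidth limits."""
    return 1000.0 / (rtt_ms * round_trips_per_write)

rtt = 20.0  # ms; a plausible metro/WAN round trip (assumed value)
plain = writes_per_second(rtt, 2)        # two round trips per write
accelerated = writes_per_second(rtt, 1)  # XFER_RDY spoofed locally
assert accelerated == 2 * plain
```

This doubling of latency-bound throughput is why a slower FCIP link with write acceleration could sometimes outperform a faster link without it, as noted above.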

For more information about this feature, see the book Multiprotocol Routing for SANs by Josh Judd.

Secure Fabric OS

As organizations interconnect larger and larger SANs over longer distances and through existing networks, they have an ever greater need to effectively manage their security and policy requirements. To help these organizations improve security, Secure Fabric OS™, a comprehensive security solution for Brocade-based SAN fabrics, provided policy-based security protection for more predictable change management, assured configuration integrity, and reduced risk of downtime. Secure Fabric OS protected the network by using the strongest enterprise-class security methods available. With its flexible design, Secure Fabric OS allowed organizations to customize SAN security in order to meet specific policy requirements. All Secure Fabric OS features have now been made available in the base OS for free as of Fabric OS 5.3.0. It is recommended that customers migrate to that solution, as it provides additional features such as DH-CHAP to end devices (HBAs) and is also more scalable.
