
Symmetrix Basic Handbook V1.0



Symmetrix Basic Handbook V1.0

Legend Fan, PSE Lab Shanghai, Serviceprocessor.net


EMC Symmetrix, 20 Years in the Making
EMC Symmetrix and DMX Serial Numbers
EMC Symmetrix – DMX Models by Cabinet Types
Symmetrix Hardware Components
DMX Hardware Components
EMC Symmetrix DMX-4: Components
EMC Symmetrix DMX-4: Supported Drive Types
EMC Symmetrix DMX-4 and Symmetrix V-Max: Basic Differences
EMC Symmetrix V-Max: Enginuity 5874
EMC Symmetrix V-Max: Supported Drive Types
Symmetrix V-Max Systems: SRDF Enhancements and Performance
EMC Symmetrix Enginuity Operating Environment
EMC Symmetrix: BIN File
EMC Symmetrix: Calculations for Heads, Tracks, Cylinders, GB
EMC Symmetrix File System (SFS)
EMC Symmetrix: VCMDB and ACLX
EMC Symmetrix: Dynamic Hot Spares
EMC Symmetrix: Permanent Sparing
EMC Symmetrix DMX Device Type, COVD: Cache Only Virtual Device
EMC Symmetrix Management Console (SMC – For Symmetrix V-Max Systems)
Symcli Basic Commands
EMC Timefinder Commands
EMC SRDF Basics
SRDF Commands
EMC Symmetrix / DMX SRDF Setup


EMC Symmetrix, 20 Years in the making

So next year will mark 20 years of Symmetrix products within EMC, a line still classified as one of the most robust systems out there two decades after its inception. In this section, we will talk about some facts on Symmetrix products: features, characteristics, Enginuity microcode versions, model numbers, year released, etc.

So the journey of Symmetrix systems started with Moshe Yanai (along with his team) joining EMC in the late 80s. A story floating around says the idea of a cache-based disk array was initially pitched to both IBM and HP and was shot down. EMC was predominantly a mainframe-memory company back in the late 1980s. The Symmetrix products completely changed the direction of EMC within a decade.

Joe Tucci came in at the end of the 90s from Unisys with a big vision: he wanted to radically change EMC. Through new acquisitions, new technologies, vision, and foremost the integration of all those technologies, he created today's EMC.

Symmetrix has always been the jewel of EMC. Back in the Moshe days, the engineers were treated royally (I have heard stories about helicopter rides and lavish parties, with a satellite bus waiting outside for a support call). Then came the Data General acquisition in the late 90s, which completely changed the game. Some people within EMC were against the DG acquisition and didn't see much value in it, yet the Clariion DG backplane is what turned the Symmetrix into the Symmetrix DMX, with Fibre-based drives. Over the past decade, EMC radically changed its position, focusing on acquisitions, support, products, quality, efficiency and usability, and foremost transforming itself from a hardware company into an information-solutions company with software as an integral growth factor. New acquisitions like Legato, Documentum and RSA kept changing the culture and the growth focus within EMC.

Then came VMware, and it changed the rules of the game; EMC's strategic move to invest in VMware paid off big time. Then came the 3-way partnership between VMware, EMC and Cisco to integrate next-generation products, and V-Max (Symmetrix), vSphere and UCS were born.

Here we are in 2009, almost 20 years since the inception of the Symmetrix, and the name, the product, the Enginuity code, the robust characteristics and the investment from EMC all stay committed to changing market demands.


Jumping back into the Symmetrix, here are a few articles you might find interesting, covering the various models, the serial numbers of the machines and, importantly, a post on the Enginuity Operating Environment.

Symmetrix Family 1.0

ICDA – Integrated Cache Disk Array
Released 1990 and sold through 1993
24GB total disk space introduced

Wow, I was in elementary school or maybe middle school when this first generation Symmetrix was released…

Symmetrix 4200

———————————————————————————————————————

Symmetrix Family 2.0

ICDA – Integrated Cache Disk Array
Released 1991 and sold through 1994
36GB total disk space
Mirroring introduced

Symmetrix 4400

———————————————————————————————————————

Symmetrix Family 2.5

ICDA – Integrated Cache Disk Array
Released 1992 and sold through 1995
RSF capabilities added


(I actually met a guy about 2 years ago; he was one of the engineers who worked on developing the first RSF capabilities at EMC and was very instrumental in developing the Hopkinton PSE lab.)

Symmetrix 4800

———————————————————————————————————————

Symmetrix Family 3.0 also called Symmetrix 3000 and 5000 Series

Released 1994 and sold through 1997
ICDA: Integrated Cache Disk Array
Includes Mainframe Support (Bus & Tag)
Global Cache introduced
1GB total Cache
NDU – Microcode
SRDF introduced
Supports both Mainframe and Open Systems
Enginuity microcode 50xx, 51xx

Symmetrix 3100: Open Systems support, half height cabinet, 5.25 inch drives
Symmetrix 5100: Mainframe support, half height cabinet, 5.25 inch drives
Symmetrix 3200: Open Systems support, single cabinet, 5.25 inch drives
Symmetrix 5200: Mainframe support, single cabinet, 5.25 inch drives
Symmetrix 3500: Open Systems support, triple cabinet, 5.25 inch drives
Symmetrix 5500: Mainframe support, triple cabinet, 5.25 inch drives

———————————————————————————————————————

Symmetrix Family 4.0 also called Symmetrix 3000 and 5000 Series


Released 1997 and sold through 2000
RAID XP introduced
3.5 inch drive size introduced
On triple cabinet systems 5.25 inch drives used
Supports both Mainframe and Open Systems
Timefinder, PowerPath, Ultra SCSI support
Enginuity microcode 5265.xx.xx, 5266.xx.xx

Symmetrix 3330: Open Systems Support, half height cabinet, 32 drives, 3.5 inch drives

Symmetrix 5330: Mainframe Support, half height cabinet, 32 drives, 3.5 inch drives

Symmetrix 3430: Open Systems Support, single frame, 96 drives, 3.5 inch drives

Symmetrix 5430: Mainframe Support, single frame, 96 drives, 3.5 inch drives
Symmetrix 3700: Open Systems Support, triple cabinet, 128 drives, 5.25 inch drives

Symmetrix 5700: Mainframe Support, triple cabinet, 128 drives, 5.25 inch drives

———————————————————————————————————————

Symmetrix Family 4.8 also called Symmetrix 3000 and 5000 Series

Released 1998 and sold through 2001
Symmetrix Optimizer introduced
Best hardware so far: fewest outages, fewest problems and fewest failures (not sure if EMC will agree to it; most customers do)
3.5 inch drives used with all models


Enginuity microcode 5265.xx.xx, 5266.xx.xx, 5267.xx.xx

Symmetrix 3630: Open Systems support, half height cabinet, 32 drives
Symmetrix 5630: Mainframe support, half height cabinet, 32 drives
Symmetrix 3830: Open Systems support, single cabinet, 96 drives
Symmetrix 5830: Mainframe support, single cabinet, 96 drives
Symmetrix 3930: Open Systems support, triple cabinet, 256 drives
Symmetrix 5930: Mainframe support, triple cabinet, 256 drives

Models were sold as 3630-18, 3630-36, 3630-50, 5630-18, 5630-36, 5630-50, 3830-36, 3830-50, 3830-73, 5830-36, 5830-50, 5830-73, 3930-36, 3930-50, 3930-73, 5930-36, 5930-50, 5930-73 (the suffix indicates the capacity, in GB, of the drives installed in the frame).

———————————————————————————————————————

Symmetrix Family 5.0 also called Symmetrix 8000 Series
[ 3000 (Open Systems) + 5000 (Mainframe) = 8000 (support for both) ]

Supports Open Systems and Mainframe without Bus & Tag, through ESCON
Released 2000 and sold through 2003
181GB disk introduced
Enginuity microcode 5567.xx.xx, 5568.xx.xx

Symmetrix 8130: Slim cabinet, 48 drives
Symmetrix 8430: Single cabinet, 96 drives
Symmetrix 8730: Triple cabinet, 384 drives

Some models sold as 8430-36, 8430-73, 8430-181 or 8730-36, 8730-73, 8730-181 (the suffix indicates the capacity, in GB, of the drives installed in the frame)

———————————————————————————————————————


Symmetrix Family 5.5 LVD also called Symmetrix 8000 Series

Released 2001 and sold through 2004
LVD (Low Voltage Differential) introduced
146GB LVD drive introduced
Ultra SCSI drives cannot be used with the LVD frame
Mainframe optimized machines introduced
4 Slice directors introduced with ESCON and FICON
FICON introduced
Enginuity microcode 5567.xx.xx, 5568.xx.xx

Symmetrix 8230: Slim cabinet, 48 drives (rebranded 8130, non-LVD frame)
Symmetrix 8530: Single cabinet, 96 drives (rebranded 8430, non-LVD frame)
Symmetrix 8830: Triple cabinet, 384 drives (rebranded 8730, non-LVD frame)
Symmetrix 8230 LVD: LVD frame, slim cabinet, 48 LVD drives
Symmetrix 8530 LVD: LVD frame, single cabinet, 96 LVD drives
Symmetrix 8830 LVD: LVD frame, triple cabinet, 384 LVD drives
Symmetrix z-8530: LVD frame, single cabinet, 96 drives, optimized for mainframe
Symmetrix z-8830: LVD frame, triple cabinet, 384 drives, optimized for mainframe

Some models sold as 8530-36, 8530-73, 8530-146, 8530-181 or 8830-36, 8830-73, 8830-146, 8830-181 (the suffix indicates the capacity, in GB, of the drives installed in the frame)

———————————————————————————————————————


Symmetrix DMX, also called Symmetrix Family 6.0

Released Feb 2003 and sold through 2006
Direct Matrix Architecture (Data General backplane) introduced
DMX800 was the first DMX system introduced
4 Slice directors introduced
RAID 5 support added later (backported after its introduction on the DMX-3)
First generation with common DA / FA hardware
Introduction of modular power
Enginuity microcode 5669.xx.xx, 5670.xx.xx, 5671.xx.xx

Symmetrix DMX800: Single cabinet, DAE based concept for drives, 96 drives (I swear, a customer told me they have ghost-like issues with their DMX800)
Symmetrix DMX1000: Single cabinet, 18 drives per loop, 144 drives total
Symmetrix DMX1000-P: Single cabinet, 9 drives per loop, 144 drives total, P = Performance System
Symmetrix DMX2000: Dual cabinet, modular power, 18 drives per loop, 288 drives
Symmetrix DMX2000-P: Dual cabinet, modular power, 9 drives per loop, 288 drives, P = Performance System
Symmetrix DMX3000-3: Triple cabinet, modular power, 18 drives per loop, 3 phase power, 576 drives

———————————————————————————————————————

Symmetrix DMX2, also called Symmetrix Family 6.5

Released Feb 2004 and sold through 2007
Double the processing power with DMX2
DMX and DMX2 frames are the same; only the directors from the DMX must be changed to upgrade to DMX2, and a reboot of the entire system is required with this upgrade
RAID 5 support added later (backported after its introduction on the DMX-3)
64GB memory introduced
4 Slice directors
Enginuity microcode 5669.xx.xx, 5670.xx.xx, 5671.xx.xx

Symmetrix DMX801: 2nd generation DMX, single cabinet, DAE based concept for drives, 96 drives, FC SPE 2
Symmetrix DMX1000-M2: 2nd generation DMX, single cabinet, 18 drives per loop, 144 drives
Symmetrix DMX1000-P2: 2nd generation DMX, single cabinet, 9 drives per loop, 144 drives, P = Performance System
Symmetrix DMX2000-M2: 2nd generation DMX, dual cabinet, 18 drives per loop, 288 drives
Symmetrix DMX2000-P2: 2nd generation DMX, dual cabinet, 9 drives per loop, 288 drives, P = Performance System
Symmetrix DMX2000-M2-3: 2nd generation DMX, dual cabinet, 18 drives per loop, 288 drives, 3 phase power
Symmetrix DMX2000-P2-3: 2nd generation DMX, dual cabinet, 9 drives per loop, 288 drives, P = Performance System, 3 phase power
Symmetrix DMX3000-M2-3: 2nd generation DMX, triple cabinet, 18 drives per loop, 576 drives, 3 phase power

———————————————————————————————————————

Symmetrix DMX-3, also called Symmetrix 7.0

Released July 2005 and still being sold
8 Slice directors
1920 disks (RPQ'ed to 2400 drives)
DAE based concept introduced
Symmetrix Priority Controls
RAID 5 introduced, then implemented on the older DMX and DMX2
Virtual LUN technology
SRDF enhancements
Concept of vaulting introduced
Enginuity microcode 5771.xx.xx, 5772.xx.xx

Symmetrix DMX-3 950: System cabinet, Storage Bay x 2, 360 drives max, modular power, 3 phase power
Symmetrix DMX-3: System cabinet, Storage Bay x 8 (expandable), 1920 drives max, RPQ'ed to 2400 drives, 3 phase power

———————————————————————————————————————

Symmetrix DMX-4, also called Symmetrix 7.0

Released July 2007 and still being sold
Virtual Provisioning
Flash drives
FC / SATA drives
RAID 6 introduced
SRDF enhancements
Total Cache: 512 GB
Total Storage: 1 PB
Largest drive supported: 1TB SATA
Flash drives 73GB and 146GB initially; support for 200GB and 400GB released later
1920 drives max (RPQ'ed to 2400 drives)
Enginuity microcode 5772.xx.xx, 5773.xx.xx

Symmetrix DMX-4 950: System cabinet, Storage Bay x 2, 360 drives max, modular power, 3 phase power
Symmetrix DMX-4: System cabinet, Storage Bay x 8 (expandable), 1920 drives max, RPQ'ed to 2400 drives, modular power, 3 phase power

Some models sold as DMX-4 1500, DMX-4 2500, DMX-4 3500 and DMX-4 4500

———————————————————————————————————————

Symmetrix V-Max (Released April 2009)

Enginuity microcode 5874.xxx.xxx
Total number of drives supported: 2400
Total Cache: 1 TB mirrored (512GB usable)
Total Storage: 2 PB

All features of the V-Max are discussed in the Enginuity 5874 section later in this handbook.

Symmetrix V-Max SE: Single System Bay, SE = Single Engine, Storage Bay x 2, 360 drives max, cannot be expanded to a full-blown 8-engine system if purchased as an SE, 3 phase power, modular power
Symmetrix V-Max: System cabinet, Storage Bay x 10, 2400 drives max, modular power, 3 phase power


EMC Symmetrix and DMX Serial Numbers

Have you ever wondered how EMC comes up with the serial numbers for your Symmetrix and DMX machines?

If your machine's serial number starts with HK, it was manufactured in Hopkinton, MA; for most international customers it starts with CK, meaning it was manufactured in Cork, Ireland.

With the DMX Series of machines, EMC has introduced two new manufacturing centers (TN and SA).

There are still machines starting with HK and CK that will be shipped internationally and vice versa.

• The serial number HK would always have a 1 following it.
• The serial number CK would always have a 2 following it.
• The serial number TN would always have a 2 following it.
• The serial number SA would always have a 2 following it.

Here is the Symmetrix and DMX Serial Numbering Convention.

• Symmetrix 3.0, 1/2 cabinet: HK18160xxxx
• Symmetrix 3.0, 1 cabinet: HK18150xxxx
• Symmetrix 3.0, 3 cabinet: HK18140xxxx
• Symmetrix 4.0, 1/2 cabinet: HK18260xxxx
• Symmetrix 4.0, 1 cabinet: HK18250xxxx
• Symmetrix 4.0, 3 cabinet: HK18240xxxx
• Symmetrix 4.8, 1/2 cabinet: HK18360xxxx
• Symmetrix 4.8, 1 cabinet: HK18350xxxx
• Symmetrix 4.8, 3 cabinet: HK18370xxxx
• Symmetrix 5.0, 1 cabinet: HK18450xxxx
• Symmetrix 5.0, 3 cabinet: HK18470xxxx
• Symmetrix 5.5, 1 cabinet: HK18550xxxx
• Symmetrix 5.5, 3 cabinet: HK18570xxxx

The DMX serial numbers still need more research, because it's hard to find a trend in the numbering convention.

• DMX800: HK18790xxxx
• DMX1000-S: HK18740xxxx
• DMX1000-P: HK18746xxxx
• DMX2000-S: HK18770xxxx
• DMX2000-P: HK18776xxxx


• DMX3000: HK18788xxxx
• DMX3000-M2: HK18789xxxx
• DMX3: HK19010xxxx
• DMX4: HK19110xxxx

It is very important that your Service Processor serial number exactly matches the Symmetrix / DMX serial number as defined in the BIN file. If the two serial numbers differ, your basic symcfg discover commands will fail.

Your actual hardware Symmetrix / DMX serial number can still differ from the serial number defined in the BIN file, since the BIN file serial number takes precedence.

To find your Symmetrix / DMX serial number, look at the top of the front and back of the Symmetrix; the number should begin with HK, CK, TN or SA. To find the serial number from the service processor, run E7,CF, or run symcfg discover or syminq from the service processor via SYMCLI, located in C:\Program Files\EMC\SYMCLI\bin. Before running this you can optionally delete the file symapi_db.bin, located in C:\Program Files\EMC\SYMAPI\db; it will be recreated during the symcfg discover process. If the operation fails, the logs can be found in C:\Program Files\EMC\SYMAPI\log.
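For reference, here is a minimal sketch of that discovery procedure, assuming the default Solutions Enabler paths on the service processor (the exact paths and output vary by SYMCLI version):

    rem Run from the service processor (or any management host with Solutions Enabler installed)
    cd "C:\Program Files\EMC\SYMCLI\bin"

    rem Optional: delete the SYMAPI database so discovery rebuilds it from scratch
    del "C:\Program Files\EMC\SYMAPI\db\symapi_db.bin"

    rem Rediscover the locally attached Symmetrix / DMX and verify the serial number
    symcfg discover
    symcfg list

The serial number reported by symcfg list comes from the BIN file; if it does not match the serial number configured on the service processor, the discover will fail as described above.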

It is very important that you do not change your Symmetrix / DMX serial number, since the FA WWNs are determined using the last two digits of the actual serial number. If you change it, all the WWNs will change, causing your FA WWNs, disk WWNs, etc. to change as well. To my knowledge, this can only be changed through a BIN file change.

EMC Symmetrix – DMX Models by Cabinets Types

Below is the breakdown of the EMC Symmetrix and EMC DMX machine types by their cabinet properties.


Starting with the Symm 3.0, EMC introduced 1/2-height cabinets, a single full cabinet, and a 3-cabinet machine. The same ideas went into the Symm 4.0 and 4.8.

Starting with the Symm 5.0 and into the Symm 5.5, EMC introduced the Badger cabinets, which were much slimmer and about 5 ft in height. Those cabinets were a disaster; really, no one bought them.

Starting with the DMX800 and DMX1000, which are single cabinet, EMC introduced the DMX2000 in a 2-cabinet and the DMX3000 in a 3-cabinet style.

Also, if you ever wondered where those Symm model numbers came from:

1st digit: 3 = Open Systems, 5 = Mainframe, 8 = Mixed
2nd digit: related to cabinet size, dependent on generation
3rd digit: 00 = 5¼ inch drives, 30 = 3½ inch drives

The DMX uses 3½ inch Fibre Channel drives.

Symmetrix Hardware Components

There are various components in an EMC Symmetrix series machine. To name a few:

Disk Directors
Channel Directors (CA, FA, EA, FI, GI, FA2)
Memory Cards
Disk Drives
Power Supply – for 3-bay cabinets (AC-DC PS and DC-DC PS)
Fan
Back End Disk Adapters
Back End Channel Adapters (CA, FA, EA, FI, GI, FA2)
Communication Card
EPO Module
Battery
Service Processor


DMX Hardware Components

Some of the important hardware components of a DMX machine are:

Disk Directors (DA)
Back End Disk Adapters
Channel Directors (CA, EA, FA, FA2, FI, GI)
Back End Channel Directors
Memory Cards
Power Module
Fan
Battery
ECM (Environmental Control Module), CCM (Communication Control Module)
Disk Drives
Service Processor

EMC Symmetrix DMX-4: Components

In my previous posts on the EMC Symmetrix 3, 5 and 8 series and the EMC Symmetrix DMX and DMX-2 series we discussed some of the important components that make up those systems; in this post we will discuss some of the important components of the EMC Symmetrix DMX-4.

The EMC Symmetrix DMX-4 consists of 1 System Bay and 1 to 8 scalable Storage Bays. Each Storage Bay can hold up to 240 disk drives, totaling 1920 drives in 8 Storage Bays, or a 1024 TB system. Systems with special requirements can be configured with 2400 drives instead of the standard 1920.


The primary bay is the System Bay, which includes all the directors, the service processor, adapters, etc., while the Storage Bays contain all the disk drives.

System Bay (1 Bay)

Channel directors: Front End Directors (FC, ESCON, FICON, GigE, iSCSI), these are the I/O Directors.

Disk directors: Back End Directors (DA), these control the drives in the System.

Global memory directors: Mirrored memory is available with the DMX-4. Memory directors come in 8GB, 16GB, 32GB and 64GB sizes, totaling 512GB (256GB mirrored).

Disk adapters: Back End Adapters, they provide an interface to connect disk drives through the storage bays.

Channel adapters: Front End Adapters, they provide an interface for host connection (FC, ESCON, FICON, GigE, iSCSI).

Power supplies: 3 Phase Delta or WYE configuration, Zone A and Zone B based Power Supplies, maximum 8 of them in the system bay.

Power distribution units (PDU): One PDU per zone, 2 in total.

Power distribution panels (PDP): One PDP per zone, 2 in total, power on/off, main power.

Battery Backup Unit (BBU): 2 battery backup modules, 8 BBU units, providing 3 to 5 minutes of backup power in case of a catastrophic power failure.
Cooling fan modules: 3 fans at the top of the bay to keep it cool.
Communications and Environmental Control (XCM) modules: Fabric and environmental monitoring; 2 XCMs located at the rear of the system bay. This is the message fabric, the interface between directors, drives, cache, etc. Environmental monitoring watches all the VPD (Vital Product Data).
Service processor components: Keyboard, video, display and mouse. Used for remote monitoring, call home, diagnostics and configuration purposes.
UPS: UPS for the service processor.


Silencers: Made of foam inside; different silencers for the System and Storage bays.

Storage Bay (1 bay minimum to 8 bays maximum)

Disk drives: A combination of 73GB, 146GB, 300GB, 400GB, 450GB, 500GB and 1TB drives, plus EFDs now available in 73GB, 146GB and 200GB. Speeds of 10K, 15K and 7.2K (SATA) are all compatible, but each RAID group and each drive enclosure should only contain drives of similar speed and type. 15 drives per enclosure, 240 per bay, 1920 total in the system. If the drive LED is green the loop speed is 2Gb/s; if it is blue, the speed is 4Gb/s.
Drive Enclosure Units: 16 per Storage Bay, 15 drives per enclosure.
Battery Backup Unit (BBU): 8 BBU modules per Storage Bay; each BBU supports 4 drive enclosures.
Power Supply, System Cooling Module: 2 per drive enclosure.
Link Control Cards: 2 per drive enclosure.
Power Distribution Unit (PDU): 1 PDU per zone, 2 in total.
Power Distribution Panels (PDP): 1 PDP per zone, 2 in total.

EMC Symmetrix DMX-4: Supported Drive Types

In this section we will discuss the supported drive models for the EMC Symmetrix DMX-4. Right before the release of the Symmetrix V-Max systems, in early February 2009, we saw added support for EFDs (Enterprise Flash Disks) on the Symmetrix DMX-4 platform. The additions were denser 200GB and 400GB EFDs.


The following drive sizes are supported with Symmetrix DMX-4 systems at the current microcode 5773: 73GB, 146GB, 200GB, 300GB, 400GB, 450GB, 500GB and 1000GB. Drive flavors include 10K or 15K rotational speeds, with a 2Gb/s or 4Gb/s interface.

The drives can auto-negotiate to the back-end loop speed. If the drive LED is green the speed is 2Gb/s; if it is neon blue, it is a 4Gb/s interface.
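To verify what is actually installed in a given array, a quick SYMCLI sketch (the -sid value is an example, and the output columns vary by Solutions Enabler version):

    rem Summarize the physical spindles in the array: vendor, type, speed, capacity
    symdisk -sid 1234 list

    rem Per-spindle detail, including form factor and rotational speed
    symdisk -sid 1234 list -v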

The following are details on the drives for Symmetrix DMX-4 systems. You will find details on drive type, rotational speed, interface, device cache, access time, raw capacity, open systems formatted capacity and mainframe formatted capacity.

73GB FC Drive
Drive Speed: 10K
Interface: 2Gb/s / 4Gb/s
Device Cache: 16MB
Access speed: 4.7 – 5.4 ms
Raw Capacity: 73.41 GB
Open Systems Formatted Cap: 68.30 GB
Mainframe Formatted Cap: 72.40 GB

73GB FC Drive
Drive Speed: 15K
Interface: 2Gb/s / 4Gb/s
Device Cache: 16MB
Access speed: 3.5 – 4.0 ms
Raw Capacity: 73.41 GB
Open Systems Formatted Cap: 68.30 GB
Mainframe Formatted Cap: 72.40 GB


146GB FC Drive
Drive Speed: 10K
Interface: 2Gb/s / 4Gb/s
Device Cache: 32MB
Access speed: 4.7 – 5.4 ms
Raw Capacity: 146.82 GB
Open Systems Formatted Cap: 136.62 GB
Mainframe Formatted Cap: 144.81 GB

146GB FC Drive
Drive Speed: 15K
Interface: 2Gb/s / 4Gb/s
Device Cache: 32MB
Access speed: 3.5 – 4.0 ms
Raw Capacity: 146.82 GB
Open Systems Formatted Cap: 136.62 GB
Mainframe Formatted Cap: 144.81 GB

300GB FC Drive
Drive Speed: 10K
Interface: 2Gb/s / 4Gb/s
Device Cache: 32MB
Access speed: 4.7 – 5.4 ms
Raw Capacity: 300.0 GB
Open Systems Formatted Cap: 279.17 GB
Mainframe Formatted Cap: 295.91 GB


300GB FC Drive
Drive Speed: 15K
Interface: 2Gb/s / 4Gb/s
Device Cache: 32MB
Access speed: 3.6 – 4.1 ms
Raw Capacity: 300.0 GB
Open Systems Formatted Cap: 279.17 GB
Mainframe Formatted Cap: 295.91 GB

400GB FC Drive
Drive Speed: 10K
Interface: 2Gb/s / 4Gb/s
Device Cache: 16MB
Access speed: 3.9 – 4.2 ms
Raw Capacity: 400.0 GB
Open Systems Formatted Cap: 372.23 GB
Mainframe Formatted Cap: 394.55 GB

450GB FC Drive
Drive Speed: 15K
Interface: 2Gb/s / 4Gb/s
Device Cache: 16MB
Access speed: 3.4 – 4.1 ms
Raw Capacity: 450.0 GB
Open Systems Formatted Cap: 418.76 GB
Mainframe Formatted Cap: 443.87 GB


500GB SATA II Drive
Drive Speed: 7.2K
Interface: 2Gb/s / 4Gb/s
Device Cache: 32MB
Access speed: 8.5 – 9.5 ms
Raw Capacity: 500.0 GB
Open Systems Formatted Cap: 465.29 GB
Mainframe Formatted Cap: 493.19 GB

1000GB SATA II Drive
Drive Speed: 7.2K
Interface: 2Gb/s / 4Gb/s
Device Cache: 32MB
Access speed: 8.2 – 9.2 ms
Raw Capacity: 1000.0 GB
Open Systems Formatted Cap: 930.78 GB
Mainframe Formatted Cap: 986.58 GB

73GB EFD
Drive Speed: Not Applicable
Interface: 2Gb/s
Device Cache: Not Applicable
Access speed: 1 ms
Raw Capacity: 73.0 GB
Open Systems Formatted Cap: 73.0 GB
Mainframe Formatted Cap: 73.0 GB


146GB EFD
Drive Speed: Not Applicable
Interface: 2Gb/s
Device Cache: Not Applicable
Access speed: 1 ms
Raw Capacity: 146.0 GB
Open Systems Formatted Cap: 146.0 GB
Mainframe Formatted Cap: 146.0 GB

200GB EFD
Drive Speed: Not Applicable
Interface: 2Gb/s / 4Gb/s
Device Cache: Not Applicable
Access speed: 1 ms
Raw Capacity: 200.0 GB
Open Systems Formatted Cap: 196.97 GB
Mainframe Formatted Cap: 191.21 GB

400GB EFD
Drive Speed: Not Applicable
Interface: 2Gb/s / 4Gb/s
Device Cache: Not Applicable
Access speed: 1 ms
Raw Capacity: 400.0 GB
Open Systems Formatted Cap: 393.84 GB
Mainframe Formatted Cap: 382.33 GB


Support for 73GB and 146GB EFDs has been dropped with the Symmetrix V-Max systems; they are still supported with the Symmetrix DMX-4 systems, which in addition to 73GB and 146GB also support the 200GB and 400GB EFDs.

EMC Symmetrix DMX-4 and Symmetrix V-Max: Basic Differences


In this post we will cover some important aspects, properties, characteristics and differences between the EMC Symmetrix DMX-4 and the EMC Symmetrix V-Max. It seems a lot of users are searching blog posts for this information. From a high level, I have tried to cover the differences in performance and architecture related to the directors, engines, cache, drives, etc.

It might be a good idea to run both the DMX-4 and V-Max systems through IOmeter to collect some basic comparisons of front-end and coordinated back-end / cache performance data.

Anyway, enjoy this post, and look for more related data in future posts.

EMC Symmetrix DMX-4 vs. EMC Symmetrix V-Max

Lineage:
DMX-4: The predecessor of the DMX-4 is the DMX-3; its architecture is similar to that of the DMX-3, and it was truly sold as a generation upgrade to the DMX-3.
V-Max: The predecessor of the V-Max is the DMX-4; the architecture is completely redesigned with this generation and is completely different from the DMX-4.

Architecture:
DMX-4: DMX, Direct Matrix Architecture; directors and cache sit on separate physical slots/cards; 24 slots and no engine concept; directors are connected to the system through a legacy backplane.
V-Max: Virtual Matrix Architecture; engine-based concept with condensed director and cache on board; the concept of slots is gone; 8 engines in one system (serial number); fits in the category of modular storage and eliminates the bottleneck of a backplane.

Capacity and drive count:
DMX-4: Max capacity 1 PB raw storage; max drives 1900, 2400 on RPQ.
V-Max: Max capacity 2 PB usable storage; max drives 2400.

Management software:
DMX-4: Symmetrix Management Console 6.0 and Solutions Enabler 6.0; management has gotten a bit easier compared to the previous generation Symmetrix; SMC 6.0 has no templates or wizards; SMC is free up to and including the DMX-4.
V-Max: Symmetrix Management Console 7.0 (the so-called "ECC lite") and Solutions Enabler 7.0; templates and wizards within the new SMC 7.0 console; SMC is licensed at a cost starting with the V-Max systems.

Drives:
DMX-4: EFDs supported: 73GB, 146GB, 200GB, 400GB; FC drives 73GB, 146GB, 300GB, 400GB, 450GB at 10K or 15K; SATA II 500GB and 1000GB at 7.2K.
V-Max: EFDs supported: 200GB, 400GB; FC drives 146GB, 300GB, 400GB, 450GB at 10K or 15K; SATA II 1000GB at 7.2K.

Ports and bays:
DMX-4: 4 ports per director; 1 system bay and 9 storage bays; 64 Fibre Channel ports total on all directors for host connectivity; 32 FICON ports; 32 GbE iSCSI ports.
V-Max: 8 ports per director; 1 system bay and 10 storage bays; 128 Fibre Channel ports total on the directors/engines for host connectivity; 64 FICON ports; 64 GbE iSCSI ports.

Cache:
DMX-4: Total cache 512GB, 256GB usable (mirrored); global cache lives on global memory directors.
V-Max: Total cache 1024GB, 512GB usable (mirrored); global cache lives on the local engine chips. As cache is shared between multiple engines, some cache latency is expected when multiple engines request that I/O.

Drive interface:
DMX-4: Drive interface speed is 2Gb/s or 4Gb/s, auto-negotiated; a green drive LED means a 2Gb/s loop speed, a blue LED means 4Gb/s.
V-Max: Only the 4Gb/s drive speed is supported.

Drive format:
DMX-4: 512-byte style drive format.
V-Max: 520-byte style drive format, with 8 bytes used for storing data-check info. Remember the Clariion drive styles; the data stored in the two cases is different. The 8 bytes used with the Symmetrix V-Max are a data integrity field based on the T10-DIF standard proposal.

FAST:
DMX-4: FAST (Fully Automated Storage Tiering) may not be supported on DMX-4s, though it might come, since support is most likely based on a microcode level rather than a hardware level.
V-Max: FAST will be supported later this year on the V-Max systems.

Microcode and release:
DMX-4: Runs microcode 5772 / 5773; released July 2007. Microcode 5772 and 5773 were built on the previous generations 5771 and 5772 respectively.
V-Max: Runs microcode 5874; released April 2009. Microcode 5874 was built on base 5773 from the previous-generation DMX-4.

Timefinder:
DMX-4: Timefinder performance was better than the previous generation's.
V-Max: 300% better Timefinder performance compared to the DMX-4.

Service processor:
DMX-4: No IP management interface into the service processor.
V-Max: IP management interface to the service processor; it can be managed through the customer's own network / IP infrastructure.

RVA and volume limits:
DMX-4: No RVA (RAID Virtual Architecture); the largest supported volume is 64GB per LUN; 128 hypers per drive (LUNs per drive); the DMX-4 presents some challenges with mirror positions.
V-Max: Implements RVA, which allows a single mirror position for RAID volumes, leaving the remaining 3 positions for BCVs, SRDF, migration, etc.; large volume support of 240GB per LUN (open systems) and 223GB per LUN (mainframe); 512 hypers per drive; reduced mirror positions give customers good flexibility for migration and other opportunities.

Configuration changes:
DMX-4: Configuration change is not as robust as on V-Max systems.
V-Max: Introduced concurrent configuration change, allowing customers to perform change management through a single set of scripts rather than a step-based process.

Provisioning:
DMX-4: No Virtual Provisioning with RAID 5 and RAID 6 devices; no autoprovisioning groups.
V-Max: Virtual Provisioning is now allowed with RAID 5 and RAID 6 devices; the concept of autoprovisioning groups is introduced with the V-Max systems.

Minimum configuration:
DMX-4: A single-storage-cabinet system supporting 240 drives can be purchased with a system cabinet; no concept of engines, architecture based on slots.
V-Max: The minimum size V-Max SE (single engine) system can be purchased with 1 engine and 360 drives max. Each engine consists of 4 quad-core Intel chips; 32GB, 64GB or 128GB of cache per engine; 16 front-end ports per engine; and 4 back-end ports per engine connecting the system bay to the storage bays.

Processors and interconnect:
DMX-4: PowerPC chips used on the directors; directors are connected through a common backplane; no MIBE or SIB.
V-Max: Intel quad-core chips used on the engines; engines are connected through a copper RapidIO interconnect at 2.5Gb/s. The MIBE (Matrix Interface Board Enclosure) connects the odd and even (Fabric A and Fabric B) directors together, and the SIB (System Interface Board) connects the engines using RapidIO.

PowerPath VE:
DMX-4: PowerPath VE supported for vSphere virtual machines on the DMX-4.
V-Max: PowerPath VE supported for vSphere virtual machines on the V-Max.

Marketing:
DMX-4: Strong marketing and good success.
V-Max: Sold with a big marketing buzz around hundreds of engines, millions of IOPS, TBs of cache and Virtual Storage; "virtual marketing" for the Virtual Matrix, since the product was introduced with FAST as a sales strategy while FAST was not available until at least the later part of the year.

Federation and InfiniBand:
DMX-4: Systems cannot be federated; no federation, and no support for InfiniBand expected.
V-Max: The concept of federation has been introduced with the V-Max systems, but systems are not federated in production or customer environments yet. Would InfiniBand be supported in the future to connect engines at short or long distances (several meters)? With federation expected in upcoming versions of the V-Max, how would cache latency play a role with federation between systems tens of meters apart?

Not supported on either platform: FCoE, 10Gb Ethernet, and 8Gb/s loop interface speeds.

Vault:
DMX-4: 256GB total vault.
V-Max: 200GB of vault space per engine; with 8 engines that is 1.6TB of vault storage.

Performance:
DMX-4: A monster storage system; performance has been great compared to its previous generations (DMX, DMX2, DMX-3); Fibre Channel performance better than the DMX and DMX2; the first generation to support 4Gb/s host connectivity; large metadata overhead with the number of volumes, devices, cache slots, etc.
V-Max: The V-Max building blocks (engines) can create a much larger storage monster. IOPS per port: 128 MB/s hits, 385 read / 385 write; for 2 ports: 128 MB/s hits, 635 read / 640 write. FICON performance is 2.2x that of the DMX-4, and 2 ports can drive as many as 17,000 IOPS on FICON. Fibre Channel performance is improved by about 36% compared to the DMX-4, at 5,000 IOPS per channel. Metadata overhead is reduced by 50 to 75% relative to the DMX-4. Overall the V-Max performs better than the DMX-4, with 2x the connectivity and 3x the usability (storage).

SRDF:
DMX-4: SRDF technology supported; 128 total SRDF groups; 16 groups on a single port.
V-Max: New SRDF/EDP (Extended Distance Protection) with a diskless R21 pass-through device (no disk required for the pass-through); 250 total SRDF groups; 64 groups on a single port.

RAID 6:
DMX-4: The DMX-4 was the first version of Symmetrix where RAID 6 support was rolled out; RAID 6 support on the DMX-4 was a little premature.
V-Max: RAID 6 performance is 3.6 times better than on the DMX-4; RAID 6 on the V-Max performs equivalently to RAID 1 on the DMX-4.

SATA II:
DMX-4: SATA II performance on the DMX-4 is better than on the V-Max.
V-Max: SATA II drives do not support the 520-byte style, so EMC takes those 8 bytes (520 - 512) of T10-DIF data-integrity calculation and writes them in blocks or chunks of 64K throughout the entire drive, causing performance degradation; SATA II performance on the V-Max is worse than on the DMX-4.

Director numbering and fault domains:
DMX-4: Director count goes from Director 1 on the left to Director 18 (hex) on the right. Two director failures will not cause a system outage or data loss / data unavailability, provided the two directors are not in the same fabric or bus and are not DIs (dual initiators) of each other. Single loop outages will not cause DU.
V-Max: Director count goes from 1 at the bottom to 16 (F) at the top, based on each engine having 2 directors: 8 engines, 16 directors. A single engine failure (2 directors) will not cause data loss or data unavailability, and the system will not take an outage; the failed components can be directors, engines, MIBE, power supplies, fans, or the cache in a single engine (2 directors). Single loop outages will not cause DU.

EMC Symmetrix V-Max: Enginuity 5874

EMC Symmetrix V-Max systems were introduced in April 2009. With this new generation of Symmetrix came a new name, V-Max, and a new Enginuity family of microcode, 5874.

With the 5874 family of microcode there are several major areas of enhancement, as listed below:

Base enhancements
Management interface enhancements
SRDF functionality changes
Timefinder performance enhancements
Open Replicator support and enhancements
Virtualization enhancements

EMC also introduced SMC 7.0 (Symmetrix Management Console) for managing this generation of Symmetrix.

With the Enginuity 5874 family you also need Solutions Enabler 7.0.
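To check which Solutions Enabler / SYMCLI version a management host is running, a hedged one-liner (assuming the symcli front-end command is present, as it is in typical installs):

    rem Print the SYMCLI / Solutions Enabler version information for this host
    symcli -v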

The initial Enginuity release was 5874.121.102; a month into the release we saw a new emulation and SP release, 5874.122.103, and the latest release as of the 18th of June 2009 is 5874.123.104. These new emulation and SP releases don't add any new features to the microcode, just patches and fixes related to maintenance, DU/DL and environmentals.

Based on EMC's initial list of enhancements, plus a few we heard at EMC World 2009, here is a summary.

RVA: RAID Virtual Architecture

With Enginuity 5874, EMC introduced the concept of single mirror positions. It has always been challenging to reduce mirror positions, since they cap out at 4. With the enhancements to mirror positions for SRDF environments and for RAID 5 (3D+1P, 7D+1P) / RAID 6 (6D+2P, 14D+2P) / RAID 1 devices, doors now open to further migration and data movement opportunities for SRDF and RAID devices.

Large Volume Support

With this version of Enginuity we get a max volume size of 240GB for open systems and 223GB for mainframe systems, with 512 hypers per drive. The maximum drive size supported on a Symmetrix V-Max system is the 1TB SATA II drive; the maximum EFD size is 400GB.

Dynamic Provisioning

Enhancements related to SRDF and BCV device attributes will improve overall efficiency during configuration management and will provide methods and means for faster provisioning.

Concurrent Configuration Changes

Enhancements to concurrent configuration changes allow the customer and the customer engineer to perform certain procedures and changes, through the Service Processor and through Solutions Enabler, combined and executed in a single script rather than run as a series of changes.

Service Processor IP Interface

All Service Processors attached to Symmetrix V-Max systems will have Symmetrix Management Console 7.0 on them, allowing customers to log in and perform Symmetrix management functions. The service processor can also be managed through the customer's current IP (network) environment. Symmetrix Management Console has to be licensed and purchased from EMC for V-Max systems; the prior versions of SMC were free. SMC now also has the capability to be opened through a web interface.

SRDF Enhancements

With the introduction of RAID 5 and RAID 6 devices on the previous generation of Symmetrix (DMX-4), the V-Max now offers 300% better performance with Timefinder and other SRDF layered apps, making the process very efficient and resilient.

Enhanced Virtual LUN Technology

Enhancements to Virtual LUN technology allow customers to non-disruptively change the location of a disk, either physically or logically, further simplifying the process of migration on various systems.

Virtual Provisioning

Virtual Provisioning can now be extended to RAID 5 and RAID 6 devices, which was restricted in previous versions of Symmetrix.

Autoprovisioning Groups

Using Autoprovisioning groups, customers can now perform device masking by creating host initiator groups, front-end port groups and storage volume groups, as shown in the sketch below. There was an EMC challenge at the EMC World 2009 Symmetrix corner to autoprovision the Symms with a minimum number of clicks. Autoprovisioning groups are supported through Symmetrix Management Console.
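As an illustration of that autoprovisioning workflow, here is a hedged SYMCLI sketch; the group names, WWN, director ports and device range are all hypothetical, and the exact symaccess syntax can differ between Solutions Enabler 7.x releases:

    rem Create an initiator group holding the host HBA WWN
    symaccess -sid 1234 create -name prod_ig -type initiator -wwn 10000000c9734567

    rem Create a port group with the front-end director ports
    symaccess -sid 1234 create -name prod_pg -type port -dirport 7E:0,10E:0

    rem Create a storage group with the devices to be presented
    symaccess -sid 1234 create -name prod_sg -type storage devs 0100:0103

    rem Tie the three groups together in a masking view; the host then sees the devices
    symaccess -sid 1234 create view -name prod_mv -ig prod_ig -pg prod_pg -sg prod_sg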

So the above are the highlights of EMC Symmetrix V-Max Enginuity 5874. As new versions of the microcode are released later in the year, stay plugged in for more info.


EMC Symmetrix V-Max: Supported drive types

With the release of the EMC Symmetrix V-Max systems, EMC introduced higher-density EFDs (Enterprise Flash Disks) than those supported on its predecessor, the EMC Symmetrix DMX-4.

Below are some stats on the supported drive types on a Symmetrix V-Max system running the 5874.123.104 microcode.

Possibly with the introduction of FAST (Fully Automated Storage Tiering) later in the year we will see an upgrade of the microcode family for the V-Max systems to 5976; with that, also expect much denser EFD support.

In the meantime we should at least see some additional support for vSphere 4.0 (VMware) in 2009 with the 5875 family of microcode. With that we should see a new concept of Federation with Symmetrix V-Max systems, where EMC might give some clues on how the 8-engine systems might be expanded into 16- or 32-engine systems.

The following drive sizes are supported with Symmetrix V-Max systems at the current microcode 5874: 146 GB, 200 GB, 300 GB, 400 GB, 450 GB and 1000 GB.

Drive Types, Rotational Speed and Formatted Capacity

146 GB FC Drive
Drive Speed: 15K
Open Systems Format Cap: 143.53 GB
Mainframe Format Cap: 139.34 GB

300 GB FC Drive
Drive Speed: 15K
Open Systems Format Cap: 288.19 GB
Mainframe Format Cap: 279.77 GB


400 GB FC Drive
Drive Speed: 10K
Open Systems Format Cap: 393.84 GB
Mainframe Format Cap: 382.32 GB

450 GB FC Drive
Drive Speed: 15K
Open Systems Format Cap: 432.29 GB
Mainframe Format Cap: 419.64 GB

1000 GB SATA II Drive
Drive Speed: 7.2K
Open Systems Format Cap: 984.81 GB
Mainframe Format Cap: 956.02 GB

200 GB EFD
Drive Speed: Not Applicable
Open Systems Format Cap: 196.97 GB
Mainframe Format Cap: 191.21 GB

400 GB EFD
Drive Speed: Not Applicable
Open Systems Format Cap: 393.84 GB
Mainframe Format Cap: 382.33 GB

Support for 73GB and 146GB EFDs has been dropped with the Symmetrix V-Max systems; they are still supported with the Symmetrix DMX-4 systems, which in addition to 73GB and 146GB also support the 200GB and 400GB EFDs.


Symmetrix V-Max Systems: SRDF Enhancements and Performance

This is one of those posts I always wanted to write, on the Symmetrix V-Max and the SRDF enhancements incorporated in the 5874 microcode.

Yesterday morning I had a chat with a friend and ended up talking about SRDF, and later in the day I had another interesting conference call on SRDF with a potential customer. So I really thought today was the day I should go ahead and finish this post.

Here are the highlights of SRDF for V-Max systems.

SRDF Groups:

1. 250 SRDF groups with Symmetrix V-Max (5874) systems; the prior generation Symmetrix DMX-4 (5773) supported 128 groups. Logically, even with 2PB of storage, customers very seldom hit that mark of 250 groups.
2. 64 SRDF groups per FC / GigE channel; the previous generation Symmetrix DMX-4 (5773) supported 32 groups per channel.

SRDF consistency support with 2 mirrors:

1. Each leg is placed in a separate consistency group so it can be changed separately without affecting the other.

Active SRDF sessions and addition/removal of devices:

1. Customers can now add or remove devices from a group without invalidating the entire group; once a device becomes fully synced it is added to the consistency group. (With the previous generation Symmetrix DMX-4, a single device add or remove would invalidate the entire group, requiring customers to run a full establish again.)

SRDF invalid tracks:

1. The "long tail" search for the last few invalid tracks has been vastly improved; the search procedure and methods have been completely redesigned. It is a known fact with SRDF that the last invalid tracks take a long time to sync because of the cache search.
2. SRDF establish operation speed is improved by at least 10x; see the numbers below in the performance data.

Timefinder/Clone & SRDF restores:

1. Customers can now restore Clones to R2s and R2s to R1s simultaneously; initially, with the DMX-4, this was a 3-step process.

SRDF/EDP (Extended Distance Protection):

1. 3-way SRDF for long distance, with the secondary site acting as a pass-through site using cascaded SRDF.
2. From primary to secondary sites customers can use SRDF/S; from secondary to tertiary sites customers can use SRDF/A.
3. Diskless R21 pass-through device: the data does not get stored on the drives or consume disk. The R21 really lives in cache, so the host is not able to access it; more cache is needed based on the amount of data transferred.
4. R1 --S--> R21 --A--> R2 (production site > pass-through site > out-of-region site).
5. Primary (R1) sites can have DMX-3, DMX-4 or V-Max systems, and tertiary (R2) sites can have DMX-3, DMX-4 or V-Max systems, while the secondary (R21) site needs to have a V-Max system.

R22 – Dual Secondary Devices:

1. R22 devices can act as target devices for 2 R1 devices.
2. One source device can perform read/write on R22 devices.
3. RTO is improved when the primary site goes down.

Other enhancements:

1. Dynamic Cache Partitioning enhancements
2. QoS for SRDF/S
3. Concurrent writes
4. Linear scaling of I/O
5. Response times equivalent across groups
6. Virtual Provisioning supported with SRDF
7. SRDF supports linking a virtually provisioned device to another virtually provisioned device
8. Much faster dynamic SRDF operations
9. Much faster failover and failback operations
10. Much faster SRDF syncs

Some very limited V-Max performance stats related to SRDF:

1. 36% improved FC performance
2. FC I/O per channel up to 5000 IOPS
3. GigE I/O per channel up to 4000 IOPS
4. 260 MB/s RA channel I/O rate; with DMX-4 it was 190 MB/s
5. 90 MB/s GigE channel I/O rate; with DMX-4 it was almost the same
6. 36% improvement on SRDF copy over FC
7. New SRDF pairs can be created in 7 seconds, compared to 55 seconds with previous generations
8. Incremental establishes after splits happen in 3 seconds, compared to 6 seconds with previous generations
9. Full SRDF establishes happen in 4 seconds, compared to 55 seconds with previous generations
10. Failback SRDF happens in 19 seconds, compared to 47 seconds with previous generations
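To make the commands behind these numbers concrete, here is a hedged sketch of basic SRDF operations with SYMCLI; the sid and the device group name are hypothetical, and the available options vary by Solutions Enabler version:

    rem List the RDF (RA) groups configured on the array
    symcfg -sid 1234 list -rdfg all

    rem Query the state of an SRDF device group named prod_rdf
    symrdf -g prod_rdf query

    rem Full establish (initial R1 -> R2 synchronization) of the pairs in the group
    symrdf -g prod_rdf establish -full

    rem Planned site swap: fail over to the R2 side, then fail back later
    symrdf -g prod_rdf failover
    symrdf -g prod_rdf failback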

EMC Symmetrix Enginuity Operating Environment

The Clariion environment is governed by FLARE code, and the Symmetrix / DMX by Enginuity code. The Enginuity code was developed internally at EMC and, to my knowledge, has so far not been outsourced anywhere for development. EMC Engineering is the crown of EMC, inventing new technology and pushing the envelope in defining future products, technologies and markets. Unlike the Clariion FLARE code, which is customer upgradeable, the code on an EMC Symmetrix / DMX is upgraded through EMC only. This code sits on the Service Processor but also gets loaded on all the directors during installation and upgrades. The directors are also loaded with the BIN file (the configuration of the Symmetrix) along with the emulation code. The initial Enginuity code load and BIN file setup are performed when the customer first purchases the machine, customized to their SAN environment.


As new Enginuity code releases hit the market, customers can get the upgrades from EMC. It is very normal for customers to go through multiple code upgrades during the 3 to 5 year life cycle of these machines.

The service processor houses the code, but the Symmetrix / DMX can be rebooted and remain fully functional without the service processor present. The service processor allows an EMC trained and qualified engineer to perform diagnostics, and it enables the call home feature for proactive fixes and failures. For any host-related configuration changes, however, the presence of the service processor, including EMC's SymmWin software, is absolutely necessary; without it, configuration locks cannot be obtained on the machine through ECC or SymCLI, blocking customer BIN file changes for reconfiguration.

Enginuity code level breakdowns are based on the family of machines. Typically the 50xx versions are limited to the Symm 3.0 models (3100/5100, 3200/5200, 3500/5500).

The 37xx versions are limited to the Symm 2.5 models (4200, 4400, 4800). The code levels 5265, 5266 and 5267 are limited to the Symm 4.0 (3330/5330, 3430/5430, 3700/5700) and Symm 4.8 (3630/5630, 3830/5830, 3930/5930) families of machines.

For Symm 5.0 and 5.5, the Enginuity code versions are 5567 and 5568. The last code version for the Symm 5.0 and 5.5 is 5568.68.28; there will be no code upgrades for these Symmetrix after this version.

Going into the DMX1 & DMX2 (DMX800, DMX1000, DMX2000, DMX3000), code levels 5669, 5670 and 5671 are the major Enginuity code families. For the DMX3 and DMX4, code levels 5771, 5772 and 5773 are the major releases. Version 5671.75.75 is the last known version for the DMX1 and DMX2 family of machines.

The guidelines for the Enginuity code level breakdown are as follows, using 5671.75.75 as an example.

First two digits (56):

50 = Symm 3.0
52 = Symm 4.0, 4.8
55 = Symm 5.0, 5.5
56 = DMX1/DMX2
57 = DMX3/DMX4

Next two digits (71), the microcode family:

67, 68 = major Symmetrix releases for Symm 5.0 / Symm 5.5
69, 70, 71 = major Symmetrix-DMX releases for DMX1/DMX2
71, 72, 73 = major Symmetrix-DMX releases for DMX3/DMX4

Next two digits (75): the emulation number, designated as EE.

Last two digits (75): the field release level / Service Processor code level (SymmWin version).
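You can read the running code level off the array with SYMCLI (a sketch; the -sid value is an example and the field names vary by version):

    rem The array summary includes the Enginuity (microcode) level
    symcfg list

    rem Verbose output adds the microcode patch level and date
    symcfg -sid 1234 list -v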

The version of the Enginuity code defines what functionality and features the Symmetrix / DMX has for that generation. As the hardware gets better and faster, the Enginuity code has to improve and add features to perform along with it.


EMC Symmetrix: BIN file

The EMC Symmetrix BIN file is largely an unknown topic in the storage industry, and practically no information about it is available. This post is an attempt to shed some light on what a BIN file is, how it works, what's in it, and why it is essential to the Enginuity code.

Some EMC folks have capitalized on the BIN file for the personality it brings to the Symmetrix, while EMC's competition has always used it against them, arguing that it introduces complexity into the storage environment around management and change control.

Personally, I feel a Symmetrix wouldn't be a Symmetrix if the BIN file weren't there. The personality, characteristics, robustness, compatibility, flexibility, integration with operating systems, etc. wouldn't exist if the BIN file didn't.

With the total number of operating systems, device types, channel interfaces and flags it supports today, it is arguably one of the most compatible storage arrays on the market. Configuration and compatibility on the Symmetrix can be verified using the E-Lab navigator available on Powerlink.

So here are some facts about the BIN file

• Only used with Symmetrix systems (Enginuity code).
• BIN file stands for BINARY file.
• The BIN file holds all information about the Symmetrix configuration.
• One BIN file per system serial number is required.
• The BIN file was used with Symmetrix Gen 1 in 1990 and is still used in 2010 with Symmetrix V-Max systems.


• The BIN file holds information on SRDF configurations, total memory, memory in slots, the serial number of the unit, the number of directors, types of directors, director flags, engines, engine ports, front-end ports, back-end ports, drives on the loop, drives on the SCSI bus, number of drives per loop, drive types in the slots, drive speeds, volume addresses, volume types, metas, device flags and many more settings.
• The setup for host connection, whether the OS is Open Systems or Mainframe, using FICON, ESCON, GbE, FC, RF, etc., is all defined in the BIN file. Director emulations, drive formats (OSD or CKD), format types, drive speeds, etc. are also all defined in the BIN file.
• A BIN file is required to make a system active. It is created based on customer specifications and installed by EMC during the initial setup.
• Any ongoing changes in the environment related to hardware upgrades, defining devices, changing flags, etc. are all accomplished using BIN file changes.
• BIN file changes can be accomplished 3 ways.
• BIN file changes for hardware upgrades are typically performed by EMC only.
• BIN file changes for other items (devices, directors, flags, metas, SRDF configurations, etc.) are performed through the SYMAPI infrastructure using SymCLI, ECC (now Ionix) or SMC (Symmetrix Management Console) by the customer. (Edited based on comments: only some changes now require a traditional BIN file change; the others are typically performed using syscalls in the Enginuity environment.)
• Solutions Enabler is required on the SymCLI, ECC and SMC management stations to enable the SYMAPI infrastructure to operate.
• The VCMDB needs to be set up on the Symmetrix for SymCLI, ECC and SMC changes to work.
• Gatekeeper devices need to be set up on the Symmetrix front-end ports for SymCLI, ECC and SMC changes to work.
• For Symmetrix Optimizer to work in your environment, you need DRV devices set up on your Symmetrix. (Edited based on comments: only required up to the DMX platform; from DMX3/4 and V-Max onward, Optimizer changes are performed using syscalls.)
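A quick, hedged sketch for checking the last two prerequisites from a management host (the -sid value is an example, and the symgate syntax may vary by Solutions Enabler version):

    rem List gatekeeper devices defined or associated on this host
    symgate -sid 1234 list

    rem syminq shows every device the host sees; gatekeepers are the very small (a few MB) devices
    syminq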


Back in the day

Any and all BIN file changes on the Symmetrix 3.0 and Symmetrix 4.0 used to be performed by EMC from the Service Processor. Over the years, with the introduction of SYMAPI and other layered software products, EMC is now seldom involved in the process.

Hardware upgrades

BIN file changes for hardware upgrades typically have to be initiated and performed by EMC. If the customer is looking at adding 32GB of cache to an existing DMX-4 system, adding new front-end connectivity, or upgrading a 1200-drive system to 1920 drives, all of these require BIN file changes initiated and performed by EMC. To my understanding the turnaround time is just a few days for these changes, as they require change control and other processes within EMC.

Customer initiated changes

Configuration changes around front-end ports, creating volumes, creating metas, volume flags, host connectivity, configuration flags, SRDF volume configurations, SRDF replication configurations, etc. can all be accomplished from the customer end using the SYMAPI infrastructure (with SymCLI, ECC or SMC).
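Those SYMAPI-driven changes are typically scripted with symconfigure. A minimal, hedged sketch (the sid, file name and command-file contents are examples; consult the Solutions Enabler guide for the full command-file syntax):

    rem Contents of a hypothetical command file, create_devs.txt:
    rem   create dev count=4, size=4096, emulation=FBA, config=2-Way Mir;

    rem Syntax-check, reserve resources, then apply the change
    symconfigure -sid 1234 -f create_devs.txt preview
    symconfigure -sid 1234 -f create_devs.txt prepare
    symconfigure -sid 1234 -f create_devs.txt commit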

Enginuity upgrade

Upgrading the microcode (Enginuity) on a DMX or a V-Max is not a BIN file change, but rather a code upgrade. Back in the day many upgrades were performed offline, but in this day and age all changes are online and accomplished with minimal pain.


So EMC has moved quite ahead with the Symmetrix architecture over the past 20 years, but the underlying BIN file change requirements haven’t changed over these 8 generations of Symmetrix.

Any and all BIN file changes are recommended to be done during quiet times (fewer IOPS), at scheduled change control windows. Again, these include both the changes EMC performs from a hardware perspective and the ones the customer performs for device/flag changes.

The process

During the process of a BIN file change, the configuration file, typically ending with the name *.BIN, is loaded to all the frontend directors and backend directors, including the global cache. After the upload, the system is refreshed with this new file in the global cache, and the process makes the new configuration changes active. This refresh process is called an IML (Initial Memory Load), and the BIN file is typically called the IMPL (Initial Memory Program Load) file. A customer-initiated BIN file change works in a similar way: the SYMAPI infrastructure that resides on the service processor allows the customer to interface with the Symmetrix to perform these changes. During this process, the scripts verify that the customer configurations are valid, then perform the changes and make the new configuration active.

To query the Symmetrix system for configuration details, reference the SymCLI guide. Some standard commands to query your system include symcfg, symcli, symdev, symdisk, symdrv, symevent, symhost, symgate, syminq and symstat; these will help you navigate and find all the necessary details related to your Symmetrix. Similar information can be obtained in a GUI using ECC and SMC, and both will allow the customer to initiate SYMAPI changes. Unless something has changed with the V-Max, to get an Excel-based representation of your BIN file, ask your EMC CE.
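A few illustrative query invocations are shown below. This is a hedged sketch: it assumes Solutions Enabler is installed and the array has already been discovered, and the Symmetrix ID 1234 is a placeholder for your own serial number.

# symcfg -sid 1234 list      (summary of the array: model, cache, microcode level)
# symdev -sid 1234 list      (all configured devices)
# symdisk -sid 1234 list     (physical disks on the backend)
# syminq                     (devices visible to this host)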

Issues

You cannot run two BIN files in a single system, though at times the system can end up in a state where multiple BIN files exist on various directors. This phenomenon doesn't happen too often, but an automated script that does not finish properly can put the system in this state. At that point the Symmetrix will initiate a call home immediately, and the PSE labs should typically be able to resolve the issue.

Additional software like Symmetrix Optimizer also uses the underlying BIN file infrastructure to make changes to the storage array, moving hot and cold devices based on defined criteria. There have been quite a few known cases of Symmetrix Optimizer causing the above phenomenon of multiple BIN files, though many critics will disagree with that statement.

(EDITED based on comments: Only required until DMX platform. Going forward with DMX3/4 & V-Max platforms it uses sys calls to perform these Optimizer changes).

NOTE: One piece of advice: never run SymCLI or ECC scripts for BIN file changes through a VPN-connected desktop or laptop. Always run all necessary SymCLI / SMC / ECC scripts for changes from a server in your local environment. And it is very highly recommended that you never attempt to administer your Symmetrix system with an iPhone or a BlackBerry.

Hope this serves as a starting point in your quest for more information on BIN files.

EMC Symmetrix: Calculations for Heads, Tracks, Cylinders, GB

Here is the quick and dirty math for converting EMC Symmetrix heads, tracks and cylinder sizes to actual usable GBs of space.

Based on different generations of Symmetrix systems, here is how the conversions work.

Before we jump into each model type, let’s look at what the basics are, with the following calculations.


• There are s splits (hypers) per physical device.
• There are n cylinders per split (hyper).
• There are 15 tracks per cylinder (heads).
• There are either 64 or 128 blocks of 512 bytes per track.

All the calculations discussed here are for Open Systems (FBA) device types. Different device emulations like 3380K, 3390-1, 3390-2, 3390-3, 3390-4, 3390-27 and 3390-54 have different bytes/track, bytes/cylinder and cylinders/volume.

Symmetrix 8000/DMX/DMX-2 Series Enginuity Code: 5567, 5568, 5669, 5670, 5671

Includes EMC Symmetrix 8130, 8230, 8430, 8530, 8730, 8830, DMX1000, DMX2000, DMX3000 and the various configurations within those models.

GB = Cylinders * 15 * 64 * 512 / 1024 / 1024 / 1024

e.g. a 6140-cylinder device equates to 2.81 GB of usable data:
6140 * 15 * 64 * 512 / 1024 / 1024 / 1024 = 2.81 GB

Cylinders = GB / 15 / 64 / 512 * 1024 * 1024 * 1024

Where:
15 = tracks per cylinder
64 = blocks per track
512 = bytes per block
1024 = conversion of bytes to KB to MB to GB


Symmetrix DMX-3/DMX-4 Series Enginuity Code: 5771, 5772, 5773

Includes EMC Symmetrix DMX-3, DMX-4 and various different configurations within those models.

GB = Cylinders * 15 * 128 * 512 / 1024 / 1024 / 1024

e.g. a 65520-cylinder device equates to roughly 60 GB of usable data:
65520 * 15 * 128 * 512 / 1024 / 1024 / 1024 = 59.99 GB

Cylinders = GB / 15 / 128 / 512 * 1024 * 1024 * 1024

Where:
15 = tracks per cylinder
128 = blocks per track
512 = bytes per block
1024 = conversion of bytes to KB to MB to GB

Symmetrix V-Max Enginuity Code: 5874

Includes EMC Symmetrix V-Max and various different configurations within this model.

GB = Cylinders * 15 * 128 * 512 / 1024 / 1024 / 1024

e.g. a 262668-cylinder device equates to 240.48 GB of usable data:
262668 * 15 * 128 * 512 / 1024 / 1024 / 1024 = 240.48 GB

Cylinders = GB / 15 / 128 / 512 * 1024 * 1024 * 1024

Where:
15 = tracks per cylinder
128 = blocks per track
512 = bytes per block
1024 = conversion of bytes to KB to MB to GB

The drive format on a V-Max is 520 bytes per block, of which 8 bytes (520 - 512) are used for T10-DIF (see the earlier section on DMX-4 and V-Max differences).
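If you want to script these conversions, a small one-liner works on any host with awk; the 128 blocks/track value below is the DMX-3/DMX-4/V-Max figure, so swap in 64 for the older platforms:

# awk 'BEGIN { printf "%.2f GB\n", 262668 * 15 * 128 * 512 / 1024 / 1024 / 1024 }'
240.48 GB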

EMC Symmetrix File System (SFS)

Very little is known about the Symmetrix File System, largely known as SFS. The Symmetrix File System is EMC intellectual property and is practically only used within the Symmetrix environment for housekeeping, security, access control, stats collection, performance data, algorithm selection, etc.

If there are any facts about SFS that are known to you, please feel free to leave a comment. This post talks about the effects of SFS and not really the underlying file system architecture.

Some facts about the Symmetrix File System are highlighted below.

• The Symmetrix File System (SFS) resides on volumes that have been specially created for this purpose on the Symmetrix.
• SFS volumes are created during the initial Enginuity Operating Environment load (initial install).
• 4 volumes (2 mirrored pairs) are created during this process.
• SFS volumes were introduced with the Symmetrix 8000 series, Enginuity 5567 and 5568.

Characteristics

• The 4 SFS volumes are spread across multiple Disk Directors (backend ports) for redundancy.
• SFS volumes are considered reserved space and are not available for use by the host.
• Symmetrix 8000 series: 4 SFS volumes, 3GB each (cylinder size 6140). Reserved space is 3GB x 4 vols = 12 GB total.
• Symmetrix DMX/DMX-2: 4 SFS volumes, 3GB each (cylinder size 6140). Reserved space is 3GB x 4 vols = 12 GB total.
• Symmetrix DMX-3/DMX-4: 4 SFS volumes, 6GB each (cylinder size 6140). Reserved space is 6GB x 4 vols = 24 GB total. (The GB value differs because GB is calculated differently from cylinder size on a DMX/DMX-2 vs. a DMX-3/DMX-4.)
• Symmetrix V-Max: 4 SFS volumes, 16GB each. Reserved space is 16GB x 4 vols = 64GB total.
• SFS volumes cannot reside on EFDs (Enterprise Flash Drives).
• SFS volumes cannot be moved using FAST v1 and/or FAST v2.
• SFS volumes cannot be moved using Symmetrix Optimizer.
• SFS volumes cannot reside on Vault drives or Save volumes.
• SFS volumes are specific to a Symmetrix (serial number) and do not need migration.
• SFS volumes are managed through Disk Directors (backend ports) only.
• SFS volumes cannot be mapped to Fibre Directors (now FE, frontend ports).

Effects

• SFS volumes are write enabled but can only be interfaced and managed through the Disk Directors (backend ports).
• SFS volumes can go write disabled, which can cause issues around the VCMDB. VCMDB issues can cause host path (HBA) and disk access issues.
• SFS volume corruption can cause hosts to lose access to disk volumes.
• If SFS volumes get unmounted on a Fibre Director (frontend port), it can result in DU (Data Unavailable) situations.

Fixes

• Since the SFS volumes are only interfaced through the Disk Directors (backend ports), the PSE lab will need to be involved in fixing any issues.
• SFS volumes can be VTOC'ed (formatted), and some key information (see below) will need to be restored upon completion. Again, this function can only be performed by the PSE lab.
• SFS volumes can be formatted while the Symmetrix is running, but in a SCSI-3 PGR reservation environment this will cause a cluster outage and/or a split brain.
• No Symmetrix software (TimeFinder, SYMCLI, ECC, etc.) will be able to interface with the system while the SFS volumes are being formatted.
• The security auditing / access control feature is disabled during the format of SFS volumes, causing any Symmetrix internal or external software to stop functioning.
• The Access Control Database and SRDF host components / group settings will need to be restored after the SFS format.

Access / Use case


• Any BIN file changes that try to map SFS volumes to a host will fail.
• SFS volumes cannot be managed through SYMCLI or the Service Processor without PSE help.
• SYMAPI (infrastructure) works along with SYMMWIN and the SFS volumes to obtain locks, etc. during any SYMCLI / SYMMWIN / ECC activity (e.g. BIN changes).
• Since FAST v1 and FAST v2 reside as a policy engine outside the Symmetrix, they use the underlying SFS volumes for changes (locks, etc.).
• Performance data relating to FAST is collected within the SFS volumes, which the FAST policy engine uses to gauge performance.
• Performance data relating to Symmetrix Optimizer is collected within the SFS volumes, which Optimizer uses to gauge performance.
• Other performance data is collected for the DMSP (Dynamic Mirror Service Policy).
• All audit logs, security logs, the access control database, ACLs, etc. are stored within the SFS volumes.
• All SYMCLI, SYMAPI, Solutions Enabler, host, interface, device and access control related data is gathered on the SFS volumes.
• With the DMX-4 and the V-Max, all service processor access, service processor initiated actions, denied attempts, RSA logs, etc. are stored on SFS volumes.

Unknowns

• The SFS structure is unknown.
• The SFS architecture is unknown.
• The SFS garbage collection and discard policy is unknown.
• The SFS records stored, indexing, etc. are unknown.
• The SFS inode structures, function calls, security settings, etc. are unknown.

As more information becomes available, I will try to update this section. Hope this is useful in your research on SFS volumes.

EMC Symmetrix: VCMDB and ACLX

VCMDB: Volume Control Manager Database
ACLX: Access Control Logix
VCM: Volume Control Manager device (where the database resides)
VCM Gatekeeper: Volume Control Manager Gatekeeper (the database doesn't reside on these devices)
SFS volumes: Symmetrix File System volumes

If you work with EMC Symmetrix systems, you know the importance of the VCMDB. Introduced with Symmetrix 4.0 and used in every generation after that, VCMDB stands for Volume Control Manager Database. In the latest generation of systems the VCM device is at times also referred to as the VCM gatekeeper. The VCMDB is a relatively small device created on the Symmetrix system that allows hosts access to various devices on the Symmetrix; it keeps an inventory of which devices are accessible to which host (HBAs). Without a VCMDB in place, host systems will not be able to access the Symmetrix. The VCMDB should be backed up at regular intervals, which will be helpful on a rainy day. The VCMDB device size grew with each new generation of Symmetrix systems, primarily as a means to keep track of the larger number of supported devices (hypers / splits) on those platforms. With the introduction of the Symmetrix V-Max, the VCMDB concept has changed a bit to ACLX (Access Control Logix). Access Logix has been used on the CLARiiON systems for years.

Here are a few things to consider with VCMDB

• On the older Symmetrix systems (4.0, 4.8, 5.0 and 5.5), the VCMDB (device) is mapped to all the channels / hosts.
• On these systems, VCMDB access is typically restricted by Volume Logix or ACLs (access control lists).
• With the Symmetrix DMX and DMX-2 systems (Enginuity code 5670, 5671), the VCM device only needs to be mapped to the management stations.
• Management stations include the SYMCLI server / Ionix Control Center server / Symmetrix Management Console.
• At all times on the DMX and DMX-2 platforms, the VCMDB needs to be mapped to at least one station to perform online SDDR changes. Alternatively, the problem of not having the device mapped to at least one host can be fixed by the PSE lab.
• Mapping the VCMDB to multiple hosts or channels may make the device vulnerable to crashes, potential tampering, and device attribute or data changes.
• You can write disable the VCMDB to avoid the potential issues above.
• With these systems, the host can communicate with the VCMDB via syscalls.
• The VCM Edit Director flag (fibrepath) needs to be enabled for management stations to see the VCM device.
• The database (device masking database) of the VCMDB resides on the SFS volumes. This feature was introduced with the DMX-3 / DMX-4 (5772 version of microcode). A 6-cylinder VCM gatekeeper device is okay to use with these versions of microcode.
• Starting with the Symmetrix V-Max systems, the concept of ACLX was introduced for Auto Provisioning, etc. (see the symaccess sketch after this list).
• VCM volumes are required to be mirrored devices, like SFS volumes.
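On the V-Max, provisioning against the ACLX is done with Auto Provisioning Groups through the symaccess command. The following is a hedged sketch only: the group names, device range, director/port and WWN are all placeholders, and the exact option order can vary by Solutions Enabler 7.x release (check symaccess -h):

# symaccess -sid 1234 -name app1_sg -type storage create devs 0100:0103
# symaccess -sid 1234 -name app1_pg -type port create -dirport 7E:0,8E:0
# symaccess -sid 1234 -name app1_ig -type initiator create -wwn 10000000c9123456
# symaccess -sid 1234 create view -name app1_view -sg app1_sg -pg app1_pg -ig app1_ig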

Various different types of VCMDB

Type 0, Type 1, Type 2, Type 3, Type 4, Type 5, Type 6

• Type 0: Symmetrix 4.0, 32-director system, 16-cylinder device size, Volume Logix 2.x
• Type 1: Symmetrix 4.8, 64-director system, 16-cylinder device size, ESN Manager 1.x
• Type 2: Symmetrix 5.0/5.5, 64-director system, 16-cylinder device size, ESN Manager 2.x
• Type 3: Symmetrix DMX, supports 32 fibre / 32 iSCSI initiator records per port, 24-cylinder device size, Enginuity 5669, Solutions Enabler 5.2, supports 8000 devices
• Type 4: Symmetrix DMX/DMX-2, supports 64 fibre / 128 iSCSI initiator records per port, 48-cylinder device size, Enginuity 5670, Solutions Enabler 5.3, supports 8000 devices
• Type 5: Symmetrix DMX/DMX-2, supports 64 fibre / 128 iSCSI initiator records per port, 96-cylinder device size, Enginuity 5671, Solutions Enabler 6.0, supports 16000 devices
• Type 6: Symmetrix DMX-3, DMX-4, supports 256 fibre / 512 iSCSI initiator records per port, 96-cylinder device size, Enginuity 5771, 5772, Solutions Enabler 6.0, supports 64000 devices

Notes about various Types of VCMDB

• Type 3 VCMDB can be converted to Type 4 VCMDB (code upgrade from 5669 to 5670 to 5671).
• Solutions Enabler 5.2 and Solutions Enabler 5.3 can read/write a Type 3 VCMDB.
• Solutions Enabler 5.3 can read/write a Type 4 VCMDB.
• The VCMDB device is recommended to be a certain size, but it is okay to use a larger device if no other choices are available.

Converting various types of VCMDB using SymCLI

If the device cylinder size matches the conversion you are attempting, the following will help you convert your VCMDB from type x to type y.

o Back up the device:
# symmaskdb -sid <SymmID> backup -file <backupfile>
o Check the VCMDB type:
# symmaskdb -sid <SymmID> list database
o Convert from type 4 to type 5:
# symmaskdb -sid <SymmID> convert -vcmdb_type 5 -file <convertfilename>

To initialize the VCMDB for the first time on a Symmetrix system

Within Ionix Control Center

• Click on the Symmetrix array whose VCMDB you are initializing
• Select Masking, then VCMDB Management, then Initialize
• Select a new backup and create a file name
• Create the file name with the .sdm extension
• Click on Activate the VCMDB
• VCMDB backups are stored at \home\ecc_inf\data\hostname\data\backup\symmserial\
• They are also viewable within Ionix Control Center at Systems/Symmetrix/VCMDB Backups/

With SymCLI

To query the VCMDB database:
# symmaskdb -sid <SymmID> list database

To back up and initialize an existing VCMDB database:
# symmaskdb -sid <SymmID> init -file <backupfile>
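Once the VCMDB is initialized, day-to-day device masking on a DMX is done with symmask, followed by a refresh so the directors pick up the change. A hedged sketch; the Symmetrix ID, WWN, director, port and device names below are placeholders:

# symmask -sid 1234 -wwn 10000000c9123456 -dir 7A -p 0 add devs 01A,01B
# symmask -sid 1234 refresh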


EMC Symmetrix: Dynamic Hot Spares

There are two types of sparing strategies available on EMC Symmetrix Series of machines.

Dynamic Hot Sparing: Starting with Symmetrix 4.0, EMC introduced dynamic hot spares in its Enginuity code to protect customers against failing disk drives and reduce the probability of data loss. Available in every version of Symmetrix from then on, customers have been able to use this hot sparing technology. Today, dynamic sparing is available on Symmetrix 4.0, 4.8, 5.0, 5.5, DMX, DMX-2, DMX-3 and DMX-4 systems.

Permanent Spares: Introduced starting with the Symmetrix DMX-3 products and now available on DMX-4 and V-Max systems. I believe Enginuity code 5772 started supporting Permanent Spares to guard customers against failing disk drives and further reduce performance, redundancy and processing degradation on Symmetrix systems, with features that were not available with Dynamic Hot Sparing.

Highlights of Permanent Sparing

Due to some design, performance and redundancy limitations and Symmetrix mirror positions, dynamic hot spares were becoming a bottleneck for customer internal job processing. For example, syncing a failed 1TB SATA drive to a dynamic spare might take anywhere from 8 to 48 hours, and the later process of removing the dynamic spare and equalizing the replaced drive might take just as long. During this time the machine is more or less in lockdown (operational but not configurable).

Due to these limitations, the concept of Permanent Spares was introduced on EMC Symmetrix systems to help fill some of the gaps in the Dynamic Hot Spares technology. Following are the criteria for Dynamic Hot Spares.

Some important things to consider with Dynamic Hot Sparing


1. Supported through microcode (Enginuity) versions starting with Symmetrix Family 4.0; support extended through all later releases of Enginuity up to the DMX-4 (5773).
2. Dynamic Hot Spares are configured and enabled in the backend by an EMC CE.
3. No BIN file change is performed as the Dynamic Hot Spare gets invoked or removed upon a disk drive failure.
4. No BIN file change is allowed until the Dynamic Hot Spare is removed from the active used devices pool and inserted back into the spares pool.
5. An EMC CE will need to attend site to replace the failed drive and put the dynamic hot spare back in the pool of devices available for sparing.
6. Enginuity does not check for performance and redundancy when the dynamic hot spare is invoked.
7. In previous generations of Symmetrix systems, an exact match (speed, size, block size) was required for dynamic hot spares. Starting with, I believe, the 5772 (DMX-3 onwards) version of microcode, that requirement is no longer necessary. Now larger or smaller dynamic spares can be spread across multiple not-ready devices, so the one-to-one relationship (failed drive to dynamic spare) is no longer true.
8. Related to performance on DMX-3 systems and above, if the correct dynamic spares are not configured, customers can see issues around redundancy and performance. For example, a 10K drive can be invoked automatically against a failed 15K drive, causing performance issues. Also, a drive on the same loop as other RAID group devices can be invoked as a hot spare, potentially causing issues if the entire loop were to go down.
9. Dynamic spares will not take all the characteristics of the failed drive, for example mirror positions.
10. While a Permanent Spare or Dynamic Hot Spare is not invoked and is sitting in the machine waiting for a failure, these devices are not accessible from the front end (customer). The folks back at the PSE labs will still be able to interact with these devices and invoke them for you in case of a failure, a proactive failure, or if the automatic invoke fails for any reason.
11. If a Permanent Spare fails to invoke, a Dynamic Hot Spare is invoked; if a Dynamic Hot Spare fails to invoke, the customer data stays unprotected.
12. Dynamic Hot Sparing is supported with RAID-1, RAID-10, RAID-XP, RAID-5 and various configurations within each RAID type. Dynamic hot sparing does not work with RAID-6 devices.
13. As far as I know, Dynamic hot sparing is not supported on the V-Max systems.


Some important benefits of Dynamic Hot Sparing

1. Dynamic Hot Sparing kicks in when Permanent Sparing fails to invoke.
2. Provides additional protection against data loss.
3. No BIN file change is performed with Dynamic Hot Sparing.

Sparing is now a requirement for all newly configured systems. Hope this provides a vision into configuring your next EMC Symmetrix on the floor.
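As a practical footnote, newer Solutions Enabler releases let you inspect spare coverage from the host side through symdisk. Treat the options below as a hedged sketch; flag availability varies by Solutions Enabler version, so check symdisk -h on your management host:

# symdisk -sid 1234 list -hotspares    (list configured hot spares)
# symdisk -sid 1234 list -failed       (list failed disks, if any)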

EMC Symmetrix: Permanent Sparing

As covered in the previous section, there are two sparing strategies on the EMC Symmetrix series: Dynamic Hot Sparing (available from Symmetrix 4.0 through the DMX-4) and Permanent Sparing (introduced with the DMX-3, I believe at Enginuity 5772, and available on the DMX-4 and V-Max). Permanent Spares were introduced to fill the gaps in Dynamic Hot Sparing around performance, redundancy and mirror positions, and to avoid the lockdown window while a dynamic spare is invoked. Following are the criteria for Permanent Spares.

Some important things to consider with Permanent Spares

1. Permanent Spares are supported through microcode (Enginuity) versions starting with the DMX-3 (5772 onwards) into the latest generation Symmetrix V-Max systems.
2. The customer needs to identify and set up the devices for Permanent Spares using Solutions Enabler, or an EMC CE should perform a BIN file change on the machine to enable Permanent Spares and the associated devices.
3. When the Permanent Spare kicks in upon a failing / failed drive, a BIN file change is performed locally within the machine using the unattended SIL. Any configuration locks or non-functional Service Processors will kill the process before it is initiated; in that case the Permanent Spare will not be invoked and a Dynamic Hot Spare will be invoked instead.
4. An EMC CE does not need to attend the site right away to replace the drive, since the Permanent Spare has been invoked and all the data is protected. All failed drives where Permanent Spares have been invoked can be replaced in a batch. When a failed drive is replaced, it becomes a Permanent Spare and goes into the Permanent Spares pool.
5. Configuration of Permanent Spares is initiated through a BIN file change; during this process the CE or the customer is required to consider the Permanent Spares rules related to performance and redundancy.
6. If a Permanent Spare cannot be invoked for any reason related to performance and redundancy, a Dynamic Hot Spare will be invoked against the failing / failed device.
7. The Permanent Spare takes all the original characteristics of the failed disk (device flags, meta configs, hyper sizes, mirror positions, etc.) as it gets invoked.
8. The rule of thumb with Permanent Spares is to verify that the machine has the required type / size / speed / capacity / block size of permanent spare drives configured.
9. You can have a single Symmetrix frame with both Permanent Spares and Dynamic Hot Spares configured.
10. While a Permanent Spare or Dynamic Hot Spare is not invoked and is sitting in the machine waiting for a failure, these devices are not accessible from the front end (customer). The folks back at the PSE labs will still be able to interact with these devices and invoke them for you in case of a failure, a proactive measure, or if the automatic invoke fails for any reason.
11. Permanent Spares can be invoked against Vault drives, if a permanent spare drive is available on the same DA where the failure occurred.
12. Permanent Spares can be configured with EFDs. I believe for every 2 DAEs (30+ drives) you have to configure one hot spare EFD (permanent spare).
13. Permanent Spares support RAID types RAID-1, RAID-10, RAID-5, RAID-6 and all configurations within.

Some important benefits of Permanent Sparing

1. Additional protection against data loss.
2. Permanent sparing reduces the number of data copies required (one time), compared with dynamic spares, which need two.
3. Permanent sparing resolves the problem of mirror positions.
4. Failed drives covered by permanent spares can be replaced in batches and do not require immediate replacement.
5. Permanent spares do not put a configuration lock on the machine, while an invoked dynamic spare puts a configuration lock in place until it is replaced.
6. Permanent spares obey the rules of performance and redundancy, while dynamic hot sparing does not.
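Item 2 in the criteria list above mentions enabling Permanent Sparing through Solutions Enabler. On supported Enginuity levels that is done with a symconfigure session along these lines; this is a hedged sketch (the Symmetrix ID is a placeholder), so verify the exact syntax in the Solutions Enabler array management documentation for your release:

# symconfigure -sid 1234 -cmd "set symmetrix hot_swap_policy=permanent;" preview
# symconfigure -sid 1234 -cmd "set symmetrix hot_swap_policy=permanent;" commit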

EMC Symmetrix DMX device type, COVD: Cache Only Virtual Device

Here is some information on Cache Only Virtual Devices. I do not have a very clear picture of the overall operation of this device type, but at a high level it can be summed up as follows, based on its characteristics.

Starting with microcode 5670 on EMC Symmetrix DMX systems, EMC introduced the COVD device type. We have seen instances of COVDs on the 5671, 5771 and 5772 microcodes; it is unknown at this point whether they exist on EMC Symmetrix V-Max systems.


Here are some highlights on COVD:

• Even though COVDs were introduced in the 5670 microcode, the recommendation is to upgrade to 5671 on the R2 side of SRDF/A before implementing COVDs.
• Used with SRDF/A technology for caching data on the R2 side.
• Symconfigure will not allow (it blocks) changing the SRDF/A group on the R2 side for COVD devices. You will need a BIN file change performed by the Customer Engineer for this.
• A COVD is a virtual device but does take up two device numbers within your list of Symmetrix device numbers (I believe 8192 device numbers are available on the early DMXs).
• If you are using COVDs, your configured capacity might show more than your raw capacity in ECC and StorageScope.
• COVDs cannot be snapped using TimeFinder.
• COVDs can only be created and destroyed by BIN file (not through SYMCLI).
• COVDs are only found on the R2 side of SRDF/A.
• Cache is used as part of creating the COVD.
• COVDs are used in pairs: one for the active SRDF/A cycle and one for the inactive SRDF/A cycle.
• No data is stored on a COVD; it is used practically for caching.
• The primary reason COVDs were introduced was to reduce the write pending limits with SRDF/A.

Haven’t really seen a lot of customers using COVD (device types). But

sometimes during storage analysis of customer meta data reveals these device types since it is assigned a device number.

EMC Symmetrix Management Console (SMC – For Symmetrix V-Max Systems)

The Symmetrix Management Console is a very important step toward allowing customers to take control of their Symmetrix V-Max systems. With the new Symmetrix V-Max comes a new version of Symmetrix Management Console, allowing customers to manage their EMC Symmetrix V-Max systems through a web browser GUI with tons of newly added features and wizards for usability.

The Symmetrix Management Console was developed back in the day as a GUI to view customers' Symmetrix DMX environments; over the years it has evolved into a functional and operational tool that interfaces with the machine not only for data gathering but also to perform changes. EMC Solutions Enabler SymCLI is a CLI-based interface to the DMX and V-Max systems, and SMC complements the CLI by allowing customers to perform more or less the same functions through a GUI. The look and feel of SMC also resembles ECC (EMC Control Center), and customers sometimes refer to SMC as "ECC-lite".

(Screenshot: EMC Symmetrix Management Console in action monitoring EMC Symmetrix V-Max systems)

Some of the important features and benefits of SMC for V-Max are listed below:

1) Allows customers to manage multiple EMC Symmetrix V-Max systems.

2) Increases customer management efficiency by using Symmetrix Management Console to automate or perform functions with a few clicks.


3) Symmetrix Management Console 7.0 only works with Symmetrix V-Max systems.

4) The Symmetrix Management Console is installed on the Service Processor of the V-Max system and can also be installed on a host in the SAN environment.

5) Customers can now do trending, performance reporting, planning and consolidation using SMC.

6) SMC will help customers reduce their TCO with V-Max systems.

7) It takes minutes to install. A Windows environment running Windows Server 2003 along with IIS would be the best choice.

8) The interface customers work with is a GUI. It has the look and feel of ECC, and the Console also integrates with ECC.

9) New Symmetrix V-Max systems are configured and managed through the Symmetrix Management Console.

10) SMC also manages users, host permissions and access controls.

11) Alert management.

12) From a free product, SMC has become a licensed product that customers have to pay for.

13) It allows customers to perform functions related to configuration changes, like creating, mapping and masking devices, changing device attributes, flag settings, etc.

14) Replication functions such as Clone, Snap, Open Replicator, etc. can be performed using SMC.

15) SMC enables Virtual Provisioning with the Symmetrix V-Max arrays.

16) Enables Virtual LUN technology for automated policies and tiering.

17) Auto Provisioning Group technology is offered through wizards in SMC.

18) Dynamic Cache Partitioning: allocates and deallocates cache based on policies and utilization.

19) Symmetrix Priority Controls.


20) From SMC, customers can now launch SPA (Symmetrix Performance Analyzer). This is along the lines of Workload Analyzer, which is a standard component of the ECC suite, and allows customers to view their storage and application performance and monitoring. SPA can be obtained as an add-on product from EMC, subject to licensing.

(Screenshot: Virtual LUN technology at work using a wizard)

21) SMC gives the customer capabilities for discovery, configuration, monitoring, administration and replication management.

22) SMC can be obtained from EMC Powerlink or through your EMC account manager if you have an active hardware/software maintenance contract with EMC or if your systems are under warranty.

A highly recommended management tool for SAN admins, and yes, it's not free anymore for V-Max systems.


Symcli Basic Commands

Following are the basic SymCLI commands. You can use the man pages for further info, and the -h option (symxxx -h) will help you navigate around.

Most of the commands come in three flavors:

Reference   Description                       Example
pd          Physical device name              /dev/dsk/c3t4d5
dev         Symmetrix device name             0FF
ld          Symmetrix logical device name     DEV001

Examples are as follows:

1. symdev list
2. sympd list
3. symld -g ${group} list

Command          Note
symdev           Performs operations on a device given the Symmetrix device name.
sympd            Performs operations on devices given the device physical name (c2t0d0).
symgate          Performs operations on gatekeeper devices.
symdg            Performs operations on Symmetrix device groups.
symld            Performs operations on devices within a device group.
symbcv           Performs support operations on BCV pairs.
symmir           Performs control operations on BCV pairs.
symrdf           Performs control operations on RDF pairs.
symcfg discover  Creates a local database of the attached Symmetrixes.
syminq           Shows internal & external devices that the host sees.

EMC Timefinder Commands

The following are the Timefinder Procedural Commands


It outlines everything that needs to be done from start to finish. Realize that for routine operations some of these steps won't be needed; they are included here for the sake of completeness.

Prepare EMC structures

1. Create a Symmetrix device group:
   symdg -t [ Regular | RDF1 | RDF2 ] create ${group}
2. Add devices to the device group:
   symld -g ${group} add pd /dev/dsk/c#t#d#
   symld -g ${group} add dev 01a
3. Associate BCV devices with the device group:
   symbcv -g ${group} associate pd ${bcv_ctd}
   symbcv -g ${group} associate dev ${bcv_dev}

Establish BCV mirrors

1. Identify the logical device names. TimeFinder defaults to using the logical device names, which you can list with:
   symmir -g ${group} query
2. For a first-time establish, execute a full establish:
   symmir -g ${group} -full establish ${std_log_dev} bcv ${bcv_log_dev}
3. Use symmir query to monitor progress:
   symmir -g ${group} query

Break BCV mirrors

1. Types of splits:

   1. Instant split: the split is performed in the background after the completion of the split I/O request.
   2. Force split: splits the pair during establish or restore operations; invalid tracks may exist.
   3. Reverse split: resyncs the BCV with the full data copy from its local or remote mirror.
   4. Reverse differential split: enables a copy of only the out-of-sync tracks to the BCV from its mirror.
   5. Differential split: enables a copy of only the updated tracks to the BCV's mirror.
2. Commands:

symmir -g ${group} split

symmir -g ${group} split -instant

symmir -g ${group} split -differential

symmir -g ${group} reverse split -differential

Reestablish or restore BCV mirrors

1. Restore copies data from the BCV back to the standard device. Reestablish, on the other hand, does a differential update of the BCV from the standard device.
2. Commands:

   symmir -g ${group} establish        (differential reestablish from the standard device to the BCV)
   symmir -g ${group} -full restore    (full restore of all tracks on the BCV to the standard device)
   symmir -g ${group} restore          (differential restore of BCV data to the standard device)

The TimeFinder strategies are as follows:

1. Maintain BCV mirrors with the standard device; break the mirrors when you want to backup, test, or develop on a copy of the original.

This is probably the most common way of running TimeFinder. The advantage is that the split operation happens almost instantly, as the mirrors are fully synced all the time. The disadvantage is that anything that happens to the standard device will be reflected in the BCV mirror. (A typical cycle for this strategy is sketched after this list.)

2. Maintain the BCV as a split device to keep an online backup of the original data.
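As an illustration of strategy 1, a nightly backup cycle might look like the following. This is a hedged sketch: the device group name prodgrp is a placeholder, and -noprompt just suppresses the confirmation prompt.

# symmir -g prodgrp establish -noprompt      (resync the BCVs after the previous run)
# symmir -g prodgrp query                    (wait until the pairs show Synchronized)
# symmir -g prodgrp split -instant -noprompt (take the point-in-time image)
(mount the BCVs on the backup host and run the backup)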


EMC SRDF Basics

Conceptually and operationally, SRDF is designed to work in a WAN/Internet/Cloud/SAN environment with multiple Symms involved, while TimeFinder is local to a Symm but performs the same functions. The difference: SRDF can be performed without geographic boundaries, while TimeFinder is local. The following are the various forms of SRDF that a customer can use to perform SRDF operations.

Synchronous mode

With Synchronous mode, the remote Symm must have the I/O in cache before the application receives the acknowledgement. Depending on the distance between the Symmetrix machines, this may have a significant impact on performance. This form of SRDF is suggested for campus environments.

If you want to ensure that data is replicated in real time without dirty tracks from one Symmetrix to the other, you may want to enable the Domino effect. With Domino effect, your R1 devices become Not Ready if the R2 devices can't be reached.

Semi-synchronous mode

With Semi-synchronous mode, the R1 and R2 devices can be out of sync by one I/O. The application receives the acknowledgement from the first write I/O once it is in local cache; a second I/O isn't acknowledged until the first is in the remote cache. This form of SRDF is faster than the Synchronous mode mentioned previously.

Adaptive Copy-Write Pending

With Adaptive Copy-Write Pending, writes to the R2 volumes are copied over without delaying the acknowledgement to the application. With this mode, you can set a skew parameter defining the maximum number of dirty tracks. Once that number is reached, the system switches to a preconfigured mode, like semi-synchronous, until the remote data is all synced; SRDF then switches back to Adaptive Copy-Write Pending mode.
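Tying the modes above to the CLI, switching a device group between them might look like the following hedged sketch; prodrdf is a placeholder group name, and the acp_wp skew syntax mirrors the acp_disk example shown in the SRDF Commands section below:

# symrdf -g prodrdf set mode sync
# symrdf -g prodrdf set domino on
# symrdf -g prodrdf set mode acp_wp
# symrdf -g prodrdf set acp_wp skew 1000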

65

SRDF Commands

The following are the SRDF commands and what they are used for.

Composite SRDF commands

1. Failover:
   1. Actions:
      1. Write disables (WD) R1
      2. Sets link to Not Ready (NR)
      3. Write enables R2
   2. Command:
      symrdf -g ${group} failover

2. Update: helps speed up the failback operation by copying invalid tracks before write disabling any disks.
   1. Actions:
      1. Leaves service state as is
      2. Merges the tracks
      3. Copies invalid tracks
   2. Command:
      symrdf -g ${group} update

3. Failback:
   1. Actions:
      1. Write disables R2
      2. Suspends the RDF link
      3. Merges the disk tracks
      4. Resumes the link
      5. Write enables R1
      6. Copies the changed data
   2. Command:
      symrdf -g ${group} failback

4. Split: leaves both R1 & R2 in a write enabled state.
   1. Actions:
      1. Suspends the RDF link
      2. Write enables R2
   2. Command:
      symrdf -g ${group} split

5. Establish:
   1. Actions:
      1. Write disables R2
      2. Suspends the RDF link
      3. Copies data from R1 to R2
      4. Resumes the RDF link
   2. Command:
      symrdf -g ${group} [ -full ] establish

6. Restore: copies data from R2 to R1.
   1. Actions:
      1. Write disables both R1 & R2
      2. Suspends the RDF link
      3. Merges the track tables
      4. Resumes the RDF link
      5. Write enables R1
   2. Command:
      symrdf -g ${group} [ -full ] restore

Singular SRDF commands

1. Suspend: symrdf -g ${group} suspend
2. Resume: symrdf -g ${group} resume
3. Set mode:
   symrdf -g ${group} set mode sync
   symrdf -g ${group} set domino on
   symrdf -g ${group} set acp_disk skew 1000

EMC Symmetrix / DMX SRDF Setup

This section talks about setting up basic SRDF related functionality on the Symmetrix / DMX machines using EMC Solutions Enabler Symcli.


For this setup, let's have two different hosts: our local host will have the R1 (source) volumes and our remote host will have the R2 (target) volumes.

A mix of R1 and R2 volumes can reside on the same Symmetrix; in short, you can configure SRDF between two Symmetrix machines so that each acts as local for some volumes and remote for the others, and vice versa.

Step 1

Create SYMCLI Device Groups. Each group can have one or more Symmetrix devices specified in it.

SYMCLI device group information (name of the group, type, members, and any associations) is maintained in the SYMAPI database.

In the following we will create a device group that includes two SRDF volumes. SRDF operations can be performed from the local host that has access to the source volumes or the remote host that has access to the target volumes. Therefore, both hosts should have device groups defined. Complete the following steps on both the local and remote hosts.

a) Identify the SRDF source and target volumes available to your assigned hosts. Execute the following commands on both the local and remote hosts:
# symrdf list pd
or
# syminq

b) To view all the RDF volumes configured in the Symmetrix, use the following:
# symrdf list dev

c) Display a synopsis of the symdg command and reference it in the following steps.

# symdg -h

d) List all device groups that are currently defined.
# symdg list


e) On the local host, create a device group of type RDF1. On the remote host, create a device group of type RDF2.
# symdg -type RDF1 create newsrcdg    (on local host)
# symdg -type RDF2 create newtgtdg    (on remote host)

f) Verify that your device group was added to the SYMAPI database on both the local and remote hosts.
# symdg list

g) Add your two devices to your device group using the symld command. Again, use -h for a synopsis of the command syntax.
On local host:
# symld -h
# symld -g newsrcdg add dev ###
or
# symld -g newsrcdg add pd Physicaldrive#
On remote host:
# symld -g newtgtdg add dev ###
or
# symld -g newtgtdg add pd Physicaldrive#

h) Using the syminq command, identify the gatekeeper devices. Determine if each is currently defined in the SYMAPI database; if not, define it and associate it with your device group.
On local host:
# syminq
# symgate list                                       (check SYMAPI)
# symgate define pd Physicaldrive#                   (to define)
# symgate -g newsrcdg associate pd Physicaldrive#    (to associate)


On remote host:
# syminq
# symgate list                                       (check SYMAPI)
# symgate define pd Physicaldrive#                   (to define)
# symgate -g newtgtdg associate pd Physicaldrive#    (to associate)

i) Display your device groups. The output is verbose, so pipe it to more.
On local host:
# symdg show newsrcdg | more
On remote host:
# symdg show newtgtdg | more

j) Display a synopsis of the symld command.
# symld -h

k) Rename DEV001 to NEWVOL1.
On local host:
# symld -g newsrcdg rename DEV001 NEWVOL1
On remote host:
# symld -g newtgtdg rename DEV001 NEWVOL1

l) Display the device group on both the local and remote hosts.
On local host:
# symdg show newsrcdg | more
On remote host:
# symdg show newtgtdg | more

Step 2

Use the SYMCLI to display the status of the SRDF volumes in your device group.


a) If on the local host, check the status of your SRDF volumes using the following command:
# symrdf -g newsrcdg query

Step 3

Set the default device group. You can use the "Environmental Variables" option.
# set SYMCLI_DG=newsrcdg    (on the local host)
# set SYMCLI_DG=newtgtdg    (on the remote host)

a) Check the SYMCLI environment.
# symcli -def    (on both the local and remote hosts)

b) Test that the SYMCLI_DG environment variable is working properly by performing a query without specifying the device group.
# symrdf query    (on both the local and remote hosts)

Step 4

Changing operational mode. The operational mode for a device or group of devices can be set dynamically with the symrdf set mode command.

a) On the local host, change the mode of operation for one of your SRDF volumes to enable semi-synchronous operation. Verify the results, then change back to synchronous mode.
# symrdf set mode semi NEWVOL1
# symrdf query
# symrdf set mode sync NEWVOL1
# symrdf query

b) Change the mode of operation to enable adaptive copy-disk mode for all devices in the device group. Verify that the mode change occurred, then disable adaptive copy.
# symrdf set mode acp_disk
# symrdf query
# symrdf set mode acp_off
# symrdf query

Step 5

Check the communications link between the local and remote Symmetrix.

a) From the local host, verify that the remote Symmetrix is "alive". If the host is attached to multiple Symmetrixes, you may have to specify the Symmetrix serial number (SSN) through the -sid option.
# symrdf ping [ -sid xx ]    (xx = last two digits of the remote SSN)

b) From the local host, display the status of the Remote Link Directors.
# symcfg -RA all list

c) From the local host, display the activity on the Remote Link Directors.
# symstat -RA all -i 10 -c 2

Step 6

Create a partition on each disk, format the partition and assign a filesystem to it. Add data to the R1 volumes defined in the newsrcdg device group.

Step 7

Suspend the RDF link and add data to the filesystem. In this step we will suspend the SRDF link, add data to the filesystem and check for invalid tracks.

a) Check that the R1 and R2 volumes are fully synchronized.
# symrdf query

b) Suspend the link between the source and target volumes.
# symrdf suspend

c) Check the link status.
# symrdf query

d) Add data to the filesystems.


e) Check for invalid tracks using the following command:
# symrdf query

f) Invalid tracks can also be displayed using the symdev show command. Execute the following command on one of the devices in your device group and look at the mirror set information.
On the local host:
# symdev show ###

g) From the local host, resume the link and monitor the invalid tracks.
# symrdf resume
# symrdf query

