Symm virtual provisioning
TDAT – thin data device; TDATs are grouped into thin pools and provide the physical storage
TDEV – thin device presented to the host; allocated in extents or chunks (allocation unit is 12 tracks, or 768 KB)
Enabling technologies
- TDEV, TDAT, and pools
- IVTOC (VTOC now doesn't happen when the bin file is loaded; it's VTOC'ed when it's written, so there's a performance impact for this) (will this be like a COFW penalty?)
- FE-enabled pre-fetch
Write Flow
- TDEV has an allocation table, pointers to physical storage
- Rather than point directly to a track on the TDAT, it points to a different allocation table at the end of the TDAT.
- The allocation table (id_tables?) in the TDAT then points to the track group (12 tracks) on the TDAT.
- Capacity is allocated round-robin across all the TDATs (a sketch of the whole flow follows).
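A minimal sketch of this two-level lookup, purely as a mental model (class and field names here are my own, not Symm internals): the TDEV's allocation table points at a slot in a table kept on the TDAT, and that table in turn points at a 12-track (768 KB) track group; new extents are handed out round-robin across the pool's TDATs.

```python
TRACK_KB = 64                              # DMX track size
EXTENT_KB = 12 * TRACK_KB                  # allocation unit: 12 tracks = 768 KB

class TDAT:
    """Thin data device: physical track groups plus its own allocation table."""
    def __init__(self, name):
        self.name = name
        self.table = []                    # TDAT-resident table -> track groups

    def allocate_track_group(self):
        self.table.append(f"{self.name}/track_group_{len(self.table)}")
        return len(self.table) - 1         # slot number in the TDAT's table

class ThinPool:
    """Pool of TDATs; hands out extents round-robin."""
    def __init__(self, tdats):
        self.tdats, self.cursor = tdats, 0

    def allocate_extent(self):
        tdat = self.tdats[self.cursor % len(self.tdats)]
        self.cursor += 1
        return tdat, tdat.allocate_track_group()

class TDEV:
    """Thin device: host-visible, allocates backing only on first write."""
    def __init__(self, pool):
        self.pool, self.table = pool, {}   # extent number -> (TDAT, slot)

    def write(self, offset_kb):
        extent = offset_kb // EXTENT_KB
        if extent not in self.table:       # first write into this extent
            self.table[extent] = self.pool.allocate_extent()
        tdat, slot = self.table[extent]    # TDEV table -> TDAT table -> tracks
        return tdat.table[slot]

pool = ThinPool([TDAT(f"TDAT{i}") for i in range(4)])
tdev = TDEV(pool)
print(tdev.write(0))     # TDAT0/track_group_0: first write allocates
print(tdev.write(800))   # TDAT1/track_group_0: next extent, next TDAT
print(tdev.write(16))    # TDAT0/track_group_0: same extent, no new allocation
```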
Wide Striping
- Drives are carved up into TDATs and added to a thin pool. The drives are spread out across all the DA pairs (see the striping sketch after this list). Looks like 3PAR chunklets.
- Spreads workload more evenly across all spindles.
- No need for Symm Optimizer, not applicable.
- Some results from lab testing. You'll notice that adding more devices increases IOPS, because more devices means more available IO queues.
- Random read miss for 256x devices on 480x R1 drives got up to 110k IOPS (~229 IOPS per drive). Uniform workload.
- Random read miss for 480x devices on 480x R1 drives got up to 120k IOPS (~250 IOPS per drive). Uniform workload.
- Is there an optimal size for a pool? Large is obviously better but size could be dictated by drive types or application workloads.
- Would you create separate pools for an Exchange database and log?
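A quick way to convince yourself the striping evens out, using a made-up layout (4 DA pairs, 8 drives each, 4 TDATs carved per drive): round-robin extent allocation across all the pooled TDATs lands an almost identical number of extents on every spindle.

```python
from collections import Counter
from itertools import cycle

# Hypothetical layout: every drive behind every DA pair is carved into 4 TDATs.
tdats = [(da, drive, t)
         for da in range(4)        # DA pair
         for drive in range(8)     # drive behind that DA pair
         for t in range(4)]        # TDATs carved from that drive

alloc = cycle(tdats)               # round-robin across the whole pool
per_drive = Counter(next(alloc)[:2] for _ in range(10_000))

print(f"{len(per_drive)} drives, min {min(per_drive.values())}"
      f" / max {max(per_drive.values())} extents per drive")
# -> 32 drives, min 312 / max 316: near-even across every spindle
```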
Thin Data Device Considerations (TDAT)
- Protection Type
- What kind of RAID type am I going to use?
- The pools are a fixed RAID type. The pool will inherit the RAID type of the first TDAT added.
- OLTP workloads – R1 2 hyper > R5 3+1 4 hyper > R6 14+2 16 hyper
- DSS workloads – R1 2 hyper > R5 3+1 4 hyper > R6 14+2 16 hyper
- You may consider R6 over R5 despite R6's write-performance impact, because in a thin pool a double drive failure would significantly impact many more volumes, since the failed TDATs back extents for many more thin devices. (See the write-penalty sketch below.)
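One way to reason about the orderings above is the classic back-end write penalty (standard textbook values, not numbers from these notes): R1 costs 2 disk IOs per host write, R5 costs 4 (read data, read parity, write data, write parity), and R6 costs 6. A hedged sketch of the spindle load that implies:

```python
# Standard back-end write penalties per protection type; reads cost 1 IO each.
WRITE_PENALTY = {"R1 (2 hyper)": 2, "R5 3+1 (4 hyper)": 4, "R6 14+2 (16 hyper)": 6}

def backend_iops(host_iops: int, write_fraction: float, penalty: int) -> float:
    """Disk IOPS the pool's spindles must absorb for a given host workload."""
    writes = host_iops * write_fraction
    reads = host_iops - writes
    return reads + writes * penalty

# Hypothetical OLTP-ish workload: 10,000 host IOPS, 40% writes.
for raid, penalty in WRITE_PENALTY.items():
    print(f"{raid:>20}: {backend_iops(10_000, 0.40, penalty):>8,.0f} back-end IOPS")
# R1 loads the spindles least and R6 most, matching the R1 > R5 > R6 ordering.
```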
- Configuration Best Practices
- Data devices should all reside on drives of the same rotational speed.
- Data devices should be spread evenly across as many DAs and drives as possible.
- Data devices should be the same size, if possible. Uneven sizes could result in uneven data distribution.
- Fewer, larger devices are better than many tiny devices.
- Expand pools in large chunks to avoid allocations that land on only a few TDATs.
- Expanding Pools
- Best model would be to double the pool size.
- No mechanism today to rebalance existing data across the TDATs. Coming but not there yet. (A toy illustration follows.)
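A toy illustration of why small expansions hurt, assuming (per the notes above) round-robin allocation across TDATs with free space and no rebalancing: once the original TDATs fill up, every new extent piles onto the few new ones.

```python
from collections import Counter

def fill(free, extents):
    """Round-robin new extents across TDATs that still have free space."""
    placed, names, i = Counter(), list(free), 0
    while extents > 0 and any(free.values()):
        name = names[i % len(names)]
        if free[name] > 0:
            free[name] -= 1
            placed[name] += 1
            extents -= 1
        i += 1
    return placed

# 8 original TDATs that are nearly full, then a too-small 2-TDAT expansion.
free = {f"old{i}": 10 for i in range(8)}
free.update(new0=1000, new1=1000)
placed = fill(free, 1000)
print(sum(placed[f"old{i}"] for i in range(8)), "extents on old TDATs")
print(placed["new0"] + placed["new1"], "extents on the 2 new TDATs")
# -> 80 vs 920: the newest (often hottest) data sits on 2 TDATs, not 10
```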
Thin Device Considerations (TDEV)
- First Write
- Case 1: Unallocated
- Allocate extent
- VTOC track, pad if necessary
- For random writes, response time goes from 0.5ms to 4.0ms for the first random write. With 16KB writes, 47 other writes will use this extent (remember it's 768 KB; see the blended response-time sketch after this block).
- For sequential writes, response time is much better because they utilize the already-allocated extents. Doing 64KB writes, 11 out of every 12 writes ride for free. (0.6ms)
- Case 2: Pre-allocated
- VTOC track, pad if necessary
- When you bind the TDEV, you can pre-allocate the tables to avoid the penalty.
- For random writes, it looks like the sequential write in case 1. Low IVTOC impact (0.6ms)
- For sequential writes, it looks the same (0.6ms).
- Case 3: Pre-written
- Clear to write
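The case-1 numbers average out the way you'd expect, since only the first write into each 768 KB extent pays the allocation/IVTOC penalty. A back-of-the-envelope blend using the 0.5 ms and 4.0 ms figures from above (it won't reproduce the measured numbers exactly, but it shows the shape):

```python
EXTENT_KB = 768
BASE_MS, FIRST_WRITE_MS = 0.5, 4.0     # figures from the lab results above

def blended_ms(write_kb):
    """Average response time when uniform writes fill fresh extents."""
    writes_per_extent = EXTENT_KB // write_kb   # 48 for 16 KB, 12 for 64 KB
    free_riders = writes_per_extent - 1         # only the first write pays
    return (FIRST_WRITE_MS + free_riders * BASE_MS) / writes_per_extent

print(f"16 KB writes: {blended_ms(16):.2f} ms blended")  # ~0.57 ms
print(f"64 KB writes: {blended_ms(64):.2f} ms blended")  # ~0.79 ms; the
# measured ~0.6 ms for sequential is a bit better still (caching helps)
```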
- Reads
- Sequential read streams
- Now data is on multiple physical spindles
- Pre-fetch mechanism changes in 73 code. It's now in the front-end FA. Used to be in the back-end DA.
- The front-end can detect when a sequence is occurring and intelligently issue pre-fetch requests to the respective DA.
- As long as the read-ahead buffer is kept full enough, seek latency is minimized (see the sketch below).
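A bare-bones sketch of what "the FA detects a sequence and issues pre-fetch to the DA" could look like. This is entirely my own simplification; the threshold, window size, and issue_prefetch callback are invented for illustration.

```python
class SequentialDetector:
    """Toy FA-side logic: after a few contiguous reads, keep a read-ahead
    window staged by asking the back end (DA) to pre-fetch tracks."""
    def __init__(self, issue_prefetch, threshold=3, ahead=8):
        self.issue_prefetch = issue_prefetch  # hypothetical callback to the DA
        self.threshold = threshold            # contiguous reads before trusting
        self.ahead = ahead                    # tracks to keep staged ahead
        self.expected = None                  # next track if stream continues
        self.run = 0                          # length of current sequential run
        self.staged_to = -1                   # highest track already pre-fetched

    def on_read(self, track):
        self.run = self.run + 1 if track == self.expected else 1
        self.expected = track + 1
        if self.run >= self.threshold:        # looks sequential: top up buffer
            target = track + self.ahead
            start = max(self.staged_to + 1, track + 1)
            if target >= start:
                self.issue_prefetch(start, target)
                self.staged_to = target

det = SequentialDetector(lambda lo, hi: print(f"DA: pre-fetch tracks {lo}-{hi}"))
for t in (10, 11, 12, 13, 14):                # a sequential read stream
    det.on_read(t)
# -> pre-fetch 13-20, then 21-21, 22-22: the buffer stays ahead of the reader
```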
- Meta Volume decisions
- Concatenated metas gave good sequential read but not great random.
- Now with TDEVs, concatenated metas are recommended.
- They are already striped at the pool level.
- They can be extended while leaving data in place.
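A hypothetical mapping sketch of the two meta layouts (member count, member size, and stripe depth are made up) to show why concatenation stops hurting: with thin devices each member's extents are already scattered across the pool, so the only layout difference left is that the concatenated map can grow by appending a member without moving data.

```python
MEMBER_TRACKS = 1_000   # hypothetical member size
STRIPE_TRACKS = 2       # hypothetical meta stripe depth
N_MEMBERS = 4

def concatenated(track):
    """Members fill one after another; expanding appends a member in place."""
    return divmod(track, MEMBER_TRACKS)          # (member, track within member)

def striped(track):
    """Tracks round-robin across members; expanding means re-striping data."""
    stripe, offset = divmod(track, STRIPE_TRACKS)
    return stripe % N_MEMBERS, (stripe // N_MEMBERS) * STRIPE_TRACKS + offset

print(concatenated(2_500))  # (2, 500): lives entirely on member 2
print(striped(2_500))       # (2, 624): depends on N_MEMBERS, so growing
                            # the meta would re-map (move) existing tracks
```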
Replication Considerations
- Local replication with TimeFinder/Clone. Thin devices will take longer.
- 4 DA pairs with 480 drives
- Mirrored thick did 1500 MB/s
- Mirrored thin did 1100 MB/s
- 4 DA pairs with 480 drives
- Various thin source allocations
- With less actual allocated data, clone pre-copy times could be faster than thick. This is just because there will be less data to copy to the clone.
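Putting the rates above together with a made-up example (the 1500/1100 MB/s figures are from the notes; the device and allocation sizes are mine): a thin source that is only 25% allocated finishes pre-copy well ahead of thick, even at the lower thin copy rate.

```python
THICK_MBPS, THIN_MBPS = 1500, 1100       # clone copy rates from the lab numbers

def precopy_minutes(data_gb, mbps):
    return data_gb * 1024 / mbps / 60    # GB -> MB, then MB/s -> minutes

# Hypothetical 2 TB source; the thin device has only 512 GB allocated.
print(f"thick: {precopy_minutes(2048, THICK_MBPS):.1f} min")  # copies all 2 TB
print(f"thin : {precopy_minutes(512, THIN_MBPS):.1f} min")    # allocated only
# -> ~23.3 min vs ~7.9 min: less allocated data, less to copy to the clone
```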
- Remote Replication with SRDF/S
- Will have higher response time than thick for pre-written TDEVs
- According to the graphs, by roughly 30-40%
- Remote Replication with SRDF/A
- Pre-written doesn't see as much overhead.
- Unallocated writes still see additional response time.
Best Practice Considerations
- Always consider disk throughput requirements when creating or growing a data device pool
- Segregate applications by pools if they won't play well together
- Use R1 or R5 (3+1) when write performance is critical. Use R6 for the highest availability within a thin pool.
- Pre-allocate capacity if response-time-sensitive applications expand by randomly writing into new space.
- Use concatenated meta volumes for ease of expansion
- Be aware of performance considerations of replication
- General performance tuning principles still apply to thin devices