Dell™ PowerVault MD3000 and MD3000i Array Tuning Best Practices
to how tightly the variance of sequential or random data access is contained
within the volume. Access can be purely random across an entire virtual disk,
random within some bounds, such as a large file stored within a virtual disk,
or take the form of large non-contiguous bursts of sequential data access
randomly distributed within some bounds. Each of these is a different I/O
pattern, and each calls for a distinct approach when tuning the storage.
The data from the stateCaptureData.txt file can be helpful in determining these
characteristics. The sequential read percentage can be estimated from the
percentage of total cache hits. If the cache hit and read percentages are both
high, first assume the I/O pattern tends toward sequential I/O. However, since
cache hits are not broken out statistically by read and write, some
experimentation may have to be performed with a representative data set if the
pattern is unknown. For single-threaded I/O host streams, this behavior can be
confirmed by comparing the magnitude of reads to the read pre-fetch statistics.
In cases where many sequential read operations are expected, enabling read
pre-fetch in cache is recommended. If the cache hit percentage is low, the
application tends to be more random and read-ahead should be disabled.
Mid-range percentages may indicate bursts of sequential I/O, but they do not
show whether those bursts are read or write I/O; again, testing with read-ahead
on and off would be required.
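
As a rough illustration of this decision process, the short Python sketch below
applies the heuristic just described. The thresholds and the way the hit
percentage is derived are illustrative assumptions for demonstration, not
values or formulas taken from this document or from the MD3000i tools.

# Illustrative sketch: thresholds and the hit-percentage formula are assumptions.
def suggest_read_ahead(cache_hits, total_reads, high=0.70, low=0.30):
    """Suggest a read-ahead (pre-fetch) setting from a cache hit ratio.

    cache_hits  -- Cache Hits counter from stateCaptureData.txt
    total_reads -- total read operations observed over the same interval
    """
    if total_reads == 0:
        return "no reads recorded; leave read-ahead unchanged"
    hit_pct = cache_hits / total_reads
    if hit_pct >= high:
        return "likely sequential reads: enable read pre-fetch"
    if hit_pct <= low:
        return "likely random access: disable read-ahead"
    return "mixed or bursty pattern: test with read-ahead on and off"

# Example with made-up counter values.
print(suggest_read_ahead(cache_hits=800000, total_reads=1000000))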
In the second-generation firmware, the segment, stripe, and pre-fetch
statistics shown in the lower half of Figure 2 have been reorganized, as seen
in Figure 4.

4.7.4 Stripe Size

For the best performance, the stripe size should always be larger than the
maximum I/O size performed by the host. As identified previously, stripes
should be sized as even powers of two. The average block size can be
identified from the collected data. Additionally, I/Os over 2 MiB are
considered large and are broken out separately from smaller I/Os in the
statistics. While all RAID levels benefit from careful tuning of stripe and
segment size, RAID 5 and 6, with their parity calculations, are the most
dependent on it.
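
As a small worked example of this sizing rule, the Python sketch below rounds
the stripe up to a power of two that is no smaller than the largest host I/O
and derives a per-drive segment size from the number of data drives. The
function names, the example figures, and the assumption that stripe size
equals segment size times the number of data drives are illustrative, not
output from any Dell tool.

# Illustrative sketch only; names and example numbers are assumptions.
def next_power_of_two(n):
    """Smallest power of two greater than or equal to n."""
    p = 1
    while p < n:
        p *= 2
    return p

def suggest_stripe_and_segment(max_host_io_kib, data_drives):
    """Pick a power-of-two stripe size covering the largest host I/O and the
    per-drive segment size implied by the number of data drives."""
    stripe_kib = next_power_of_two(max_host_io_kib)
    segment_kib = max(stripe_kib // data_drives, 1)
    return stripe_kib, segment_kib

# Example: 1 MiB host I/Os on a RAID 5 set with four data drives
# -> 1024 KiB stripe and 256 KiB segments.
print(suggest_stripe_and_segment(max_host_io_kib=1024, data_drives=4))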
Figure 4: Second Generation Firmware - Performance Statistics Broken Out. File: stateCaptureData.txt
*** Performance stats ***
Cluster Reads      6252626
Cluster Writes     3015009
Stripe Reads       5334257
Stripe Writes      2040493
Cache Hits         4685032
Cache Hit Blks     737770040
RPA Requests       982036
RPA Width          3932113
RPA Depth          418860162
Full Writes        653386
Partial Writes     29
RMW Writes         328612
No Parity Writes   0
Fast Writes        0
Full Stripe WT     0
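
The counters shown in Figure 4 are simple name/value pairs, so once they have
been read out of stateCaptureData.txt they can be turned into the ratios used
for tuning. The short Python sketch below is an illustrative assumption of how
that might be done, not a Dell-provided tool; it takes the parity-write
counters from Figure 4 and reports the share of full-stripe writes versus
read-modify-writes, the mix that matters most for RAID 5 and 6.

# Illustrative sketch: assumes the counters were already extracted into a dict.
stats = {
    "Full Writes": 653386,
    "Partial Writes": 29,
    "RMW Writes": 328612,
    "No Parity Writes": 0,
}

parity_writes = sum(stats.values())
if parity_writes:
    full_pct = 100.0 * stats["Full Writes"] / parity_writes
    rmw_pct = 100.0 * stats["RMW Writes"] / parity_writes
    # A low full-write share on RAID 5/6 suggests the stripe and segment sizes
    # are not well matched to the host I/O size, forcing read-modify-writes.
    print(f"full-stripe writes: {full_pct:.1f}%   read-modify-writes: {rmw_pct:.1f}%")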