
Here are the Dell MD1000 direct attached storage benchmarks I promised earlier. The performance is great. I'm still trying to squeeze a few more MB/s out of the read performance. I've also included performance per disk; it looks as though the 4- and 8-disk RAID10 sets have the best performance per disk. All the disks are 73 GB Seagate 15K SAS drives.

| Array       | Write (MB/s) | Write per disk | Rewrite (MB/s) | Rewrite per disk | Read (MB/s) | Read per disk |
|-------------|--------------|----------------|----------------|------------------|-------------|---------------|
| 2 x RAID1   | 35.0         | 17.5           | 25.0           | 12.5             | 99.9        | 50.0          |
| 4 x RAID10  | 94.2         | 23.5           | 66.7           | 16.7             | 252.7       | 63.2          |
| 6 x RAID10  | 100.0        | 16.7           | 72.1           | 12.0             | 295.5       | 49.3          |
| 8 x RAID10  | 166.0        | 20.7           | 100.4          | 12.6             | 434.7       | 54.3          |
| 10 x RAID10 | 164.3        | 16.4           | 97.5           | 9.8              | 404.5       | 40.4          |
| 12 x RAID10 | 186.2        | 15.5           | 104.9          | 8.7              | 425.5       | 35.5          |
| 14 x RAID10 | 195.7        | 14.0           | 105.7          | 7.6              | 450.2       | 32.2          |
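
The per-disk columns are just the array throughput divided by the number of spindles. A quick sketch in Python (values copied from the table above; the variable names are mine):

```python
# Sequential throughput in MB/s for each array, copied from the table:
# disk count -> (write, rewrite, read)
results = {
    2: (35.0, 25.0, 99.9),     # 2 x RAID1
    4: (94.2, 66.7, 252.7),    # 4 x RAID10
    6: (100.0, 72.1, 295.5),
    8: (166.0, 100.4, 434.7),
    10: (164.3, 97.5, 404.5),
    12: (186.2, 104.9, 425.5),
    14: (195.7, 105.7, 450.2),
}

for disks, (write, rewrite, read) in sorted(results.items()):
    # Divide each array-level figure by the spindle count
    print(f"{disks:2d} disks: "
          f"write {write / disks:5.1f}, "
          f"rewrite {rewrite / disks:5.1f}, "
          f"read {read / disks:5.1f} MB/s per disk")
```

The 4-disk RAID10 set comes out on top for per-disk read throughput, with the 8-disk set in second place, matching the observation above.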


[Chart: Dell MD1000 Write (MB/s) Performance]

[Chart: Dell MD1000 Rewrite (MB/s) Performance]

[Chart: Dell MD1000 Read (MB/s) Performance]


6 Responses to “Dell DAS MD1000 Benchmarks”

  1. Chris

    No SATA benchmarks yet, but I'll have some soon. I just shifted some disks around and added some 500 GB Seagate ES SATA drives.

  2. Gerg

    I’ve been trying to configure an MD1000 and getting only somewhat better numbers than you are. What surprises me is that you call these numbers “great”. I was expecting *much* better and have been stumped tracking down the bottleneck.

    Looking at the sequential read throughput, you’re seeing 250 MB/s with 4 drives that should be capable of about double that. And you’re seeing only 450 MB/s with 14 drives, which should be capable of saturating the x4 connection at 1.2 GB/s (assuming you have a PCIe system). 450 MB/s is the kind of throughput you should be getting from 4 decent drives.

    Have you found in the past year any further information on why the MD1000 falls so far short of its theoretical potential?

    • Chris

      The numbers are “great” for the price of the system compared to other low-end direct attached storage. By no means are they representative of what can be achieved with a good-quality system. Of course, it is still slow compared to the theoretical maximum.

      I’ve got better numbers using a 3ware card (instead of the Perc 5/e), and some have posted better numbers with the Perc 6.

      I have heard the MD3000 suffers from the same bottleneck. Some have found bad controller boards in the unit itself, and others have posted far better numbers in Windows than in Linux – drivers?

      Unfortunately I haven’t had the ambition to push beyond the numbers I have, since throughput is secondary to the IOPS (which are fine) for my workload.

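
Gerg's 1.2 GB/s figure for the x4 link checks out. A hedged sketch of the arithmetic, assuming a SAS 1.0 signalling rate of 3 Gb/s per lane with 8b/10b line encoding (both are assumptions about this setup, not stated in the post):

```python
lanes = 4            # x4 wide SAS port (assumed, per Gerg's comment)
gbps_per_lane = 3.0  # SAS 1.0 signalling rate in Gb/s (assumed)
encoding = 8 / 10    # 8b/10b line encoding: 8 payload bits per 10 line bits

# Theoretical payload bandwidth of the link in GB/s
link_gb_per_s = lanes * gbps_per_lane * encoding / 8
print(f"Theoretical x4 link bandwidth: {link_gb_per_s:.1f} GB/s")

# Best measured read result from the table: 14 x RAID10
measured_mb_per_s = 450.2
print(f"Measured: {measured_mb_per_s / 1000:.2f} GB/s "
      f"({measured_mb_per_s / (link_gb_per_s * 1000):.0%} of the link)")
```

Even the best read result in the table uses well under half of the link's theoretical payload bandwidth, which is why the discussion above points at the controller rather than the disks or the cable.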
