In January 2011, we ordered 24 Mac minis with the intention of building out our Xgrid cluster. They were A1347 models, each with a 2.66GHz Core 2 Duo processor and 8GB of Samsung DDR3 memory. Each mini also came with two Hitachi 500GB 7200RPM SATA drives preinstalled.
Shortly after receiving the minis, we decided to try out Hadoop, so the machines were repurposed into a trial Hadoop cluster. We installed Ubuntu 11.10 on them, but at the time our system administrator was unable to get Ubuntu to boot on three of the 24 minis, so we ended up with a 21-node cluster.
We mounted them on Macessity’s MX4 rack trays in our server room. Leaving OS X on a 1GB partition and dual booting resulted in a Hadoop cluster with 42 cores and 42 hard disks in total, and 715GB of usable space per node. Our HDFS had about 17 terabytes of space configured.
The first failure came approximately a year after the cluster went up, when the secondary drive in node 12 died. In January 2013 we had a power outage, and while bringing the minis back up I noticed many errors: we suddenly had a bad primary drive in node 7 and a bad secondary drive in node 18. Before I could bring those two nodes back online, the primary drives in nodes 4 and 22 failed as well.
When replacing the hard disks, I found that the temperature sensor connector would often crumble when I disconnected it, especially on the secondary (bottom) hard disk, apparently due to heat transfer issues.
After approximately two years in use, five disks have failed outright, and SMART reports 11 more that are failing, for a total of 16 of 42 drives failed or failing after two years. Mac minis don’t appear to be a good choice for servers, at least not with two hard disks packed tightly into their tiny cases.
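The post doesn’t say how we polled SMART status, but one way to automate the check across nodes is to parse the JSON output of smartctl (requires smartmontools 7.0 or later for the -j flag). A minimal sketch, with an illustrative helper name that isn’t from the original post:

```python
import json

def smart_passed(report_json: str) -> bool:
    """Return True if a smartctl JSON report says overall health passed.

    Expects the output of `smartctl -j -H <device>`, whose top-level
    "smart_status" object carries a boolean "passed" field.
    """
    data = json.loads(report_json)
    return bool(data.get("smart_status", {}).get("passed"))

# On a live node you would feed it real output, e.g. from:
#   smartctl -j -H /dev/sda
```

Run against both drives on every node from cron, a script like this would have flagged the 11 degrading disks well before they failed.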
We haven’t had any failed network interfaces, memory, CPUs, PSUs, or fans, but I was still surprised to read recently, in this article, that some data centers are focusing on offering only Mac minis as servers.