Joined on 01/06/06
Great use of port-multiplier abilities
Pros: When your host system has SATA port-multiplier support, this thing just works (more so with Linux and libata on modern kernel builds than with Windows). You get instant storage expansion at SATA interconnect speeds without having to fit more disks inside a PC case. I've been able to carry a single md RAID array across multiple systems with this enclosure, without picking up a screwdriver and without any loss of data.
Cons: Depending on the host system's hardware and software, this thing can be finicky. Certain combinations of SATA devices on the motherboard, coupled with BIOS settings AND the specific ordering of drives in the enclosure, can trigger bugs that make it appear one drive in the enclosure is not working, so with respect to ease of use, your mileage may vary.
Overall Review: Until I used this on Ubuntu 12.10 I had zero problems and was amazed how well it "just worked". Then I started running into issues where I thought I might have a bad drive: although the AHCI BIOS detected all drives, the initramfs environment would always get hung up on one of them, and one drive would be missing from the /dev file system. By watching the LEDs on the front of the enclosure I could figure out which drive seemed to be having trouble from the blink patterns. Moving the "problem" drive around to different slots would make the failure follow it to whatever slot the drive was put in. Bad drive, right? But when I tried this "problem drive" in slot 4, no problem! I totally blame the default kernel configuration in Ubuntu 12.10, though I haven't spent any time doing kernel debugging to say exactly what the issue is. My expectation is that there will be a kernel update which, after I apply it, makes the issue go away... until some future kernel update regresses it again :D Final thought: my experience is exactly why the LTS Ubuntu versions exist (and Debian stable, for that matter).
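If you hit the same symptom, a quick way to confirm which drive never got a device node is to compare what the kernel enumerated against what's physically in the enclosure. This is only a sketch of the sort of checks involved; the paths and grep patterns assume a typical libata setup, not this enclosure specifically:

```shell
# Sketch of checks for a drive that never shows up in /dev; paths are
# typical for a libata-based Linux system and may differ on yours.

# List disks by ID: the serial number embedded in each name tells you which
# physical drives are present (a missing drive simply has no entry here).
ls -l /dev/disk/by-id/ 2>/dev/null | grep -v -- '-part' || true

# Kernel log lines about SATA link resets or port-multiplier (PMP) probe
# failures usually name the stuck ata port.
dmesg 2>/dev/null | grep -iE 'ata[0-9.]+:|pmp' | tail -n 20 || true
```

Matching the serial numbers that do appear against the labels on the drives narrows the missing one down without pulling any disks.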
Not exactly a value
Cons: low quality
Overall Review: I used this memory in a Shuttle barebones system with a low-power Celeron CPU and an eSATA card, no video card at all, so a very low load on the system. The power to this system went through a surge protector and a decent-quality CyberPower UPS. Within three months of ownership these DIMMs started causing trouble, and memtest verified it. Replaced them with a single 4GB Kingston HyperX DIMM from Office Depot for the same price and all problems were solved. Over the years I simply haven't had consistent results from the quality of G.Skill products. My usual default go-to company for value parts has been Kingston, and for some reason I didn't follow that belief system in this purchasing decision.
Purchased as part of a windows gaming system
Pros: Stable and no issues playing modern windows 3d games (Windows 7).
Cons: None encountered
Overall Review: I have a fairly beefy Cooler Master heat sink on this in a standard tower. No issues with mounting (unlike my B85M-D3H, which required some Dremel work to make the heat sink brackets fit past the beefy covers on the voltage regulators).
SiI 3132 Serial ATA RAID II Controller
Pros: This is a PCI Express card based on the SiI 3132 Serial ATA RAID II controller, with two eSATA ports. For everyone with SATA II JBOD enclosures running modern Linux, these cards tend to be very reliable compared to other SATA II chips with port-multiplier support.
Cons: I have had none
Overall Review: I read through the one-egg reviews for this card. They all seemed to be Windows-related complaints. The RAID support in this thing is totally software based and there is no controller configuration outside of the host operating system. I currently operate two of these chips (this card and another manufacturer's version with a different PCB layout, both with dual eSATA ports) in a system running ZFS on Linux, with one Sans Digital JBOD connected to each and the zpool consisting of a mirror vdev with disks split between the controllers. I see sustained write performance of 60+ MB/s to 7200 RPM WD Red drives, and read performance significantly higher than that. *These are trivial read/write tests*, done with dd while clearing caches between runs, on a system also running a PostgreSQL database for Zabbix (monitoring about 10 systems) as well as Logstash collecting data for Elasticsearch. TPS throughput in iostat showed 200-400 per disk during the testing, for both reads and writes.
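For reference, a "trivial" dd throughput check of the kind described above looks roughly like this. The file path and sizes are placeholders, not the exact commands used for the numbers quoted; the cache-drop step needs root:

```shell
# Rough sequential-throughput check with dd. TESTFILE should live on the
# array under test; /tmp here is only a placeholder.
TESTFILE=/tmp/dd-throughput-test

# Write test: conv=fsync forces the data to disk before dd reports its rate
# (the rate is the last line of dd's stderr output).
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fsync 2>&1 | tail -n 1

# Drop the page cache between runs so the read test hits the disks rather
# than RAM (root only):
#   sync && echo 3 > /proc/sys/vm/drop_caches

# Read test:
dd if="$TESTFILE" of=/dev/null bs=1M 2>&1 | tail -n 1

rm -f "$TESTFILE"
```

Without the cache drop between the write and the read, the read number mostly measures memory bandwidth, which is why the reviewer's caveat about clearing caches matters.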