I’ve been reading up on different types of high-end workstations and am a bit puzzled about an expensive drive alternative to the usual SATA: an “SAS” hard disk. What’s SAS and why would I want to even consider it?
Great question. I had no idea, so instead I asked Matt Lawson of Microsel to illuminate us on what SAS drives are and why they might be a great choice for your new system. Here’s what he shared:
SAS stands for Serial Attached SCSI. Basically, a SAS drive utilizes the same form factor as a SATA drive but has several high-performance advantages. First of all, there’s the platter speed. While typical SATA drives operate at 7200RPM, a SAS drive operates at 10K or 15K. Although the platter speed is up to double that of SATA, the MTBF (Mean Time Between Failures) remains at the industry standard of 1.2 million hours.
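That 1.2-million-hour MTBF can be translated into a rough annual failure rate. A quick back-of-the-envelope calculation (assuming a constant failure rate over the year, which is a simplification):

```python
# Rough annual failure rate implied by a 1.2-million-hour MTBF,
# assuming a constant (exponential) failure rate -- a simplification,
# since real drives fail more often early and late in life.
hours_per_year = 8760
mtbf_hours = 1.2e6
afr = hours_per_year / mtbf_hours
print(f"{afr:.2%}")  # 0.73%
```

In other words, out of a large fleet of such drives, you would expect a bit under 1% to fail in any given year.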
SAS drives are typically utilized in server and high-end workstation environments where speed and I/O frequency reign supreme. Now, that being said, there are oh SOOOO many factors in building a screaming fast, but rock-solid, workstation or server.
Where speed is concerned, you need to be looking at the right drives first and foremost. Nowadays, I tend to spec in a couple of SSDs (Solid State Drives) in a RAID 0 (Redundant Array of Inexpensive Disks) for the boot and applications drive, then for scratch disks/additional storage, I like to do 3-5 or more SAS drives in a RAID 5 (best mix of redundancy and speed, with the addition of parity).
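As a rough sketch of how usable capacity works out for these RAID levels (the function name is illustrative and the capacities are idealized; real arrays lose a little extra space to metadata):

```python
def usable_capacity(level, drives, size_tb):
    """Idealized usable space for common RAID levels (illustrative helper)."""
    if level == 0:               # striping: all capacity, no redundancy
        return drives * size_tb
    if level == 1:               # mirroring: a two-drive mirror keeps one copy
        return size_tb
    if level == 5:               # striping + parity: one drive's worth of parity
        if drives < 3:
            raise ValueError("RAID 5 needs at least 3 drives")
        return (drives - 1) * size_tb
    raise ValueError("unsupported level")

# e.g. four 1 TB SAS drives in RAID 5 -> 3 TB usable, survives one failure
print(usable_capacity(5, 4, 1.0))  # 3.0
```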
That being said, there are larger considerations, such as the RAID stripe size (or stripe width). The stripe size determines what size blocks of data will be sent to each drive in the array. It’s imperative (where speed is concerned) that the engineer do his job in determining what the server will be used for. If the application the server is built for houses small files, or is a file server for smaller files, you want to choose a small stripe size, say 256KB or so. Now, for people doing database work, photo/video/audio editing, rendering or production, they need as big a stripe as the controller allows for. For those types of applications, a stripe size of 2MB or higher (if allowed by the controller) is a must!
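To illustrate how stripe size maps chunks of a file onto drives, here is a simplified round-robin sketch (real RAID 5 also rotates a parity block across the drives, which this toy model ignores; the function name is made up for illustration):

```python
def stripe_map(file_size, stripe_size, n_drives):
    """Which drive each consecutive stripe-sized chunk of a file lands on,
    in a simple round-robin striping model (no parity rotation)."""
    chunks = -(-file_size // stripe_size)   # ceiling division
    return [i % n_drives for i in range(chunks)]

# A 1 MB file on a 3-drive array with a 256 KB stripe becomes four chunks:
# all three drives can serve reads in parallel, and drive 0 gets a second chunk.
print(stripe_map(1024 * 1024, 256 * 1024, 3))  # [0, 1, 2, 0]
```

With a small file and a large stripe, only one drive ends up holding data, which is why big stripes pay off mainly for large sequential workloads.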
Last but not least, where stability is concerned, the drives must be properly paired (this is something that 90% of the builders in the world are oblivious to which, in turn, can make MY job very difficult as it tends to give the entire white-box market a black eye). If drives in a RAID array are not properly matched by Firmware version, the odds are that at least one of the drives will fall out of the array within the first year. Depending on the type of array chosen, this could simply mean the company has to foot the bill for higher hardware costs, or be as bad as catastrophic data loss.
There are a few key factors I like to hit on when I’m building a client a new workstation or server for the first time. I match Firmware on the drives, step codes on the processors (if they are doing a dual processor system) and match batch codes on the RAM. Those three factors will determine, from a stability stand-point, whether or not the server will stand the test of time.
Contributor Matt Lawson is a native Coloradoan who has been engineering custom electronics systems for the past decade and is a certified Systems Engineer for Microsel of Colorado.
Hi there, do you think it is a good idea to implement RAID 1 on the finance server, since it is fast, there is redundancy, and the read/write performance is good with it?
Hi..
Recently I purchased an IBM server for my office, with SAS hard disks. I can’t install an operating system on that hard disk. If anyone can identify the problem, please help me.
I’m at a loss with the RAID 5 and 10 controller explanation; could you go over the details again?
RAID 10 is required for redundancy nowadays. RAID 5 is not sufficient to keep a system up after one drive failure, due to read/write failures and the size of new hard drives.
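The commenter’s point about RAID 5 and large modern drives can be made concrete with a rough estimate. This sketch assumes a Poisson model of unrecoverable read errors (UREs) and the commonly quoted consumer-drive spec of one URE per 10^14 bits read; enterprise SAS drives are often rated an order of magnitude better. The function name is illustrative:

```python
import math

def rebuild_success_probability(remaining_drives, drive_bytes, ure_per_bit=1e-14):
    """Chance a RAID 5 rebuild reads every surviving bit without hitting an
    unrecoverable read error, using a Poisson approximation."""
    bits_read = remaining_drives * drive_bytes * 8
    return math.exp(-ure_per_bit * bits_read)

# Four 2 TB drives, one fails: the rebuild must read the 3 survivors in full.
p = rebuild_success_probability(3, 2e12)
print(f"{p:.0%}")  # 62% -- roughly a 38% chance the rebuild hits a URE
```

Under these assumptions, bigger drives mean more bits read per rebuild and a worse chance of completing it, which is the usual argument for RAID 6 or RAID 10 on large arrays.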
Regarding at least part of the Windows Pagefile needing to be on the boot partition, this is incorrect.
I have boxes running 2000 Server, 2003 Server, 2008 Server and XP Pro, all of which have been setup with no Pagefile on the Boot partition. For best performance, setup dedicated partitions/disks for all pagefile activity.
See: My Computer > Properties > Advanced > Performance > Settings > Advanced > Virtual memory > Change > select Boot partition > No paging file.
The 12MB minimum you talk of is a ‘system-wide’ minimum that the system must have available to it. However, the disks that the pagefile is spread over (to make up the 12MB minimum or more) are totally configurable.
Only one addendum here. My statement regarding Windows and the Page File (PF) was erroneously tagged as incorrect.
Windows, at least through Vista, REQUIRES that part of the PF be on the system drive. With Windows XP, this is a small space, approximately 12MB, but it MUST be on the system disk or the OS will crash.
Good show people, thanks for the info.
To add to Chrystoph’s picking up of the issues regarding using SSD for the applications drives: since SSDs have no drive heads and an almost non-existent seek time, there is no point in using RAID 0, which is designed to mitigate the seek time in hard disks and maximise throughput. Considering the write speed currently drops through the floor after a few weeks with many SSDs, and a RAID 0 configuration would likely make the fragmentation issue that causes this worse, in addition to providing two points of failure for no increase in speed… I’d be very wary of such a configuration. All the information seems really good, a great summary… until “Nowadays, I tend to spec in…”, after which it reads more like a geek’s unrealistic wish list than something a systems administrator would actually do.
I really had no idea what it was. I now think I get it, but got a little lost in the third paragraph!
Good post, TY.
@Chrystoph:
Personally, I don’t see what using Windows has to do with where the pagefile goes. Anyone who’s going to be actually BUILDING a computer (be it Windows, Linux, or Mac) is going to be computer savvy enough to actually know how to move the pagefile to another drive, so that’s really a total non-issue. I maintain several Windows machines for friends and relatives and clients where the pagefile has been moved from the primary drive to one of the other drives (for varying reasons) and it’s never been any kind of problem, nor have I ever been required to have any portion of the pagefile on the primary drive.
Regarding the other concern you mention; Yes, with a Windows machine you would probably want some kind of backup for the OS and Applications partitions/drives (as it can be nothing short of nightmarish reinstalling Windows from scratch at times), but for Linux (and presumably other OS’s) your OS and applications are going to be the EASIEST part of a recovery operation. FAR more important is protection of user data and configuration files. (I can have ALL my software reinstalled from scratch in under an hour on either Mac or Linux, but if I ever lost the user data and configuration files, that could take AGES to rebuild from scratch.) This is easily done by placing the /home (or Users on Mac) and /etc (systemwide config files) partitions/folders on the RAID5 and/or implementing an automatic backup system to ensure the safety of those critical files.
As to the assertion that “most people will be using Windows”, that entirely depends on the environment at hand, what the computers are used for, who they are used by, and WHICH “most people” you might be talking about. In the shop where I work, “most people” are Mac users. At home and among my friends “most people” are Linux users. Among my gamer friends and among the average home users your statement holds true as “most people” in those circles use Windows. Right tool in the right place for the right task.
While I can certainly see the advantages of using a Solid State drive for relatively static information such as OS and application installation, two things immediately occur to me.
First, most people will be using Windows. As a result of this, at least part of the page file will have to be on the SSD. That means extensive writing to the drive, shortening its service life. That can be mitigated by reducing the page file on the SSD to the minimum and shifting the rest of the requirement to other storage on the machine.
Second, RAID 0 (zero) does not provide any redundancy, meaning that, if any one device in the drive array fails, the array is dead without possibility of recovery. This does not seem a good path for the OS and application data on a server.
Personally, I would use RAID 1 (Mirroring) for the OS/Application requirements. While it uses the space less effectively, it also provides failover.
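The trade-off Chrystoph describes can be sketched numerically, assuming independent drive failures (a simplification; real-world failures often correlate, and this ignores rebuild windows). The function name and the 3% annual failure rate are illustrative:

```python
def array_failure_probability(level, n, p):
    """Annual data-loss probability for an n-drive array, given a per-drive
    annual failure probability p and independent failures (simplified model)."""
    if level == 0:                 # RAID 0 dies if ANY drive dies
        return 1 - (1 - p) ** n
    if level == 1:                 # RAID 1 dies only if ALL mirrors die
        return p ** n
    raise ValueError("unsupported level")

# Two drives, each with an assumed 3% annual failure rate:
print(round(array_failure_probability(0, 2, 0.03), 4))  # 0.0591
print(round(array_failure_probability(1, 2, 0.03), 4))  # 0.0009
```

Under these assumptions, striping two drives roughly doubles the chance of losing everything, while mirroring them cuts it by well over an order of magnitude, at the cost of half the capacity.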