Higher capacity can mean better performance
With hard drives, the faster the spindle speed, the faster the drive. The amount of cache also comes into play, but by and large, a 10,000-rpm drive is faster than a 7200-rpm drive, which is in turn faster than 5400-rpm and 4200-rpm drives. That's an easy and intuitive metric for comparison shopping.
There is no spindle in an SSD, but there is a comparative metric directly related to capacity. Up to around the 256GB level, PCWorld's testing has shown that a larger drive will be faster than a smaller drive, with other factors (such as the controller and the type of NAND) being equal. To understand why, you need to understand how data is written to SSDs.
With a hard drive, data is basically written serially, down a single channel. The stream may be interrupted by existing data, but ideally it's all written in a neat, uninterrupted line. Inside an SSD, data is written in a scattershot, parallel fashion down multiple channels to the multiple NAND chips at once. The more NAND chips an SSD has, the more channels it has to write/read across, and the faster the drive will be.
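The effect of that channel parallelism can be sketched with a toy model. The channel counts and per-channel speed below are illustrative round numbers, not figures for any real drive:

```python
# Toy model: an SSD stripes a write evenly across N independent NAND channels,
# and all channels work in parallel. Speeds here are made up for illustration.
def write_time(total_mb, channels, mb_per_sec_per_channel=50):
    per_channel = total_mb / channels          # data striped evenly
    return per_channel / mb_per_sec_per_channel  # channels run in parallel

print(write_time(1000, 2))  # small drive, 2 channels: 10.0 seconds
print(write_time(1000, 8))  # larger drive, 8 channels: 2.5 seconds
```

Quadrupling the channel count quarters the write time in this idealized model, which is why a larger drive with more NAND packages tends to benchmark faster.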
You can find a perfect example in Intel's latest 525 mSATA (Mini-SATA) drives. Read the specs, and you'll see that the 30GB model is rated for 7000 4KB read/write operations per second (IOPS) and 200 MBps of sustained reading, while the 240GB version is rated at 46,000 IOPS and 550 MBps, even though both drives use the same 25nm NAND and identical SandForce controllers.
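Note that the IOPS and sustained-read figures measure different things. A quick back-of-the-envelope conversion of the quoted IOPS ratings into raw throughput shows why both numbers appear on the spec sheet:

```python
# Convert 4KB random IOPS into approximate throughput (MB per second).
# Spec figures are the Intel 525 ratings quoted above; the math is a sketch.
def iops_to_mbps(iops, block_kb=4):
    return iops * block_kb / 1024

print(f"30GB model:  ~{iops_to_mbps(7_000):.0f} MBps from random 4KB ops")
print(f"240GB model: ~{iops_to_mbps(46_000):.0f} MBps from random 4KB ops")
```

Random small-block throughput (roughly 27 MBps versus 180 MBps here) is far below the sequential ratings of 200 and 550 MBps, so the two specs describe distinct workloads, but the capacity advantage shows up in both.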
SSD optimization is unnecessary
Until recently, the common SATA 3Gbps interface was fine for any type of storage. A modern SATA 6Gbps SSD is backward-compatible with that standard, but it requires a SATA 6Gbps interface to realize its full performance potential. Soon enough, even that standard won't be fast enough, as the fastest SSDs we've tested can already write at speeds nearing 5Gbps.
Common wisdom indicates that there's really no way to optimize an SSD using a software utility. When you think of the manner in which data is written--scattered all over the drive--and the lack of a read/write head that you must worry about positioning, it's clear that the optimization techniques developed for mechanical hard drives don't apply to SSDs. In fact, the way an SSD presents data to your computer's operating system bears zero resemblance to how it's stored on the drive. Wasting precious write cycles trying to optimize an SSD is counterproductive.
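The disconnect between the logical view and the physical layout can be illustrated with a toy flash translation layer (FTL). The mapping below is randomly generated purely for illustration; real controllers use far more sophisticated schemes:

```python
import random

# Toy FTL: the OS sees a neat run of contiguous logical blocks, but the
# controller maps each one to a scattered physical page. Defragmenting the
# logical view would not make the physical layout any more contiguous.
random.seed(0)  # fixed seed so the illustration is repeatable
physical_pages = random.sample(range(1000), 8)
ftl = {logical: phys for logical, phys in enumerate(physical_pages)}

for logical, phys in ftl.items():
    print(f"logical block {logical} -> physical page {phys}")
```

Because the operating system only ever sees the logical side of that table, a defragmenter that carefully reorders logical blocks accomplishes nothing physically and simply burns write cycles.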
How TRIM prevents performance degradation
There was a time when an SSD's performance would slowly degrade. That's because writing data to a previously used NAND cell is a two-step process: The cell must be erased before it can be rewritten. To increase write performance, an SSD controller would simply mark a used cell as no longer active and write data only to cells that had never been used. Once all the cells were used a single time, the drive's write performance would deteriorate because its controller had to erase cells before it could write to them again.
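The erase-before-write penalty, and the way TRIM sidesteps it, can be sketched with a toy cost model. The timing units below are arbitrary; only the ratio matters:

```python
# Toy model: erasing a used NAND block is much slower than writing one.
# With TRIM, the OS tells the drive which blocks hold deleted data, so the
# controller can erase them in the background during idle time. Costs are
# arbitrary units chosen for illustration.
ERASE_COST, WRITE_COST = 5, 1

def write_cost(block_state):
    # A fresh or pre-erased block needs only a write;
    # a stale block must be erased first, then written.
    return WRITE_COST if block_state == "erased" else ERASE_COST + WRITE_COST

# Without TRIM: once every cell has been used, each write hits a stale block.
no_trim = sum(write_cost("stale") for _ in range(100))
# With TRIM: stale blocks were identified and erased ahead of time.
with_trim = sum(write_cost("erased") for _ in range(100))
print(no_trim, with_trim)  # 600 100
```

In this simplified model, a full drive without TRIM pays the erase penalty on every write, while a TRIM-enabled drive writes at fresh-drive speed, which is exactly the degradation the feature was designed to prevent.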