Original Link: https://www.anandtech.com/show/5719/ocz-vertex-4-review-256gb-512gb



Thirteen months ago OCZ announced its intention to acquire Indilinx, the SSD controller maker that gave Intel its first taste of competition in the consumer market in 2009. Eight months later, OCZ launched its first post-acquisition SSD based on Indilinx silicon. Today, just five months after the launch of the Octane, OCZ is officially releasing the Vertex 4 – based on its Indilinx Everest 2 silicon. In less than a year, OCZ has brought to market more Indilinx powered controllers than Indilinx did in the previous three years. It's rare to see the fruits of an acquisition so quickly, but if there's anything OCZ's CEO Ryan Petersen is good at, it's pushing for an aggressive schedule.

Rather than call this drive the Octane 2, OCZ went with Vertex 4, indicating its rightful place at the top of OCZ's SSD lineup. The implications run even deeper: it marks the first time in two years that a Vertex drive will ship without a SandForce controller inside. Make no mistake, while the Octane was a shot across SandForce's bow, the Vertex 4 means war. While OCZ continues to ship tons of SandForce controllers, the future for the company is Indilinx, and the Vertex 4 is just the beginning. OCZ will continue to ship the Vertex 3 in parallel, and should a future SandForce controller make competitive sense to use, OCZ will definitely consider it, but the intention is to build the fastest silicon internally and use it as much as possible.

The dramatic departure in naming also embodies just how different Everest 2 is from the original Everest controller. OCZ claims there's minimal shared code between the Octane and Vertex 4 firmware, and the two drives perform very differently. Write amplification was always a concern with the Octane; it's no longer a major issue with the Vertex 4. OCZ and its Indilinx team have reduced write amplification to roughly the levels of Intel's controllers:

Estimated Worst Case Write Amplification
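
As a quick refresher, write amplification is simply the ratio of what the controller physically writes to NAND versus what the host asked it to write; a value near 1.0 means garbage collection and metadata add very little overhead:

```latex
\mathrm{Write\ Amplification} = \frac{\text{data written to NAND}}{\text{data written by host}}
```

SandForce can report values below 1.0 thanks to its real-time compression; Everest 2 approaches 1.0 without compressing anything.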

Indeed write performance is improved significantly as a result. A look at the spec sheet gives us the first indication of what's to come:

OCZ SSD Comparison

|  | OCZ Vertex 4 | OCZ Vertex 3 | OCZ Octane | OCZ Vertex 2 |
|---|---|---|---|---|
| Current NAND | 25nm IMFT MLC | 25nm IMFT MLC | 25nm IMFT MLC | 25nm IMFT MLC |
| Capacities | 128, 256, 512GB | 60, 90, 120, 240, 480GB | 128, 256, 512GB, 1TB | 60, 120, 240GB |
| Controller | Indilinx Everest 2 | SF-2281 | Indilinx Everest | SF-1221 |
| Max Seq Read | 535 MB/s | 550 MB/s | 480 MB/s | 285 MB/s |
| Max Seq Write | 475 MB/s | 500 MB/s | 330 MB/s | 275 MB/s |
| Max Random Read | 95K IOPS | 60K IOPS | 26K IOPS | - |
| Max Random Write | 85K IOPS | 85K IOPS | 35K IOPS | 50K IOPS |
| AES-256 Encryption | Yes | Yes | Yes | No |

Regardless of the nature of the data (compressible or incompressible), the Everest 2 powered Vertex 4 promises better random write performance than any other consumer SSD on the market today. And it delivers:

Desktop Iometer - 4KB Random Write (4K Aligned) - 8GB LBA Space

Random write speed is nothing short of incredible. OCZ is able to equal SandForce's write speed on highly compressible data without resorting to any sort of data compression – this is a purely algorithmic advantage. While the original Everest was a work in progress by the time OCZ acquired the company, Everest 2 is the first Indilinx project OCZ had complete control over. Apparently OCZ's CEO, Ryan Petersen, had a lot of his own input built into this design.

OCZ Vertex 4 Lineup

|  | 512GB | 256GB | 128GB |
|---|---|---|---|
| NAND Configuration | 16 x 32GB 25nm Intel sync NAND | 16 x 16GB 25nm Intel sync NAND | 8 x 16GB 25nm Intel sync NAND |
| DRAM | 2 x 512MB DDR3-800 | 2 x 512MB DDR3-800* | 2 x 512MB DDR3-800* |
| Controller | Indilinx Everest 2 | Indilinx Everest 2 | Indilinx Everest 2 |
| Max Seq Read | 535 MB/s | 535 MB/s | 535 MB/s |
| Max Seq Write | 475 MB/s | 380 MB/s | 200 MB/s |
| Max Random Read | 95K IOPS | 90K IOPS | 90K IOPS |
| Max Random Write | 85K IOPS | 85K IOPS | 85K IOPS |
| MSRP | $699 | $349 | $179 |

*Only 512MB of the DDR3 is used; future versions will ship with 2 x 256MB devices

The Vertex 4 will be available in three capacities initially – 128GB, 256GB and 512GB – with a 1TB version to follow. Drives should be available today but in very limited quantities, likely only in the 128GB capacity at the start. MSRPs are fairly competitive with other 6Gbps drives on the market today:

SSD Pricing Comparison

|  | 128GB | 256GB | 512GB |
|---|---|---|---|
| Crucial m4 | $154.99 | $299.99 | $549.99 |
| Intel SSD 520 | $184.99 | $344.99 | $799.99 |
| Samsung SSD 830 | $174.99 | $299.99 | $779.99 |
| OCZ Octane | $199.99 | $339.99 | $849.99 |
| OCZ Vertex 3 | $199.99 | $339.99 | $1199.99 |
| OCZ Vertex 4 | $179 (MSRP) | $349 (MSRP) | $699 (MSRP) |


Inside the Vertex 4

At a high level, Everest 2 is architecturally very similar to its predecessor. The SoC is still built on a 65nm G process and maintains the same basic architecture, although clock speed is up from 333MHz to 400MHz. There are also microarchitectural tweaks at work here; limits present in Everest 1 have been removed in Everest 2, and Everest 2's sequencer is much improved. Getting more detail than this is basically impossible.


Everest 1's Block Diagram, Similar to Everest 2

Just as with previous OCZ drives, the Vertex 4 ships with its own custom designed PCB. Unlike most SSDs we've seen, the Vertex 4 places its Everest 2 controller in the center of the PCB, with NAND fanning out in a circle around it. The Indilinx SoC uses the drive housing as a heatsink; two thermal pads help conduct the heat away from the chip:


OCZ Vertex 4 512GB

In the 512GB version we find quad-die packages of Intel 25nm synchronous MLC NAND, sixteen of them in total. The 256GB version keeps the same number of packages but drops the die count per package from 4 to 2. Despite launching with Intel NAND, OCZ claims broad support for Flash from other vendors. Touting that advantage really only matters if there's cheaper NAND available on the market; today, Intel NAND is still priced competitively enough to make it an obvious fit.


OCZ Vertex 4 512GB

OCZ is currently testing 20nm Intel NAND and 24nm Toggle Mode NAND with the Everest 2. Both are functional at this point, but neither is optimized in the current firmware, so enabling support for either option will require a firmware update. Should Toggle Mode NAND pricing or performance offer a measurable advantage, OCZ will introduce a separate product based on it. Currently, low-density 24nm Toggle NAND isn't price competitive and thus isn't used at launch, although OCZ did add that a 1TB Vertex 4 would almost certainly use Toggle NAND, as Toshiba's high-density pricing (64GB, octal-die packages) is better than IMFT's these days. We're still at least a quarter away from seeing 20nm NAND used in volume.

With its sights fixed on OEMs, OCZ is far more concerned about drive longevity than it once was. OEMs don't like having to re-qualify components; they want a steady supply of the same product for the lifespan of whatever system they're selling. If a system is shipping to a government buyer, that lifespan can be extremely long. As a result, OCZ wanted to build a controller that was as forward looking as possible. Everest 2 needed to be able to migrate to 20nm without requiring another qualification pass from OCZ's customers. As NAND cell sizes shrink, error rates go up. Only so much can be dealt with at the NAND factory; the controller is extremely important to maintaining data validity.

The original Octane could correct up to 78 random bits for every 1KB of data using its BCH ECC engine. While that was more than sufficient for 25nm NAND, OCZ is planning for the future with Everest 2 and implemented a more robust ECC engine capable of correcting up to 128 random bits for every 1KB of data. OCZ believes Everest 2's ECC capabilities are enough to ensure reliable operation with 20nm IMFT NAND, and perhaps the first 1x-nm IMFT NAND as well. For consumers, however, this has no bearing on the Vertex 4's performance or reliability as a drive today (Octane's ECC engine was already enough).
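
To put those numbers in perspective, here's the raw bit error rate each engine can tolerate per 1KB chunk of user data (a back-of-the-envelope calculation that ignores parity overhead and codeword layout, neither of which OCZ discloses):

```python
# Correctable random bit errors per 1KB (8192-bit) chunk of user data
engines = {"Everest 1 (Octane)": 78, "Everest 2 (Vertex 4)": 128}
bits_per_chunk = 1024 * 8

for name, correctable in engines.items():
    # The highest raw bit error rate the engine can fully correct
    print(f"{name}: up to {correctable / bits_per_chunk:.2%} raw BER")
# Everest 1 (Octane):  up to 0.95% raw BER
# Everest 2 (Vertex 4): up to 1.56% raw BER
```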

1GB of DDR3-800 On-Board

The Everest 2 controller is flanked by a 512MB Micron DDR3-800 DRAM device. Another 512MB chip sits on the flip side of the PCB, bringing the total to a whopping 1GB of DDR3 memory on-board. OCZ makes no effort to hide the DRAM's purpose: Everest 2 will prefetch read requests from NAND into DRAM for quick servicing to the host. When serviced from DRAM, reads should complete as fast as the interface allows – in other words, the limit is the 6Gbps SATA interface, not the SSD.
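
OCZ hasn't detailed Everest 2's prefetch logic, but the general idea is straightforward. Here's a minimal sketch, assuming simple sequential read-ahead into a DRAM-backed LRU cache (the page size, read-ahead depth and all names are illustrative, not OCZ's actual parameters):

```python
from collections import OrderedDict

PAGE = 4096           # logical page size (assumed)
READ_AHEAD = 8        # pages prefetched past each request (assumed)
CACHE_PAGES = 131072  # 512MB of usable DRAM / 4KB pages

class PrefetchCache:
    def __init__(self, nand_read):
        self.nand_read = nand_read  # callable: page number -> page data
        self.cache = OrderedDict()  # LRU: page number -> data in DRAM

    def _fill(self, page):
        if page not in self.cache:
            self.cache[page] = self.nand_read(page)
            if len(self.cache) > CACHE_PAGES:
                self.cache.popitem(last=False)  # evict least recently used

    def read(self, page):
        if page in self.cache:
            self.cache.move_to_end(page)  # hit: serviced at SATA speed
        else:
            self._fill(page)              # miss: must touch NAND
        data = self.cache[page]
        # Speculatively pull the next few pages into DRAM so that a
        # sequential reader keeps hitting the cache instead of NAND.
        for p in range(page + 1, page + 1 + READ_AHEAD):
            self._fill(p)
        return data
```

Feed it any nand_read callable and every sequential read after the first is served out of the cache, which is exactly the behavior OCZ describes.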


OCZ Vertex 4 256GB

In order to get 128GB, 256GB and 512GB drives to market as quickly as possible, OCZ is shipping them all with 1GB of DRAM on-board; the 128GB and 256GB drives simply won't use all of it. A future revision of the Vertex 4 will pair the 128/256GB drives with 512MB of memory instead to save on costs.


OCZ Vertex 4 256GB

The amount of memory bandwidth offered to the Everest 2 controller is insane – we're talking about 3.2GB/s, as much as many modern day smartphone SoCs, and as much as a desktop PC had a decade ago.
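
That 3.2GB/s figure falls directly out of the DRAM configuration, assuming the two x16 devices form a single 32-bit bus (OCZ doesn't publish the bus width, so treat this as our inference):

```python
transfer_rate = 800e6  # DDR3-800: 800 million transfers per second
bus_width_bytes = 4    # 32-bit bus, e.g. two x16 DRAM devices (assumed)

print(f"{transfer_rate * bus_width_bytes / 1e9:.1f} GB/s")  # 3.2 GB/s
```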

OCZ wouldn't tell me whether the cost of shipping 1GB of DDR3 memory outweighs the savings from not having to pay SandForce for silicon anymore. Even though it owns Indilinx, R&D and manufacturing aren't free. All of that factored in, the Everest 2 controller is likely cheaper than SandForce's 2281, but it's not clear to me whether the added cost of DRAM offsets that gap. None of this matters to end users, but it's an interesting discussion nonetheless; OCZ will have to deliver aggressive pricing regardless of its internal cost structure.

AES-256 Encryption

Similar to the Octane/Everest 1, all data written to NAND in the Vertex 4 goes through Everest 2's 256-bit AES encryption engine. Modern SSDs scramble data before writing to NAND to begin with (certain data patterns are more prone to errors in flash than others) and encryption offers security benefits in addition to working as a good scrambling engine. If you're going to support scrambling, the jump to enabling encryption isn't all that far.
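
OCZ doesn't disclose the exact construction Everest 2 uses, but the scrambling effect is easy to demonstrate with AES-256 in CTR mode. This is an illustrative sketch using the third-party cryptography package, not the controller's actual implementation:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)    # AES-256 key; the drive generates its own at the factory
nonce = os.urandom(16)  # per-region counter/nonce (illustrative)

# A worst-case NAND pattern: 4KB of identical bits
page = b"\xff" * 4096

encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
scrambled = encryptor.update(page) + encryptor.finalize()

# The ciphertext is statistically random, so no error-prone bit pattern
# ever reaches the NAND -- the encryption engine doubles as a scrambler.
ones = sum(bin(b).count("1") for b in scrambled)
print(f"{ones / (len(scrambled) * 8):.1%} of bits set")  # ~50.0%
```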

Similar to other SSDs, the Vertex 4's encryption key is generated randomly at the factory. Unfortunately, also similar to other SSDs, there's no client-facing tool to reset or manage the key. I believe the key is regenerated upon a secure erase and it can be tied to an ATA password; however, what I'd really like to see is the bundling of a software package that allows users to generate a new key and require a password at boot (not all ATA password implementations are super secure). I know there are third party applications that offer this functionality today, but I'd like to see something ship with one of these FDE drives by default so more consumers can actually use the feature. There's no point to having a self-encrypting drive that gives up your data as soon as you plug it into another system. While I'm making requests, I'd also like to see a way for OS X users to take advantage of built-in full disk encryption.

Ndurance 2.0 and a 5-year Warranty

With Everest 2 OCZ supports redundant NAND arrays, similar to the latest Intel and SandForce controllers. By including redundant NAND on-board, the drive could withstand the failure of more than a single die without any data loss. The Vertex 4 doesn't have OCZ's redundant NAND technology enabled, although the enterprise version of the drive (Intrepid 3) will likely turn it on.

OCZ is also doing some granular manipulation of voltages at the NAND level in order to get the most endurance out of these drives (not all NAND is created equal; adjusting to the characteristics of individual NAND devices can yield more p/e cycles out of the drive). While this isn't really a concern for 25nm NAND on the Vertex 4 today, it's likely a feature we'll see played up with the move to 20nm and for eMLC versions of the drive targeted at the enterprise.

Confidence in reliability is at an all-time high with the Vertex 4 as it ships with a 5-year warranty, up from 3 years with the Octane, Vertex 2 and Vertex 3.



Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger and thus we have the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo randomly generated data for each write as well as fully random data to show you both the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
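
To see why we test with both data types, consider how differently they compress. A quick sketch (not Iometer's actual data generator) that builds the two payload types and checks their compressibility with zlib:

```python
import os
import zlib

size = 4096  # one 4KB transfer

# Structured, repetitive data: the best case for SandForce's compression
compressible = (b"AnandTech" * (size // 9 + 1))[:size]

# Fully random data: effectively incompressible, SandForce's worst case
incompressible = os.urandom(size)

for name, payload in (("compressible", compressible),
                      ("incompressible", incompressible)):
    ratio = len(zlib.compress(payload)) / len(payload)
    print(f"{name}: compresses to {ratio:.0%} of original size")
# compressible:   ~1% of original size
# incompressible: ~100% of original size
```

A SandForce drive physically writes something closer to the compressed size, which is why its performance swings with data composition while the Vertex 4's does not.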

Desktop Iometer - 4KB Random Read (4K Aligned)

Random read performance is staggering - a good 40% higher than anything else we've tested. While the cutoff for usefulness on a client drive is likely much lower than what even the Octane could deliver, this sort of performance bodes very well for OCZ's enterprise ambitions.

Desktop Iometer - 4KB Random Write (4K Aligned) - 8GB LBA Space

Random write performance is also just excellent. SandForce's peak numbers come close, but throw in any sort of incompressible data and they quickly take a step back while the Vertex 4 continues to deliver. Again, I'm actually more interested in these numbers from an enterprise workload standpoint, but heavy client users will definitely not be disappointed.

Many of you have asked for random write performance at higher queue depths. What I have below is our 4KB random write test performed at a queue depth of 32 instead of 3. While the vast majority of desktop usage models experience queue depths of 0 - 5, higher depths are possible in heavy I/O (and multi-user) workloads:

Desktop Iometer - 4KB Random Write (8GB LBA Space QD=32)

SandForce always scaled well at higher queue depths, but again we're looking at best case performance for the SF-2281. Move towards incompressible data and the Vertex 4 is alone at the top.



Sequential Read/Write Speed

To measure sequential performance I run a one-minute 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.

As impressive as the random read/write speeds were, at low queue depths the Vertex 4's sequential read speed is problematic:

Desktop Iometer - 128KB Sequential Read (4K Aligned)

Curious as to what's going on, I ran AS-SSD and came away with much better results:

Incompressible Sequential Read Performance - AS-SSD

Finally I turned to ATTO, which gave me the answer I was looking for. The Vertex 4's sequential read speed is slow at low queue depths with certain workloads; move to larger transfer sizes or higher queue depths and the problem resolves itself:

ATTO Sequential Read Performance: QD2, QD4, QD8

The problem is that many sequential read operations for client workloads occur at 64 – 128KB transfer sizes, and at a queue depth of 1 - 3. Looking at the ATTO data above you'll see that this is exactly the weak point of the Vertex 4.

I went back to Iometer and varied queue depth with our 128KB sequential read test and got a good characterization of the Vertex 4's large block, sequential read performance:
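
For anyone who wants to reproduce this kind of sweep without Iometer, outstanding IOs can be approximated from user space with one reader thread per queue slot. A rough sketch follows (the device path is a placeholder; a real tool like Iometer or fio uses O_DIRECT with aligned buffers to bypass the OS cache, which is omitted here for simplicity):

```python
import os
import threading
import time

DEVICE = "/dev/sdX"  # placeholder: raw device or a large test file
BLOCK = 128 * 1024   # 128KB transfers, matching the Iometer test
SPAN = 8 * 2**30     # sweep reads across an 8GB region
DURATION = 60        # seconds per data point

def sweep(queue_depth):
    counts = [0] * queue_depth
    stop = time.time() + DURATION

    def reader(slot):
        fd = os.open(DEVICE, os.O_RDONLY)
        offset = slot * BLOCK  # stripe the readers across the region
        try:
            while time.time() < stop:
                os.pread(fd, BLOCK, offset % SPAN)
                counts[slot] += BLOCK
                offset += BLOCK * queue_depth
        finally:
            os.close(fd)

    threads = [threading.Thread(target=reader, args=(i,))
               for i in range(queue_depth)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(counts) / DURATION / 1e6  # average MB/s

for qd in (1, 2, 4, 8, 16, 32):
    print(f"QD{qd}: {sweep(qd):.0f} MB/s")
```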

The Vertex 4 performs better with heavier workloads. While other drives extract enough parallelism to deliver fairly high performance with only a single IO in the queue, the Vertex 4 needs 2 or more for large block sequential reads. Heavier read workloads do wonderfully on the drive; ironically enough, it's the lighter workloads that are a problem. It's the exact opposite of what we're used to seeing. As this seemed like a bit of an oversight, I presented OCZ with my data and got some clarification.

Everest 2 was optimized primarily for heavier workloads where higher queuing is to be expected. Extending those performance gains to lower queue depths is indeed possible (the Everest 1 based Octane obviously does fine here), but it wasn't deemed a priority for the initial firmware release. OCZ instead felt it was far more important to have a high-end alternative to SandForce in its lineup. Given that we're still seeing some isolated issues on non-Intel SF-2281 drives, the urgency is understandable.

There are two causes for the lower-than-expected low queue depth sequential read performance. First, OCZ doesn't currently enable NCQ streaming for queue depths less than 3; this one is a simple fix. Second, Everest 2 doesn't currently allow pipelined read access from more than 8 concurrent NAND die. For larger transfers and queue depths this isn't an issue, but smaller transfers at lower queue depths end up delivering much lower than expected performance.

To confirm that I wasn't crazy and that the Vertex 4 was capable of high real-world sequential read speeds, I created a simple test: I took a 3GB archive and copied it from the Vertex 4 to a RAM drive (to eliminate any write speed bottlenecks). The Vertex 4's performance was very good:

Sequential Read - 3GB Archive Copy to RAM Disk
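
Reproducing this test is trivial; the paths below are placeholders for an archive on the drive under test and a mounted RAM disk:

```python
import os
import shutil
import time

src = "/mnt/vertex4/archive.zip"  # ~3GB file on the SSD being tested
dst = "/mnt/ramdisk/archive.zip"  # RAM disk removes any write bottleneck

# Reboot or drop the OS page cache first so the source file is
# actually read from the SSD rather than from system RAM.
start = time.time()
shutil.copyfile(src, dst)
elapsed = time.time() - start

print(f"{os.path.getsize(src) / 1e6 / elapsed:.0f} MB/s average read")
```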

Clearly the Vertex 4 is capable of reading at very high rates, particularly when it matters; however, the current firmware doesn't seem tuned for any sort of low queue depth operation.

Both of these issues are apparently being worked on at the time of publication and should be rolled into the next firmware release for the drive (due out sometime in late April). Again, OCZ's aim was to deliver a high-end drive that could be offered as an alternative to the Vertex 3 as quickly as possible.

Update: Many have been reporting that the Vertex 4's performance is dependent on having an active partition on the drive due to its NCQ streaming support. While this is true, it's not the reason you'll see gains in synthetic tests like Iometer. If you don't fill the drive with valid data before conducting read tests, the Vertex 4 returns lower performance numbers. Running Iometer on a live partition requires that the drive is first filled with data before the benchmark runs, similar to what we do for our Iometer read tests anyway. The chart below shows the difference in performance between running an Iometer sequential read test on a physical disk (no partition), an NTFS partition on the same drive and finally the physical disk after all LBAs have been written to:

Notice how the NTFS and RAW+precondition lines are identical; the reason for the performance gain here isn't NCQ streaming but rather the presence of valid data to read back. Most SSDs tend to give unrealistically high performance numbers if you read from them immediately following a secure erase, so we always precondition our drives before running Iometer. The Vertex 4 just happens to do the opposite, but this has no bearing on real world performance as you'll always be reading actual files in actual use.

Despite the shortcomings with low queue depth sequential read performance, the Vertex 4 dominated our sequential write tests, even at low queue depths. Only the Samsung SSD 830 is able to compete:

Desktop Iometer - 128KB Sequential Write (4K Aligned)

Technically the SF-2281 drives equal the Vertex 4's performance, but that's only with highly compressible data. Large sequential writes are very often composed of already compressed data, which makes the real world performance advantage of the Vertex 4 tangible.

Incompressible Sequential Write Performance - AS-SSD

AS-SSD gives us another taste of the performance of incompressible data, which again is very good on the Vertex 4. As far as writes are concerned, there's really no beating the Vertex 4.



AnandTech Storage Bench 2011

Two years ago we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. I assembled the traces myself out of frustration with the majority of what we have today in terms of SSD benchmarks.

Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean simply writing 4GB is acceptable.

Originally I kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system.

Not too long ago I tweeted that I had created what I referred to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. It's the load you'd put on a drive after nearly two weeks of constant usage. And it takes a *long* time to run.

1) The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. My thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives.

2) I tried to cover as many bases as possible with the software I incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). I included a large amount of email downloading, document creation and editing as well. To top it all off I even use Visual Studio 2008 to build Chromium during the test.

The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:

AnandTech Storage Bench 2011 - Heavy Workload IO Breakdown

| IO Size | % of Total |
|---|---|
| 4KB | 28% |
| 16KB | 10% |
| 32KB | 10% |
| 64KB | 4% |

Only 42% of all operations are sequential, the rest range from pseudo to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place in an IO queue of 1.

Many of you have asked for a better way to characterize performance; simply looking at IOPS doesn't say much. As a result I'm going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as Average MB/s, with higher numbers being better. At the same time I'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one during the course of this test. Finally, I will also break out performance into reads, writes and combined. The reason I do this is to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.
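
Mechanically, both metrics fall straight out of the playback trace. A simplified sketch, assuming each completed IO is logged with its direction, size and service time (this trace format is hypothetical, for illustration only):

```python
# Each entry: (is_read, bytes_transferred, service_time_seconds)
trace = [
    (True,  4096,  0.00009),
    (False, 65536, 0.00031),
    # ... millions more entries from the trace playback
]

def summarize(entries):
    total_bytes = sum(size for _, size, _ in entries)
    busy_time = sum(t for _, _, t in entries)  # idle time excluded
    return total_bytes / busy_time / 1e6, busy_time

reads = [e for e in trace if e[0]]
writes = [e for e in trace if not e[0]]

for label, subset in (("combined", trace), ("reads", reads), ("writes", writes)):
    rate, busy = summarize(subset)
    print(f"{label}: {rate:.1f} MB/s average, {busy:.2f}s disk busy")
```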

There's also a new light workload for 2011. This is a far more reasonable, typical every day use case benchmark. Lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback as well as some application installs and gaming. This test isn't nearly as write intensive as the MOASB but it's still multiple times more write intensive than what we were running in 2010.

As always I don't believe that these two benchmarks alone are enough to characterize the performance of a drive, but hopefully along with the rest of our tests they will help provide a better idea.

The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests.

AnandTech Storage Bench 2011 - Heavy Workload

We'll start out by looking at average data rate throughout our heavy workload test:

Heavy Workload 2011 - Average Data Rate

The overall score here is in line with the SF-2281 solutions and OCZ's Octane. Our heavy workload is fairly write intensive, so I honestly expected the Vertex 4 to top the charts. Looking at the read/write breakdown explains why it doesn't:

Heavy Workload 2011 - Average Read Speed

Here we see the Vertex 4's poor low queue depth sequential read performance rearing its head once more. Look at the sequential write speed, however, and you'll see where OCZ makes up for its shortcomings. The Vertex 4 writes quicker than any drive we've tested.

Heavy Workload 2011 - Average Write Speed

It's a tough tradeoff to come to terms with. The Vertex 4 delivers performance when it matters the most, but typically you want the absolute best performance regardless of workload. All of this may become moot if OCZ is able to address the situation with a firmware update in the coming weeks like it promised, but for buyers today it is something you have to think about.

The next three charts represent the same data, but in a different manner. Instead of looking at average data rate, we're looking at how long the disk was busy during this entire test. Note that disk busy time excludes any and all idles; this is just how long the SSD was busy doing something:

Heavy Workload 2011 - Disk Busy Time

Heavy Workload 2011 - Disk Busy Time (Reads)

Heavy Workload 2011 - Disk Busy Time (Writes)



AnandTech Storage Bench 2011 - Light Workload

Our new light workload actually has more write operations than read operations. The split is as follows: 372,630 reads and 459,709 writes. The relatively close read/write ratio better mimics a typical light workload (although even lighter workloads would be far more read centric).

The I/O breakdown is similar to the heavy workload at small IOs, however you'll notice that there are far fewer large IO transfers:

AnandTech Storage Bench 2011 - Light Workload IO Breakdown

| IO Size | % of Total |
|---|---|
| 4KB | 27% |
| 16KB | 8% |
| 32KB | 6% |
| 64KB | 5% |

Light Workload 2011 - Average Data Rate

Our light workload is far more representative of a mainstream client workload (read heavy) and this is where the Vertex 4's sequential read performance hurts it the most. The Samsung SSD 830 ends up being considerably faster here. Once again, if we look at the breakdown of reads and writes we see why:

Light Workload 2011 - Average Read Speed

Light Workload 2011 - Average Write Speed

Read performance is around half that of the best performers, while write speed is around 30% better. The combination results in competitive but not class-leading performance. If OCZ is able to deliver, at a minimum, Octane levels of read performance, the Vertex 4 should find itself much higher in the overall charts.

Light Workload 2011 - Disk Busy Time

Light Workload 2011 - Disk Busy Time (Reads)

Light Workload 2011 - Disk Busy Time (Writes)



PCMark 7

The storage suite in PCMark 7 validates a lot of what we've seen thus far. Despite great write performance, the Vertex 4 can't outperform the Vertex 3 because of its read speed limitations. From OCZ's perspective however, the gap is narrow enough in overall tests to make the shift away from SandForce likely worthwhile. The Vertex 4 doesn't care about compressible vs. incompressible data and it keeps more of the drive's BOM cost in house compared to the Vertex 3.

PCMark 7 Secondary Storage Score

It's worth noting that for sufficiently light workloads, the difference in performance between any modern SSD is going to be limited right off the bat. We're talking about a 6% spread between the slowest and fastest drive here. For many users, simply finding the right balance of price and reliability is sufficient - which happens to be one of the reasons we've been such big fans of Samsung's SSD 830.



TRIM Performance

I started out this review with a mention of estimated write amplification and how OCZ was able to significantly reduce it with the Vertex 4/Everest 2 compared to Octane/Everest 1. By reducing write amplification, OCZ should have also significantly improved worst case write performance when TRIM isn't available or before it's able to act.

To find out, I wrote sequential data across all user addressable LBAs and then wrote random data (4KB, QD=32) for 20 minutes across all LBAs. Finally I used HDTach to give me a simple visualization of write performance across all available LBAs (aka the Malventano Method):
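
The sequence is easy to script. Here's a sketch of the first two steps, with the device path as a placeholder (this destroys the drive's contents, so never point it at a disk you care about):

```python
import os
import random
import time

DEVICE = "/dev/sdX"  # placeholder: the drive under test (will be overwritten!)
BLOCK = 128 * 1024

def fill_sequential(fd, capacity):
    # Step 1: write sequential data across every user-addressable LBA
    buf = os.urandom(BLOCK)
    for offset in range(0, capacity - BLOCK, BLOCK):
        os.pwrite(fd, buf, offset)

def torture_random(fd, capacity, minutes=20):
    # Step 2: 4KB random writes across all LBAs to fragment the drive
    # (the real test keeps 32 IOs outstanding; one thread shown here)
    buf = os.urandom(4096)
    deadline = time.time() + minutes * 60
    while time.time() < deadline:
        os.pwrite(fd, buf, random.randrange(capacity // 4096) * 4096)

fd = os.open(DEVICE, os.O_WRONLY)
capacity = os.lseek(fd, 0, os.SEEK_END)  # device size in bytes
fill_sequential(fd, capacity)
torture_random(fd, capacity)
os.close(fd)
# Step 3: run HDTach (or any sequential sweep) and check how
# consistent write performance is across the whole LBA range.
```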

This is a huge improvement over what we saw with the Octane. Behavior here isn't quite on par with Intel's controllers, but it's a major step above the previous generation.

The Vertex 4 does support idle time garbage collection, but at very low priority. The drive must be idle for at least an hour for the background GC to kick in. I'm glad to see that OCZ has taken a more conservative route here as I've never been a huge fan of idle time garbage collection to begin with.

If you don't leave the drive alone long enough to trigger the idle GC, then as soon as 85% of the blocks on the drive are used up the Vertex 4 will automatically trigger its garbage collection algorithms. This is more of what I'd like to see; however, I'd prefer it even more if OCZ lowered the threshold at which it starts recycling blocks, in order to maintain good performance under heavily fragmented conditions.

TRIM is alive and well on the drive – a single TRIM pass is able to restore performance to like-new levels:
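
Windows 7 sends TRIM automatically as files are deleted; on Linux you can trigger a pass by hand with fstrim, for example from a maintenance script (the mount point below is a placeholder):

```python
import subprocess

# Ask the filesystem to issue TRIM for all currently unused blocks
result = subprocess.run(["fstrim", "-v", "/mnt/vertex4"],
                        capture_output=True, text=True, check=True)
print(result.stdout)  # e.g. "/mnt/vertex4: 120 GiB (128849018880 bytes) trimmed"
```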



Power Consumption

The Vertex 4, similar to the Octane before it, consumes entirely too much power at idle. OCZ tells us this is a known issue, one that's also fixed in the next version of the firmware, which should bring idle power down to roughly 0.75W. At ~1.3W today, the Vertex 4 draws more power than many 5400RPM 2.5-inch hard drives at idle – something to keep in mind if you're planning on putting this thing into a notebook.

Drive Power Consumption - Idle

Under load however, the Vertex 4 does quite well. It's more power efficient than Samsung's SSD 830, while offering similar if not better write performance. If your aim is better battery life and not performance however, you may want to stick with one of the 3Gbps Intel drives instead.

Drive Power Consumption - Sequential Write

Drive Power Consumption - Random Write



Final Words

After years of begging, OCZ has finally delivered much of what we've wanted in an SSD: low write amplification and very good random/sequential write performance. The drive could use a more aggressive real-time garbage collection algorithm, but if you're running an OS with TRIM that's mostly picking nits. The Vertex 4 takes write performance seriously and delivers handsomely, besting even the latest from Intel and Samsung. The advantage over SandForce is clear, particularly because Everest 2 and the Vertex 4 deliver consistent performance regardless of data composition. SandForce's realtime compression/dedupe technology is definitely exciting, but if you can deliver similar or better performance without the inconsistency of data-dependency, the choice is obvious. SandForce set the bar very high with the SF-2281, but Everest 2 has the potential to exceed it – assuming OCZ/Indilinx can deliver on its promises.

Sequential read performance is unfortunately just as important for client workloads, and the Vertex 4 doesn't do nearly as well there. If OCZ is able to improve its low queue depth sequential read performance through a firmware update in the coming weeks, it will truly have built a drive (and controller) that are among the fastest on the market. I see no reason that OCZ shouldn't be able to achieve this given the sequential read performance we saw from Octane, but as is always the case with these types of launches we have to review the product we have, not the product we'll get.

Should nothing change, the tradeoff is an interesting one. OCZ effectively gives us three of the four corners of raw performance, plus competitive sequential read speeds under heavy load. Does giving up the bottom end of sequential read performance matter? For users with very write intensive or generally IO heavy workloads, the tradeoff is likely worth it; it almost feels like OCZ should have launched the enterprise version of the Vertex 4 first, given its strengths. For more typical mainstream client workloads, the Vertex 4 isn't as good a fit. The drive is still usable, but it's far from industry-leading when it comes to low queue depth read speeds. Admittedly it is industry-leading in write performance even in mainstream workloads, but it's the combination of the two that's missing. As we demonstrated with our copy test, the Vertex 4 can deliver good real world read performance, but as our Light Storage Bench suite shows, that read performance isn't consistently high. It's a frustrating dynamic, one that I truly hope is mitigated with the next firmware release as OCZ has promised.

Idle power consumption is also a concern of mine. As it stands, consuming over 1W when doing nothing isn't ok for a notebook drive. OCZ tells us a fix is on the way for this as well, but someone buying today needs to keep this in mind if it's going into a portable. The impact to overall battery life shouldn't be tremendous, but if you're on a quest to squeeze every last minute out of a single charge you may want to consider some of the alternatives.

Validation is and has always been a hot topic with every new SSD. OCZ is taking things more seriously and more importantly, has better access to fix bugs as they come up. With direct access to the firmware source code (a benefit of owning Indilinx), OCZ shouldn't have the same limits it has had in the past when working with third party controller vendors. The presumption is that now, if bugs come up in the field or during testing, they can be addressed as quickly as the Indilinx firmware engineers can type out a workaround. It remains to be seen how this works in practice, but the concept is at least sound.

In the end, as with most brand new controllers and SSD reviews, our conclusion is to wait. The Vertex 4 is a unique drive that delivers a lot of what we've been asking for from a performance standpoint, but with some tangible caveats that we're told will be resolved in the coming weeks. If you're buying an SSD today, our standing recommendation (particularly for Mac users) is Samsung's SSD 830. If you have a workload that demands better write and/or random read performance, let's see how this and other soon-to-be-announced drives behave over time before jumping the gun.
