Fusion-io Launches New Monster Drives

Fusion-io has just released the first upgrade of its NAND-based flash disk series, the ioDrive. The sequels, logically named ioDrive2 and ioDrive2 Duo, offer double the storage capacity, bandwidth and circuit density of their predecessors. They are also designed for better endurance and tolerance to hardware failure.

The manufacturer describes the new SSD series as having a bandwidth of 3 gigabits per second (ioDrive2 Duo) and handling 700,000 read operations per second (IOPS). The write figure is even larger: 900,000 IOPS, according to David Flynn, CEO of Fusion-io. The latter is an unusual feature for flash drives, which usually read faster than they write.
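As a rough sanity check, the quoted IOPS and bandwidth figures can be compared with a bit of arithmetic. Peak IOPS numbers are typically measured at a small transfer size; the 512-byte block size below is an assumption, not something stated in the article.

```python
# Back-of-envelope check: are the quoted peak IOPS and bandwidth
# figures mutually consistent?
read_iops = 700_000
block_bytes = 512  # assumed transfer size for the peak-IOPS figure

throughput_bytes = read_iops * block_bytes    # bytes per second
throughput_gbit = throughput_bytes * 8 / 1e9  # gigabits per second

print(f"{throughput_gbit:.2f} Gbit/s")  # 2.87 Gbit/s
```

Under that assumption, 700,000 reads per second works out to roughly 2.87 Gbit/s, in the same ballpark as the quoted 3-gigabit figure; at larger block sizes the drive would be bandwidth-limited well before reaching its peak IOPS.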

In addition to improved performance, these PCIe-attached SSDs also come with new firmware that includes a ‘self-healing’ feature called Adaptive Flashback. According to the manufacturer, this feature gives the drive the ability to handle failures of one or more memory chips without any data loss. Other news includes extended support for all major operating systems, including Windows, Linux, OS X, Solaris x86, ESXi 5.0 and HP-UX.

The storage capacities are 365GB and 785GB for the ioDrive2, and 1.2TB and 2.4TB for the ioDrive2 Duo. These drives are certainly not intended for the consumer market, as pricing starts at $5,950, although that is still cheaper than the manufacturer’s first SSD generation. The intended users are data centers with very high requirements, and the drives are expected to be available from November 2011.


Site founder and storage enthusiast.

  1. Such numbers aren’t unusual when the data are stored in the cache. The cache runs at DRAM speed, so I/O operations on it complete with minimum latency and maximum speed, but only until the cache fills up. The actual random I/O speed for uncached data will be way lower.

  2. So, those numbers are actually DRAM cache figures? To be perfectly honest I have no idea how they manage to reach those numbers…

    Nice to see you again BTW 🙂

  3. For DRAM it is possible to achieve about 2,000,000 IOPS (the theoretical limit for a single-bank-access RAID configuration). No wonder that a slower DRAM cache can hit 700,000 random IOPS. For NAND, such numbers are unreal. No matter what you call it, no matter what shiny cover you put over it, no matter what stupid inscription you put on it, no matter which superior controller you put in, the speed is limited by module latency, page size and other factors, and it won’t increase much.
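The latency argument above can be sketched as a back-of-envelope model. The latency values below are illustrative assumptions (typical order-of-magnitude figures, not from the article or the commenter):

```python
# Rough latency-limited IOPS model for the commenter's numbers.
# Assumed latencies (illustrative, not from the article):
DRAM_ACCESS_US = 0.5   # ~0.5 µs per random DRAM access
NAND_READ_US = 50.0    # ~50 µs per NAND page read

def max_iops(latency_us, parallelism=1):
    """Upper bound on random IOPS for `parallelism` devices served concurrently."""
    return parallelism * 1_000_000 / latency_us

# A single serialized DRAM stream tops out around 2,000,000 IOPS,
# matching the "theoretical limit" the commenter mentions.
dram_limit = max_iops(DRAM_ACCESS_US)   # 2,000,000

# One NAND die is latency-bound to ~20,000 IOPS, so reaching 700,000
# would require ~35 dies (or planes) working in parallel.
nand_single = max_iops(NAND_READ_US)    # 20,000
dies_needed = 700_000 / nand_single     # 35.0
```

This is why the debate matters: a DRAM cache can plausibly deliver the headline numbers on its own, while raw NAND can only get there through massive internal parallelism.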

  4. A little off-topic – did you watch AMD’s share price after they announced their “new” Bulldozer architecture?
    I wonder what is going on inside AMD – they have made countless performance mistakes in many blocks while claiming that it is a “new” architecture. I personally hardly see anything “new” there. They have failed to match the performance of a single Core 2 Duo on a per-module basis, despite the fact that the old processor can run at the same frequency as Bulldozer.

  5. No, I haven’t watched the ticker price (until now), but it’s sad to see yet another dud from AMD. Meanwhile, Intel is taking it real slow and pushing Ivy Bridge forward. Makes you wonder… There’s no end to the misery either, judging from AMD’s “roadmap” to “Excavator” in 2014. Still no big changes in sight. This is just bad for everyone except Intel. The only place where AMD at least has some potential is the mainstream/budget notebook segment. If they manage to create proper drivers for their dual-graphics setup, it might entice some gamers to the platform. Still, the CPU itself is no match even for a Core i5.

  6. Most likely they have some internal problems going on there – PR crap, quarrels among their own employees, and some of them leaving disappointed as a result. I see no other valid explanation for making so many mistakes in multiple blocks of their new processor. In theory it may scale better in some heavily threaded applications, but not in this version. Perhaps they can fix something by the second version; otherwise their income will drop even further.
    What is worse, they are comparing the current version with the 4th generation of Intel Core processors clocked at a lower frequency with a lower TDP. Even for the most clueless users it is clear that a 2600K will run just as stably at the same frequency as Bulldozer and will be way faster, and faster still when limited only by TDP (the same TDP as Bulldozer).
    Combined with the information available from AMD, I think they realized what they have done and decided to give up on any improvements; the current revisions must be aimed only at higher frequencies. I would expect them to be working on a new version of Bulldozer with most of the blocks redesigned from scratch – and they have probably been doing that for the last 6 months, after realizing it was too late to fix what they had in time. What we see now is a workable but useless prototype with multiple conceptual mistakes in many blocks, and those mistakes most likely have their origin in conflicts within the development team and stupid, insulting PR.

  7. From watching the dynamics of AMD shares over the last few days, I actually learned a few new things about how to manipulate (i.e. try to keep at some level) the price of the shares 🙂

  8. Another disappointment – this time from Intel. Their “new” high-end processors appear to be higher-clocked defective versions (with 2 cores and some other blocks disabled) of 8-core Sandy Bridge Xeons. Not to mention that the L3 latency was increased. Which means that two 4-core Xeons would be faster for optimized tasks. Which means that the 2xxx Intel processors are still the best choice for most tasks.

  9. That’s clearly a disappointment, but hardly surprising from Intel at this point. With AMD so far behind, it looks like they are trying to squeeze as much as they can out of the market by rebranding old crap instead of launching the new products that are already ready for prime time.

  10. Some new benchmarks confirm that in some single-threaded applications (and applications with just a few threads) the defective Xeons are actually slower at the same frequency (due to the increased latencies).