Intel Ships 313 Series Cache SSDs for Ultrabooks

Following Intel’s 311 series of cache SSDs comes the 313, now with improved sequential speeds, though it still remains on SATA II. Cache SSDs are meant to work alongside traditional hard disk drives, and that is exactly what the 313 series does. This time, though, it is meant to sit inside ultrabooks.

The 313 comes in two capacities, 20 GB and 24 GB, with maximum sequential read and write speeds of 220 MB/s and 160 MB/s respectively. While the 311 used a 34-nm manufacturing process, these new drives are made on a 25-nm process. Being SLC drives, the 313 series also offers endurance said to be tenfold that of MLC SSDs.

Intel aims to reduce boot and application startup times with the help of these cache SSDs on upcoming ultrabooks. While SSDs are faster than traditional hard drives, their capacities are much smaller and their cost per gigabyte is higher. The 313 series brings SSD performance to laptops in a less expensive way, as hard disks remain the primary storage.

The compatibility of the 313 with ultrabooks is made possible by two proprietary Intel technologies. With Smart Connect, a 313-equipped ultrabook can stay connected to the internet while consuming less power; sync time between the computer and the cloud is also effectively reduced. The other technology, called Smart Response, moves frequently used programs onto the 313 for faster access and loading times.
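Intel has not published how Smart Response decides what to cache, but the general idea of block-level cache promotion can be sketched as follows. Everything here is an assumption for illustration: the class name, the promotion threshold, and the LRU eviction policy are hypothetical, not Intel's actual algorithm.

```python
# Illustrative sketch only: model of frequency-based promotion into a small
# SSD cache, with least-recently-used eviction when the cache fills.
from collections import OrderedDict

class HotBlockCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()      # block id -> data, kept in LRU order
        self.read_counts = {}           # block id -> number of reads seen
        self.promote_threshold = 3      # hypothetical promotion threshold

    def read(self, block_id, read_from_hdd):
        self.read_counts[block_id] = self.read_counts.get(block_id, 0) + 1
        if block_id in self.cache:      # cache hit: serve from the SSD
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = read_from_hdd(block_id)  # cache miss: fall back to the HDD
        if self.read_counts[block_id] >= self.promote_threshold:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # evict least recently used
            self.cache[block_id] = data
        return data
```

The threshold keeps one-off reads (a large media file, say) from polluting the small 20–24 GB cache, while blocks belonging to frequently launched programs quickly cross it and get served from flash thereafter.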

Amazon currently lists the 20 GB 313 SSD at $113 and the 24 GB at $138. They are expected to arrive alongside the newest Intel ultrabooks released this year.

  1. As storage for temporary, swap and other frequently updated files, such an SSD will do. Its price per gigabyte written is still much lower than that of MLC drives, and its limited capacity is not that important for temporary and frequently updated files unless you are using virtualization software.

  2. I’m curious as to what data exactly these drives store. Wouldn’t it make more sense to put the entire OS on the drive and keep it there?

  3. An OS with all libraries, including some preinstalled development files, may take about 8-9 GB. It may work for a system with a small amount of temporary and swap files, but for some development builds it is a waste.

  4. Hmm, yes, I suppose that makes sense. I would still very much like to know how that caching algorithm works. Everything that can’t be controlled or checked manually makes me nervous.

  5. Probably it is very similar to conventional caching methods and Windows file-mapping methods. The driver probably maps some frequently used files to the SSD and performs I/O operations on the SSD rather than on the HDD, with periodic delayed updates to the HDD. But it also means that with some low-level maintenance and drive-caching software, the files may be corrupted (due to conflicts). Also, I’m not sure whether the caching driver would work correctly with RAID configurations.

  6. In particular I wonder whether the files will stay unharmed if one decides to use RAM caching + defragmentation at the same time.
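The delayed write-back behavior speculated in comment 5 can be sketched in a few lines. This is a hedged illustration, not Intel's driver: the class, the flush interval, and the callback are all hypothetical names chosen for the example.

```python
# Hedged sketch of write-back caching: writes land on the fast SSD first and
# are flushed to the slower HDD periodically. Driver internals are not public;
# this only models the scheme described in the comments above.
import time

class WriteBackCache:
    def __init__(self, write_to_hdd, flush_interval_s=5.0):
        self.write_to_hdd = write_to_hdd    # callback that persists to the HDD
        self.flush_interval_s = flush_interval_s
        self.ssd = {}                       # block id -> data held on the SSD
        self.dirty = set()                  # blocks not yet written back
        self.last_flush = time.monotonic()

    def write(self, block_id, data):
        self.ssd[block_id] = data           # fast path: write to SSD only
        self.dirty.add(block_id)
        self.maybe_flush()

    def maybe_flush(self, force=False):
        # Periodic delayed update to the HDD, as comment 5 describes.
        now = time.monotonic()
        if force or now - self.last_flush >= self.flush_interval_s:
            for block_id in sorted(self.dirty):
                self.write_to_hdd(block_id, self.ssd[block_id])
            self.dirty.clear()
            self.last_flush = now
```

The corruption worry in comments 5 and 6 falls out of this model directly: while a block sits in `dirty`, the SSD and HDD copies disagree, so any tool that touches the HDD behind the driver's back (a defragmenter, RAID layer, or RAM-caching utility) can act on stale data.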
