IBM celebrates memory breakthrough

by Hugo Jobling on 4 July 2011, 19:14

Just a phase

IBM has announced what it calls a memory breakthrough: the first demonstration of stable multi-bit phase-change memory (PCM). At roughly 100 times the speed of flash, PCM certainly has its appeal, although commercial implementations are still some way off.

Phase-change memory has been under development for decades, but IBM's experiment is the first time stable storage of more than a single bit per cell has been achieved. Like MLC flash, multi-bit PCM stores two bits per cell (the combinations 00, 01, 10 and 11), doubling density over single-bit cells.
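
In practice, 'two bits per cell' amounts to mapping four distinguishable resistance levels of the cell material onto the four bit patterns. The sketch below illustrates the idea only; the resistance thresholds are invented for illustration and are not IBM's actual figures.

```python
# Illustrative multi-level cell readout: four resistance bands map to the
# four two-bit patterns. Threshold values are made up for illustration.

# (upper_resistance_bound_in_ohms, bit_pair) - lowest resistance first
LEVELS = [
    (10_000, (0, 0)),        # fully crystalline: lowest resistance
    (50_000, (0, 1)),
    (200_000, (1, 0)),
    (float("inf"), (1, 1)),  # fully amorphous: highest resistance
]

def read_cell(resistance_ohms):
    """Return the two-bit value stored in a cell, given its measured resistance."""
    for upper_bound, bits in LEVELS:
        if resistance_ohms <= upper_bound:
            return bits

# Four bands per cell instead of two means n cells hold 2*n bits instead of n.
print(read_cell(120_000))  # -> (1, 0)
```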

Where phase-change memory improves upon flash is in speed and durability. IBM's testing saw a 'worst-case' latency of around 10 microseconds, some 100 times faster than flash memory. And where flash tops out at around 3,000 write cycles per cell at consumer grade and 30,000 at enterprise grade, IBM says PCM is good for over 10 million write cycles, making it a much better fit for long-term enterprise use.
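
To put those endurance figures in context, the total data writable before wear-out is roughly capacity multiplied by rated cycles (ignoring write amplification and wear-levelling overheads). A rough comparison for a hypothetical 64GB device - the capacity is our assumption, not IBM's:

```python
# Rough endurance comparison using the cycle counts quoted above.
# Total bytes writable ~= capacity * rated write cycles per cell.

GIB = 1024**3
capacity = 64 * GIB  # hypothetical 64GB device (assumption, not from IBM)

for name, cycles in [("consumer flash", 3_000),
                     ("enterprise flash", 30_000),
                     ("PCM (per IBM)", 10_000_000)]:
    total_tb = capacity * cycles / 1e12
    print(f"{name:>16}: {cycles:>10,} cycles -> ~{total_tb:,.0f} TB writable")
```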

According to Dr. Haris Pozidis, Manager of Memory and Probe Technologies at IBM Research: "By demonstrating a multi-bit phase-change memory technology which achieves for the first time reliability levels akin to those required for enterprise applications, we made a big step towards enabling practical memory devices based on multi-bit PCM."

Despite having made this 'breakthrough', IBM isn't going to change the world any time soon. The company doesn't plan to build products itself, instead looking to license its patents to interested third parties, and it doesn't expect PCM-based products before 2016 - by which time it's possible that innovations in flash will have negated PCM's advantages altogether.



HEXUS Forums :: 6 Comments

Regardless of current usefulness, I'm always a fan of innovation, not least because it provides more options (and competition). Plus, nice to see IBM are still doing significant research, I'd not heard much about them recently - and they have been hugely instrumental in the development of modern computing.
it doesn't expect PCM-based products before 2016 - by which time it's possible that innovations in flash will have negated PCM's advantages altogether.
Not knowing anything about this tech, isn't it also possible that PCM will continue to be developed, in which case there still might be some benefit to using it? I'm kind of disappointed to see the word “enterprise” mentioned so frequently, as I would have thought increased longevity and reduced latency would be of interest to most, if not all, users of memory-resident storage.
miniyazz
Regardless of current usefulness, I'm always a fan of innovation, not least because it provides more options (and competition). Plus, nice to see IBM are still doing significant research, I'd not heard much about them recently - and they have been hugely instrumental in the development of modern computing.
I'm sure I saw a report recently listing the top 5 patent filers in the US*. IBM was at the top, and it was pointed out that if you added #2-#5 together it was still less than their filings. It's a company I've got a good deal of respect for on a technical level, even though I work for a competitor.

(* disclaimer: yes, I know that means it was the US patent system - hopefully as few of their granted patents as possible were at the “fastening shoelaces” level of stupidity)
crossy
Not knowing anything about this tech, isn't it also possible that PCM will continue to be developed, in which case there still might be some benefit to using it? I'm kind of disappointed to see the word “enterprise” mentioned so frequently, as I would have thought increased longevity and reduced latency would be of interest to most, if not all, users of memory-resident storage.


Enterprise = profit. They're not going to spend millions on the research unless they can be sure that big companies needing reliable and redundant server farms/clusters are going to buy them by the truckload. So yes, enthusiasts like us will be slavering over it and no doubt it'll find its way into the market, but really we're not the target market.

For consumers, current SSDs are more than reliable enough. To actually hit the write limit, you'd have to be continuously writing and re-writing data for a looong time - a 64GB drive at 80MByte/s would take around 50 years to burn out, and more recent drives are rated at over 80. Given that most people switch out hard drives within 5 years when they change computers, or perhaps every 10 for those who migrate them, this isn't an issue and likely never will be. Even if you're using the disk as your OS drive, it's not a problem.
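
The arithmetic behind that kind of estimate is just capacity × rated cycles ÷ write rate. Worth noting: strictly continuous 80MByte/s writing would exhaust a 3,000-cycle 64GB drive in about a month, so figures in the 50-year range imply a much lighter everyday workload - something like 10GB of writes a day. A rough sketch with exactly those assumptions:

```python
# Back-of-the-envelope SSD lifetime: capacity * rated cycles / daily writes.
# Assumptions (not hard figures): 3,000 P/E cycles, ~10GB written per day,
# perfect wear levelling, no write amplification.

capacity_bytes = 64 * 1024**3   # 64GB drive
rated_cycles = 3_000            # consumer-grade flash, per the article
daily_writes = 10e9             # ~10GB/day, a light consumer workload

total_writable = capacity_bytes * rated_cycles
lifetime_years = total_writable / daily_writes / 365

print(f"~{total_writable / 1e12:.0f} TB writable, ~{lifetime_years:.0f} years")
# -> ~206 TB writable, ~56 years
```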

And that's just fine for home use - but obviously for active archival work you may well want/need more time (and peace of mind).
Whiternoise
For consumers, current SSDs are more than reliable enough. To actually hit the write limit, you'd have to be continuously writing and re-writing data for a looong time - a 64GB drive at 80MByte/s would take around 50 years to burn out, and more recent drives are rated at over 80. Given that most people switch out hard drives within 5 years when they change computers, or perhaps every 10 for those who migrate them, this isn't an issue and likely never will be. Even if you're using the disk as your OS drive, it's not a problem.

And that's just fine for home use - but obviously for active archival work you may well want/need more time (and peace of mind).

Unfortunately, these figures are (as ever) misleading. For example, it is possible to kill an SSD quickly (within weeks or months) by filling it to 98% of its capacity and then repeatedly writing and deleting in the last few GB. And it's not actually that unusual a scenario - especially with smaller drives, where after the OS and a few games you're running very full capacity-wise, leaving the last little bit for juggling files as you need to, or for the OS to use for temporary files, or some such. Overprovisioning can only help so much - in this case by extending, say, 2GB of free space to 10GB and thereby buying five times as much time as with no overprovisioning.
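
That failure mode is easy to put numbers on: once the drive is nearly full, all the churn lands on the few free GB, so wear-out time scales directly with free space. A rough sketch - the churn rate and cycle rating are assumptions, not measured figures:

```python
# When a drive is nearly full, writes churn only the free region, so
# wear-out time scales with free space. All figures below are assumptions.

rated_cycles = 3_000      # consumer-grade P/E cycles
daily_churn = 100e9       # ~100GB/day of write/delete churn (heavy, assumed)
GB = 1e9

for free_gb in (2, 10):   # 2GB free vs. 10GB after overprovisioning
    writable = free_gb * GB * rated_cycles   # total bytes before wear-out
    days = writable / daily_churn
    print(f"{free_gb}GB free -> ~{days:.0f} days of churn before wear-out")
# 10GB free lasts five times as long as 2GB, as noted above.
```
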
Don't drives try to re-arrange static data on the drive so that doesn't happen?