Other than just making everything generally faster, what’s a use case that would really benefit the most from something like this? My first thought is high-speed cameras: some Phantom cameras can capture hundreds, even thousands of gigabytes of data per second, so this tech could probably find some great applications there.
The speed of many machine learning models is bound by the bandwidth of the memory they’re loaded in, so that’s probably the biggest one.
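As a rough back-of-the-envelope: generation speed for a bandwidth-bound model is memory bandwidth divided by the bytes streamed per token. The numbers below are illustrative assumptions, not benchmarks.

```python
# Back-of-the-envelope for memory-bandwidth-bound inference.
# All figures are illustrative assumptions.
params = 70e9                 # 70B-parameter model
bytes_per_param = 2           # fp16 weights
bandwidth = 1000e9            # ~1 TB/s of memory bandwidth

# Each generated token streams every weight from memory once,
# so bandwidth, not compute, sets the ceiling.
tokens_per_s = bandwidth / (params * bytes_per_param)
print(f"~{tokens_per_s:.1f} tokens/s")
```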
Unfortunately this 1 bit / 400 picoseconds metric is 10x slower than GDDR7. The applications for this will be limited to things that need non-volatile memory.
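For concreteness, here’s the quoted per-cell rate next to GDDR7’s per-pin data rate (the ~32 Gbit/s GDDR7 figure is my assumption for first-generation parts):

```python
# 1 bit per 400 ps, compared against GDDR7's per-pin rate.
# The 32 Gbit/s GDDR7 figure is an assumption for illustration.
device_rate = 1 / 400e-12            # bits per second
gddr7_per_pin = 32e9                 # ~32 Gbit/s per pin
print(device_rate / 1e9)             # 2.5 Gbit/s per cell
print(gddr7_per_pin / device_rate)   # ~12.8x faster per pin
```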
There are some servers using SSDs as a direct extension of RAM. Flash doesn’t currently have the write endurance or the latency to fully replace RAM; this would solve at least the latency problem.
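A minimal sketch of that idea, using mmap to make an SSD-backed file addressable like ordinary memory (real deployments use swap, CXL, or tiering software, but the mechanism is similar; the filename is made up):

```python
import mmap
import os

# Sketch: map an SSD-backed file into the address space so it can
# be read and written like ordinary memory. Path is illustrative.
path = "ssd_backed.bin"
size = 1 << 20  # 1 MiB backing file

with open(path, "wb") as f:
    f.truncate(size)  # allocate the file on disk

with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), size)
    mem[:5] = b"hello"   # looks like a memory write...
    mem.flush()          # ...but persists to the SSD
    assert bytes(mem[:5]) == b"hello"
    mem.close()

os.remove(path)
```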
Imagine, though, if we could unify RAM and mass storage. That removes a major assumption baked into the memory hierarchy.
This was actually the main market for Intel Optane. It has great write endurance and better latency than flash. I think they ended up discontinuing it because it wasn’t cost-effective. I’m actually using some old Optane drives as the OS boot drives in my server.
I doubt it would work for the buffer memory in a high-speed camera. That buffer needs to be overwritten very frequently until the camera is triggered, and they didn’t say what the erase time or write endurance is. It could work for quickly dumping the RAM after triggering, but you don’t need low latency for that; a large number of normal flash chips written in parallel will work just fine.
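Rough numbers on the “many flash chips in parallel” point (both rates are assumptions for illustration):

```python
# How many NAND dies it takes to absorb a post-trigger RAM dump.
# Both rates are illustrative assumptions.
camera_rate = 25e9        # 25 GB/s sustained capture
per_die_write = 500e6     # ~500 MB/s program rate per NAND die
dies_needed = camera_rate / per_die_write
print(int(dies_needed))   # 50 dies striped in parallel
```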
The article highlights on-device AI processing. That could be game-changing in a lot of ways.