Why the end of Optane is bad news for the entire IT world

Analysis Intel is ending its Optane line of persistent memory, and that is a bigger disaster for the industry than it appears on the surface.

The influence of ideas from the late 1960s and early 1970s is now so pervasive that almost no one can imagine anything else, and the best ideas from the next generation are often forgotten.

Optane introduced radical and transformative technology, but because of this ancient worldview, this technical debt, few in the industry realized just how radical Optane was. And so it flopped.

To get to the heart of this, let’s step back for a moment and ask: what is a computer file actually for?

The first computers did not have file systems. The giant machines of the 1940s and 1950s, built from tens of thousands of thermionic valves, had only a few words of memory. At first, programs were entered by physically wiring them into the machine: only the data lived in memory. The program ran, and some results were printed.

As capabilities grew, we arrived at the von Neumann architecture: the program is stored alongside the data in the same memory. On some early machines, that “memory” was magnetic storage: a rotating drum.

To get a program into memory, it was read in from paper: punched cards or paper tape. When computer memory grew large enough to hold several programs at once, operating systems appeared: programs that run other programs.

There were still no file systems, though. There was memory and there was I/O: printers, terminals, card readers and so on, but all the storage the computer could access directly was memory. In the 1960s, memory usually meant magnetic core store, which had a huge advantage that is sometimes forgotten now: when you turned the computer off, whatever was in core stayed there. Switch the machine back on, and its last program was still in memory.

It was around this time that the first hard disk drives appeared: expensive, relatively slow, but huge compared to working memory. Early operating systems acquired another job: managing this vast secondary storage, indexing its contents, finding the required sections, and loading them into working memory.

Two levels of storage

As soon as operating systems started managing drives, a distinction appeared: primary and secondary storage. Both are directly accessible to the computer, rather than being loaded and unloaded by a human operator like reels of paper tape or decks of punched cards. Primary storage appears directly in the processor’s memory map, and every individual word can be read or written directly.

Secondary storage is a much larger, slower pool that the processor cannot see directly. It can only reach it by requesting or sending whole blocks to another device, the disk controller, which fetches the contents of the chosen blocks from the big pool of storage, or writes them back into it.

This split continued down into the eight-bit microcomputers of the 1970s and 1980s. The author fondly remembers attaching a ZX Microdrive to his 48K ZX Spectrum. Suddenly, my Spectrum had secondary storage. The Spectrum’s Z80 CPU had a 64K memory map, a quarter of which was ROM. Each Microdrive cartridge, although it held only 100KB or so, could store twice the machine’s entire usable memory. So there had to be a level of indirection: it was impossible to load the whole contents of a cartridge into memory.

It would not fit. So the cartridges had an index, and then named blocks holding BASIC code, machine code, screen images, or data files.

Ever since the microcomputer era, we have called primary storage “RAM” and secondary storage “disks” or “drives”, although in most modern end-user computers both are just different kinds of electronics, with no moving parts or removable media.

You start your computer by loading an operating system from “disk” into RAM. Then, when you want to run a program, the operating system loads it from “disk” into RAM, and that program will most likely load some data from disk into RAM as well. Even if it’s a Chromebook with no other native apps, its single app loads data from another computer over the internet, and that computer loads it from disk into RAM and then sends it to the laptop.

Since Unix was first written in 1969, this has even become a motto: “everything is a file.” Unix-like operating systems use the file system for all kinds of things that aren’t files: device access is controlled by permissions on files, input and output devices are accessed as if they were files, you can play sounds by “copying” them to an audio device, and so on. Since Unix V8 in 1984, there has even been a fake file system called /proc, which displays information about memory and running processes by generating pretend files that users and programs can read, and in some cases write.
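As a small illustration of the point (my own sketch, not from the original piece), here is a minimal C program that reads kernel state through /proc exactly as if it were an ordinary file; the only assumption is a Linux system with /proc mounted:

```c
#include <stdio.h>

/* Read the kernel's uptime counters through /proc.
 * Nothing here knows it is talking to the kernel rather than
 * to a real file on disk: it is just open, read, close. */
int main(void)
{
    FILE *f = fopen("/proc/uptime", "r");
    if (f == NULL) {
        perror("fopen /proc/uptime");
        return 1;
    }

    double uptime, idle;
    if (fscanf(f, "%lf %lf", &uptime, &idle) == 2)
        printf("up %.0f seconds, idle %.0f seconds\n", uptime, idle);

    fclose(f);
    return 0;
}
```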

Files are a powerful metaphor, one that has proved unimaginably more versatile than anyone could have guessed in 1969, when Unix was written for a minicomputer with at most 64,000 words of memory and no sound, graphics, or networking. Files are everywhere now.

But files and file systems were just a prop.

The concept of a “computer file” was invented because memory was too expensive and too small, and bulk storage was too big and too slow to address directly. The only way to attach millions of words of storage to a 1960s mainframe was a disk drive the size of a filing cabinet: far too much storage to fit into the computer’s memory map.

So instead, the mainframe vendors designed disk controllers, and built a sort of database into the operating system. Picture, say, a payroll program, perhaps a few thousand words in size, that could process a file of tens of thousands of employees by working in small pieces: read a row from the personnel file and a record from the payroll file, calculate the result, write a row into the paycheck file, then repeat. The operating system looks up the indexes and turns them into instructions for the disk controller: “Fetch block 47 from track 52, head 12, sector 34, and block 57 from track 4, head 7, sector 65 … now write 74.32 into this block …”
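To make that translation a little more concrete, here is a hedged C sketch (my illustration, not from the article) of the classic arithmetic that maps a logical block number onto a cylinder/head/sector address; the geometry constants are invented for the example:

```c
#include <stdio.h>

/* Hypothetical drive geometry, purely for illustration. */
#define HEADS             12   /* read/write heads per cylinder     */
#define SECTORS_PER_TRACK 64   /* sectors under one head, per track */

struct chs { unsigned cylinder, head, sector; };

/* The OS (or the controller) does sums like this so the program
 * never has to know where on the platters its record actually lives. */
static struct chs lba_to_chs(unsigned lba)
{
    struct chs a;
    a.cylinder = lba / (HEADS * SECTORS_PER_TRACK);
    a.head     = (lba / SECTORS_PER_TRACK) % HEADS;
    a.sector   = lba % SECTORS_PER_TRACK;  /* 0-based here; real drives count sectors from 1 */
    return a;
}

int main(void)
{
    struct chs a = lba_to_chs(10000);
    printf("block 10000 -> cylinder %u, head %u, sector %u\n",
           a.cylinder, a.head, a.sector);
    return 0;
}
```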

SSDs appeared in the 1990s and became affordable in the 2000s. They replace magnetic storage with electronic storage, but they are still secondary storage. SSDs pretend to be drives: the computer talks to a disk controller, sending and receiving sectors, and the drive translates and shuffles them around flash blocks that can only be erased in chunks, usually a megabyte or more, in order to simulate a hard-disk-style device that writes 512-byte sectors.

The problem is that flash memory has to be accessed this way. It is too slow to be mapped directly into the computer’s memory, and a single flash byte cannot be rewritten in place. To modify one byte in a flash block, the rest of that block’s contents must be copied elsewhere, and then the whole block erased. That is not how computer memory controllers work.
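For illustration only, here is a hedged C sketch of that read/modify/erase/rewrite cycle. The flash chip is simulated with an in-memory array, and the 128KB erase-block size and helper names are made up for the sketch, not a real flash API:

```c
#include <stdio.h>
#include <string.h>

#define ERASE_BLOCK_SIZE (128 * 1024)   /* hypothetical erase-block size */
#define NUM_BLOCKS       4

/* Simulated flash: here it is just an array in RAM, but we only ever
 * touch it with whole-block operations, the way real flash forces you to. */
static unsigned char flash[NUM_BLOCKS][ERASE_BLOCK_SIZE];

static void read_block(unsigned b, unsigned char *buf)
{
    memcpy(buf, flash[b], ERASE_BLOCK_SIZE);
}

static void erase_block(unsigned b)             /* whole block only */
{
    memset(flash[b], 0xFF, ERASE_BLOCK_SIZE);   /* erased flash reads as all 1s */
}

static void program_block(unsigned b, const unsigned char *buf)
{
    memcpy(flash[b], buf, ERASE_BLOCK_SIZE);
}

/* To change ONE byte, the whole erase block has to be rewritten. */
static void flash_modify_byte(unsigned b, unsigned offset, unsigned char value)
{
    static unsigned char copy[ERASE_BLOCK_SIZE];

    read_block(b, copy);        /* 1. copy the whole block out    */
    copy[offset] = value;       /* 2. change the one byte we want */
    erase_block(b);             /* 3. erase the entire block      */
    program_block(b, copy);     /* 4. write the whole block back  */
}

int main(void)
{
    erase_block(0);
    flash_modify_byte(0, 42, 'x');
    printf("byte 42 of block 0 is now '%c'\n", flash[0][42]);
    return 0;
}
```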

The future was here… but it’s gone

Optane made it possible to eliminate all that. Like the core store of old, it is working memory: primary storage. And it is big and cheap, like drives. Optane parts ship in the hundreds-of-gigabytes range, the same sort of size as a modest SSD, but they fit directly into your motherboard’s DIMM slots. Every byte appears right there in the processor’s memory map, and every byte can be rewritten directly. There is no shuffling blocks around in order to erase them, as with flash, and it endures millions of write cycles rather than tens of thousands.

Hundreds of gigs, even terabytes, of non-volatile storage, thousands of times faster than flash memory and thousands of times more durable. Not secondary storage on the far side of a disk controller, but right there in the memory map.

Not infinitely rewritable, no. So your computer needs some RAM as well, to hold fast-changing variables and data. But instead of “loading” programs from disk into RAM every time you want to use them, a program is loaded once and then stays in memory indefinitely: it doesn’t matter if there is a power cut, or if you turn the computer off for a week. Switch it back on, and all your apps are still in memory.

No more installing OSes, no more booting, no more loading applications. The operating system sits in memory all the time, and so do your applications. And if you have a terabyte or two of non-volatile memory in your computer, what do you need an SSD for? Everything is just memory. One small portion of it is fast and infinitely rewritable, but loses its contents when the power goes; the other 95 percent keeps its contents forever.

Sure, if the box is a server, you might attach some spinning disks so it can manage petabytes of data. Data centers need that, but very few personal computers do.

Linux, of course, supports this. This particular vulture wrote the documentation on how to use it for a prominent enterprise distribution. But Linux being Linux, everything has to be a file, so it handles Optane by partitioning it and formatting it with a file system: using primary storage to simulate secondary storage, in software.
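As a rough, hedged sketch of what that looks like in practice (my illustration, with an assumed mount point and file name): Linux typically exposes an Optane namespace as a block device such as /dev/pmem0, which gets formatted and mounted with the DAX option, and an application can then mmap() a file on it to get direct load/store access to the persistent media:

```c
#define _GNU_SOURCE   /* for MAP_SHARED_VALIDATE and MAP_SYNC */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Assumes a DAX-capable filesystem on a pmem device, e.g.:
 *   mkfs.xfs /dev/pmem0 && mount -o dax /dev/pmem0 /mnt/pmem
 * The path below is an assumption for this example. */
#define PMEM_FILE "/mnt/pmem/example.dat"
#define LEN       4096

int main(void)
{
    int fd = open(PMEM_FILE, O_RDWR | O_CREAT, 0644);
    if (fd < 0 || ftruncate(fd, LEN) != 0) {
        perror(PMEM_FILE);
        return 1;
    }

    /* MAP_SYNC (with MAP_SHARED_VALIDATE) requests a true DAX mapping:
     * CPU stores go to the persistent media, not to a page-cache copy. */
    char *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                   MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Ordinary memory writes, durable once flushed. */
    strcpy(p, "hello from persistent memory");
    msync(p, LEN, MS_SYNC);

    printf("wrote: %s\n", p);
    munmap(p, LEN);
    close(fd);
    return 0;
}
```

Note that even here the persistent memory is reached through a file on a file system, which is exactly the point the article is making.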

No mainstream operating system today understands the concept of a computer that has only primary storage, no secondary storage at all, split between a small volatile section and a large non-volatile one. It’s hard even to describe it to people who know how current computers work. I’ve tried.

How do you find anything, if there are no drives or directories? How do you save things, if there is nowhere to save them to? How do you compile code, when there is no way to #include one file into another because there are no files, and where does the resulting binary go?

There are ideas about how to do this. The Reg wrote about one of them 13 years ago. There’s also Twizzler, a research project looking at how to make such a system appear Unix-like enough for existing software to use. When HP boffins invented the memristor, HP got very excited and came up with some big plans… but it takes a long time to bring a new technology to the mass market, and eventually HP gave up.

But Intel made it work, manufactured the stuff, and put it on the market… and not enough people were interested, so now Intel is giving up too.

The future was here, but seen through the old, blurred, scratched lenses of 1960s minicomputer operating system design, well, if everything is a file, then Optane was just a sort of really fast drive, right?

No, it wasn’t. It was the biggest step forward since the microcomputer. And we blew it.

Bye, Optane. We hardly knew ye. ®
