Core Memory


Prof. Jay Forrester of MIT invented core memory, which uses the persistence of the direction of magnetic flux in a doughnut-shaped ferrite ring to represent 0 or 1. These rings were called cores.
By transmitting a current (of about 1 A in early memories) through a core, the direction of its flux can be set. A specific core, say at position x,y, is selected by transmitting a current of, say, 0.6 A through one x-wire and the same current through one y-wire simultaneously; only the core at their intersection receives enough combined current to switch. A reversal of the flux is picked up by a third, sense wire, which is strung through all the cores. Cores retain their magnetization, so cores that are not read do not have to be regenerated, a critical operation in earlier memories such as storage tubes. However, when computers were restarted, the initialization of the drivers created currents that made it unwise to rely on memory contents from before a shutdown, unless special circuitry was used in the drivers.
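
As an illustration, here is a minimal Python sketch of this coincident-current (half-select) selection; the data structures are made up, and the only numbers taken from the text are the roughly 1 A switching threshold and the 0.6 A half-select currents.

    SWITCH_THRESHOLD = 1.0   # approximate current (A) needed to reverse a core's flux
    HALF_SELECT = 0.6        # current (A) driven on one x-wire and one y-wire

    def write_bit(plane, x_sel, y_sel, value):
        # Drive half-currents on x-wire x_sel and y-wire y_sel of one plane.
        for y, row in enumerate(plane):
            for x, _ in enumerate(row):
                current = (HALF_SELECT if x == x_sel else 0.0) \
                        + (HALF_SELECT if y == y_sel else 0.0)
                if current >= SWITCH_THRESHOLD:   # only the core at (x_sel, y_sel)
                    plane[y][x] = value           # sees 1.2 A and switches

    plane = [[0] * 8 for _ in range(8)]           # a toy 8 x 8 plane
    write_bit(plane, x_sel=3, y_sel=5, value=1)
    assert sum(map(sum, plane)) == 1              # exactly one core changed
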
The cores were assembled into matrix planes that were approximately square, minimizing x + y -- the number of drivers needed -- for the x * y bits that could be addressed. A 4K memory unit would have planes of 64^2 = 4096 bits, and hence need 64 + 64 drivers. One driver of each set would be selected using 6 bits of the 12-bit address needed to select one bit out of 4K bits.
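A short sketch of that address split (the assignment of the low versus high bits to the x and y drivers is arbitrary here, not taken from any particular machine):

    def select_drivers(address):
        # Split a 12-bit address into two 6-bit halves for a 64 x 64 plane.
        assert 0 <= address < 4096          # 2**12 = 4096 = 64 * 64 bits per plane
        x_driver = address & 0x3F           # low 6 bits pick one of 64 x-drivers
        y_driver = (address >> 6) & 0x3F    # high 6 bits pick one of 64 y-drivers
        return x_driver, y_driver

    print(select_drivers(0))       # (0, 0)
    print(select_drivers(4095))    # (63, 63)
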
Multiple core planes were stacked, one plane per bit, so that all bits of a word could be accessed in parallel. An assembly with twelve planes is shown in the DEC exhibit on the third floor.

Core memory was first used in the Whirlwind computer at MIT, operational from 1951 to 1959 and shown on the second floor. Stringing three wires through the cores was never automated, and core memories, while reliable, remained expensive.

IBM adopted core technology from MIT first for the IBM 701. The IBM 737 core storage unit, announced in Oct. 1954, held 4K 36-bit words, each accessible in 12 microseconds, and had twice the capacity of the prior electrostatic storage. The 701 had to be adapted to use memory that did not have to be regenerated after every write; its logic required 6 memory cycle times to execute an instruction such as an addition. A 737 unit could then be rented for $6,100/month. The follow-on machine, the IBM 704, with floating-point capability, had been announced in April of the same year and rapidly replaced the 701, which was also withdrawn in Oct. 1954, although existing machines continued to be used for a long time. It seems that UC Berkeley's College of Engineering obtained a 4K IBM 701 in 1956 and operated it until 1959, when it upgraded to a 32K IBM 704 [Clough & Wilson: Early Finite Element Research at Berkeley; 5th US Nat. Conf. on Computational Mechanics, 1999]. The IBM 704 required only two cycle times for an addition instruction. It could address 32K words of memory, but that much memory was rare then, since it would cost well over a million dollars. Such a memory would contain more than a million cores, and MIT would be owed a $20,000 royalty payment.
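For reference, the royalty figure is consistent with the 2-cents-per-core rate mentioned below; the little calculation here is only an illustration using the word size and capacity given in the text.

    words = 32 * 1024                 # 32K words, the maximum the IBM 704 could address
    bits_per_word = 36
    cores = words * bits_per_word
    print(cores)                      # 1,179,648 -- "more than a million cores"
    print(round(cores * 0.02))        # about $23,600, a royalty on the order of $20,000
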
A subsequent version, designated the IBM 709, primarily distinguished itself by allowing input/output to be processed in parallel. Both the IBM 704 and 709 were withdrawn from sale in 1960.

Development of a transistorized version, initially designated the 709T, was managed by Michael Flynn, who became a professor at Stanford in 1975.

The resulting IBM 7090, announced in December 1958 and available starting December 1959, was sold until 1969. It still used core memory, but now with a 2.18 microsecond cycle time. Stanford eventually installed its successor, the IBM 7094, running the same memory at 2.0 microseconds, in its Computing Center in Polya Hall.

Core memory technology advanced rapidly, making cores ever smaller and faster. For the IBM 360 series, memory cycle times of 2 and 1.5 microseconds were achieved.

A patent suit between IBM and MIT was eventually settled, and IBM paid MIT 2 cents per core it made. In 1964 IBM bought the core license rights outright for $13M. IBM had earlier bought the concept of regenerating memory after a destructive read from An Wang at Harvard, who had obtained the patent in his own name in 1955.

Slower, 2-D core memories were also fabricated starting in 1966. Here sensing was performed by checking for flux reversal on one of the driving wires. (Did that process require regeneration of the whole line?) Such core memories operated at about 1/4 the speed and about 1/4 the cost.
The Stanford ACME system initially used a 1 Mbyte 2-D memory and later a 2 Mbyte Ampex 2-D memory to provide adequate buffer space for timeshared real-time data acquisition. The large core memory is on top.
The `Pie' file held strips of magnetic tape.
By 1974 the cost of core memory was down to 1 cent/bit, but semiconductor memory had reached the same price by then and continued to drop in price rapidly, replacing core memory for most purposes. However, most semiconductor memory does not retain its contents without regeneration and hence loses its contents when powered off. In the 1990s, slower flash memory became available that retains its contents without power.


Return to floor 3 with the IBM 360 exhibit or the DEC exhibit;

Return to the Whirlwind exhibit on floor 2;
Return to the Stanford Historic Phototour on Floor 1;
Return to the Logic Time line display in the Gates Basement;
Return to the IBM Mainframe at Stanford chronology.

Previous display. All the way back to the beginning of the photo tour