Chapter Nineteen
As we rouse ourselves from sleep every morning, memory fills in the blanks. We remember where we are, what we did the day before, and what we plan to do today. These memories might come in a rush or a dribble, and maybe after some minutes a few lapses might persist (“Funny, I don’t remember wearing my socks to bed”), but all in all we can usually reassemble our lives and achieve enough continuity to commence another day.
Of course, human memory isn’t very orderly. Try to remember something about high school geometry and you’re likely to start thinking about the day there was a fire drill just as the teacher was about to explain what QED meant.
Nor is human memory foolproof. Indeed, writing was probably invented specifically to compensate for the failings of our memory.
We write and we later read. We save and we later retrieve. We store and we later access. The function of memory is to keep the information intact between those two events. Anytime we store information, we’re making use of different types of memory. Just within the previous century, media for storing information have included paper, plastic discs, and magnetic tape, as well as various types of computer memory.
Even telegraph relays—when assembled into logic gates and then flip-flops—can store information. As we’ve seen, a flip-flop is capable of storing 1 bit. This isn’t a whole lot of information, but it’s a start. For once we know how to store 1 bit, we can easily store 2, or 3, or more.
On page 224 of Chapter 17, you encountered the level-triggered D-type flip-flop, which is made from an inverter, two AND gates, and two NOR gates:
When the Clock input is 1, the Q output is the same as the Data input. But when the Clock input goes to 0, the Q output holds the last value of the Data input. Further changes to the Data input don’t affect the outputs until the Clock input goes to 1 again.
In Chapter 17, this flip-flop was featured in a couple of different circuits, but in this chapter it will generally be used in only one way—to store 1 bit of information. For that reason, I’m going to rename the inputs and outputs so that they’ll be more in accordance with that purpose:
This is the same flip-flop, but now the Q output is named Data Out, and the Clock input (which started out in Chapter 17 as Hold That Bit) is named Write. Just as we might write down some information on paper, the Write signal causes the Data In signal to be written into, or stored in, the circuit. Normally, the Write input is 0, and the Data In signal has no effect on the output. But whenever we want to store 1 bit of data in the flip-flop, we make the Write input 1 and then 0 again, as shown in this logic table with the inputs and outputs abbreviated as DI, W, and DO:
As I mentioned in Chapter 17, this type of circuit is also called a latch because it latches onto data, but in this chapter we’ll call it memory. Here’s how we might represent 1 bit of memory without drawing all of the individual components:
Or it can be oriented like this if you prefer:
The positioning of the inputs and outputs doesn’t matter.
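If you’d like something to experiment with, here is a minimal Python sketch of the same behavior. It models only the input/output rules just described, not the gates themselves, and the class and method names are my own invention:

```python
class OneBitMemory:
    """Behavioral model of a level-triggered flip-flop used as 1 bit of memory."""

    def __init__(self):
        self.stored = 0               # the remembered bit (Data Out)

    def signal(self, data_in, write):
        # While Write is 1, Data Out follows Data In.
        if write == 1:
            self.stored = data_in
        # While Write is 0, Data In has no effect; the last value is held.
        return self.stored

# Store a 1, then change Data In while Write is 0: the output holds.
bit = OneBitMemory()
print(bit.signal(data_in=1, write=1))   # 1 (written)
print(bit.signal(data_in=0, write=0))   # 1 (held)
```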
Of course, 1 bit of memory isn’t much at all, but it’s fairly easy to assemble an entire byte of memory by wiring together 8 bits of memory. All you have to do is connect the eight Write signals:
This 8-bit memory has eight inputs and eight outputs as well as a single input named Write that’s normally 0. To save a byte in memory, make the Write input 1 and then 0 again. This circuit can also be drawn as a single box, like so:
As usual, the subscripts differentiate the 8 bits. A subscript of 0 indicates the least significant bit, and a subscript of 7 indicates the most significant bit.
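Continuing the earlier sketch, the following Python fragment simply wires up eight of the OneBitMemory cells with a shared Write signal; the list index plays the role of the subscript, with index 0 as the least significant bit:

```python
class EightBitMemory:
    """Eight 1-bit cells whose Write inputs are tied together."""

    def __init__(self):
        self.cells = [OneBitMemory() for _ in range(8)]

    def signal(self, data_in, write):
        # data_in is a list of 8 bits; index 0 is the least significant bit.
        return [cell.signal(bit, write) for cell, bit in zip(self.cells, data_in)]

# Write one byte, then read it back with Write at 0.
byte = EightBitMemory()
byte.signal([1, 0, 1, 1, 0, 0, 1, 0], write=1)
print(byte.signal([0] * 8, write=0))    # [1, 0, 1, 1, 0, 0, 1, 0]
```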
To be more consistent with the 1-bit memory, the 8-bit memory can be represented using 8-bit data paths for the input and output:
There’s another way of assembling eight flip-flops that isn’t quite as straightforward as this. Suppose we want only one Data In signal and one Data Out signal. But we want the ability to save the value of the Data In signal at eight different times during the day, or maybe eight different times during the next minute. And we also want the ability to later read those eight values by looking at just one Data Out signal.
In other words, rather than saving one 8-bit value, we want to save eight separate 1-bit values.
Storing eight separate 1-bit values involves more complex circuitry, but it simplifies the memory in other ways: If you count up the connections required for the 8-bit memory, you’ll find a total of 17. When eight separate 1-bit values are stored, the connections are reduced to just 6.
Let’s see how this works.
When storing eight 1-bit values, eight flip-flops are still required, but unlike the earlier configuration, the Data inputs are all connected while the Write signals are separate:
Although all the Data In signals are connected, this does not imply that all the flip-flops will be storing the same Data In value. The Write signals are separate, so a particular flip-flop will store the Data In value only when the corresponding Write signal becomes 1. The value that the flip-flop stores is the Data In value at that time.
Rather than manipulate eight separate Write signals, we can instead have one Write signal and govern which flip-flop it controls using a 3-to-8 decoder:
You’ve seen circuits similar to this before: On page 114 toward the end of Chapter 10, you saw a circuit that allowed you to specify an octal number using three switches where each of the eight AND gates was connected to a lightbulb. Depending on what octal number you specified, one (and only one) of the eight lightbulbs would light up. Similar circuits in Chapter 18 were instrumental in displaying clock digits.
The S0, S1, and S2 signals stand for Select. The inputs to each AND gate include one each of these Select signals or their inverses. This 3-to-8 decoder is a little more versatile than the one in Chapter 10 because a Write signal is combined with the S0, S1, and S2 inputs. If the Write signal is 0, all the AND gates will have an output of 0. If the Write signal is 1, one and only one AND gate will have an output of 1 depending on the S0, S1, and S2 signals.
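A behavioral sketch of this decoder in Python might look like the following. It doesn’t model the individual AND gates; it just reproduces the rule that only the selected output can be 1, and only when Write is 1 (the function name is mine):

```python
def decoder_3_to_8(s2, s1, s0, write):
    """One output per AND gate; at most one is 1, picked by the Select bits."""
    selected = (s2 << 2) | (s1 << 1) | s0     # the number formed by S2, S1, S0
    return [write if i == selected else 0 for i in range(8)]

print(decoder_3_to_8(0, 1, 0, write=1))   # [0, 0, 1, 0, 0, 0, 0, 0]
print(decoder_3_to_8(0, 1, 0, write=0))   # all zeros: nothing is written
```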
The Data Out signals from the eight flip-flops can be inputs to a circuit called an 8-to-1 selector that effectively selects one of the eight Data Out signals from the flip-flops:
Again, three Select signals and their inverses are input to eight AND gates. Based on the S0, S1, and S2 signals, one and only one AND gate can have an output of 1. But the Data Out signals from the flip-flops are also input to the eight AND gates. The output of the selected AND gate will be the corresponding Data Out signal from the flip-flops. An eight-input OR gate provides the final Data Out signal selected from among the eight.
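Again as a rough Python sketch rather than a gate-by-gate model, the selector simply uses the Select bits to pick one of the eight Data Out signals:

```python
def selector_8_to_1(data_out_bits, s2, s1, s0):
    """Equivalent to eight AND gates feeding one 8-input OR gate."""
    selected = (s2 << 2) | (s1 << 1) | s0
    return data_out_bits[selected]

print(selector_8_to_1([0, 0, 0, 1, 0, 0, 0, 0], 0, 1, 1))   # 1 (the bit at 011)
```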
The 3-to-8 decoder and 8-to-1 selector can be combined with the eight flip-flops like this:
Notice that the three Select signals to the decoder and the selector are the same. I’ve also made an important change in the labeling of the Select signals. They are now labeled Address, because together they form a number that specifies where the bit resides in memory. It’s like a post office address except that there are only eight possible 3-bit address values: 000, 001, 010, 011, 100, 101, 110, and 111.
On the input side, the Address input determines which flip-flop the Write signal will trigger to store the Data input. On the output side (at the bottom of the figure), the Address input controls the 8-to-1 selector to select the output of one of the eight latches.
For example, set the three Address signals to 010, set Data In to either 0 or 1, and set Write to 1 and then 0. That’s called writing to memory, and the value of Data In is said to be stored in memory at the address 010.
Change the three Address signals to something else. Now come back the next day. If the power is still on, you can set the three Address signals to 010 again, and you’ll see that the Data Out is whatever you set Data In to when you wrote it into memory. That’s called reading from memory or accessing memory. You can then write something else into that memory address by making the Write signal 1 and then 0.
At any time, you can set the Address signals to one of eight different values, and thus you can store eight different 1-bit values. This configuration of flip-flops, decoder, and selector is sometimes known as read/write memory because you can store values (that is, write them) and later determine what those values are (that is, read them). Because you can change the Address signals to any one of the eight values at will, this type of memory is more commonly known as random access memory, or RAM (pronounced the same as the animal).
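Putting the decoder, the selector, and the eight flip-flops together gives the behavior summarized below, again as a Python sketch of the circuit’s behavior rather than its wiring. The class name is my own, and 0b010 is simply Python’s notation for the binary number 010:

```python
class RamEightByOne:
    """An 8x1 RAM array: eight cells addressed by three bits."""

    def __init__(self):
        self.cells = [0] * 8

    def signal(self, address, data_in, write):
        # The 3-to-8 decoder routes the Write signal to the addressed cell only.
        if write == 1:
            self.cells[address] = data_in
        # The 8-to-1 selector routes the addressed cell back out as Data Out.
        return self.cells[address]

ram = RamEightByOne()
ram.signal(address=0b010, data_in=1, write=1)          # write 1 at address 010
print(ram.signal(address=0b010, data_in=0, write=0))   # read it back: 1
```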
Not all memory is random-access memory! In the late 1940s, before it became feasible to build memory from vacuum tubes and before the transistor was invented, other forms of memory were used. One odd technology used long tubes of mercury to store bits of information. Pulses at one end of the tube propagated to the other end like waves in a pond, but these pulses had to be read sequentially rather than randomly. Other types of delay-line memory were used up into the 1960s.
The particular RAM configuration that we’ve now built stores eight separate 1-bit values. It can be represented like this:
A particular configuration of RAM is often referred to as a RAM array. This particular RAM array is organized in a manner abbreviated as 8×1 (pronounced eight by one). Each of the eight values in the array is 1 bit. You can determine the total number of bits that can be stored in the RAM array by multiplying the two values, in this case 8 times 1, or 8 bits.
It’s possible to make larger arrays of memory by connecting smaller arrays together. For example, if you have eight 8×1 RAM arrays and you connect all the Address signals together and all the Write signals together, you can make an 8×8 RAM array:
Notice that the Data In and Data Out signals are now both 8 bits wide. This RAM array stores eight separate bytes, each of which is referenced by a 3-bit address.
However, if we were to assemble this RAM array from eight 8×1 RAM arrays, all the decoding logic and selection logic would be duplicated. Moreover, you may have noticed earlier that the 3-to-8 decoder and the 8-to-1 selector are similar in many ways. Both use eight four-input AND gates, which are selected based on three Select or Address signals. In a real-life configuration of memory, the decoder and selector would share these AND gates.
Let’s see if we can assemble a RAM array in a somewhat more efficient manner. Instead of an 8×8 RAM array that stores 8 bytes, let’s double the memory and make a 16×8 RAM array that stores 16 bytes. Eventually, we should have something that can be represented like this:
The address needs to be 4 bits wide to address 16 bytes of memory. The total number of bits that can be stored in this RAM array is 16 times 8, or 64, which means that 64 separate flip-flops will be required. Obviously it will be difficult to show the complete 16×8 RAM array within the pages of this book, so I’ll show it in several parts.
Earlier in this chapter you saw how a flip-flop used to store 1 bit can be symbolized by a box with Data In and Write inputs and a Data Out output:
One bit of memory is sometimes known as a memory cell. Let’s arrange 64 of these cells in a grid with 8 columns and 16 rows. Each row of 8 cells is a byte of memory. The 16 rows (only three of which are shown here) are for the 16 bytes:
Let’s ignore the Data Out part for now. As you can see, for each byte, the Write signals are connected because an entire byte will be written into memory at once. These connected Write signals are labeled at the left as W0 through W15. These correspond to the 16 possible addresses.
The Data In signals are connected in a different way. For each row, the most significant bit of the byte is at the left, and the least significant bit is at the right. The corresponding bits of each byte are connected together. It doesn’t matter that all the bytes have the same Data In signals, because a byte will be written into memory only when its Write signal is 1.
To write to one of 16 bytes, we need an address that is 4 bits wide because with 4 bits we can make 16 different values and select one of 16 things—those things being the bytes that are stored in memory. As pictured earlier, the Address input of the 16×8 RAM array is indeed 4 bits wide, but we need a way to convert that address into the appropriate Write signal. That’s the purpose of the 4-to-16 decoder:
This is the most complex decoder that you’ll see in this book! Each of the 16 AND gates has four inputs, which correspond to the four Address signals and their inverses. I’ve identified the output of these AND gates with numbers corresponding to the values of the four address bits.
This decoder helps generate the Write signals for the 16 bytes of the 16×8 RAM array: Each of the outputs of the AND gates in the decoder is an input to another AND gate that includes the single Write signal:
These are the signals to write the Data In byte into memory in the illustration on page 276.
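In a Python sketch under the same assumptions as before, writing a byte into this grid looks like the following. The 4-to-16 decoder and the extra AND gates reduce to the rule that only the addressed row latches the shared Data In signals:

```python
# A 16x8 grid of memory cells: 16 rows (bytes), each of 8 bits.
grid = [[0] * 8 for _ in range(16)]

def write_byte(grid, address, data_in, write):
    """W0..W15 come from the 4-to-16 decoder ANDed with the single Write signal."""
    w = [write if row == address else 0 for row in range(16)]
    for row in range(16):
        if w[row] == 1:
            grid[row] = list(data_in)     # only the addressed row stores the byte

write_byte(grid, address=0b0011, data_in=[1, 1, 0, 1, 0, 0, 1, 0], write=1)
print(grid[3])    # [1, 1, 0, 1, 0, 0, 1, 0]
```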
We are done with the inputs, and all that’s left are the Data Out signals from each of the 64 memory cells. This one is hard because each of the eight columns of bits must be handled separately. For example, here’s an abbreviated circuit that handles the leftmost column of the 16×8 RAM array on page 276. It shows how the Data Out signals of the 16 memory cells can be combined with the 16 outputs of the 4-to-16 decoder to select only one of those memory cells:
The 16 outputs from the 4-to-16 decoder are shown at the left. Each of these is an input to an AND gate. The other input to the AND gate is a Data Out from one of the 16 memory cells from the first column of the figure on page 276. The outputs of those 16 AND gates go into a giant 16-input OR gate. The result is DO7, which is the most significant bit of the Data Out byte.
The worst part about this circuit is that it needs to be duplicated for each of the 8 bits in the byte!
Fortunately, there’s a better way.
At any time, only one of the 16 outputs of the 4-to-16 decoder will have an output of 1, which in reality is a voltage. The rest will have an output of 0, indicating ground. Consequently, only one of the AND gates will have an output of 1—and only then if the Data Out of that particular memory cell is 1—and the rest will have an output of 0. The only reason for the giant OR gate is to detect whether any of its inputs is 1.
We could get rid of the giant OR gate if we could just connect all the outputs of the AND gates together. But in general, directly connecting outputs of logic gates is not allowed because voltages might be connected directly to grounds, and that’s a short circuit. But there is a way to do this using a transistor, like this:
If the signal from the 4-to-16 decoder is 1, then the Data Out signal from the transistor emitter will be the same as the DO (Data Out) signal from the memory cell—either a voltage or a ground. But if the signal from the 4-to-16 decoder is 0, then the transistor doesn’t let anything pass through, and the Data Out signal from the transistor emitter will be nothing—neither a voltage nor a ground. This means that all the Data Out signals from a row of these transistors can be connected without creating a short circuit.
Here’s the abbreviated memory array again just showing the Data Out connections. The outputs of the 4-to-16 decoder are at the left, and the complete Data Out signals are at the bottom. Not shown are little resistors at those Data Out signals to ensure that they are either 1 or 0:
The complete 16×8 RAM array is on CodeHiddenLanguage.com.
These transistors are the basis of a circuit called a tri-state buffer. A tri-state buffer can have one of three outputs: ground, indicating logical 0; a voltage, indicating logical 1; or nothing at all—neither ground nor voltage, just as if it’s not connected to anything.
A single tri-state buffer is symbolized like this:
It looks like a buffer but with an additional Enable signal. If that Enable signal is 1, then the Output is the same as the Input. Otherwise, the Output is said to “float” as if it’s not connected to anything.
The tri-state buffer allows us to break the rule that prohibits connecting the outputs of logic gates. The outputs of multiple tri-state buffers can be connected without creating a short circuit—just as long as only one of them is enabled at any time.
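One way to picture this rule is a little Python sketch in which None stands in for the floating state. This is only an analogy for the electrical behavior, not a circuit description:

```python
def tri_state(data, enable):
    """A tri-state buffer: passes its input when enabled, otherwise 'floats'."""
    return data if enable == 1 else None

def shared_wire(outputs):
    """Many tri-state outputs tied together: at most one may be driving the wire."""
    driven = [o for o in outputs if o is not None]
    assert len(driven) <= 1, "two enabled outputs would mean a short circuit"
    return driven[0] if driven else None

# Only the buffer whose Enable is 1 determines the signal on the shared wire.
print(shared_wire([tri_state(1, 0), tri_state(0, 1), tri_state(1, 0)]))   # 0
```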
Generally tri-state buffers are more useful when packaged to handle an entire byte with a single Enable signal:
That configuration of tri-state buffers I’ll symbolize with a box like this:
In future diagrams, if I don’t have room to label the box with its full name, I’ll use just “Tri-State” or “TRI.”
You’ve seen how tri-state buffers can help select 1 of 16 bytes within the 16×8 memory array. I also want the 16×8 memory array to have its own Enable input:
If that Enable signal is 1, then the Data Out signals represent the byte stored at the specified address. If the Enable signal is 0, the Data Out is nothing.
Now that we’ve built a circuit that stores 16 bytes, let’s double it. No, let’s quadruple it. No, no, let’s octuple it. No, no, no, let’s increase the amount of memory by a factor of 16!
To do this, you’ll need 16 of these 16×8 memory arrays, wired up like this:
Only three of the 16 RAM arrays are shown. They share the Data In inputs. The Data Outs of the 16 RAM arrays are safely connected together because the outputs use tri-state buffers. Notice the two sets of 4-bit addresses: The address bits labeled A0 through A3 address all 16 of the RAM arrays, while the address bits labeled A4 through A7 provide a Select input for a 4-to-16 decoder. This decoder controls which of the 16 RAM arrays gets a Write signal and which gets an Enable signal.
The total memory capacity has been increased by a factor of 16, which means that we can store 256 bytes, and we can put this circuit in another box labeled like so:
Notice that the address is now 8 bits wide. A RAM array that stores 256 bytes is like a post office with 256 post office boxes. Each one has a different 1-byte value inside (which may or may not be better than junk mail).
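The way the 8-bit address is divided between the 16 smaller arrays can be sketched in a couple of lines of Python; the function name and the bit operations are just my shorthand for the wiring described above:

```python
def split_address(address):
    """A0-A3 go to every 16x8 array; A4-A7 feed the 4-to-16 decoder
    that selects which array is written or enabled."""
    low = address & 0b1111           # address within the selected array
    high = (address >> 4) & 0b1111   # which of the 16 arrays
    return high, low

print(split_address(0b10110010))     # (11, 2): byte 2 of array number 11
```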
Let’s do it again! Let’s take 16 of these 256×8 RAM arrays and use another 4-to-16 decoder to select them with another four address bits. The memory capacity increases by a factor of 16, for a total of 4096 bytes. Here’s the result:
The address is now 12 bits wide.
Let’s do it once more. We’ll need 16 of these 4096×8 RAM arrays and another 4-to-16 decoder. The address grows to 16 bits, and the memory capacity is now 65,536 bytes:
You can keep going, but I’m going to stop here.
You might have noticed that the number of values that a RAM array stores is directly related to the number of address bits. With no Address inputs, only one value can be stored. With four address bits, 16 values are stored, and with 16 address bits, we get 65,536. The relationship is summed up by this equation:

Number of values in RAM array = 2^(number of address bits)
RAM that stores 65,536 bytes is also said to store 64 kilobytes, or 64K, or 64KB, which on first encounter might seem puzzling. By what weird arithmetic does 65,536 become 64 kilobytes?
The value 2^10 is 1024, which is the value commonly known as one kilobyte. The prefix kilo (from the Greek khilioi, meaning a thousand) is most often used in the metric system. For example, a kilogram is 1000 grams, and a kilometer is 1000 meters. But here I’m saying that a kilobyte is 1024 bytes—not 1000 bytes.
The problem is that the metric system is based on powers of 10, and binary numbers are based on powers of 2, and never the twain shall meet. Powers of 10 are 10, 100, 1000, 10000, 100000, and so on. Powers of 2 are 2, 4, 8, 16, 32, 64, and so on. There is no integral power of 10 that equals some integral power of 2.
But every once in a while, they do come close. Yes, 1000 is fairly close to 1024, or to put it more mathematically using an “approximately equal to” sign:

2^10 ≈ 10^3
There is nothing magical about this relationship. All it implies is that a particular power of 2 is approximately equal to a particular power of 10. This little quirk allows people to conveniently refer to a kilobyte of memory when they really mean 1024 bytes.
What you don’t say is that a 64K RAM array stores 64 thousand bytes. It’s more than 64 thousand—it’s 65,536. To sound like you know what you’re talking about, you say either “64K” or “64 kilobytes” or “sixty-five thousand five hundred and thirty-six.”
Each additional address bit doubles the amount of memory. Each line of the following sequence represents that doubling:
Note that the numbers of kilobytes shown on the left are also powers of 2.
With the same logic that lets us call 1024 bytes a kilobyte, we can also refer to 1024 kilobytes as a megabyte. (The Greek word megas means great.) Megabyte is abbreviated MB. And the memory doubling continues:
The Greek word gigas means giant, so 1024 megabytes are called a gigabyte, which is abbreviated GB.
Similarly, a terabyte (teras means monster) equals 2^40 bytes (approximately 10^12), or 1,099,511,627,776 bytes. Terabyte is abbreviated TB.
A kilobyte is approximately a thousand bytes, a megabyte is approximately a million bytes, a gigabyte is approximately a billion bytes, and a terabyte is approximately a trillion bytes.
Ascending into regions that few have traveled, a petabyte equals 2^50 bytes, or 1,125,899,906,842,624 bytes, which is approximately 10^15, or a quadrillion. An exabyte equals 2^60 bytes, or 1,152,921,504,606,846,976 bytes, approximately 10^18, or a quintillion.
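If you’d like to verify these numbers yourself, a few lines of Python will print the whole progression; this is just arithmetic, not anything about the memory circuitry:

```python
# The binary prefixes, each a power of 2 that is close to a power of 10.
for prefix, power in [("kilo", 10), ("mega", 20), ("giga", 30),
                      ("tera", 40), ("peta", 50), ("exa", 60)]:
    print(f"1 {prefix}byte = 2^{power} = {2**power:,} bytes "
          f"(roughly 10^{power // 10 * 3})")
```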
Just to provide you with a little grounding, desktop computers purchased at the time that the first edition of this book was written, in 1999, commonly had 32 MB or 64 MB or sometimes 128 MB of random-access memory. At the time this second edition is being written, in 2021, desktop computers commonly have 4, 8, or 16 GB of RAM. (And don’t get too confused just yet—I haven’t mentioned anything about storage that is retained when the power is shut off, including hard drives and solid-state drives [SSD]; I’m only talking about RAM here.)
People, of course, speak in shorthand. Somebody who has 65,536 bytes of memory will say, “I have 64K (and I’m a visitor from the year 1980).” Somebody who has 33,554,432 bytes will say, “I have 32 megs.” And those that have 8,589,934,592 bytes of memory will say, “I’ve got 8 gigs (and I’m not talking music).”
Sometimes people will refer to kilobits or megabits (notice bits rather than bytes), but this is rare when speaking about memory. Almost always when people talk about memory, they’re talking number of bytes, not bits. Usually when kilobits or megabits come up in conversation, it will be in connection with data being transmitted over a wire or through the air, generally in connection with high-speed internet connections called “broadband,” and will occur in such phrases as “kilobits per second” or “megabits per second.”
You now know how to construct RAM in any array size you want (at least in your head), but I’ve stopped at 65,536 bytes of memory.
Why 64 KB? Why not 32 KB or 128 KB? Because 65,536 is a nice round number. It’s 2^16. This RAM array has a 16-bit address—2 bytes exactly. In hexadecimal, the address ranges from 0000h through FFFFh.
As I implied earlier, 64 KB was a common amount of memory in personal computers purchased around 1980, but it wasn’t quite like I’ve shown you here. Memory constructed from flip-flops is more precisely called static random-access memory, or static RAM. By 1980, dynamic RAM, or DRAM, was taking over and soon became dominant. DRAM requires only one transistor and one capacitor for each memory cell. A capacitor is a device used in electronics that contains two separated electrical conductors. A capacitor can store an electric charge, but not indefinitely. The key to making DRAM work is that these charges are refreshed thousands of times per second.
Both static RAM and dynamic RAM are called volatile memory. A constant source of electricity is required to hold the data. When the power goes off, volatile memory forgets everything it once knew.
It will be advantageous for us to have a control panel that lets us manage these 64KB of memory—to write values into memory or examine them. Such a control panel has 16 switches to indicate an address, eight switches to define an 8-bit value that we want to write into memory, another switch for the Write signal itself, and eight lightbulbs to display a particular 8-bit value:
All the switches are shown in their off (0) positions. I’ve also included a switch labeled Takeover. The purpose of this switch is to let other circuits use the same memory that the control panel is connected to. When the switch is set to 0 (as shown), the rest of the switches on the control panel don’t do anything. When the Takeover switch is set to 1, however, the control panel has exclusive control over the memory.
Implementing that Takeover switch is a job for a bunch of 2-to-1 selectors, which are quite simple in comparison with the larger decoders and selectors in this chapter:
When the Select signal is 0, the output of the OR gate is the same as the A input. When the Select signal is 1, the B input is selected.
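Here’s the same rule as a tiny Python sketch, with the two AND gates and the OR gate collapsed into one expression (the function name is my own):

```python
def selector_2_to_1(a, b, select):
    """Output A when Select is 0, B when Select is 1."""
    return (a & (1 - select)) | (b & select)

print(selector_2_to_1(a=1, b=0, select=0))   # 1 (the A input)
print(selector_2_to_1(a=1, b=0, select=1))   # 0 (the B input)
```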
We need 26 of these 2-to-1 selectors—16 for the Address signals, eight for the Data input switches, and two more for the Write switch and the Enable signal. Here’s the circuit:
When the Takeover switch is open, the Address, Data input, Write, and Enable inputs to the 64K × 8 RAM array come from external signals shown at the top left of the 2-to-1 selectors. When the Takeover switch is closed, the Address, Data input, and Write signals to the RAM array come from switches on the control panel, and Enable is set to 1. In either case, the Data Out signals from the RAM array go back to the eight lightbulbs in the control panel and possibly someplace else.
When the Takeover switch is closed, you can use the 16 Address switches to select any of 65,536 addresses. The lightbulbs show you the 8-bit value currently stored in memory at that address. You can use the eight Data switches to define a new value, and you can write that value into memory using the Write switch.
The 64K × 8 RAM array and control panel can certainly help you keep track of any 65,536 8-bit values you may need to have handy. But we have also left open the opportunity for something else—some other circuitry perhaps—to use the values that are stored in memory and to write other ones in as well.
If you think this scenario is improbable, you might want to look at the cover of the famous January 1975 issue of Popular Electronics, which featured a story about the first home computer, the Altair 8800:
Retro AdArchives/Alamy Stock Photo
The front of this computer is a control panel with nothing but switches and lights, and if you count the long row of switches toward the bottom, you’ll discover that there are 16 of them.
A coincidence? I don’t think so.