How Boolean Logic Works
Introduction to How Boolean Logic Works
Have you ever wondered how a computer can do something like balance a checkbook, or play chess, or spell-check a document? These are things that, just a few decades ago, only humans could do. Now computers do them with apparent ease. How can a "chip" made up of silicon and wires do something that seems like it requires human thought? If you want to understand the answer to this question down at the very core, the first thing you need to understand is something called Boolean logic. Boolean logic, originally developed by George Boole in the mid-1800s, allows quite a few unexpected things to be mapped into bits and bytes. The great thing about Boolean logic is that, once you get the hang of it, Boolean logic (or at least the parts you need in order to understand the operations of computers) is outrageously simple. In this article, we will first discuss simple logic "gates" and then see how to combine them into something useful.
Simple Gates
There are three, five or seven simple gates that you need to learn about, depending on how you want to count them (you will see why in a moment). With these simple gates you can build combinations that will implement any digital component you can imagine. These gates are going to seem a little dry here, and incredibly simple, but we will see some interesting combinations in the following sections that will make them a lot more inspiring. If you have not done so already, reading How Bits and Bytes Work would be helpful before proceeding.

The simplest possible gate is called an "inverter," or a NOT gate. It takes one bit as input and produces its opposite as output. In circuit diagrams each gate has its own standard symbol; the logic table for the NOT gate looks like this:

A | Q
0 | 1
1 | 0
The AND gate performs a logical "and" operation on two inputs, A and B:
A | B | Q
0 | 0 | 0
0 | 1 | 0
1 | 0 | 0
1 | 1 | 1
The OR gate performs a logical "or" operation on two inputs, A and B:

A | B | Q
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 1

Those are the three basic gates, and that is one way to count them. It is also common to recognize two others: the NAND gate and the NOR gate. These are simply an AND gate or an OR gate followed by a NOT gate, and counting them brings the total to five:

NAND
A | B | Q
0 | 0 | 1
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0

NOR
A | B | Q
0 | 0 | 1
0 | 1 | 0
1 | 0 | 0
1 | 1 | 0

The final two gates that are sometimes added to the list are the XOR ("exclusive or") and XNOR ("exclusive nor") gates, which brings the count to seven. XOR outputs a 1 when either A or B is 1, but not both; XNOR is its inverse:

XOR
A | B | Q
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0

XNOR
A | B | Q
0 | 0 | 1
0 | 1 | 0
1 | 0 | 0
1 | 1 | 1
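To make these definitions concrete, here is a minimal sketch in Python that models each gate as a function on single bits (0 or 1). The function names are our own shorthand, not part of any standard library, and real gates are of course hardware rather than software:

    # Minimal models of the seven simple gates, operating on bits (0 or 1).
    def NOT(a):
        return 1 - a

    def AND(a, b):
        return a & b

    def OR(a, b):
        return a | b

    def NAND(a, b):
        return NOT(AND(a, b))   # "not AND"

    def NOR(a, b):
        return NOT(OR(a, b))    # "not OR"

    def XOR(a, b):
        return a ^ b            # 1 when exactly one input is 1

    def XNOR(a, b):
        return NOT(XOR(a, b))   # 1 when the inputs match

    # Print the AND table to check it against the table above.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, AND(a, b))

Running the little loop at the end reproduces the AND table shown earlier; swapping in any of the other functions reproduces its table the same way.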
Simple Adders
In the article on bits and bytes, you learned about binary addition. In this section, you will learn how you can create a circuit capable of binary addition using the gates described in the previous section.

Let's start with a single-bit adder. Let's say that you have a project where you need to add single bits together and get the answer. The way you would start designing a circuit for that is to first look at all of the logical combinations. You might do that by looking at the following four sums:

0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 10

The first three answers fit in a single bit, but the fourth one does not: 1 + 1 produces the two-bit binary answer 10. So a single-bit adder really needs two output bits, a sum bit and a carry bit. Rewriting each answer as two bits makes that explicit:

0 + 0 = 00
0 + 1 = 01
1 + 0 = 01
1 + 1 = 10
From these sums we can build a logic table, where CO is the carry-out bit and Q is the sum bit:

A | B | CO | Q
0 | 0 | 0 | 0
0 | 1 | 0 | 1
1 | 0 | 0 | 1
1 | 1 | 1 | 0

Looking at this table, you can see that CO behaves exactly like an AND gate and Q behaves exactly like an XOR gate, so a single-bit adder needs nothing more than those two gates.
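As a quick sanity check, here is a small Python sketch (the function name is ours) that builds this single-bit adder from an XOR and an AND and reprints the table:

    # A single-bit ("half") adder: sum bit = A XOR B, carry-out = A AND B.
    def half_adder(a, b):
        q = a ^ b        # sum bit (XOR gate)
        co = a & b       # carry-out bit (AND gate)
        return co, q

    # Reproduce the logic table above.
    for a in (0, 1):
        for b in (0, 1):
            co, q = half_adder(a, b)
            print(a, b, co, q)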
What if you want to add two 8-bit bytes together? This becomes slightly harder. The easiest solution is to modularize the problem into reusable components and then replicate components. In this case, we need to create only one component: a full binary adder.
The difference between a full adder and the previous adder we looked at is that a full adder accepts an A and a B input plus a carry-in (CI) input. Once we have a full adder, then we can string eight of them together to create a byte-wide adder and cascade the carry bit from one adder to the next.
In the next section, we'll look at how a full adder is implemented as a circuit.
Full Adders
The logic table for a full adder is slightly more complicated than the tables we have used before, because now we have 3 input bits. It looks like this:

A | B | CI | Q | CO
0 | 0 | 0 | 0 | 0
0 | 1 | 0 | 1 | 0
1 | 0 | 0 | 1 | 0
1 | 1 | 0 | 0 | 1
0 | 0 | 1 | 1 | 0
0 | 1 | 1 | 0 | 1
1 | 0 | 1 | 0 | 1
1 | 1 | 1 | 1 | 1
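One common way to satisfy this table (not the only one) is to chain two of the single-bit adders from above and OR their carry bits together. Here is a rough Python sketch of that arrangement, again with our own function names:

    # A full adder built from two half adders plus an OR gate for the carries.
    # Q = A XOR B XOR CI; CO is 1 whenever at least two of the three inputs are 1.
    def full_adder(a, b, ci):
        s1 = a ^ b         # first half adder: partial sum of A and B
        c1 = a & b         # first half adder: partial carry
        q = s1 ^ ci        # second half adder: final sum bit
        c2 = s1 & ci       # second half adder: second partial carry
        co = c1 | c2       # either partial carry produces a carry-out
        return co, q

    # Reproduce the full-adder logic table above.
    for ci in (0, 1):
        for a in (0, 1):
            for b in (0, 1):
                co, q = full_adder(a, b, ci)
                print(a, b, ci, q, co)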
Now we have a piece of functionality called a "full adder." What a computer engineer then does is "black-box" it so that he or she can stop worrying about the details of the component. A black box for a full adder simply shows the three inputs (A, B and CI) and the two outputs (Q and CO), and hides the gates inside.
An adder built by chaining full adders together like this is called a ripple-carry adder. It gets that name because the carry bits "ripple" from one adder to the next. This implementation has the advantage of simplicity but the disadvantage of speed problems. In a real circuit, gates take time to switch states (the time is on the order of nanoseconds, but in high-speed computers nanoseconds matter), so 32-bit or 64-bit ripple-carry adders might take 100 to 200 nanoseconds to settle into their final sum because of carry ripple. For this reason, engineers have created more advanced adders called carry-lookahead adders. The number of gates required to implement carry-lookahead is large, but the settling time for the adder is much better.
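To see the chaining in code, here is a sketch of a byte-wide ripple-carry adder (our own Python modeling, not a circuit description) in which each full adder's carry-out feeds the next full adder's carry-in:

    # An 8-bit ripple-carry adder: eight full adders in a chain.
    def full_adder(a, b, ci):
        q = a ^ b ^ ci
        co = (a & b) | (a & ci) | (b & ci)
        return co, q

    def ripple_carry_add(a_bits, b_bits):
        """Add two 8-bit numbers given as lists of bits, least significant bit first."""
        carry = 0
        sum_bits = []
        for a, b in zip(a_bits, b_bits):
            carry, q = full_adder(a, b, carry)   # the carry "ripples" to the next stage
            sum_bits.append(q)
        return sum_bits, carry                   # eight sum bits plus the final carry-out

    # Example: 23 + 58 = 81
    a = [(23 >> i) & 1 for i in range(8)]
    b = [(58 >> i) & 1 for i in range(8)]
    bits, carry = ripple_carry_add(a, b)
    print(sum(bit << i for i, bit in enumerate(bits)) + (carry << 8))   # prints 81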
Flip Flops
One of the more interesting things that you can do with Boolean gates is to create memory with them. If you arrange the gates correctly, they will remember an input value. This simple concept is the basis of RAM (random access memory) in computers, and also makes it possible to create a wide variety of other useful circuits. Memory relies on a concept called feedback: the output of a gate is fed back into an input. The simplest possible feedback circuit connects two inverters in a loop, so that each inverter's output drives the other's input. It turns out that in "real" circuits you can actually use this sort of simple inverter feedback approach, but a more useful feedback circuit cross-couples two NAND gates. Its behavior is summarized in the following table, where R and S are the inputs and Q and Q' are the outputs:
R | S | Q | Q'
0 | 0 | (Illegal)
0 | 1 | 1 | 0
1 | 0 | 0 | 1
1 | 1 | (Remembers previous state)
- If R and S are opposites of one another, then Q follows S and Q' is the inverse of Q.
- If both R and S are 1, then the circuit remembers its previous state: Q and Q' hold whatever values the last R/S combination left them with.
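Here is a rough Python sketch (our own modeling, not the article's circuit diagram) of the cross-coupled NAND latch. Because the gates feed each other, the code simply re-evaluates both of them until the outputs stop changing:

    # A cross-coupled NAND latch: Q = NAND(R, Q') and Q' = NAND(S, Q).
    # The feedback is modeled by re-evaluating both gates until they settle.
    def NAND(a, b):
        return 0 if (a and b) else 1

    def sr_latch(r, s, q=0, q_bar=1):
        """Return the settled (Q, Q') for inputs R and S, starting from a prior state."""
        for _ in range(4):                    # a few passes are enough to settle
            q_new = NAND(r, q_bar)
            q_bar_new = NAND(s, q_new)
            if (q_new, q_bar_new) == (q, q_bar):
                break
            q, q_bar = q_new, q_bar_new
        return q, q_bar

    q, q_bar = sr_latch(r=0, s=1)                      # Q follows S: Q=1, Q'=0
    q, q_bar = sr_latch(r=1, s=1, q=q, q_bar=q_bar)    # R=S=1: remembers, Q stays 1
    q, q_bar = sr_latch(r=1, s=0, q=q, q_bar=q_bar)    # Q follows S: Q=0, Q'=1
    print(q, q_bar)                                    # prints 0 1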
In the next section we'll look at the J-K flip-flop.
The J-K Flip-Flop
A very common form of flip-flop is the J-K flip-flop. It is unclear, historically, where the name "J-K" came from, but it is generally represented as a black box with preset (P), clear (C) and clock (Clk) inputs, the J and K data inputs, and the Q and Q' outputs. Its behavior is summarized in this logic table:

P | C | Clk | J | K | Q | Q'
1 | 1 | 1-to-0 | 1 | 0 | 1 | 0
1 | 1 | 1-to-0 | 0 | 1 | 0 | 1
1 | 1 | 1-to-0 | 1 | 1 | (Toggles)
1 | 0 | X | X | X | 0 | 1
0 | 1 | X | X | X | 1 | 0
You might be asking yourself right now, "What in the world is that good for?" It turns out that the concept of "edge triggering" is very useful. The fact that the J-K flip-flop only "latches" the J-K inputs on a transition from 1 to 0 makes it much more useful as a memory device. J-K flip-flops are also extremely useful in counters (which are used extensively when creating a digital clock). A 4-bit counter can be built from four J-K flip-flops by tying every J and K input to 1 (so each flip-flop toggles) and using each stage's Q output as the clock for the next stage.
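Here is a rough Python sketch of that arrangement (our own modeling; a real counter is wired, not programmed). Each flip-flop toggles on a 1-to-0 transition of its clock input, and each stage's Q drives the next stage's clock:

    # A 4-bit ripple counter built from J-K flip-flops with J = K = 1.
    class JKFlipFlop:
        def __init__(self):
            self.q = 0
            self.prev_clk = 0

        def clock(self, clk, j=1, k=1):
            if self.prev_clk == 1 and clk == 0:   # latch only on a 1-to-0 transition
                if j and k:
                    self.q ^= 1                   # J=K=1: toggle
                elif j:
                    self.q = 1                    # J=1, K=0: set
                elif k:
                    self.q = 0                    # J=0, K=1: reset
            self.prev_clk = clk
            return self.q

    stages = [JKFlipFlop() for _ in range(4)]     # stage 0 is the least significant bit

    for pulse in range(1, 17):                    # sixteen clock pulses
        for clk in (1, 0):                        # each pulse goes high, then low
            signal = clk
            for stage in stages:                  # the 1-to-0 edge ripples up the chain
                signal = stage.clock(signal)
        count = sum(stage.q << i for i, stage in enumerate(stages))
        print(pulse, count)                       # counts 1, 2, 3 ... 15, then wraps to 0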
Another use of a J-K flip-flop is to create an edge-triggered latch: feed a data bit D into J and its inverse into K, and the flip-flop will capture the value of D on each 1-to-0 clock transition.
Implementing Gates
In the previous sections we saw that, by using very simple Boolean gates, we can implement adders, counters, latches and so on. That is a big achievement, because not so long ago human beings were the only ones who could do things like add two numbers together. With a little work, it is not hard to design Boolean circuits that implement subtraction, multiplication, division... You can see that we are not that far away from a pocket calculator. From there, it is not too far a jump to the full-blown CPUs used in computers.

So how might we implement these gates in real life? Mr. Boole came up with them on paper, and on paper they look great. To use them, however, we need to implement them in physical reality so that the gates can perform their logic actively. Once we make that leap, then we have started down the road toward creating real computation devices.

The easiest way to understand the physical implementation of Boolean logic is to use relays. This is, in fact, how the very first computers were implemented. No one implements computers with relays anymore -- today, people use sub-microscopic transistors etched onto silicon chips. These transistors are incredibly small and fast, and they consume very little power compared to a relay. However, relays are incredibly easy to understand, and they can implement Boolean logic very simply. Because of that simplicity, you will be able to see that mapping from "gates on paper" to "active gates implemented in physical reality" is possible and straightforward. Performing the same mapping with transistors is just as easy.
Let's start with an inverter. Implementing a NOT gate with a relay is easy. What we are going to do is use voltages to represent bit states: we will define a binary 1 to be 6 volts and a binary 0 to be zero volts (ground), and we will use a 6-volt battery to power our circuits. In our NOT gate, the input A energizes the relay's coil, and the output Q is taken from the contact that connects Q to the 6-volt supply only when the coil is not energized.
You can see that if you apply zero volts to A, then you get 6 volts out on Q; and if you apply 6 volts to A, you get zero volts out on Q. It is very easy to implement an inverter with a relay!
It is similarly easy to implement an AND gate with two relays: wire the two relays' switch contacts in series between the 6-volt supply and Q, so that Q sees 6 volts only when both A and B energize their coils. Wiring the contacts in parallel instead gives you an OR gate.
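As a purely illustrative sketch (our own Python model with made-up names; a real relay is electromechanical, not software), you can mimic this by treating a relay as a switch whose position depends on whether its coil sees 6 volts:

    # A crude model of a relay: when the coil sees 6 volts, the armature pulls in
    # and the "normally open" path conducts; otherwise the "normally closed" path does.
    V_ONE, V_ZERO = 6, 0      # 6 volts represents binary 1, zero volts represents binary 0

    def relay(coil_volts, normally_closed, normally_open):
        return normally_open if coil_volts == V_ONE else normally_closed

    def relay_not(a):
        # A drives the coil; Q is wired so it sees 6 volts only when the coil is off.
        return relay(a, normally_closed=V_ONE, normally_open=V_ZERO)

    def relay_and(a, b):
        # Two relays with contacts in series: 6 volts reaches Q only if both coils are energized.
        through_first = relay(a, normally_closed=V_ZERO, normally_open=V_ONE)
        return relay(b, normally_closed=V_ZERO, normally_open=through_first)

    print(relay_not(V_ZERO), relay_not(V_ONE))                  # 6 0
    print(relay_and(V_ONE, V_ONE), relay_and(V_ONE, V_ZERO))    # 6 0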
You can see from this discussion that you can create the three basic gates -- NOT, AND and OR -- from relays. You can then hook those physical gates together, following the logic described in the previous sections, to create a physical 8-bit ripple-carry adder. If you use simple switches to apply the A and B inputs to the adder and hook the eight Q lines (plus the final carry-out) to light bulbs, you will be able to add any two 8-bit numbers together and read the result on the lights ("light on" = 1, "light off" = 0).
Boolean logic in the form of simple gates is very straightforward. From simple gates you can create more complicated functions, like addition. Physically implementing the gates is possible and easy. From those three facts you have the heart of the digital revolution, and you understand, at the core, how computers work.