9
The true method of discovery is like the flight of an aeroplane. It starts from the ground of particular observation; it makes a flight in the thin air of imaginative generalization; and it again lands for renewed observation rendered acute by rational interpretation.
—Alfred North Whitehead (1861–1947), Process and Reality: An Essay in Cosmology
We continue on in the spirit of the last chapter, briefly turning our attention now to what we have termed the second of the great algebraic dramas—the art and science of describing situations by equations and then finding their solutions for insight.
An instructor in a class of 24 asks for a volunteer to pick a whole number between 1 and 24 and to tell it to the rest of the class after the instructor is out of earshot. They exit the classroom, close the door, and take a stroll down the hall before returning. Once back, the instructor directs everyone to take the chosen number (whatever it is) and multiply it by 15, then to add 20. Someone says they got 125, which the rest of the class immediately confirms.
The instructor then goes behind the projection screen making sounds as if they are frantically looking for something and then comes back out stating that they have found the original number chosen by the volunteer, and that it is 7. Some students chuckle, a few seem genuinely amazed, but most smile knowingly.
One of them blurts out, “That’s easy, you just reversed the steps.” The teacher nods in agreement and shows what happened in algebraic language:
Steps of the Procedure | Algebraic Interpretation at Each Step
Step 1: Class picks a number whose value is unknown to teacher | x
Step 2: Class multiplies this number by 15 | 15x
Step 3: Class then adds 20 to the result | 15x + 20
Step 4: Class obtains 125 | 15x + 20 = 125
Step 5: Teacher solves the equation in their head | x = 7
Step 6: Teacher reveals that the chosen number is 7
The reduction diagram shows the systematic solution process:
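Written out in symbols (our own rendering of what such a reduction diagram conveys), the steps reverse the class’s instructions one at a time:

```latex
\begin{aligned}
15x + 20 &= 125 && \text{the equation reported by the class}\\
15x &= 105 && \text{subtract 20 from both sides (undo the ``add 20'' step)}\\
x &= 7 && \text{divide both sides by 15 (undo the ``multiply by 15'' step)}
\end{aligned}
```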
This is a common type of exercise in an elementary algebra class, and though simple once understood, it still contains powerful conceptual fuel.
What understanding algebraic mechanisms has done here is allow the instructor to find out a piece of information that they didn’t directly know. At no point in the process of discovery did they ever hear any of the students say the number 7. Yet, after being given the value 125 and based on the way certain algebraic relationships work, they were able to symbolically link this number to the value that they didn’t know, and then use canonical maneuvers to uncover what they wanted; namely, that the student had chosen the number 7.
MECHANISMS
It turns out that familiarity with the mechanisms of phenomena—or at least of their effects—can aid us in finding out unknown information in a wide variety of areas.
Consider lake levels. A lake is an interconnected entity: if the water level in one place rises, there will be corresponding effects elsewhere. This allows us to deduce, by proxy, information about the lake and its surroundings without necessarily seeing those locations. For instance, if the lake level at our location rises to 6 feet above normal, then we know that a spot on the shore, thirty miles away and only half a foot above the lake at normal levels, is now underwater.
Here, we used an understanding of the way water levels work on Earth to find out an unknown piece of information. In this case only information from a local measurement was used; no calculations were needed.
Consider river currents. A boy accidentally drops a baseball bat off of a bridge into the center of a rapid-less river whose current is 3 miles per hour along its entire length. If a rafter 15 miles downstream pulls the bat out of the center of the river at 1:00 PM, when did the boy drop the bat into the river?
To solve this problem, we could use the formula distance = (speed) times (the time of travel), or d = st. Here, for convenience and since the terms are simple enough to remember, the variables have been abbreviated to their first letter as opposed to using generic x, y, and z. This follows common usage for the distance formula, although “rate”—abbreviated to r—is often used in the place of speed.
Given that s = 3 and d = 15, the formula simplifies to 15 = 3t, which immediately gives t = 5. Thus, the approximate time it took for the bat to travel to where the rafter picked it up was 5 hours. This means that it was dropped into the water around 8:00 AM. In this case, two calculations were used to obtain the desired information.
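As a quick illustration of that arithmetic, here is a minimal Python sketch of the same calculation; the function name and the use of Python’s datetime module are our own choices, not anything from the text, and the calendar date is arbitrary.

```python
from datetime import datetime, timedelta

def drop_time(distance_miles, speed_mph, pickup):
    """Work backward from the pickup time using d = s*t."""
    travel_hours = distance_miles / speed_mph      # t = d / s
    return pickup - timedelta(hours=travel_hours)  # when the bat entered the water

pickup = datetime(2024, 6, 1, 13, 0)               # 1:00 PM on an arbitrary example date
print(drop_time(15, 3, pickup))                    # 2024-06-01 08:00:00, i.e., 8:00 AM
```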
The values here are basic enough that many could quickly compute the answer without resorting to formal algebra at all, but we are after far bigger conceptual game than simply finding a quick solution to this basic problem, and using algebra gives us the tools to set the stage.
LITERAL EXPRESSIONS
Though the previous presentation of finding the time that the bat was dropped into the river certainly involves familiar techniques, the perspective is subtly different from the viewpoint in most of the previous chapters. Thus, before we take our first flight, we will take a brief detour to see how the perspectives compare.
NAMING ALGEBRAIC EXPRESSIONS
Throughout most of the book, we have not used proper names to describe algebraic expressions at all. For instance, the variable expression that encoded the procedure for the number of days and age problem (100x + 2013 + z – y) was never given a formal name.
It is only in the previous chapter that we have consistently given proper names to particular recurring notions: course average, final exam average, GPA formula, effective percentage rate of return, slugging percentage, and University and College Ranking Score.
When proper names are given in this way, the expressions are sometimes called formulas.
BOTH SIDES OF FORMULA ARE NOW IN PLAY
Though proper names were given to the formulas in the previous chapter, we still didn’t abbreviate them to a single letter. For instance, we had the following:
• Course average = 0.25x + 0.55y + 0.20z.
• GPA = 4x + 3y + 2z + 1u + 0v.
• University and College Ranking Score = 0.22x1 + 0.05x2 + 0.08x3 + 0.20x4 + 0.20x5 + 0.07x6 + 0.10x7 + 0.03x8 + 0.05x9.
Notice that the names of the formulas on the left-hand side are not abbreviated to a single letter. If we wanted to be consistent in this chapter, we should have written the distance formula as distance = st. The fact that it is often the practice to also abbreviate “distance” to the single letter d in this formula—though not necessarily standard practice before in situations like those in Chapter 8—is a nuance worth noting.
The nuance is that in the previous chapter we didn’t view the proper-named term on the left-hand side of the equals sign in the same way that we viewed the variable terms on the right. The variables and parameters on the right-hand side of the formula were viewed in a dynamic way, meaning we could substitute in different numbers for them and then calculate. Once done, this would give us the value of the proper-named term on the left-hand side (GPA, course average, etc.).
The proper-named terms were treated as non-interacting objects whose values were totally dependent upon the values on the right-hand side. We viewed them as if they inherited their variability—their very existence even—completely from the expressed combination of variables and parameters on the right. For example, GPA doesn’t exist without the combination of grade point values and credit percentages that make it up.
In this chapter, we are now viewing the distance in the same dynamic and measurable way as the entities on the right-hand side—speed and time. We see the distance as something that independently exists without being subordinate to the other entities. Consequently, the distance is viewed more as a quantity that is definitely related to the variables—s and t—on the right-hand side in the case of motion, but not as something whose very existence depends on them.
The conversion formula, which connects the independently designed yet related Fahrenheit and Celsius temperature scales, provides an example of this point. Fahrenheit and Celsius are measures of temperature that can be related to each other through the formula F = (9/5)C + 32 (or equivalently C = (5/9)(F - 32)). Yet each of these scales stands on its own two feet, independent of the other. Thus, the formula highlights a relationship between two independent entities rather than demonstrating a subordinate dependence of one quantity upon the other.
In fact, in the example we gave, the distance and speed were assigned numbers while we treated the time as an unknown that was related to them. In this viewpoint, terms on both sides of the equation are in play. And now because the distance, too, is looked at as being interactive and up for maneuver, it is not uncommon to abbreviate it to a single letter.
This is more of a subtle practice in some educational contexts, however, and not a necessary or absolute requirement by any means. That is, in any of the three formulas in the previous list, it is certainly possible to put the terms on the left-hand side of the equation in play, but for many uses the standard practice is not to do so.
The interplay between using the functional notation f(x) versus the letter notation y also touches upon these subtleties. See Appendix 3 for a brief discussion.
THE SYMBOLS ARE DESCRIPTIVE ABBREVIATIONS
In previous chapters, the x’s and y’s (a’s and b’s) we predominantly used to represent variables (parameters) were standard characters that gave no clue on their own as to the quantities they were representing.
As mentioned in Chapter 1, there are advantages to using these generic representations, in that they give us a standard font in which to represent and compare unfamiliar phenomena. This familiarity of representation serves a valuable role both in educating the novice and in more easily assisting the uncloaking of unifying connections between diverse behaviors. The latter was copiously on display in the previous chapter.
However, in the distance formula, the symbols that are used do give clues as to the quantities that they represent—they are used in an interpretive sense. This type of representation becomes the predominant way of doing business in science, finance, and other real-world applications. Equations of this type are often called literal equations in algebra textbooks.
More generally, the terms literal equation and literal expression are used to describe equations or expressions that have parameters mixed in with variables or when multiple variables alone are involved, regardless of whether or not they involve descriptive abbreviations, such as E = mc^2, or generic ones, such as ax + by + cz. They roughly approximate the type of equations and expressions that we have termed “big algebra” in this book. And though we have so far only hinted at it in a few places, these literal expressions and equations can also be maneuvered for major conceptual and computational gain wherever they appear. Many people find great difficulty, unfortunately, with such expressions in elementary algebra, often viewing them as a confusing maze of letters.
PARAMETERS DISGUISED?
When we decide to employ descriptive symbols in expressions and equations, this can start to clash with some of the protocols that we have previously used for alphabetic symbols: in particular, the Descartes protocol for representing variables and parameters. For example, the right-hand side of the expression d = st has the same form as the multiplied terms on the right-hand side in the previous chapter (such as ax + by + cz), but in each of those three terms (ax, by, and cz), we always identified one letter as the variable and one as the parameter by assigning them a letter from a specific location in the alphabet.
For the case here (st), the letters are descriptive and were obtained by simply abbreviating the names of the phenomena they represent, meaning that their location in the alphabet is the result of happenstance. Consequently, the location of a letter can’t be counted on to tell us anything of algebraic significance here, thus the Descartes protocol may get completely blown out of the water for this type of situation.
In spite of all of this, it is still the speed, s, here that more naturally segments the phenomena into individual scenarios like a parameter does. That is, an individual river defines a given scenario, and for a particular location and date on the river, it is the speed that acts more like the constant, whereas the time spent on the river can vary. This allows us to think of the hundreds of rivers as each having a certain respective speed, at least in the locations of interest.
The s can then be used to algebraically tune the formula d = st to describe the scenario of floating down locations on a particular river. For example, given three rivers with current speeds of 3 mph, 2.8 mph, and 6.5 mph, we would tune s to 3, 2.8, or 6.5—yielding the three expressions d = 3t, d = 2.8t, and d = 6.5t, respectively.
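In code, tuning the parameter s amounts to fixing one argument of a two-argument rule and letting the other keep varying; the following small Python sketch (our own illustration, with invented names) builds one distance function per river.

```python
def make_distance_formula(speed_mph):
    """Fix the parameter s, returning d = s*t as a function of time alone."""
    return lambda t_hours: speed_mph * t_hours

rivers = {"River A": 3, "River B": 2.8, "River C": 6.5}   # current speeds in mph
formulas = {name: make_distance_formula(s) for name, s in rivers.items()}
print(formulas["River A"](5))   # 15 miles after 5 hours on the 3-mph river
```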
This becomes even more apparent when we consider other phenomena that have more well-defined speeds throughout the time of travel than lengthy rivers, where more often than not the volume of flow per unit time is a more useful measure than the speed of the current. For instance, a person traveling in a car at the constant speed of 50 mph on an open stretch of freeway would define a scenario given by d = 50t, while passengers in an airplane cruising at the constant speed of 450 mph would define yet another one described by d = 450t, and so on.
It is the distance and time of travel that more naturally can change within each of these scenarios, at least, as we have described them. So, though it wasn’t necessary for solving the equation involving the bat, we can still think of speed as the parameter, and time and distance as the regular variables. We can then classify various phenomena with this organization in mind if we’d like.1
If desired, we could indicate the parameter nature of the speed by partially returning to the Descartes protocol and rewriting the expression as d = ax, where d = distance, a = speed of the river, and x = time on the river.
In courses today, you may sometimes see these situations represented by the formula y = mx, where y = distance, m = speed, and x = time. Though not early in the alphabet, the m is still viewed as a parameter corresponding geometrically to what is called the slope of a line, whereas the x and y are both looked upon as being changeable within the scenario—often a graphical one—and thus as the regular variables.2
CONCEPTUAL FLIGHT
We have seen that motion at a constant speed in water, on land, and in the air can be described by the same types of expressions. The mechanisms that propel the objects in each of these three media are different: the river for the bat, internal combustion engine or electric motor for the car, and jet engine for the plane. But the differences in mechanism—and whether or not we even deeply understand them—is irrelevant to the algebra that describes them.
As long as we have the knowledge to symbolically describe how the location of the object changes with time and a few other key facts, such as the distance between the starting and ending positions in these cases, then we can use the formula d = st to solve for t and hence find the time of travel.
Curiously enough, on a more general level, this parallels in a way the situation in which the teacher is engaged when finding the number chosen by the student. For the classroom problem, there was no physical motion at all, only two instructions, yet those instructions did something mathematically similar to what the river and the car did. They connected two numbers via a mechanism: the number (x) chosen by the student—but unknown to the teacher—and the calculated number (125) shared with the teacher. If you change one, then the other will change to keep the relationship describing the mechanism valid and vice versa.
In a similar fashion, if the speed is known, then the motions in the river, the car, and the plane can be seen as connecting two numbers—the distance traveled and the time of travel. If one quantity changes, then so must the other to keep the relationship valid. This means that on a certain level these motions can be thought of in the same way as the teacher’s set of two instructions.
If we want to represent these connections in a way that lends itself to clarity and maneuver, then employing algebraic equations is a useful way to do it. The following table illustrates this:
Note that the person in the automobile is a new problem corresponding to a car traveling at a constant speed of 50 mph over a total distance of 200 miles.
It is common to capture the relationship between distance, speed, and time as the well-known formula already mentioned several times in this chapter: d = st. It is less common to represent as a formula the relationship between the unknown number chosen by the student and the calculated number, but we can do so if desired. The relationship is given by 15x + 20 = calculated number. This can be abbreviated to 15x + 20 = C (or equivalently C = 15x + 20) with the situation in the table corresponding to C = 125. If the calculated value obtained by the class had been 170 instead, then the teacher could have placed C = 170 into the equation to obtain 170 = 15x + 20. Solving this would reveal the number chosen by the student to be 10. The formula for C contains the legs to handle any such situation where the instructions are to multiply the unknown whole number by 15 and then to add 20.
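A hedged sketch of this “formula with legs” in Python (the function names are ours): one function plays the class carrying out the instructions, and the other plays the teacher reversing them.

```python
def class_calculation(x):
    """The two instructions given to the class: C = 15x + 20."""
    return 15 * x + 20

def teacher_recovers(C):
    """Reverse the steps: subtract 20, then divide by 15."""
    return (C - 20) // 15       # integer division, since x is a whole number

print(teacher_recovers(125))    # 7
print(teacher_recovers(170))    # 10
```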
If desired, we could interpret the bat’s travel in the river as obeying a set of algebraic instructions:
Steps of the Procedure | Algebraic Interpretation at Each Step
Step 1: Pick a number whose value is unknown | t
Step 2: River multiplies this number by 3 | 3t
Step 3: Distance value obtained is 15 | 15 = 3t
Step 4: Solve the equation | 5 = t
Step 5: Time of travel in the river is 5 hours
On the flip side, we could metaphorically think of the unknown number chosen by the student as being transported—via the teacher’s instructions—to the number 125.
Though we risk pushing the metaphors a bit too far here, these viewpoints can still, at times, be leveraged for conceptual gain. Let’s now see if we can use some of these ideas to build a framework in which to better understand an important physical phenomenon.
RADIOACTIVITY: REGULARITY OUT OF SPONTANEITY
Radioactivity was discovered in early 1896 by French physicist Antoine Henri Becquerel, who stumbled upon it while investigating another phenomenon called phosphorescence.
Phosphorescent substances are materials that absorb light of a certain energy and slowly re-emit it, usually as light of a different energy. Many items that glow in the dark exhibit this type of behavior, where, for example, absorbed white light gets re-emitted over time as a soft green light that is most readily seen in the dark.
Becquerel was investigating uranium for this type of behavior when he realized that it wasn’t necessary to first shine light on the element. Uranium that was kept in the dark for days—long after all phosphorescent effects would have ceased—was still emitting something, suggesting that the emissions were caused by some source internal to uranium itself and not due to some external agitation.
Very soon afterward, it was found that other elements possessed this property as well. This would turn into one of the most sensational and consequential discoveries in the history of science, for in time it led to the realization that the internal structures of the elements were much more sophisticated and energetic than originally thought and that they existed in different varieties called isotopes.
The newly discovered property was eventually christened with the name radioactivity by the renowned physicists Marie and Pierre Curie, who along with Becquerel shared the 1903 Nobel Prize in Physics for their groundbreaking discoveries.
Merriam-Webster defines radioactivity as “the property possessed by some elements (such as uranium) or isotopes (such as carbon-14) of spontaneously emitting energetic particles (such as electrons or alpha particles) by the disintegration of their atomic nuclei.”3 Depending on the nature of the isotope, these emissions, called radiation, can be quite complex. The three most common and naturally occurring types of radiation are alpha particles (nuclei of helium), beta particles (electrons), and a form of high-energy light known as gamma rays (gamma photons). See the next three images (not drawn to scale):
Uranium-238 atom changing identity (disintegrating) by emitting an alpha particle (helium nucleus) and turning into an isotope of the element thorium. [Artwork provided by William Hatch]
Carbon-14 atom changing identity (disintegrating) by emitting a beta particle (electron), anti-neutrino, and turning into an isotope of nitrogen. [Artwork provided by William Hatch]
Barium-137 emitting a high-energy light particle (an electrically neutral gamma photon) and moving to a lower energy state but retaining its atomic identity. [Artwork provided by William Hatch]
Behaviors such as these mean that materials containing these radioactive atoms will have the number of such atoms reduced over time, as the affected atomic nuclei undergo emissions that transform their atomic identity into another element. For radioactive atoms such as barium-137 that don’t change their identity, only their internal energy, the number of such atoms in a higher energy state will be the thing that decreases as a result of the emissions. In either case, the effects of these changes of state—changes in mass, weight, or energy—can be described numerically, meaning that the mass, weight, or energy of the radioactive substances varies over time, resulting in numerical symphonies.
A numerical ensemble for a hypothetical radioactive material (Imaginary Material-1 or IM-1) is given here:
Amount Present (Ounces) | Time (Years)
512 | 0
256 | 1
128 | 2
64 | 3
32 | 4
16 | 5
Numerical ensemble for decay of radioactive material (IM-1)
From an initial amount of 512 ounces, the amount of IM-1 decreased by half to 256 ounces in one year. In fact, notice that each year the amount of IM-1 present decreases or decays by half from the previous year. All radioactive isotopes decay in a similar fashion, which can be described by a characteristic number that tells, on average, how long it takes for a pure quantity of such substances to decrease by half. This length of time is called the half-life of the material and was one of several discoveries made during the earliest years of the twentieth century by physicist Ernest Rutherford and chemist Frederick Soddy. Both men were eventually awarded Nobel Prizes in Chemistry for this and other work on radioactive substances: Rutherford in 1908 and Soddy in 1921. (Rutherford had the singular good fortune to have his most important contribution to science come after already receiving a Nobel Prize—the identification of the atomic nucleus in 1911.)
Thus, IM-1 has a half-life of one year. Other isotopes of various elements have different half-lives, which can range from times far less than a billionth of a second to times far in excess of a billion years.
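For readers who like to see the halving spelled out, here is a small Python sketch (our own, using IM-1’s numbers) that regenerates the table’s ensemble from the half-life pattern:

```python
initial_amount = 512            # ounces of IM-1 at time 0
half_life_years = 1             # IM-1's half-life

for t in range(6):              # years 0 through 5
    amount = initial_amount * (1 / 2) ** (t / half_life_years)
    print(f"{amount:6.0f} ounces after {t} year(s)")
# 512, 256, 128, 64, 32, 16 -- matching the table
```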
REGULARITY OF THE RIVER IS LINEAR
Regularity implies predictability. For example, in the following table, the river through its 3-mph current is able to connect the time passed with the total distance the bat has traveled, in a way that is measurable and symbolically described. This means that we can use either the time passed to predict the distance traveled or vice versa. Analogously, radioactive decay through its half-life regularity is able to connect the time passed with the amount of IM-1 left, allowing us to use either time as a predictor of the amount left or, more pertinently, the amount of substance left as a predictor of the time passed.
Distance (Miles) | Time (Hours)
0 | 0 (8:00 AM)
3 | 1 (9:00 AM)
6 | 2 (10:00 AM)
9 | 3 (11:00 AM)
12 | 4 (12:00 PM)
15 | 5 (1:00 PM)
Numerical ensemble for river travel
As discussed in Chapter 8, tables and algebraic expressions can be thought of as two ways of capturing or representing variable behavior. In the case of the table for river travel, we already know the expression that captures the ensemble: d = 3t. You can check this by letting t = 1, 2, 3, 4, 5 hours and observing that the distances match in the table. The table represents snapshots at the five listed times of the general varying behavior.
Once we have the formula, we can, of course, more readily solve for values not listed in the table such as the distance traveled by the bat after 2.5 hours or 7.2 hours, if not fished out by the rafter. These respectively yield d = 3(2.5) = 7.5 miles and d = 3(7.2) = 21.6 miles.
RADIOACTIVE REGULARITY IS EXPONENTIAL
What is the formula describing the situation involving the radioactive material IM-1? Even if we don’t know it, we can still use the table to make estimates for the values we don’t have. For example, if we want to know how much of the material is left after 2.5 years—halfway between the shaded entries for 2 and 3 years—we could guess 96 ounces, the amount halfway between 128 and 64 ounces.
The latter two weights correspond to the amounts of radioactive material left after the snapshots taken at 2 and 3 years, respectively. Our guess here is incorrect, however, as the actual value left after 2.5 years would be very close to 90.5 ounces.
Nevertheless, the table still arms us much better than being without it by allowing us to do an approximate type of algebra—sometimes called linear interpolation—that yields bounded guesstimates that aren’t wildly outlandish.4
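A small Python check (our own) contrasts the midpoint guess with the actual value quoted here; the exact figure uses the decay formula that the next paragraph introduces.

```python
# Midpoint guess between the 2-year and 3-year amounts (128 and 64 ounces)
guess = (128 + 64) / 2                 # 96 ounces

# Actual amount after 2.5 years, continuing the halving pattern to fractional years
actual = 512 * (1 / 2) ** 2.5          # about 90.51 ounces

print(guess, round(actual, 1))         # 96.0 90.5
```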
The actual formula for describing the numerical ensemble generated by the decay of IM-1 in the table is given by amount of IM-1 = 512(1/2)^t. In the formula, “amount of IM-1” plays a role similar to the one distance did earlier in that it independently exists on its own, is measurable, and can be viewed as being simply related to the information on the right-hand side of the equation. In other words, the communication between both sides of the equation is two-way. We will abbreviate it to A, which gives A = 512(1/2)^t.
Let’s observe that this generates values in the table by considering t = 3 years. Replacing t by 3 in the formula yields A = 512(1/2)^3 = 512(1/8) = 64 ounces.
This matches the amount given in the table for 3 years. The other amounts can be similarly calculated.5
Now consider a question similar to those we asked about finding the time of travel of the bat. How long would it take for the initial 512-ounce radioactive sample to decay, decrease, or “travel” to a point in time where there are only 11 ounces of IM-1 left?
For this problem, it would mean that A = 11 and t would be unknown. The formula for IM-1 becomes 11 = 512(1/2)^t. Mathematically, we now want to solve this equation for t.
This equation is different from any that we have solved in this book so far because the unknown is in an exponent. Such equations where the variable is in the exponent are known as exponential equations.
The techniques developed in Chapter 3 will completely fail here. Those methods work if the equation involves 1/2 times t, which is written as (1/2)t. They do not work for an exponential equation, which here involves 1/2 raised to the t power, which is written as (1/2)^t. Remember that for t = 5, 1/2 times t becomes (1/2)(5) = 2.5, whereas (1/2)^t for t = 5 becomes (1/2)^5 = 1/32 = 0.03125. These values are not equal; in fact, 2.5 is 80 times larger than 0.03125.
Students generally learn the maneuvers for solving exponential equations in an intermediate or college algebra class (Algebra 2 or above in high school), and the use of logarithms is generally involved. The result of employing these maneuvers—not shown—to solve the equation is a t value of approximately 5.54 years.
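For the curious, the deferred maneuver can be sketched in a line of Python using logarithms; this is our illustration of the standard logarithmic solution, not a derivation developed in the text.

```python
import math

# Solve 11 = 512 * (1/2)**t for t, using logarithms
t = math.log(11 / 512) / math.log(1 / 2)
print(round(t, 2))   # approximately 5.54 years
```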
Because we will not attempt to show the steps involved in solving these types of equations, our conceptual flight continues in the ensuing discussions of this situation. The broad ideas for using exponential equations to discover unknown information—in this case, solving for t—are the same as for the equations we have solved in this book; they just differ in the details. Let’s see what points of contact we can still find from taking only an overview of this material.
BIG ALGEBRA FROM THE AIR
Though we won’t be discussing the techniques for solving exponential equations here, we can still apply some of what we have discussed throughout the book to gain insight. This is particularly true in trying to classify two of the major types of scenarios involving radioactive substances.
One involves the amount of material we initially start with—which can vary from scenario to scenario—and the second involves the type of radioactive substance used—which also can vary from scenario to scenario. Both of these can be characterized and tuned, respectively, by a specific parameter.
INITIAL AMOUNT
Let’s begin by considering the initial amount of a substance. For now, let’s stay with IM-1. Originally, we started with 512 ounces of material, which then decreased following a specific decay pattern. If we had started instead with 816 ounces of IM-1, then another pattern—with different amounts for a given year—would ensue:
Amount Present (Ounces) | Time (Years)
816 | 0
408 | 1
204 | 2
102 | 3
51 | 4
25.5 | 5
Second numerical ensemble for decay of radioactive material IM-1
Note that here the amounts also decrease to half the amount of the previous year.
The formula for the numerical symphony generated in this scenario is given by A = 816(1/2)^t.6 So, for a given scenario involving IM-1, the initial amount is a constant, but it can vary from scenario to scenario (512 ounces in Scenario 1 and 816 ounces in Scenario 2). This means that the initial amount of material present behaves like a parameter.
There are many ways in which to choose a letter to represent this parameter, but because it describes an amount too, it can be advantageous to be suggestive of this in our choice of symbol. However, we have to also be mindful to distinguish this from the A that we are already using in the formula.
A standard way to represent this initial amount is one that designates it as the value of the amount A when time equals 0 (at the start of the process). This could be done as “A at t = 0,” which we can abbreviate to A0. However, it is common practice to set the 0 as a subscript to further distinguish it (for instance, to not be confused with multiplication), which gives A₀.
Inserting this into our formula for IM-1 yields the big algebra formula for IM-1: A = A0(1/2)^t.
For the first IM-1 table, we would let A0 = 512, and for the second one, we would let A0 = 816.
If we started instead with an initial value of 27 ounces of IM-1, we would have A0 = 27, which when substituted in the formula gives A = 27(1/2)^t. Thus, the general formula has the legs to describe any initial amount scenario for IM-1. We simply set A0 equal to that initial value, and then the formula will take over predicting how it decays through time after that.
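A minimal Python rendering of this “big algebra” formula, with the initial amount as a tunable parameter (the function name is our own):

```python
def im1_amount(initial_amount, t_years):
    """Amount of IM-1 left after t years, starting from any initial amount."""
    return initial_amount * (1 / 2) ** t_years

print(im1_amount(512, 3))   # 64.0  (first IM-1 table)
print(im1_amount(816, 3))   # 102.0 (second IM-1 table)
print(im1_amount(27, 1))    # 13.5
```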
TYPE OF RADIOACTIVE MATERIAL
Let’s now consider a different radioactive isotope, IM-3, that has a half-life of three years. If we initially have 816 ounces of this isotope, then the decay pattern will look like the following:
Amount Present (Ounces) | Time (Years)
816 | 0
408 | 3
204 | 6
102 | 9
51 | 12
25.5 | 15
Numerical ensemble for decay of radioactive material IM-3
Note that the amount of IM-3 has decreased by half to 408 ounces in three years (not one year, as with IM-1).
The formula that worked for IM-1, A = 816(1/2)^t, will not accurately predict the slower decay pattern for IM-3, but something closely related will, namely A = 816(1/2)^(t/3).
Check that when t = 6 years, the latter formula will yield the value listed in the table: A = 816(1/2)^(6/3) = 816(1/2)^2 = 816(1/4) = 204 ounces.
You can check that this will work for all of the other times as well: t = 0, 3, 9, 12, 15,…7
This second type of scenario depends on which isotope we have and will change from one isotope to another according to the half-life of the isotope in question. This means that for a given isotope the half-life is constant, indicating that it is parameter-like in its behavior, too. To represent this parameter, we need to introduce a new letter. We will opt again in favor of being descriptive, designating h for half-life.
For an initial amount of 816 ounces, this will yield the formula A = 816(1/2)^(t/h). For h = 1, as in IM-1, we will have the formula A = 816(1/2)^(t/1) = 816(1/2)^t, whereas for h = 3, as in IM-3, we will have A = 816(1/2)^(t/3). For a third isotope, IM-1205, with a half-life of 1205 years, we would have A = 816(1/2)^(t/1205) (as h = 1205). Thus, after 1205 years have passed (t = 1205), this formula would predict A = 816(1/2)^(1205/1205) = 816(1/2) = 408 ounces (which is half of 816 ounces after 1205 years).
Combining this information with that on the initial amount in the previous subsection leads to an even more general formula that has the reach to describe any initial amount (A0) of a radioactive material with any half-life (h): A = A0(1/2)^(t/h).
For 320 ounces (A0 = 320) of the real and dangerous isotope strontium-90, which has a half-life of approximately 28.8 years (h = 28.8), this more general formula would become A = 320(1/2)^(t/28.8). For 1237 ounces of the most common isotope of uranium, uranium-238, which has a half-life of approximately 4.5 billion years (h = 4,500,000,000), the formula would become A = 1237(1/2)^(t/4,500,000,000).
And so on.
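The two-parameter version translates directly into a short Python sketch (the function name is ours), which reproduces the numbers mentioned above:

```python
def amount_left(initial_amount, half_life, t):
    """General decay formula A = A0 * (1/2)**(t / h)."""
    return initial_amount * (1 / 2) ** (t / half_life)

print(amount_left(816, 3, 6))                    # 204.0 ounces of IM-3 after 6 years
print(round(amount_left(320, 28.8, 28.8), 1))    # 160.0 ounces of strontium-90 after one half-life
print(round(amount_left(1237, 4.5e9, 4.5e9), 1)) # 618.5 ounces of uranium-238 after 4.5 billion years
```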
RADIOCARBON DATING
Because the regularity of radioactive decay connects the amount of material left to the time passed—as discussed earlier—the mechanism can serve as a trailblazing approach to measuring the passage of time. This clocklike regularity of radioactive materials is based on the steady predictable diminishing of the amount of radioactive substance present via half-life periods. This diminishing can be measured in a number of ways, including diminishing mass, weight, or energy of the radioactive material over time (illustrated in the previous two sections), diminishing intensity of radioactive emissions over time, or diminishing ratio of radioactive material to nonradioactive material present over time.
If it helps, we can think of radioactive behavior as possessing a counting type of regularity that eventually zeros out as the number of half-life cycles (HLC) for the material increases: 1 HLC (50% left), 2 HLC (25% left), 3 HLC (12.5% left), 4 HLC (6.25% left), and so on.
In the 1940s and 1950s, powerful techniques that took advantage of these regularities were introduced and developed by chemist Willard Libby and his coworkers. He received the 1960 Nobel Prize in Chemistry for these efforts.
To see how it works, let’s consider a box that contains 92 ounces of the nonradioactive stable isotope IM (which doesn’t decay) plus 8 ounces of the radioactive isotope IM-5730 (which decays by emitting an electron and becoming a new element) with a half-life of 5730 years. These two isotopes combine for a total of 100 ounces of substance in the box initially.
For the sake of simplicity, let’s assume that after an atom of IM-5730 undergoes radioactive decay, the new element(s) produced along with anything emitted in the process are immediately pumped out of the box. Furthermore, we assume that for every IM-5730 atom that disappears via this radioactive decay, we can immediately replace it by pumping in a new one. This means that the ratios in the box stay the same—or are in equilibrium—as long as the pump stays on: namely, 92 ounces of the material in the box is stable IM and 8 ounces is radioactive IM-5730.
However, if we permanently turn the input pump off, then the amount of the radioactive IM-5730 will decrease over time, and as a consequence so will the corresponding percentage of IM-5730 versus the original 8 ounces in the box. We illustrate this in the following table:
Amount Left (Ounces) | Percentage of Initial Amount Left | Time after Pump Stops (Years) | Half-Life Cycles
8 | 100% | 0 | 0
4 | 50% | 5730 | 1
2 | 25% | 11,460 | 2
1 | 12.5% | 17,190 | 3
0.5 | 6.25% | 22,920 | 4
0.25 | 3.125% | 28,650 | 5
Numerical ensemble for amount of IM-5730 left after input pump stops
Note that the initial amount of IM-5730 decreased by half to 4 ounces after 5730 years; the initial amount decreased by one-fourth to 2 ounces in (5730)(2) = 11,460 years.
If someone came upon such a box that had been around for a long time and found that the amount of IM-5730 left was only 12.5% of the normal amount expected, then they could deduce—through knowledge of how radioactive mechanisms work—that the input pump turned off 3 HLC ago, or 17,190 years before. Or, if we invoke the metaphor, it would take 17,190 years for the initial amount of 8 ounces to be transported—via radioactive mechanisms—to the current amount of 1 ounce.
If someone found another such box with 1.64 ounces left (or 20.5% of the normal amount) and wanted to figure out when the input pump for it stopped, they could tune the parameters in the half-life equation, by setting A0 = 8 and h = 5730, which would yield A = 8(1/2)^(t/5730).
For the specific situation of 1.64 ounces left, we would substitute in A = 1.64, which gives 1.64 = 8(1/2)^(t/5730).
Then we would have to solve this exponential equation for t. As we are not solving exponential equations in this book, we will simply give the correct value of t, which here would be approximately 13,100 years. This would mean that the pump turned off around 13,100 years prior to the discovery of the box.
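The deferred computation can be sketched in Python using a logarithm (our illustration, not a method developed in the text):

```python
import math

A0, A, half_life = 8, 1.64, 5730
t = half_life * math.log(A0 / A) / math.log(2)   # years since the "pump" turned off
print(round(t))   # about 13,100 years
```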
As a check, you can see that 1.64 ounces (20.5%) lies between the shaded rows for 1 ounce (12.5%) and 2 ounces (25%) in the table for IM-5730, meaning that 13,100 years should fall between the predicted times for those two values in the table.
This is in broad scope how the technique, developed by Libby and his associates, called radiocarbon dating works. The human body consists of approximately 18% carbon, and this carbon comes in three varieties: two stable isotopes (carbon-12 and carbon-13) and one radioactive isotope (carbon-14, with a half-life of about 5730 years).8 The ratio of carbon-14 to the stable carbon is extremely small but still measurable. As long as a body is alive, it replenishes through life processes the carbon-14 that decays away, keeping the ratio the same. However, when the body dies, the carbon-14 is no longer replenished—the input pump permanently turns off—and the ratio changes according to a half-life of 5730 years.
As the table for IM-5730 shows, the amount of material eventually becomes so small that there is a limit to what can be detected experimentally. For carbon-14, the detectable amounts are evidently good up to about 50,000 years, which corresponds to slightly less than 1% of the amount of carbon-14 present when the body was alive.9
Other techniques, such as mass spectrometry, can be used for dating organic materials beyond the limit of the method just described. It is also possible to use still other isotopes to date some nonorganic objects such as rocks. The procedures differ according to the isotope used, but the general idea of measuring the percentage of radioactive material left versus an initial amount of material is the same. The method of using radioactive materials as time markers is called radioisotope or radiometric dating. It goes without saying that it is a method whose value to science and human understanding is monumental.
TABULAR ALGEBRA
The examples in the previous sections show that it is possible to use tables to make not-too-outlandish estimates of unknown information. We did this as a commonsense verification of the 13,100-year value for the time it would take 8 ounces of IM-5730 to decay to 1.64 ounces. However, there is nothing to prevent us from using tables on their own, and systematically, as another way to gain insight about unknown numerical information from an array of known values. Such an approach would have the advantage that it doesn’t require intimate knowledge of how to algebraically solve exponential equations to get an approximation.
We can think of techniques used with tables this way as being a type of “tabular algebra,” analogous in spirit to the way we take an ensemble of numbers generated by an algebraic expression (say 6t – 30) and find out through algebraic maneuvers what value of t makes 6t – 30 become, say, 180 (by solving 6t – 30 = 180): in other words, systematically finding or approximating unknown values from known ones.
The techniques for using algebraic expressions give an exact solution to the algebraic equations discussed so far, whereas the techniques involving tables generally only give approximate solutions for exponential equations. A way to make the tabular technique yield even better estimates is to increase the resolution of the table by filling in more of the gaps, expanding the original table into a larger, more detailed one. For example, in the case of IM-5730, someone with the right expertise could calculate the time it should take the 8 ounces to decay to amounts that additionally include the in-between values for each of the six previous table entries (8, 4, 2, 1, 0.5, and 0.25 ounces). Some in-between values in this case would be 6, 3, 1.5, 0.75, 0.375, and 0.125 ounces. Doing this will yield new information in the shaded cells:
Amount Left (Ounces) | Percentage of Initial Amount Left | Time after Pump Stops (Years)
8 | 100% | 0
6 | 75% | 2378.2
4 | 50% | 5730
3 | 37.5% | 8108.2
2 | 25% | 11,460
1.5 | 18.75% | 13,838.2
1 | 12.5% | 17,190
0.75 | 9.375% | 19,568.2
0.5 | 6.25% | 22,920
0.375 | 4.6875% | 25,298.2
0.25 | 3.125% | 28,650
0.125 | 1.5625% | 34,380
Expanded numerical ensemble for amount of IM-5730 left after input pump stops
Interpolation using this more detailed table gives an estimate of 12,649 years for the time it would take 8 ounces of IM-5730 to decay to 1.64 ounces. This is closer to the true value of 13,100 years than is the estimate obtained from using the earlier, less detailed table—with only six entries—which interpolates to 14,325 years (see endnote for a calculation of both estimates).10
If desired, more values can be included to build even larger tables that will yield even better approximations. If enough values are filled in, then these higher-resolution tables can ultimately, for practical uses, give almost as accurate a result as the formulas themselves.
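One way to generate such higher-resolution rows is to invert the decay formula for the time at each desired amount; the following Python sketch (ours) reproduces the expanded table’s values.

```python
import math

A0, half_life = 8, 5730
amounts = [8, 6, 4, 3, 2, 1.5, 1, 0.75, 0.5, 0.375, 0.25, 0.125]

for A in amounts:
    t = half_life * math.log(A0 / A) / math.log(2)   # time for A0 to decay to A
    print(f"{A:>6} oz  {100 * A / A0:>8.4f}%  {t:>9.1f} years")
# e.g., 6 oz -> 2378.2 years, 1.5 oz -> 13838.2 years, 0.125 oz -> 34380.0 years
```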
Increasing the resolution of our tables by adding more rows of information is quite analogous to how adding more pixels—or data values—to a digital photograph increases the resolution of the image—improving its ability to better replicate the subject.
Remember that the computed values here all come from the formula A = 8(1/2)^(t/5730). Moreover, it is the percentages that are really the key, as these will hold regardless of the initial amount; that is, if we instead initially had 500 ounces of IM-5730, it would still take nearly 23,000 years for this new amount to decay to 6.25% of the amount we started with (31.25 ounces of IM-5730 in this case).
Using tables to obtain the information is probably the way that most people would deal with this particular situation (much as they use tax rate tables to compute their income tax), but it is still helpful to know that algebraic expressions serve as a powerful force underwriting the technique. For those who understand these expressions even a little better, especially regarding scenario variables or parameters, this knowledge may give them the ability to examine more critically the situations presented to them.
This is especially important for decision makers who must deal with information from tables that describe social situations, demographics, economics, and so forth, where the laws are not so well defined and more assumptions—including the very formulas themselves in many cases—have to be made.
OTHER RENDITIONS OF EXPONENTIAL SONGS
The essence of the way in which radioactive materials regularly decrease in amount is exhibited in a wide assortment of phenomena. Just as “The Star-Spangled Banner” connects its different renditions—varying in tone, quality, and pace—through its lyrics, so too again does algebra allow us to connect these widely different phenomena through the agency of its expressions, of exponential type in this case.
Before diving in, let’s remember that in its simplest forms—using whole numbers—an exponent means repeated multiplication. For instance, a shorthand way to write the multiplication 2 × 2 × 2 × 2 × 2 × 2 is through the notation 2^6. Here, the 2 in the base means the number that we are multiplying, and the 6 in the exponent tells us how many 2s are involved in the multiplication.
Going back to the half-life of IM-1, we have the situation that each amount is one-half of the amount for the preceding period (the preceding year in this case). This leads to the repeated multiplication of 1/2, which results in powers of 1/2, as demonstrated in the following table.
How the formula A = A0(1/2)^t is obtained for IM-1
Though the 1/2 is crucial to the way that radioactive materials decay, it is one of a myriad of possibilities in terms of the mathematics alone. If we have a situation such that each amount is triple the amount from the year before, then the 1/2 simply gets replaced by 3 and we will get the general formula for t years as A = (3)^t A0 or equivalently A = A0(3)^t.
Such situations can occur in the way money grows in certain interest-bearing accounts. For instance, if one invests in an account that earns 2% interest compounded annually, then this means that every year, 2% interest is applied to the amount from the previous year. Thus, the total amount in the account for, say, year 1 is the amount from year 0 plus 2% of the amount from year 0. In symbols, this becomes A1 = A0 + 0.02A0, which simplifies to A1 = 1.02A0 (think A0 + 4A0 simplifying to 5A0). Reasoning similarly will give A2 = 1.02A1, A3 = 1.02A2, and so on.
Here, the multiplier every year is 1.02, which means that in the formula from the table, we can simply replace 1/2 by 1.02 to get the formula for how much would be in the bank account after t years as A = (1.02)^t A0 or equivalently A = A0(1.02)^t.
If the interest rate instead were 4.5%, we would now have a multiplier each year of 1.045: for this case, A1 = A0 + 0.045A0 simplifies to 1.045A0, A2 = A1 + 0.045A1 simplifies to 1.045A1, and so on. The formula for how much would be in the bank account after t years would now be A = (1.045)^t A0 or equivalently A = A0(1.045)^t.
These two examples show that we have the interest rate varying from scenario to scenario but constant within a specific scenario, meaning that it behaves as a parameter. Introducing a scenario variable for it and opting for being descriptive again, we will call it r. We will assume that r is given in decimal form. Doing so means that we have the general formula A = (1 + r)^t A0 or equivalently A = A0(1 + r)^t. This general formula now tells us the amount we will have after t years if we invest an initial amount of A0 dollars at an interest rate of r (given as a decimal). For the two situations described, we have r = 0.02 and r = 0.045, respectively.
Consider the scenario where we invest $30,000 into an account that earns 1% interest compounded annually, and we want to know how much we will have after 10 years. We simply use the general formula with the following scenario settings: A0 = 30,000 and r = 0.01. This would give us the specific formula for this scenario as A = 30000(1 + 0.01)^t. The specific time that we are interested in for this scenario is 10 years. Substituting t = 10 into this formula will give us A = 30000(1 + 0.01)^10 = 30000(1.01)^10 = $33,138.66.
Thus, the money would have grown by $3,138.66 over the 10-year period. We could use the expression for this scenario to find out values of the investment for other times as well, such as t = 15 years or t = 20 years.
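As a check on the arithmetic, and to cover the other times mentioned, here is a small Python sketch of the compound-interest formula; the function name is our own.

```python
def account_value(principal, rate, years):
    """A = A0 * (1 + r)**t for annual compounding."""
    return principal * (1 + rate) ** years

for years in (10, 15, 20):
    print(years, round(account_value(30000, 0.01, years), 2))
# 10 -> 33138.66 (as in the text), 15 -> 34829.07, 20 -> 36605.7
```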
Many other phenomena—population growth for a time, initial microorganism growth, and early spread of a disease or a rumor, for example—may grow or decrease in similarly regular patterns. And for these, we can also use exponential expressions to predict their behavior and then put them in tables if need be to aid in the sharing of this information.
A MOST REMARKABLE PROPERTY OF EXPONENTIAL BEHAVIOR
More times than not, when you find the general exponential formula describing radioactive decay—or many other types of exponential decay or growth—it will not be given in the form that we have elaborated on in this chapter, but rather in the form of something like A = A0e^(kt). The base of this exponent is the number e, which is given by 2.718281828459045…
The number e is an irrational number (like π or √2) whose decimal expansion goes on forever without a repeating pattern of digits. This means that the amount could be approximated by rounding e off to 2.71828, making the expression become A ≈ A0(2.71828)^(kt). The symbol ≈ stands for “approximately equal to.” However, most calculators nowadays have a key for giving better approximations of the number e as well as the exponential expression e^(kt), which means that we can use these keys as opposed to writing out the less accurate (2.71828)^(kt).
This, of course, differs from the earlier formula A = A0(1/2)^(t/h).
Though the formula with a base of e is certainly less natural in the conceptual sense, it is worth noting that there are still two parameters present, this time A0 and k. The A0 is the same as in our earlier discussions, but the k—though still related to the half-life of the radioactive isotope—is not the same as the half-life of the isotope h.
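To make the relationship concrete (a standard identity, though not derived in the text), rewriting the half-life form with base e shows how k is tied to h:

```latex
A = A_0\left(\tfrac{1}{2}\right)^{t/h}
  = A_0\, e^{\ln\left(\tfrac{1}{2}\right)\, t/h}
  = A_0\, e^{kt},
\qquad \text{where } k = \frac{\ln\left(\tfrac{1}{2}\right)}{h} = -\frac{\ln 2}{h}.
```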
Why do mathematicians and scientists choose to use this strange-looking number e as the base of exponential expressions?
This occurs in part due to a few important factors. Firstly, it is possible to represent exponential formulas of any base by one exponential formula of a single base. And secondly, exponential formulas that involve the number e as a base more naturally capture key components related to phenomena that exhibit a continuous type of growth or decay (where they occur all of the time) versus exponents to other bases. Exponential formulas to base e also have extremely nice properties with respect to the primary operations in calculus—differentiation and integration—and all that follows from this. See Appendix 4 for a brief discussion on how an exponent of one base can mimic exponents with different bases.
CONCLUSION
We have covered a diverse trail of algebraic and conceptual terrain in this chapter. Owing to the big gaps in the particulars—regarding some of the maneuvers employed—the approach taken here has been likened to taking a flight over those topics.
Has anything been gained from doing this? More specifically, has one of the book’s goals of providing fertile soil for the reader—to better coordinate, better appreciate, and possibly better manipulate—numerically varying phenomena been achieved?
As always, only readers can answer these deeply personal questions for themselves. In this conclusion, we will discuss only a few of the possible takeaways.
We started by focusing on the idea of mechanisms and how we can use our understanding of their effects, even if we don’t fully understand their inner workings, to acquire new information. This in a sense then became a metaphor for the entire chapter, where the fundamental idea was to take numerically varying phenomena as encountered—and whose inner workings may not have been completely understood in some cases—and still be able to think about or organize them in some productive way: a way that hopefully aided in better understanding the phenomena conceptually. This involved getting down to the nitty-gritty details in some cases while only understanding the gist of the phenomena in others.
We now summarize a couple of the perspectives that were enlisted to aid in this endeavor.
THE MECHANISM TRANSPORT VIEWPOINT
This framework involved viewing an unknown number as being “transported” or “related” via some type of procedure or mechanism to another known number of interest. The aim was to find the unknown number, which in the cases here meant modeling the situation by an algebraic equation and then solving it.
We started out with a purely algebraic mechanism—the teacher’s instructions—but saw afterward that other physical phenomena can be productively thought of in this way, too. In those cases, the mechanism itself wasn’t a set of algebraic instructions but was of a nature that it could still be modeled by such instructions—presented as algebraic expressions—thus opening up their analysis to the many techniques available on the mathematical grid.
Going the reverse way, we discussed the prospect of using a particular physical situation as a metaphor to productively think about other physical situations and the algebra itself. For this, we chose the foundational situation to be an object traveling or being transported via the medium of a tangible river from an initial or starting point to a final point.
All of this allowed for two-way connections to be made between various types of phenomena, using one to assist in understanding and working with the other and vice versa. This transport viewpoint is particularly well suited to phenomena that continually decrease or increase in time—or in relation to some other variable—such as the amount of radioactive material present in time or the distance traveled in time, respectively.
The process of leveraging algebra—and math in general—to discover unknown information, though predating the European Renaissance by centuries, gained more expansive tools in the 1500s and 1600s and was looked upon afresh as a powerful new type of reasoning: a brand that allowed for the systematic and unassailable discovery of hidden knowledge. It was a type of reasoning that, when joined together with the emerging experimental sciences, became so seductive and commanding that it took the seventeenth- and eighteenth-century world by storm, ultimately leading to the almost total downfall of other forms of reasoning about the physical world.
This included certain aspects of scholasticism—the dominant form of scholarly logical thinking in Europe throughout the late medieval period—a form that ultimately became the favorite target of criticism by later, more investigation-inclined natural philosophers and scientists.11
THE PARAMETER VIEWPOINT
Parameters are certainly one of the more commonly under-recognized and underappreciated tools in the algebraic arsenal. Throughout this text, we have shined a spotlight on them to illustrate how useful they can be for better classifying, understanding, and working in an all-encompassing way with numerically varying phenomena.
Many people already use the essence of this idea as an organizational tool in some familiar settings. Consider the way that some of us deal with television programming, which changes throughout the day. Rather than attempting to understand head on the entire totality of all programs showing in a given day, many try to organize them in more convenient and accessible ways. Two of the most natural involve looking at the program scheduling by time of day or by particular station.
That is, we may want to know what programs are showing during a particular time of day. In this case, we fix the time and allow the channels to vary, treating each hour or half-hour as a distinct scenario to be considered. In other cases, we may want to know what programs are showing on a particular channel throughout the day. In the latter, we fix the channel and allow the time to vary, treating each channel as a distinct scenario.
Either of these are useful vantage points by which to partition television programming into digestible and useful chunks without trying to encompass the totality of all that is showing on every channel at every time for the entire day. Other organizing possibilities, of course, include searching programs by fixing categories such as sports, history, news, and documentaries.
Organizing locations on Earth according to longitude and latitude can present another example of acquiring useful information by fixing one item while allowing another to vary. If we fix the latitude and allow the longitude (east or west location) to vary, we get imaginary circles that geographers call parallels (circles parallel to the equator). Each parallel describes the set of all the locations on Earth that are at the same north or south position of the equator (0° latitude). If instead we fix the longitude and allow the latitude (north or south location) to vary, we get imaginary half-circles that geographers call meridians. Each meridian describes the set of all locations that are at the same east or west location on Earth with respect to their degree separation from the prime meridian (0° longitude).
Using parameters in algebra gives us the ability to intensify such classification schemes by making them more precise and operational. Once a set of parameters is identified, we can then inject them into algebraic expressions, equations, and tables to fix a particular scenario and then have this result interact productively with other types of variation within the scenario.
In the case of radioactive phenomena, the two classification settings that we looked at were the initial amount of a radioactive substance present and the half-life or rate of decay of a particular substance. By using parameters to describe them, we were able to incorporate these classifications into a single algebraic expression that had the legs to handle, in principle, the decay of any radioactive substance occurring in any initial amount. Here, we tuned both of those settings for a particular scenario and then viewed the time and the changing amount of the substance as the regular variables within each scenario.
Though we didn’t discuss at any point how to actually solve the exponential equations involved in the description, this parameter description still allowed for a useful glimpse at the extremely complex phenomena of radioactivity in some of its most important and useful manifestations. That is, once the parameters are identified, we can still use them as classification tools even if we can’t take complete advantage of all of the other mathematical tools available.
Unfortunately, though the idea of parameters permeates mathematics and science, we have already seen that their representation and treatment is anything but consistent in algebra. We discussed the Descartes protocol in Chapter 8, but as further discussion revealed, this protocol is often not followed outside of educational contexts due to the need to satisfy other requirements—and is not even always consistently followed within educational contexts.
Even what counts as a parameter is inconsistently applied. In some cases, there is a somewhat natural separation between what acts as a parameter and what acts as a regular variable. This is true in the case of radioactivity where the half-life and initial amount present naturally act as constants in a given scenario, whereas the time of decay and the amount of material remaining act as the natural variables. This is also the case in the distance, speed, and time situations we discussed, where the speed of travel was the natural parameter and the distance and time naturally acted as regular variables in a given scenario. Similarly, with break-even issues, the selling price, unit costs, and fixed costs act as natural parameters (constant in a given scenario), and the quantity of items sold acts naturally as the regular variable (varies within a scenario).
However, in other situations there may be no such obvious separation between what is a parameter and what is a regular variable. But this doesn’t prevent us from still being able to impose a separation to aid in organizing the material. For instance, in the television programming situation, both station and time are in a sense equivalent—in that changing either usually yields a different program on the TV set—yet choosing to fix one or the other still can turn out to be very useful. This is also true for the longitude-latitude situation.
Irregularity in usage is something that occurs and causes confusion in many contexts. For instance, irregularity in the spelling and use of English words can often make learning to write in the language difficult. In a similar fashion, the irregularity in the alphabetic representation of parameters can make learning the conceptually rich and sometimes difficult subject of algebra even more difficult by presenting students with an alphabet soup of characters that are easy to get confused by.
The hope here is that, by continually acknowledging their existence and incorporating them into expressions and equations, parameters will perhaps seem a little less forbidding. Here is a list of some of the situations that we have discussed (or will discuss) in this book involving parameters:
Algebraic Object | Parameters: Scenario Variables (Constant within a Scenario but Change from Scenario to Scenario) | Regular Variables (Change within a Scenario) | Chapter
Px = Cx + F | P = price per item, C = cost per item, F = fixed costs | x = number of items sold | 4
100x + (2013 + z) – y | z = years past 2013 | x = number of days a week like to eat out, y = birth year | 2
ax^2 + bx + c = 0 | a = coefficient value of square variation, b and c = values of the other coefficients | x = variable or unknown number | Appendix 1
Course Average = ax + by + cz | a = homework contribution to final grade, … | x = homework average, … | 8
GPA = a1x1 + a2x2 + a3x3 + a4x4 + a5x5 | a1 = point value for A grade, … | x1 = % total credits at A, … | 8
A = A0(1/2)^(t/h) | A0 = initial amount of material, h = half-life of material | t = time, A = amount of material remaining | 9
d = st | s = speed of object | d = distance of travel, t = time of travel | 9
x^n + y^n = z^n | n = positive integer power | x = first unknown, y = second unknown, z = third unknown | 10
The idea of taking varying phenomena and thinking about them globally by slicing them up into “variation by scenario” pieces and “variation within a scenario” pieces is something that one should be aware of and on the lookout for in situations amenable to mathematical treatment.
This also includes circumstances when variation is displayed to us in tabular form. In many cases, where multiple tables or spreadsheets are used, each individual table or sheet may itself correspond to a fixed set of parameters—sometimes unrecognized as such—whereas variations within the table or sheet may correspond to the regular variables. Each of the tables we discussed on radioactivity can be so specified by fixing the parameters A0 and h, with the time t varying within each sheet. Though these could all be generated from the formula A = A0(1/2)^(t/h) with appropriate settings for A0 and h, parameters for tables can still exist even when a formula for the behavior isn’t known.
In the next chapter, we will take a look at algebra from a slightly different vantage point by examining some of the benefits that may accrue to mathematicians and others from initiating investigations out of sheer curiosity and prompt of imagination. The discussion will touch upon mathematical goings-on over a span of nearly 4000 years, briefly visiting ancient Mesopotamia, ancient Greece, Roman times, and the early scientific era, as well as the twentieth and twenty-first centuries.