Four Ages of Technological Innovations
Hello everyone and welcome back. This article will be the first in a new series; in this series, we’ll mostly be talking about our lives on this planet and, specifically, how AI and robots will transform the way we live. But, more broadly, we’ll be discussing how technology, in general, will transform human civilization. The list of technologies that will transform our civilization consists mainly of AI, robots, and the Internet of Things (or just IoT for short), but it also includes programmable matter, 3D printing, superconductors, virtual and augmented reality, and no doubt other things that I’m missing; still, that list pretty much sums up most of what we’ll be talking about. I thought that a good way to begin this series would be by discussing technological revolutions in general and giving a bit of an overview of some of the aforementioned technologies. Humans have been using technology since we first evolved; in fact, there is even fossil and archaeological evidence that the distant, non-human ancestors from whom we evolved used basic tools about 2 million years ago.
Out of all of the technological revolutions in our species’ history, it seems that the biggest were the harnessing of fire, the invention of language, the invention of agriculture, the previous industrial revolutions, and the current industrial revolution, which is being shaped primarily by AI, robotics, and IoT technology. The time periods characterized by the developments of fire, language, agriculture, and AI/robotics have been called the first, second, third, and fourth ages by the author Byron Reese, who, along with many others, considers these four developments to be the biggest. I, personally, am still on the fence as to whether the first three technologies in that list (which demarcate the first, second, and third ages) are bigger than the two previous industrial revolutions; admittedly, this indecision could be due to my own ignorance. But what I will concede is that fire, language, and agriculture changed our own biology in a way that few other technologies have; that is to say, the first, second, and third ages were biological revolutions in that they upgraded how our own biology works. Let me give an example to make this hopefully a little less confusing. The harnessing of fire (which occurred 300,000 to 1 million years ago) allowed us to cook our meat. This had numerous benefits, but the biggest are that it made us healthier and increased our caloric intake, and that increase in caloric intake allowed our brains to eventually triple in size to roughly the size they are today. Thus, the biggest transformation that occurred in the first age was essentially in our own biology. The second age, brought about by the invention of language about 100,000 years ago, was also characterized primarily by its changes in our biology: language allowed us to form new synaptic connections within our brains which could not exist without it.
Aging: Accumulation of Damage in the Body
The first two industrial revolutions dramatically increased the human lifespan. Now, that might not sound like a change in our biology, but let me explain why that misconception is wrong and is due, essentially, to a misunderstanding of what aging actually is. Aging is, by definition, the accumulation of damage within our bodies; thus, a stark increase in the human lifespan would be due to our bodies accumulating less damage through metabolism. Thus, those technological revolutions can also be viewed as biological revolutions. But this brings us to the next phase in technological and biological revolutions. If we were to rank these technological revolutions based on how much they improved our biological hardware, then the fourth age (the technological revolution which will occur in our time, in the 21st century) would top them all by a long shot. In our time, we actually have a fairly complete understanding of what aging is and what its causes are. The biogerontologist Dr. Aubrey de Grey has argued that aging is due, from what is known today (though not necessarily in the future), entirely to the following biological processes and the damage that they entail: mutations in mitochondrial and nuclear DNA, the accumulation of waste within cells as well as outside of them, and, lastly, problems with the molecules that are responsible for connecting cells.
Moore’s Law
Moore’s law states that the number of transistors on a computer chip doubles roughly every two years (it is sometimes quoted as every 18 months), and it is this law which is largely responsible for the past and ongoing trends in the miniaturization of electronics. Now, this trend in miniaturization and computing power actually suggests that, in the future (assuming Moore’s law continues for a few more decades), we could use tiny nanobots (along with biotechnology) to substantially repair the damage caused by aging. Many diseases, like cancer and Alzheimer’s, for example, are fundamentally caused by certain amounts of specific kinds of damage to the body. And those specific kinds of damage are precisely the sort that accumulate through aging (defined as just the accumulation of damage within our bodies). Thus, aging and many such diseases are coupled; they are inseparable. We cannot talk about ending cancer or Alzheimer’s without ending aging; you cannot get rid of one without also getting rid of the other.
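To make the arithmetic of that doubling trend concrete, here is a minimal sketch in Python (my own illustration; the starting transistor count and the doubling period are assumed round numbers, not figures from this article):

```python
# A minimal sketch of Moore's-law-style extrapolation.
# Assumptions (not from the article): a starting chip with 10 billion
# transistors and a doubling period of two years.

def transistors_after(years: float,
                      start_count: float = 1e10,
                      doubling_period_years: float = 2.0) -> float:
    """Projected transistor count after `years` of steady doubling."""
    return start_count * 2 ** (years / doubling_period_years)

for years in (10, 20, 30):
    print(f"After {years} years: roughly {transistors_after(years):.1e} transistors per chip")
```

The point of the exercise is just that steady doubling compounds very quickly: three decades of it multiplies the starting figure by more than thirty thousand.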
Nanobots, Cybernetics, & Aging
Moore’s law, which basically defined the information age of the past 40 years, will also largely characterize the fourth age (also sometimes called the Third Industrial Revolution). As Ray Kurzweil once said, if this trend continues for a few more decades, then a few decades from now the miniaturization of electronics will have progressed to the point where it becomes feasible to build tiny, blood-cell-sized nanobots that can circulate through the human bloodstream and work on healing some of the damage caused by aging.
Aging is the result of microscopic damage accumulated at the cellular and molecular level; but there are also macroscopic forms of bodily damage which can be repaired using the technology of the fourth age, the age of AI, robots, and the Internet of Things. Take, for example, amputees (people with missing limbs) or people who are paralyzed. If you have a missing arm or leg, then we can attach a prosthetic robotic limb to your body which can be controlled using just your thoughts, kind of like the real thing. During the 21st century, we’ll see dramatic improvements in the area of prosthetics; the prosthetic limbs of the future will simulate the senses of touch, pressure, temperature, texture, and pain to either the same or very nearly the same degree of sensitivity as their biological counterparts. And, even just using today’s exoskeletons, we can restore mobility to patients who are paralyzed from the waist down. For example, researchers have used an exoskeleton to restore mobility to a monkey that was paralyzed below the waist.
I think that now might be a good time to make a distinction between improving our biology and augmenting our biology with robotic components. The nanobots that destroy cancer cells in your body, that destroy free radicals and assist lysosomes and immune cells in cleaning up the “junk” in our bodies, would indeed be examples of radical changes and improvements in our biology, akin to the previous three ages. But a robotic limb, an android body, or an exoskeleton isn’t really biology; rather, these are replacements of biology. As Ray Kurzweil once said, we’ll gradually replace the biological components of our bodies with superior robotic counterparts; for example, we could potentially one day replace red blood cells with respirocytes (hypothetical nanorobotic red blood cells). That replacement would allow us to perform an Olympic sprint for 10 minutes straight or sit on the bottom of a pool for 4 hours; again, the robotic components that we replace our biological counterparts with would be improvements on, and superior to, those biological counterparts.
Life 3.0
What do we call a species that is capable of improving its own hardware? The cosmologist and AI researcher Max Tegmark, in his book Life 3.0, calls such a species Life 3.0, to distinguish it from Life 1.0 and Life 2.0. Life 1.0 is defined as life that can improve neither its hardware nor its software: a bacterium is an example of Life 1.0 because it cannot alter what it is made of, nor can it learn anything new to alter its software. We humans are an example of Life 2.0: organisms with fixed hardware that can change their software. We humans are composed of neurons, red blood cells, organs, and so on; that is our hardware, and we can’t change it. But what we can do is learn new knowledge and information that changes the way we act and the actions we perform: learning is how we alter our software, and it is what makes us a form of Life 2.0. The fourth age, the technological revolution that’ll occur in the 21st century, will be characterized not only by profound changes to our biology like the previous three ages; it will also be distinct from all prior technological revolutions by the simple fact that it will be characterized by the transition from Life 2.0 to Life 3.0 as we gradually transition from humans to cyborgs, and from cyborgs to either full-fledged robots or digital avatars.
Technology and the Scientific Method
Since the late sixteenth and early seventeenth centuries, when Galileo Galilei and Francis Bacon for the first time fully understood and implemented the scientific method, scientists have used that method to understand how the Cosmos works. Over those past few centuries, as our scientific knowledge of the Cosmos grew, our level of technology increased. This isn’t a mere coincidence and shouldn’t be too surprising, since we humans use our knowledge of the universe to create our technologies. Put simply, the key to making good technology is to have a lot of knowledge of how the universe works. This is why, in the roughly 200,000-year history since humans first came to be, Isaac Newton’s publication of his treatise, The Mathematical Principles of Natural Philosophy (oftentimes called the Principia for short), is still to this day widely regarded as the single greatest contribution to human knowledge of all time. Not only is the publication of this manuscript widely recognized as the greatest contribution to our scientific knowledge ever, but it also led to one of the greatest technological revolutions in the history of humankind: the industrial revolution and the advent of steam power and the locomotive.
How, you might ask, did Newton’s publication of the laws of motion and classical mechanics (which govern the mechanics of all classical objects in the universe) allow subsequent generations of engineers to build technologies that changed everything about the way that we live? Or, to phrase that question in a more familiar way, how did a newfangled understanding of the inner clockwork of the Cosmos enable us to build new technologies based upon that new understanding? The book The Great Equations gives a pretty comprehensive and in-depth explanation, but I’ll give a simplified version of the story. Until Newton, we humans hadn’t yet developed the conceptual or mathematical framework for understanding what the motion of an object would look like under the action of any arbitrary force. And for a long time we thought that thermodynamics (the study of heat) had nothing at all to do with motion or forces. People used to think that the reason a hot body placed next to a colder body cools down is that a mysterious fluid called the “caloric” would leave the hot body and flow into the cold body. Later experiments, however, never managed to find any trace of this elusive fluid. But Newton’s Principia, and specifically his three laws of motion, gave us a new and correct conceptual framework. Now, in all fairness, Newton’s work isn’t the entire story. To get the correct conceptual framework (which would also eventually lead to the correct mathematical framework), we had to combine Newton’s laws with atomism: a view first posited by the ancient Greek philosopher Democritus and later popularized by the Roman poet Lucretius. This led to what Feynman described as the understanding that the whole world can be viewed as “myriad particles interacting in an infinite number of combinations.” (I borrowed that quote from Feynman’s opening remarks in his lectures on physics. I still like to re-read those opening remarks from time to time since they perfectly capture the essence of how a physicist thinks and how physicists derive their theories and laws about nature.)
Long story short, this is the correct view: everything is made up of atoms whose trajectories are governed by Newton’s laws. And this, by the way, includes hot and cold objects.
Relationship Between Work and Heat
Later experiments done by James Prescott Joule showed that there was another way to heat up a substance which didn’t rely on putting a hot object next to a cold object: his experiments showed that there is a connection between heat and a concept called work. (Work is a concept that is related to force. Technically, work is defined by the formula \(W=\int_C \mathbf{F}\cdot d\mathbf{r}\). But to make things simpler for those of you who are new to this topic, you can think of work as, roughly speaking, just being equal to force times distance. The more important point is that whenever you have one body doing work on another, the two bodies must be rubbing and pushing against each other, which is to say exerting some force, F, on one another.) What these experiments taught us is that an object can heat up another object by rubbing and pushing against it (in other words, by exerting forces on it).
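To make the formula concrete with a small worked example of my own (the numbers here are illustrative, not from the article): for a constant force applied along the direction of motion, the integral reduces to \(W = Fd\). Pushing with a force of \(F = 10\,\text{N}\) over a distance of \(d = 2\,\text{m}\) does \(W = 20\,\text{J}\) of work. Joule’s measurements showed that roughly 4.2 joules of mechanical work, when dissipated through friction or stirring, raise the temperature of one gram of water by about one degree Celsius; that conversion factor is the quantitative link between work and heat.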
But how is it that a hot object can heat up a cold object just by sitting right next to it? Joule’s experiments, Newton’s laws, and the ancient view of atomism implied an answer: both the hot and cold objects are composed of many atoms. When the hot object is placed next to the cold object, the particles in the hot object collide with (which is to say, exert forces on) the particles in the cold object. According to Newton’s second law, \(F=ma\), this causes the particles in the cold object to accelerate. And Joule’s experiments implied that whenever you have things bumping and rubbing against each other, one of the objects heats up. Altogether, this gives us the basic picture that an object gets heated up by a hotter object because the atoms in the hotter object bump into the atoms of the cooler object, causing them to move faster. And that’s essentially all heat is. When the hot object cools, its atoms start to move around a little more slowly because, when they bumped into the atoms composing the colder object, they lost some of their kinetic energy.
This correct understanding of what heat actually is and how to describe it mathematically allowed us to formulate a branch of physics called thermodynamics. The entire physics of thermodynamics was obtained by applying Newton’s laws of motion to enormous collections of particles and then deriving all the consequences. Newton’s laws of motion, and thermodynamics in particular, essentially gave later engineers a kind of “blueprint” for creating the first engines and inventing the locomotive. There are many other examples of how advances in scientific knowledge have driven progress in technological development. Newton’s discovery of the laws of classical mechanics, which led to the development of the technologies that shaped the industrial revolution, is the most fruitful one. There are, however, a few other examples of this phenomenon which are worth noting: Maxwell’s discovery of the laws of electromagnetism, which quite literally led to the electrification of the world and the telecommunications revolution; and the discovery of the laws of quantum mechanics, which led to the invention of the transistor and ushered in the information age. It really isn’t a coincidence that every time we discovered new fundamental laws of physics that let us understand a new range of phenomena, a world-changing technological revolution occurred afterward. Again, as I said earlier: by better understanding the Cosmos, we are able to build new technologies based on that new understanding.
Predicting the Future of Technology
In every generation since at least the industrial revolution, science has been the vanguard not only of technological advance but also of human exploratory advance. The latter, which was enunciated in Carl Sagan’s book Pale Blue Dot, will become increasingly evident as we use our knowledge of the cosmos (and physics in particular) to design and manufacture successive generations of increasingly fast and sophisticated spaceships. As Carl Sagan once said, “Science is a collaborative enterprise, spanning the generations. When it permits us to see the far side of some new horizon, we remember those who prepared the way, seeing for them also.” Indeed, there is a straightforward implication of this extraordinary relationship between science and technology: namely, that if you can predict the future of scientific knowledge, you can also predict the future state of technology, the world of tomorrow. The history of scientific knowledge in certain lines of research follows very precise trends, and many of these trends extend into the future. The most popular example of such a trend that I know of is Moore’s law, which states that the number of transistors on a computer chip doubles roughly every two years. This trend also implies that the power of computers doubles roughly every two years as well.
Ray Kurzweil, a director of engineering at Google, once extended this trend a few decades into the future to predict the advent of the internet and that there would eventually be billions of people all over the world using computers. And keep in mind that he made this prediction at a time when networked computers were used by little more than a small community of researchers and military personnel. Many people probably thought that Kurzweil was a little crazy for making that prediction, and yet both of those predictions came true. He also predicts that these trends will continue for a few more decades; if they do indeed continue over the next few decades (which we have good reason to think they will), then a few decades from now we’ll be able to build nanobots sufficiently small and intelligent to navigate the human bloodstream and monitor our health for signs of disease and infection. Indeed, such nanobots would also likely be used to improve the biology and functionality of the bodies of people who are perfectly healthy. Thus, these nanobots will be a part of the story of our transition from plain old humans to cyborgs. (Indeed, many of us are already cyborgs.) Nanobots, of course, won’t be the whole story when it comes to transitioning from humans to cyborgs: internet contact lenses, exoskeletons, prosthetic limbs, and artificial organs will also play a crucial part in this transition. I’d also add that we’ll likely eventually transition from cyborgs to either androids or digital avatars.
Room-Temperature Superconductors
Another trend in science (one which has held ever since the discovery of superconductivity) is that, as time progresses, we continue to discover new ways of achieving superconductivity within materials at increasingly high temperatures. Superconductors seem a little like Clarketech (which is to say, magic) and exhibit pretty extraordinary properties. For one thing, a superconductor can transmit electricity with, for all practical purposes, no loss of power. If the trend that I mentioned at the beginning of this paragraph continues, then we’ll eventually be able to create room-temperature superconductors and, as Richard Feynman explained in the Feynman lectures, this would change everything. Room-temperature superconductors would also have enormous implications in a resource-based economy, where the aim is to improve true economic and industrial efficiency as opposed to cost efficiency, which profoundly limits how efficient and how well you can make things work.
First, let me explain how the ordinary conductors used in present-day power lines work. Today’s power lines are composed of copper wires and some kind of protective sheath. Copper is used because it is a good conductor of electricity: the outermost electrons of copper atoms are attracted to the nuclei of their parent atoms only very weakly. Applying a voltage between two points of the copper wire separated by some distance induces an electric field inside the wire. The electric field essentially has the effect of pushing those loosely bound electrons around. Without the electric field, the electrons exhibit random motion and, therefore, have zero net displacement. But once the electric field is applied, those electrons move in a kind of “zig-zag” motion, constantly colliding with the copper atoms along the way. Even with the zig-zag motion, those electrons still drift with a net average displacement in the direction opposite to the electric field.
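One standard way to quantify this picture (textbook physics, not something stated in the article): the current carried by the wire is \(I = n q A v_d\), where \(n\) is the number density of free electrons, \(q\) is the electron charge, \(A\) is the wire’s cross-sectional area, and \(v_d\) is the average drift velocity. For the currents in ordinary wiring, \(v_d\) works out to well under a millimeter per second; it is the enormous number of charge carriers, not their speed, that makes the current large.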
This process, in which the electrons drift through the wire, is what we call electricity, and this is how electrical power gets transmitted across the length of the wire. But there is one really big drawback to using ordinary copper wire as a conductor. As the electrons keep colliding with the copper atoms, they lose a fraction of their electrical energy. Each time an electron collides with a copper atom, some of its energy gets transferred to the atom and ends up as heat. Thus, when it comes to using copper wires to transmit electricity, you end up losing a great deal of electrical energy to heat. And this problem is made worse by the fact that electricity must oftentimes travel hundreds, if not thousands, of miles of power lines before it reaches its final destination. Due both to the inherent physical properties of regular conductors like copper and to the extreme distances that electricity must travel to get from a power source to its destination (which is usually a home or a building), a ridiculous amount of electrical energy gets lost and wasted simply heating the wires.
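To put a rough number on that waste, here is a minimal Python sketch of the standard resistive-loss formula \(P_{\text{loss}} = I^2 R\) (my own illustration; the line length, wire cross-section, and current below are assumed round figures, not data from this article):

```python
# A minimal sketch of resistive loss in a long copper transmission line.
# All figures are assumed round numbers chosen only for illustration.

RESISTIVITY_COPPER = 1.68e-8   # ohm*m, copper at room temperature
LINE_LENGTH = 500_000.0        # m (a 500 km line, assumed)
CROSS_SECTION = 5e-4           # m^2, conductor cross-sectional area (assumed)
CURRENT = 1_000.0              # A, transmitted current (assumed)

resistance = RESISTIVITY_COPPER * LINE_LENGTH / CROSS_SECTION  # R = rho * L / A
power_lost = CURRENT ** 2 * resistance                         # P_loss = I^2 * R

print(f"Line resistance: {resistance:.1f} ohms")
print(f"Power dissipated as heat: {power_lost / 1e6:.1f} MW")
# For a superconducting line, R would be effectively zero and so would this loss.
```

With these made-up (but not unreasonable) numbers, about 17 MW of power ends up heating the wire, which is exactly the kind of loss that real grids fight by transmitting at very high voltage and low current, and that a superconducting line would avoid altogether.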
The solution to the latter problem is to localize the production of energy, something which would be an inherent goal in a resource-based economy and something which we talk about in more detail in the article, Post Scarcity Economics: A Resource-Based Economy. The solution to the former problem would be room-temperature superconductors, which can transmit electricity without the loss of any electrical energy. I won’t try to give a full account here of how a superconductor manages to carry a current with zero resistance; quantum mechanics explains the behavior of conventional superconductors, while the mechanism behind high-temperature superconductors is still debated. What matters for our purposes is that the ability itself is an empirical fact which we have confirmed experimentally.
When it comes to ordinary conductors, some power source must constantly supply power to maintain a current through the wire; this is essentially due to the power losses caused by resistive heating. But if you set up a current in a loop of wire composed of a superconductor, the electrical current in that loop could be sustained for the remaining lifetime of the universe even without a power source, because none of the electrical energy gets lost as heat.
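To put that contrast into a single formula (a standard circuit-theory sketch, not something from the article): in a loop with inductance \(L\) and resistance \(R\), a circulating current decays exponentially as \(I(t) = I_0\, e^{-Rt/L}\). For an ordinary copper loop, \(R > 0\) and the current dies away in a fraction of a second unless something keeps driving it; for a superconducting loop, \(R = 0\), the exponent vanishes, and the current simply persists.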
Not only would superconductors revolutionize how we transmit electricity, but they would also forever transform transportation by ushering in the age of magnetism. Allow me to explain. In a resource-based economy (which essentially treats the economizing, in the true sense of that term, of various industrial processes as a science based on the scientific method), you would not only want to minimize power loss in electrical transmission; you’d also want to minimize power loss in the transportation of a ground-based vehicle (or a vehicle at any altitude) from some point A to some other point B. To do this, you’d want to phase out the idea of using vehicles with tires which “rub” against an asphalt road; you’d replace that entire model of ground-based transportation with tireless vehicles which magnetically float above a superconducting surface. This solves the problem because a vehicle that moves in “mid-air” along a superconducting surface doesn’t lose any energy to friction with the ground, whereas a vehicle whose tires push against an asphalt surface loses a great deal of energy to friction.
When you look at a road, the surface might look fairly smooth; but if you looked at the surface of that road under a microscope, you’d find that the atoms across the surface form jagged hills and valleys. The same is true of the tires. This is essentially the origin and cause of friction, and it is the reason why, if you push a block across the surface of a road, it will eventually come to a stop. To make this more concrete, suppose that you have a car traveling across a road at a constant velocity (say 20 mi/hr heading north). If you keep your foot off the pedal, that car will eventually stop even if you don’t use the brakes. The reason is that the car is losing its kinetic energy to friction against the road and also against the air; this causes the road and the air to heat up. So, once again, you’re losing a ton of energy as heat. What would be revolutionary about room-temperature superconductors and maglev transportation systems is that you could entirely eliminate the loss of energy due to friction against the road and the air.
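As a quick back-of-the-envelope illustration (my own numbers, assumed rather than taken from the article), here is how much kinetic energy that coasting car has to lose before it stops:

```python
# Kinetic energy of a coasting car, KE = (1/2) * m * v^2.
# The mass and speed are assumed round figures for illustration only.

MASS_KG = 1500.0                  # kg, a typical passenger car (assumed)
SPEED_MPH = 20.0
SPEED_MS = SPEED_MPH * 0.44704    # convert mi/hr to m/s

kinetic_energy = 0.5 * MASS_KG * SPEED_MS ** 2
print(f"Kinetic energy at {SPEED_MPH} mi/hr: about {kinetic_energy / 1000:.0f} kJ")
# Once friction with the road and the air brings the car to rest, essentially
# all of this energy has been dissipated as heat.
```

That comes out to roughly 60 kJ; every coast-to-a-stop in ordinary driving throws that energy away as heat, which is why frictionless maglev travel would be such a dramatic efficiency gain.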
Superconductors and Maglev Transportation
Passing a current through a superconductor creates a magnetic field: a vehicle with magnets in it could “float” and “hover” in mid-air above the superconducting surface, since the magnetic field (generated by the current passing through the superconductor) would push the vehicle up. By using an evacuated tube (or perhaps some other kind of technology which could simply push the air out of the way), you could also eliminate air friction. Suppose, as a hypothetical example, that you had superconducting pavement or rails enclosed in an evacuated tube that encircles the entire circumference of the Earth. If you had a maglev vehicle traveling at a constant speed of 20 mi/hr (or any speed, for that matter) within that tube, then even without using the pedal that vehicle would travel in circles around the Earth essentially forever (ignoring the fact that the Earth would eventually be destroyed by the Sun, as well as any other outside influences). This would be possible since the kinetic energy of the vehicle would never change, and the reason the vehicle’s kinetic energy wouldn’t decrease is that there wouldn’t be any friction.
The widespread use of maglev transportation would change the world. Think about how much gasoline and oil we must use to keep our cars going. Even with fully electric vehicles that are powered using only renewable energy, you’d still need a heck of a lot of renewable energy to power all of those vehicles against ground and air friction. But with maglev transportation units inside evacuated tubes, you would need only a minuscule amount of power to keep those vehicles moving. And we really wouldn’t have to limit ourselves to ground-based transportation: as we discussed in other articles, it would be highly advantageous to utilize three-dimensional traffic, which consists of vehicles moving at ground level and at various altitudes. This would become especially important if we ever want to colonize other planets.
Maglev Trains Could Connect Arcologies
Another thing that we discussed in other articles is that it would be highly beneficial to build arcologies: vast mega-skyscrapers that humans could live in. For example, the Ocean Spiral is a kind of mega-skyscraper which would stretch from the seafloor all the way to the surface of the ocean; arcologies like it could be built and duplicated all over the world. The benefits that such structures would offer would be enormous. Those arcologies could be used to extract geothermal and tidal energy from the sea; they would also be immensely useful for underwater research. This is, of course, only one example. We’d also want to build land-based arcologies: these arcologies would help clean the air and they’d also allow us to expand our population. But if we have a lot of these arcologies, then clearly it wouldn’t make much sense to limit ourselves to conventional “two-dimensional” traffic on the surface; we’d want to make traffic “three-dimensional.” It would be conceivable to connect adjacent arcologies with evacuated tubes at various altitudes and to have maglev vehicles travel through those tubes.
By taking current trends in the development of technology and extrapolating them into the future, or by making assumptions about possible developments in science and technology which could conceivably one day be made, we are able, to a certain degree of accuracy, to predict the future. Predicting the future state of science, technology, and humanity in general was the primary goal of the article, Orbital Rings and Planet Building, and also of many of the articles in the subject, Space Travel & Colonization. This will also be the primary goal of many other future topics that we cover in the three subjects Space Travel & Colonization, Technology, and Futurism. In all honesty, I must confess that futurism (predicting the future) will never be a true science like the one in the science fiction writer Isaac Asimov’s Foundation series; at best, we can predict future technological developments with varying degrees of confidence.
Some of the developments that we predict will inevitably occur (assuming that we don’t destroy ourselves). Perhaps the most popular example of an inevitable technological development is the Dyson Sphere. We’re going to build one of these things eventually, since it would just be really stupid to let the stars of systems uninhabited by life radiate away all of their energy without it ever being put to good use. Another popular example is the development of AGI: AI that is as smart as humans. (There is also another definition of AGI which just refers to AI that can solve any problem. Technically, this is something that we have already developed, but the problem is that it would take such an AI practically forever to get anything done and thus it wouldn’t really be of any practical use to us.) There are also future technological developments which are highly speculative and which many people assert will forever be impossible. Examples of this include warp drives, which would allow for faster-than-light travel, and the use of tachyons to communicate faster than light.
At any rate, we’ll discuss many of these possible future developments (from the ones which are nearly inevitable all the way to the ones we are highly uncertain will ever happen) in the next article, entitled Utopia: Life in the Year 2100. And as we discuss each of those topics, I’ll make sure to give some indication of how likely each of these developments is to actually come about. I hope that you all have a great week and, in the next article, we’re going to try to fit all of these technologies together to see how they’ll shape our future in the year 2100.
This article is licensed under a CC BY-NC-SA 4.0 license.
References
1. “Handaxe and Tektites from Bose, China.” The Smithsonian Institution's Human Origins Program, 4 Jan. 2017, humanorigins.si.edu/evidence/behavior/stone-tools/early-stone-age-tools/handaxe-and-tektites-bose-china.
2. Wikipedia contributors. (2019, March 20). History of agriculture. In Wikipedia, The Free Encyclopedia. Retrieved 20:29, March 22, 2019, from https://en.wikipedia.org/w/index.php?title=History_of_agriculture&oldid=888596268
3. Gray, Alex. “This Robot Chef Wants to Know How You Like Your Pancakes.” World Economic Forum, 14 Oct. 2016, www.weforum.org/agenda/2016/10/robot-chef-makes-
4. Science and Futurism with Isaac Arthur. (n.d.). Home [YouTube Channel]. Retrieved from https://www.youtube.com/channel/UCZFipeZtQM5CKUjx6grh54g
5. Wikipedia contributors. (2019, March 8). Philosophiæ Naturalis Principia Mathematica. In Wikipedia, The Free Encyclopedia. Retrieved 20:38, March 22, 2019, from https://en.wikipedia.org/w/index.php?title=Philosophi%C3%A6_Naturalis_Principia_Mathematica&oldid=886807598
6. Wikipedia contributors. (2019, March 8). Newton's laws of motion. In Wikipedia, The Free Encyclopedia. Retrieved 20:39, March 22, 2019, from https://en.wikipedia.org/w/index.php?title=Newton%27s_laws_of_motion&oldid=886801669
7. Wikipedia contributors. (2019, March 10). James Prescott Joule. In Wikipedia, The Free Encyclopedia. Retrieved 20:40, March 22, 2019, from https://en.wikipedia.org