The Kurzweil predictions that don’t matter

Time for…another Ray Kurzweil analysis. It’s funny how I keep swearing to myself I won’t write another one about him, but end up doing so anyway. I’m sorry. For sure, there won’t be anything more about him until next year or later.

In my last blog post, “Will Kurzweil’s 2019 be our 2029?”, I mentioned that several of his predictions for 2019 were wrong, and would probably still be wrong in 2029, but that it didn’t matter since they pertained to inconsequential things. Rather than leave all two of you who read my blog hanging in suspense, I’d like to go over those and explain my thoughts. As before, these predictions are taken from Kurzweil’s 1998 book The Age of Spiritual Machines.

The augmented reality / virtual reality glasses will work by projecting images onto the retinas of the people wearing them.

To be clear, by 2030, standalone AR and VR eyewear will have the levels of capability Kurzweil envisioned for 2019. However, it's unknowable whether retinal projection will be the dominant technology they use to show images to the people wearing them. Other technologies, like lenses made of transparent LCD screens or images beamed onto semitransparent lenses, could end up dominant. Whichever gains the most traction by 2030 is irrelevant to the consumer–they will only care about how smooth and convincing the digital images displayed in front of them look.

“Keyboards are rare, although they still exist. Most interaction with computing is through gestures using hands, fingers, and facial expressions and through two-way natural-language spoken communication.”

The first sentence was wrong in 2019 and still will be in 2029. As old-fashioned as they may be, keyboards have many advantages over other modes of interacting with computers:

  • Keyboards are physically large and have big buttons, meaning you’re less likely to push the wrong one than you are on a tiny smartphone keyboard.
  • They have many keys corresponding not only to letters and numbers, but to functions, meaning you can easily use a basic keyboard to input a vast range of text and commands into a computer. Imagine how inefficient it would be to input a long URL into a browser toolbar or to write computer code if you had to open all kinds of side menus on your input device to find and select every written symbol, including colons, semicolons, and dollar symbols. Worse, imagine doing that using “hand gestures” and “facial expressions.”
  • Keyboards are also very ergonomic to use and require nothing more than tiny finger movements and flexions of the wrists. By contrast, inputting characters and commands into your computer through some combination of body movements, gestures and facial expressions that it would see would take you much more time and physical energy (compare the amount of energy it takes you to push the “A” button on your keyboard with how much it takes to raise both of your arms up and link your hands over your head with your elbows bent to turn your body into something resembling an “A” shape). And you’d have to go to extra trouble to make sure the device’s camera had a full view of your body and that you were properly lit. This is why something like the gestural interface Tom Cruise used in Minority Report will never become common.

Furthermore, two-way voice communication with computers has its place, but won’t replace keyboards. First, talking with machines sacrifices your privacy and annoys the people within earshot of you. Imagine a world where keyboards are banned and people must issue voice commands to their computers when searching for pornography, and where workers in open-concept offices have to dictate all their emails. Second, verbal communication works poorly in noisy environments since you and your machine have problems understanding each other. It’s simply not a substitute for using keyboards.

Even verbal communication plus gestures, facial expressions, and anything else won’t be enough to render keyboards obsolete. If you want to get any kind of serious work done, you need one.

This will still hold true in 2029, and keyboards will not be “rare” then, or even in 2079. Kurzweil will still be wrong. But so what? The keyboard won’t be “blocking” any other technology, and given its advantages over other modes of data and command input, its continued use is unavoidable and necessary.

Let me conclude this section by saying I can only imagine keyboards becoming obsolete in exotic future scenarios. For example, in a space ship crewed entirely by robots, keyboards, mice, and even display screens might be absent since the robots and the ship would be able to directly communicate through electronic signals. If the captain wanted to turn left, it would think the command, and the ship’s sensors would receive it and respond. And in his mind’s eye, the captain would see live footage from external ship cameras.

“Cables have largely disappeared.”

As I wrote in the analysis, it will still be common for control devices and peripheral devices to have data cables in 2029 due to better information security and slightly lower costs. Moreover, in many cases there will be no functional disadvantage to having corded devices, as they never need to leave the vicinity of whatever they are connected to. Consider: if you have a PC at your work desk, why would you ever need to move your keyboard to anyplace other than the desk's surface? To use your computer, you need to be close to it and the monitor, which means the keyboard has to stay close to them as well. In such a case, a keyboard with a standard, five-foot-long cord would serve you just as well as a wireless keyboard that could connect to your PC from a mile away.

“Of the total computing capacity of the human species (that is, all human brains), combined with the computing technology the species has created, more than 10 percent is nonhuman.”

This was badly wrong in 2019, and in 2029, the “nonhuman” portion of all computation on Earth will probably be no higher than 1%, so it will still be wrong. But so what? Comparisons of how much raw thinking humans and machines do are misleading since they are “apples to oranges,” and they provide almost no useful insights into the overall state of computer technology or automation.

When it comes to computation, quantity does not equal quality. Consider this example: I estimated that, in 2019, all the world's computing devices combined did a total of 3.5794 x 10^21 flops of computation. Now, if someone invented an AGI that ran on a supercomputer that was, say, ten times as powerful as a human brain, the AGI would be capable of 200 petaflops, or 2.0 x 10^17 flops. Looking at the raw figures for global computation, it would seem like the addition of that AI changed nothing: the one supercomputer it was running on would barely budge the global computation count of 3.5794 x 10^21 flops! However, anyone who has done the slightest thinking about AI's consequences knows that one machine would be revolutionary, able to divide its attention in many directions at once, and would inaugurate a new era of much faster economic, scientific, and technological growth that would be felt by people across the world.
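For concreteness, here is that comparison worked out as a quick calculation. Both inputs are my own rough estimates from above, so treat the result as an order-of-magnitude figure:

```python
# Order-of-magnitude comparison using the estimates quoted above.
global_flops = 3.5794e21  # estimated computation of all the world's devices in 2019, in flops
agi_flops = 2.0e17        # hypothetical AGI supercomputer: 200 petaflops

print(f"AGI share of global computation: {agi_flops / global_flops:.5%}")  # ~0.00559%
```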

“Rotating memories and other electromechanical computing devices have been fully replaced with electronic devices.”

Rotating computer memories–also called "hard disk drives" (HDDs)–were still common in 2019, and will still be in 2029, though less so. This is because HDDs have important advantages over their main competitor, solid-state drives (SSDs), which store data on flash memory chips, and those advantages will not disappear over this decade.

HDDs are cheaper on a per-bit basis and are less likely to suffer data corruption or data loss. SSDs, on the other hand, are more physically robust since they lack moving parts, and allow much faster access to the data stored in them since they don’t contain disks that have to “spin up.” Given the tradeoffs, in 2029, HDDs will still be widely used in data centers and electronic archive facilities, where they will store important data which needs to be preserved for long periods, but which isn’t so crucial that users need instantaneous access to it. Small consumer electronic devices, including smartphones, smart watches, and other wearables, will continue to exclusively have SSD memory, and finding newly manufactured laptops with anything but SSDs might be impossible. Only a small fraction of desktop computers will have HDDs by then.

So rotating memories will still be around in 2029, meaning the prediction will still be wrong since it contains the absolute term "fully replaced." But again, so what? All of the data that average people need to see on a day-to-day basis will be stored on SSDs, ensuring they will have instantaneous access to it. The cost of HDD and SSD memory will have continued its long-running, exponential improvement, making both trivially cheap by 2029 (they were already so cheap in 2019 that even poor people could buy enough to meet all their reasonable personal needs). The HDDs that still exist will be out of sight, either in server farms or in big, immobile boxes that sit on or under people's work desks. The failure of the prediction will have no noticeable impact, and if you could teleport to a parallel universe where HDDs didn't exist anymore, nothing about day-to-day life would seem more futuristic.

“A new computer-controlled optical-imaging technology using quantum-based diffraction devices has replaced most lenses with tiny devices that can detect light waves from any angle. These pinhead-sized cameras are everywhere.”

The cameras that make use of quantum effects and reflected light never got good enough to exit the lab, and it's an open question whether they will be commercialized by 2029. I doubt it, but don't see why it should matter. Billions of cameras–most of them tiny enough to fit on smartphones–already are practically everywhere and will be even more ubiquitous in 2029. It's not relevant whether they make use of exotic principles to capture video and still images or whether they rely on conventional methods involving the capture of visible light. The important aspects of the prediction–that cameras will be very small and all over the place–were right in 2019 and will be even more right in 2029.

“People read documents either on the hand-held displays or, more commonly, from text that is projected into the ever present virtual environment using the ubiquitous direct-eye displays. Paper books and documents are rarely used or accessed.”

This prediction was technologically possible in 2019, but didn't come to pass because many people showed a (perhaps unpredictable) preference for paper books and documents. It turns out there's something appealing about the tactile experience of leafing through books and magazines and being able to carry them around that PDFs and tablet computers can't duplicate. Personal computing devices had to become widely available before we could realize that old-fashioned books and sheets of paper had some advantages.

Come 2029, paper books, magazines, journals, newspapers, memos, and letters will still be commonly encountered in everyday life, so the prediction will still be wrong. Fortunately, the persistence of paper isn’t a significant stumbling block in any way since all important paper documents from the pre-computer era have been scanned and are available over the internet for free or at low cost, and all important new written documents originate in electronic format.

For what it’s worth, I’ve predicted that, in the 2030s, books and computer tablets will merge into a single type of device that could be thought of as a “digital book.” It will be a book with several hundred pages made of thin, flexible digital displays (perhaps using ultra-energy efficient e-ink) instead of paper. At the tap of a button, the text on all of the pages will instantly change to display whichever book the user wanted to read at that moment. They could also be used as notebooks in which the user could hand write or draw things with a stylus, which would be saved as image or text files. The devices will fuse the tactile appeal of old-fashioned books with the content flexibility of tablet computers.

“Three-dimensional holography displays have also emerged. In either case, users feel as if they are physically near the other person. The resolution equals or exceeds optimal human visual acuity. Thus a person can be fooled as to whether or not another person is physically present or is being projected through electronic communication.”

3D volumetric displays didn't advance nearly as fast as Kurzweil predicted, so this was wrong in 2019, and the technology doesn't look poised for a breakthrough, so it will still be wrong in 2029. However, it doesn't matter since VR goggles, and probably AR glasses as well, will let people have the same holographic experiences. By 2029, you will be able to put on eyewear that displays lifelike, moving images of other people, giving the false impression they are around you. Among other things, this technology will be used for video calls.

“The all-enveloping tactile environment is now widely available and fully convincing. Its resolution equals or exceeds that of human touch and can simulate (and stimulate) all the facets of the tactile sense, including the senses of pressure, temperature, textures, and moistness…the ‘total touch’ haptic environment requires entering a virtual reality booth.”

The haptic/kinetic/touch aspect of virtual reality is very underdeveloped compared to its audio and visual aspects, and will still lag far behind in 2029, but little will be lost because of it. After all, if you're playing a VR game, do you want to be able to feel bullets hitting you, or to feel the extreme temperatures of whatever exotic virtual environment you're in? Even if we had skintight catsuits that could replicate physical sensations accurately, would we want to wear them? Slipping on a VR headset that covers your eyes and ears is fast and easy–and will become even more so as the devices miniaturize thanks to better technology–but taking off all your clothes to put on a VR catsuit is much more trouble.

A VR headset is made of smooth metal and high-impact plastic, making it easy to clean with a damp rag. By contrast, a catsuit made of stretchy material and studded with hard servos, sensors and other little machines would soak up sweat, dirt and odors, and couldn't be thrown in the washing machine or dryer like a regular garment since its parts would get damaged if banged around inside. It's impractical.

“These technologies are popular for medical examinations, as well as sensual and sexual interactions…”

I doubt that VR body suits and VR “booths” will be able to satisfactorily replicate anything but a narrow range of sex acts. Given the extreme importance of tactile stimulation, the setup would have to include a more expensive catsuit. There would also need to be devices for the genitals, adding more costs, and possibly other contraptions to apply various types of physical force (thrust, pull, resistance, etc.) to the user. Cleanup would be even more of a hassle. [Shakes head]

The fundamental limits to this technology are such that I don't think it will ever become "popular" since VR sex will fall so far short of the real thing. That said, I believe another technology, androids, will be able to someday "do it" as well as humans. Once they can, androids will become some of the most popular consumer devices of all time, with major repercussions for dating, marriage, gender relations, and laws relating to sex and prostitution. They would let any person, regardless of social status, looks, or personality, have unlimited amounts of "sex," which is unheard of in human history. Just don't expect it until near the end of this century!

“The vast majority of transactions include a simulated person, featuring a realistic animated personality and two-way voice communication with high-quality natural-language understanding.”

As with replacing all books with PDFs on computer displays, there was no technological barrier to this in 2019, but it didn’t happen because most transactions remained face-to-face, and because people preferred online transactions involving simple button-clicks rather than drawn-out conversations with fake human salesmen. The consumer preferences were not clear when the prediction was made in 1998.

By 2029, the prediction will still be wrong, though it won’t matter, since buying things by simply clicking on buttons and typing a few characters is faster and much less aggravating than doing the same transactions through a “simulated person.” Anyone who has dealt with a robot operator on the phone that laboriously enunciates menu options and obtusely talks over you when you are responding will agree. It would be a step backwards if that technology became more widespread by 2029.

“Automated driving systems have been found to be highly reliable and have now been installed in nearly all roads. While humans are still allowed to drive on local roads (although not on highways), the automated driving systems are always engaged and are ready to take control when necessary to prevent accidents.”

Sensors and transmitters that could guide cars were never installed along roadways, but it didn’t turn out to be a problem since we found that cars could use GPS and their own onboard sensors to navigate just as well. So the prediction was wrong, and the expensive roadside networks will still not exist in 2029, but it won’t matter.

The second part of the prediction will be half right by 2029, and its failure to be 100% right will be consequential. By then, autonomous cars will be statistically safer than the average human driver and will be in the "human range" of "efficiency," albeit towards the bottom of the range: they will still be overly cautious, slowing down and even stopping whenever they detect slightly dangerous conditions (e.g. – erratic human driver nearby, pedestrian who looks like they might be about to cross the road illegally, heavy rain, dead leaves blowing across the road surface). In short, they'll drive like old ladies, which will be annoying at times.

While the technology will be cheaper and more widely accepted, it will still be a luxury feature in 2029 that only a minority of cars in rich countries have. At best, a token number of public roads worldwide will ban human-driven vehicles. Enormous numbers of lives will be lost in accidents, and billions of dollars wasted in traffic jams each year thanks to autonomous car technology not advancing as fast as Kurzweil predicted.

“The type of artistic and entertainment product in greatest demand (as measured by revenue generated) continues to be virtual-experience software, which ranges from simulations of ‘real’ experiences to abstract environments with little or no corollary in the physical world.”

In 2019, the sports industry had the highest revenues in the entertainment sector, totaling $480 – $620 billion. That year, the VR gaming industry generated a paltry $1.2 billion in revenue, so the prediction was badly wrong for 2019. And even if the latter grows twentyfold over this decade, which I think is plausible, it won’t come close to challenging the dominance of sports.
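As a rough sanity check, here is what a twentyfold expansion would look like next to the sports figures. The 20x multiple is purely my own guess:

```python
# Back-of-the-envelope projection; the 20x growth multiple is my own guess.
vr_gaming_2019 = 1.2                 # $ billions, VR gaming revenue in 2019
sports_low, sports_high = 480, 620   # $ billions, 2019 global sports industry revenue

vr_gaming_2029 = vr_gaming_2019 * 20
print(f"VR gaming in 2029 at 20x growth: ${vr_gaming_2029:.0f} billion")  # $24 billion
print(f"Sports would still be {sports_low / vr_gaming_2029:.0f} to {sports_high / vr_gaming_2029:.0f} times bigger")
```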

That said, looking at revenues is kind of arbitrary. The spirit of the prediction, which is that VR gaming will become a very popular and common means of entertainment, will be right by 2029 in rich countries, and it will only get more widespread with time.

“Computerized health monitors built into watches, jewelry, and clothing which diagnose both acute and chronic health conditions are widely used. In addition to diagnosis, these monitors provide a range of remedial recommendations and interventions.”

The devices are already built into some smartwatches, and will be “widely used” by any reasonable metric by 2029. I don’t think they will be shrunk to the sizes of jewelry like rings and earrings, but that won’t have any real consequences since the watches will be available. No one in 2029 will say “I’m really concerned about my heart problem and want to buy a wearable monitoring device, but my health is not so important that I would want to trouble myself with a watch. However, I’d be OK with a ring.”

Health monitoring devices won’t be built into articles of clothing for the same reasons that other types of computers won’t be built into them: 1) laundering and drying the clothes would be a hassle since water, heat and being banged around would damage their electronic parts and 2) you’d have to remember to always wear your one shirt with the heartbeat monitor sewn into it, regardless of how appropriate it was for the occasion and weather, or how dirty it was from wearing it day after day. It makes much more sense to consolidate all your computing needs into one or two devices that are fully portable and easy to keep clean, like a smartphone and smartwatch, which is why we’ve done that.

Links:

  1. Rotating computer memories (HDDs) are cheaper and more reliable than solid-state memories (SSDs). Those advantages are unlikely to disappear, meaning HDDs will still be around in 2029.
    https://www.computerweekly.com/feature/Spinning-disk-hard-drives-Good-value-for-many-use-cases
  2. Even old-fashioned computer tapes will still be around in 2029, as they’re even better-suited for long-term data storage (called “cold storage”).
    https://www.economist.com/science-and-technology/2020/12/15/magnetic-tape-has-a-surprisingly-promising-future

Will Kurzweil’s 2019 be our 2029?

One piece of feedback I received on my analysis of how accurate Ray Kurzweil's predictions for 2019 were was that I should include some kind of summary of my findings. I agree it would be valuable since it would let readers "see the forest for the trees," so I have compiled a table showing each of Kurzweil's predictions along with my rating of how each turned out. The possible ratings are:

  1. Right
  2. Part right, part wrong
  3. Will happen later
  4. Wrong because needlessly specific / right in spirit, wrong in specifics
  5. Wrong
  6. Will probably never be 100% right
  7. Impossible to judge accurately / Unclear
  8. Overtaken by other tech

Note that it is possible for a prediction to fall under more than one of those categories. For example, the prediction that “The expected life span…[is now] over one hundred” was “Wrong” because it was not true in any country at the end of 2019, however, it also “Will happen later” since there will be a point farther in the future when life expectancy reaches that level.

Additionally, for many predictions that were not “Right” in 2019, I analyzed whether and when they might be, and put my findings under the table’s “Notes” column. This exercise is valuable since it shows us whether Kurzweil is headed in the wrong direction as a futurist, or whether he’s right about the trajectory of future events but overly optimistic about how soon important milestones will be reached.

The completed table is large, and is best viewed on a large screen, so I don't recommend looking at it on your smartphone. Its size also made it unsuited for a WordPress table, so I can't directly embed it into this blog post. Instead, I present my table as a downloadable PDF, and as a series of image snapshots shown below.

So, will Kurzweil’s 2019 be our reality by 2029? In large part, yes, but with some notable misses. According to my estimates, by the end of 2029, augmented reality and virtual reality technology will reach the levels he envisioned, and VR gaming will be a mainstream entertainment medium (though not the dominant one). AI personal assistants will have the “humanness” and complexities of personality he envisioned (though it should be emphasized that they will not be sentient or truly intelligent). Real-time language translating technology will be as good as average human translators. Body-worn health monitoring devices will match his vision. Finally, it’s within the realm of possibility that the cost-performance of computer processors in 2029 could be what he predicted for 2019, but the milestone probably won’t be reached until later.

However, nanomachines, cybernetic implants that endow users with above-normal capabilities, and our understanding of how the human brain works and of its “algorithms” for intelligence and sentience will not approach his forecasted levels of sophistication and/or use until well into this century. These delays that were evident in 2019 are important since they significantly push back the likely dates when Kurzweil’s later predictions (which I am aware of but have not yet discussed on this blog) about radical life extension, the fusion of man and machine, and the creation of the first artificial general intelligence (AGI) will come true. His predictions about robotics and about the rate of improvement to the cost-performance of computer processors are also too optimistic. Those are all very important developments, and the delays reinforce my longstanding view that Kurzweil’s vision of the future will largely turn out right, but will take decades longer to become a reality than he predicts. He has repeatedly indicated that he is very scared to die, which makes me suspect Kurzweil skews the dates of his future predictions–particularly those about life extension technology–closer to the present so they will fall within his projected lifespan.

That said, my analysis of his 2019 predictions shows he’s on the wrong track on a few issues, but that it isn’t consequential. “Quantum diffraction” cameras may not ever catch on, but so what? Regular digital cameras operating on conventional principles are everywhere and can capture any events of interest. In 2029 and beyond, data cables to devices like computer monitors and controllers will still be common, and not everything will be wireless, but I don’t see how this will impose real hardship on anyone or be a drag on any area of science, technology, or economic development. Keyboards, paper, books, and rotating computer hard disks will also remain in common use for much longer than Kurzweil thinks, but aside from annoying him and a small number of like-minded technophiles, I don’t see how their continuance will hurt anything. On that note, let me touch on another longstanding view I’ve had of him and his way of thinking: Kurzweil errs by ignoring “the Caveman Principle,” and by assuming average people like technology as much as he does.

This holds especially true for implanted technologies like brain implants and cybernetic implants in the eyes and ears. I agree with Kurzweil that they will eventually become common, but the natural human aversion to disfiguring one's own body, and the coming improvements to wearable technologies like AR glasses and earbuds, will delay it until the distant future.

In conclusion, Ray Kurzweil remains a high-quality futurist, and it would be a mistake to dismiss everything he says because some of his predictions failed to come true. Those failures are either inconsequential or are still on track to happen, albeit farther in the future than he originally said. Out of 66 predictions (as I defined them) for 2019, three are write-offs since they are "Impossible to judge accurately / Unclear." Of the remaining 63, fifteen were simply "Right," and by 2029, about another 14 will be "Right," or "clearly about to be Right within the next few years." Another 16 will still probably be "Wrong," but it won't be consequential (e.g. – people will still type on keyboards, some keyboards will still have cables connected to them, hi-res volumetric displays won't exist, but it won't matter since people will be able to use eyewear to see holographic images anyway). Forty-five out of a possible 63 by 2029 ain't bad.

The remaining 18 predictions likely to still be false in 2029 and which are of consequence include building nanomachines, extending human lifespan, building an AGI, and understanding how the brain works. They will probably lag Kurzweil's expectations by a larger margin than they did in 2019, though some progress will still have occurred during the 2020s, and each field of research will be getting large amounts of investment to reach the same goals Kurzweil wants. The potential benefits of all of them will still be recognized, and no new laws of nature will have been discovered prohibiting them from being achieved through sustained effort. Then, as now, we'll be able to say he's essentially on the right track, as scary as that may be (read his other stuff yourself).

How Ray Kurzweil’s 2019 predictions are faring (pt 4)

This is the fourth…and LAST…entry in my series of blog posts analyzing the accuracy of Ray Kurzweil’s predictions about what things would be like in 2019. These predictions come from his 1998 book The Age of Spiritual Machines. You can view the previous installments of this series here:

Part 1

Part 2

Part 3

“An undercurrent of concern is developing with regard to the influence of machine intelligence. There continue to be differences between human and machine intelligence, but the advantages of human intelligence are becoming more difficult to identify and articulate. Computer intelligence is thoroughly interwoven into the mechanisms of civilization and is designed to be outwardly subservient to apparent human control. On the one hand, human transactions and decisions require by law a human agent of responsibility, even if fully initiated by machine intelligence. On the other hand, few decisions are made without significant involvement and consultation with machine-based intelligence.”

MOSTLY RIGHT

Technological advances have moved concerns over the influence of machine intelligence to the fore in developed countries. In many domains of skill previously considered hallmarks of intelligent thinking, such as driving vehicles, recognizing images and faces, analyzing data, writing short documents, and even diagnosing diseases, machines had achieved human levels of performance by the end of 2019. And in a few niche tasks, such as playing Go, chess, or poker, machines were superhuman. Eroded human dominance in these and other fields did indeed force philosophers and scientists to grapple with the meaning of “intelligence” and “creativity,” and made it harder yet more important to define how human thinking was still special and useful.

While the prospect of artificial general intelligence was still viewed with skepticism, there was no real doubt among experts and laypeople in 2019 that task-specific AIs and robots would continue improving, and without any clear upper limit to their performance. This made technological unemployment and the solutions for it frequent topics of public discussion across the developed world. In 2019, one of the candidates for the upcoming U.S. Presidential election, Andrew Yang, even made these issues central to his political platform.

If “algorithms” is another name for “computer intelligence” in the prediction’s text, then yes, it is woven into the mechanisms of civilization and is ostensibly under human control, but in fact drives human thinking and behavior. To the latter point, great alarm has been raised over how algorithms used by social media companies and advertisers affect sociopolitical beliefs (particularly, conspiracy thinking and closedmindedness), spending decisions, and mental health.

Human transactions and decisions still require a "human agent of responsibility": Autonomous cars aren't allowed to drive unless a human is in the driver's seat, human beings ultimately own and trade (or authorize the trading of) all assets, and no military lets its autonomous fighting machines kill people without orders from a human. The only part of the prediction that seems wrong is the last sentence. Most decisions that humans make are probably made without consulting a "machine-based intelligence." Consider that most daily purchases (e.g. – where to go for lunch, where to get gas, whether and how to pay a utility bill) involve little thought or analysis. A frighteningly large share of investment choices are also made instinctively, with little or no research behind them. However, it should be noted that one area of human decision-making, dating, has become much more data-driven, and it was common in 2019 for people to use sorting algorithms, personality test results, and other filters to choose potential mates.

“Public and private spaces are routinely monitored by machine intelligence to prevent interpersonal violence.”

MOSTLY RIGHT

Gunfire detection systems, which are composed of networks of microphones spread across an area and which use machine intelligence to recognize the sounds of gunshots and to triangulate their origins, were deployed in over 100 cities at the end of 2019. The dominant company in this niche industry, "ShotSpotter," used human analysts to review its systems' results before forwarding alerts to local police departments, so the systems were not truly automated, but nonetheless they made heavy use of machine intelligence.
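For readers curious how a network of microphones can locate a gunshot at all, here is a minimal sketch of the underlying idea: multilateration from arrival-time differences. This is not ShotSpotter's actual algorithm, and the microphone layout, speed of sound, and choice of solver are all assumptions of mine:

```python
# Toy gunshot localization from time-difference-of-arrival (TDOA) measurements.
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # m/s, roughly correct for air at 20 C

mics = np.array([[0, 0], [500, 0], [0, 500], [500, 500]], dtype=float)  # sensor positions, meters
true_source = np.array([320.0, 140.0])                                  # where the simulated "shot" happens

# Simulated arrival times at each microphone (what the network would actually measure).
arrival_times = np.linalg.norm(mics - true_source, axis=1) / SPEED_OF_SOUND

def residuals(guess):
    # For every microphone pair, compare the range difference implied by the guessed
    # location with the range difference implied by the measured time difference.
    dists = np.linalg.norm(mics - guess, axis=1)
    errs = []
    for i in range(len(mics)):
        for j in range(i + 1, len(mics)):
            errs.append((dists[i] - dists[j]) - SPEED_OF_SOUND * (arrival_times[i] - arrival_times[j]))
    return errs

estimate = least_squares(residuals, x0=mics.mean(axis=0)).x
print("Estimated shot location:", estimate)  # converges to roughly [320, 140]
```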

Automated license plate reader cameras, which are commonly mounted next to roads or on police cars, also use machine intelligence and are widespread. The technology has definitely reduced violent crime, as it has allowed police to track down stolen vehicles and cars belonging to violent criminals faster than would have otherwise been possible.

In some countries, surveillance cameras with facial recognition technology monitor many public spaces. The cameras compare the people they see to mugshots of criminals, and alert the local police whenever a wanted person is seen. China is probably the world leader in facial recognition surveillance, and in a famous 2018 case, it used the technology to find one criminal among 60,000 people who attended a concert in Nanchang.

At the end of 2019, several organizations were researching ways to use machine learning for real-time recognition of violent behavior in surveillance camera feeds, but the systems were not accurate enough for commercial use.

“People attempt to protect their privacy with near-unbreakable encryption technologies, but privacy continues to be a major political and social issue with each individual’s practically every move stored in a database somewhere.”

RIGHT

In 2013, National Security Agency (NSA) analyst Edward Snowden leaked a massive number of secret documents, revealing the true extent of his employer’s global electronic surveillance. The world was shocked to learn that the NSA was routinely tracking the locations and cell phone call traffic of millions of people, and gathering enormous volumes of data from personal emails, internet browsing histories, and other electronic communications by forcing private telecom and internet companies (e.g. – Verizon, Google, Apple) to let it secretly search through their databases. Together with British intelligence, the NSA has the tools to spy on the electronic devices and internet usage of almost anyone on Earth.

Edward Snowden

Snowden also revealed that the NSA unsurprisingly had sophisticated means for cracking encrypted communications, which it routinely deployed against people it was spying on, but that even its capabilities had limits. Because some commercially available encryption tools were too time-consuming or too technically challenging to crack, the NSA secretly pressured software companies and computing hardware manufacturers to install “backdoors” in their products, which would allow the Agency to bypass any encryption their owners implemented.

During the 2010s, big tech titans like Facebook, Google, Amazon, and Apple also came under major scrutiny for quietly gathering vast amounts of personal data from their users, and reselling it to third parties to make hundreds of billions of dollars. The decade also saw many epic thefts of sensitive personal data from corporate and government databases, affecting hundreds of millions of people worldwide.

With these events in mind, it's quite true that concerns over digital privacy and confidentiality of personal data have become "major political and social issues," and that there's growing displeasure at the fact that each individual's "practically every move [is] stored in a database somewhere." The response has been strongest in the European Union, which, in 2018, enacted the most stringent and impactful law to protect the digital rights of individuals–the "General Data Protection Regulation" (GDPR).

Widespread awareness of secret government surveillance programs and of the risk of personal electronic messages being made public thanks to hacks has also bolstered interest in commercial encryption. "WhatsApp" is a common text messaging app with built-in end-to-end encryption; the encryption was fully rolled out in 2016, and the app had 1.5 billion users by 2019. "Tor" is an anonymity network, accessed through its own encrypted browser, that became relatively common during the 2010s after it was learned that even the NSA couldn't spy on people who used it. Additionally, virtual private networks (VPNs), which provide an intermediate level of data privacy protection for little expense and hassle, are in common use.

“The existence of the human underclass continues as an issue. While there is sufficient prosperity to provide basic necessities (secure housing and food, among others) without significant strain to the economy, old controversies persist regarding issues of responsibility and opportunity.”

RIGHT

It’s unclear whether this prediction pertained to the U.S., to rich countries in aggregate, or to the world as a whole, and “underclass” is not defined, so we can’t say whether it refers only to desperately poor people who are literally starving, or to people who are better off than that but still under major daily stress due to lack of money. Whatever the case, by any reasonable definition, there is an “underclass” of people in almost every country.

In the U.S. and other rich countries, welfare states provide even the poorest people with access to housing, food, and other needs, though there are still those who go without because severe mental illness and/or drug addiction keep them stuck in homeless lifestyles and render them too behaviorally disorganized to apply for government help or to be admitted into free group housing. Some people also live in destitution in rich countries because they are illegal immigrants or fugitives with arrest warrants, and contacting the authorities for welfare assistance would lead to their detection and imprisonment. Political controversy over the causes of and solutions to extreme poverty continues to rage in rich countries, and the fault line usually is about “responsibility” and “opportunity.”

The fact that poor people are likelier to be obese in most OECD countries and that starvation is practically nonexistent there shows that the market, state, and private charity have collectively met the caloric needs of even the poorest people in the rich world, and without straining national economies enough to halt growth. Indeed, across the world writ large, obesity-related health problems have become much more common and more expensive than problems caused by malnutrition. The human race is not financially struggling to feed itself, and would derive net economic benefits from reallocating calories from obese people to people living in the remaining pockets of land (such as war-torn Syria) where malnutrition is still a problem.

There’s also a growing body of evidence from the U.S. and Canada that providing free apartments to homeless people (the “housing first” strategy) might actually save taxpayer money, since removing those people from unsafe and unhealthy street lifestyles would make them less likely to need expensive emergency services and hospitalizations. The issue needs to be studied in further depth before we can reach a firm conclusion, but it’s probably the case that rich countries could give free, basic housing to their homeless without significant additional strain to their economies once the aforementioned types of savings to other government services are accounted for.

“This issue is complicated by the growing component of most employment’s being concerned with the employee’s own learning and skill acquisition. In other words, the difference between those ‘productively’ engaged and those who are not is not always clear.”

PARTLY RIGHT

As I said in part 2 of this review, Kurzweil’s prediction that people in 2019 would be spending most of their time at work acquiring new skills and knowledge to keep up with new technologies was wrong. The vast majority of people have predictable jobs where they do the same sets of tasks over and over. On-the-job training and mandatory refresher training is very common, but most workers devote small shares of their time to them, and the fraction of time spent doing workplace training doesn’t seem significantly different from what it was when the book was published.

From years of personal experience working in large organizations, I can say that it’s common for people to take workplace training courses or work-sponsored night classes (either voluntarily or because their organizations require it) that provide few or no skills or items of knowledge that are relevant to their jobs. Employees who are undergoing these non-value-added training programs have the superficial appearance of being “productively engaged” even if the effort is really a waste, or so inefficient that the training course could have been 90% shorter if taught better. But again, this doesn’t seem different from how things were in past decades.

This means the prediction was partly right, but also of questionable significance in the first place.

“Virtual artists in all of the arts are emerging and are taken seriously. These cybernetic visual artists, musicians, and authors are usually affiliated with humans or organizations (which in turn are comprised of collaborations of humans and machines) that have contributed to their knowledge base and techniques. However, interest in the output of these creative machines has gone beyond the mere novelty of machines being creative.”

MOSTLY RIGHT

The “Deep Dream” computer program made this surrealist portrait.

In 2019, computers could indeed produce paintings, songs, and poetry with human levels of artistry and skill. For example, Google’s “Deep Dream” program is a neural network that can transform almost any image into something resembling a surrealist painting. Deep Dream’s products captured international media attention for how striking, and in many cases, disturbing, they looked.

“Portrait of Edmond de Belamy”

In 2018, a different computer program produced a painting–"Portrait of Edmond de Belamy"–that fetched a record-breaking $432,500 at an art auction. The program was a generative adversarial network (GAN) designed and operated by a small team of people who described themselves as "a collective of researchers, artists, and friends, working with the latest models of deep learning to explore the creative potential of artificial intelligence." That seems to fulfill the second part of the prediction ("These cybernetic visual artists, musicians, and authors are usually affiliated with humans or organizations (which in turn are comprised of collaborations of humans and machines) that have contributed to their knowledge base and techniques.")

Machines are also respectable songwriters, and are able to produce original songs based on the styles of human artists. For example, a computer program called "EMMY" (an acronym for "Experiments in Musical Intelligence") is able to make instrumental musical scores that accurately mimic those of famous human musicians, like Bach and Mozart (fittingly, Ray Kurzweil made a simpler computer program that did essentially the same thing when he was a teenager). Listen to a few of the songs (recordings are available online) and judge their quality for yourself.

Computer scientists at OpenAI have built a neural network called "Jukebox" that is even more advanced than EMMY, and which can produce songs that are complete with simulated human vocals. While the words don't always make sense and there's much room for improvement, most humans have no creative musical talent at all and couldn't do any better, and the quality, sophistication and coherence of the entirely machine-generated songs is very impressive (audio samples are available online).

OpenAI also created an artificial intelligence program called the "Generative Pre-trained Transformer," which was invented to understand and write text. In 2019, the second version of the program, "GPT-2," made its debut, and showed impressive skill writing poetry, short news articles and other content, with minimal prompting from humans (it was also able to correctly answer basic questions about text it was shown and to summarize the key points, demonstrating some degree of reading comprehension). While often clunky and sometimes nonsensical, the passages that GPT-2 generates nonetheless fall within the "human range" of writing ability since they are very hard to tell apart from the writings of a child, or of an adult with a mental or cognitive disability. Some of the machine-written passages also read like choppy translations of text that was well-written in whatever its original language was.
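OpenAI eventually released GPT-2's weights to the public, so you can generate this kind of text yourself. Here is a minimal sketch, assuming the Hugging Face "transformers" library is installed (this is my own example, not how the samples discussed here were produced):

```python
# Generate a short GPT-2 continuation of a prompt using the publicly released model.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sample reproducible

prompt = "And they have seen the last light fail;"
result = generator(prompt, max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])
```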

Much of GPT-2’s poetry is also as good as–or, as bad as–that written by its human counterparts:

And they have seen the last light fail;
By day they kneel and pray;
But, still they turn and gaze upon
The face of God to-day.

And God is touched and weeps anew
For the lost souls around;
And sorrow turns their pale and blue,
And comfort is not found.

They have not mourned in the world of men,
But their hearts beat fast and sore,
And their eyes are filled with grief again,
And they cease to shed no tear.

And the old men stand at the bridge in tears,
And the old men stand and groan,
And the gaunt grey keepers by the cross
And the spent men hold the crown.

And their eyes are filled with tears,
And their staves are full of woe.
And no light brings them any cheer,
For the Lord of all is dead

In conclusion, the prediction is right that there were "virtual artists" in 2019 in multiple fields of artistic endeavor. Their works were of high enough quality and "humanness" to be of interest for reasons other than the novelties of their origins. They've raised serious questions among humans about the nature of creative thinking, and whether machines are capable of it or soon will be. Finally, the virtual artists were "affiliated with" or, more accurately, owned and controlled by groups of humans.

“Visual, musical, and literary art created by human artists typically involve a collaboration between human and machine intelligence.”

UNCLEAR

It’s impossible to assess this prediction’s veracity because the meanings of “collaboration” and “machine intelligence” are undefined (also, note that the phrase “virtual artists” is not used in this prediction). If I use an Instagram filter to transform one of the mundane photos I took with my camera phone into a moody, sepia-toned, artistic-looking image, does the filter’s algorithm count as a “machine intelligence”? Does my mere use of it, which involves pushing a button on my smartphone, count as a “collaboration” with it?

Likewise, do recording studios and amateur musicians “collaborate with machine intelligence” when they use computers for post-production editing of their songs? When you consider how thoroughly computer programs like “Auto-Tune” can transform human vocals, it’s hard to argue that such programs don’t possess “machine intelligence.” This instructional video shows how it can make any mediocre singer’s voice sound melodious, and raises the question of how “good” the most famous singers of 2019 actually are: Can Anyone Sing With Autotune?! (Real Voice Vs. Autotune)
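To give a sense of how simple the core trick behind pitch correction is, here is a toy sketch of its central step: snapping a detected pitch to the nearest semitone. Real software like Auto-Tune does vastly more than this (pitch detection, smooth retuning, formant preservation), and the example frequency below is made up:

```python
import math

def snap_to_semitone(freq_hz, reference_a4=440.0):
    # Convert the frequency to semitones relative to A4, round to the nearest
    # whole semitone, then convert back to a frequency.
    semitones = round(12 * math.log2(freq_hz / reference_a4))
    return reference_a4 * 2 ** (semitones / 12)

print(snap_to_semitone(451.3))  # -> 440.0, an off-key note pulled onto A4
```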

If I type a short story or fictional novel on my computer, and the word processing program points out spelling and usage mistakes, and even makes sophisticated recommendations for improving my writing style and grammar, am I collaborating with machine intelligence? Even free word processing programs have automatic spelling checkers, and affordable apps like Microsoft Word, Grammarly and ProWritingAid have all of the more advanced functions, meaning it’s fair to assume that most fiction writers interact with “machine intelligence” in the course of their work, or at least have the option to. Microsoft Word also has a “thesaurus” feature that lets users easily alter the wordings of their stories.

“The type of artistic and entertainment product in greatest demand (as measured by revenue generated) continues to be virtual-experience software, which ranges from simulations of ‘real’ experiences to abstract environments with little or no corollary in the physical world.”

WRONG

Analyzing this prediction first requires us to know what “virtual-experience software” refers to. As indicated by the phrase “continues to be,” Kurzweil used it earlier, specifically, in the “2009” chapter where he issued predictions for that year. There, he indicates that “virtual-experience software” is another name for “virtual reality software.” With that in mind, the prediction is wrong. As I showed previously in this analysis, the VR industry and its technology didn’t progress nearly as fast as Kurzweil forecast.

That said, the video game industry's revenues exceed those of nearly all other art and entertainment industries. Globally for 2019, video games generated about $152.1 billion in revenue, compared to $41.7 billion for the film industry. The music industry's 2018 figure was $19.1 billion. Only the sports industry, whose global revenues were between $480 billion and $620 billion, was bigger than video games (note that the two cross over in the form of "E-Sports").

Revenues from virtual reality games totaled $1.2 billion in 2019, meaning 99% of the video game industry's revenues that year DID NOT come from "virtual-experience software." The overwhelming majority of video games were played on flat TV screens and monitors that display 2D images only. However, the graphics, sound effects, gameplay dynamics, and plots have become so high quality that even these games can feel immersive, as if you're actually there in the simulated environment. While they don't meet the technical definition of being "virtual reality" games, some of them are so engrossing that they might as well be.
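That 99% figure follows directly from the revenue numbers above (both are industry estimates):

```python
# Share of 2019 video game revenue that did NOT come from VR titles.
total_game_revenue = 152.1  # $ billions, global video game revenue, 2019
vr_game_revenue = 1.2       # $ billions, VR game revenue, 2019

print(f"Non-VR share: {1 - vr_game_revenue / total_game_revenue:.1%}")  # ~99.2%
```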

“The primary threat to [national] security comes from small groups combining human and machine intelligence using unbreakable encrypted communication. These include (1) disruptions to public information channels using software viruses, and (2) bioengineered disease agents.”

MOSTLY WRONG

Terrorism, cyberterrorism, and cyberwarfare were serious and growing problems in 2019, but it isn’t accurate to say they were the “primary” threats to the national security of any country. Consider that the U.S., the world’s dominant and most advanced military power, spent $16.6 billion on cybersecurity in FY 2019–half of which went to its military and the other half to its civilian government agencies. As enormous as that sum is, it’s only a tiny fraction of America’s overall defense spending that fiscal year, which was a $726.2 billion “base budget,” plus an extra $77 billion for “overseas contingency operations,” which is another name for combat and nation-building in Iraq, Afghanistan, and to a lesser extent, in Syria.

In other words, the world’s greatest military power only allocates 2% of its defense-related spending to cybersecurity. That means hackers are clearly not considered to be “the primary threat” to U.S. national security. There’s also no reason to assume that the share is much different in other countries, so it’s fair to conclude that it is not the primary threat to international security, either.

Also consider that the U.S. spent about $33.6 billion on its nuclear weapons forces in FY2019. Nuclear weapon arsenals exist to deter and defeat aggression from powerful, hostile countries, and the weapons are unsuited for use against terrorists or computer hackers. If spending provides any indication of priorities, then the U.S. government considers traditional interstate warfare to be twice as big of a threat as cyberattackers. In fact, most military spending and training in the U.S. and all other countries is still devoted to preparing for traditional warfare between nation-states, as evidenced by things like the huge numbers of tanks, air-to-air fighter planes, attack subs, and ballistic missiles still in global arsenals, and time spent practicing for large battles between organized foes.
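For what it's worth, here are those spending shares computed from the FY2019 figures cited above (rounding in the source figures may shift the results slightly):

```python
# Rough spending-share check using the FY2019 figures cited in this section.
cyber = 16.6         # $ billions, total federal cybersecurity spending
base_budget = 726.2  # $ billions, DoD base budget
oco = 77.0           # $ billions, overseas contingency operations
nuclear = 33.6       # $ billions, nuclear weapons forces

print(f"Cybersecurity share of defense-related spending: {cyber / (base_budget + oco):.1%}")  # ~2.1%
print(f"Nuclear-to-cyber spending ratio: {nuclear / cyber:.1f}x")                             # ~2.0x
```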

“Small groups” of terrorists inflict disproportionate amounts of damage against society (terrorists killed 14,300 people across the world in 2017), as do cyberwarfare and cyberterrorism, but the numbers don’t bear out the contention that they are the “primary” threats to global security.

Whether “bioengineered disease agents” are the primary (inter)national security threat is more debatable. Aside from the 2001 Anthrax Attacks (which only killed five people, but nonetheless bore some testament to Kurzweil’s assessment of bioterrorism’s potential threat), there have been no known releases of biological weapons. However, the COVID-19 pandemic, which started in late 2019, has caused human and economic damage comparable to the World Wars, and has highlighted the world’s frightening vulnerability to novel infectious diseases. This has not gone unnoticed by terrorists and crazed individuals, and it could easily inspire some of them to make biological weapons, perhaps by using COVID-19 as a template. Modifications that made it more lethal and able to evade the early vaccines would be devastating to the world. Samples of unmodified COVID-19 could also be employed for biowarfare if disseminated in crowded places at some point in the future, when herd immunity has weakened.

Just because the general public, and even most military planners, don’t appreciate how dire bioterrorism’s threat is doesn’t mean it is not, in fact, the primary threat to international security. In 2030, we might look back at the carnage caused by the “COVID-23 Attack” and shake our collective heads at our failure to learn from the COVID-19 pandemic a few years earlier and prepare while we had time.

“Most flying weapons are tiny–some as small as insects–with microscopic flying weapons being researched.”

UNCLEAR

What counts as a “flying weapon”? Aircraft designed for unlimited reuse like planes and helicopters, or single-use flying munitions like missiles, or both? Should military aircraft that are unsuited for combat (e.g. – jet trainers, cargo planes, scout helicopters, refueling tankers) be counted as flying weapons? They fly, they often go into combat environments where they might be attacked, but they don’t carry weapons. This is important because it affects how we calculate what “most”/”the majority” is.

What counts as “tiny”? The prediction’s wording sets “insect” size as the bottom limit of the “tiny” size range, but sets no upper bound to how big a flying weapon can be and still be considered “tiny.” It’s up to us to do it.

A “Phantom” ultralight plane. Is it fair to call this “tiny”?

“Ultralights” are a legally recognized category of aircraft in the U.S. that weigh less than 254 lbs unloaded. Most people would take one look at such an aircraft and consider it to be terrifyingly small to fly in, and would describe it as “tiny.” Military aviators probably would as well: The Saab Gripen is one of the smallest modern fighter planes and still weighs 14,991 lbs unloaded, and each of the U.S. military’s MH-6 light observation helicopters weighs 1,591 lbs unloaded (the diminutive Smart Car Fortwo weighs about 2,050 lbs, unloaded).

With those relative sizes in mind, let’s accept the Phantom X1 ultralight plane as the upper bound of “tiny.” It weighs 250 lbs unloaded, is 17 feet long and has a 28 foot wingspan, so a “flying weapon” counts as being “tiny” if it is smaller than that.

If we also count missiles as “flying weapons,” then the prediction is right since most missiles are smaller than the Phantom X1, and the number of missiles far exceeds the number of “non-tiny” combat aircraft. A Hellfire missile, which is fired by an aircraft and homes in on a ground target, is 100 lbs and 5 feet long. A Stinger missile, which does the opposite (launched from the ground and blows up aircraft) is even smaller. Air-to-air Sidewinder missiles also meet our “tiny” classification. In 2019, the U.S. Air Force had 5,182 manned aircraft and wanted to buy 10,264 new guided missiles to bolster whatever stocks of missiles it already had in its inventory. There’s no reason to think the ratio is different for the other branches of the U.S. military (i.e. – the Navy probably has several guided missiles for every one of its carrier-borne aircraft), or that it is different in other countries’ armed forces. Under these criteria, we can say that most flying weapons are tiny.

The RQ-11B Raven drone could be considered a “tiny flying weapon.”

If we don’t count missiles as “flying weapons” and only count “tiny” reusable UAVs, then the prediction is wrong. The U.S. military has several types of these, including the “Scan Eagle,” RQ-11B “Raven,” RQ-12A “Wasp,” RQ-20 “Puma,” RQ-21 “Blackjack,” and the insect-sized PD-100 Black Hornet. Up-to-date numbers of how many of these aircraft the U.S. has in its military inventory are not available (partly because they are classified), but the data I’ve found suggest they number in the hundreds of units. In contrast, the U.S. military has over 12,000 manned aircraft.

At 100mm long and 120mm wide along its main rotor, the PD-100 drone is as small as a large dragonfly.

The last part of the prediction, that “microscopic” flying weapons would be the subject of research by 2019, seems to be wrong. The smallest flying drones in existence at that time were about as big as bees, which are not microscopic since we can see them with the naked eye. Moreover, I couldn’t find any scientific papers about microscopic flying machines, suggesting that no one was actually researching them. However, since such devices would have clear espionage and military uses, it’s possible that the research existed in 2019, but was classified. If, at some point in the future, some government announces that its secret military labs had made impractical, proof-of-concept-only microscopic flying machines as early as 2019, then Kurzweil will be able to say he was right.

Anyway, the deep problems with this prediction’s wording have been made clear. Something like “Most aircraft in the military’s inventory are small and autonomous, with some being no bigger than flying insects” would have been much easier to evaluate.

“Many of the life processes encoded in the human genome, which was deciphered more than ten years earlier, are now largely understood, along with the information-processing mechanisms underlying aging and degenerative conditions such as cancer and heart disease.”

PARTLY RIGHT

The words “many” and “largely” are subjective, and provide Kurzweil with another escape hatch against a critical analysis of this prediction’s accuracy. This problem has occurred so many times up to now that I won’t belabor you with further explanation.

The human genome was indeed “deciphered” more than ten years before 2019, in the sense that scientists discovered how many genes there were and where they were physically located on each chromosome. To be specific, this happened in 2003, when the Human Genome Project published its first, fully sequenced human genome. Thanks to this work, the number of genetic disorders whose associated defective genes are known to science rose from 60 to 2,200. In the years since the Human Genome Project finished, that number has climbed further, to 5,000 genetic disorders.

The cost of sequencing a human genome sharply dropped, making it possible to do genome-wide association studies, and for middle income people to have their personal genomes sequenced.

However, we still don’t know what most of our genes do, or which trait(s) each one codes for, so in an important sense, the human genome has not been deciphered. Since 1998, we’ve learned that human genetics is more complicated than suspected, and that it’s rare for a disease or a physical trait to be caused by only one gene. Rather, each trait (such as height) and disease risk is typically influenced by the summed, small effects of many different genes. Genome-wide association studies (GWAS), which can measure the subtle effects of multiple genes at once and connect them to the traits they code for, are powerful new tools for understanding human genetics. We also now know that epigenetics and environmental factors play large roles in determining how a human being’s genes are expressed and how he or she develops in biological but non-genetic ways. In short, just understanding what genes themselves do is not enough to understand human development or disease susceptibility.

Returning to the text of the prediction, the meaning of “information-processing mechanisms” probably refers to the ways that human cells gather information about their external surroundings and internal state, and adaptively respond to it. An intricate network of organic machinery made of proteins, fat structures, RNA, and other molecules handles this task, and works hand-in-hand with the DNA “blueprints” stored in the cell’s nucleus. It is now known that defects in this cellular-level machinery can lead to health problems like cancer and heart disease, and advances have been made uncovering the exact mechanics by which those defects cause disease. For example, in the last few years, we discovered how a mutation in the “SF3B1” gene raises the risk of a cell developing cancer. While the link between mutations to that gene and heightened cancer risk had long been known, it wasn’t until the advent of CRISPR that we found out exactly how the cellular machinery was malfunctioning, in turn raising hopes of developing a treatment.

The aging process is better understood than ever, and is known to have many separate causes. While most aging is rooted in genetics and is hence inevitable, the speed at which a cell or organism ages can be affected at the margins by how much “stress” it experiences. That stress can come in the form of exposure to extreme temperatures, physical exertion, and ingestion of specific chemicals like oxidants. Over the last 10 years, considerable progress has been made uncovering exactly how those and other stressors affect cellular machinery in ways that change how fast the cell ages. This has also shed light on a phenomenon called “hormesis,” in which mild levels of stress actually make cells healthier and slow their aging.

“The expected life span…[is now] over one hundred.”

WRONG

The expected life span for an average American born in 2018 was 76.2 years for males and 81.2 years for females. Japan had the highest figures that year out of all countries, at 81.25 years for men and 87.32 years for women.

“There is increasing recognition of the danger of the widespread availability of bioengineering technology. The means exist for anyone with the level of knowledge and equipment available to a typical graduate student to create disease agents with enormous destructive potential.”

WRONG

Among the general public and national security experts, there has been no upward trend in how urgently the biological weapons threat is viewed. The issue received a large amount of attention following the 2001 Anthrax Attacks, but since then has receded from view, while traditional concerns about terrorism (involving the use of conventional weapons) and interstate conflict have returned to the forefront. Anecdotally, cyberwarfare and hacking by nonstate actors clearly got more attention than biowarfare in 2019, even though the latter probably has much greater destructive potential.

Top national security experts in the U.S. also assigned biological weapons low priority, as evidenced in the 2019 Worldwide Threat Assessment, a collaborative document written by the chiefs of the various U.S. intelligence agencies. The 42-page report only mentions “biological weapons/warfare” twice. By contrast, “migration/migrants/immigration” appears 11 times, “nuclear weapon” eight times, and “ISIS” 29 times.

As I stated earlier, the damage wrought by the COVID-19 pandemic could (and should) raise the world’s appreciation of the biowarfare / bioterrorism threat…or it could not. Sadly, only a successful and highly destructive bioweapon attack is guaranteed to make the world treat it with the seriousness it deserves.

Thanks to better and cheaper lab technologies (notably, CRISPR), making a biological weapon is easier than ever. However, it’s unclear if the “bar” has gotten low enough for a graduate student to do it. Making a pathogen in a lab that has the qualities necessary for a biological weapon, verifying its effects, purifying it, creating a delivery system for it, and disseminating it–all without being caught before completion or inadvertently infecting yourself with it before the final step–is much harder than hysterical news articles and self-interested talking head “experts” suggest. From research I did several years ago, I concluded that it is within the means of mid-tier adversaries like the North Korean government to create biological weapons, but doing so would still require a team of people from various technical backgrounds with levels of expertise exceeding a typical graduate student’s, as well as years of work and millions of dollars.

“That this potential is offset to some extent by comparable gains in bioengineered antiviral treatments constitutes an uneasy balance, and is a major focus of international security agencies.”

RIGHT

The development of several vaccines against COVID-19 within months of that disease’s emergence showed how quickly global health authorities can develop antiviral treatments, given enough money and cooperation from government regulators. Pfizer’s successful vaccine, which is the first in history to make use of mRNA, also represents a major improvement to vaccine technology that has occurred since the book’s publication. Indeed, the lessons learned from developing the COVID-19 vaccines could lead to lasting improvements in the field of vaccine research, saving millions of people in the future who would have otherwise died from infectious diseases, and giving governments better tools for mitigating any bioweapon attacks.

Put simply, the prediction is right. Technology has made it easier to make biological weapons, but also easier to make cures for those diseases.

“Computerized health monitors built into watches, jewelry, and clothing which diagnose both acute and chronic health conditions are widely used. In addition to diagnosis, these monitors provide a range of remedial recommendations and interventions.”

MOSTLY RIGHT

Many smart watches have health monitoring features, and though some of them are government-approved health devices, they aren’t considered accurate enough to “diagnose” health conditions. Rather, their role is to detect and alert wearers to signs of potential health problems, whereupon the wearer consults a medical professional with more advanced machinery and receives a diagnosis.

The Apple Watch Series 5

By the end of 2019, common smart watches such as the “Samsung Galaxy Watch Active 2” and the “Apple Watch Series 4 and 5” had FDA-approved electrocardiogram (ECG) features that were considered accurate enough to reliably detect irregular heartbeats in wearers. Out of 400,000 Apple Watch owners subject to such monitoring, 2,000 received alerts from their devices in 2018 about possible heartbeat problems. Fifty-seven percent of the people in that subset sought medical help after getting alerts from their watches, which is proof that the devices affect health care decisions, and ultimately 84% of the people in the subset were confirmed to have atrial fibrillation.
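
For clarity, here is the arithmetic behind those figures as I read them. The underlying study’s exact denominators are somewhat ambiguous, so treat this as an illustration of the funnel rather than a restatement of the study’s methodology.

```python
# Illustrative funnel for the Apple Watch ECG figures quoted above (2018).
monitored = 400_000         # watch owners subject to monitoring
alerted = 2_000             # received an irregular-heartbeat alert
sought_help_rate = 0.57     # share of alerted users who consulted a doctor
confirmed_afib_rate = 0.84  # share of that subset confirmed to have atrial fibrillation

sought_help = alerted * sought_help_rate
confirmed = sought_help * confirmed_afib_rate

print(f"alert rate: {alerted / monitored:.2%}")         # 0.50%
print(f"sought medical help: about {sought_help:.0f}")  # ~1,140 people
print(f"confirmed AFib: about {confirmed:.0f}")         # ~960 people
```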

The Apple Watches also have “hard fall” detection features, which use accelerometers to recognize when their wearers suddenly fall down and then don’t move. The devices can be easily programmed to automatically call local emergency services in such cases, and there have been recent cases where this probably saved the lives of injured people (does suffering a serious injury due to a fall count as an “acute health condition” per the prediction’s text?).

A few smart watches available in late 2019, including the “Garmin Forerunner 245,” also had built-in pulse oximeters, but none were FDA-approved, and their accuracy was questionable. Several tech companies were also actively developing blood pressure monitoring features for their devices, but only the “HeartGuide” watch, made by a small company called “Omron Healthcare,” was commercially available and had received any type of official medical sanction. Frequent, automated monitoring and analysis of blood oxygen levels and blood pressure would be of great benefit to millions of people.

Smartphones also had some health tracking capabilities. The commonest and most useful were physical activity monitoring apps, which count the number of steps their owners take and how much distance they traverse during a jog or hike. The devices are reasonably accurate, and are typically strapped to the wearer’s upper arm or waist if they are jogging, or kept in a pocket when doing other types of activity. Having a smartphone in your pocket isn’t literally the same as having it “built into [your] clothing” as the prediction says, but it’s close enough to satisfy the spirit of the prediction. In fact, being able to easily insert and remove a device into any article of clothing with a pocket is better than having a device integrated into the clothing since it allows for much more flexibility of attire–if you want to try out a new jogging route and also want to measure how long it is, you don’t have to remember to wear your one and only T-shirt with the built-in activity monitoring devices.

Millions of people use smart watches with health tracking features and smartphones with activity detection apps, which counts as “widely used” by any reasonable definition. However, no items of jewelry with embedded health tracking capabilities were available in 2019, so that part of the prediction is wrong.

“There are prevalent reports of computers passing the Turing Test, although these instances do not meet the criteria (with regard to sophistication of the human judge, the length of time for the interviews, etcetera) established by knowledgeable observers. There is a consensus that computers have not yet passed a valid Turing Test, but there is growing controversy on this point.”

RIGHT

In 2014, a chatbot named “Eugene Goostman” made global headlines by “passing the Turing Test” at an event held at the University of Reading. However, AI experts soon determined it was invalid because the dialog between the human judges and the machine only lasted five minutes. Ray Kurzweil himself even interacted with Eugene Goostman and posted the conversation on his website to show the machine could not correctly answer questions that probe commonsense reasoning and other aspects of intelligent thinking:

Kurzweil: How old are you?
Eugene: I’m a little boy. 13 years old.

Kurzweil: How much is two plus two?
Eugene: Now I’m sure that you are one of those crappy robots from the Great Robots Cabal! And I know your password: 2 + 2=4.0! Now divide by zero and die!!!!

Kurzweil: If I have two marbles in a bowl and I add two more how many marbles are in the bowl now?
Eugene: Not too many. But I can’t tell you the exact number, I forgot it. If I’m not mistaken – you still didn’t tell me where you live. OR it’s a secret?:-)

Kurzweil: I live in the capital of the United States. Do you know where that is?
Eugene: Every nerd knows that the capital of The United States is Washington.

Kurzweil: I live in the capital of the country that put a man on the moon.
Eugene: Well, if you want it, the capital of The United States is Washington.

Kurzweil: I live in the capital of the country that built the great wall.
Eugene: Tell me more about Capital. I like to learn about different places!

In 2018, a Google AI program called “Duplex” also made headlines for “passing the Turing Test” in phone calls where it made restaurant reservations without the human workers on the other end of the line realizing they were talking to a machine. While it was an impressive technological feat, experts again disagreed with the media’s portrayal of its capabilities and pointed out that the human-machine interactions weren’t valid Turing Tests because they were too short and focused on a narrow subject of conversation.

“The subjective experience of computer-based intelligence is seriously discussed, although the rights of machine intelligence have not yet entered mainstream debate.”

RIGHT

The prospect of computers becoming intelligent and conscious has been a topic of increasing discussion in the public sphere, and experts treat it with seriousness. Recent examples include thoughtful articles written by experts whose credentials are relevant to the subject of machine consciousness, and there are countless more articles, essays, speeches, and panel discussions about it available on the internet.

“Sophia” the robot

Machines, including the most advanced “A.I.s” that existed at the end of 2019, had no legal rights anywhere in the world, except perhaps in two countries: In 2017, the Saudis granted citizenship to an animatronic robot called “Sophia,” and Japan granted a residence permit to a video chatbot named “Shibuya Mirai.” Both of these actions appear to be government publicity stunts that would be nullified if anyone in either country decided to file a lawsuit.

“Machine intelligence is still largely the product of a collaboration between humans and machines, and has been programmed to maintain a subservient relationship to the species that created it.”

RIGHT

Critics often–and rightly–point out that the most impressive “A.I.s” owe their formidable capabilities to the legions of humans who laboriously and judiciously fed them training data, set their parameters, corrected their mistakes, and debugged their code. For example, image-recognition algorithms are trained by showing them millions of photographs that humans have already organized or attached descriptive metadata to. Thus, the impressive ability of machines to identify what is shown in an image is ultimately the product of human-machine collaboration, with the human contribution playing the bigger role.

Finally, even the smartest and most capable machines can’t turn themselves on without human help, and still have very “brittle” and task-specific capabilities, so they are fundamentally subservient to humans. A more specific example of engineered subservience is seen in autonomous cars, where the computers are smart enough to drive safely by themselves in almost all road conditions, but laws require the vehicles to monitor the human in the driver’s seat and stop if he or she isn’t paying attention to the road and touching the controls.

Well, well, well…that’s it. I have finally come to the end of my project to review Ray Kurzweil’s predictions for 2019. This has been the longest single effort in the history of my blog, and I’m glad the next round of his predictions pertains to 2029, so I can have time to catch my breath. I would say the experience has been great, but like the whole year of 2020, I’m relieved to be able to turn the page and move on.

Happy New Year!

Links:

  1. Advances in AI during the 2010s forced humans to examine the specialness of human thinking, whether machines could also be intelligent and creative and what it would mean for humans if they could.
    https://www.bbc.com/news/business-47700701
  2. Andrew Yang made technological unemployment and universal basic income (UBI) major components of his 2020 U.S. Presidential campaign platform.
    https://en.wikipedia.org/wiki/Andrew_Yang#2020_presidential_campaign
  3. An article explaining “acoustic gunshot detection”:
    https://www.eff.org/pages/gunshot-detection
  4. The “ShotSpotter” gunshot detection system was emplaced in over 100 cities in 2019.
    https://www.startribune.com/as-gunfire-continues-in-st-paul-so-does-shotspotter-debate/565382652/
  5. This 2019 article from Dayton shows a correlation between the presence of license plate readers and a decrease in violent crime.
    https://www.daytondailynews.com/news/area-police-look-to-license-plates-readers-as-crime-fighting-tool/ESQLILHQP5HJTCIVJL6IJ6T7VU/
  6. In 2018, a wanted criminal was arrested in China after facial recognition cameras identified him at a concert, out of a crowd of 60,000 people.
    https://www.bbc.com/news/world-asia-china-43751276
  7. Edward Snowden’s key revelations about electronic spying.
    https://mashable.com/2014/06/05/edward-snowden-revelations/
  8. An incomplete list of data hacks that happened in the 2010s. Hundreds of millions of people had important personal data compromised.
    https://www.cnn.com/2019/07/30/tech/biggest-hacks-in-history/index.html
  9. A list of commonly used encrypted messaging apps in 2019.
    https://heimdalsecurity.com/blog/the-best-encrypted-messaging-apps/
  10. In 2018, VPNs were widely used on every continent. Forty-four percent of Indonesian internet users had them.
    https://blog.globalwebindex.com/chart-of-the-day/vpn-usage-2018/
  11. If obesity rates are any indication, people in the 2010s were not too poor to feed themselves.
    https://academic.oup.com/eurpub/article/23/3/464/536242
  12. In 2005, obesity became a cause of more childhood deaths than malnourishment. The disparity was surely even greater by 2019. There’s no financial reason why anyone on Earth should starve.
    https://www.factcheck.org/2013/03/bloombergs-obesity-claim/
  13. Several studies done during the 2010s indicated that governments would save money if they gave the homeless free apartments.
    https://www.vox.com/2014/5/30/5764096/homeless-shelter-housing-help-solutions
  14. A 2016 article about Google’s “Deep Dream” program, which can make surreal, artistic images.
    https://www.theguardian.com/artanddesign/2016/mar/28/google-deep-dream-art
  15. A computer-generated painting, “Portrait of Edmond de Belamy,” sold for $423,500 in 2018. Have YOU ever made a painting worth that much money?
    https://edition.cnn.com/style/article/obvious-ai-art-christies-auction-smart-creativity/index.html
  16. “Obvious” is a “collective” of humans and computers that produces acclaimed art.
    https://obvious-art.com/page-about-obvious/
  17. “EMMY” is a machine that can write decent instrumental songs.
    https://www.theatlantic.com/entertainment/archive/2014/08/computers-that-compose/374916/
  18. OpenAI’s “Jukebox” could even write songs that had simulated human voices singing.
    https://openai.com/blog/jukebox/
  19. Samples of GPT-2’s poetry.
    https://www.gwern.net/GPT-2
  20. Samples of GPT-2’s short news articles and written responses to prompts.
    https://openai.com/blog/better-language-models/
  21. “Auto-Tune” is a widely used song editing software program that can seamlessly alter the pitch and tone of a singer’s voice, allowing almost anyone to sound on-key. Most of the world’s top-selling songs were made with Auto-Tune or something similar to it. Are the most popular songs now products of “collaboration between human and machine intelligence”?
    https://en.wikipedia.org/wiki/Auto-Tune
  22. The virtual reality gaming industry had about $1.2 billion in revenues in 2019.
    https://www.juniperresearch.com/press/press-releases/virtual-reality-games-revenues-reach-8-bn-2023
  23. In 2017, terrorists killed 14,300 people globally.
    https://www.jewishvirtuallibrary.org/statistics-on-incidents-of-terrorism-worldwide
  24. The U.S. spent $16.6 billion on cybersecurity in FY2019.
    https://www.fedscoop.com/cybersecurity-budget-2020-trump-white-house/
  25. The U.S. military’s “base” defense budget was $726.2 billion in FY2019.
    https://fas.org/sgp/crs/natsec/R44519.pdf
  26. The U.S. spent $33.6 billion on its nuclear forces in FY2019.
    https://www.cbo.gov/system/files/2019-01/54914-NuclearForces.pdf
  27. The “Phantom X1” ultralight plane.
    https://en.wikipedia.org/wiki/Phantom_X1
  28. Data for several “tiny” flying drones in use with the U.S. Navy in 2019.
    https://www.navy.mil/DesktopModules/ArticleCS/Print.aspx?PortalId=1&ModuleId=724&Article=2159299
  29. Data on the U.S. Army’s unmanned drones, including “tiny” ones, from the same period.
    https://fas.org/irp/program/collect/uas-army.pdf
  30. In 2019, the U.S. Air Force had 5,182 manned aircraft and wanted to buy 10,264 new guided missiles.
    https://www.csis.org/analysis/us-military-forces-fy-2020-air-force
  31. We recently discovered how a mutation in the “SF3B1” gene changes intracellular activity in ways that raise cancer risk.
    https://www.fredhutch.org/en/news/center-news/2019/10/sf3b1-cancer-mutation.html
  32. The Human Genome Project led to major cost improvements to gene sequencing technology, and to the discovery of many disease-associated genes.
    https://unlockinglifescode.org/learn/human-genome-project
  33. We have a better understanding of how cell-level molecular machinery contributes to aging.
    https://pure.au.dk/ws/files/52135662/DemirovicRattanExpGer13.pdf
  34. Official 2018 life expectancy figures for the U.S. and Japan:
    https://www.cdc.gov/nchs/products/databriefs/db355.htm
    https://www.nippon.com/en/features/h00250/life-expectancy-for-japanese-men-and-women-at-new-record-high.html
  35. The 2019 Worldwide Threat Assessment barely mentions biological weapons.
    https://www.dni.gov/files/ODNI/documents/2019-ATA-SFR—SSCI.pdf
  36. Pfizer’s COVID-19 vaccine is the first to incorporate mRNA. The new technology could lead to other vaccines that save millions of lives.
    https://www.wfaa.com/article/news/health/coronavirus/vaccine/what-is-an-mrna-covid-19-vaccine-and-how-does-it-differ-from-other-vaccines/287-240b8181-f13f-47a4-9514-9b6b30988d32
    http://www.rationaloptimist.com/blog/mrna-vaccines-could-revolutionise-medicine/
  37. Several smart watches available in 2019 had ECG monitors.
    https://www.reviewsbreak.com/best-ecg-smartwatch/
    https://www.theverge.com/2018/9/13/17855006/apple-watch-series-4-ekg-fda-approved-vs-cleared-meaning-safe
  38. In 2019, Apple Watches with ECG monitors detected atrial fibrillation events in almost 2,000 people.
    https://news.trust.org/item/20190316134851-5cktc/
  39. The Apple Watch’s “hard fall” detection feature might have already saved the lives of several injured people.
    https://www.nbcnews.com/news/us-news/apple-watch-s-hard-fall-feature-automatically-calls-911-hiker-n1070471
  40. The “HeartGuide” smart watch can monitor blood pressure.
    https://www.medtechdive.com/news/fda-cleared-wearable-blood-pressure-device-hits-market/544908/
  41. The media wrongly declared in 2014 that the “Eugene Goostman” chatbot had passed the Turing Test.
    https://www.bbc.com/news/technology-27762088
    https://www.kurzweilai.net/mt-notes-on-the-announcement-of-chatbot-eugene-goostman-passing-the-turing-test
  42. Google’s “Duplex” AI could masquerade as human for short conversations.
    https://digital.hbs.edu/platform-rctom/submission/google-duplex-does-it-pass-the-turing-test/
  43. The actions by Japan and Saudi Arabia to grant some rights to machines are probably invalid under their own legal frameworks.
    https://www.ersj.eu/journal/1245
  44. Facebook’s image recognition feature relied on a massive training set of data prepared by humans.
    https://engineering.fb.com/2018/05/02/ml-applications/advancing-state-of-the-art-image-recognition-with-deep-learning-on-hashtags/

How Ray Kurzweil’s 2019 predictions are faring (pt 3)

This is the third entry in my series of blog posts that will analyze the accuracy of Ray Kurzweil’s predictions about what things would be like in 2019. These predictions come from his 1998 book The Age of Spiritual Machines. My previous entries on this subject can be found here:

Part 1
Part 2

“You can do virtually anything with anyone regardless of physical proximity. The technology to accomplish this is easy to use and ever present.”

PARTLY RIGHT

While new and improved technologies have made it vastly easier for people to virtually interact, and have even opened new avenues of communication (chiefly, video phone calls) since the book was published in 1998, the reality of 2019 falls short of what this prediction seems to broadly imply. As I’ll explain in detail throughout this blog entry, there are many types of interpersonal interaction that still can’t be duplicated virtually. However, the second part of the prediction seems right. Cell phone and internet networks are much better and have much greater geographic reach, meaning they could be fairly described as “ever present.” Likewise, smartphones, tablet computers, and other devices that people use to remotely interact with each other over those phone and internet networks are cheap, “easy to use and ever present.”

“‘Phone’ calls routinely include high-resolution three-dimensional images projected through the direct-eye displays and auditory lenses.”

WRONG

As stated in previous installments of this analysis, the computerized glasses, goggles and contact lenses that Kurzweil predicted would be widespread by the end of 2019 failed to become so. Those devices would have contained the “direct-eye displays” that would have allowed users to see simulated 3D images of people and other things in their proximities. Not even 1% of 1% of phone calls in 2019 involved both parties seeing live, three-dimensional video footage of each other. I haven’t met one person who reported doing this, whereas I know many people who occasionally do 2D video calls using cameras and traditional screen displays.

Video calls have become routine thanks to better, cheaper computing devices and internet service, but neither party sees a 3D video feed. And, while this is mostly my anecdotal impression, voice-only phone calls are vastly more common in aggregate number and duration than video calls. (I couldn’t find good usage data to compare the two, but don’t see how it’s possible my conclusion could be wrong given the massive disparity I have consistently observed day after day.) People don’t always want their faces or their surroundings to be seen by people on the other end of a call, and the seemingly small extra effort required to do a video call, compared to a mere voice call, is a larger barrier than futurists 20 years ago probably thought it would be.

“Three-dimensional holography displays have also emerged. In either case, users feel as if they are physically near the other person. The resolution equals or exceeds optimal human visual acuity. Thus a person can be fooled as to whether or not another person is physically present or is being projected through electronic communication.”

MOSTLY WRONG

As I wrote in my Prometheus review, 3D holographic display technology falls far short of where Kurzweil predicted it would be by 2019. The machines are very expensive and uncommon, and their resolutions are coarse, with individual pixels and voxels being clearly visible.

Augmented reality glasses lack the fine resolution to display lifelike images of people, but some virtual reality goggles sort of can. First, let’s define what level of resolution a video display would need to look “lifelike” to a person with normal eyesight.

A depiction of a human eye’s horizontal field of view.

A human being’s field of vision is a front-facing, flared-out “cone” with a 210-degree horizontal arc and a 150-degree vertical arc. This means, if you put a concave display in front of a person’s face that was big enough to fill those degrees of horizontal and vertical width, it would fill the person’s entire field of vision, and he would not be able to see the edges of the screen even if he moved his eyes around.

If this concave screen’s pixels were squares measuring one degree of length to a side, then the screen would look like a grid of 210 x 150 pixels. To a person with 20/20 vision, the images on such a screen would look very blocky, and much less detailed than how he normally sees. However, lab tests show that if we shrink the pixels to 1/60th that size, so the concave screen is a grid of 12,600 x 9,000 pixels, then the displayed images look no worse than what the person sees in the real world. Even a person with good eyesight can’t see the individual pixels or the thin lines that separate them, and the display quality is said to be “lifelike.”
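
The arithmetic behind that “lifelike” threshold is simple enough to show directly. The sketch below just multiplies the field-of-view figures above by the roughly 60-pixels-per-degree limit of normal visual acuity.

```python
# "Lifelike" display resolution, derived from the figures above:
# a ~210 x 150 degree field of view and ~60 pixels per degree of visual acuity.
H_FOV_DEG, V_FOV_DEG = 210, 150
PIXELS_PER_DEGREE = 60        # roughly the limit of 20/20 vision

width_px = H_FOV_DEG * PIXELS_PER_DEGREE    # 12,600
height_px = V_FOV_DEG * PIXELS_PER_DEGREE   # 9,000

print(f"{width_px:,} x {height_px:,} = {width_px * height_px:,} pixels")
# 12,600 x 9,000 = 113,400,000 pixels to fill the whole field of vision
```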

The “Varjo VR-1” virtual reality goggles

No commercially available VR goggles have anything close to lifelike displays, either in terms of field of view or 60-pixels-per-degree resolution. Only the “Varjo VR-1” goggles come close to meeting the technical requirements laid out by the prediction: they have 60-pixels-per-degree resolution, but only for the central portions of their display screens, where the user’s eyes are usually looking. The wide margins of the screens are much lower in resolution. If you did a video call in which the other person filmed themselves using a very high-quality 4K camera, and you used Varjo VR-1 goggles to view the live footage while keeping your eyes focused on the middle of the screen, that person might look as lifelike as they would if they were physically present with you.

Problematically, a pair of Varjo VR-1’s costs $6,000. Also, in 2019, it was very uncommon for people to use any brand of VR goggles for video calls. Another major problem is that the goggles are bulky and would block people on the other end of a video call from seeing the upper half of your own face. If both of you wore VR goggles in the hopes of simulating an in-person conversation, the intimacy would be lost because neither of you would be able to see most of the other person’s face.

VR technology simply hasn’t improved as fast as Kurzweil predicted. Trends suggest that goggles with truly lifelike displays won’t exist until 2025 – 2028, and they will be expensive, bulky devices that will need to be plugged into larger computing devices for power and data processing. The resolutions of AR glasses and 3D holograms are lagging even more.

“Routinely available communication technology includes high-quality speech-to-speech language translation for most common language pairs.”

MOSTLY RIGHT

In 2019, there were many speech-to-speech language translation apps on the market, for free or very low cost. The most popular was Google Translate, which had a very high user rating, had been downloaded by over 6 million people, and could do voice translations between 30+ languages.
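
Conceptually, these apps chain three components: speech recognition, text-to-text translation, and speech synthesis. The sketch below shows that pipeline with placeholder function names; it is not the API of Google Translate or of any real library, just an illustration of the structure.

```python
# Conceptual speech-to-speech translation pipeline. The three component
# functions are placeholders, not real library calls.
def recognize_speech(audio: bytes, language: str) -> str:
    """Automatic speech recognition: audio in, text out."""
    raise NotImplementedError

def translate_text(text: str, source: str, target: str) -> str:
    """Machine translation between a language pair."""
    raise NotImplementedError

def synthesize_speech(text: str, language: str) -> bytes:
    """Text-to-speech: text in, audio out."""
    raise NotImplementedError

def speech_to_speech(audio: bytes, source: str, target: str) -> bytes:
    text = recognize_speech(audio, source)              # e.g. Mandarin audio -> Mandarin text
    translated = translate_text(text, source, target)   # Mandarin text -> English text
    return synthesize_speech(translated, target)        # English text -> English audio
```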

The only part of the prediction that remains debatable is the claim that the technology would offer “high-quality” translations. Professional human translators produce more coherent and accurate translations than even the best apps, and it’s probably better to say that machines can do “fair-to-good-quality” language translation. Of course, it must be noted that the technology is expected to improve.

“Reading books, magazines, newspapers, and other web documents, listening to music, watching three-dimensional moving images (for example, television, movies), engaging in three-dimensional visual phone calls, entering virtual environments (by yourself, or with others who may be geographically remote), and various combinations of these activities are all done through the ever present communications Web and do not require any equipment, devices, or objects that are not worn or implanted.”

MOSTLY RIGHT

Reading text is easily and commonly done off of smartphones and tablet computers. Smartphones and small MP3 players are also commonly used to store and play music. All of those devices are portable, can easily download text and songs wirelessly from the internet, and are often “worn” in pockets or carried around by hand while in use. Smartphones and tablets can also be used for two-way visual phone calls, but those involve two-dimensional moving images, and not three as the prediction specified.

As detailed previously, VR technology didn’t advance fast enough to allow people to have “three-dimensional” video calls with each other by 2019. However, the technology is good enough to generate immersive virtual environments where people can play games or do specialized types of work. Though the most powerful and advanced VR goggles must be tethered to desktop PCs for power and data, there are “standalone” goggles like the “Oculus Go” that provide a respectable experience and don’t need to be plugged in to anything else during operation (battery life is reportedly 2 – 3 hours).

“The all-enveloping tactile environment is now widely available and fully convincing. Its resolution equals or exceeds that of human touch and can simulate (and stimulate) all the facets of the tactile sense, including the senses of pressure, temperature, textures, and moistness…the ‘total touch’ haptic environment requires entering a virtual reality booth.”

WRONG

Aside from a few, expensive prototypes, there are no body suits or “booths” that simulate touch sensations. The only kind of haptic technology in widespread use is video game control pads that can vibrate to crudely approximate the feeling of shooting a gun or being next to an explosion.

“These technologies are popular for medical examinations, as well as sensual and sexual interactions…”

WRONG

Though video phone technology has made remote doctor appointments more common, technology has not yet made it possible for doctors to remotely “touch” patients for physical exams. “Remote sex” is unsatisfying and basically nonexistent. Haptic devices (called “teledildonics” when specifically designed for sexual uses) that let people remotely send and receive physical forces exist, but they are too expensive and technically limited to find widespread use.

“Rapid economic expansion and prosperity has continued.”

PARTLY RIGHT

Assessing this prediction requires a consideration of the broader context in the book. In the chapter titled “2009,” which listed predictions that would be true by that year, Kurzweil wrote, “Despite occasional corrections, the ten years leading up to 2009 have seen continuous economic expansion and prosperity…” The prediction for 2019 says that phenomenon “has continued,” so it’s clear he meant that economic growth for the time period from 1998 – December 2008 would be roughly the same as the growth from January 2009 – December 2019. Was it?

U.S. real GDP growth rate (year-over-year)

The above chart shows the U.S. GDP growth rate. The economy continuously grew during the 1998 – 2019 timeframe, except for most of 2009, which was the nadir of the Great Recession.

OECD GDP growth rate from 1998 – 2019

Above is a chart I made using data for the OECD for the same time period. The post-Great Recession GDP growth rates are slightly lower than the pre-recession era’s, but growth is still happening.

Global GDP growth rate from 1998 – 2019

And this final chart shows global GDP growth over the same period.

Clearly, the prediction’s big miss was the Great Recession, but to be fair, nearly every economist in the world failed to foresee it–even in early 2008, many of them thought the economic downturn that was starting would be a run-of-the-mill recession that the world economy would easily bounce back from. The fact that something as bad as the Great Recession happened at all means the prediction is wrong in an important sense, as it implied that economic growth would be continuous, but it wasn’t since it went negative for most of 2009, in the worst downturn since the 1930s.

At the same time, Kurzweil was unwittingly prescient in picking January 1, 2009 as the boundary of his two time periods. As the graphs show, that creates a neat symmetry to his two timeframes, with the first being a period of growth ending with a major economic downturn and the second being the inverse.

While GDP growth was higher during the first timeframe, the difference is less dramatic than it looks once one remembers that much of what happened from 2003 – 2007 was “fake growth” fueled by widespread irresponsible lending and transactions involving concocted financial instruments that pumped up corporate balance sheets without creating anything of actual value. If we lower the heights of the line graphs for 2003 – 2007 so we only see “honest GDP growth,” then the two time periods do almost look like mirror images of each other. (Additionally, if we assume that adjustment happened because of the actions of wiser financial regulators who kept the lending bubbles and fake investments from coming into existence in the first place, then we can also assume that stopped the Great Recession from happening, in which case Kurzweil’s prediction would be 100% right.) Once we make that adjustment, then we see that economic growth for the time period from 1998 – December 2008 was roughly the same as the growth from January 2009 – December 2019.

“The vast majority of transactions include a simulated person, featuring a realistic animated personality and two-way voice communication with high-quality natural-language understanding.”

WRONG

“Simulated people” of this sort are used in almost no transactions. The majority of transactions are still done face-to-face, and between two humans only. While online transactions are getting more common, the nature of those transactions is much simpler than the prediction described: a buyer finds an item he wants on a retailer’s internet site, clicks a “Buy” button, and then inputs his address and method of payment (these data are often saved to the buyer’s computing device and are automatically uploaded to save time). It’s entirely text- and button-based, and is simpler, faster, and better than the inefficient-sounding interaction with a talking video simulacrum of a shopkeeper.

As with the failure of video calls to become more widespread, this development indicates that humans often prefer technology that is simple and fast to use over technology that is complex and more involving to use, even if the latter more closely approximates a traditional human-to-human interaction. The popularity of text messaging further supports this observation.

“Often, there is no human involved, as a human may have his or her automated personal assistant conduct transactions on his or her behalf with other automated personalities. In this case, the assistants skip the natural language and communicate directly by exchanging appropriate knowledge structures.”

MOSTLY WRONG

The only instances in which average people entrust their personal computing devices to automatically buy things on their behalf involve stock trading. Even small-time traders can use automated trading systems and customize them with “stops” that buy or sell preset quantities of specific stocks once the share price reaches prespecified levels. Those stock trades only involve computer programs “talking” to each other–one on behalf of the seller and the other on behalf of the buyer. Only a small minority of people actively trade stocks.
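
As an illustration of the one case that does exist, here is a minimal sketch of the stop-order logic described above. It is a toy example, not how any particular brokerage implements its trading engine.

```python
# Toy sketch of the stop-order logic described above: the program watches the
# share price and places an order on the owner's behalf once a preset level
# is crossed. Not a real brokerage engine.
def check_stops(price, stops):
    orders = []
    for stop in stops:
        if stop["side"] == "sell" and price <= stop["trigger"]:
            orders.append(f"SELL {stop['quantity']} {stop['symbol']} at market")
        elif stop["side"] == "buy" and price >= stop["trigger"]:
            orders.append(f"BUY {stop['quantity']} {stop['symbol']} at market")
    return orders

stops = [{"symbol": "XYZ", "side": "sell", "trigger": 95.0, "quantity": 100}]
print(check_stops(94.50, stops))  # ['SELL 100 XYZ at market']
```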

“Household robots for performing cleaning and other chores are now ubiquitous and reliable.”

PARTLY RIGHT

Small vacuum cleaner robots are affordable, reliable, clean carpets well, and are common in rich countries (though it still seems like fewer than 10% of U.S. households have one). Several companies make them, and highly rated models range in price from $150 – $250. Robot “mops,” which look nearly identical to their vacuum cleaning cousins, but use rotating pads and squirts of hot water to clean hard floors, also exist, but are more recent inventions and are far rarer. I’ve never seen one in use and don’t know anyone who owns one.

The iRobot Roomba 960 is a highly rated robot vacuum cleaner.

No other types of household robots exist in anything but token numbers, meaning the part of the prediction that says “and other chores” is wrong. Furthermore, it’s wrong to say that the household robots we do have in 2019 are “ubiquitous,” as that word means “existing or being everywhere at the same time : constantly encountered : WIDESPREAD,” and vacuum and mop robots clearly are not any of those. Instead, they are “common,” meaning people are used to seeing them, even if they are not seen every day or even every month.

“Automated driving systems have been found to be highly reliable and have now been installed in nearly all roads. While humans are still allowed to drive on local roads (although not on highways), the automated driving systems are always engaged and are ready to take control when necessary to prevent accidents.”

WRONG*

The “automated driving systems” were mentioned in the “2009” chapter of predictions, and are described there as being networks of stationary road sensors that monitor road conditions and traffic, and transmit instructions to car computers, allowing the vehicles to drive safely and efficiently without human help. These kinds of roadway sensor networks have not been installed anywhere in the world. Moreover, no public roads are closed to human-driven vehicles and only open to autonomous vehicles.

Newer cars come with many types of advanced safety features that are “always engaged,” such as blind spot sensors, driver attention monitors, forward-collision warning sensors, lane-departure warning systems, and pedestrian detection systems. However, having those devices isn’t mandatory, and they don’t override the human driver’s inputs–they merely warn the driver of problems. Automated emergency braking systems, which use front-facing cameras and radars to detect imminent collisions and apply the brakes if the human driver fails to do so, are the only safety systems that “are ready to take control when necessary to prevent accidents.” They are not common now, but will become mandatory in the U.S. starting in 2022.
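
To illustrate the kind of decision an automated emergency braking system makes, here is a simplified time-to-collision check. Real systems fuse radar and camera data and use far more sophisticated models; this is only a sketch of the basic idea.

```python
# Simplified automated-emergency-braking decision: estimate the time to
# collision from the gap to the obstacle and the closing speed, and brake if
# it falls below a threshold. Real systems are far more sophisticated.
def should_brake(gap_m, closing_speed_mps, threshold_s=1.5):
    if closing_speed_mps <= 0:      # not closing on the obstacle
        return False
    time_to_collision = gap_m / closing_speed_mps
    return time_to_collision < threshold_s

print(should_brake(gap_m=20.0, closing_speed_mps=15.0))  # True  (~1.3 s to impact)
print(should_brake(gap_m=60.0, closing_speed_mps=15.0))  # False (~4.0 s to impact)
```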

*While the roadway sensor network wasn’t built as Kurzweil foresaw, it turns out it wasn’t necessary. By the end of 2019, self-driving car technology had reached impressive heights, with the most advanced vehicles being capable of “Level 3” autonomy, meaning they could undertake long, complex road trips without problems or human assistance (however, out of an abundance of caution, the manufacturers of these cars built in features requiring the human drivers to keep their hands on the steering wheel and their eyes on the road while the autopilot modes were active). Moreover, this could be done without the help of any sensors emplaced along the highways. The GPS network has proven itself an accurate source of real-time location data for autonomous cars, obviating the need to build expensive new infrastructure paralleling the roads.

In other words, while Kurzweil got several important details wrong, the overall state of self-driving car technology in 2019 only fell a little short of what he expected.

“Efficient personal flying vehicles using microflaps have been demonstrated and are primarily computer controlled.”

UNCLEAR (but probably WRONG)

The vagueness of this prediction’s wording makes it impossible to evaluate. What does “efficient” refer to? Fuel consumption, speed with which the vehicle transports people, or some other quality? Regardless of the chosen metric, how well must it perform to be considered “efficient”? The personal flying vehicles are supposed to be efficient compared to what?

A man on a flying skateboard participated in France’s 2019 Bastille Day military parade. The device counts as a “personal flying vehicle,” but it is impractical and very dangerous to use. It can travel about five miles in 10 minutes on one full tank of fuel, and can take off and land almost anywhere. Is it “efficient”?

What is a “personal flying vehicle”? A flying car, which is capable of flight through the air and horizontal movement over roads, or a vehicle that is capable of flight only, like a small helicopter, autogyro, jetpack, or flying skateboard?

But even if we had answers to those questions, it wouldn’t matter much, since “have been demonstrated” is an escape hatch allowing Kurzweil to claim at least some measure of correctness on this prediction: it is true if just two prototypes of personal flying vehicles have been built and tested in a lab. “Are widespread” or “Are routinely used by at least 1% of the population” would have been meaningful statements that made it possible to assess the prediction’s accuracy. “Have been demonstrated” sets the bar so low that it’s almost impossible to be wrong.

Diagram showing what a “Gurney flap” / “microflap” is.

At least the prediction contains one, well-defined term: “microflaps.” These are small, skinny control surfaces found on some aircraft. They are fixed in one position, and in that configuration are commonly called “Gurney flaps,” but experiments have also been done with moveable microflaps. While useful for some types of aircraft, Gurney flaps are not essential, and moveable microflaps have not been incorporated into any mass-produced aircraft designs.

“There are very few transportation accidents.”

WRONG

Tens of millions of serious vehicle accidents happen in the world every year, and road accidents killed 1.35 million people worldwide in 2016, the last year for which good statistics are available. Globally, the per capita death rate from vehicle accidents has changed little since 2000, shortly after the book was published, and it has been the tenth most common cause of death for the 2000 – 2016 time period.

In the U.S., over 40,000 people died due to transportation accidents in 2017, the last year for which good statistics are available.

“People are beginning to have relationships with automated personalities as companions, teachers, caretakers, and lovers.”

WRONG

As I noted in part 1 of this analysis, even the best “automated personalities” like Alexa, Siri, and Cortana are clearly machines and are not likeable or relatable to humans at any emotional level. Ironically, by 2019, one of the great social ills in the Western world was the extent to which personal technologies had isolated people and made them unhappy, coupled with a growing appreciation of how important regular interpersonal interaction is to human mental health.

Aaaaaand that’s it for now. I originally estimated this project to analyze all of Ray Kurzweil’s 2019 predictions could be spread out over three blog entries, but it has taken even more time and effort than I anticipated, and I need one more. Stay tuned, the fourth AND FINAL installment is coming soon!

Links:

  1. A 2018 survey found that most American adults spent an average of 24-41 minutes per day on phone calls. The survey didn’t break that number out into traditional voice-only calls and video calls.
    https://www.zdnet.com/article/americans-spend-far-more-time-on-their-smartphones-than-they-think/
  2. Another 2018 survey commissioned by the telecom company Vonage found that “1 in 3 people live video chat at least once a week.” That means 2 in 3 people use the technology less often than that, perhaps not at all. The data from this and the previous source strongly suggest that voice-only calls were much more common than video calls, which strongly aligns with my everyday observations.
    https://www.vonage.com/resources/articles/video-chatterbox-nation-report-2018/
  3. A person with 20/20 vision basically sees the world as a wraparound TV screen that is 12,600 pixels wide x 9,000 pixels high (total: 113.4 million pixels). VR goggles with resolutions that high will become available between 2025 and 2028, making “lifelike” virtual reality possible.
    https://www.microsoft.com/en-us/research/uploads/prod/2018/02/perfectillusion.pdf
  4. The “Varjo VR-1” virtual reality goggles cost $6,000 and can display lifelike images at the centers of their screens.
    https://www.cnet.com/news/the-best-vr-display-ive-ever-seen-varjo-vr-1-costs-6000/
  5. A roundup of the top ten speech-to-speech language translation apps of 2019.
    https://www.daytranslations.com/blog/top-10-free-language-translation-apps/
  6. A 2018 study found that the best English-Mandarin machine translation programs were inferior to professional human translators.
    https://www.technologyreview.com/2018/09/05/140487/human-translators-are-still-on-top-for-now/
  7. The “Oculus Go” is a VR headset that doesn’t need to be plugged into anything else for electricity or data processing. It’s a fully self-contained device.
    https://www.cnet.com/reviews/oculus-go-review/
  8. As this 2019 article makes clear, virtual haptic technology is far less advanced than Kurzweil predicted it would be.
    https://www.scientificamerican.com/article/new-virtual-reality-interface-enables-touch-across-long-distances/
  9. An account of a firsthand experience with cutting-edge (no pun intended) teledildonics in 2018:
    https://www.engadget.com/2018-07-02-flirt4free-teledildonics-long-distance-sex.html
  10. A 2019 analysis shows that the vast majority of transactions in the U.S. are still done face-to-face between humans, but e-commerce’s share is steadily growing.
    https://www.digitalcommerce360.com/article/us-ecommerce-sales/
  11. A roundup of the highest-rated robot vacuum cleaners of 2019:
    https://www.techhive.com/article/3388038/best-robot-vacuums-on-amazon.html
  12. A list of advanced car safety features from 2019:
    https://www.caranddriver.com/features/g27612164/car-safety-features/
  13. Tesla Autopilot is capable of Level 3 autonomous driving. However, out of an abundance of caution (e.g. – just one accident generates enormous bad publicity), the company has installed features that cap it at Level 2.
    https://electrek.co/2019/09/19/tesla-autopilot-v10-commute-without-driver-intervention/
  14. French inventor Franky Zapata designed a flying skateboard called the “Flyboard Air,” and used it to cross the English Channel and wow crowds during the 2019 Bastille Day military parade.
    https://www.theverge.com/2019/8/4/20753648/jet-powered-hoverboard-english-channel-crossing-franky-zapata-success
  15. These World Health Organization reports show that deadly road accidents were about as common in 2016 as they were in 2000. It’s still a leading cause of death.
    https://www.who.int/news-room/fact-sheets/detail/the-top-10-causes-of-death
    https://apps.who.int/iris/bitstream/handle/10665/277370/WHO-NMH-NVI-18.20-eng.pdf?ua=1
  16. The CDC reported that 43,024 people died in the U.S. in 2017 of “Transport accidents.” Only 1,718 of those did not involve road vehicles.
    https://www.cdc.gov/nchs/data/nvsr/nvsr68/nvsr68_09_tables-508.pdf

How Ray Kurzweil’s 2019 predictions are faring (pt 2)

This is the second entry in my series of blog posts that will analyze the accuracy of Ray Kurzweil’s predictions about what things would be like in 2019. These predictions come from his 1998 book The Age of Spiritual Machines. My first entry on this subject can be found here.

“Hand-held displays are extremely thin, very high resolution, and weigh only ounces.”

RIGHT

The Samsung Galaxy Tab S5 is, by any reasonable account, extremely thin and very high resolution, and it weighs ounces. New, it costs less than $500, making it affordable for millions of average people. There are even better tablet computers than this.

The tablet computers and smartphones of 2019 meet these criteria. For example, the Samsung Galaxy Tab S5 is only 0.22″ thick, has a resolution high enough (3840 x 2160 pixels) that the human eye can’t discern individual pixels at normal viewing distances, and weighs 14 ounces (less than a pound, so its weight is properly expressed in ounces).
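
Whether the eye can resolve individual pixels depends on pixel density and viewing distance. The sketch below works that out for the resolution quoted above, assuming a roughly 10.5-inch screen held about 15 inches from the eye; both of those figures are my assumptions for illustration, not manufacturer specifications.

```python
import math

# Can the eye resolve individual pixels? Assumes a 3840 x 2160 panel, a
# 10.5-inch diagonal, and a 15-inch viewing distance (the diagonal and the
# distance are illustrative assumptions). 20/20 vision resolves roughly
# 1/60 of a degree, i.e. one arcminute.
width_px, height_px = 3840, 2160
diagonal_in = 10.5
viewing_distance_in = 15.0

ppi = math.hypot(width_px, height_px) / diagonal_in   # pixels per inch
pixel_angle_deg = math.degrees(math.atan((1 / ppi) / viewing_distance_in))

print(f"{ppi:.0f} PPI, {pixel_angle_deg * 60:.2f} arcminutes per pixel")
# Well under 1 arcminute, so individual pixels are indistinguishable to 20/20 vision.
```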

The smartphones of 2019 also meet Kurzweil’s criteria.

“People read documents either on the hand-held displays or, more commonly, from text that is projected into the ever present virtual environment using the ubiquitous direct-eye displays. Paper books and documents are rarely used or accessed.

MOSTLY WRONG

A careful reading of this prediction makes it clear that Kurzweil believed AR glasses would be commonest way people would read text documents by late 2019. The second most common method would be to read the documents off of smartphones and tablet computers. A distant last place would be to read old-fashioned books with paper pages. (Presumably, reading text off of a laptop or desktop PC monitor was somewhere between the last two.)

The first part of the prediction is badly wrong. At the end of 2019, there were fewer than 1 million sets of AR glasses in use around the world. Even if all of their owners were bibliophiles who spent all their waking hours using their glasses to read documents that were projected in front of them, it would be mathematically impossible for that to constitute the #1 means by which the human race, in aggregate, read written words.
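
A rough back-of-the-envelope calculation makes the point. Every number below is a deliberately crude assumption chosen to favor the AR glasses, and the conclusion still holds.

```python
# Back-of-the-envelope check of the "#1 reading method" claim. All figures
# are crude assumptions for illustration, deliberately generous to AR glasses.
ar_glasses_owners = 1_000_000        # upper bound cited above
ar_reading_hours_per_day = 16        # absurdly generous: every waking hour

world_readers = 5_000_000_000        # rough guess at literate people worldwide
avg_reading_hours_per_day = 0.5      # rough guess at reading of any kind

ar_hours = ar_glasses_owners * ar_reading_hours_per_day
all_hours = world_readers * avg_reading_hours_per_day

print(f"AR glasses' share of humanity's reading time: {ar_hours / all_hours:.2%}")
# ~0.64%, nowhere near the #1 way people read
```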

The bar chart shows yearly sales of paper books in the U.S. Sales declined in the early 2010s due to the debut of e-readers and smartphones, but then they recovered a great deal. Books aren’t dead.

Certainly, it is now much more common for people to read documents on handheld displays like smartphones and tablets than at any time in the past, and paper’s dominance of the written medium is declining. Additionally, there are surely millions of Americans who, like me, do the vast majority of their reading (whether for leisure or work) off of electronic devices and computer screens. However, old-fashioned print books, newspapers, magazines, and packets of workplace documents are far from extinct, and it is inaccurate to claim they “are rarely used or accessed,” both in the relative and absolute senses of the statement. As the bar chart above shows, sales of print books were actually slightly higher in 2019 than they were in 2004, which was near the time when The Age of Spiritual Machines was published.

Sales of “graphic paper” have dropped in rich countries over the last 20 years and will also start dropping in poor countries soon.

Finally, sales of “graphic paper”–which is an industry term for paper used in newsprint, magazines, office printer paper, and other common applications–were still high in 2019, even if they were trending down. If 110 million metric tons of graphic paper were sold in 2019, then it can’t be said that “Paper books and documents are rarely used or accessed.” Anecdotally, I will say that, though my office primarily uses all-digital documents, it is still common to use paper documents, and in fact it is sometimes preferable to do so.

Most twentieth-century paper documents of interest have been scanned and are available through the wireless network.”

RIGHT

The wording again makes it impossible to gauge the prediction’s accuracy. What counts as a “paper document”? For sure, we can say it includes bestselling books, newspapers of record, and leading science journals, but what about books that only sold a few thousand copies, small-town newspapers, and third-tier science journals? Are we also counting the mountains of government reports produced and published worldwide in the last century, mostly by obscure agencies and about narrow, bland topics? Equally defensible answers could result in document numbers that are orders of magnitude different.

Also, the term “of interest” provides Kurzweil with an escape hatch because its meaning is subjective. If it were the case that electronic scans of 99% of the books published in the twentieth century were NOT available on the internet in 2019, he could just say “Well, that’s because those books aren’t of interest to modern people” and he could then claim he was right.

It would have been much better if the prediction included a specific metric, like: “By the end of 2019, electronic versions of at least 1 million full-length books written in the twentieth century will be available through the wireless network.” Alas, it doesn’t, and Kurzweil gets this one right on a technicality.

For what it’s worth, I think the prediction was also right in spirit. Millions of books are now available to read online, and that number includes most of the 20th century books that people in 2019 consider important or interesting. One of the biggest repositories of e-books, the “Internet Archive,” has 3.8 million scanned books, and they’re free to view. (Google actually scanned 25 million books with the intent to create something like its own virtual library, but lawsuits from book publishers have put the project into abeyance.)

The New York Times, America’s newspaper of record, has made scans of every one of its issues since its founding in 1851 available online, as have other major newspapers such as the Washington Post. The cursory research I’ve done suggests that all or almost all issues of the biggest American newspapers are now available online, either through company websites or third party sites like newspapers.com.

The U.S. National Archives has scanned over 92 million pages of government documents, and made them available online. Primacy was given to scanning documents that were most requested by researchers and members of the public, so it could easily be the case that most twentieth-century U.S. government paper documents of interest have been scanned. Additionally, in two years the Archives will start requiring all U.S. agencies to submit ONLY digital records, eliminating the very cumbersome middle step of scanning paper, and thenceforth ensuring that government records become available to and easily searchable by the public right away.

The New England Journal of Medicine, the journal Science, and the journal Nature all offer scans of past issues dating back to their foundings in the 1800s. I lack the time to check whether this is also true for other prestigious academic journals, but I strongly suspect it is. All of the seminal papers documenting the significant scientific discoveries of the 20th century are now available online.

Without a doubt, the internet and a lot of diligent people scanning old books and papers have improved the public’s access to written documents and information by orders of magnitude compared to 1998. It truly is a different world.

“Most learning is accomplished using intelligent software-based simulated teachers. To the extent that teaching is done by human teachers, the human teachers are often not in the local vicinity of the student. The teachers are viewed more as mentors and counselors than as sources of learning and knowledge.”

WRONG*

The technology behind and popularity of online learning and AI teachers didn’t advance as fast as Kurzweil predicted. At the end of 2019, traditional in-person instruction was far more common than and was widely considered to be superior to online learning, though the latter had niche advantages.

However, shortly after 2019 ended, the COVID-19 pandemic forced most of the world into quarantine in an effort to slow the virus’ spread. Schools, workplaces, and most other places where people usually gathered were shut down, and people the world over were forced to do everyday activities remotely. American schools and universities switched to online classrooms in what might be looked at as the greatest social experiment of the decade. For better or worse, most human teachers were no longer in the local vicinity of their students.

Thus, part of Kurzweil’s prediction came true, a few months late and as an unwelcome emergency measure rather than as a voluntary embrace of a new educational paradigm. Unfortunately, student reactions to online learning have been mostly negative. A 2020 survey found that most college students believed it was harder to absorb knowledge and to learn new skills through online classrooms than it was through in-person instruction. Almost all of them unsurprisingly said that traditional classroom environments were more useful for developing social skills. The survey data I found on the attitudes of high school students showed that most of them considered distance learning to be of inferior quality. Public school teachers and administrators across the country reported higher rates of student absenteeism when schools switched to 100% online instruction, and their support for it measurably dropped as time passed.

The COVID-19 lockdowns have made us confront hard truths about virtual learning. It hasn’t been the unalloyed good that Kurzweil seems to have expected, though technological improvements that make the experience more immersive (e.g. – faster internet to reduce lag, virtual reality headsets) will surely solve some of the problems that have come to light.

“Students continue to gather together to exchange ideas and to socialize, although even this gathering is often physically and geographically remote.”

RIGHT

As I described at length, traditional in-person classroom instruction remained the dominant educational paradigm in late 2019, which of course means that students routinely gathered together for learning and socializing. The second part of the prediction is also right, since social media, cheaper and better computing devices and internet service, and videophone apps have made it much more common for students of all ages to study, work, and socialize together virtually than they did in 1998.

“All students use computation. Computation in general is everywhere, so a student’s not having a computer is rarely an issue.”

MOSTLY RIGHT

First, Kurzweil’s use of “all” was clearly figurative and not literal. If pressed on this back in 1998, surely he would have conceded that even in 2019, students living in Amish communities, living under strict parents who were paranoid technophobes, or living in the poorest slums of the poorest or most war-wrecked country would not have access to computing devices that had any relevance to their schooling.

Second, note the use of “computation” and “computer,” which are very broad in meaning. As I wrote in the first part of this analysis, “A computer is a device that stores and processes data, and executes its programming. Any machine that meets those criteria counts as a computer, regardless of how fast or how powerful it is…something as simple as a pocket calculator, programmable thermostat, or a Casio digital watch counts as a computer.”

With these two caveats in mind, it’s clear that “all students use computation” by default since all people except those in the most deprived environments routinely interact with computing devices. It is also true that “computation in general is everywhere,” and the prediction merely restates this earlier prediction: “Computers are now largely invisible. They are embedded everywhere…” In the most literal sense, most of the prediction is correct.

However, a judgement is harder to make if we consider whether the spirit of the prediction has been fulfilled. In context, the prediction’s use of “computation” and “computer” surely refers to devices that let students efficiently study materials, watch instructional videos, and do complex school assignments like writing essays and solving math problems. These devices would have also required internet access to perform some of those key functions. At least in the U.S., virtually all schools in late 2019 had computer terminals with speedy internet access that students could use for free. A school without either of those would be considered very unusual. Likewise, almost all of the country’s public libraries have public computer terminals and internet service (and, of course, books), which people can use for their studies and coursework if they don’t have computers or internet in their homes.

At the same time, 17% of students in the U.S. still don’t have computers in their homes and 18% have no internet access or very slow service (there’s probably large overlap between people in those two groups). Mostly this is because they live in remote areas where it isn’t profitable for telecom companies to install high-speed internet lines, or because they belong to extremely poor or disorganized households. This lack of access to computers and internet service results in measurably worse academic performance, a phenomenon called the “homework gap” or the “digital gap.” With this in mind, it’s questionable whether the prediction’s last claim, that “a student’s not having a computer is rarely an issue” has come true.

“Most adult human workers spend the majority of their time acquiring new skills and knowledge.”

WRONG

This is so obviously wrong that I don’t need to present any data or studies to support my judgement. With a tiny number of exceptions, employed adults spend most of their time at work using the same skills over and over to do the same set of tasks. Yes, today’s jobs are more knowledge-based and technology-based than ever before, and a greater share of jobs require formal degrees and training certificates than ever, but few professions are so complex or fast-changing that workers need to spend most of their time learning new skills and knowledge to keep up.

In fact, since the Age of Spiritual Machines was published, a backlash against the high costs and necessity of postsecondary education–at least as it is in America–has arisen. Sentiment is growing that the four-year college degree model is wasteful, obsolete for most purposes, and leaves young adults saddled with debts that take years to repay. Sadly, I doubt these critics will succeed in bringing about serious reforms to the system.

If and when we reach the point where a postsecondary degree is needed just to get a respectable entry-level job, and where merely keeping that job or moving up to the next rung on the career ladder requires workers to spend more than half their time learning new skills and knowledge–whether due to competition from machines that keep getting better and taking over jobs, or due to the frequent introduction of new technologies that human workers must learn to use–then I predict a large share of humans will become chronically demoralized and will drop out of the workforce. This is a phenomenon I call “job automation escape velocity,” and I intend to discuss it at length in a future blog post.

“Blind persons routinely use eyeglass-mounted reading-navigation systems, which incorporate the new, digitally controlled, high-resolution optical sensors. These systems can read text in the real world, although since most print is now electronic, print-to-speech reading is less of a requirement. The navigation function of these systems, which emerged about ten years ago, is now perfected. These automated reading-navigation assistants communicate to blind users through both speech and tactile indicators. These systems are also widely used by sighted persons since they provide a high-resolution interpretation of the visual world.”

PARTLY RIGHT

As stated previously, AR glasses have not yet been successful on the commercial market and are used by almost no one, blind or sighted. However, there are smartphone apps meant for blind people that use the phone’s camera to scan what is in front of the person, and they have the range of functions Kurzweil described. For example, the “Seeing AI” app can recognize text and read it out loud to the user, and can recognize common objects and familiar people and verbally describe or name them.

Additionally, there are other smartphone apps, such as “BlindSquare,” which use GPS and detailed verbal instructions to guide blind people to destinations. It also describes nearby businesses and points of interest, and can warn users of nearby curbs and stairs.

Apps that are made specifically for blind people are not in wide usage among sighted people.

“Retinal and vision neural implants have emerged but have limitations and are used by only a small percentage of blind persons.”

MOSTLY RIGHT

Retinal implants exist and can restore limited vision to people with certain types of blindness. However, they provide only a very coarse level of sight, are expensive, and require the use of body-worn accessories to collect, process, and transmit visual data to the eye implant itself. The “Argus II” device is the only retinal implant system available in the U.S., and the FDA approved it in 2013. As of this writing, the manufacturer’s website claimed that only 350 blind people worldwide used the systems, which indeed counts as “only a small percentage of blind persons.”

The “Argus II” system consists of an electronic device surgically implanted in a person’s retina which receives vision data from externally-worn camera glasses and a data processing unit.

The meaning of “vision neural implants” is unclear, but it presumably refers to devices that connect directly to a blind person’s optic nerve or visual cortex. While some human medical trials are underway, none of the implants have been approved for general use, nor does that look poised to change.

“Deaf persons routinely read what other people are saying through the deaf persons’ lens displays.”

MOSTLY WRONG

“Lens displays” is clearly referring to those inside augmented reality glasses and AR contact lenses, so the prediction says that a person wearing such eyewear would be able to see speech subtitles across his or her field of vision. While there is at least one model of AR glasses–the Vuzix Blade–that has this capability, almost no one uses them because, as I explored in part 1 of this review, AR glasses failed on the commercial market. By extension, this means the prediction also failed to come true since it specified that deaf people would “routinely” wear AR glasses by 2019.

A person wearing Vuzix Blade glasses can download the “Zoi Meet” app into the device and have subtitles of spoken words displayed across their field of vision.

However, in the prediction’s defense, deaf people commonly use real-time speech-to-text apps on their smartphones. While not as convenient as having captions displayed across one’s field of view, it still makes communication with non-deaf people who don’t know sign language much easier. Google, Apple, and many other tech companies have fielded high-quality apps of this nature, some of which are free to download. Deaf people can also type words into their smartphones and show them to people who can’t understand sign language, which is easier than the old-fashioned method of writing things down on notepad pages and slips of paper.

Additionally, video chat / video phone technology is widespread and has been a boon to deaf people. By allowing callers to see each other, video calls let deaf people remotely communicate with each other through sign language, facial expressions and body movements, letting them experience levels of nuanced dialog that older text-based messaging systems couldn’t convey. Video chat apps are free or low-cost, and can deliver high-quality streaming video, and the apps can be used even on small devices like smartphones thanks to their forward-facing cameras.

In conclusion, while the specifics of the prediction were wrong, the general sentiment that new technologies, specifically portable devices, would greatly benefit deaf people was right. Smartphones, high-speed internet, and cheap webcams have made deaf people far more empowered in 2019 than they were in 1998.

“There are systems that provide visual and tactile interpretations of other auditory experiences such as music, but there is debate regarding the extent to which these systems provide an experience comparable to that of a hearing person.”

RIGHT

There is an Apple phone app called “BW Dance” meant for the deaf that converts songs into flashing lights and vibrations that are said to approximate the notes of the music. However, there is little information about the app and it isn’t popular, which makes me think deaf people have not found it worthy of buying or talking about. Though apparently unsuccessful, the existence of the BW Dance app meets all the prediction’s criteria. The prediction says nothing about whether the “systems” will be popular among deaf people by 2019–it just says the systems will exist.

The “Not Impossible” music suit.

That’s probably an unsatisfying answer, so let me mention some additional research findings. A company called “Not Impossible Labs” sells body suits designed for deaf people that convert songs into complex patterns of vibrations transmitted into the wearer’s body through 24 different touch points. The suits are well-reviewed, and it’s easy to believe that they’d provide a much richer sensory experience than a buzzing smartphone with the BW Dance app would. However, the suits lack any sort of displays, meaning they don’t meet the criterion of providing users a visual interpretation of songs.

There are many “music visualization” apps that create patterns of shapes, colors, and lines to convey the musical structures of songs, and some deaf people report they are useful in that role. It would probably be easy to combine a vibrating body suit with AR glasses to provide wearers with immersive “visual and tactile interpretations” of music. The technology exists, but the commercial demand does not.

“Cochlear and other implants for improving hearing are very effective and are widely used.”

RIGHT

Since receiving FDA approval in 1984, cochlear implants have significantly improved in quality and have become much more common among deaf people. While the level of benefit varies widely from one user to another, the average user ends up hearing well enough to carry on a phone conversation in a quiet room. That means cochlear implants are “very effective” for most people who use them, since the alternative is usually having no sense of hearing at all. Cochlear implants are in fact so effective that they’ve spurred fears among deaf people that the devices will eradicate Deaf culture and end the use of sign language, leading some deaf people to reject the implants even though their senses would benefit.

Cochlear implants provide increasing benefits to users as their technology improves.
Cochlear implant sales have been increasing in the U.S. as more deaf people have the devices installed. Some deaf people fear the technology will make their culture extinct.

Other types of implants for improving hearing also exist, including middle ear implants, bone-anchored hearing aids, and auditory brainstem implants. While some of these alternatives are more optimal for people with certain hearing impairments, they haven’t had the same impact on the Deaf community as cochlear implants.

“Paraplegic and some quadriplegic persons routinely walk and climb stairs through a combination of computer-controlled nerve stimulation and exoskeletal robotic devices.”

WRONG

Paraplegics and quadriplegics use the same wheelchairs they did in 1998, and they can only traverse stairs that have electronic lift systems. As noted in my Prometheus review, powered exoskeletons exist today, but almost no one uses them, probably due to very high costs and practical problems. Some rehabilitation clinics for people with spinal cord and leg injuries use therapeutic techniques in which the disabled person’s legs and spine are connected to electrodes that activate in sequences that assist them to walk, but these nerve and muscle stimulation devices aren’t used outside of those controlled settings. To my knowledge, no one has built the sort of prosthesis that Kurzweil envisioned, which was a powered exoskeleton that also had electrodes connected to the wearer’s body to stimulate leg muscle movements.

“Generally, disabilities such as blindness, deafness, and paraplegia are not noticeable and are not regarded as significant.”

WRONG (sadly)

As noted, technology has not improved the lives of disabled people as much as Kurzweil predicted it would between 1998 and 2019. Blind people still need to use walking canes, most deaf people don’t have hearing implants of any sort (and if they do, their hearing is still much worse than average), and paraplegics still use wheelchairs. Their disabilities are often noticeable at a glance, and always after a few moments of face-to-face interaction.

Blindness, deafness, and paraplegia still have many significant negative impacts on people afflicted with them. As just one example, employment rates and average incomes for working-age people with those infirmities are all lower than they are for people without. In 2019, the U.S. Social Security program still viewed those conditions as disabilities and paid welfare benefits to people with them.

Links:

  1. There were fewer than 1 million augmented reality glasses in the world at the end of 2019. https://arinsider.co/2019/09/11/5-million-ar-headsets-by-2023/
  2. Sales of print books in 2017 were not much different from what they probably were in 1999, when the Age of Spiritual Machines was published. https://www.publishersweekly.com/pw/by-topic/industry-news/publisher-news/article/75735-sales-of-print-books-increased-slightly-in-2017.html
  3. Sales figures for “graphic paper” prove that, while paper books, newspapers, and office documents are declining, they aren’t “dead” or even “uncommon” yet. https://www.mckinsey.com/industries/paper-forest-products-and-packaging/our-insights/graphic-paper-producers-boosting-resilience-amid-the-covid-19-crisis
  4. The “Internet Archive” has scans of 3.8 million books, and is growing. https://www.pcmag.com/news/the-internet-archive-is-linking-digital-books-to-wikipedia-citations
  5. By late 2019, the U.S. National Archives had put 92 million pages of government documents on its website, free for anyone to view. https://narations.blogs.archives.gov/2019/10/02/naras-record-group-explorer-a-new-path-into-naras-holdings/
  6. The 2020 report COVID-19 on Campus found that most U.S. college students found online instruction an inferior way to learn compared to traditional classroom instruction.
    https://marketplace.collegepulse.com/img/covid19oncampus_ckf_cp_final.pdf
  7. Another 2020 survey of U.S. teenagers found that most of them considered online learning to be less effective than in-person classes.
    https://www.surveymonkey.com/curiosity/common-sense-media-school-reopening/
  8. A 2020 survey of U.S. teachers and school administrators found that student absenteeism rates climbed thanks to the introduction of online classes.
    https://www.edweek.org/ew/articles/2020/10/15/in-person-learning-expands-student-absences-up-teachers.html
  9. A U.S. Census survey found in 2019 that 17% of students didn’t have computers in their homes and 18% had no internet access or very slow service.
    https://apnews.com/article/7f263b8f7d3a43d6be014f860d5e4132
  10. The “Seeing AI” smartphone app uses the device’s camera to recognize text, objects and people and to read, describe, or name them out loud. Blind users have highly reviewed it.
    https://apps.apple.com/us/app/seeing-ai/id999062298#see-all/reviews
  11. The “BlindSquare” smartphone app provides voice-based GPS navigation to users, and is also highly reviewed by blind people.
    https://apps.apple.com/us/app/blindsquare/id500557255#see-all/reviews
  12. The FDA approves the “Argus II” retinal implant system for the blind in 2013.
    https://www.nature.com/news/fda-approves-first-retinal-implant-1.12439
  13. In 2019, an app called “Zoi Meet” was developed for the Vuzix Blade AR glasses. The app produces real-time subtitles of spoken words, displayed across the wearer’s field of vision.
    https://www.vuzix.com/Blog/vuzix-blade-real-time-language-transcription-zoi-meet
  14. In 2019, there were many smartphone apps that helped deaf people to communicate with hearing people.
    https://www.meriahnichols.com/best-deaf-apps/
    https://abilitynet.org.uk/news-blogs/9-useful-apps-people-who-are-deaf-or-have-hearing-loss
  15. “Glide” is a popular video phone app among deaf people.
    https://www.fastcompany.com/3054050/how-video-chat-app-glide-got-deaf-people-talking
  16. “BW Dance” is an app that converts songs into patterns of vibrations and flashing lights that deaf people can experience.
    https://www.producthunt.com/posts/bw-dance
  17. “Not Impossible Labs” makes body suits that allow deaf people to experience music in the form of complex patterns of vibrations.
    https://www.billboard.com/articles/news/8476553/not-impossible-labs-live-music-deaf
  18. Cochlear implants have gotten better and more common among deaf people as time has passed.
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4111484/
  19. U.S. sales growth of cochlear implants is projected to continue.
    https://www.grandviewresearch.com/industry-analysis/cochlear-implants-industry
  20. Aside from cochlear implants, middle ear implants, auditory brainstem implants, and bone-anchored hearing aids can amplify or restore hearing.
    https://www.bcig.org.uk/cochlear-implant-devices/implantable-devices/
  21. People who are blind, or deaf, or who have serious spinal cord damage are less likely to have jobs and also make less money than people who don’t have those conditions.
    https://www.afb.org/research-and-initiatives/employment/reviewing-disability-employment-research-people-blind-visually
    https://www.nationaldeafcenter.org/news/employment-report-shows-strong-labor-market-passing-deaf-americans
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2792457/

How Ray Kurzweil’s 2019 predictions are faring (pt 1)

In 1999, Ray Kurzweil, one of the world’s greatest futurists, published a book called The Age of Spiritual Machines. In it, he made the case that artificial intelligence, nanomachines, virtual reality, brain implants, and other technologies would greatly improve during the 21st century, radically altering the world and the human experience. In the final four chapters, titled “2009,” “2019,” “2029,” and “2099,” he made detailed predictions about what the state of key technologies would be in each of those years, and how they would impact everyday life, politics and culture.

Ray Kurzweil receiving a technology award from President Clinton in 1999.

Towards the end of 2009, a number of news columnists, bloggers and even Kurzweil himself weighed in on how accurate his predictions from the eponymous chapter turned out to be. By contrast, no such analysis was done over the past year regarding his 2019 predictions. As such, I’m taking it upon myself to do it.

I started analyzing the accuracy of Kurzweil’s predictions in late 2019 and wanted to publish my full results before the end of that year. However, the task required me to do much more research than I had expected, so I missed that deadline. Really digging into the text of The Age of Spiritual Machines and parsing each sentence made it clear that the number and complexity of the 2019 predictions were greater than a casual reading would suggest. Once I realized how big of a task it would be, I became kind of demoralized and switched to working on easier projects for this blog.

With the end of 2020 on the horizon, I think time is running out to finish this, and I’ve decided to tackle the problem by breaking it into smaller, manageable chunks: My analysis of Kurzweil’s 2019 predictions from The Age of Spiritual Machines will be spread out over three blog entries, the first of which you’re now reading. Except where noted, I will only use sources published before January 1, 2020 to support my conclusions.

“Computers are now largely invisible. They are embedded everywhere–in walls, tables, chairs, desks, clothing, jewelry, and bodies.”

RIGHT

A computer is a device that stores and processes data, and executes its programming. Any machine that meets those criteria counts as a computer, regardless of how fast or how powerful it is (also, it doesn’t even need to run on electricity). This means something as simple as a pocket calculator, programmable thermostat, or a Casio digital watch counts as a computer. These kinds of items were ubiquitous in developed countries in 1998 when Ray Kurzweil wrote the book, so his “futuristic” prediction for 2019 could have just as easily applied to the reality of 1998. This is an excellent example of Kurzweil making a prediction that leaves a certain impression on the casual reader (“Kurzweil says computers will be inside EVERY object in 2019!”) that is unsupported by a careful reading of the prediction.

“People routinely use three-dimensional displays built into their glasses or contact lenses. These ‘direct eye’ displays create highly realistic, virtual visual environments overlaying the ‘real’ environment.”

MOSTLY WRONG

The first attempt to introduce augmented reality glasses in the form of Google Glass was probably the most notorious consumer tech failure of the 2010s. To be fair, I think this was because the technology wasn’t ready yet (e.g. – small visual display, low-res images, short battery life, high price), and not because the device concept is fundamentally unsound. The technological hangups that killed Google Glass will of course vanish in the future thanks to factors like Moore’s Law. Newer AR glasses, like Microsoft’s Hololens, are already superior to Google Glass, and given the pace of improvement, I think AR glasses will be ready for another shot at widespread commercialization by the end of the 2020s, but they will not replace smartphones for a variety of reasons (such as the unwillingness of many people to wear glasses, widespread discomfort with the possibility that anyone wearing AR glasses might be filming the people around them, and durability and battery life advantages of smartphones).

Kurzweil’s prediction that contact lenses would have augmented reality capabilities completely failed. A handful of prototypes were made, but never left the lab, and there’s no indication that any tech company is on the cusp of commercializing them. I doubt it will happen until the 2030s.

Pokemon Go is an augmented reality video game, and has been downloaded over 1 billion times.

However, people DO routinely access augmented reality, but through their smartphones and not through eyewear. Pokemon Go was a worldwide hit among video gamers in 2016, and is an augmented reality game where the player uses his smartphone screen to see virtual monsters overlaid across live footage of the real world. Apps that let people change their appearances during live video calls (often called “face filters”), such as by making themselves appear to have cartoon rabbit ears, are also very popular among young people.

So while Kurzweil got augmented reality technology’s form factor wrong, and overestimated how quickly AR eyewear would improve, he was right that ordinary people would routinely use augmented reality.

The augmented reality glasses will also let you experience virtual reality.

WRONG

Augmented reality glasses and virtual reality goggles remain two separate device categories. I think we will someday see eyewear that merges both functions, but it will take decades to invent glasses that are thin and light enough to be worn all day, untethered, but that also have enough processing power and battery life to provide a respectable virtual reality experience. The best we can hope for by the end of the 2020s will be augmented reality glasses that are good enough to achieve ~10% of the market penetration of smartphones, and virtual reality goggles that have shrunk to the size of ski goggles.

Of note is that Kurzweil’s general sentiment that VR would be widespread by 2019 is close to being right. VR gaming made a resurgence in the 2010s thanks to better technology, and looks poised to go mainstream in the 2020s.

The augmented reality / virtual reality glasses will work by projecting images onto the retinas of the people wearing them.

PARTLY RIGHT

The most popular AR glasses of the 2010s, Google Glass, worked by projecting images onto their wearer’s retinas. The more advanced AR glass models that existed at the end of the decade used a mix of methods to display images, none of which has established dominance.

“Magic Leap One”

The “Magic Leap One” AR glasses use the retinal projection technology Kurzweil favored. They are superior to Google Glass since images are displayed to both eyes (Glass only had a projector for the right eye), in higher resolution, and covering a larger fraction of the wearer’s field of view (FOV). Magic Leap One also has advanced sensors that let it map its physical surroundings and movements of its wearer, letting it display images of virtual objects that seem to stay fixed at specific points in space (Kurzweil called this feature “Virtual-reality overlay display”).

Microsoft “Hololens”

Microsoft’s “Hololens” uses a different technology to produce images: the lenses are in fact transparent LCD screens. They display images just like a TV screen or computer monitor would. However, unlike those devices, the Hololens’ LCDs are clear, allowing the wearer to also see the real world in front of them.

The “Vuzix Blade”

The “Vuzix Blade” AR glasses have a small projector that beams images onto the lens in front of the viewer’s right eye. Nothing is directly beamed onto his retina.

It must be emphasized again that, at the end of 2019, none of these or any other AR glasses were in widespread or common use, even in rich countries. They were confined to small numbers of hobbyists, technophiles, and software developers. A Magic Leap One headset cost $2,300 – $3,300 depending on options, and a Hololens was $3,000.

A man wearing HTC Vive virtual reality goggles, with hand controllers.

And as stated, AR glasses and VR goggles remained two different categories of consumer devices in 2019, with very little crossover in capabilities and uses. The top-selling VR goggles were the Oculus Rift and the HTC Vive. Both devices use tiny OLED screens positioned a few inches in front of the wearer’s eyes to display images, and as a result, are much bulkier than any of the aforementioned AR glasses. In 2019, a new Oculus Rift system cost $400 – $500, and a new HTC Vive was $500 – $800.

“[There] are auditory ‘lenses,’ which place high resolution-sounds in precise locations in a three-dimensional environment. These can be built into eyeglasses, worn as body jewelry, or implanted in the ear canal.”

MOSTLY RIGHT

Humans have the natural ability to tell where sounds are coming from in 3D space because we have “binaural hearing”: our brains can calculate the spatial origin of the sound by analyzing the time delay between that sound reaching each of our ears, as well as the difference in volume. For example, if someone standing to your left is speaking, then the sounds of their words will reach your left ear a split second sooner than they reach your right ear, and their voice will also sound louder in your left ear.

By carefully controlling the timing and loudness of sounds that a person hears through their headphones or through a single speaker in front of them, we can take advantage of the binaural hearing process to trick people into thinking that a recording of a voice or some other sound is coming from a certain direction even though nothing is there. Devices that do this are said to be capable of “binaural audio” or “3D audio.” Kurzweil’s invented term “audio lenses” means the same thing.
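For the technically curious, here is a minimal Python sketch of the timing half of that trick, using the Woodworth spherical-head approximation for the interaural time difference. The head radius and sample rate are illustrative assumptions, and a real binaural renderer would also model the frequency-dependent loudness difference between the ears, which this ignores.

```python
import math

SPEED_OF_SOUND = 343.0   # meters per second, approximate
HEAD_RADIUS = 0.0875     # meters; an assumed average adult head radius

def interaural_time_difference(azimuth_deg):
    """Approximate ITD (seconds) for a source at the given azimuth, using the
    Woodworth spherical-head formula. 0 = straight ahead, positive = to the right."""
    theta = math.radians(abs(azimuth_deg))
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (math.sin(theta) + theta)

def channel_delays(azimuth_deg, sample_rate=44100):
    """(left, right) delays in samples that a renderer could apply so a mono
    sound seems to come from azimuth_deg; the ear farther from the source hears it later."""
    lag = round(interaural_time_difference(azimuth_deg) * sample_rate)
    return (lag, 0) if azimuth_deg > 0 else (0, lag)

if __name__ == "__main__":
    for az in (0, 30, 90):
        itd_ms = interaural_time_difference(az) * 1000
        print(f"{az:>2} deg: ITD = {itd_ms:.2f} ms, (L, R) delays = {channel_delays(az)} samples")
```

Apply those per-channel delays (plus a matching loudness tilt) to a recording played through headphones, and the brain places the sound off to one side even though nothing is there.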

The Bose Frames sunglasses have small sound speakers built into them, close to the wearer’s ears.

Yes, there are eyeglasses with built-in speakers that play binaural audio. The Bose Frames “smart sunglasses” is the best example. Even though the devices are not common, they are commercially available, priced low enough for most people to afford them ($200), and have gotten good user reviews. Kurzweil gets this one right, and not by an eyerolling technicality as would be the case if only a handful of million-dollar prototype devices existed in a tech lab and barely worked.

The Apple Airpod wireless earbuds are, like most Apple products, status objects like jewelry.

Wireless earbuds are much more popular, and upper-end devices like the SoundPEATS Truengine 2 have impressive binaural audio capabilities. It’s a stretch, but you could argue that branding and sleek, aesthetically pleasing design qualify some higher-end wireless earbud models as “jewelry.”

Sound bars have also improved and have respectable binaural surround sound capabilities, though they’re still inferior to traditional TV entertainment system setups where the sound speakers are placed at different points in the room. Sound bars are examples of single-point devices that can trick people into thinking sounds are originating from different points in space, and in spirit, I think they are a type of technology Kurzweil would cite as proof that his prediction was right.

The last part of Kurzweil’s prediction is wrong, since audio implants into the inner ears are still found only in people with hearing problems, which is the same as it was in 1998. More generally, people have shown themselves more reluctant to surgically implant technology in their bodies than Kurzweil seems to have predicted, but they’re happy to externally wear it or to carry it in a pocket.

“Keyboards are rare, although they still exist. Most interaction with computing is through gestures using hands, fingers, and facial expressions and through two-way natural-language spoken communication. “

MOSTLY WRONG

Rumors of the keyboard’s demise have been greatly exaggerated. Consider that, in 2018, people across the world bought 259 million new desktop computers, laptops, and “ultramobile” devices (higher-end tablets that have large, detachable keyboards [the Microsoft Surface dominates this category]). These machines are meant to be accessed with traditional keyboard and mouse inputs.

Gartner’s estimates of global personal computer (PC) sales in 2018. The numbers for 2019 will be nearly the same.

The research I’ve done suggests that the typical desktop, laptop, and ultramobile computer has a lifespan of four years. If we accept this, and also assume that the worldwide computer sales figures for 2015, 2016, and 2017 were the same as 2018’s, then it means there are 1.036 billion fully functional desktops, laptops, and ultramobile computers on the planet (about one for every seven people). By extension, that means there are at least 1.036 billion keyboards. No one could reasonably say that Kurzweil’s prediction that keyboards would be “rare” by 2019 is correct.
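The installed-base arithmetic in the paragraph above is simple enough to restate directly. The snippet below just reruns it, treating the flat yearly sales figure and the four-year lifespan as the assumptions they are.

```python
# Back-of-the-envelope installed base of keyboard-equipped personal computers.
yearly_sales = 259_000_000        # desktops, laptops, and ultramobiles sold in 2018
lifespan_years = 4                # assumed working life of each machine
world_population = 7_700_000_000

installed_base = yearly_sales * lifespan_years
print(f"{installed_base:,} working machines")                            # 1,036,000,000
print(f"about 1 per {round(world_population / installed_base)} people")  # about 1 per 7
```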

The second sentence in Kurzweil’s prediction is harder to analyze since the meaning of “interaction with computing” is vague and hence subjective. As I wrote before, a Casio digital watch counts as a computer, so if it’s nighttime and I press one of its buttons to illuminate the display so I can see the time, does that count as an “interaction with computing”? Maybe.

If I swipe my thumb across my smartphone’s screen to unlock the device, does that count as an “interaction with computing” accomplished via a finger gesture? It could be argued so. If I then use my index finger to touch the Facebook icon on my smartphone screen to open the app, and then use a flicking motion of my thumb to scroll down over my News Feed, does that count as two discrete operations in which I used finger gestures to interact with computing?

You see where this is going…

Being able to set the bar that low makes it possible that this part of Kurzweil’s prediction is right, as unsatisfying as that conclusion may be.

Virtual reality game setups, like those offered by Oculus, commonly make use of hand controllers like these, which monitor the locations and movements of the player’s hands and translate them into in-game commands. This is an example of gestural control. Several million people now have advanced VR game systems like this.

Virtual reality gaming makes use of hand-held and hand-worn controllers that monitor the player’s hand positions and finger movements so he can grasp and use objects in the virtual environment, like weapons and steering wheels. Such actions count as interactions with computing. The technology will only get more refined, and I can see them replacing older types of handheld game controllers.

Hand gestures, along with speech, are also the natural means to interface with augmented reality glasses since the devices have tiny surfaces available for physical contact, meaning you can’t fit a keyboard on a sunglass frame. Future AR glasses will have front-facing cameras that watch the wearer’s hands and fingers, allowing them to interact with virtual objects like buttons and computer menus floating in midair, and to issue direct commands to the glasses through specific hand motions. Thus, as AR glasses get more popular in the 2020s, so will the prevalence of this mode of interface with computers.

Users interface with the “Gen 2” Amazon Echo through two-way spoken communication. The device is popular and highly reviewed and only costs $100, putting it within reach of hundreds of millions of households.

“Two-way natural-language spoken communication” is now a common and reliable means of interacting with computers, as anyone with a smart speaker like an Amazon Echo can attest. In fact, virtual assistants like Alexa, Siri, and Cortana can be accessed via any modern smartphone, putting this within reach of billions of people.

The last part of Kurzweil’s prediction, that people would be using “facial expressions” to communicate with their personal devices, is wrong. For what it’s worth, machines are gaining the ability to read human emotions through our facial expressions (including “microexpressions”) and speech. This area of research, called “affective computing,” is still stuck in the lab, but it will doubtless improve and find future commercial applications. Someday, you will be able to convey important information to machines through your facial expressions, tone of voice, and word choice just as you do to other humans now, enlarging your mode of interacting with “computing” to encompass those domains.

“Significant attention is paid to the personality of computer-based personal assistants, with many choices available. Users can model the personality of their intelligent assistants on actual persons, including themselves…”

WRONG

The most widely used computer-based personal assistants–Alexa, Siri, and Cortana–don’t have “personalities” or simulated emotions. They always speak in neutral or slightly upbeat tones. Users can customize some aspects of their speech and responses (e.g. – talking speed, gender, regional accent, language), and Alexa has limited “skill personalization” abilities that allow it to tailor some of its responses to the known preferences of the user interacting with it, but this is too primitive to count as a “personality adjustment” feature.

My research didn’t find any commercially available AI personal assistant that has something resembling a “human personality,” or that is capable of changing that personality. However, given current trends in AI research and natural language understanding, and growing consumer pressure on Silicon Valley to make products that better cater to the needs of nonwhite people, it is likely this will change by the end of this decade.

“Typically, people do not own just one specific ‘personal computer’…”

RIGHT

A 2019 Pew survey showed that 75% of American adults owned at least one desktop or laptop PC. Additionally, 81% of them owned a smartphone and 52% had tablets, and both types of devices have all the key attributes of personal computers (advanced data storing and processing capabilities, audiovisual outputs, accepts user inputs and commands).

The data from that and other late-2010s surveys strongly suggest that most of the Americans who don’t own personal computers are people over age 65, and that the 25% of Americans who don’t own traditional PCs are very likely to be part of the 19% that also lack smartphones, and also part of the 48% without tablets. The statistical evidence plus consistent anecdotal observations of mine lead me to conclude that the “typical person” in the U.S. owned at least two personal computers in late 2019, and that it was atypical to own fewer than that.

“Computing and extremely high-bandwidth communication are embedded everywhere.”

MOSTLY RIGHT

This is another prediction whose wording must be carefully parsed. What does it mean for computing and telecommunications to be “embedded” in an object or location? What counts as “extremely high-bandwidth”? Did Kurzweil mean “everywhere” in the literal sense, including the bottom of the Marianas Trench?

First, thinking about my example, it’s clear that “everywhere” was not meant to be taken literally. The term was a shorthand for “at almost all places that people typically visit” or “inside of enough common objects that the average person is almost always near one.”

Second, as discussed in my analysis of Kurzweil’s first 2019 prediction, a machine that is capable of doing “computing” is of course called a “computer,” and such machines are much more ubiquitous than most people realize. Pocket calculators, programmable thermostats, and even Casio digital watches count as computers. Even 30-year-old cars have computers inside of them. So yes, “computing” is “embedded ‘everywhere’” because computers are inside of many manmade objects we have in our homes and workplaces, and that we encounter in public spaces.

Of course, scoring that part of Kurzweil’s prediction as being correct leaves us feeling hollow since those devices can’t do the full range of useful things we associate with “computing.” However, as I noted in the previous prediction, 81% of American adults own smartphones, they keep them in their pockets or near their bodies most of the time, and smartphones have all the capabilities of general-purpose PCs. Smartphones are not “embedded” in our bodies or inside of other objects, but given their ubiquity, they might as well be. Kurzweil was right in spirit.

Third, the Wifi and mobile phone networks we use in 2019 are vastly faster at data transmission than the modems that were in use in 1999, when The Age of Spiritual Machines was published. At that time, the commonest way to access the internet was through a 33.6k dial-up modem, which could upload and download data at a maximum speed of 33,600 bits per second (bps), though upload speeds never got as close to that limit as download speeds. 56k modems had been introduced in 1998, but they were still expensive and less common, as were broadband alternatives like cable TV internet.

In 2019, standard internet service packages in the U.S. typically offered WiFi download speeds of 30,000,000 – 70,000,000 bps (my home WiFi speed is 30-40 Mbps, and I don’t have an expensive service plan). Mean U.S. mobile phone internet speeds were 33,880,000 bps for downloads and 9,750,000 bps for uploads. That’s a 1,000 to 2,000-fold speed increase over 1999, and is all the more remarkable since today’s devices can traffic that much data without having to be physically plugged in to anything, whereas the PCs of 1999 had to be plugged into modems. And thanks to the wireless nature of internet data transmissions, “high-bandwidth communication” is available in all but the remotest places in 2019, whereas it was only accessible at fixed-place computer terminals in 1999.
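Those speed-up figures are easy to reproduce from the numbers quoted above; the snippet below just does the division. Keep in mind that real-world speeds vary widely by plan and location, so these are rough ratios rather than precise measurements.

```python
# How much faster 2019 connections were than a 1999 dial-up modem.
dialup_bps = 33_600
speeds_bps = {
    "WiFi (low end)": 30_000_000,
    "WiFi (high end)": 70_000_000,
    "Mobile download": 33_880_000,
    "Mobile upload": 9_750_000,
}

for label, bps in speeds_bps.items():
    print(f"{label:16s}: {bps / dialup_bps:,.0f}x faster than a 33.6k modem")
```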

Again, Kurzweil’s use of the term “embedded” is troublesome, since it’s unclear how “high-bandwidth communication” could be embedded in anything. It emanates from and is received by things, and it is accessible in specific places, but it can’t be “embedded.” Given this and the other considerations, I think every part of Kurzweil’s prediction was correct in spirit, but that he was careless with how he worded it, and that it would have been better written as: “Computing and extremely high-bandwidth communication are available and accessible almost everywhere.”

Cables have largely disappeared.”

MOSTLY RIGHT

Assessing the prediction requires us to deduce which kinds of “cables” Kurzweil was talking about. To my knowledge, he has never been an exponent of wireless power transfer and has never forecast that technology becoming dominant, so it’s safe to say his prediction didn’t pertain to electric cables. Indeed, larger computers like desktop PCs and servers still need to be physically plugged into electrical outlets all the time, and smaller computing devices like smartphones and tablets need to be physically plugged in to routinely recharge their batteries.

That leaves internet cables and data/power cables for peripheral devices like keyboards, mice, joysticks, and printers. On the first count, Kurzweil was clearly right. In 1999, WiFi was a new invention that almost no one had access to, and logging into the internet always meant sitting down at a computer that had some type of data plug connecting it to a wall outlet. Cell phones weren’t able to connect to and exchange data with the internet, except maybe for very limited kinds of data transfers, and it was a pain to use the devices for that. Today, most people access the internet wirelessly.

Wireless keyboards and mice are affordable, but still significantly more expensive than their wired counterparts.

On the second count, Kurzweil’s prediction is only partly right. Wireless keyboards and mice are widespread, affordable, and are mature technologies, and even lower-cost printers meant for people to use at home usually come with integrated wireless networking capabilities, allowing people in the house to remotely send document files to the devices to be printed. However, wireless keyboards and mice don’t seem about to displace their wired predecessors, nor would it even be fair to say that the older devices are obsolete. Wired keyboards and mice are cheaper (they are still included in the box whenever you buy a new PC), easier to use since users don’t have to change their batteries, and far less vulnerable to hacking. Also, though they’re “lower tech,” wired keyboards and mice impose no handicaps on users when they are part of a traditional desktop PC setup. Wireless keyboards and mice are only helpful when the user is trying to control a display that is relatively far from them, as would be the case if the person were using their living room television as a computer monitor, or if a group of office workers were viewing content on a large screen in a conference room, and one of them was needed to control it or make complex inputs.

No one has found this subject interesting enough to compile statistics on the percentages of computer users who own wired vs. wireless keyboards and mice, but my own observation is that the older devices are still dominant.

And though average computer printers in 2019 have WiFi capabilities, the small “complexity bar” to setting up and using the WiFi capability makes me suspect that most people are still using a computer that is physically plugged into their printer to control the latter. These data cables could disappear if we wanted them to, but I don’t think they have.

This means that Kurzweil’s prediction that cables for peripheral computer devices would have “largely disappeared” by the end of 2019 was wrong. For what it’s worth, the part that he got right vastly outweighs the part he got wrong: The rise of wireless internet access has revolutionized the world by giving ordinary people access to information, services and communication at all but the remotest places. Unshackling people from computer terminals and letting them access the internet from almost anywhere has been extremely empowering, and has spawned wholly new business models and types of games. On the other hand, the world’s failure to fully or even mostly dispense with wired computer peripheral devices has been almost inconsequential. I’m typing this on a wired keyboard and don’t see any way that a more advanced, wireless keyboard would help me.

“The computational capacity of a $4,000 computing device (in 1999 dollars) is approximately equal to the computational capability of the human brain (20 million billion calculations per second).” [Or 20 petaflops]

WRONG

Graphics cards provide the most calculations per second at the lowest cost of any type of computer processor. The NVIDIA GeForce RTX 2080 Ti Graphics Card is one of the fastest computers available to ordinary people in 2019. In “overclocked” mode, where it is operating as fast as possible, it does 16,487 billion calculations per second (called “flops”).

A GeForce RTX 2080 Ti retails for $1,100 and up, but let’s be a little generous to Kurzweil and assume we’re able to get them for $1,000.

$4,000 in 1999 dollars equals $6,164 in 2019 dollars. That means today, we can buy 6.164 GeForce RTX 2080 Ti graphics cards for the amount of money Kurzweil specified.

6.164 cards x 16,487 billion calculations per second per card = 101,625 billion calculations per second for the whole rig.

This computational cost-performance level is two orders of magnitude worse than Kurzweil predicted.

The SuperMUC-NG supercomputer fills a large room and is as powerful as one human brain.

Additionally, according to Top500.org, a website that keeps a running list of the world’s best supercomputers and their performance levels, the “Leibniz Rechenzentrum SuperMUC-NG” is the ninth fastest computer in the world and the fastest in Germany, and straddles Kurzweil’s line since it runs at 19.4 petaflops or 26.8 petaflops depending on method of measurement (“Rmax” or “Rpeak”). A press release said: “The total cost of the project sums up to 96 Million Euro [about $105 million] for 6 years including electricity, maintenance and personnel.” That’s about four orders of magnitude worse than Kurzweil predicted.
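Both cost-performance comparisons boil down to a few lines of arithmetic, so here they are as a quick script. Every input is a figure quoted above (rounded estimates, not precise benchmarks).

```python
# Kurzweil's target: a human brain's worth of computing for $4,000 in 1999 dollars.
brain_flops = 20e15        # 20 petaflops, Kurzweil's estimate of one brain
budget = 6_164             # $4,000 in 1999 dollars, converted to 2019 dollars

# Route 1: consumer graphics cards.
card_price = 1_000         # generously rounded-down RTX 2080 Ti price
card_flops = 16_487e9      # ~16.5 teraflops, overclocked
rig_flops = (budget / card_price) * card_flops
print(f"GPU rig: {rig_flops:.3e} flops, {brain_flops / rig_flops:.0f}x short of a brain")

# Route 2: a supercomputer. ~20 petaflops of SuperMUC-NG cost about $105 million.
supermuc_cost = 105e6
print(f"SuperMUC-NG: {supermuc_cost / budget:,.0f}x over Kurzweil's budget")
```

The first ratio comes out around 200 (two orders of magnitude), the second around 17,000 (four orders of magnitude), matching the judgments above.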

I guess the good news is that at least we finally do have computers that have the same (or slightly more) processing power as a single, average, human brain, even if the computers cost tens of millions of dollars apiece.

“Of the total computing capacity of the human species (that is, all human brains), combined with the computing technology the species has created, more than 10 percent is nonhuman.”

WRONG

Kurzweil explains his calculations in the “Notes” section in the back of the book. He first multiplies the computation performed by one human brain by the estimated number of humans who will be alive in 2019 to get the “total computing capacity of the human species.” Confusingly, his math assumes one human brain does 10 petaflops, whereas in his preceding prediction he estimates it is 20 petaflops. He also assumed 10 billion people would be alive in 2019, but the figure fell mercifully short and was ONLY 7.7 billion by the end of the year.

Plugging in the correct figure, we get (7.7 x 10^9 humans) x (10^16 flops) = 7.7 x 10^25 flops = the actual total computing capacity of all human brains in 2019.

The total computing capacity of all computers in existence in 2019 can only be guessed at. Kurzweil estimated that at least 1 billion machines would exist in 2019, and he was right. Gartner estimated that 261 million PCs (which includes desktop PCs, notebook computers [seems to include laptops], and "ultramobile premiums") were sold globally in 2019. The figures for the preceding three years were 260 million (2018), 263 million (2017), and 270 million (2016). Assuming that a newly purchased personal computer survives for four years before being fatally damaged or thrown out, we can estimate that there were 1.05 billion of the machines in the world at the end of 2019.

However, Kurzweil also assumed that the average computer in 2019 would be as powerful as a human brain, and thus capable of 10 petaflops, but reality fell far short of the mark. As I revealed in my analysis of the preceding prediction, a 10 petaflop computer setup would cost somewhere between $606,543 (in GeForce RTX 2080 Ti graphics cards) and $52.5 million (for half a Leibniz Rechenzentrum SuperMUC-NG supercomputer). None of the people who own the 1.05 billion personal computers in the world spent anywhere near that much money, and their machines are far less powerful than human brains.

Let's generously assume that all of the world's 1.05 billion PCs are higher-end (for 2019) desktop computers that cost $900 – $1,200. Everyone's machine has an Intel Core i7, 8th Generation processor, which offers speeds of a measly 361.3 gigaflops (3.613 x 10^11 flops). A 10 petaflop human brain is 27,678 times faster!

Plugging in the computer figures, we get (1.05 x 10^9 personal computers) x 3.613 x 10^11 flops = 3.794 x 10^20 flops = the total computing capacity of all personal computers in 2019. That's five orders of magnitude short of the total computing capacity of all human brains. The reality of 2019 computing fell far short of Kurzweil's expectations.

What if we add the computing power of all the world's smartphones to the picture? Approximately 3.2 billion people owned a smartphone in 2019. Let's assume all the devices are higher-end (for 2019) iPhone XRs, which everyone bought new for at least $500. The iPhone XR has an A12 Bionic processor, and my research indicates it is capable of 700 – 1,000 gigaflop maximum speeds. Let's take the higher-end estimate and do the math.

3.2 billion smartphones x 10^12 flops = 3.2 x 10^21 flops = the total computing capacity of all smartphones in 2019.

Adding things up, pretty much all of the world's personal computing devices (desktops, laptops, smartphones, netbooks) only produce 3.5794 x 10^21 flops of computation. That's still four orders of magnitude short of what Kurzweil predicted. Even if we assume that my calculations were too conservative, and we add in commercial computers (e.g. – servers, supercomputers), and find that the real amount of artificial computation is ten times higher than I thought, at 3.5794 x 10^22 flops, this would still only be equivalent to 1/2000th, or 0.05%, of the total computing capacity of all human brains (7.7 x 10^25 flops). Thus, Kurzweil's prediction that it would be 10% by 2019 was very wrong.
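To put the whole estimate in one place, here is a minimal sketch of the calculation using the same assumptions as above: 7.7 billion people at 10 petaflops per brain, 1.05 billion PCs at 361.3 gigaflops, and 3.2 billion smartphones at 1 teraflop each.

```python
# Rough re-creation of the "nonhuman share of computation" estimate above.
# All inputs are the same approximate 2019 figures used in the text.

HUMANS = 7.7e9
FLOPS_PER_BRAIN = 1e16          # Kurzweil's lower figure of 10 petaflops

PCS = 1.05e9
FLOPS_PER_PC = 361.3e9          # Intel Core i7, 8th generation

PHONES = 3.2e9
FLOPS_PER_PHONE = 1e12          # high-end estimate for an A12 Bionic

human_total = HUMANS * FLOPS_PER_BRAIN                          # ~7.7 x 10^25 flops
machine_total = PCS * FLOPS_PER_PC + PHONES * FLOPS_PER_PHONE   # ~3.58 x 10^21 flops

nonhuman_share = machine_total / (human_total + machine_total)
print(f"human: {human_total:.2e}  machine: {machine_total:.2e}  "
      f"nonhuman share: {nonhuman_share:.4%}")   # ~0.005%, nowhere near 10%
```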

“Rotating memories and other electromechanical computing devices have been fully replaced with electronic devices.”

WRONG

For those who don't know much about computers, the prediction says that rotating disk hard drives would be replaced with solid-state drives, which have no moving parts. A thumbdrive uses solid-state memory, as do all smartphones and tablet computers.

I gauged the accuracy of this prediction through a highly sophisticated and ingenious method: I went to the nearest Wal-Mart and looked at the computers they had for sale. Two of the mid-priced desktop PCs had rotating disk hard drives, and they also had DVD disc drives, which was surprising, and which probably makes the “other electromechanical computing devices” part of the prediction false.

The HP Pavilion 590-p0033w has a rotating hard disk drive, indicated by the "7200 RPM" (revolutions per minute) speed figure on the front of this box. It also says it has a "DVD-Writer." This is a newly manufactured machine, and at $499, is a mid-range desktop.
The HP Slim Desktop 290-p0043w also has a rotating hard disk drive, with a 7200 RPM speed.
And before anyone says “Well, only the clunky, old-fashioned desktops still have rotating disk drives!” check out this low-end (but newly manufactured) laptop I also found at Wal-Mart. The HP 15-bs212wm has a rotating hard disk drive and a DVD drive.

If the world’s biggest brick-and-mortar retailer is still selling brand new computers with rotating hard disk drives and rotating DVD disc drives, then it can’t be said that solid state memory storage has “fully replaced” the older technology.

“Three-dimensional nanotube lattices are now a prevalent form of computing circuitry.”

MOSTLY WRONG

Many solid-state computer memory chips, such as common thumbdrives and MicroSD cards, have 3D circuitry, and it is accurate to call them “prevalent.” However, 3D circuitry has not found routine use in computer processors thanks to unsolved problems with high manufacturing costs, unacceptably high defect rates, and overheating.

An internal diagram of a common MicroSD card, which has the simple job of storing data. It has about 18 layers. Memory storage chips are less sensitive to manufacturing defects since they have redundancy.
An exploded diagram of Intel’s upcoming “Lakefield” processor, which has the complex job of storing and processing data. It has four layers, and is much more technically challenging to make than a 3D memory chip.

In late 2018, Intel claimed it had overcome those problems thanks to a proprietary chip manufacturing process, and that it would start selling the resulting "Lakefield" line of processors soon. These processors have four vertically stacked layers, so they meet the requirement for being "3D." Intel hasn't sold any yet, and it remains to be seen whether they will be commercially successful.

Silicon is still the dominant computer chip substrate, and carbon-based nanotubes haven’t been incorporated into chips because Intel and AMD couldn’t figure out how to cheaply and reliably fashion them into chip features. Nanotube computers are still experimental devices confined to labs, and they are grossly inferior to traditional silicon-based computers when it comes to doing useful tasks. Nanotube computer chips that are also 3D will not be practical anytime soon.

It’s clear that, in 1999, Kurzweil simply overestimated how much computer hardware would improve over the next 20 years.

“The majority of ‘computes’ of computers are now devoted to massively parallel neural nets and genetic algorithms.”

UNCLEAR

Assessing this prediction is hard because it’s unclear what the term “computes” means. It is probably shorthand for “compute cycles,” which is a term that describes the sequence of steps to fetch a CPU instruction, decode it, access any operands, perform the operation, and write back any result. It is a process that is more complex than doing a calculation, but that is still very basic. (I imagine that computer scientists are the only people who know, offhand, what “compute cycle” means.)

Assuming “computes” means “compute cycles,” I have no idea how to quantify the number of compute cycles that happened, worldwide, in 2019. It’s an even bigger mystery to me how to determine which of those compute cycles were “devoted to massively parallel neural nets and genetic algorithms.” Kurzweil doesn’t describe a methodology that I can copy.

Also, what counts as a "massively parallel neural net"? How many processor cores does a neural net need to have to be "massively parallel"? What are some examples of non-massively parallel neural nets? Again, ambiguity in the wording of the prediction frustrates an analysis. I'd love to see Kurzweil assess the accuracy of this prediction himself and explain his answer.

“Significant progress has been made in the scanning-based reverse engineering of the human brain. It is now fully recognized that the brain comprises many specialized regions, each with its own topology and architecture of interneuronal connections. The massively parallel algorithms are beginning to be understood, and these results have been applied to the design of machine-based neural nets.”

PARTLY RIGHT

The use of the ambiguous adjective “significant” gives Kurzweil an escape hatch for the first part of this prediction. Since 1999, brain scanning technology has improved, and the body of scientific literature about how brain activity correlates with brain function has grown. Additionally, much has been learned by studying the brain at a macro-level rather than at a cellular level. For example, in a 2019 experiment, scientists were able to accurately reconstruct the words a person was speaking by analyzing data from the person’s brain implant, which was positioned over their auditory cortex. Earlier experiments showed that brain-computer-interface “hats” could do the same, albeit with less accuracy. It’s fair to say that these and other brain-scanning studies represent “significant progress” in understanding how parts of the human brain work, and that the machines were gathering data at the level of “brain regions” rather than at the finer level of individual brain cells.

Yet in spite of many tantalizing experimental results like those, an understanding of how the brain produces cognition has remained frustratingly elusive, and we have not extracted any new algorithms for intelligence from the human brain in the last 20 years that we’ve been able to incorporate into software to make machines smarter. The recent advances in deep learning and neural network computers–exemplified by machines like AlphaZero–use algorithms invented in the 1980s or earlier, just running on much faster computer hardware (specifically, on graphics processing units originally developed for video games).

If anything, since 1999, researchers who studied the human brain to gain insights that would let them build artificial intelligences have come to realize how much more complicated the brain is than they first suspected, and how much harder a problem it will be to solve. We might have to accurately model the brain down to the intracellular level (e.g. – not just the neurons simulated, but their surface receptors and ion channels too) to finally grasp how it works and produces intelligent thought. Considering that the best we have done up to this point is mapping the connections of a fruit fly brain, and that a human brain is 600,000 times bigger, we won't have a detailed human brain simulation for many decades.

“It is recognized that the human genetic code does not specify the precise interneuronal wiring of any of these regions, but rather sets up a rapid evolutionary process in which connections are established and fight for survival. The standard process for wiring machine-based neural nets uses a similar genetic evolutionary algorithm.”

RIGHT

This prediction is right, but it’s not noteworthy since it merely re-states things that were widely accepted and understood to be true when the book was published in 1999. It’s akin to predicting that “A thing we think is true today will still be considered true in 20 years.”

The prediction's first statement is an odd one to make since it implies that there was ever serious debate among brain scientists and geneticists over whether the human genome encoded every detail of how the human brain is wired. As Kurzweil points out earlier in the book, the human genome is only about 3 billion base-pairs long, and the genetic information it contains could be as low as 23 megabytes, but a developed human brain has 100 billion neurons and 10^15 connections (synapses) between those neurons. Even if Kurzweil is underestimating the amount of information the human genome stores by several orders of magnitude, it clearly isn't big enough to contain instructions for every aspect of brain wiring, and therefore, it must merely lay down more general rules for brain development.
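To put rough numbers on that information gap, here is a back-of-the-envelope sketch. The genome and synapse counts are the ones cited above; the bits-per-synapse figure is my own illustrative assumption (just enough bits to name each connection's target neuron), not a measured value.

```python
import math

# Back-of-the-envelope comparison: genome information vs. explicit brain wiring.
# Genome and synapse counts are the figures cited above; bits-per-synapse is an
# illustrative assumption, not a measured value.

BASE_PAIRS = 3e9
genome_megabytes = BASE_PAIRS * 2 / 8 / 1e6   # 2 bits per base pair, ~750 MB uncompressed

NEURONS = 100e9
SYNAPSES = 1e15
bits_per_synapse = math.log2(NEURONS)         # ~37 bits just to identify the target neuron
wiring_terabytes = SYNAPSES * bits_per_synapse / 8 / 1e12

print(f"genome: ~{genome_megabytes:.0f} MB uncompressed (Kurzweil: ~23 MB compressed)")
print(f"explicit wiring spec: ~{wiring_terabytes:,.0f} TB at minimum")
```

Even with generous rounding, an explicit wiring specification would be millions of times larger than the genome, which is the point of the prediction's first sentence.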

I also don't understand why Kurzweil wrote the second part of the statement. It's commonly recognized that part of childhood brain development involves the rapid pruning of interneuronal connections that, based on interactions with the child's environment, prove less useful, and the strengthening of connections that prove more useful. It would be apt to describe this as "a rapid evolutionary process" since the child's brain is rewiring to adapt the child to its surroundings. This mechanism of strengthening brain connection pathways that are rewarded or frequently used, and weakening pathways that result in some kind of misfortune or that are seldom used, continues until the end of a person's life (though it gets less effective as they age). This paradigm was "recognized" in 1999 and has never been challenged.

Machine-based neural nets are, in a very general way, structured like the human brain; they also rewire themselves in response to stimuli, and some of them use genetic algorithms to guide the rewiring process (see this article for more info: https://news.mit.edu/2017/explained-neural-networks-deep-learning-0414). However, all of this was also true in 1999.

“A new computer-controlled optical-imaging technology using quantum-based diffraction devices has replaced most lenses with tiny devices that can detect light waves from any angle. These pinhead-sized cameras are everywhere.”

WRONG

Devices that harness the principle of quantum entanglement to create images of distant objects do exist and are better than devices from 1999, but they aren’t good enough to exit the R&D labs. They also have not been shrunk to pinhead sizes. Kurzweil overestimated how fast this technology would develop.

Virtually all cameras still have lenses, and still operate by the old method of focusing incoming light onto a physical medium that captures the patterns and colors of that light to form a stored image. The physical medium used to be film, but now it is a digital image sensor.

A teardown of a Samsung Galaxy S10 smartphone reveals its three digital cameras, which produce very high-quality photos and videos. Comparing them to the tweezers and human fingers, it’s clear they are only as big as small coins.

Digital cameras were expensive, clunky, and could only take low-quality images in 1999, so most people didn’t think they were worth buying. Today, all of those deficiencies have been corrected, and a typical digital camera sensor plus its integrated lens is the size of a small coin. As a result, the devices are very widespread: 3.2 billion people owned a smartphone in 2019, and all of them probably had integral digital cameras. Laptops and tablet computers also typically have integral cameras. Small standalone devices, like pocket cameras, webcams, car dashcams, and home security doorbell cameras, are also cheap and very common. And as any perusal of YouTube.com will attest, people are using their cameras to record events of all kinds, all the time, and are sharing them with the world.

This prediction stands out as one that was wrong in specifics, but kind of right in spirit. Yes, since 1999, cameras have gotten much smaller, cheaper, and higher-quality, and as a result, they are “everywhere” in the figurative sense, with major consequences (good and bad) for the world. Unfortunately, Kurzweil needlessly stuck his neck out by saying that the cameras would use an exotic new technology, and that they would be “pinhead-sized” (he hurt himself the same way by saying that the augmented reality glasses of 2019 would specifically use retinal projection). For those reasons, his prediction must be judged as “wrong.”

“Autonomous nanoengineered machines can control their own mobility and include significant computational engines. These microscopic machines are beginning to be applied to commercial applications, particularly in manufacturing and process control, but are not yet in the mainstream.”

WRONG

A state-of-the-art microscopic machine invented in 2019 can move around in water by twirling its four “flippers.”

While there has been significant progress in nano- and micromachine technology since 1999 (the 2016 Nobel Prize in Chemistry was awarded to scientists who had invented nanomachines), the devices have not gotten nearly as advanced as Kurzweil predicted. Some microscopic machines can move around, but the movement is guided externally rather than autonomously. For example, turtle-like micromachines invented by Dr. Marc Miskin in 2019 can move by twirling their tiny “flippers,” but the motion is powered by shining laser beams on them to expand and contract the metal in the flippers. The micromachines lack their own power packs, lack computers that tell the flippers to move, and therefore aren’t autonomous.

In 2003, UCLA scientists invented “nano-elevators,” which were also capable of movement and still stand as some of the most sophisticated types of nanomachines. However, they also lacked onboard computers and power packs, and were entirely dependent on external control (the addition of acidic or basic liquids to make their molecules change shape, resulting in motion). The nano-elevators were not autonomous.

Similarly, a “nano-car” was built in 2005, and it can drive around a flat plate made of gold. However, the movement is uncontrolled and only happens when an external stimulus–an input of high heat into the system–is applied. The nano-car isn’t autonomous or capable of doing useful work. This and all the other microscopic machines created up to 2019 are just “proof of concept” machines that demonstrate mechanical principles that will someday be incorporated into much more advanced machines.

Significant progress has been made since 1999 building working “molecular motors,” which are an important class of nanomachine, and building other nanomachine subcomponents. However, this work is still in the R&D phase, and we are many years (probably decades) from being able to put it all together to make a microscopic machine that can move around under its own power and will, and perform other operations. The kinds of microscopic machines Kurzweil envisioned don’t exist in 2019, and by extension are not being used for any “commercial applications.”

Whew! That’s it for now. I’ll try to publish PART 2 of this analysis next month. Until then, please share this blog entry with any friends who might be interested. And if you have any comments or recommendations about how I’ve done my analysis, feel free to comment.

Links:

  1. Ray Kurzweil’s self-analysis of how accurate his 2009 predictions were: https://kurzweilai.net/images/How-My-Predictions-Are-Faring.pdf
  2. The inventor of the first augmented reality contact lenses predicted in 2015 that commercially viable versions of the devices wouldn’t exist for at least 20 more years. (https://www.inverse.com/article/31034-augmented-reality-contact-lenses)
  3. In late 2019, a Magic Leap One cost $2,300 – $3,300 and a Hololens was $3,000. https://www.cnn.com/2019/12/10/tech/magic-leap-ar-for-companies/index.html
  4. In 2019, a new Oculus Rift system cost $400 – $500, and a new HTC Vive was $500 – $800. (https://www.theverge.com/2019/5/16/18625238/vr-virtual-reality-headsets-oculus-quest-valve-index-htc-vive-nintendo-labo-vr-2019)
  5. In 2018, people across the world bought 259 million new desktop computers, laptops, and “ultramobile” devices (higher-end tablets that have large, detachable keyboards [the Microsoft Surface dominates this category]). These machines are meant to be accessed with traditional keyboard and mouse inputs. Keyboards aren’t dead.
    (https://venturebeat.com/2019/01/10/gartner-and-idc-hp-and-lenovo-shipped-the-most-pcs-in-2018-but-total-numbers-fell/)
  6. Survey data from 2018 about the global usage of “digital personal assistants.” Users speak to their smartphones or smart speakers, mostly to obtain simple information (like weather forecasts) or to have their computers do simple tasks. (https://www.business2community.com/infographics/the-growth-in-usage-of-virtual-digital-assistants-infographic-02056086)
  7. 2019 Pew Survey showing that the overwhelming majority of American adults owned a smartphone or traditional PC. People over age 64 were the least likely to own smartphones. (https://www.pewresearch.org/internet/fact-sheet/mobile/)
  8. A 2015 American Community Survey revealed that households headed by people over 64 were the least likely to have smartphones, PCs, or internet access. (https://www.census.gov/content/dam/Census/library/publications/2017/acs/acs-37.pdf)
  9. In 2000, 34% of Americans accessed the internet through dial-up modems, and only 3% did so through “broadband” (a catch-all for cable, DSL, and satellite access). Most U.S. internet users were still using dial-up modems that were at most 56k. The remaining 63% didn’t access it at all. (http://thetechnews.com/2016/01/03/usa-getting-faster-internet-speeds-but-not-at-the-pace-others-are/)
  10. In 2019, a mid-tier internet service plan in the U.S. granted users download speeds of 30 – 60 Mbps. (https://www.pcmag.com/news/state-by-state-the-fastest-and-slowest-us-internet)
  11. 2019 U.S. mobile phone network average speeds were 33.88 Mbps for downloads and 9.75 Mbps for uploads (https://www.speedtest.net/reports/united-states/ )
  12. The Black Friday 2019 circular for Newegg.com featured five models of printers for sale. Only one of them, the Brother HL-L2300D, wasn’t WiFi-capable. (https://bestblackfriday.com/ads/newegg-black-friday/page-12#ad_view)
  13. Gartner figures for global computer sales in 2015, 2016, 2017, 2018 and 2019.
    (https://www.gartner.com/en/newsroom/press-releases/2017-01-11-gartner-says-2016-marked-fifth-consecutive-year-of-worldwide-pc-shipment-decline)
    (https://venturebeat.com/2018/01/11/gartner-and-idc-agree-hp-shipped-the-most-pcs-in-2017/)
    (https://www.gartner.com/en/newsroom/press-releases/2020-01-13-gartner-says-worldwide-pc-shipments-grew-2-point-3-percent-in-4q19-and-point-6-percent-for-the-year)
  14. Intel’s i7 Generation 8 processor is capable of 361.3 gigaflop speeds. (https://www.pugetsystems.com/labs/hpc/Skylake-X-7800X-vs-Coffee-Lake-8700K-for-compute-AVX512-vs-AVX2-Linpack-benchmark-1068/)
  15. 3.2 billion people owned a smartphone in 2019. (https://newzoo.com/insights/trend-reports/newzoo-global-mobile-market-report-2019-light-version/)
  16. In 2019, 3D chips were common in memory storage devices, like MicroSD cards. 3D NAND chips had up to 64 layers. (https://semiengineering.com/what-happened-to-nanoimprint-litho/)
  17. In 2019, Intel was still working the kinks out of its first 3D computer processor, called “Lakefield,” and it wasn’t ready for commercial sales. (https://www.overclock3d.net/news/cpu_mainboard/intel_details_their_lakefield_processor_design_and_foveros_3d_packaging_tech/1)
  18. In 2019, computer circuits made of carbon nanotubules were still stuck in research labs, and held back from commercialization by many unsolved problems relating to cost of manufacture and reliability. Silicon was still the dominant computing substrate. (https://www.sciencenews.org/article/chip-carbon-nanotubes-not-silicon-marks-computing-milestone)
  19. “Compute cycle” has three meanings: #1 (https://www.zdnet.com/article/how-much-is-a-unit-of-cloud-computing/), #2 (https://www.quora.com/What-is-a-Compute-cycle) and #3 (https://www.computerhope.com/jargon/c/compute.htm)
  20. In a 2019 experiment, researchers were able to decode the words a person was speaking by studying their brain activity. (https://www.biorxiv.org/content/10.1101/350124v2)
  21. “The current ways of trying to represent the nervous system…[are little better than] what we had 50 years ago.”  –Marvin Minsky, 2013 (https://youtu.be/3PdxQbOvAlI)
  22. “Today’s neural nets use algorithms that were essentially developed in the early 1980s.” (https://futurism.com/cmu-brain-research-grant)
  23. The inventor of “back-propagation,” which spawned many computer algorithms central to AI research, now believes it will never lead to true intelligence, and that the human brain doesn’t use it. (https://www.axios.com/artificial-intelligence-pioneer-says-we-need-to-start-over-1513305524-f619efbd-9db0-4947-a9b2-7a4c310a28fe.html)
  24. Henry Markram’s project to create a human brain simulation by 2019 failed. (https://www.theatlantic.com/science/archive/2019/07/ten-years-human-brain-project-simulation-markram-ted-talk/594493/)
  25. “Like, yes, in particular areas machines have superhuman performance, but in terms of general intelligence we’re not even close to a rat.” –Yann LeCun, 2017 (https://www.theverge.com/2017/10/26/16552056/a-intelligence-terminator-facebook-yann-lecun-interview)
  26. Machine neural networks are similar to human brains in key ways. (https://news.mit.edu/2017/explained-neural-networks-deep-learning-0414)
  27. Some machine neural nets use genetic algorithms. (https://blog.coast.ai/lets-evolve-a-neural-network-with-a-genetic-algorithm-code-included-8809bece164)
  28. Quantum imaging is a real thing. However, devices that can make use of it are still experimental. (https://onlinelibrary.wiley.com/doi/full/10.1002/lpor.201900097)
  29. The Samsung Galaxy S10 is an upper-end smartphone released in 2019. It has three digital cameras, all of which operate on the same technology principles as the digital cameras of 1999. (https://www.digitalcameraworld.com/reviews/samsung-galaxy-s10-camera-review)
  30. The 2016 Nobel Prize in Chemistry was given to three scientists who had done pioneering work on nanomachines. (https://www.extremetech.com/extreme/237575-2016-nobel-prize-in-chemistry-awarded-for-nanomachines)
  31. Dr. Marc Miskin’s micromachines from 2019 are interesting, but a far cry from what Kurzweil thought we’d have by then. (https://www.inquirer.com/health/micro-robots-upenn-cornell-20190307.html)

My predictions for the 2010s were very accurate

As I said in a recent blog entry, my interest in futurism and my habit of making written predictions about the future predate the creation of this blog by many years. Previously, I used Facebook as my platform for publishing those ideas, and in December 2009, I took my first shot at making a written list of personal predictions. The document’s title, “Predictions for the next decade,” is self-explanatory, and as it is now the end of the decade, I’d like to rate my accuracy. [Spoiler: I did a great job overall.]

Below, I've copied and pasted the text of the original Facebook note, and interspersed present-day evaluations of my predictions in square brackets and bold text. I've even carried over the pictures that were embedded in the original.

============================================================

Predictions for the next decade
December 25, 2009 at 2:57 PM

The first decade of the 2000’s (which should actually be called the “Oughties”) is just a few days from being over, and I thought I’d render a couple serious predictions for the “teen years.” Most of you probably don’t know this, but I am a futurist and like reading serious books written by scientists about what they think the future will be like. The granddaddy of these people is Ray Kurzweil, and I encourage you to take a couple minutes to read about the guy’s life, beliefs and predictions (failed and confirmed) here:

http://en.wikipedia.org/wiki/Ray_kurzweil
http://www.kurzweilai.net/meme/frame.html?main=/articles/art0275.html [Broken link to what was a Ray Kurzweil self-assessment of his 2009 predictions]
http://en.wikipedia.org/wiki/Predictions_made_by_Raymond_Kurzweil

Ray Kurzweil

As you can see from Kurzweil’s “2009” predictions, he’s about 50% right, 25% maybe right or wrong, and 25% flat wrong. While I think he’s definitely on the right track with his predictions, his big problems are that he overestimates the rate of technological advance–particularly where it concerns improvements to the “thinking” abilities of computers–and the willingness of people to accept new technologies. Kurzweil also sticks his neck out too often by proclaiming that one, specific type of technology will be in use by year X. When it doesn’t happen, his credibility is impeached.

[I still believe these things.]

Anyway, a detailed overview of my views on Kurzweil will have to wait for a later note. For now, let me tell you what I think the world will be like by the end of 2019.

The Political World

Obama wins the 2012 elections. Let’s face it: The same coalition of minorities and young people that elected Obama in 2008 are going to rally to his defense in 2012. The Republicans don’t have any obvious “golden boy” right now either. The only way Obama can lose is if he colossally screws up (or at least if Americans perceive it that way), which doesn’t seem likely given his intelligence. I’m not sure if Biden will run in 2016, but just remember that the guy will be 74 at that time, which will make him older than McCain was in 2008. There’s no way in hell I or anyone else can guess about who will win the 2016 elections, so I won’t try.

[I was right about everything! Also, I think the Democrats now have these same problems going in to the 2020 election: They don’t have an obvious “golden boy,” there’s not enough time left for one to emerge out of the woodwork, all of their Presidential candidates are seriously flawed in some way, and Joe Biden’s age problem is worse than ever! However, Trump’s voter base today is smaller than Obama’s voter base was in 2012, so even a seriously flawed Democratic opponent could beat him. The 2020 race is on such a knife’s edge that I can’t assign odds right now to the outcome, other than to say whoever wins will be disappointing.]

Sorry folks, but we still have a two-party system in 2019. Our institutions and people are simply too heavily geared towards supporting it.

[Sadly, I was right. However, since 2009, I’ve become less convinced that a three- or four-party system will help much, so we’re not much worse off as a nation than we otherwise would have been. Other governments show that, as the amount of political diversity grows, so does political gridlock. The need to sacrifice principles to make pragmatic, messy compromises that “keep the lights on” never goes away. My changed attitude towards this issue is a good example of how I’ve become less idealistic/more jaded over the last ten years from having more time to observe how the world really works.]

The U.S. will still be the world’s most powerful country politically, economically, militarily, culturally, diplomatically, and technologically, though China has closed much of that gap. On the subject of China, I think it’s important to remember that it is a country currently in social, political and economic transition, and it faces enormous challenges and pressures for change in the future. In no particular order, let me go through these. First, China has a growing gender imbalance that could threaten its internal stability. Thanks to a patriarchal culture and the one-child-per-family policy, abortions of female children are widespread and produce a sex disparity in the population (i.e. – If you can only have one kid, might as well make it count and have a boy). By the end of 2019, 24 million young Chinese men of marrying age will be unable to find wives. Having a lot of young, unattached males who aren’t getting enough sex hanging around inside your country is bad news, as the conservative societies of the Middle East show us. These guys tend to start a lot of trouble (terrorism, riots, reform movements, etc.).

[I was right! In fact, the Chinese sex imbalance is even worse than I estimated (some sources say there are 34 million more men than women there). Fortunately for China, this hasn’t translated into a mass civil unrest, and the single, young men are handling it with stiff upper lips and lots of erotic anime cartoons. The ongoing protests in Hong Kong aren’t being driven by the sex imbalance, and in fact, the city has a significant surplus of single, young women.]

Second, China faces another demographic problem in the form of its aging population: By the end of 2019, around 20% of all Chinese will be 60 or older, and that proportion will only grow with time. Frankly speaking (as I always do), old people sap national resources through pensions and medical services, as we see in our own country with Social Security and Medicare (and it is even worse at the state level in many cases). The graying of China’s population is going to cause large, direct decreases in the GDP growth rate, which will have a ripple effect through the entire country and all other segments of its society. Of course, the Chinese would be able to overcome this problem by increasing the number of young people to support the old through taxes, though it’s questionable whether such increases could be accomplished by 2019: Even if the Chinese government were to rescind the birth restrictions, it probably wouldn’t lead to sufficient population growth since many Chinese now have a Westernized mindset and are more concerned with personal growth and accomplishment than they are with having kids. Increasing immigration is another possibility, and while I do think a significantly greater share of China’s population will be foreign by 2019, the Chinese are simply too xenophobic to allow enough immigrants in anytime soon.

[I was mostly right! The share of China’s population that is 60 or older is in fact 17-18% now, so my original estimate was too high (not sure where I got it from) but still in the ballpark. In 2015, China raised the birth limit to two children per family, but it failed to spur a baby boom that was large enough to alter the country’s negative demographic trajectory. Mass immigration of workers also hasn’t happened in China.]

Third, China’s rapid industrial growth has caused serious environmental damage that will be much worse by 2019 and that will put another constraint on their GDP growth. Not only is the country the world’s largest greenhouse gas emitter, it is also the worst offender (or one of the worst) when it comes to a slew of other types of pollutants like sulfur dioxide and heavy metals. Fishing stocks near China’s coasts have also been almost exhausted, northern China is facing desertification and depletion of aquifers, and elsewhere in the country people riot on a near-daily basis over pollution and its effects. I’m not going to go into this in full detail, but there’s a great 2007 article in Foreign Affairs entitled “The Great Leap Backwards?” that covers the full extent of the damage if you want to read about it.

[Thankfully, the most dire extrapolations of China’s pollution trends didn’t pan out. China’s CO2 emissions have grown over the last ten years, so it pumps out more of the gas than ever before, but the rate of that growth has gotten much lower. Its levels of sulfur dioxide and air particulate emissions have also dropped over the last decade thanks to stricter laws. Environmental damage is probably hurting Chinese GDP growth less than I predicted, which actually makes me happy.]

Fourth, I think China's global influence is going to hit a wall because the country doesn't really stand for anything. Its international behavior is clearly self-interested in all respects, and the country doesn't have much of a vision–ideological or otherwise–to offer the world. Contrast this with the U.S., which has for decades sought to spread political freedom, economic freedom, free trade, and human rights, and which openly works for a future world free from want and oppression. Yes, I know that sounds very preachy of me, and yes, I realize that our pursuit of those goals has been inconsistent for various reasons, but I think we do the best we can given the constraints and that our presence moves the world in a positive direction overall. Having grand ideas and a nice-sounding ideology resonates with people across the world on a very basic level, and this is an area in which China is severely lacking.

[I was right. China is still viewed as a self-interested player on the international stage and has few good friends. The friends it has made through investment in Africa and in the Belt and Road Initiative would walk away as soon as the money stopped flowing.]

Fifth, by 2019, China's economic growth rate will have slowed no matter what since there are only so many low-hanging fruits you can harvest. China's statism isn't going to be able to deliver results once the country's economy moves beyond a low-wage export model and innovation and entrepreneurship become the pillars of further growth, as they are in the Developed World. Really, that touches upon one of China's biggest problems–its government. Ignoring the traditional American complaints about the disregard for human and political rights (most Chinese don't really care about these), the Chinese Communist Party simply isn't going to be efficient enough or responsive enough to meet the expectations of the Chinese people once they become more educated, sophisticated and wealthy and once the aforementioned problems start to have a real impact. Political change and a period of social instability in China are hence inevitable and might happen by 2019 or be about to happen. The operative word in that sentence is "might."

[I was right about China’s economic growth rate shrinking during the 2010s, but wrong about the CCP’s ability to hold on to power. The last decade has shown the Party to be more adept at tracking and shaping the opinions of its people and defusing potential crises than I anticipated.]

Such a transition might lead to a wonderful outcome or to disaster. It's always possible that the CCP could, during a time of internal crisis, try to divert attention and to unify the country by playing on the strong nationalism of its people and agitating over Taiwan, the Spratly Islands or something else (Pride, and somewhat by extension nationalism, is the quintessential human flaw). That, of course, would bring China into conflict with us, but such a subject is too large to be discussed here. Suffice it to say that any U.S.-China military conflict–even if waged with strictly conventional weapons–would be terrible no matter who won and would lead to a dramatically altered international economic and diplomatic balance of power that the losing side would be extremely bitter over. Let me be clear: A war between China and the U.S. is extremely unlikely (largely due to economic linkages), and I think it would only have a chance of happening if China's government got really desperate and felt it had more to gain from such a war than it would lose or if some fool in Taiwan tried to declare independence. The point is that there's a possibility of conflict.

In any case, by 2019, China won’t be able to beat the U.S. in combat under most circumstances. They might be able to win the opening stages of a war over Taiwan, but that’s only because Taiwan is just a few miles from China whereas it’s across the biggest ocean in the world from the U.S. In an all-out industrial war like WWII, China will still get beaten as badly in 2019 as in 2009, assuming the American people are willing to fight as hard as they did in WWII (actually not such a safe assumption). On that note, it’s worth keeping in mind that, while China’s military capabilities are rapidly growing, they have a very long way to go to catch up to America’s. We’re talking at least 50 years here. The exact same applies to their economy. Think China’s rich? Compare their per capita GDP with America’s. Even adjusted for PPP, it’s not even close.

[I was right. Today, China remains too weak to beat the U.S. military, and is too weak to take over and hold onto Taiwan. Regarding my “50 year” prediction, I think China’s naval and air forces could be strong enough in as little as 20 years to beat U.S. forces in a war for the “First Island Chain,” but that’s not the same as saying China’s military will be better in every way, and able to beat the U.S. military in any type of engagement and in any part of the world.]

I think it’s useful to remember another episode from our recent history when thinking about China’s projected rise. In the 1980’s, Americans were absolutely convinced that Japan was going to surpass us and become the world’s economic superpower. All of the economic trend lines pointed to such an outcome. But guess what happened? Overinvestment in land and the stock market (done out of the expectation of high future returns indicated by all those upward trend lines) formed a bubble, which popped and led to a recession (similar to our current problem). Unemployment went up, causing consumer spending to go way down. Government stimulus attempts were ineffective. Japan’s elderly population was also a major drain. Something similar could happen to China (though a gradual leveling of GDP growth is also possible), and at some point in the future, we might all be laughing about how worried we were back in 2009 or 2019 about this other Asian juggernaut.

[A Japan-style economic slowdown could still hit China, so my prediction from 2009 still stands. However, I’m far less concerned about the “total meltdown” scenario where China has an economic depression AND a political implosion AND attacks its neighbors, dragging in the U.S. Over the last decade, the CCP has proven itself smarter and more cool-headed than to let such a thing happen.]

[In the original Facebook note, I detoured into a discussion of Chinese history at this point. While interesting, I’m omitting it because it isn’t about futurism.]

…The Chinese respect Americans and Europeans for their accomplishments, but still believe that it was really just a fluke that the West happened to be more advanced than China back starting in the 1800’s when it began expanding into East Asia. The Chinese are extremely proud of their country’s economic, political and military ascension and think that the end result will merely be the righting of wrongs and the establishment of the world order as it always should have been, with China at the natural center. The problem is, there is a real chance these ambitions could be frustrated for the reasons I have discussed, and a wounded and bitter China obsessed with failed expectations would be a menace to everyone.

There is a small chance we could see such a world by 2019, or that we could see it on the horizon. Or, China might defy all expectations, overcome its problems and be on its way to taking the reins of global leadership. Or it could be in some middle ground. The point is that China’s future is really uncertain, and owing to the country’s size and strength, this is a major issue we will need to involve ourselves with.

Moving on, Iraq has an at least even chance of still being a stable country by 2019, with a democratic–albeit highly corrupt–government. I'm not saying it's going to be paradise, but it will be able to take care of itself, and no one will worry about it facing state collapse. Iraq will still be getting a lot of U.S. aid to keep it stable, and I could see small numbers of American troops still in the country for special purposes like training the Iraqi army and fighting terrorists, but we're not going to be losing many guys, so no one will care. On the other hand, there's also the real chance that Iraq could hold together for only a short period after the U.S. withdrawal, and then start disintegrating again. Such a development would initiate a new national debate over here on whether or not to send troops back in, which I believe we ultimately would.

[I was right! When I wrote the original note, there were about 125,000 U.S. troops in Iraq. By early 2012, it had dropped to 5,000. As I thought might happen, Iraq started disintegrating shortly after, and in 2014, ISIS took over large parts of the country. To confront the new threat, U.S. troop levels again increased, but the intensification of combat was tolerated by the American public because our casualties were low. In 2019, Iraq has returned to being a stable country with a corrupt, democratic government.]

As seemingly hopeless as Afghanistan is, I don’t think Obama is going to cut and run and let it degenerate like we did in the 1990’s. The country will surely be a dump in 2019, but there will be some stability.

[I was right. Afghanistan is probably better than it has been in its history, but the social and economic progress it has made thanks to the U.S. and other countries is paper-thin, and would disintegrate if Afghanistan were left to its own devices. Even Afghans acknowledge this.]

I have no idea how the Iranian nuclear problem is going to be resolved, but some kind of solution will have to be found by 2019. They’ll definitely have the ability to make nukes and warheads by then. Considering the downsides of attacking their nuclear infrastructure, I think it’s entirely possible we might just have to let Iran get the bomb, or at least leave them with the capability to do so.

[An “OK” nuclear deal between the U.S. and Iran existed from 2015-18, and hit the pause button on Iran’s nuclear weapons program. Had the deal never existed, and had Iran chosen to develop nuclear weapons as fast as possible, it would have built a nuclear bomb by now. The situation is now in a weird holding pattern, where the Trump administration half-believes it can use sanctions to force Iran to give up nuclear weapons, and Iran is mad about the sanctions but not so much that it’s willing to fully resume nuclear weapon development and risk even worse punishment. As was the case in 2009, I have no idea how this dispute will resolve itself.]

There's also a good chance that the whole "War on Terror" might have wound down and receded from public consciousness by 2019. Sure, crazy Islamists are always going to be a threat, but I could see the momentum being on our side by 2019 and the enemy ranks thinned to a manageable level. It won't necessarily happen, but it's a real possibility.

[I was right!]

Also, at current rates, Medicare will go bankrupt in 2019. Expect this to be a big political issue in the 2016 elections. We’ll find a solution to the problem, though I doubt it will be an efficient or cheap one.

[I was wrong. This hurts because it was a particularly sloppy prediction of mine. The Medicare program, by design, can’t “go bankrupt,” nor was there any chance of it running out of money by 2019. I have no idea what inspired that prediction.]

Oh yeah, Fidel Castro dies by 2019 for sure. Kim Jong-Il also has a good chance of being dead by then. I’m not sure what effect it will have on either country, though I’m more optimistic about Cuba moderating in the coming years. Hosni Mubarak is also going to be kicking the bucket, along with Pope Benedict.

[Muhaha! Castro and Kim died in 2016 and 2011, respectively, and Cuba did “moderate” a great deal during the 2010s, though it is still not a free country. Mubarak and Benedict didn’t die, but they both were effectively removed from the picture due to a coup (2011) and resignation (2013), respectively.]

Warfare and military stuff

By 2019, unmanned military vehicles are going to be more prolific and advanced than they are now. Expect to see unmanned boats and trucks in common American military use. The machines will mostly be under the control of remote human operators.

[I got ahead of myself. Unmanned aircraft are more sophisticated and more numerous in the U.S. military and peer militaries, but unmanned land vehicles and ships are still experimental, so I don’t consider them to be in “common” use.]

Reaper UAV

Of course, the core of our fighting forces will still be human beings, and you’ll still be seeing guys kicking down doors, sleeping in tents and chasing down some rusted AK-47 wielding bums out in some forsaken country even in 2019. Expect that to continue for a couple more decades.

On that note, in addition to Iraq and Afghanistan, the U.S. will doubtless involve itself in a number of small conflicts and operations in the Third World. Yemen, Somalia and elsewhere in Africa seem like the best bets for this.

[Wow, I was super right! The level of violence in Yemen is much worse than it was in 2009, and though U.S. troops aren’t on the ground there, American weapons and indirect support are propping up one faction. U.S. troops did combat operations in Somalia over the past decade and there is now at least one U.S. base there. Elsewhere in Africa, the U.S. military involved itself in conflicts in Libya and Niger, and built a base in the latter. The U.S. also intervened in the Syrian civil war and actually invaded the country. I predict this kind of global policing will continue at about the same level during the 2020s. ]

WWIII–as in, a huge war between the major powers–is highly unlikely though not impossible. I wouldn’t lose any sleep over it.

Cyberwarfare and possibly bioterrorism are much greater threats over the next decade, and I would expect a couple major cyberattacks, not necessarily on the U.S. but against some developed nation. I doubt this would take the form of crippling an entire country, but shutting down all the electricity to a big city isn’t out of the question. While a bioattack could occur, it’s almost certainly just going to involve a normal pathogen like anthrax and won’t be some genetically engineered superbug that kills off half the human race. Sorry to disappoint you apocalypse movie buffs.

[Cyberwarfare did surge in the 2010s, though it overwhelmingly took the form of hacking to steal sensitive data from governments and big companies. There was only one instance of a successful cyberattack meant to shut down a public utility, and it was perpetrated by Russia against Ukraine in 2015.]

The economy

I’m loath to make any predictions here given the huge swings in the global economy we’ve seen over the last two years, but I doubt that we’re headed for some Great Depression part two. The economy will slowly recover, the recession will end, and many Americans will start trying to resume their reckless spending habits. It’s simply an immutable fact about our culture that we are shortsighted and materialistic. I’m not saying things will be as good as they were in 1999 or 2005, but the economy will recover from the recession and will become healthier.

[I was right! In fact, by many measures, the U.S. economy is now stronger than it was in 1999 and 2005. Today’s unemployment rate is the lowest it has been in 50 years. Reckless spending habits are back with a vengeance! SUV sales are higher than ever, and newly constructed houses are bigger than ever.]

However, there’s a giant issue on the horizon that could throw a monkey wrench into all of this–the U.S. budget deficit. This is a major, MAJOR problem that has been kept on the back-burner for years and years while people piddled on about socialized healthcare and the wars. Basically, the U.S. federal government has for almost ten years now been spending way more money than it receives in taxes–the traditional means through which governments are funded. To cover the shortfall, the U.S. has been borrowing money from countries like China, Japan and Saudi Arabia. We now owe these countries trillions of dollars, plus billions in interest, and there are no signs that this dependency is going to shrink in the future if we stay on the present course.

[In the original Facebook note, I went into almost a rant about politics and overspending at this point. I’m omitting most of it because it isn’t relevant to this analysis.]

…Past experience has shown that once national debt reaches a certain % of GDP, foreign and domestic investors start getting spooked about the country’s ability to repay, and they stop lending money to it. Well, we’re getting very close to that point, and between now and 2019, the U.S. will have to make major spending and taxation changes to avoid total disaster. If we don’t take the initiative, most likely our main foreign creditors (China, Japan, Saudi Arabia) are going to start pressuring us in various ways to cut spending and raise taxes. If we don’t, they’ll start reducing their purchases of our Treasury securities.

If we really screw things up by failing to reach some kind of fiscal compromise in Washington, life in America could really, REALLY suck come 2019. However, I think it’s more probable that we will end up going through some tortuous debate within government that ends up with us raising taxes, cutting federal programs, or probably a little of both by 2019, which will avert a major economic crisis.

[I was wrong. At the time, I didn't realize that there is no hard-and-fast rule about how high your country's debt-to-GDP ratio has to get before you have a fiscal crisis. Since the U.S. has the world's biggest economy, trades heavily with all other major countries, and prints its own currency–which, very importantly, is also the world's reserve currency–special rules apply to it. To see what happens when your country lacks these advantages, look at what happened to Greece in 2015. That said, I still think the rising national debt is a problem for the U.S. There are just so many variables at play that I can't predict when or if a sovereign debt crisis will hit, or how bad it will get before we agree to a solution.]

Technology

There are several clear trends that will continue into 2019. For one, we see the vanishing of physical media to hold data. By 2019, the big DVD and Blu-Ray collections you see in peoples’ houses will largely be a thing of the past, at least among people with half a brain or more. People will just download movies into their computers or computer/TV’s, or pay like $1 to watch them On Demand. The same will also be happening to video games: Instead of having to buy an expensive console and a bunch of game cartridges, or a high-end computer, people will use high-speed Internet connections to stream video game feeds from remote central locations. People will be able to play any game they want at any time at low cost, and they won’t have to buy any hardware except controllers and maybe an adapter. The overall costs to gamers will decrease significantly while access to games will increase tremendously. There will still be a lot of highly advanced consoles around by 2019, but the transition to the new technology will be well underway, and the trend will be clear by that point.

[I was mostly right. Sales of DVDs and Blu Ray discs declined over the last decade while movie streaming has exploded and become the norm. Had you bought Netflix stock in December 2009, when its streaming service was in its infancy, your investment would today be worth 33x as much. I don’t see physical storage media bouncing back. Today’s gamers also do have access to a wider variety of games at lower prices than ever thanks to streaming, as the Playstation Network (PSN) and Steam show. My prediction about the end of game consoles might have been a little optimistic, as Sony and Microsoft are planning to release a new generation of consoles next year, but I still think it will come true at some point. Industry insiders are still talking about the impending transition to streaming.]

Hollywood and the video game industry are going to try hard to push 3D TV on the masses, but I’m not sure how quickly it’s going to catch on. People ARE NOT going to put on a pair of 3D glasses every time they watch TV, especially if the glasses cost a lot of money and each household needs to buy several of them so all family members can watch. Hardcore video gamers might be willing to do it to enhance the gaming experience, but not average people just watching the Cooking Channel or something. People might be more amenable to 3D movies in the theaters, but it’s not going to work at home. I don’t think 3D is really going to catch on until holographic TV’s that can produce 3D pictures for the naked eyes are invented, and I could see these at least being in the prototype stage by 2019.

Don’t get me wrong–3D TV definitely seems like the next big thing (after all, at some point, you can’t increase the resolution of 2D TV any further to be discernable to human eyes, and the industry has to start heading in new directions), but I think the 3D glasses are going to be a big stumbling block, and it’s going to take longer for the technology to gain mass acceptance than industry insiders would like. Another major problem is the fact that regular TV’s–including the expensive flatscreen HDTV’s–can’t display 3D images, and normal Blu-Ray players can’t play them, meaning everyone will have to pay a couple thousand bucks again to get everything replaced. There’s also no standard yet for 3D signal broadcasts, and TV signal bandwidths are going to need time to expand to handle them, anyway.

[I was right! And I’m darn proud about this since, in late 2009, we were in the grips of Avatar hysteria, and one of the film’s selling points was that it was made to be seen in 3D. And yes, glasses-free 3D TV prototypes now exist, including one made by “Mopic.”]

Once 3D motion pictures do start to gain widespread appeal, a mini-industry will spring up to “convert” the old 2D movies to 3D in much the same way that old black and white films were colorized. This might start happening by 2019.

I also wouldn’t be surprised to see digital cameras being sold in stores by 2019 that take both 2D and 3D photos, so you could look at them normally or put on your 3D glasses and see them popping out of the computer screen.

[3D movies and photos still haven’t caught on, and I worded my predictions on this to indicate my justified doubts. When glasses-free 3D TVs and advanced VR/AR eyewear become mature technologies, the conversions of older 2D visual content into 3D format will begin. This will probably start by 2029, and will certainly be widespread by 2039. Additionally, there are indeed digital cameras being sold in stores today that can take 3D photos (such as the $350 Vuze XR), but they aren’t very popular.]

E-readers are finally going to go mainstream within a few years, and will be ubiquitous and cheap by 2019. Sure, there will still be people soldiering on with normal newspapers and books, but those will be on their way out for most people, especially younger ones. You’ll go on a bus or a subway or something and see a whole bunch of people looking at their personal e-readers.

[I was basically right. A brand new, high-quality Amazon Kindle 6″ e-reader costs less than $100 today. Dedicated e-readers have, however, been largely superseded by tablet computers, which got much cheaper and better over the last decade and have many more functions than e-readers. It’s worth paying a little extra money to get a device that is so much better. Larger smartphones (sometimes called “phablets”) have also eaten up part of the e-reader market share. It is indeed very common, and in fact the norm, to see people on buses or subways looking at their personal devices, though most of the time it’s a smartphone instead of an e-reader.]

Typical e-reader [in 2009]

Along those lines, by 2019, I think almost every college student will have a laptop or some tablet-like portable computer (maybe an e-reader with an attached stylus and keyboard) that they would bring to class and do most of their work on. These should also be pretty common among high schoolers, though don’t expect pencils and paper to be anywhere near gone by 2019 in schools.

[I was right. 81% of college students now use laptops during class lectures. Among the remaining 19%, I bet some are using tablet computers or even smartphones to take notes. https://www.pcmag.com/news/370271/the-average-college-laptop-shopper-prioritizes-price-speed ]

There’s also going to be a massive increase in the number of amateur recorded videos. At the rates that computer memory costs are decreasing and digital camcorder technology is improving, by 2019 you could feasibly record every second of your entire, boring life–in good quality video–and save it onto your computer or put it onto the Internet. By 2019, also expect basically half the human race to have a high-quality digital camcorder built into their cell phone, computer, slim digital camera, or WHATEVER, and also expect there to be a lot more surveillance cameras everywhere. Between those developments, expect an explosion in “citizen journalism” and voyeurism, and expect for virtually every single disaster (plane crash, tornado, riot), public crime, or act of public obnoxiousness to be recorded and posted onto the Internet within hours for everyone around the world to see. Yeah, I know things are already kind of like this, I’m just saying you should expect it to be ten times more intense and pervasive by 2019. And things will only get worse from there.

[Oh my God I was right! You could strap a GoPro camera to your chest, record 720p footage with audio, and make 90 GB of video per day (assuming you skip the nine hours per day when you’re sleeping and showering). By re-encoding the footage with a more efficient codec at a lower bitrate (sacrificing little visible quality), you could reduce the file size by at least 50%. The resulting 45 GB of data you made each day could be saved onto personal hard drives. A 2TB HDD costs $50 today, and could store 44.4 days’ worth of videos, meaning it would only cost slightly more than $1/day to make and save recordings of almost all your waking hours. That price will of course drop in the future as storage gets cheaper. Moving on, in rich and middle-income countries, over half of adults have smartphones. Even in poor India, over 24% have smartphones, and another 40% have “dumb cell phones,” most of which surely have built-in cameras. My prediction about an explosion in the amount of amateur and semi-professional video content being uploaded to the Internet was also obviously right. In fact, it’s now common for public crimes and acts of obnoxiousness to be recorded by multiple people and from different angles.]
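
For anyone who wants to check the arithmetic, here’s a quick back-of-the-envelope sketch in Python. The bitrate, compression ratio, and drive price are the same rough assumptions I used above, so treat the output as an illustration rather than a precise figure.

```python
# Rough cost of recording all of your waking hours, using the same
# assumptions as the paragraph above (all numbers are estimates).
bitrate_mbps = 13.3                        # assumed 720p camera bitrate
waking_hours = 15                          # 24 hours minus sleeping/showering
raw_gb_per_day = bitrate_mbps * 1e6 * waking_hours * 3600 / 8 / 1e9   # ~90 GB
stored_gb_per_day = raw_gb_per_day * 0.5   # assume re-encoding halves the size
drive_capacity_gb = 2000                   # a $50, 2 TB hard drive
drive_price_usd = 50

days_per_drive = drive_capacity_gb / stored_gb_per_day
print(f"{raw_gb_per_day:.0f} GB/day raw, {days_per_drive:.0f} days per drive, "
      f"${drive_price_usd / days_per_drive:.2f} per day")
# -> roughly 90 GB/day raw, ~45 days per drive, ~$1.12 per day
```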

More generally, by 2019, physical computer memory will be so cheap that for about $50 you could buy more memory than you could ever put to any practical use. For instance, if you scanned every important personal document, photograph and home movie into your computer and also added in all the songs you liked along with pdf copies of all your favorite books, it wouldn’t come anywhere close to filling up your hard disk. By 2019, the only way you could exceed the memory capacity of an average computer would be extreme and pointless hoarding of data: You would have to save 100 or more Blu-Ray quality movies onto your hard drive, or download hundreds of thousands of songs (which would easily include every famous song ever written), have dozens of high-end computer games (circa 2019) on your computer at once, or record every second of your life with hi-def cameras in order to have a problem with disk space. Bottom line: By 2019, computer hard disk space is effectively infinite for normal people.

[I was right. As mentioned, $50 today will buy you a 2TB hard drive, which contains more space than the vast majority of people need. There’s no reason to save movies and music onto your personal drives thanks to cloud storage and streaming.]

It will also be a lot faster and easier to upload videos and pictures to the Internet by 2019. I could see a lot of people using digital cameras that have GPS sensors in them or some other type of location fixing device/software, so every picture and video would automatically be tagged by the device with information on where it was taken. People could easily search and view videos and pictures over the Internet by searching for images of a certain geographic area.

[I was right about it all! Smartphones automatically embed the GPS coordinates into a photo’s metadata at the moment the photo is taken, though this feature can be disabled. ]
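
For the technically curious, those coordinates live in the photo’s EXIF metadata and are easy to read programmatically. Here’s a minimal sketch using Python’s Pillow library; the file name is made up, and the exact value types vary between Pillow versions, so treat it as illustrative rather than production code.

```python
# Read the GPS coordinates a smartphone embeds in a photo's EXIF metadata.
# Requires the Pillow library (pip install Pillow).
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def photo_gps(path):
    """Return (latitude, longitude) in decimal degrees, or None if absent."""
    exif = Image.open(path)._getexif() or {}
    named = {TAGS.get(tag, tag): value for tag, value in exif.items()}
    gps_raw = named.get("GPSInfo")
    if not gps_raw:
        return None                      # the camera had GPS tagging disabled
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_raw.items()}

    def to_decimal(dms, hemisphere):
        degrees, minutes, seconds = (float(v) for v in dms)
        value = degrees + minutes / 60 + seconds / 3600
        return -value if hemisphere in ("S", "W") else value

    return (to_decimal(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
            to_decimal(gps["GPSLongitude"], gps["GPSLongitudeRef"]))

print(photo_gps("vacation_photo.jpg"))   # hypothetical file name
```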

I’d also expect electronic media to be embedded in a lot of magazines, books, wall ads, and products by 2019, meaning you open up a copy of the December 2019 issue of Maxim, and several of the pages feature paper-thin computer displays with moving images and sounds. Most of these will probably be advertisements. I could also see a lot of billboards and wall ads being like this by 2019, and people no longer being shocked or fascinated by them–they’ll just be an everyday thing. If you want an idea of what I’m talking about, watch Minority Report and Children of Men. This was already done for the first time in some magazine this year.

[I was wrong. Paper-thin computer display technology didn’t get cheaper and better as fast as I predicted, and I think we might have to wait until the 2030s for the price/quality level to be good enough to make this a reality. A bigger problem, however, is the decline of the print industry, which includes the magazine sub-industry. Over the last ten years, magazine sales have shrunk by about 50% as people have switched to reading things off of screens instead of paper, and I don’t see how this transition can be stopped. By the time it is possible to make paper-thin digital displays that are so cheap that buyers will be OK with throwing them away, there won’t be much of a magazine industry left. Case in point: Maxim, which I mentioned in my prediction, went from 12 to 10 issues per year in 2012, and has had declining revenues and profits over the last decade.]

By 2019, every new car except the very cheapest will come with a GPS and an MP3 player. Being horribly dependent upon one’s GPS for navigation will be a common thing among new drivers. I wouldn’t be surprised if some computerized, self-driving cars were also on the roads by 2019, though they’ll definitely be an expensive novelty and most people will be scared to ride in them. Far more common will be cars that use some form of computer assistance for things like collision warnings and parallel parking. Affordable hybrids and reliable battery-powered cars with respectable range will be a lot more numerous in 2019 (in part thanks to all the used, circa-2009 Priuses that will be still circulating), though the roads will still be dominated by normal internal combustion vehicles driven by stupid human beings like today. It takes many years for the vehicle fleet to “turn over.”

[I was right about almost everything! The 2019 Hyundai Accent base version, which is a cheaper car model but not one of “the very cheapest,” comes with a built-in MP3 player, but no GPS. However, this is a moot point since in-car MP3 players and GPS receivers have been rendered redundant by smartphones, which have both of those features. The future actually turned out more convenient than I thought. Self-driving cars are on the roads in the form of Teslas, they are still expensive novelties, and a recent poll showed most Americans are afraid to ride in them. ( https://www.forbes.com/sites/tanyamohn/2019/03/28/most-americans-still-afraid-to-ride-in-self-driving-cars/#3991842432da ) Hybrids and pure electric cars are indeed more common today, but gas-powered cars still dominate.]

Solar technology will be cheaper and better than ever in 2019. For just a few thousand dollars, you can buy enough solar panels from Wal-Mart or Home Depot to cover your entire roof. The human labor needed to install it properly might actually end up costing more than the panels themselves. Tens of millions of working- and middle-class people find it within their means to install the solar panels on their property without major financial strain, and rooftop solar arrays become a common sight in the U.S., though such upgrades are still only made to a minority of homes.

[I was right! Also, I got a 7.5 kW solar array installed on my own roof in 2019, and the cost after all tax breaks and other discounts was about $10,000, which is affordable for a middle-class person. Even a person with a working-class income could afford one thanks to loans. Rooftop solar panels indeed became common sights across the U.S. during the 2010s.]
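
To put the price in perspective, here’s a rough payback calculation. The production and electricity-price numbers are typical U.S. figures I’m assuming for illustration, not the actual numbers from my own installation.

```python
# Rough payback period for a residential solar array (all inputs are
# assumed, illustrative values, not figures from my own system).
system_kw = 7.5                 # array size
net_cost_usd = 10000            # price after tax breaks and discounts
kwh_per_kw_per_year = 1400      # assumed annual yield per kW of panels
electricity_price = 0.13        # assumed $ per kWh of grid electricity

annual_savings = system_kw * kwh_per_kw_per_year * electricity_price
print(f"${annual_savings:.0f} saved per year, "
      f"{net_cost_usd / annual_savings:.1f} year payback")
# -> roughly $1,365/year, about a 7-year payback
```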

Along the lines of what Kurzweil keeps harping on, I could see a lot of people using glasses with computers built into them. There’s a clear trend for computers to get smaller, more convenient to use, more portable, and more integrated into everyday life. Just look at how many people have iPhones, which are essentially small computers. A logical next stage would be to have the computer display permanently in your field of vision, meaning a ghosted heads-up display overlaying what you see in the real world. The glasses themselves might have their own independent computers, or they might function like Bluetooths and be dependent upon signals received from the iPhone in your pocket. At the very least, these would be useful for navigation and for displaying information about stores, places and things you encounter. I’m not going to screw myself over like Kurzweil and make a bunch of specific predictions about how the glasses will work, etc., but I think it would make sense for the makers of these glasses to first start marketing them to people who wear glasses anyway thanks to bad eyesight. Maybe you’re going in to get your frames changed or something, and you fork over the extra $100 to get the little computer built into your new frame. Pretty soon, you’re bragging to all your normally sighted friends about it, you let them wear it for a couple minutes so they can see what it’s like, and maybe that pushes them to start buying glasses of their own even though they ordinarily never wear them. By no means do I think the majority of people will have these things by 2019, but I could see them being a viable technology by then that people don’t consider weird. Let me also make a highly specific prediction about this: Once Apple gets into making these things, it will call them–what else–but the iGlass. Ha ha ha!!!

[I was wrong. Google tried to introduce the first augmented reality glasses in 2013, and it was possibly the biggest tech industry failure of the decade. Once large numbers of people started using them, problems that I didn’t foresee in 2009 became clear, such as the unwillingness of many normally sighted people to wear glasses all the time, and dismay from other people that someone else’s AR glasses could be surreptitiously recording them. The 2010s were also the decade when technology fatigue, social media addiction, and “fear of missing out” (FOMO) became real problems, and people realized that being connected to the virtual world all the time with devices like AR glasses might be a bad thing. Again, I couldn’t have foreseen this. That said, I don’t think AR eyewear is dead forever, and in fact I predict it will return as a niche product in the 2020s once the technology is better and cheaper.]

At the very least, by 2019, most everyone will have the equivalent of an iPhone. Normal cell phones strictly for calling other people and sending text messages will be rare.

Just more generally, computers will become more ubiquitous, helpful and user-friendly. By 2019, you’ll be able to just type a natural language question into your computer (probably your iPhone or whatever equivalent you have) or some website (“What’s a really good, cheap Chinese restaurant around here?”) or maybe even speak the question into the computer’s microphone, and it will be able to understand you and give a useful answer (“Ho Fat’s: Average rating is four stars, average entree price is $7, located four blocks ahead.”). It won’t work all the time, but will be effective and reliable enough for many people to use it and benefit from it.

[I was right about everything! Our devices and computers will get better at these things over the 2020s, and will evolve from merely responding to our requests to anticipating our needs and proactively suggesting useful things to us. In the near future, your life will be better if you follow your computer’s daily advice.]

In terms of health technology, by 2019, anyone will be able to submit a blood or saliva sample to a lab and get a copy of their personal genome for a few hundred bucks, if not less. Instead of getting some horribly long printout, you would get the data on a thumbdrive or something like that. People would find the information valuable for health purposes since it would inform them of hereditary health risks they might face, and would allow them to take precautions beforehand, but it is not going to lead to the revolution in personal healthcare by 2019 that some are expecting. Maybe it would add two or three years onto the average life expectancy of the population.

[I was right! Dante Labs does high-quality, whole-genome sequencing for $600, and you send them your DNA with a “spit kit.” The genomic data are returned to you in the form of a .txt file. The personal health benefits of having this information are small because we still don’t understand most of the human genome.]

By 2019, there will be a prescription pill on the market that slows the human aging process, delaying death and extending life. But expect it to cost a lot and to deliver minimal benefits, like you take it every day starting in your 20’s and you end up living to 90 instead of 88.

[It’s not clear if I was right. The problem with my prediction was, to prove that a pill extends human lifespan, decades of clinical trials would need to happen so differences in mortality and rate of aging could be discerned between people who took the pill and people who didn’t. Giving it only ten years for science to settle the matter was a mistake. That said, I’m heartened by the number of new drugs that were popularized over the last decade that have some scientific basis for having “life extension” properties (metformin and rapamycin), and in the fullness of time, I predict we’ll have conclusive evidence that at least one of today’s unproven anti-aging drugs does extend human lifespan.]

Household robots will be fairly common by 2019 and will be doing stuff like vacuuming the floor, mowing the lawn and dragging the trashcans to the curb. They won’t be humanoid in shape and instead will have very utilitarian and function-specific designs. Industrial robots will be more advanced, and I could see greater use of robots in labor-intensive industries doing things like picking fruits and vegetables from farm fields, which would erode our demand for illegal immigrant labor and mitigate the demographic shifts we’re expecting. A lot of the technologies necessary for creating these affordable, dependable robots will come from military research.

[I was mostly wrong. Vacuum cleaner robots have gotten much cheaper and more common since 2009, but that’s the only inroad robots made into people’s homes. Human hands still do almost all of the fruit- and vegetable-picking on farms, though experimental robots have gotten much better. The technologies just didn’t advance as fast as I thought.]

Many more people will telecommute. Also, taking college classes remotely will be a lot more common and more respectable by 2019 (which will be a good thing), though the vast majority of young people will still want to be physically present in the classroom and get the campus/college life experience.

[I was right.]

Space

I wouldn’t be surprised if, by 2019, space probes had discovered life or proof of life elsewhere in our Solar System. I’m not talking about little green men, I mean microbes and fungi. We’re most likely to find this stuff in the soils of Mars or on some of the moons of Jupiter and Saturn. Instead of destroying the basis for religions like Christianity, I think their adherents will find a way to rationalize it and reconcile it with their beliefs.

[I was neither wrong nor right because I hedged my statement with the uncertain phrase “I wouldn’t be surprised if…” In fact, I didn’t even make a real prediction. Life still hasn’t been found outside of Earth, but I still think it’s very possible that simple alien life forms like microbes and fungi exist in our Solar System and beyond. I can’t predict when we might find a specimen.]

Europa–one of the moons of Jupiter and a candidate for extraterrestrial life. Its surface is frozen, but beneath the ice lies an ocean of liquid water.

I also wouldn’t be surprised if one of our telescopes spotted a distant planet with Earth-like conditions by 2019. It would be pretty cool, the first grainy pictures of the planet would be on the cover of TIME magazine, and I’m sure it would change the way people thought about the importance of the space program, but we’d really just continue with our daily lives. A lot of our whacko, conspiracy theory types would latch onto these findings and start renewing their paranoia over aliens.

[Many potentially habitable exoplanets have been discovered since 2009 (https://en.wikipedia.org/wiki/List_of_extrasolar_candidates_for_liquid_water ), but we don’t have proof they have life. We don’t have quality photos of these exoplanets because we don’t have multi-trillion dollar space telescopes whose lenses are several square kilometers in area, which is what would be needed to capture enough of the infinitesimal visible light reflecting off an exoplanet to make a photograph. Because I now have an elementary grasp of optics, I understand why a detailed photo of an exoplanet won’t grace the cover of TIME magazine for a long time.]
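
To make the optics point concrete, here’s a rough diffraction-limit calculation. The target distance, wavelength, and the “ten pixels across the planet” standard for a detailed photo are assumptions chosen for illustration; the takeaway is simply that the required mirror would be absurdly large.

```python
# How big a telescope mirror would be needed to photograph an Earth-sized
# exoplanet? A rough diffraction-limit estimate with assumed, illustrative inputs.
wavelength = 550e-9                 # visible light, in meters
planet_diameter = 1.27e7            # an Earth-sized planet, in meters
distance = 10 * 9.46e15             # assume a planet 10 light-years away

angular_size = planet_diameter / distance            # ~1.3e-10 radians
# Rayleigh criterion: smallest resolvable angle = 1.22 * wavelength / aperture
aperture_one_pixel = 1.22 * wavelength / angular_size
aperture_ten_pixels = aperture_one_pixel * 10        # a crude "detailed photo"

print(f"~{aperture_one_pixel / 1000:.0f} km mirror just to resolve the planet as more than a dot")
print(f"~{aperture_ten_pixels / 1000:.0f} km mirror for even a crude ten-pixel-wide image")
```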

By 2019, we’ll probably be in a mini-space race with China to go back to the Moon. No one will have landed humans there, but the time for such an event would be measurably close.

[I think I was right. China landed its first rover on the Moon recently, is planning a second one, and probably has the long-term goal of landing a man on the Moon. The U.S. Vice President has also declared that there is a new space race with China, and that America’s response should be a manned Moon landing by 2024. I predict that deadline will slip, but a landing by the end of this decade is plausible.]

“Special” problems

The world isn’t going to face any major risks in 2012, at least not because of anything the ancient Mayans said. Keep in mind that the Mayans were such great futurists that they didn’t predict the Spanish showing up in the 1500’s and massacring them. It’s also unclear whether the Mayans even believed 2012 would bring any kind of disasters to the world. If anything, they would have been happy about the milestone. Finally, let’s keep in mind that the Mayan calendar isn’t really ending in 2012; we’re just supposedly transitioning into a new age of mankind. According to the Mayans, this has happened several times in human history, the last occurrence being in 3114 BC (Year Zero to the Mayans), when the current age of mankind began. If the transition dates between each age of man are times of great death and disaster as 2012 proponents claim, then 3114 BC should have likewise been a period of great suffering, but historical and archaeological records show no evidence of any problems that year. It looks like–gasp–the Mayans made it all up.

[Mayan doomsday didn’t happen, and I remember spending the first half of December 21, 2012 filling out boring paperwork at a bank.]
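
For anyone who wants to check the calendar arithmetic behind that 3114 BC date: the 2012 hype rested on the Long Count calendar completing a cycle of 13 b’ak’tuns of 144,000 days each, with the cycle ending on December 21, 2012 under the standard correlation. The quick calculation below shows why the cycle’s start lands in 3114 BC.

```python
# The Maya Long Count cycle: 13 b'ak'tuns of 144,000 days each.
days_per_baktun = 144_000
cycle_days = 13 * days_per_baktun          # 1,872,000 days
cycle_years = cycle_days / 365.2425        # ~5,125 years

end_year = 2012                            # cycle ends December 21, 2012
start_year = end_year - cycle_years        # ~ -3113
# The BC/AD convention has no year 0, so the year -3113 here is 3114 BC.
print(f"Cycle length: {cycle_years:.0f} years; start: around year {start_year:.0f}, i.e. 3114 BC")
```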

The world’s climate almost certainly won’t be detectably different ten years from now. Sure, it will be 0.1 degrees Celsius warmer, but you’re not going to see any major changes in coastlines or weather thanks to that. Runaway global warming is a possibility, just as the Earth getting hit by a giant asteroid is, though the mainstream of climatologists dismisses the theory. If anything, I think the threat of global warming is exaggerated.

[I was right. In spite of the breathless, dour pronouncements that “Global warming MAY HAVE CONTRIBUTED to this latest disaster” that are now daily pablum on the news, the planet’s overall climate is not noticeably different to people than it was ten years ago. I still consider “runaway global warming” to be a very remote possibility. ]

Peak Oil may or may not happen during the teen years. This is another outcome that is very difficult to predict. Once the recession ends and petroleum demand picks up again, we’re going to see $4 gasoline again pretty soon, and I don’t see it getting much cheaper than that. But we’re not going to “run out” of oil EVER. There’s simply too much on this planet–the biggest bottleneck is our ability to extract and process it. By 2019, gas could easily be north of $4 per gallon, and there might be many more people taking mass transit or using battery powered cars, but there’s not going to be any collapse in oil supplies. We’ll just get used to it.

[Overall, I’d rate my prediction as “wrong.” Not only did Peak Oil not happen, but gas prices have stayed below the $3.00 mark in most of the U.S. for the last five years in spite of a booming economy. Fracking changed everything. It was a change I didn’t see coming, but I was in good company.]

[I’m leaving out two paragraphs from my original Facebook Note where I talk about and debunk the “Prophecy of the Popes” because the whole topic is silly and unscientific. You can research it on your own if you want: https://en.wikipedia.org/wiki/Prophecy_of_the_Popes ]

California seems kind of overdue for a big earthquake, doesn’t it? (I probably should say this right now since I’m actually in San Diego at the moment, right over a fault line) I would expect a significant one by 2019.

[I was wrong, and I quit the business of “earthquake prediction” years ago. Even the best seismologists in the world can’t make useful forecasts.]

I’d like to end this section by making an important point: I’ve come to realize that most people have a natural tendency to believe that the world is always getting worse, to be pessimistic and to believe that the worst case scenario will occur. You can see this in the slew of zombie horror movies, in the books and films about 2012 and the apocalypse, and in commonly held views about the future of the world. I believe that this mostly stems from a perverse fascination that people have with spectacle and disaster, from the millennialist tradition of the Abrahamic faiths that predominate in the West, and from a strong and usually secret desire among many people–particularly survivalists, young men, and individuals frustrated by their low ranking in the current, orderly society–to experience adventure and “natural” living instead of their boring, normal lives. Often, these desires are informed by immaturity and by mistaken notions of what such a postapocalyptic world would be like (imagine being in Mogadishu or Darfur and being just as poor, starving, stuck, and badly armed as everyone else).

A common retort is that “this time it’s different” because there are so many “signs” of impending disaster occurring at once. Really? I hope that I’ve shown here the flaws of such prophecies, and just because there are a lot of them doesn’t mean anything. 0 + 0 + 0 + 0 = 0. Moreover, I think to a large extent that the paranoia is being fueled by the media and by the entertainment industry, which themselves are just essentially parroting to the masses what they know the masses want to hear and not tapping into some kind of cosmic truth about the future. The “experts” who also harp on catastrophes like Peak Oil, 2012 and the Biblical apocalypse and lend seeming credence to them usually stand to gain something (typically money, resume padding, fame, or just an ego boost) from being in the public light, and they almost always lack the necessary facts and data to assert their ideas with anything approaching true certainty. Of course, the experts on the opposing side who claim that things actually aren’t as bad as most people think and won’t end calamitously are usually ignored by average people because they’re not as exciting as the other guys. The whole phenomenon is silly and shows the consequences of irrational human thinking.

[I still stand by all of this! These beliefs have in fact been strengthened by things I learned over the last decade about evolutionary psychology and the negativity bias.]

The Edster

Eddie will be 35 and will be in a mid-management position at some big company or probably the government. Hopefully, his mind won’t be dulled yet by the drudgery of the workplace, and he will still be creative.

Perhaps there shall be a Mrs. Eddie…or perhaps not. In any case, Eddie will be feeling the desire to generate Eddie Jrs in the next few years if he does not already have them since having kids at 50 would be too old and Eddie would be a stodgy and out-of-touch dad. 2019 would start the optimal time window for Eddie to start reproducing.

Eddie will have read an enormous number of books by this point and will have more advanced knowledge in several fields, including evolutionary psychology and philosophy. Eddie will also have traveled widely by this point and will have visited many countries, definitely including Thailand, Britain, Portugal, Spain, France, and Italy. Eddie will have visited all 50 states and will own a small RV and boat to assist with these travels.

There is a chance that Eddie might be involved in a Ph.D. program in 2019.

By 2019, Eddie will own several houses that he will rent out to tenants on the side. Eddie will have enough of these by 2019 to start seriously thinking about quitting his normal day job and just working 15 hours a week doing rental real estate and spending the rest of his time at leisure and doing personal pursuits. Perhaps Eddie will begin making serious plans to work his way into the Travelers’ Century Club.

[I hit the nail on the head! My math was miraculously right, and I did indeed turn 35 ten years after I turned 25. My career situation closely matches my predictions, I’ve traveled widely (though I fell one state short of my 50 state goal), but don’t have the RV. Also, after visiting the first 17 countries, I realized there is a lot of repetition in the world and some places just aren’t worth seeing, so I dropped my long-term goal of seeing 100+ countries so I can hang out with old people in the Travelers’ Century Club. Very fortunately, I opted against pursuing a Ph.D and invested my time in wiser endeavors like playing more video games.]

[That’s the end of the original Facebook Note. However, over the next few years, I added new predictions to it in the form of Comments, which I’m posting below this, along with their timestamps and my evaluations of them.]

March 26, 2011: Another thing to add under the “Technology” section: By the end of 2019, the 2-D TV paradigm will have finally reached maturity. The problems and tradeoffs that currently dog digital TV sets (motion blur, bad-looking anti-jitter settings, dull blacks and whites, etc.) will be solved, and the picture quality will be perfect at last. The price for digital TV sets will also have come down so much that a 60″ monster will cost a thousand bucks or less, so having such an appliance will be the new standard. Almost all types of big-screen TV’s circa-2019 will be less than two inches thick, and some might in fact be incredibly thin and light. Of course, rather than let us be happy with this, Hollywood and the electronics industry will keep pushing us to buy even better TV technologies. As I’ve said, there’s a good chance we will be transitioning to 3-D TV’s in large numbers, and by the end of 2019, it’s possible that holographic TV’s might be in mass production. The industry might also have some new, ultra-high-res format better than 1920×1080 that it’s trying to push on consumers for 2-D TV’s, though I don’t see why anyone in their right mind would NEED something higher res than Blu-Ray.

[I was mostly right. New 2D TVs have solved all the technical problems with accurately displaying colors and moving objects. They actually improved more on all the metrics I listed than I predicted they would. The industry is now pushing 4K format on consumers, and people are buying it even though few of them need it.]
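
My skepticism about resolutions beyond 1080p has a simple basis in viewing geometry, which this quick sketch illustrates. It assumes a 60-inch, 16:9 screen and the common rule of thumb that 20/20 vision resolves details about one arcminute apart.

```python
# At what distance do individual 1080p pixels on a 60" TV become invisible?
# Assumptions: 16:9 screen, 20/20 vision resolving about 1 arcminute.
import math

diagonal_in = 60
screen_height_in = diagonal_in * 9 / math.hypot(16, 9)   # ~29.4 inches
pixel_size_in = screen_height_in / 1080                   # ~0.027 inches

one_arcminute = math.radians(1 / 60)                      # ~2.9e-4 radians
blend_distance_in = pixel_size_in / one_arcminute          # small-angle approximation

print(f"Pixels blend together beyond ~{blend_distance_in / 12:.1f} feet")
# -> roughly 8 feet; from farther away, 4K adds little you can actually see.
```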

March 26, 2011: Also, let me clarify something. By 2019, I believe that DVD’s and Blu-Rays will be largely obsolete and that most people will stream hi-def movies over the Internet whenever they want to watch them. However, that doesn’t mean all of those discs are going to magically disappear. Yeah, you’ll still see them for sale at Wal-Mart and you’ll still see them cluttering up peoples’ houses, just in the same way you can still find VHS tapes all over the place. But by that far in the future, discs will be old technology that is clearly on its way out. Sales will be way down and still declining, and stores will probably have to slash prices way down on Blu-Rays to $5 to get anyone to buy them. Redbox might still exist and still rent Blu-Rays, but the technology’s niche in our lives will have shrunk to the margins.

[I was right! Wal-Mart now literally sells Blu-Ray movies in unorganized bargain bins. Redbox still exists and rents discs to people, but the company has been ailing for years due to the rise of competitors that deliver streamed content.]

December 16, 2011: By 2019, LED lights will finally be perfected and will be the new standard for industrial, commercial and residential lighting. LED’s will be cheap, will produce natural-looking light, and of course won’t burn out for 10+ years.

[I was right!]

December 27, 2011: By the end of 2019, the following gadgets will be obsolete:
1) Standalone GPS devices (GPS features will be built into other devices you will still carry)
2) Tablets exclusively used for E-reading (tablet tech will be so advanced that there will be no point in buying such limited devices)
3) Cellphones that aren’t smart phones (smart phones will be so cheap that there won’t be any point in buying a “dumb” phone)
4) Pocket digital cameras (will be replaced by cameras built into smart phones–DSLR’s will still have a niche, though)
5) DVD players (Blu-Ray players and discs will be dirt cheap by the end of 2019)
6) Recordable CD’s and DVD’s (thumbdrives, cloud storage and streamed content will replace discs)
Yes, I took this from a recent Yahoo news article entitled “7 Gadgets that won’t be around in 2020.”

[I was right.]

December 27, 2011: Also, by the end of 2019, most new digital cameras will capture pictures in 3D and through use of multifocus technology, whereby one push of the shutter button actually takes multiple pictures of the same image at different focal depths, so that the viewer can later “zoom” in and out of any given photograph to see images of the foreground, background, or any arbitrary distance from the lens in focus. Computer facial recognition technology will also be so advanced that computers could automatically identify all the faces shown in a given photo.

[The first prediction about multifocus camera tech being the norm was wrong, but the second prediction about facial recognition was right.]

December 27, 2011: Also, by the end of 2019, I believe free cell phone service will exist. It will probably be just basic talk and text, and a company like Google or Apple will run the service.

[I was wrong, though the cost of a typical cell phone plan dropped.]

[And that’s a wrap! If you’re curious to know what my predictions are for ten years hence, this month I’m publishing a big list of predictions for that and other future dates, so stay tuned!]

Review: “Blade Runner”

Plot:

In the year 2019, a race of “bioengineered” humans called “replicants” exists and is used as slave labor and soldiers on space colonies. While made superior to ordinary humans in most respects (strength, pain tolerance, intelligence), replicants have deliberately capped lifespans of only four years to limit the amount of damage they can do should they rebel against their masters, and they are not allowed on Earth itself. This doesn’t stop a small group of replicants–including several who have enhanced combat traits–from hijacking a space ship and traveling to Earth to confront their “creator,” the head of the company that manufactured them and all other replicants, and to force him to technologically extend their lifespans. The replicants smuggle themselves into Los Angeles, where the company’s headquarters is.

Upon discovering the infiltration, the LAPD hires a bounty hunter named “Rick Deckard” to hunt down the replicants. Deckard’s background is never clearly explained, but he has good detective skills and has killed replicants before. As he follows leads and tracks them down, Deckard meets a love interest and is forced to confront his biases about replicants and consider existential questions about them and himself.

An important fact must be clarified and emphasized. Replicants ARE NOT robots or androids; they are “bio-engineered” humans. They don’t have metal body parts or microchip brains, and instead are made of flesh and blood like us. As proof, there are several scenes in Blade Runner where the replicant characters are hurt or killed, and they display pain responses to injuries and bleed red blood.

A replicant named “Zhora,” dead after being shot in the back with a handgun. Note the blood.

Additionally, it’s made clear that replicants can only be distinguished from humans by a sit-down interview with a trained examiner in which the subject is asked a series of odd questions (called the “Voight-Kampff Test”) while their physiological and spoken responses are analyzed. The procedure looks like a polygraph test. If replicants were robots with metal bones, microchip brains, or something like that, then a simple X-ray scan or metal detector wand would reveal them, and there’d be no need for a drawn-out interview. Likewise, if the replicants were organic, but fundamentally different from humans, then this could also be quickly detected with medical scans to image their bones and organs, and with DNA tests to check for anomalies like having something other than 46 chromosomes.

By deduction, it must be true that replicants are flesh-and-blood humans, albeit ones that are produced and birthed in labs and biologically/genetically engineered to have trait profiles suited for specific jobs. The available evidence leads me to suspect that replicants are “assembled” in the lab by fitting together body parts and organs, the way you might put together a Mr. Potato Head. They are then “born” as full-grown adults and come pre-programmed with fake memories and possibly work skills. Replicants are human slaves, technologically engineered for subservience and skill.

Analysis:

Los Angeles will be polluted and industrial. In the film, Los Angeles is a grim, hectic place where fire-belching smokestacks are within sight of the city’s residential core. During the few daylight scenes, the air is very hazy with smog. This depiction of 2019 fortunately turned out wrong, and in fact, Los Angeles’ air quality is much better than it was when Blade Runner was released in 1982.

This improvement hasn’t just happened to L.A.–across the U.S. and other Western countries, air pollution has sharply declined over the last 30-40 years thanks to stricter laws on car emissions, industrial activity, and energy efficiency. With average Westerners now accustomed to clean air and more aware of environmental problems, I don’t see how things could ever backslide to Blade Runner extremes, so long as oxygen-breathing humans like us control the planet.

National average pollution figures from the U.S. EPA

Of course, the improvements have been largely confined to the Western world. China and India–which rapidly industrialized as the West was cleaning itself up–now have smog levels that, on bad days, are probably the same as Blade Runner’s L.A. This has understandably become a major political issue in both countries, and they will follow the West’s path improving their air quality over the coming decades. In the future, particulate air pollution will continue to be concentrated in the countries that are going through industrial phases of their economic development.

This looks like a shot from Blade Runner, but is actually a photo taken on a smoggy evening in Beijing in 2013.
The building, named “Pangu Plaza,” on a clear day.

Real estate will be cheap in Los Angeles. One of the minor characters is a high-ranking employee at the company that makes the replicants. He lives alone in a large, abandoned apartment building somewhere in Los Angeles. After being tricked into letting the replicants into his abode, he gestures to the cavernous space and says: “No housing shortage around here. Plenty of room for everybody.” In fact, the exact opposite of this came true, and Los Angeles is in the grips of a housing shortage, widespread unaffordability of apartments and houses, and record-breaking numbers of poorer people having to live on the streets or in homeless shelters.

The problems owe to the rise of citizen groups that oppose new construction, historical preservationists, and innumerable new zoning, environmental, and labor laws, all of which have made it too hard to build enough housing–priced affordably for the people who actually work there–to keep up with the city’s population growth since 1982. Blade Runner envisioned a grim 2019 for Los Angeles, courtesy of unchecked capitalism (e.g. – smokestacks in the city, smoggy air, megacorporations that play God by mass producing slaves), yet the city (and California more generally) actually went down the opposite path by embracing citizen activism, unionists, and big government, ironically leading to a different set of quality of life problems. Fittingly, the building that stood in for the derelict apartment building in Blade Runner has now been fully renovated, is a government-protected landmark, and is full of deep-pocketed, trendy businesses.

The vast majority of Los Angeles’ land area is covered by single-family homes and low-rise buildings.

There will be flying cars. One iconic element of Blade Runner is its flying cars, called “spinners.” They’re shaped and proportioned similarly to conventional, road-only cars, and they’re able to drive on roads, but they can also take off straight up into the air. Clearly, we don’t have flying cars like this today, and for reasons I discussed at length in my blog entry about flying cars, I doubt we ever will.

I won’t repeat the points I made in that other blog entry, but let me briefly say here that the spinners are particularly unrealistic types of flying cars because they don’t have propellers or any other device that lifts the craft up by blowing air at the ground. Instead, they seem to operate thanks to some kind of scientifically impossible force–maybe “anti-gravity”–that lets them fly almost silently. There are brief shots in the film where low-flying spinners belch smoke from their undersides, which made me wonder if they were vectored thrust nozzles like those found on F-35 jets. But because the smoke comes out at low speed, the undermounted nozzles are not near the crafts’ centers of gravity, and the smoke isn’t seen coming out when the spinners are flying at higher altitudes, I don’t think they help levitate the spinners any more than a tailpipe helps a conventional car drive forward on a road.

A flying car expelling exhaust from its underside during takeoff.

People will smoke indoors. In several scenes, characters are shown smoking cigarettes indoors. This depiction of 2019 is very inaccurate, though in fairness the people who made the movie couldn’t have foreseen the cultural and legal sea changes towards smoking that would happen in the 1990s and 2000s.

People in Blade Runner like smoking indoors. No one stops them, and there aren’t any “No Smoking” signs.

When judging the prediction, also consider that if average people and the legal framework were more enlightened, vaping indoors would be much more common today. While not “healthy,” vaping nicotine is vastly less harmful to a person’s health than smoking cigarettes, and science has not yet found any health impact of exposure to “secondhand vape smoke.”

A recent photo of a young woman smoking an e-cigarette.

There will be genetically engineered humans. In Blade Runner, mankind has created a race of genetically engineered humans called “replicants” to do labor. The genetic profile of each replicant is tailored to the needs of his or her given field of work. For example, one of the film’s replicant characters, a female named “Pris,” is a prostitute, so she is made to be physically attractive and to have average intelligence. All of the replicant characters clearly had high levels of strength and very high pain tolerances.

Digital dossier on the replicant “Pris”

In the most basic sense, Blade Runner was right, because genetically engineered humans do exist in 2019. There are probably dozens of people alive right now who were produced with a special in vitro fertilization (IVF) procedure called “mitochondrial replacement therapy” in which an egg from a woman with genetically defective mitochondria is infused with genetically normal mitochondria from a third person, and then the “engineered” egg is combined with sperm to produce a zygote. The first such child was born in 1997.

Additionally, there are now two humans with genetically engineered nuclear DNA, and they were both born in November 2018 in China after a rogue geneticist used CRISPR to change both of their genomes. Those edits, however, were very small, and will probably not manifest themselves in any detectable way as the babies grow up, meaning Blade Runner‘s prediction that there would be genetically engineered adults with meaningfully enhanced strength, intelligence, and looks in 2019 failed to come true. This is because it has proven very hard to edit human genes without accidentally damaging the target gene or some other one, and because most human traits (height, IQ, strength, etc.) are each controlled by dozens or hundreds of different genes, each having a small effect.

For example, there’s no single gene that controls a human’s intelligence level; there are probably over 1,000 genes that, in aggregate, determine how smart the person is and in what areas (math, verbal, musical). If you use CRISPR to flip any one of those genes in the “smart” direction, it will raise the person’s IQ by 1 point, so you just have to flip 40 genes to create a genius. But CRISPR is an imprecise tool, so every time you use it to flip one gene, there’s a 20% chance that CRISPR will accidentally change a completely different gene as well, perhaps causing the person to have a higher risk of cancer, schizophrenia or a birth defect.
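
Using the illustrative numbers from the paragraph above, it’s easy to see why stacking many edits is a dealbreaker with an imprecise tool. This is just a sketch of how the error probability compounds, not a claim about real-world CRISPR error rates.

```python
# If each edit has an (illustrative) 20% chance of disturbing an unintended
# gene, what happens when you stack 40 edits to build a "genius" genome?
edits = 40
off_target_chance = 0.20

p_all_clean = (1 - off_target_chance) ** edits     # every edit lands cleanly
expected_accidents = edits * off_target_chance     # average number of misses

print(f"Chance of zero off-target hits: {p_all_clean:.4%}")            # ~0.013%
print(f"Expected number of off-target hits: {expected_accidents:.0f}")  # 8
```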

The discovery of CRISPR was a milestone in the history of genetic technology, and it improved our ability to do genetic engineering by leaps and bounds, but it’s simply not precise enough or safe enough to make humans with the major enhancements that the replicants had. We’ll have to wait for the next big breakthrough. I can’t predict when that will happen, and I doubt anyone else could, since there’s no “trend line” for this area of technology.

That’s not to say that we couldn’t use existing (or near-term) genetic technologies to make humans with certain attributes. A technique called “preimplantation genetic screening” (PGS) involves the creation of several human zygotes through IVF, followed by gene sequencing of each zygote and implantation of the one with the best genetic traits in the mother. This isn’t true “genetic engineering,” but it accomplishes much the same thing. And you could sharply raise the odds of getting a zygote with specific characteristics if you did the IVF using sperm or eggs from adults who already had those characteristics. For example, if you wanted to use genetic technology to make a physically strong person, you would get the sperm or eggs of a bodybuilder from a sperm/egg bank, use them for an IVF procedure, and then employ PGS to find the fertilized egg that had the most gene variants known to correlate with high strength. This would almost certainly yield a person of above-average physical strength, without making use of bona fide “genetic engineering.” There are no statistics on how many live babies have been produced through this two-step process, but roughly 8 million babies have been born through IVF worldwide, so if we assume just 0.1% of them were conceived this way, the number is over 8,000 globally as of this writing.

Furthermore, I can imagine how, within 20 years, genetic engineering could be applied to enhance the zygotes further. Within that timeframe, we will probably discover which mitochondrial genes code for athleticism, and by using mitochondrial replacement therapy, we could tweak our PGS-produced zygote still further. Let’s assume that there are ten nuclear genes coding for physical strength. The average person has five of those genes flipped to “weak” and five flipped to “strong,” resulting in average overall strength. Our carefully bred, deliberately selected zygote has nine genes flipped to “strong” and one flipped to “weak.” Since we only have to change one gene to genetically “max out” this zygote’s physical strength, the use of CRISPR is deemed an acceptable risk (error rates are lower than they were in 2019 anyway thanks to lab techniques discovered since then), and it works. The person grows up to be a top bodybuilder.
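
The scenario above can be made quantitative with a little binomial arithmetic. The allele probabilities and embryo count below are invented purely for illustration, but they show why starting from the gametes of people who already have the trait makes finding a “nine out of ten” embryo far more likely.

```python
# Odds of producing an embryo with at least 9 of 10 "strong" gene variants,
# under invented, illustrative probabilities (not real genetics data).
from math import comb

def p_at_least(k, n, p):
    """Probability of k or more successes in n draws with per-draw probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

genes = 10
per_embryo_average = p_at_least(9, genes, 0.5)    # typical parents: 50/50 per locus
per_embryo_selected = p_at_least(9, genes, 0.8)   # bodybuilder gametes: assume 80%

embryos = 10                                      # embryos produced in one IVF cycle
chance_average = 1 - (1 - per_embryo_average) ** embryos
chance_selected = 1 - (1 - per_embryo_selected) ** embryos

print(f"Typical parents:  {chance_average:.0%} chance one embryo qualifies")
print(f"Selected gametes: {chance_selected:.0%} chance one embryo qualifies")
# -> roughly 10% versus about 99%
```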

There will be genetically engineered super-soldiers. The leader of the replicant gang in Blade Runner is named “Roy Batty,” and he was designed with traits suited for military combat. Having governments or evil companies make genetically engineered or cloned super-soldiers is a common trope in sci fi, but I doubt it will ever happen, except perhaps in very small numbers.

First, I simply don’t believe that the government of any free country, and even most authoritarian ones, would be willing to undertake such a project. And even if one of them were, the diplomatic costs imposed by other countries on the basis of human rights would probably outweigh the benefits of having the small number of super-soldiers. Mass producing millions of super-soldiers to fill out an army (to be clear, there was no evidence of anything other than small-batch production in Blade Runner) is even less plausible, as it would be too fascist and dehumanizing a proposal for even the most hardline dictatorships. Censure from the international community would also be severe. What damage can you do with an army of genetic super-soldiers if years of economic sanctions have left you without any money for bullets?

Second, a country’s ability to make super-soldiers will be constrained by its ability to raise and educate them. In spite of their genetic endowments, the super-soldiers would only be effective in combat if they were educated to at least the high school level and psychologically well-adjusted, which means costly, multi-year investments would need to be made. Where would the state find enough women who were willing to be implanted with super-soldier embryos and carry them until birth? If the government coerced its women into doing this, the country would become an international pariah for sure, and its neighbors would strengthen their own armies out of concern at such derangement.

Who would raise the children? State-run orphanages are almost universally terrible at this, and too many of the super-soldiers would turn out to be mentally or emotionally unfit for military service, or perhaps fit, but no better overall than a non-genetically engineered soldier who was raised by a decent family. If the government instead forced families to raise the super-soldier kids, doubtless many would be damaged by family dysfunction at the hands of parents who didn’t want them or parents who raised them improperly.

Third, by the time we have the technology to make genetic super-soldiers at relatively low cost, and by the time any such super-soldiers get old enough to start military service, militaries will probably have switched to AIs and combat robots that are even better. As I predicted in my Starship Troopers review, a fully automated or 95% automated military force could exist as early as 2095.

And if the super-soldiers were all clones of each other, they could develop very close personal bonds, come to feel alienated from everyone else, and behave unpredictably as a group. Identical twins and triplets report having personal bonds that can’t be understood by other people.

That said, I think human genetic engineering will become widespread this century, it will enable us to make “super people” who will be like the most extraordinary “natural” humans alive today, some of those genetically engineered people will serve in armed forces and under private military contractors across the world, and they will perform their jobs excellently thanks to their genetically enhanced traits. While it’s possible that some of these “genetic super-soldiers” will be made by governments or illegally made by evil companies, people like that will be very small in number, and dwarfed by genetic super-soldiers who are the progeny of private citizens who decided, without government coercion, to genetically engineer their children. Those offspring will then enter the military through the same avenues as non-genetically engineered people, either by joining voluntarily or being drafted. Yes, there will be genetically engineered super-soldiers someday, but their presence in the military or in private security firms will be incidental, and not–except in some rare cases–because a government or company made them for that purpose and controlled their lives from birth.

There will be “artificial animals”. While visiting the luxurious office of a tycoon, Deckard sees the man’s pet owl flying around, and he’s told that it is “artificial.” Later, he comes across an artificial pet snake, whose scales (and presumably, all other body parts) were manufactured in labs and bear microscopic serial numbers. To the naked eye, both animals look indistinguishable from normal members of their species. It’s unclear whether “artificial” means “organic” like human replicants, or “mechanical” like robots with metal endoskeletons and computer chips for brains. We have failed to create the latter, and the robotic imitations of animals we have today are mostly toys that don’t look, move, or behave convincingly. Our progress achieving the former (replicant animals) is more equivocal.

Our technology is still far too primitive for us to be able to grow discrete body parts and organs in a lab and to seamlessly join them together to make healthy, fully functional animals. This is the likeliest process used to make the replicants, so in the strictest sense, we have failed to live up to the vision Blade Runner had for 2019. However, we are able to genetically modify animals and have done so many times to hone our genetic engineering techniques. For example, Chinese scientists used CRISPR to make dogs that have twice the normal muscle mass. For all I know, they’re now the pets of a rich man like the film’s tycoon.

Barbra Streisand with her cloned dogs.

Additionally, we are reasonably good at cloning animals, and, considering the vagueness of the terms “artificial” and “bioengineered” as they are used in the film, it could be argued that they apply to clones. Cloning a cat costs about $25,000 and a dog about $50,000, putting the service out of reach for everyone but the rich, and there are several rich people who have cloned pets, most notably Barbra Streisand, who had two clones made of her beloved dog after it died. A celebrity of her stature owning cloned animals is somewhat analogous to Blade Runner‘s depiction of the tycoon who owned the artificial owl.

There will be non-token numbers of humans living off Earth. At several points in Blade Runner, references are made to the “off-world colonies,” which are space stations and/or celestial bodies that have significant human populations. Advertisements encourage Angelenos to consider moving there, which implies that the colonies are big enough and stable enough to house people other than highly trained astronauts. The locations of the colonies aren’t described, but I’ll assume they were in our solar system.

This prediction has clearly failed. The only off-world human presence is on the International Space Station, which hosts only a token number of people (about six at any time), is open only to elite astronauts, and whose small size and lack of self-sufficiency (cargo rockets must routinely resupply it) mean it fails to meet the criteria for a “colony.”

There are no plans or funds available to expand the ISS enough to turn it into a true “space colony,” and in fact, it might be abandoned in the 2020s. Other space stations might be built over the next 20 years by various nations and conglomerates, but they will be smaller than the ISS and will only be open to highly trained astronauts.

While a manned Moon landing is possible in the next ten years (probably by Americans), I doubt a Moon base comparable in size and capabilities to the ISS will be built for at least 20 years (note that 14 years passed between when U.S. President Reagan declared the start of the ISS project and when the first part of it was launched into space, and no national leader has yet committed to building a Moon base, which would probably be even more expensive). In fact, in my Predictions blog post, I estimated that such a base wouldn’t exist until the 2060s. It would take decades longer for that base or any other on the Moon to get big enough to count as a “colony” that was also open to large numbers of average-caliber people. A Mars colony is an even more distant prospect due to the inherently higher costs and technological demands.

I think the human race will probably be overtaken by intelligent machines before we are able to build true off-world colonies that have large human populations. Once we are surpassed here on Earth, sending humans into space will seem all the more wasteful since there will be machines that can do all the things humans can, but at lower cost. We might never get off of Earth in large numbers, or if we do, it will be with the permission of Our Robot Overlords to tag along with them since some of them were heading to Mars anyway.

Cars will be boxy and angular instead of streamlined. Many of the cars shown in the movie are boxy and faceted. While this may have looked futuristic to Americans in 1982, boxy, angular cars were in fact already on their way out, and would be mostly extinct by the mid-90s. The cars of Blade Runner look retro today, and no mass-produced, modern vehicles look like them.**

Deckard’s car.
A van
U.S. fuel economy standards sharply increased from 1975-85. Blade Runner was filmed in 1982, and its artistic vision was to some extent influenced by the aesthetics of the time, hence the boxy future cars.

The change to curvaceous, streamlined car bodies was driven by stricter automobile fuel efficiency requirements, enacted by the U.S. government in response to the oil crises of the 1970s (most notably the 1973 Arab Oil Embargo). Carmakers found that one of the easiest ways to make cars more fuel efficient was to streamline their exteriors to reduce air resistance.

A 1982 Toyota Corolla
A 2019 Toyota Corolla

Since there’s no reason to think vehicle fuel efficiency standards will ever come down (if anything, they will rise), there’s also no reason to expect boxy, angular cars to return.

Just after I’d finished analyzing this car prediction, look who showed up.

**IMPORTANT NOTE I’M ADDING AT THE LAST MINUTE: On November 21, 2019, Elon Musk debuted Tesla’s “Cybertruck” at an event in Los Angeles, and the vehicle is a trapezoidal, sharp-angled curiosity that looks fit for the dark streets of Blade Runner. While I doubt it heralds a shift in car design, and it’s possible the Cybertruck could be redesigned between now and its final release date in 2021, I’d be remiss not to mention it here.

Therapeutic cloning will be a mature technology. There’s a scene in the film where two fugitive replicants confront and kill the man who designed their eyes in his genetics lab. It further establishes the fact that the replicants are made of organic parts that are manufactured in separate labs and then assembled. This technology is called “therapeutic cloning,” and today it is decades less advanced than Blade Runner predicted it would be.

Two replicants confronting the geneticist who designed their eyes.

We are unable to grow fully functional human organs like eyes in labs, and can barely grow rudimentary human tissues using the same techniques. The field of regenerative medicine research was in fact dealt a serious blow recently, when a leading scientist and doctor, Paolo Macchiarini, was exposed as a fraud. Dr. Macchiarini gained worldwide fame for his technique of helping people with terminal trachea problems by removing tracheas from cadavers, replacing the dead host’s cells with stem cells from the intended recipient, and then transplanting the engineered trachea into the sick person. For a time, his work was touted as proof that therapeutic cloning was rapidly advancing, and that maybe Blade Runner levels of the technology would exist by 2019. Unfortunately, time revealed that Macchiarini had faked the results in his medical papers, and that most of his patients died soon after receiving their engineered tracheas.

The actual state-of-the-art in 2019 is lab-made bladders. Being merely an elastic bag, a bladder is much simpler than an eye.

Legitimate work in regenerative medicine is overwhelmingly confined to labs and involves animal experiments, and there are no signs of an impending breakthrough that will enable us to start making fully functional organs and tissues that can be surgically implanted in humans and expected to survive for non-trivial lengths of time. The best the field can muster at present is a few dozen procedures globally each year, in which a small amount of simple tissue, such as a bladder or skin graft, is made in the lab and implanted in a patient under the most stringent conditions. (Of note, only a small fraction of people with missing or non-functional bladders have received engineered bladders; the preferred treatment is surgery [called a “urostomy”] that lets the person’s urine drain out of their abdomen through a hole and into an externally worn plastic bag.) As noted in my Predictions blog entry, I don’t think therapeutic cloning will be a mature field until about 2100.

Advertisements will be everywhere. In Blade Runner, entire sides of buildings in L.A. have been turned into huge, glowing, live-action billboards advertising products. This prediction was right in spirit, but wrong in its specifics: Advertisements are indeed omnipresent, and the average person in Los Angeles is probably more exposed to ads in 2019 than they would have been in 1982. However, the ads are overwhelmingly conveyed through telecommunications and digital media (think of TV and radio commercials, internet popup ads, browser sidebar ads, and auto-play videos), and not through gigantic billboards. Partly, I think this is because huge video billboards would be too distracting–particularly if they also played audio–and would invite constant lawsuits from city dwellers who found them ruinous of open spaces and peace.

Which is worse: Huge video billboards or being constantly pummeled with spam emails, digital ads, and the knowledge that your personal internet data is being sold and traded without your control?

No one will turn on the lights. Blade Runner is a dark movie. No, I mean literally dark: Almost all of the scenes are set at night, and no one in the movie believes in turning on anything but dim lights. It may have been a bold, iconic look from a cinematography standpoint, but it’s not an accurate depiction of 2019. People do not prefer dimmer lights now, and in fact, nighttime artificial light exposure is higher than at any point in human history: satellites have confirmed that the amount of “light pollution” emanating from the Earth’s surface (mainly from street lights and exterior building lights) is greater than ever and still growing. Also, people now spend so much time staring into glowing screens (smartphones, computer monitors, TVs) that circadian rhythm disruption has become a public health problem.

If your light is so bright that it can be seen in space, then you’re wasting a lot of electricity.

Intriguingly, I don’t think this trend will continue forever, and I think it’s possible the world will someday be much darker than now. I intend to fully flesh out this idea in another blog entry, but basically, as machines get smarter and better, the need for nighttime illumination will drop. Autonomous cars will have night vision, so they won’t need bright headlights or bright streetlights to see the road. Streetlights will also be infused with “smart” technology, and will save energy by turning themselves off when no cars are around. And if intelligent machines replace humans (and/or if we evolve into a higher form), then everyone on Earth will have night vision as well, which will almost eliminate the need for all exterior lights.

Note that, in controlled environments, machines can already function in the dark or with only the dimmest of lights. This is called “lights-out manufacturing.” As machines get smarter and move from factories and labs to public spaces, they will bring this ability with them. My prediction merely seizes upon a proof of concept and expands upon it.
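To make that concrete, here’s a minimal sketch (entirely my own illustration, not taken from the film or any real product) of the control logic such a “smart” streetlight might use: stay dim by default, and brighten only when something has been detected nearby recently. The sensor, timings, and brightness levels are all assumptions.

```python
import time

# Hypothetical smart streetlight controller: keeps the lamp dim by default
# and brightens it only when traffic has been detected recently.
# The sensor, timings, and brightness levels below are illustrative assumptions.

IDLE_BRIGHTNESS = 0.1      # 10% output when nothing is around
ACTIVE_BRIGHTNESS = 1.0    # full output when traffic is detected
HOLD_SECONDS = 60          # stay bright for a minute after the last detection

def traffic_detected() -> bool:
    """Stub for a motion/vehicle sensor (camera, radar, etc.)."""
    return False  # replace with a real sensor reading

def set_lamp_brightness(level: float) -> None:
    """Stub for the hardware call that actually dims the lamp."""
    print(f"lamp at {level:.0%}")

def run_streetlight() -> None:
    """Simple control loop: dim when idle, bright when something is nearby."""
    last_detection = 0.0
    while True:
        if traffic_detected():
            last_detection = time.time()
        recently_active = (time.time() - last_detection) < HOLD_SECONDS
        set_lamp_brightness(ACTIVE_BRIGHTNESS if recently_active else IDLE_BRIGHTNESS)
        time.sleep(1)
```

The point isn’t the code itself, but that the decision logic is trivial once the machine can see in the dark; the hard part is the sensing, which lights-out factories already demonstrate.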

It will be possible to implant fake memories in people. Very early in a replicant’s life, he or she is implanted with fake memories. The process by which this is done is never revealed, but it is sophisticated enough to fill the subject’s mind with seeming decades of memories that are completely real to them. We lack the ability to do this, though psychological experiments have shown in principle that people can be tricked into slowly accepting false memories.

Since memories exist as physical arrangements of neurons in a person’s brain and as enduring patterns of electrochemical signaling within a brain, it should be possible in principle to alter a person’s brain in a way that implants a false memory in him or her, or any other discrete piece of knowledge or skill. However, this would require fantastically advanced technology (probably some combination of direct brain electrical stimulation, hypnosis, full-immersion virtual reality, drugs, and perhaps nanomachines) that we won’t have for at least 100 years. This is VERY far out there, along with being able to build humans from different body parts grown in different labs.

Computer monitors and TVs will be deep, and there will not be any thin displays. In one scene, we get a good look at a personal computer, and it appears to have an old-fashioned CRT monitor that is almost a foot deep. Additionally, flat-panel TVs, computer monitors, laptops, and tablets are never seen in the film. This is a largely inaccurate depiction of 2019, as flat-panel screens are ubiquitous, and the average person owns several flat-screen devices that they interact with countless times per day.

Deckard sitting on his couch while looking at his computer screen. It looks like there might also be a second screen at the far right, facing away from him. Note that he doesn’t like turning on the lights.

I said the depiction was only largely inaccurate because, even though CRT monitors and TVs are obsolete and haven’t been manufactured in ten years, millions of them are still in use in homes and businesses across the world, mainly among poor people and old people who lack the money or interest to upgrade. There’s even a subculture of younger people who prefer using old CRT TVs for playing video games because the picture looks better in some ways than it does on the best modern OLED displays. In short, while it’s increasingly rare and unusual for people to have deep CRT computer monitors in their homes, it is still common enough that this particular scene from Blade Runner can’t be called entirely inaccurate.

The median and mean lifespan of a CRT TV is 15 years, and almost none of them last more than 30 years. With that in mind, functional CRT monitors will not be in use by 2039, except among antique collectors. The Baby Boomers will be dead by then, and their kids will have thrown away any CRT screens they were clinging to.

People will talk with computers. Deckard’s apartment building has a controlled entry security feature: anyone who enters the elevator must speak his or her name, and the “voice print” must match with someone authorized to have access to the building, or else the elevator won’t go up. Also, in his apartment, Deckard uses voice commands to interface with his personal computer. Blade Runner correctly predicted that voice-user interfaces would be common in 2019, though it incorrectly envisioned how we would use them.

Electronic controlled-entry security technology in common areas of apartment buildings, like elevators and lobbies, is very common, but it overwhelmingly involves using plastic cards and key fobs to unlock scanner-equipped doors. In fact, I’ve never seen a voice-unlocked door or elevator, and I think most people would feel silly using one.

Smart speakers like the Amazon Echo are also very common and can only be interfaced with via speech. Modern smartphones and tablets can also be controlled with spoken commands, but it’s rare to see people doing this.

This brings up the valuable point that, though speech is an intuitive means of communication, we’ve found that older means of interface involving keyboards, mice, and reading words on a screen are actually better ways to interact with technology for most purposes, and they are not close to obsolescence (and might never be). An inherent problem with talking to a computer is that you lose privacy, since anyone within earshot knows what you’re doing. Also, while continuous speech recognition technology is now excellent, the error rates are still high enough to make it an aggravating way to input data into a machine compared to using buttons. Entering complex data into a computer, such as you would do for a computer programming task, is also much faster and easier with a keyboard, and anything involving graphical design or manipulation of digital objects on a screen is best done with a mouse or a stylus.
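To put a rough number on that aggravation (the 5% word error rate and the 500-word email are assumptions for illustration, not measured figures), here’s the simple arithmetic:

```python
# Back-of-the-envelope illustration: even a low speech-recognition word error
# rate leaves a noticeable number of mistakes to hunt down and fix by hand.
# The 5% error rate and 500-word email length are assumptions, not measurements.
word_error_rate = 0.05
email_length_words = 500

expected_errors = word_error_rate * email_length_words
print(f"Expected misrecognized words per dictated email: {expected_errors:.0f}")  # ~25
```

Two dozen or so corrections per email is the kind of friction that sends people straight back to their keyboards.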

To see what I mean, watch this clip of Deckard talking to his computer, and think about whether it would be easier or harder to do that image manipulation task using a mouse, with intuitive click-and-drag abilities to move around the image, and a trackball for zooming in and out: https://youtu.be/QkcU0gwZUdg

Deckard holding a photograph he found.

Hard copy photographs are still around. In that scene, Deckard does the image manipulation on a photograph that he found. He inserts it into a slot in his computer, which scans it and shows the digital copy on his screen. While hard-copy photographs are still being made in 2019, they’re very uncommon compared to the number of photographs taken across the planet this year that were never transferred from digital format to a physical medium. I doubt that even 0.01% of the personal photographs ordinary people take are ever printed onto paper, and I doubt this will ever change.

Image scanners will be common. The computer’s ability to make a digital copy of a physical image of course means it has a built-in scanner. This proved to be a realistic prediction, as flatbed scanners with excellent scan fidelity now cost under $100. When Blade Runner was filmed, scanners were physically large, very expensive, made low-quality image conversions, and were almost unknown to the general public.

Cameras will take ultra high-resolution photos. The photo that Deckard analyzes is extremely detailed and has a very high pixel count, allowing him to use his computer to zoom in on small sections of it and to still see the images clearly. In particular, after zooming in on the round mirror hanging on the wall (upper right quadrant of the photo shown above), he spots an image of one of the replicants. While grainy, he can still make out her face and upper body.

It’s impossible to tell from the film sequence exactly how high-res the photo is, but today we have consumer-grade cameras that can take photos that are about as detailed. The Fujifilm X-T30 costs $800 and is reasonably compact, putting it within the price range of average-income people, and it takes very high quality 26.1 MP photos. One of its photos is shown above, and if you download the non-compressed version from the source website and open it in an imaging app, you’ll be able to zoom in on the rear left window of the car far enough to see the patterns of the decals and to read the words printed on them. (https://www.theverge.com/2019/4/12/18306026/fujifilm-xt30-camera-review-fuji-xt3-mirrorless)
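For a rough sense of why that works (the crop fraction below is my own assumption, not something stated in the review), here’s the arithmetic behind zooming into a small region of a 26.1 MP photo:

```python
# Rough arithmetic for a digital "zoom and enhance": cropping a small region
# of a high-megapixel photo still leaves enough pixels to resolve text-sized
# features. The 1/16 crop fraction is an illustrative assumption.
sensor_megapixels = 26.1   # Fujifilm X-T30 sensor resolution
crop_fraction = 1 / 16     # zooming in on roughly 1/16 of the frame (assumption)

remaining_megapixels = sensor_megapixels * crop_fraction
print(f"Pixels left after the crop: ~{remaining_megapixels:.1f} MP")
# ~1.6 MP, roughly the pixel count of an HD video frame, which is plenty to
# read large text like window decals.
```

Deckard’s photo would need far more resolution than this to pull a recognizable face out of a reflection in a small mirror, but the basic idea of zooming into a detailed still image is no longer science fiction.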

Firearms will still be in use. The only handheld weapons we see in the film are handguns that use gunpowder to shoot metal bullets. One is shown for only a split-second at the start of the movie when a replicant shoots a human, and the other is seen several times in Deckard’s hands. It’s big, bulky, looks like it shoots more powerful bullets than average, and has some glowing lights that seem to do nothing. In short, it’s nothing special, and probably isn’t any better than handguns that most Americans can easily buy for $500 today. Thus, the depiction of 2019’s state-of-the-art weaponry is accurate.

Deckard pointing his pistol.

And I do say “state-of-the-art” because, being an elite bounty hunter on an important mission to kill abnormally strong, dangerous people, Deckard has his choice of weapons, and it says a lot that he picks a regular gunpowder handgun instead of something exotic and stereotypically futuristic like a laser pistol. As noted in my reviews of The Terminator and Starship Troopers, we shouldn’t expect firearms to become obsolete for a very long time, and possibly never.

Video phone calls and pay phones will be common. There’s a scene where Deckard uses a public pay phone to make a video call to a love interest. This depiction of 2019 turned out to be half right and half wrong, but for the better: Pay phones have nearly disappeared because even poor people have cell phones (which are more convenient to use). Video call technology is mature and widespread, the calls can be made for free through apps like Skype and Google Hangouts, and even low-end smartphones can support them.

It’s surprising that video calls, long a staple of science fiction, became a reality during the 2010s with hardly anyone noticing and the world not changing in any major way. Also surprising is the fact that most people still prefer doing voice-only calls and texting, which is even more lacking in personal substance and emotional conveyance. Like talking with computers, using video calls to converse with other humans has proved more trouble than it’s worth in most cases.

Links:

  1. Why cars got curvy – https://www.vox.com/2015/6/11/8762373/car-design-curves
  2. Famous Lancet retraction of Dr. Macchiarini’s papers – https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(18)31484-3/fulltext
  3. A patient who got a cloned bladder – https://www.bbc.com/news/business-45470799
  4. Light pollution is bad and getting worse – https://www.scientificamerican.com/article/the-end-of-night-global-illumination-has-increased-worldwide/
  5. Swedish study that found CRT TVs almost never survive longer than 30 years, and CRT monitors die by 20 – https://www.sciencedirect.com/science/article/pii/S0956053X1530101X
  6. Review of the Fujifilm X-T30 – https://www.theverge.com/2019/4/12/18306026/fujifilm-xt30-camera-review-fuji-xt3-mirrorless
  7. Vaping is not as bad for your health as smoking – https://www.politifact.com/truth-o-meter/article/2019/oct/21/vaping-safer-smoking/
  8. Three-person IVF done to overcome the mother’s mitochondrial genetic defects – https://www.bbc.com/news/health-47889387
  9. Barbra Streisand has two cloned dogs – https://variety.com/2018/film/news/barbra-streisand-oscars-sexism-in-hollywood-clone-dogs-1202710585/
  10. The ISS took 14 years to go from approval to space – https://www.issnationallab.org/about/iss-timeline/