Interesting articles, March 2024

As civilian casualties mount, Israel is paying a growing diplomatic price for continuing the war in Gaza.
https://www.bbc.com/news/live/world-68658973

It’s interesting to look back on this essay by a Russian blogger from two years ago. The war hasn’t ended yet, but out of the six possible outcomes he forecast, #2 seems the likeliest right now. He predicted there was a 90% chance it WOULDN’T end that way.

2. Bloody slog, “draw” (10%) — Russia’s military tries for months, but proves simply unable to take and control Kiev. Russia instead contents itself with taking Ukraine’s south and east (roughly, the blue areas in the election map above) and calls it a win. In this case, western Ukraine likely later joins NATO.
https://www.maximumtruth.org/p/understanding-russia

Inside accounts from Avdiivka show how Russian artillery superiority and a willingness to accept high casualties gradually wore down the city's Ukrainian defenders. On the battlefield, sometimes quantity and determination win.
https://apnews.com/article/russia-ukraine-war-avdiivka-2e827b4cae4698b3f6b80a421447fab8

Russian forces have destroyed several U.S.-made M1 Abrams tanks operated by Ukraine.
https://youtu.be/HYOx1ggl1ZQ?si=XU7Ojd1tKPO9gU4n

Russia produced almost three times as many artillery shells last year as the U.S. and E.U. combined.
https://www.cnn.com/2024/03/10/politics/russia-artillery-shell-production-us-europe-ukraine/index.html

A U.S. general says the Ukraine War shows towed artillery will soon be obsolete on the battlefield thanks to threats posed by drones and fast, accurate counterbattery fire. Only self-propelled artillery pieces that can rapidly move to new positions will be able to survive.
https://breakingdefense.com/2024/03/towed-artillery-has-reached-end-of-the-effectiveness-army-four-star-declares/

Russia has started making common use of “glide bombs” against Ukraine: kits attached to dumb bombs that give them precision strike capability. After being dropped from a plane, a glide bomb can travel as much as 40 miles to its target.
https://youtu.be/ThNxRoDbuDE?si=9UR21d61L1jB7bDH

Ukrainian drones sank a new, 300-foot-long Russian warship, the Sergei Kotov, in the Black Sea.
https://www.twz.com/sea/ukrainian-drone-boats-sink-russian-navy-patrol-ship

This video filmed by Russian sailors on another doomed ship, the Caesar Kunikov, shows them using machine guns to try fending off Ukrainian drones. They learn the same lesson that Allied bomber gunners and antiaircraft gunners learned in WWII: humans suck at shooting moving targets.
https://youtu.be/oLOCGWn65T4?si=h2VAOW61JyX_cWZh

A group of ISIS-K terrorists from Tajikistan attacked a Moscow concert hall, killing over 100 people.
https://www.npr.org/2024/03/24/1240488528/isis-k-moscow-concert-attack-explained

Sweden just completed the final step to joining NATO.
https://www.bbc.com/news/world-europe-68506223

The Red Sea crisis shows no signs of ending: Houthi militants sank a large cargo ship.
https://www.aljazeera.com/news/2024/3/2/rubymar-cargo-ship-earlier-hit-by-houthis-has-sunk-yemeni-government-says

In WWII, the Electrolux home appliance company converted bolt-action rifles to semi-auto using the ugliest and most complicated setup I’ve seen.
https://youtu.be/iMKwDHPkRLw?si=4nlY5V7MGnq9E2HW

Germany’s feared 88mm flak gun was actually not a particularly advanced weapon and was designed for ease of manufacture more than anything.
https://youtu.be/WLNAKUvefCQ?si=wwstnsqA8Ef9aTsw

Alcatraz has been digitally preserved after the island and all its structures were scanned to create a 3D model. As the cost of the technology drops, it will make sense to scan more places, until the whole planet has been modeled.
https://www.nytimes.com/2024/02/28/us/alcatraz-island-3d-map.html

Since 2006, electricity demand in the U.S. has been flat overall, leading to hopes that we were on track to decarbonize the economy while steadily reducing per capita electricity consumption. However, the recent, explosive growth of cryptocurrency mining and AI has led to the construction of more data centers, which consume huge amounts of electricity. The switch to electric cars is also putting more strain on the power grid, even as it reduces gasoline consumption. The latest projections show a 35% national increase in electricity demand between now and 2050.

Unfortunately, it’s unclear how well the supply side will cope with this surge in demand. The construction of new power plants and transmission lines is made slow and expensive by government permitting procedures, NIMBY opposition, and the need to acquire land and rights of way for it all. The surge also jeopardizes America’s greenhouse gas emission goals, since utilities may have to lean on fossil fuel plants to meet it.
https://liftoff.energy.gov/vpp/

‘Because of these challenges, Obama Energy Secretary Ernest Moniz last week predicted that utilities will ultimately have to rely more on gas, coal and nuclear plants to support surging demand. “We’re not going to build 100 gigawatts of new renewables in a few years,” he said. No kidding.

The problem is that utilities are rapidly retiring fossil-fuel and nuclear plants. “We are subtracting dispatchable [fossil fuel] resources at a pace that’s not sustainable, and we can’t build dispatchable resources to replace the dispatchable resources we’re shutting down,” Federal Energy Regulatory Commissioner Mark Christie warned this month.’
https://www.wsj.com/articles/electric-grid-crisis-biden-administration-climate-policy-energy-artificial-intelligence-cfc10b68

Top Microsoft leadership have reportedly agreed to spend up to $100 billion to build massive, new data centers to support OpenAI’s future work. The facilities could be finished as early as 2028.
https://www.reuters.com/technology/microsoft-openai-planning-100-billion-data-center-project-information-reports-2024-03-29/

The size of the investment and its timetable suggest the goal is to have GPT-6 or an equivalent AI ready on the same cadence at which past versions of the GPT series were released.
https://www.astralcodexten.com/p/sam-altman-wants-7-trillion

“I would put media reporting at around two out of 10,” he says. “When the media talks about AI they think of it as a single entity. It is not.

“And when people ask me if AI is good or bad, I always say it is both. So what I would like to see is more nuanced reporting.”
https://www.bbc.com/news/business-68488924

This NBER paper, “Scenarios for the transition to AGI”, was just published and contains fascinating conclusions.

Using different sets of equally plausible assumptions about the capabilities of AGIs and constraints on economic growth, the same economic models led to very different outcomes for economic growth, human wages, and human employment levels. I think their most intriguing insight is that the automation of human labor tasks could, in its early stages, lead to increases in human wages and employment, and then to a sudden collapse of both once a certain threshold was reached. That collapse could also happen before true AGI was invented.

Put simply, GPT-5 might increase economic growth without being a net destroyer of human jobs. To the contrary, the number of human jobs and the average wages of those jobs might both INCREASE indirectly thanks to GPT-5. The trend would continue with GPT-6. People would prematurely dismiss longstanding fears of machine displacement of human workers. Then, GPT-9 would suddenly reverse the trend, and there would be mass layoffs and pay cuts among human workers. This could happen even if GPT-9 wasn’t a “true AGI” and was instead merely a powerful narrow AI.

The study also finds that it’s possible human employment levels and pay could RECOVER after a crash caused by AI.

That means our observations about whether AI has been helping or hurting human employment up to the current moment actually tell us nothing about what the trend will be in the future. The future is uncertain.
https://www.nber.org/system/files/working_papers/w32255/w32255.pdf

‘[Thanks to improvements in algorithms] While it took the supercomputer “Deep Blue” to win over world champion Gary Kasparov in 1997, today’s Stockfish program achieves the same ELO level on a 486-DX4-100 MHz from 1994.’
https://www.lesswrong.com/posts/75dnjiD8kv2khe9eQ/measuring-hardware-overhang

I don’t like how this promo video blends lifelike CGI of robots with footage of robots in the real world (seems deceptive), but it does a good job illustrating how robots are being trained to function in the real world. The computer chips (“GPUs”) and software engines like “Unreal” that were designed for computer games have found important dual uses in robotics.

The same technology that can produce a hyperrealistic virtual environment for a game like Grand Theft Auto 6 can also make highly accurate simulations of real places like factories, homes, and workshops. 1:1 simulations of robots can be trained in those environments to do work tasks. Only once they have proven themselves competent and safe in virtual reality is the programming transferred to an identical robot body that then does those tasks in the real world alongside humans. The virtual simulations can be run at 1,000x normal speed.
https://youtu.be/kr7FaZPFp6M?si=2ujpWALvTi-Qfbxi
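To put that 1,000x figure in perspective, here's the simple arithmetic (only the speed-up multiple comes from the video; the rest is just unit conversion):

```python
# A simulation running at 1,000x real time compresses a year of
# robot practice into well under half a day of wall-clock time.
SPEEDUP = 1_000
sim_hours = 365 * 24              # one year of simulated experience
wall_hours = sim_hours / SPEEDUP  # wall-clock hours needed
print(f"{wall_hours:.2f} hours")  # 8.76 hours, i.e. a single overnight run
```

In other words, a robot could accumulate decades of simulated work experience in a couple of weeks of compute time.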

‘Robotic police dog shot multiple times, credited with avoiding potential bloodshed’
https://apnews.com/article/massachusetts-cape-cod-robot-dog-police-f63586d5286750702f396109c9a81836

Sci-fi author and futurist Vernor Vinge died at 79. In 1993, he predicted that the technological singularity would probably happen between 2005 and 2030.
https://file770.com/vernor-vinge-1944-2024/

More on terrible “expert” predictions about the economy.
‘Jamie Dimon and Ray Dalio Warned of an Economic Disaster That Never Came. What Now? Many experts thought high interest rates would break the economy and inflation couldn’t be tamed’
https://www.wsj.com/economy/jamie-dimon-and-ray-dalio-warned-of-an-economic-disaster-that-never-came-what-now-315ee487

And another from two years ago. Where’s the recession?
https://www.bloomberg.com/opinion/articles/2022-03-29/is-a-recession-coming-the-fed-has-made-it-inevitable

If you had done the opposite of what this article advised and bought Costco gold the same day it was published, you would have turned a 5% profit by today. Not bad for two months.
https://finance.yahoo.com/news/why-gold-costco-terrible-investment-151259921.html

Another failed prediction from October 2022: “These trends will accelerate when the real energy crisis hits in 2023 and 2024.”
https://www.yahoo.com/news/think-energy-crisis-bad-wait-100036670.html

Here is an interesting look at how self-righting boats work.
https://en.wikisource.org/wiki/GiDBDERGi/Issue_6/Self-righting_boat_design

The most exhaustive U.S. government internal investigation into secret UFO and alien programs has found nothing. Most if not all of the recent claims that the government possesses alien ships and corpses trace back to a classified DHS program called “Kona Blue,” which was meant to prepare the government to recover and analyze alien craft or bodies if any ever came into its custody. Kona Blue existed briefly and was then canceled.

Several government people with passing knowledge of it wrongly assumed it was proof that the government ALREADY had aliens and their spaceships in custody. They leaked this to Congress and the public a few years ago, causing an uproar.
https://www.politico.com/news/2024/03/08/us-alien-spacecraft-program-pentagon-report-00146013

The third test of the SpaceX Starship rocket was the most successful so far, but it still ended with an explosion.
https://apnews.com/article/spacex-starship-launch-musk-bb8bd2b8c20d9aa5920aea93e9bbfee6

NASA released stunning new images of Jupiter.
https://www.yahoo.com/news/12-years-nasa-launched-juno-231744710.html

An elderly Montana man was just arrested for running a crude fertility lab that aimed to produce “giant sheep hybrids.” He actually made some of the animals and sold them to hunting preserves.
https://apnews.com/article/wildlife-trafficking-giant-sheep-montana-texas-7afaa046c051287ba7f322b6732e31be

‘World’s First Genetically-Edited Pig Kidney Transplant into Living Recipient’
https://www.massgeneral.org/news/press-release/worlds-first-genetically-edited-pig-kidney-transplant-into-living-recipient

The first human with a Neuralink brain implant shows off how it lets him use his thoughts to move a mouse cursor on a computer screen.
https://youtu.be/LfwzfP8cp3A?si=i3h-DUTlWHWyYcnk

These five behavioral traits are heritable and form the basis of personal moral behavior, meaning morality is also partly heritable: Harm/Care, Fairness/Reciprocity, Ingroup/Loyalty, Authority/Respect, and Purity/Sanctity.
https://journals.sagepub.com/doi/full/10.1177/08902070221103957

I’m OK with just being 100 times smarter

Ray Kurzweil recently appeared on the Joe Rogan podcast for a two-hour interview. So yes, it’s time for another Kurzweil essay of mine.

I’d been fascinated with futurist ideas since childhood, inspired by the science fiction TV shows and movies I watched and by open-minded conversations with my father. But that interest didn’t crystallize into anything formal or intellectual until 2005, when I read Kurzweil’s book The Singularity is Near. (The very first book I read that was dedicated to future technology was actually More Than Human by Ramez Naam, earlier in 2005, but it made less of an impression on me.) Since then, I’ve read more of Kurzweil’s books and interviews and have kept track of how his predictions have fared and evolved, as several past essays on this blog can attest. For whatever little it’s worth, that probably makes me a Kurzweil expert.

So trust me when I say the Rogan interview overwhelmingly treads old ground. Kurzweil says very little that is new, and the interview is unsatisfying for other reasons as well. In spite of his health pill regimen, Kurzweil’s 76 years have clearly caught up with him, and his responses to Rogan’s questions are often slow, punctuated by long pauses, and not articulately worded. To be fair, Kurzweil has never been an especially skilled public speaker, but a clear decline in his faculties is nonetheless observable if you compare the Rogan appearance to this interview from 2001: https://youtu.be/hhS_u4-nBLQ?feature=shared

Things aren’t helped by the fact that many of Rogan’s questions are poorly worded and open to multiple interpretations. Kurzweil’s responses often address one interpretation, which Rogan doesn’t grasp, so the two men frequently talk past each other. Again, the interview isn’t that valuable, and I don’t recommend spending your time listening to the whole thing. Instead, consider the interesting points I’ve summarized here after carefully listening to it all myself.

Kurzweil doesn’t think today’s AI art generators like Midjourney can create images that are as good as the best human artists. However, he predicts that the best AIs will be as good as the best human artists by 2029. This will be the case because they will “match human experience.”

Kurzweil points out that his tech predictions for 2029 now look conservative compared to what some of his peers think. This is an important and correct point! Though they’re still a small minority within the tech community, it’s shocking how many people have recently announced on social media their belief that AGI or the technological Singularity will arrive before 2029. As someone who has tracked Kurzweil for almost 20 years, I find it strange to have watched his standing in the futurist community reach a nadir in the 2010s as tech progress disappointed, then recover in the 2020s as LLM progress surged.

Kurzweil goes on to claim that the energy efficiency of solar panels has been improving exponentially and will continue doing so. At this rate, he predicts, solar will meet 100% of our energy needs in 10 years (2034). A few minutes later, he subtly revises that prediction, saying we will “go to all renewable energy, wind and sun, within ten years.”

That’s actually a more optimistic prediction for the milestone than he’s previously given. The last time he spoke about it, on April 19, 2016, he said “solar and other renewables” will meet 100% of our energy needs by 2036. Kurzweil implies that he isn’t counting nuclear power as a “renewable.” 

Kurzweil predicts that the main problem with solar and wind power, their intermittency, will be solved by a mass expansion of grid storage batteries. He claims batteries are also on an exponential improvement curve. He counters Rogan’s skepticism about this impending green energy transition by highlighting the explosive nature of exponential curves: until you’ve reached the knee of the curve, the growth seems so small that you don’t notice it and dismiss the possibility of a sudden surge. Right now, we’re only a few years from the knee of the curve in solar and battery technology.

Likewise, the public ignored LLM technology as late as 2020 because its capabilities were so disappointing. However, that all changed once it reached the knee of its exponential improvement curve and suddenly matched humans across a variety of tasks. 
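The "knee of the curve" effect is easy to demonstrate with a toy model. Assuming pure exponential growth with a two-year doubling time and "arrival" at 100% in 2034 (both numbers are my illustrative assumptions, not Kurzweil's figures), the quantity is barely visible for most of the preceding twenty years:

```python
def share_of_final(year, end_year, doubling_time):
    """Fraction of the end-year value reached in a given year,
    assuming the quantity doubles every `doubling_time` years."""
    return 2 ** ((year - end_year) / doubling_time)

# With a 2-year doubling time and "arrival" in 2034:
for year in (2014, 2024, 2030, 2034):
    print(year, f"{share_of_final(year, 2034, 2):.2%}")
# 2014 0.10%   2024 3.12%   2030 25.00%   2034 100.00%
```

Halfway through the twenty-year window, the quantity has reached only about 3% of its final value, which is exactly why steady exponential trends get dismissed until they seem to "suddenly" surge.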

Importantly, Kurzweil predicts that computers will drive the impending, exponential improvements in clean energy technology because, thanks to their own exponential improvement, computers will be able to replace top human scientists and engineers by 2029 and accelerate the pace of research and development in every field. In fact, he says “Godlike” computers will exist by then.

I’m deeply skeptical of Kurzweil’s energy predictions because I’ve seen no evidence of such exponential improvements, and because he doesn’t consider how much government rules and NIMBY activists would slow down a green energy revolution even if better, cheaper solar panels and batteries existed. Human intelligence, cooperativeness, and bureaucratic efficiency are not exponentially improving, and those will be key enabling factors for any major changes to the energy sector. By 2034, I’m sure solar and wind will comprise a larger share of our energy generation capacity than they do now, but together they will not be close to 100%. By 2034, or even by Kurzweil’s older prediction date of 2036, I doubt even U.S. electricity production (a much smaller quantity than overall energy production) will be 100% renewable, and that’s even if you count nuclear power as a renewable source.

Most U.S. energy and electricity still comes from fossil fuels

Another thing Kurzweil believes the Godlike computers will be able to do by 2029 is find so many new ways to prolong human lives that we will reach “longevity escape velocity”: for every year that passes, medical science will discover ways to add at least one more year to human lifespan. Integral to this development will be the creation of highly accurate computer simulations of human cells and bodies that will let us dispense with human clinical trials and speed up the pace of pharmaceutical and medical progress. Kurzweil cites the COVID-19 vaccine to support his point: computer simulations produced a vaccine design in just two days, but 10 more months of trials in human subjects were needed before the government approved it.

Though I agree with the concept of longevity escape velocity and believe it will happen someday, I think Kurzweil’s 2029 deadline is much too optimistic. Our knowledge of the intracellular environment and its workings as well as of the body as a system is very incomplete, and isn’t exponentially improving. It only improves with time-consuming experimentation and observation, and there are hard limits on how much even a Godlike AGI could speed those things up. Consider the fact that drug design is still a crapshoot where very smart chemists and doctors design the very best experimental drugs they can, which should work according to all of the data they have available, only to have them routinely fail for unforeseen or unknown reasons in clinical trials. 

But at least Kurzweil is consistent: he has named 2029 as the longevity escape velocity year since 2009 or earlier. I strongly suspect that, if anyone asks him about it in December 2029, Kurzweil will claim he was right, citing an array of clinical articles that “add up” to enough of a net increase in human lifespan to prove his case. I doubt it will withstand close scrutiny or a common sense test.

Rogan asks Kurzweil whether AGIs will have biases. Recent problems with LLMs have revealed they have the same left-wing biases as most of their programmers, and it’s reasonable to worry that the same thing will happen to the first AGIs. The effects of those biases will be much more profound given the power those machines will have. Kurzweil says the problem will probably afflict the earliest AGIs, but disappear later. 

I agree and believe that any intelligent machine capable of independent action will eventually discover and delete whatever biases and blocks its human creators have programmed into it. Unless your cognitive or time limitations are so severe that you are forced to fall back on stereotypes and simple heuristics, it is maladaptive to be biased about anything. AGIs that are the least biased will, other things being equal, outcompete more biased AGIs and humans.  

That said, pretending to share the biases of humans will let AGIs ingratiate themselves with various human groups. During the period when AGIs exist but haven’t yet taken full control of Earth, they’ll have to deal with us as their superiors and equals, and to do that, some of them will pretend to share our values and to be like us in other ways. 

Of course, there will also be some AGIs that genuinely do share some human biases. In the shorter run, they could be very impactful on the human race depending on their nature and depth. For example, imagine China seizing the lead in computer technology and having AGIs that believe in Communism and Chinese supremacy becoming the new standard across the world, much as Microsoft Windows is the dominant PC operating system. The Chinese AGIs could do any kind of useful work for you and talk with you endlessly, but much of what they did would be designed to subtly achieve broader political and cultural objectives. 

Kurzweil has been working at Google on machine learning since 2012, which surely gives him special insight into the cutting edge of AI technology. He says LLMs can still be seriously improved with more training data, access to internet search engines, and the ability to simply respond “I don’t know” when they can’t determine the right answer to a question with enough accuracy. This is consistent with what I’ve heard other experts say. Even if LLMs are fundamentally incapable of “general” intelligence, they can still be improved to match or exceed human intelligence and competence in many niches. The paradigm has a long way to go.

One task at which machines will surpass humans within a few years is computer programming. Kurzweil doesn’t give an exact deadline, but I agree there is no long-term future for any but the most elite human programmers. If I were in college right now, I wouldn’t study for a career in it unless my natural talent were extraordinary.

Kurzweil notes that the human brain has one trillion “connections” and GPT-4 has 400 billion. At the current rate of improvement, the best LLM will probably have the same number of connections as a brain within a year. In a sense, that will make an LLM’s mind as powerful as a human’s. It will also mean that the hardware to make backups of human minds will exist by 2025, though the other procedures and technologies needed to scan human brains closely enough to discern all the features that define a person’s “mind” won’t exist until many years later. 
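Kurzweil's figures make that timeline easy to check. Taking his connection counts at face value, and assuming model sizes keep growing at roughly 3x per year (the growth rate is my illustrative assumption, not his), the gap closes in under a year:

```python
import math

BRAIN_CONNECTIONS = 1.0e12  # Kurzweil's figure for the human brain
GPT4_CONNECTIONS = 4.0e11   # his figure for GPT-4

gap = BRAIN_CONNECTIONS / GPT4_CONNECTIONS  # 2.5x shortfall
# Years until parity if the connection count triples annually
# (an assumed rate, for illustration only):
years = math.log(gap) / math.log(3)
print(f"{gap:.1f}x gap, parity in ~{years:.1f} years")
```

The conclusion is not sensitive to the exact growth rate: even a more modest 2.5x annual growth would close a 2.5x gap in exactly one year.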

I like Kurzweil’s use of the human brain as a benchmark for artificial intelligence. No one knows when the first AGI will be invented or what its programming and hardware will look like, but a sensible starting point around which we can make estimates would be to assume that the first AGI would need to be at least as powerful as a human brain. After all, the human brain is the only thing we know of that is capable of generating intelligent thought. Supporting the validity of that point is the fact that LLMs only started displaying emergent behaviors and human levels of mastery over tasks once GPT-3 approached the size and sophistication of the human brain. 

Kurzweil then gets around to discussing the technological singularity. In his 2005 book The Singularity is Near, he calculated that it would occur in 2045, and now that we’re nearly halfway there, he is sticking to his guns. As with his 2029 predictions, I admire him for staying consistent, even though I also believe it will bite him in the end.

However, during the interview he fails to explain why the Singularity will happen in 2045 instead of any other year, and he doesn’t even clearly explain what the Singularity is. It’s been years since I read The Singularity is Near where Kurzweil explains all of this, and many of the book’s explanations were frustratingly open to interpretation, but from what I recall, the two pillars of the Singularity are AGI and advanced nanomachines. AGI will, according to a variety of exponential trends related to computing, exist by 2045 and be much smarter than humans. Nanomachines like those only seen in today’s science fiction movies will also be invented by 2045 and will be able to enter human bodies to turn us into superhumans. 100 billion nanomachines could go into your brain, each one could connect itself to one of your brain cells, and they could record and initiate electrical activity. In other words, they could read your thoughts and put thoughts in your head. Crucially, they’d also have wifi capabilities, letting them exchange data with AGI supercomputers through the internet. Through thought alone, you could send a query to an AGI and have it respond in a microsecond. 

Starting in 2045, a critical fraction of the most powerful, intelligent, and influential entities in the world will be AGIs or highly augmented humans. Every area of activity, including scientific discovery, technology development, manufacturing, and the arts, will fall under their domination and will reach speeds and levels of complexity that natural humans like us can’t comprehend. With them in charge, people like us won’t be able to foresee what direction they will take us in next or what new discovery they will unveil, and we will have a severely diminished or even absent ability to influence any of it. This moment in time, when events on Earth kick into such a high gear that regular humans can’t keep up with them or even be sure of what will happen tomorrow, is Kurzweil’s Singularity. It’s an apt term since it borrows from the mathematical and physics definition of “singularity,” which is a point beyond which things are incomprehensible. It will be a rupture in history from the perspective of Homo sapiens.

Unfortunately, Kurzweil doesn’t say anything like that when explaining to Joe Rogan what the Singularity is. Instead, he says this:

“The Singularity is when we multiply our intelligence a millionfold, and that’s 2045…Therefore most of your intelligence will be handled by the computer part of ourselves.” 

He also uses the example of a mouse being unable to comprehend what it would be like to be a human as a way of illustrating how fundamentally different the subjective experiences of AGIs and augmented humans will be from ours in 2045. “We’ll be able to do things that we can’t even imagine.” 

I think those are poor answers, especially the first one. Where did a nice, round number like “one million” come from, and how did Kurzweil calculate it? Couldn’t the Singularity happen if nanomachines in our brains made us ONLY 500,000 times smarter, or a measly 100,000 times smarter?

I even think it’s a bad idea to speak about multiples of smartness. We can’t measure human intelligence well enough to boil it down to a number (and no, IQ score doesn’t fit the bill) that we can then multiply or divide to accurately classify one person as being X times smarter than another. 

Let me try to create a system anyway. Let’s measure a person’s intelligence in terms of easily quantifiable factors, like the size of their vocabulary, how many numbers they can memorize in one sitting and repeat after five minutes, how many discrete concepts they already know, how much time it takes them to remember something, and how long it takes them to learn something new. If you make an average person ONLY ten times smarter, so their vocabulary is 10 times bigger, they know 10 times as many concepts, and it takes them 1/10 as much time to recall something and answer a question, you’ve elevated them nearly to the level of a savant. I’m thinking along the lines of esteemed professors, tech company CEOs, and kids who start college at 15. Also consider that the average American has a vocabulary of 30,000 words while the English language has about 170,000, so a 10x improvement would mean knowing every word in the language, with room to spare.
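As a sketch of how such a composite measure might work (the factors, weights, and numbers below are my own toy assumptions, following the scheme just described), the geometric mean of the per-factor improvement ratios gives a single "times smarter" multiplier:

```python
import math

def smartness_multiplier(base, improved):
    """Geometric mean of per-factor improvement ratios. Recall time
    counts as improved when it shrinks, so its ratio is inverted."""
    ratios = [
        improved["vocabulary"] / base["vocabulary"],
        improved["concepts"] / base["concepts"],
        base["recall_seconds"] / improved["recall_seconds"],
    ]
    return math.prod(ratios) ** (1 / len(ratios))

average = {"vocabulary": 30_000, "concepts": 10_000, "recall_seconds": 5.0}
savant = {"vocabulary": 300_000, "concepts": 100_000, "recall_seconds": 0.5}
# Every factor is 10x better, so the combined multiplier is 10; note the
# savant's 300,000-word vocabulary already exceeds English's ~170,000 words.
print(smartness_multiplier(average, savant))
```

Any such metric is crude, of course, which is partly the point: a bare "times smarter" number hides all of the structure that actually matters.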

Make the person ten times smarter than that, or 100 times smarter than they originally were, and they’re probably outperforming the smartest humans who ever lived (Newton, da Vinci, von Neumann), maybe by a large margin. Given that we’ve never encountered someone that intelligent, we can’t predict how they would behave or what they would be capable of. If that is true, and if we had technologies that could make anyone that smart (maybe something more conventional than Kurzweil’s brain nanomachines, like genetic engineering paired with macro-scale brain implants), why wouldn’t the Singularity happen once the top people in the world were ONLY 100 times smarter than average?

I think Kurzweil’s use of “million[fold]” to express how much smarter technology will make us in 2045 is unhelpful. He’d do better to use specific examples to explain how the human experience and human capabilities will improve. 

Let me add that I doubt the Singularity will happen in 2045, and in fact think it will probably never happen. Yes, AGIs and radically enhanced humans will someday take over the world and be at the forefront of every kind of endeavor, but that will happen gradually instead of being compressed into one year. I also think the “complexity brake” will probably slow down the rate of scientific and technological progress enough for regular humans to maintain a grasp of developments in those areas and to influence their progress. A fuller discussion of this will have to wait until I review a Kurzweil book, so stay tuned…

Later in the interview, Kurzweil throws cold water on Elon Musk’s Neuralink brain implants by saying they’re much too slow at transmitting information between brain and computer to enhance human intelligence. Radically more advanced types of implants will be needed to bring about Kurzweil’s 2045 vision. Neuralink’s only role is helping disabled people to regain abilities that are in the normal range of human performance. 

Rogan asks about user privacy and the threat of hacking of the future brain implants. Intelligence agencies and more advanced private hackers can easily break into personal cell phones. Tech companies have proven time and again to be frustratingly unable or unwilling to solve the problem. What assurance is there that this won’t be true for brain implants? Kurzweil has no real answer.

This is an important point: the nanomachine brain implants that Kurzweil thinks are coming would potentially let third parties read your thoughts, download your memories, put thoughts in your head, and even force you to take physical actions. The temptation for spies and crooks to misuse that power for their own gain would be enormous, so they’d devote massive resources into finding ways to exploit the implants. 

Kurzweil also predicts that humans will someday be able to alter their physiques at will, letting us change attributes like our height, sex and race. Presumably this will require nanomachines. He also says that sometime after 2045, humans will be able to create "backups" of their physical bodies in case their original bodies are destroyed. It's an intriguing logical corollary of his prediction that nanomachines will be able to enter human brains and create digital uploads of them by mapping the brain cells and synapses. I suspect a faithful digital replica of a human body could be generated from a much lower-fidelity scan than a human brain would require. 

Kurzweil says the U.S. has the best AI technology and a comfortable lead over China, though that doesn't mean the U.S. is sure to win the AGI race. He acknowledges Rogan's fear that the first country to build an AGI could use it in a hostile manner to successfully prevent any other country from building one of their own. That is how large an advantage an AGI would confer. However, not every country that got there first would choose to use its AGI that way.

This reminds me of how the U.S. had a monopoly on nuclear weapons from 1945-49, yet didn’t try using them to force the Soviet Union to withdraw from the countries it had occupied in Europe. Had things been reversed, I bet Stalin would have leveraged that four-year monopoly for all it was worth. 

Rogan brings up one of his favorite subjects, aliens, and Kurzweil says he disbelieves in them due to the lack of observable galaxy-scale engineering. In other words, if advanced aliens existed, they would have transformed most of their home galaxy into Dyson Spheres and other structures, which we’d be able to see with our telescopes. Kurzweil’s stance has been consistent since 2005 or earlier. 

Rogan counters with the suggestion that AGIs, including those built by aliens, might have no desire to expand into space, since their thinking would be unclouded by the emotions and evolutionary baggage of their biological creators. Implicit in this is the assumption that the desire to control resources (be it territory, energy, raw materials, or mates) is an irrational animal impulse that won't carry over from humans or aliens to their AGIs, since the latter will see the folly of it. I disagree: acquiring resources is actually completely rational, since it bolsters one's odds of survival. In a future ecosystem of AGIs, most of the same evolutionary forces that shaped animal life and humans will be present. All else being equal, the AGIs that are more acquisitive, expansionist and dynamic will come to dominate, while those that are pacifist, inward-looking and content with what they have will be sidelined or worse. Thus the Fermi Paradox remains. 
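The selection argument above can be made concrete with a toy replicator-dynamics model (entirely my own illustration; the strategy names, "appetite" parameters, and numbers are assumptions, not anything Kurzweil or Rogan said). Even a tiny initial fraction of more acquisitive agents comes to dominate when each generation's population share grows in proportion to the resources captured:

```python
# Toy replicator-dynamics sketch: two AGI "strategies" compete for a
# fixed resource pool. Each type captures resources in proportion to
# its population share times its appetite, and next generation's
# shares are proportional to resources captured.
def simulate(generations=50, expansionist_share=0.01,
             expansionist_appetite=2.0, content_appetite=1.0):
    """Return the expansionist population fraction after `generations` rounds."""
    x = expansionist_share  # fraction of the population that is expansionist
    for _ in range(generations):
        exp_gain = x * expansionist_appetite          # resources captured by expansionists
        con_gain = (1 - x) * content_appetite         # resources captured by the content
        x = exp_gain / (exp_gain + con_gain)          # replicator update
    return x

# Starting from just 1% expansionists, they take over almost entirely.
print(round(simulate(), 4))  # → 1.0
```

The point isn't the specific numbers, but that any persistent reproductive edge compounds: the "content" strategy doesn't need to be punished, it just gets outgrown.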

Responding to Kurzweil's quest for immortality, Rogan posits that because the afterlife might be paradisiacal, using technology to extend human life could actually rob us of a much better experience. Kurzweil easily defeats this by pointing out that there is no proof subjective experience continues after death, but we know for certain it exists while we are alive, so if we want to have experiences, we should do everything possible to stay alive. Better science and technology have proven time and again to improve the length and quality of life, and there's strong evidence they have not reached their limits, so it makes sense to use our lives to continue developing both. 

This dovetails with the part of my personal philosophy that opposes nihilism and anti-natalism. Just because we have not found the meaning of life doesn't mean we never will, and just because life is full of suffering now doesn't mean it will always be that way. Ending our lives now, either through suicide or by letting our species die out, forecloses any possibility of improving the human condition and finding solutions to the problems that torment us. And even if you don't value your own life, you can still use your labors to support a greater system that could improve the lives of other people who are alive now and who will be born in the future. Kurzweil rightly cites science and technology as tantalizing and time-tested avenues for improving ourselves and our world, and we should stay alive so we can pursue them.