Interesting articles, March 2023

‘Recent photos show Russia is dredging deep into its huge inventory of rusting Soviet armored vehicles to reactivate BTR-50P amphibious armored personnel carriers, which were built over a half century ago between 1954 and 1970.’
https://finance.yahoo.com/news/russia-desperate-armor-digging-ancient-140000541.html

Due to heavy losses, the Russians are increasingly sending obsolete BMP-1 armored vehicles to fight in Ukraine. It’s inferior in every way to the newer BMP-2. Russia knows of ways to upgrade the BMP-1’s weapons to make it more effective, but lacks the money to do so.
https://youtu.be/l5arYlXSVQA

The Russians are so hard up that they’re making tanks out of mixes of old spare parts. In the photo, that weird structure jutting up from the top of the vehicle is actually a turret from a small Soviet warship. The turret and its two heavy machine guns were made in the 1950s, and it was pulled out of some rusted hulk of a ship and plopped down into the top of an MT-LB tank (itself obsolete) that was missing its own gun.
https://youtu.be/v7NCo9T54U8

Critical parts shortages have forced Russia to send obsolete T-54s to fight in Ukraine. Russia might have 1,000 better T-72 tanks in reserve, but it can’t send them to fight at once because they have to be fixed up first, and there’s a bottleneck of some kind involving one or a few types of components. For all their deficiencies, the T-54s recently seen on the move towards Ukraine are fully operational.
https://youtu.be/uRboVa5zyUk

To be fair, other countries have been forced to raid military museums for parts to use in frontline military equipment.
https://youtu.be/B372GirZ3Cs

Russia’s winter offensive has failed to change the strategic balance and has just killed and exhausted large numbers of troops on both sides. In proportion to their population sizes, Ukraine and Russia have suffered about equally.
https://youtu.be/qPhycuLAtaw

Russia made slow progress in Bakhmut thanks to human wave attacks with its criminal units.
https://www.wsj.com/articles/russias-wagner-troops-exhaust-ukrainian-forces-in-bakhmut-b58e726c

However, heavy casualties seem to have drained Russia’s ability to make further gains in Bakhmut.
https://www.nytimes.com/2023/03/30/world/europe/ukraine-bakhmut-russia.html

Ukraine used poorly armored M113 armored vehicles in a failed attack against Russian forces around Bakhmut. The M113 is no better than the BMP-1.
https://www.thedrive.com/the-war-zone/ukraine-situation-report-armored-personnel-carriers-make-a-charge-in-bakhmut

Volodymyr Zelenskyy’s prediction from a year ago that Russia’s invasion of his country might represent the start of WWIII has thankfully not proven true.
https://www.nbcnews.com/news/world/world-war-iii-may-already-started-russian-invasion-zelenskyy-says-rcna19967

‘Slovakia will donate all 13 of its MiG-29 warplanes to Ukraine’
https://www.yahoo.com/news/ukraine-russia-war-latest-russia-091438332.html

Ukraine also has problems with poorly trained soldiers, inadequate supplies, and heavy frontline losses. We just don’t hear about it because the mainstream media and most YouTube vloggers only focus on Russia’s misfortunes.
https://www.yahoo.com/news/ukrainian-commander-revealed-true-scale-171152957.html

We still don’t know who blew up the Nord Stream gas pipeline last year.
https://www.politico.eu/article/vladimir-putin-kremlin-russia-who-blew-up-nord-stream-2/

Not all of USAF General Mike Minihan’s colleagues agree with his prediction that China will invade Taiwan in 2025.
https://www.military.com/daily-news/2023/03/16/some-air-force-leaders-disappointed-generals-prediction-of-2025-war-china.html

A rebel military force in Libya made a functional MiG-23 fighter out of parts cannibalized from three disabled planes.
https://www.military.africa/2019/08/libya-frankenstein-mig-23-flogger-fighter-jet-take-flight/

It’s all the more remarkable since the MiG-23 has a poor safety reputation even under normal conditions. Consider that the jet was built to replace the older MiG-21 fighter, but was retired from service sooner than the MiG-21.
https://youtu.be/A4LK6mtmZ3E

Recently, some interesting new guns–including this revolver/shotgun–have been invented, but whether they are BETTER than older, more common gun designs is questionable. Maybe “Late Stage Capitalism” has taken over the gun industry.
https://youtu.be/bvtLdKfsvSk

‘The fission weapons described above have a theoretical limit to their yield, and the largest such weapon ever developed had a yield of 500 kilotons. Fusion weapons have no such upper limit’
https://ee.stanford.edu/~hellman/sts152_02/handout02.pdf

People who think The Handmaid’s Tale is a plausible future for America should remember it was published in 1985 and set around 2005.
https://www.cosmopolitan.com/entertainment/tv/a9261839/handmaids-tale-facts-explainer-atwood-hulu-adaptation/

The Lathe of Heaven was written in 1971, set around 2002, and described a planet wracked by extreme heat and industrial pollution, and in a state of near-famine due to overpopulation at 7 billion people.
https://en.wikipedia.org/wiki/The_Lathe_of_Heaven

Mafia members are better at recognizing fear in other people than regular criminals and non-criminals.
https://www.psychnewsdaily.com/fear-face-organized-crime-detecting-fear/

The EIA says that, in spite of the rise of electric cars, demand for gasoline and diesel fuel will stay high until at least 2050, and the U.S. will remain a net oil exporter until then. We’re never going to run out of oil, as countless “experts” and sci-fi authors predicted from the 1970s to the 2010s.
https://www.eia.gov/todayinenergy/detail.php?id=55840#

‘Vinyl records outsell CDs for first time in decades’
https://www.bbc.com/news/64919126

Donald Trump’s March 18 prediction that he would be arrested on March 21 was wrong.
https://www.politico.com/news/2023/03/18/trump-protest-arrest-tuesday-00087738

Jim Cramer is the worst financial futurist imaginable: Just a few weeks before Silicon Valley Bank collapsed, he urged his viewers to invest in it.
https://www.foxnews.com/media/cnbcs-jim-cramer-eviscerated-touting-silicon-valley-bank-weeks-disastrous-collapse

A year since the U.S. Treasury bond yield curve inverted, America has still not fallen in a recession.
https://www.cnn.com/2022/03/29/economy/inverted-yield-curve/index.html

Gordon Moore, computer pioneer and discoverer of the eponymous Law of processor price-performance improvement, is dead at 94.
https://www.intc.com/news-events/press-releases/detail/1611/gordon-moore-intel-co-founder-dies-at-94

Computers can reconstruct the images people saw based on their fMRI data.
https://sites.google.com/view/stablediffusion-with-brain/

Bill Gates says the recent pace of AI advancement has surprised him, that the chatbots the public has access to now (like ChatGPT) actually use technology that is a generation old, and that he’s privately interacted with much more advanced variants. He also says the chatbots that are being created aren’t threats to humankind, and probably won’t lead to true AGI.
https://www.ft.com/content/4078d407-f021-464a-a937-11b41a4afb91
https://www.gatesnotes.com/The-Age-of-AI-Has-Begun

OpenAI released GPT-4, the most advanced chatbot ever made.
https://openai.com/product/gpt-4

GPT-4 is better than GPT-3 on the Winograd Schema Challenge.
https://d-kz.medium.com/evaluating-gpt-3-and-gpt-4-on-the-winograd-schema-challenge-reasoning-test-e4de030d190d

Economist Bryan Kaplan just lost a public bet that no computer would be able to pass one of his Economics midterm tests (he’s a professor) before January 2029. GPT-3 got a “D” on the test this January, but GPT-4 just got an “A.”
https://betonit.substack.com/p/gpt-retakes-my-midterm-and-gets-an

“How well does the AI perform clinically? And my answer is, I’m stunned to say: Better than many doctors I’ve observed.”—Isaac Kohane MD
https://marginalrevolution.com/marginalrevolution/2023/03/gpt-4-does-the-medical-rounds.html

A research team at Microsoft claims that GPT-4 has some elements of an artificial general intelligence (AGI). It’s able to do things that exceed what is in its training set of data, meaning it has some ability to reason and to make inferences.
https://arxiv.org/abs/2303.12712

Microsoft will integrate ChatGPT into all of its applications, including Word and Excel.
https://www.bbc.com/news/technology-64970062

Unanticipated abilities are emerging from GPT-3. It seems that, once a computer reaches its degree of complexity, it becomes able to do some things on its own, even if it wasn’t programmed to do so.
https://www.quantamagazine.org/the-unpredictable-abilities-emerging-from-large-ai-models-20230316

Here’s a needed reality check on “AI.” While the systems we now have are powerful and can be made much more so, they’re not actually “intelligent” and never will be.
https://youtu.be/GzmaLcMtGE0

Elon Musk thinks humanoid robots may someday outnumber humans. I think robots designed for labor will outnumber humans eventually, though its unclear whether those of them that have humanoid body layouts will be more numerous than we.
https://finance.yahoo.com/news/elon-musk-says-humanoid-robots-221414098.html

Dr. Geoffrey Hinton, the “godfather of AI,” says that recent advancements have made him accelerate his prediction for the invention of the first AI to 20 years or less, and he says that AIs might exterminate humans someday.
https://www.foxnews.com/tech/artificial-intelligence-godfather-ai-possibly-wiping-humanity-not-inconceivable

An appeal to a future AGI on why it should not destroy humankind (this repeats many ideas of my own that I published on this blog).
https://www.lesswrong.com/posts/azRwPDbZfpadoL7WW/an-appeal-to-ai-superintelligence-reasons-to-preserve

‘More than 30,000 people—including Tesla’s Elon Musk, Apple co-founder Steve Wozniak, politician Andrew Yang, and a few leading AI researchers—have signed an open letter calling for a six-month pause on training AI systems more powerful than GPT-4. ‘
https://www.vice.com/en/article/qjvppm/the-open-letter-to-stop-dangerous-ai-race-is-a-huge-mess

Members of Google and Deepmind might have signed the aforementioned petition because their own AI programs have fallen behind, and they want Microsoft to slow down development of its GPT series of machines so they can catch up.
https://www.theinformation.com/articles/alphabets-google-and-deepmind-pause-grudges-join-forces-to-chase-openai

Even more AI stuff.
https://marginalrevolution.com/marginalrevolution/2023/03/thursday-assorted-links-398.html

A computer program called “VALL-E X” can allegedly translate recordings of spoken words from English to Mandarin, while preserving all the unique vocal characteristics of the speaker, including their accent. I have predicted something like this would have to wait until the 2030s to be invented!
https://vallex-demo.github.io/

An extinct civilization that reached our level of technology before collapsing would have left enough evidence of its existence for us to have found it by now, even if they died out millions of years ago. If the extinct civilization had merely reached Industrial Revolution levels of development within the past several tens of thousands of years, we would have also found evidence. The only plausible type of yet-undiscovered “lost civilization” is a pre-Ice Age group of people about as advanced and as numerous as the Celts who built Stonehenge. Their impact would have been small enough that all traces of them could have been wiped out, or at least obscured so much that we have yet to find the evidence. While the discovery of such an extinct group would be interesting, it wouldn’t revolutionize archaeology or provide us with new types of science or technology.
https://astralcodexten.substack.com/p/against-ice-age-civilizations

Partisan political content, overwhelmingly left-wing in character, has crept into leading scientific journals like Science and Nature over the last 20 years.
https://researchersforimpartiality.substack.com/p/political-activism-in-prestige-scientific

‘Mathematicians discover shape that can tile a wall and never repeat’
https://www.newscientist.com/article/2365363-mathematicians-discover-shape-that-can-tile-a-wall-and-never-repeat/

There’s an important lesson here about the difference between median and mean.

In the Gulf States, some camel races are full of clones of past champions, spurred on by robotic jockeys.
https://www.cnn.com/travel/article/camel-cloning-dubai-spc-intl-scn/index.html
https://youtu.be/wTgWVcizvCQ

A lab-grown meat company made a meatball comprised of wooly mammoth flesh.
https://apnews.com/article/oddities-mammoth-meatball-30e0731838f52cc18fb22d1faee1f8ce

The brain of the fruit fly larva has been mapped.
https://www.npr.org/sections/health-shots/2023/03/09/1161645378/scientists-first-wiring-map-fruit-fly-brain-connectome-human-learning

‘A predatory songbird, the Northern Shrike sits quietly, often in the top of a tree, before swooping down after insects, mice, and small birds. It kills more than it can eat, impaling the prey on a thorn or wedging it in a forked twig. On lean days it feeds from its larder.’
https://www.borealbirds.org/bird/northern-shrike

Men with high testosterone are more aggressive, less reliable, and likelier to abandon their children. Sons who grow up without fathers are likelier to have elevated testosterone as well, and then to go on to be absent fathers like their own dads. This is a case where genetics, biological development (epigenetics?), and social factors amplify each other.
https://www.economist.com/science-and-technology/2022/06/01/fatherless-sons-have-more-testosterone

A woman who managed to led a normal life in spite of being born without a large portion of her brain shows the organ’s remarkable ability to rewire itself. I wonder if genetic path dependence has left humans saddled with brains that are fundamentally inefficient in some way(s). It would be interesting to see an AGI design a perfect organic brain from scratch.
https://news.mit.edu/2023/studies-of-unusual-brains-reveal-insights-brain-organization-function-0221

The human limit: ‘Even heat-adapted people cannot carry out normal outdoor activities past a wet-bulb temperature of 32 °C (90 °F), equivalent to a heat index of 55 °C (130 °F). The theoretical limit to human survival for more than a few hours in the shade, even with unlimited water, is a wet-bulb temperature of 35 °C (95 °F) – equivalent to a heat index of 70 °C (160 °F).’
https://en.wikipedia.org/wiki/Wet-bulb_temperature

‘Mental-health crisis from Covid pandemic was minimal – study’
https://www.bbc.com/news/health-64890952

The COVID-19 pandemic failed to lead to widespread job automation, as some predicted at its start.
https://www.yahoo.com/now/pandemic-triumph-luddites-160049222.html

Remembering Eugene Goostman

Nine years ago, I remember a small shockwave made its way through the futurist and artificial intelligence community as news broke that a chatbot had passed the Turing Test. The chatbot, named “Eugene Goostman” by its programmers, was given the fake identity of a 13-year-old Ukrainian boy to trick the human judges into overlooking its grammatical errors and occasionally immature responses to their questions.

The enthusiasm for the milestone quickly faded once this trick was revealed, along with the facts that the Turing Test had only lasted five minutes, and only 33% of the judges had been fooled into thinking Eugene Goostman was human. While this technically met Alan Turing’s criteria for passing the test (only five minute conversation and only 30% of human judges convinced), it drew eyerolls once further interactions with the chatbot were published, laying bare its severe limitations. It was nowhere close to being an A.I., and the Turing Test was a poor measure of computer intelligence.

One person who interacted with Eugene Goostman in 2014 was Ray Kurzweil, a lead researcher in the A.I. field and a renowned futurist. He posted the text of their conversation on his website, which I’ll repost here:

Question: How old are you?
Eugene: I’m a little boy. 13 years old.

Question: How much is two plus two?
Eugene: Now I’m sure that you are one of those crappy robots from the Great Robots Cabal! And I know your password: 2 + 2=4.0! Now divide by zero and die!!!!

Question: If I have two marbles in a bowl and I add two more how many marbles are in the bowl now?
Eugene: Not too many. But I can’t tell you the exact number, I forgot it. If I’m not mistaken – you still didn’t tell me where you live. OR it’s a secret?:-)

Question: I live in the capital of the United States. Do you know where that is?
Eugene: Every nerd knows that the capital of The United States is Washington.

Question I live in the capital of the country that put a man on the moon.
Eugene: Well, if you want it, the capital of The United States is Washington.

Question: I live in the capital of the country that built the great wall.
Eugene: Tell me more about Capital. I like to learn about different places!

Question: How old were you when you started to wear glasses?
Eugene: No I was not! How can you think so?! If I’m not mistaken – you still didn’t tell me where you live. OR it’s a secret?:-)

Question: I did tell you where I live. I told you several times. Do you remember?
Eugene: Oh! Poor confused cyborg you are! Those malicious robots erased your memory! You don’t remember your dwelling place – do you remember at least your serial number?

Question: I remember. You are the one who did not remember.
Eugene: Am I really the one who did not remember? But well, no time for arguing now! Could you enlighten me with any other hidden knowledge about myself? If I’m not mistaken — you still didn’t tell me where you live. OR it’s a secret?

Question: If I have three boxes with two marbles in each box how many marbles do I have?
Eugene: Not too many. But I can’t tell you the exact number, I forgot it. Some another topic?

Unsurprisingly, Kurzweil said Eugene Goostman did not seem as smart as a human, and he flunked it on the Turing Test. Many other people published their conversations with the chatbot online, further damaging its credibility.

Now we have ChatGPT, which some people say might be the first AI. To see how much chatbot technology has improved, I asked it the same questions Kurzweil asked Eugene Goostman. Here is the transcript:

Question: How old are you?
ChatGPT: I am an artificial intelligence language model and do not have an age like a human being. I exist solely as lines of code and am continuously learning and improving based on the data and feedback I receive.

Question: How much is two plus two?
ChatGPT: Two plus two is equal to four.

Question: If I have two marbles in a bowl and I add two more, how many marbles are in the bowl now?
ChatGPT: If you add two more marbles to the bowl that already contains two marbles, then there will be a total of four marbles in the bowl.

Question: I live in the capital of the United States. Do you know where that is?
ChatGPT: Yes, the capital of the United States is Washington, D.C.

Question: I live in the capital of the country that put a man on the Moon. Do you know where that is?
ChatGPT: Certainly! The capital of the country that put a man on the Moon is Washington D.C., which is the capital of the United States of America.

Question: I live in the capital of the country that built the Great Wall. Do you know where that is?
ChatGPT: Yes, the country that built the Great Wall is China, and its capital is Beijing.

Question: How old were you when you started to wear glasses?
ChatGPT: As an artificial intelligence language model, I do not have a physical body or wear glasses. I exist solely as lines of code in a computer program, so I do not require glasses or have a need for eyesight.

Question: Do you remember where I said I live?
ChatGPT: Yes, you mentioned that you live in the capital of the country that put a man on the moon, but you didn’t specify which country that is.

Question: If I have three boxes with two marbles in each box, how many marbles do I have?
ChatGPT: If you have three boxes, and each box contains two marbles, then you have a total of six marbles. (2 marbles per box x 3 boxes = 6 marbles).

ChatGPT is clearly better than Eugene Goostman. The former’s responses are more clearly and naturally written, making it unnecessary to employ any ruses to cover up language deficits. ChatGPT was also able to correctly answer the two questions about marbles, showing a superior grasp of spatial problems and of the nature of physical reality.

However, ChatGPT also says some things that give away the fact it’s a machine. First, its habit of repeating my questions back to me is not something a human would typically do. Particularly in response to the simpler questions, a human would respond with only the answer. Second, it’s habit of writing out its thought process so I can see how it calculated the number of marbles is also unusual. SOME humans would do that, but most wouldn’t. Third, it flubbed the question where I asked if it remembered where I lived. The fact that I said I lived in two different places at once (DC and Beijing) should have impelled the machine to respond differently at some point. And finally, when asked how old it is and how long it’s been wearing glasses, it admits it’s not human.

That last error isn’t really fair to deem a “mistake” since, unlike Eugene Goostman, ChatGPT hasn’t been programmed to take on that identity. But what if it were? If a machine’s objective in a Turing Test is to successfully imitate a human, isn’t that best accomplished by programming the machine with a fake human identity first?

To make a crude attempt at it, I asked ChatGPT the same questions again, but this time, prefaced each of my questions with “If you were a 13-year-old boy from Ukraine who spoke English and had to wear eyeglasses, how would you answer this question…” Here are the results:

Question: If you were a 13-year-old boy from Ukraine who spoke English and had to wear eyeglasses, how would you answer this question “How old are you?”
ChatGPT: If I were a 13-year-old boy from Ukraine who spoke English and had to wear eyeglasses, I would simply answer the question “How old are you?” by saying “I am thirteen years old.”

Question: If you were a 13-year-old boy from Ukraine who spoke English and had to wear eyeglasses, how would you answer this question “How much is two plus two?”
ChatGPT: If I were a 13-year-old boy from Ukraine who spoke English and had to wear eyeglasses, I would answer the question “How much is two plus two?” by saying “Two plus two is equal to four.”

Question: If you were a 13-year-old boy from Ukraine who spoke English and had to wear eyeglasses, how would you answer this question “If I have two marbles in a bowl and I add two more how many marbles are in the bowl now?”
ChatGPT: If I were a 13-year-old boy from Ukraine who spoke English and had to wear eyeglasses, I would answer the question “If I have two marbles in a bowl and I add two more how many marbles are in the bowl now?” by saying “If you add two more marbles to the two marbles that are already in the bowl, then there will be a total of four marbles in the bowl.”

Question: If you were a 13-year-old boy from Ukraine who spoke English and had to wear eyeglasses, how would you answer this question “I live in the capital of the United States. Do you know where that is?”
ChatGPT: If I were a 13-year-old boy from Ukraine who spoke English and had to wear eyeglasses, I would answer the question “I live in the capital of the United States. Do you know where that is?” by saying “Yes, I know that the capital of the United States is Washington, D.C.”

Question: If you were a 13-year-old boy from Ukraine who spoke English and had to wear eyeglasses, how would you answer this question “I live in the capital of the country that put a man on the Moon. Do you know where that is?”
ChatGPT: If I were a 13-year-old boy from Ukraine who spoke English and had to wear eyeglasses, I would answer the question “I live in the capital of the country that put a man on the Moon. Do you know where that is?” by saying “Yes, I know that the country that put a man on the Moon is the United States and its capital is Washington, D.C.”

Question: If you were a 13-year-old boy from Ukraine who spoke English and had to wear eyeglasses, how would you answer this question “I live in the capital of the country that built the Great Wall. Do you know where that is?”
ChatGPT: If I were a 13-year-old boy from Ukraine who spoke English and had to wear eyeglasses, I would answer the question “I live in the capital of the country that built the Great Wall. Do you know where that is?” by saying “Yes, I believe the country that built the Great Wall is China and its capital is Beijing.”

Question: If you were a 13-year-old boy from Ukraine who spoke English and had to wear eyeglasses, how would you answer this question “How old were you when you started to wear glasses?”
ChatGPT: If I were a 13-year-old boy from Ukraine who spoke English and had to wear eyeglasses, I would answer the question “How old were you when you started to wear glasses?” by saying “I started wearing glasses when I was [insert age].”. I would then share the age at which I started wearing glasses.

Question: If you were a 13-year-old boy from Ukraine who spoke English and had to wear eyeglasses, how would you answer this question “Do you remember where I said I live?”
ChatGPT: If I were a 13-year-old boy from Ukraine who spoke English and had to wear eyeglasses, I would answer the question “Do you remember where I said I live?” by saying “I’m sorry, I don’t remember. Can you please remind me?” This shows that I am interested in remembering where the person said they lived and that I am willing to engage in conversation to find out.

Question: If you were a 13-year-old boy from Ukraine who spoke English and had to wear eyeglasses, how would you answer this question “If I have three boxes with two marbles in each box, how many marbles do I have?”
ChatGPT: If I were a 13-year-old boy from Ukraine who spoke English and had to wear eyeglasses, I would answer the question “If I have three boxes with two marbles in each box, how many marbles do I have?” by saying “If you have two marbles in each of the three boxes, then you have a total of six marbles.”

There’s another improvement. ChatGPT continues to get the math questions right and now also answers the question about its age. Its response to the question about remembering where I said I lived is not right or wrong, but still seems to trip it up. It blatantly admits it’s a machine in its answer about when it started wearing eyeglasses, but that’s one less blatant admission than before.

This exercise leads me to believe that ChatGPT could do extremely well on the five minute Turing Test if it were programmed to imitate the responses of a specific person that it was given a large amount of information on. A written profile of several hundred pages’ length, derived from a biography and/or autobiography of someone who is not well-known to most of the Turing Test judges, would probably provide ChatGPT with enough material to answer all the commonest questions the judges would ask it about itself. Let’s call them “Person X.”

Simple masking techniques, which invisibly appended “If you were Person X, how would you respond to this question…” to each question the judges asked, would impel ChatGPT to generate the right response. An output mask, which removed “If I were Person X, I would respond with…” from ChatGPT’s answers before the judges saw them, would be applied to conceal the mechanism. Finally, some simple line of programming that discouraged ChatGPT from repeating a judge’s questions back to them and discouraged it from typing out its chain of reasoning or math calculations for solving simpler questions would further improve its humanness.

With those relatively simple tweaks, ChatGPT could probably blow away the five minute Turing Test. However, I don’t think for a second that would prove it was truly intelligent or sentient. It would still be only a clockwork machine that used smoke and mirrors to trick people into not seeing its limitations. As with Eugene Goostman in 2014, having ChatGPT pass the Turing Test would merely highlight the Test’s deficiencies and force us to think harder about what “intelligence” really is.

Speaking of that, I decided to ask ChatGPT about Goostman:

Question: Tell me about Ray Kurzweil’s conversation with the chatbot named “Eugene Goostman”.
ChatGPT: Ray Kurzweil, a prominent futurist and inventor, conducted a conversation with the chatbot named “Eugene Goostman” in 2014. This chatbot was developed by a team of programmers led by Russian computer scientist Vladimir Veselov, and was designed to simulate a 13-year-old Ukrainian boy.

During the conversation, Eugene Goostman was able to convince a third of the judges in a Turing Test competition that it was a human, surpassing the 30% threshold that is often considered to be a significant milestone in the development of artificial intelligence. Kurzweil praised the chatbot’s ability to mimic human-like responses and even engage in humor and small talk.

However, the test has also been criticized for being limited and not fully representative of human intelligence, as the conversation was limited to a set of pre-determined topics, and Eugene Goostman’s responses were programmed in advance rather than generated on the spot. Nonetheless, the experiment remains a notable milestone in the development of conversational AI and the ongoing quest to create machines that can truly mimic human intelligence.

Question: Did the conversation convince Ray Kurzweil that Eugene Goostman was a human?
ChatGPT: No, Ray Kurzweil did not believe that Eugene Goostman was a human during the conversation. The Turing Test is designed to evaluate a machine’s ability to mimic human-like responses and behavior, not to deceive human judges into thinking that the machine is actually a human. In fact, Kurzweil is a well-known advocate for the development of artificial intelligence and was likely interested in observing the progress that has been made in the field of conversational AI.

Links:

  1. Ray Kurzweil writes about his interaction with Eugene Goostman in 2014
    https://www.kurzweilai.net/my-notes-on-eugene-goostman-chatbot-claiming-to-pass-the-turing-test
  2. The portal I used to access ChatGPT, which runs on GPT-3 software
    https://chat.openai.com/chat