Review: “M3GAN 2.0”

[Written with the help of GPT-5]

Plot:

“She’s back!”

In M3GAN 2.0, the terrifying AI doll returns in a story that leans more into action, humor, and moral ambiguity than the original. The film follows Gemma (played by Allison Williams) as she grapples with the consequences of her earlier creation while navigating a rapidly escalating technological arms race. A new and more dangerous robot—AMELIA—emerges with her own mysterious agenda, forcing former enemies into uneasy alignment.

This movie is all about M3GAN’s redemption arc. Instead of being purely a villain, she becomes something closer to an antihero: still sharp-tongued and unsettling, but now positioned as a necessary ally against a greater threat. The dynamic between M3GAN and Gemma drives much of the story, blending tension, dark comedy, and reluctant trust.

The film expands its scope significantly, introducing corporate conspiracies, experimental technologies like brain implants, and high-stakes set pieces ranging from covert infiltrations to chaotic public confrontations. That broader ambition comes with a tradeoff—I didn’t like how this movie’s plot was more convoluted than the first movie’s. There are more moving parts, more characters, and more overlapping agendas, which can make the narrative feel cluttered at times.

Still, I thought the movie was fun and funny, and Allison Williams was stunning. The humor lands more often than in the original, and M3GAN’s personality is given more room to shine, especially in her interactions with humans. The film balances its darker sci-fi elements with moments of levity and spectacle, making it an entertaining, if occasionally overcomplicated, continuation of the story.

Analysis:

Though no year is given for the movie’s events, it is clearly set either in the present day or at most five years from now.

Handcuffs won’t work on robots. At the start of the film, the evil gynoid AMELIA is captured by Iranian soldiers who mistakenly believe she is human. They restrain her with standard handcuffs. Once they lower their guard, she easily breaks free by pulling the cuffs apart with her bare hands. This depiction is largely realistic, with an important caveat.

AMELIA

AMELIA is clearly a specialized combat and infiltration machine, like the T-800 from the Terminator films. A robot designed for that role would likely possess strength far beyond human limits, allowing it to break restraints such as handcuffs, zip ties, or even more robust confinement systems intended for people. More broadly, many elements of the built environment that restrict human movement—locked doors, fences, barbed wire, or walls—would not reliably stop such a machine.

Fortunately, this kind of capability would not be typical. Most real-world robots will be task-specific or general-purpose assistants designed to operate safely around humans. That implies limits on strength, speed, weight, and durability, as well as the absence of combat-oriented programming. In other words, AMELIA represents an extreme edge case rather than a baseline for future robotics.

While I am not aware of any current humanoid robot capable of breaking handcuffs, there is no fundamental engineering barrier to building one. The film’s prediction is therefore plausible in principle, even if it reflects a niche category of machines.

Robots will use excessive force to kill humans. AMELIA surprises and attacks her captors, killing one by punching him so hard that his head is severed from his body. While a sufficiently strong machine might be capable of such force, the depiction is unrealistic in how that force is applied.

A system designed for lethal efficiency would not rely on extreme, energy-intensive blows. Instead, it would favor precise, minimal-force strikes to vulnerable areas such as the throat, neck, or head. A metal hand delivering a controlled but targeted impact would be more than sufficient to incapacitate or kill a human, and would do so with far less energy expenditure. As I wrote in my Terminator Salvation review, hand-to-hand combat with killer robots will be brutally skillful and done in seconds.

To the film’s credit, it later presents a more plausible approach. In a subsequent scene, AMELIA disables one FBI agent by striking his windpipe and knocks another unconscious with a controlled punch. This reflects a more realistic model of how an advanced machine might apply calibrated force.

A related scene involves M3GAN defending herself against armed attackers at a gala event. She uses martial arts techniques to quickly and nonlethally subdue them. This raises an important point about how machines might handle violent encounters. Unlike humans, robots would not experience fear, panic, or hesitation, and they would not depend on imperfect training under stress. As a result, they could consistently apply the minimum force necessary to neutralize a threat.

Humans—especially police—often resort to lethal force in part because of fear, uncertainty, and limited confidence in close-quarters combat. Robots would not share these limitations. Their “lives” would not be at risk in the same way, either because their cognition is backed up remotely or because their bodies are more durable than human ones and immune to punches, knives and even bullets. Even in cases where they are vulnerable, they would not be driven by self-preservation in the same emotional sense. Some might not be programmed to value their own lives.

Because of this, many encounters that humans perceive as life-or-death would not be for machines. Robots could take actions that would be extremely risky for a person, enabling them to resolve dangerous situations with precision and restraint. With appropriate design, this could allow them to reduce the overall level of violence in such encounters.

Finally, since machines will lack emotions (or at least be able to turn their emotions off at will), they won’t be subject to fatigue, stress, anger, or other emotional states that undermine human judgment and training during crises. They will be completely cool and rational in even the most intense confrontations. Instead of robots mass murdering people, we could live in a future where they behave in ways that sharply decrease the number of violent human deaths worldwide. And for the reasons I’ve mentioned in this section, robots will make better police officers than humans.

There will be brain implants that let people merge minds with other humans and machines. The film introduces a powerful but highly speculative technology in the form of advanced brain implants. Early in the film, an arrogant tech billionaire demonstrates one embedded in his own skull, and late in the film, Allison Williams’ character is forcibly given a similar device.

Allison Williams with a brain implant

These implants are depicted as deeply integrated into the brain, enabling telepathic communication, sensory sharing (e.g. – seeing what the other person’s eyes are seeing), and even remote control of another person’s body. Allison Williams allows this when she needs M3GAN’s fighting skills to beat up enemies. Users can “speak” to one another through internal dialogue, exchange visual perspectives, and override motor functions.

This portrayal is not consistent with the current state of technology or any credible near-term trajectory.

The core difficulty lies in the nature of the brain itself. The brain does not transmit simple, discrete commands that can be easily intercepted and replicated. Instead, it operates through complex, distributed patterns of neural activity that are still not fully understood. Accurately reading intent, translating it into motor output, and maintaining real-time sensory feedback would require breakthroughs across neuroscience, computation, and bioengineering.

There are also major engineering challenges. Current neural interfaces can only interact with a tiny fraction of neurons, signals degrade over time, and biological responses such as scar tissue can interfere with implants. Additionally, spinal and cortical systems are not passive conduits—they actively process and transform signals.

The film also depicts immediate recovery and full functionality after implantation, which is highly unrealistic. Even in a mature version of this technology, such a procedure would likely involve significant medical recovery time, extensive calibration, and a prolonged training period.

Finally, the visual design of the implant—a metallic device protruding from the skull—is implausible from a user-acceptance standpoint. Real-world designs would almost certainly prioritize minimal visibility for aesthetics.

Overall, this is not a near-future technology. It remains speculative and likely requires decades of advancement before anything approaching this functionality becomes possible. As I wrote in my big list of future predictions, I think we’ll have to wait until 2100 for this technology to exist. Furthermore, as I said in my review of “Physics of the Future”, brain implants will not be externally visible and won’t take the form of metallic protuberances coming out of people’s skulls since humans will find that ugly.

Cybernetics will have cured spinal cord damage. The film also presents a dramatic medical breakthrough: the arrogant billionaire stands up from his wheelchair, revealing that cybernetic implants in his spine have restored his mobility.

The arrogant, disabled billionaire

This technology doesn’t exist, nor will it in the near future, so this depiction is incorrect. Moreover, as I’ve said in my big list of future predictions, I don’t think the technology will exist until the 22nd century. ChatGPT explains the formidable hurdles:

The spinal cord is not simply a bundle of wires transmitting signals from the brain to the body. It is a complex processing system that integrates signals, coordinates movement, and manages reflexes. Bypassing a spinal injury would require recreating highly specific patterns of neural activity rather than simply rerouting a signal.

Another major challenge is decoding the brain’s “language” of movement. Motor commands are distributed across large populations of neurons and are continuously adjusted based on sensory feedback. A functional system would need to read these signals, translate them into precise stimulation patterns in the spinal cord, and maintain a real-time feedback loop.

There are also practical limitations. Current electrode technology cannot interface with enough neurons to replicate natural movement, and implants often degrade over time due to biological responses. Additionally, many spinal injuries involve complex damage rather than clean breaks, further complicating intervention.

That said, early experimental systems have shown promise. Some patients have regained limited ability to stand or take assisted steps using brain–spine interfaces. These results demonstrate that the concept is viable, but fully restoring natural, fluid movement remains a long-term challenge.
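As a toy illustration of what “decoding” distributed motor signals involves, here is a classic population-vector decoder. This is my own illustrative sketch, not anything from the film: simulated neurons with cosine tuning (a standard textbook model of motor cortex) each “vote” for their preferred direction, weighted by firing rate, and the vector sum recovers the intended movement direction. Real decoders face noisier, less well-behaved signals, which is part of why this remains hard.

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 simulated motor neurons, each with a random "preferred direction".
# Under cosine tuning, a neuron fires fastest when the intended movement
# points along its preferred direction.
n = 200
preferred = rng.uniform(0, 2 * np.pi, n)
intended = np.pi / 3  # the movement direction we want to recover (60 degrees)

rates = 1 + np.cos(intended - preferred)  # cosine tuning with a baseline
rates += rng.normal(0, 0.1, n)            # measurement noise

# Population-vector decode: weight each neuron's preferred direction by its
# baseline-subtracted firing rate, then take the angle of the vector sum.
weights = rates - rates.mean()
x = np.sum(weights * np.cos(preferred))
y = np.sum(weights * np.sin(preferred))
decoded = np.arctan2(y, x) % (2 * np.pi)

print(f"intended={intended:.3f} rad, decoded={decoded:.3f} rad")
```

With 200 neurons the decoded angle lands within a few hundredths of a radian of the true direction; with fewer neurons or more noise it degrades quickly, which hints at why current implants (interfacing with only a tiny fraction of neurons) fall far short of the film’s fluent mind-control.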

Terminators will exist. AMELIA is essentially a T-800 from the Terminator films: a humanoid machine that looks externally identical to a human but has superior strength, speed, and fighting skills, and is designed to infiltrate human spaces and destroy targets. M3GAN’s upgraded body is about as capable. While impressive advances in robotics have been made recently, we’re still far from being able to build machines this sophisticated, so this prediction of the movie’s will fail. As I’ve said in my reviews of Prometheus and The Terminator, I doubt androids with this constellation of traits will exist until near the end of this century.

AI will help you navigate the world in real time. One of the film’s more grounded ideas appears in a scene where Allison Williams’ character attends a gala while receiving real-time guidance from M3GAN through an earpiece. The earpiece presumably has a small microphone that transmits her voice to M3GAN. M3GAN can see her surroundings and provide context-sensitive advice, such as warning her about suspicious individuals or suggesting specific actions (e.g. – “Search his pockets for an access card.”). It’s unclear how the visual data is being transmitted, but maybe she is wearing a tiny camera device on her clothing, or M3GAN has hacked into the building’s surveillance camera system.

This depiction is largely accurate as a near-term prediction.

Modern AI systems—particularly large language models—are already capable of providing useful guidance across a wide range of scenarios. The remaining challenge is integration: combining audio input, visual perception, and real-time processing into a seamless, wearable system.

A setup involving an earbud and a camera-equipped smartphone (or similar device) could plausibly achieve this (this was shown in the film Her, in which the main character also wore an earbud and kept a smartphone in his front breast pocket with the camera unobstructed and facing forward). With continued advances in multimodal AI and hardware miniaturization, such systems could become affordable and widely available within five years.
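The integration problem amounts to a simple capture-and-respond loop. The sketch below is entirely hypothetical: the three helper functions are stand-ins for a camera feed, speech recognition, and a multimodal model, not real APIs.

```python
# Hypothetical skeleton of the earbud + camera assistant loop. Each helper
# is a stub standing in for real hardware or a real AI service.

def capture_frame():
    # stub for reading a frame from a body-worn camera and describing it
    return "man in a grey suit lingering near the east exit"

def transcribe_speech():
    # stub for the earbud microphone plus speech-to-text
    return "who is that?"

def multimodal_model(image_desc, utterance):
    # a real system would send the frame and utterance to a
    # vision-language model and speak the reply through the earbud
    return f"Regarding '{utterance}': I see a {image_desc}. Stay alert."

def assistant_step():
    """One cycle of the loop: look, listen, advise."""
    frame = capture_frame()
    speech = transcribe_speech()
    return multimodal_model(frame, speech)

print(assistant_step())
```

The hard part isn’t this loop; it’s making each stub real (low-latency vision, robust speech recognition in noisy rooms, and a model fast enough to advise in real time), which is exactly the integration challenge described above.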

Exoskeletons will be able to move around on their own. One piece of technology Allison Williams’ company has developed is a powered, full-body exoskeleton for disabled people. It has integral arms and legs connected to a flat torso, and each part tightly mirrors the wearer’s corresponding body part. Early in the film, her male employee puts it on and demonstrates how it works. Around the midpoint of the film, AMELIA seizes control of it remotely and uses it to chase down and attack the company’s female employee.

Exoskeletons for disabled people already exist, but they are intended for users with weak or paralyzed legs, and as such have legs but no arms. Full-body units meant for people with complete paralysis below the neck don’t exist and almost certainly won’t in the next five years, so this depiction of the present/near future is wrong. However, the film’s underlying logic contains an important insight.

An exoskeleton must be capable of maintaining balance, coordinating movement, and supporting both its own weight and that of its wearer. This implies that, in principle, it should also be capable of operating without a human inside it. In fact, removing the human simplifies the control problem by eliminating unpredictable movements and reducing load.

Additionally, a properly designed exoskeleton would route forces through its own structure rather than through the human body. If the machine lifts a heavy object, the load would be borne by the mechanical frame, not the wearer. (I touched on this in my review of Edge of Tomorrow.)

Given these considerations, it is reasonable to conclude that some future exoskeletons could move autonomously and even be remotely controlled. The film’s depiction is therefore conceptually sound, even if its timeline is optimistic. So, yes, in the future, some kinds of exoskeletons could, by themselves, chase people down and beat them up with their arms. And if they were connected to the Internet, other people or AIs might be able to hijack them and remotely command them to do the same.
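The balance argument can be made concrete with a toy model. This is my own sketch with made-up gains, not anything from the film: a linearized inverted pendulum stands in for the exoskeleton’s torso, and a simple proportional-derivative (PD) controller drives it back upright, with no wearer in the loop at all.

```python
# Toy balance controller: a linearized inverted pendulum (theta'' = (g/l)*theta
# when uncontrolled) stabilized by PD feedback. No human is modeled, which
# illustrates that an empty exoskeleton can balance itself.
dt = 0.001           # simulation time step, seconds
g_over_l = 9.81      # gravity divided by an effective pendulum length of 1 m
kp, kd = 40.0, 10.0  # hypothetical proportional and derivative gains

theta, omega = 0.2, 0.0  # initial tilt (radians) and angular velocity
for _ in range(5000):    # simulate 5 seconds
    torque = -kp * theta - kd * omega   # PD feedback opposes tilt and sway
    alpha = g_over_l * theta + torque   # linearized angular acceleration
    omega += alpha * dt
    theta += omega * dt

print(f"tilt after 5 s: {theta:.5f} rad")  # decays to essentially zero
```

Removing the human really does simplify this: with no unpredictable wearer movements acting as disturbances, fixed gains like these suffice, exactly as the paragraph above argues.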

Exoskeletons won’t be waterproof. The characters defeat the hijacked exoskeleton by dumping a fish tank full of water on it, which short-circuits its electronics. This will prove inaccurate.

If exoskeletons are intended for real-world use, they would need to operate in a variety of environments, including exposure to water. Basic waterproofing and environmental sealing are standard engineering practices for modern electronics and machinery. A system as complex and expensive as a full-body exoskeleton would almost certainly be designed to withstand such conditions.

Robots will be able to detect your vital signs by touching you. At the gala party, M3GAN and AMELIA have their first real confrontation. AMELIA kicks Allison Williams hard enough to knock her unconscious and send her skidding across the floor. M3GAN runs to Allison to make sure she’s still alive, opens one of her eyes with her hand, and visually scans it, revealing her heart rate, body temperature, and blood oxygen levels. This depiction is partially plausible but flawed in its details.

Allison Williams getting checked out

It is possible to estimate heart rate from subtle visual cues, including changes in blood flow visible in the face or eyes, though this requires sensitive instrumentation. However, the eyes do not provide information about body temperature or blood oxygen levels.

A more plausible approach would involve sensors placed on the forehead, particularly over the temporal artery. Heart rate is of course measurable there, infrared thermometers can measure core body temperature from this location, and optical sensors can estimate blood oxygen levels by analyzing light reflected back through the skin, as in reflectance pulse oximetry. These technologies could, in principle, be miniaturized and integrated into a robot’s fingertip, allowing it to gather multiple vital signs through touch.

Of course, no robots have this feature today, nor are they likely to within five years, so the movie’s depiction is wrong. Whatever the case, in the future, I’m sure domestic robots will have the ability to quickly and accurately take vital signs from humans, either thanks to inbuilt sensors or knowledge of how to use basic medical devices.

Robots will entertain you. At the emotional nadir of the film, when all seems lost, Allison Williams and M3GAN are back at their base, alone. As Allison nearly breaks down crying over their predicament, M3GAN first offers consoling words, and then suddenly breaks into a situationally appropriate song–“This Woman’s Work” by Kate Bush. I thought this was hilarious and recommend you watch it to raise your own spirits, and I also think it’s an accurate and surprisingly important depiction of how machines will soon interact with humans.

Anyone who has played around with LLMs knows they’re capable of producing good poetry and funny jokes, understanding countless nuances of the human experience, and rendering good advice on what to do or not to do in specific situations. With entirely reasonable advances in the technology and in the interfaces through which we interact with them (the earbud + forward-facing camera setup I mentioned earlier), it’s likely we’ll have disembodied AIs that can be invisibly present around us at all times and keep us entertained with jokes, songs, or interesting conversation. Five years is a reasonable timeline for such a thing. And there’s no reason to think the companion robots we’ll have in the farther future won’t be endowed with those same attributes.

The implications of this for quality of life for billions of people are profound. Imagine an easing of loneliness, and every day being simply funnier and more entertaining. We’ve been so fixated on the consequences of machines acquiring our job skills and intelligence that we’ve overlooked the upsides of what happens when they’re as funny and as fun as we are.
