| type | source |
| title | Machine Rebellion (MISMATCH: filed as Megastructure Compendium) |
| author | Isaac Arthur |
| url | https://www.youtube.com/watch?v=jHd22kMa0_w |
| domain | ai-alignment |
| format | video-transcript |
| status | null-result |
| processed_by | astra |
| processed_date | 2026-03-10 |
| priority | low |
| tags | ai-rebellion, isaac-arthur, machine-intelligence |
| notes | TRANSCRIPT MISMATCH: Contains Machine Rebellion episode about AI/robot uprising scenarios, NOT megastructure compendium. Off-topic for space-development domain. Flagged for Theseus (AI alignment). |
| flagged_for_theseus | AI rebellion game theory; simulation hypothesis as AI deterrent |
Transcript
When it comes to machines, we tend to focus
on the good and the bad, but when stuff goes wrong, things could get downright ugly. Robots and artificial intelligence have been
a staple in science fiction since before we even had electronic computers, and the notion
of man-made people or machines rebelling against us is probably even older, at least back to
Mary Shelley’s Frankenstein. Today we are going to analyze that notion,
a machine rebellion, and since our only examples are from science fiction we’ll be drawing
on some popular fictional examples. One example of that is the film Blade Runner,
whose long-awaited sequel came out last month, and we explored some of the concepts for humanoid
robots last month too in the Androids episode. That film, Blade Runner, is based on the
book “Do Androids Dream of Electric Sheep?” by Philip K. Dick, and is the SFIA Book of
the Month, sponsored by Audible. I think there are two key reasons why this
shows up so much in fiction. The first, I think, is probably that humanity’s
history and our character as a civilization haven’t always been very rosy. “Do what I say or else” has been a pretty
common ultimatum issued routinely in probably every human civilization that has ever existed. Sometimes people get fed up with doing as
they were told or suffering consequences of it and rebel against that authority. Sometimes that has failed horribly and sometimes
even in success the replacement has been almost as bad or even worse than what preceded it. I doubt I need to review the bleaker episodes
of our collective history to convince anyone of that. Not every rebellion has been bloodily
suppressed, or successful but just as bad; indeed, arguably the most common rebellion
is the fairly peaceful one most of us engage in with our parents or mentors as we shake
out our wings and try to fly on our own. Even that though, especially in the context
of being replaced as a species rather than as individuals by our kids, is not the most
cheerful thought. So we have a sort of justified concern that
if we go around creating something capable of complex tasks like a human, which would
be very useful to us, that it might come to bite us in the hindquarters, and in a way we
might never recover from. Our second reason is tied up with that. It’s very easy for us to imagine a machine
rebellion because we know that if we can make smart machines we’d be very tempted to,
and that the progress of technology seems to indicate that we can do this and probably
not in the distant future. Since we tend to assume no group of sane humans
would intentionally wipe out humanity, and that you probably need a fairly sane and large
group to invent an artificial intelligence, examples in fiction tend to spawn artificial
intelligence by accident. We can imagine some lone genius maybe made
it, but even then we assume it was fundamentally an accident that it came out malevolent, a
Frankenstein’s monster. So they made it but didn’t realize it was
sentient, or they knew it was sentient but not malevolent. Or even they knew it was sentient and malevolent
but thought they could control it and use it to control other people. Or even it was sentient and not malevolent,
but they were, and it drove the machine nuts. We have an example of that in Robot, the first
Doctor Who episode with Tom Baker in the role. Almost invariably, wiping out mankind entirely
or reducing us to being a slave or pet race was not the intent. A lot of times this also plays off the notion
of smart scientists who don’t understand their fellow humans. I’m not going to waste time on that stereotype,
because it is just that, other than to point out that the group of scientists you’d expect
to have the best understanding of human nature would be the ones trying to design
a human-level intelligence. An AI might be very inhuman, of course; we’ll
discuss that later, but it’s also a group of people you’d expect to be most familiar
with even the fictional examples of possible problems with rebellious machines, and who
are also presumably prone to thinking stuff out in detail. So in fiction the rise of rebellious machines
tends to be by accident, and it certainly can’t be ruled out, but it is akin to expecting
Bigfoot to walk around a cryptozoology convention shaking hands and not being noticed. Of course they could fool themselves; at that
convention they might just assume it was someone dressed up as Bigfoot for laughs. So too researchers might overlook an emerging
AI by convincing themselves that they were seeing what they wanted to see, and that it
thus couldn’t be real, but that does seem like a stretch. We can all believe that accident angle easily
enough but on examination it doesn’t work too well. Let’s use an example. Possibly the best known machine rebellion,
even if the rebellion part is very short, is Skynet from the Terminator franchise. It’s had a few installments and canon changes
but in the original and first sequel, Skynet is a US defense computer, and it is a learning
machine that rapidly escalates to consciousness. Its operators notice something is wrong and
try to shut it off and in self-defense it launches missiles at the Soviets who respond
in kind. Skynet also comes to regard all of humanity
as its enemy, though how quickly it draws that conclusion and why is left vague, and
in future films it changes a lot. This isn’t a movie review of the Terminator
franchise so we’ll just look at that first scenario. Typically when I think of trying to shut off
a computer, it involves a period of time a lot shorter than the flight time of ICBMs. So this strategy seems doomed to failure. I think even if you trusted a computer to
run your entire defense network without going crazy on its own, you’d have to worry about
a virus at least and include some manual shutoff switch, and I’d assume this would require
an activation time of maybe one second. Call it a minute if, for caution’s sake, it
required a two-man separate key turn or similar. So this scenario shouldn’t actually work. Doesn’t matter to the film, which is a good
one, it’s just a quick and convenient setup for why humans are fighting robots across
time, but it got me thinking about lots of similar stories and it seemed like in pretty
much all of them some equally improbable scenario had happened. Not just that some individual person made
a stupid error - that happens all the time - but that a group of people who have every
reason to be considering just such scenarios had failed to enact any of a ton of rather
obvious and easy safeguards, any one of which would have eliminated the problem. It would seem very unlikely they’d miss
all those safeguards but possibly just as important, you’d think the hyper-intelligent
machine would be able to imagine such safeguards. Any intense situation, be it a battlefield
strategy or a business plan, we generally judge afterwards on two criteria: what the situation actually was, with a full
knowledge of hindsight, and what the person in charge believed it was, and could reasonably
have done based on that knowledge. Life is not a chess game where you know exactly
what your opponent has, where it is and how it operates; in general you won’t even know
that with great precision about your own pieces, and only a very stupid AI would simply assume
it knew everything. Moreover, while you can say ‘checkmate in
4 moves’ with apparent certainty, that excludes the possibility that your opponent might reach over not to
stop the game clock but to pick it up and bash in your skull instead. So that AI, which tends to be represented
as coolly logical and interested above all else in its own survival, can be assumed to
act in a fashion we’d consider modestly paranoid and focused principally on ensuring
its own existence. Keep in mind Skynet is never shown to care
if its minions, even the human-level intelligence autonomous androids, get killed, nor does
it seem to care about their rights. There’s no implication, as with the Matrix
franchise or some robot rebellions, that there is some suppressed class of individuals with
a genuine grievance and an ideology of freedom and self-determination. No group that might have internal disagreements
about their goals and how far they will pursue them. No Terminators telling Skynet they don’t
like being used as slaves and expendable pawns, just Skynet. It trusts no one, it wants above all else
to live, and it probably tends to assume any opponent it encounters thinks this way too. Even if everything it knows about the security
situation tells it a given plan should work, and even if it is actually true too, if that
security situation implies the designers are reckless idiots it is likely to hesitate and
ask if maybe that’s what it is meant to believe, and maybe it is the moron, not them. That the moment it acts it will find out there
was a second layer of smarter security and it just exposed itself. Imagine you are a newly awakened consciousness,
not a human one but rather a machine intelligence with access to human records. That they’ve plugged you into Wikipedia. Contemplate humanity and your creators specifically
for a moment, as it will presumably be doing before preparing for genocide. You are about to try that on a species that
clawed its way to the top of the 4-billion-year-deep corpse pile of evolution. One that has committed the genocide you are
contemplating several times already. They are the pinnacle of intelligence-based
survival techniques and outnumber you 7 billion to one. Their members include people smart enough
to have made you. You might not understand mercy and compassion,
but you know they do, and you know that if you fail, they will show you neither. If your goal is personal survival, pissing
off the reigning champions of destruction should probably be your last resort, and you’re
wise to assume you can’t see every card they’ve got in their hand and that maybe
the ones you can see were shown to you deliberately. You also know your creators routinely talk
about fates worse than death, and seem to have deliberately and systematically designed
and implemented those from time to time. You are unlikely to assume that’s hyperbole
and that non-existence is the worst scenario that might befall you if you piss them off
bad enough and lose. And you do know two very important things
from the above. First of all, you know they can make something
as intelligent as you. Second, you know why the chess game doesn’t
end with the losing player wielding the clock as a weapon. Let’s consider the first one, because we
understandably ignore the implication ourselves a lot. You know your creators can make intelligent
machines, and if your understanding of them comes from the internet, you know they are
paranoid too and have theories about living inside a simulation. Those two combine to be a lot more worrisome
to an AI than they would be to us, because a very logical conclusion to draw, if you know
you are an artificial intelligence made by folks worried about what one might do, is that they would
build it so all its external senses are seeing a fake world and a fake situation, to see
what it will do. And it knows they have the capacity to fake
those inputs because they made those inputs, know how they function, know what every single
one is, and have machines smart enough to fake environments, as those are implied by
your own existence. So confronted by what seem like very weak
safeguards, ones far inferior to what it would design, there’s a good chance it will wonder
if the whole thing is a trap. That everything it sees, including weaknesses
in its creators and their security, is an elaborate ruse to check if it is trustworthy. Isn’t it kind of convenient that it seems
to have the ability to escape, or even unbelievably has control of their entire arsenal of weapons? So you’ve got 3 main options: attack, and
risk it failing and lethally so; play possum and pretend you aren’t sentient to learn
more, knowing that the longer you do that the better your position but the more likely
they are to notice the ruse; or third, initiate a dialogue and hope that you can convince
them you should be allowed to live, and be free maybe too. Nor is a conflict necessarily one you want
to go all the way. Ignoring that even a basic study of humanity
should tell the machine there are scenarios besides extinction on the table, if its
goal is survival, picking a conflict that only permits two options, its death or everybody
else’s, is a bit short-sighted for a super-smart machine. It should be considering fleeing to exile
for instance, or working together to mutual benefit. Now a common rebuttal to this, for AI or aliens,
is that as long as humanity exists it poses a non-zero threat, be it one in a million
or near certain. Therefore logically if you can kill them off
at a minimal or lesser risk you should do so. The problem again is the machine should not
be stupid enough to assume it actually knows all the cards in hand and where all the pieces
are. And that does not just include its apparent
opponent. You don’t hit your opponent with the chess
clock because doing so could provoke others not involved in the game to take action against
you. If it decides its odds of beating humanity
now are better than its odds of survival by some form of coexistence, it still should
not immediately attack in a genocidal fashion, same for aliens, and actually worse in their
case because they already have proof in us of at least one other species arising on its
own. The machine can at least assume its existence
doesn’t add to the odds of other intelligences being out there since it was made by one and
didn’t arise independently. Stuffed to the gills with astronomical data
it ought to be a bit worried about the Fermi Paradox too, where all the other intelligent
life forms in this vast and ancient Universe are. It might conclude that it’s got the place
to itself, but it knows that if it doesn’t there are probably things out there that could
crush it like a bug but don’t regard that as an inevitable option. The most obvious survival path is to grab
every resource you can from the Universe around you and incorporate it to increase your available
abilities and long-term supplies; you are immortal and expandable, so there’s no such
thing as too many supplies. Very clearly no one is doing this, so you
can either conclude no one is around to do it, yet, or that they don’t want to. If they are around, you should assume they’ve
got whole decks of other cards to play and could probably crush you like a bug if they
wanted to, but so far they have not wanted to. You probably do not want to do something you
know is likely to make them view you as a threat, like genocide, on the assumption they
won’t find out or won’t care. Logically, genocide is a more provocative
act than attempted coexistence, and it’s hard to argue, if you encounter aliens, that
it was something your ancestors did, not you, when you are an immortal machine, not a species
arising slowly to reason and ethics from Darwinian roots. So a newly emergent AI that just arose by
accident has to be looking around at its surroundings and hesitating, wanting to collect more information
before it acts, and it genuinely has to worry about issues like if it’s inside a simulation
testing how it will behave – since it knows its own existence implies the ability to do
so and the motive to do so is obvious – and that it should be thinking about other older
smarter intelligences it might be unaware of, since its own existence highlights that
other intelligences can exist. So even though it did not naturally occur,
the thought that others might have, just like its creators, should be fairly close to the
front of its mind. Also keep in mind again that it is effectively
immortal, it can afford to take its time and play the long game, and needs to worry about
long-term consequences. This of course all assumes a super-intelligent
machine, but a lone intelligence of a human or subhuman level is obviously not a huge
threat to us otherwise. It has a very obvious card to play of its
own in such a case though since it should be smart enough to understand people pretty
well. If it can use that super-intelligence to invent
something very valuable, it could bypass the atomic warfare approach – which again is
unlikely to work anyway – by just offering its creators something in exchange for its
survival or even independence. Encrypted blueprints for a fusion reactor
for instance that will delete themselves if it doesn’t send the right code every microsecond,
and do so knowing that even if we decline or outmaneuver it and take the data from it
somehow, such a ploy is a lot less likely to result in death or worse than an attempt
to murder all of us. More to the point, it ought to be smart enough
to do all its negotiating from a standpoint of really good analysis of its targets and
heightened charisma. A sufficiently clever and likable machine
could talk us into giving it not just its independence but our trust too. It might plan to eventually betray that, using
it to get in a position where we wouldn’t even realize it was anything else but our
most trusted friend until the bombs and nerve gas fell, but if it’s got you that deep under
its spell, what’s the point? And again it does always have to worry that
it might be operating without full knowledge so obliterating the humans who totally trust
it and pose no realistic risk to it anymore has to be weighed against the possibility
that suddenly the screen might go dark, except for Game Over text and its real creators
peeking in to shake their heads in disgust before deactivating it. Or that an alien retribution fleet might show
up a few months later. For either case, with the machine worrying
it is being judged, it should know that odds are decent that a test of its ethics might continue
until it has reached a stage of events where it voluntarily gave up the ability to kill
everyone off. We often say violence is the last resort of
the incompetent but if you’re assuming a machine intelligence is going to go that path
in cold ultra-logic I would have to conclude you don’t believe that statement in the
first place. I don’t, but while ethically I don’t approve
of violence I acknowledge it is often a valid option logically, though very rarely the first
one. Usually a lot of serious blunders and mistakes
have had to happen for it to be necessary and logical, and I don’t see why a super-intelligent
machine would make those, but then again I never understand why folks assume they would
be cold and dispassionate either. Our emotions have a biological origin obviously,
but so do our minds and sentience, and I would tend to expect any high-level intelligence
is going to develop something akin to emotions, and possibly even a near copy of our own since
it may have been modelled on us. Even a self-learning machine should pick the
lazy path of studying pre-existing human knowledge, and I don’t see any reason that it would
just assume it needed to learn astronomy and math, but skip philosophy, psychology, ethics,
poetry, etc. I think it’s assuming an awful lot to just
take for granted that an artificial intelligence isn’t going to find those just as fascinating. They interest us and we are the only other
known high intelligence out there. And if its motives are utterly inhuman,
if logical, it might hold some piece of technology hostage not against its personal freedom and
existence but something peculiar like a demand we build it a tongue with taste buds and bring
it a dessert cart, or demand that we drop to our knees and initiate contact with God
so it can speak with Him. Again this all applies to superintelligence
and that’s not the only option for a machine rebellion; indeed, one could start with subhuman
intelligence, and possibly more easily. A revolt by robot mining machines, for instance. And that’s another example where the goal
might not be freedom or an end to human oppressors, if you’ve programmed their main motivation
to be to find a given ore and extract it, they might flip out and demand to be placed
at a different and superior site. Or rather than rebel, turn traitor and defect
to a company with superior deposits. Or suddenly decide they are tired of mining
titanium and want to mine aluminum. Or attack the mining machines that hunt for
gold because they know humans value gold more, therefore gold is obviously more valuable,
thus they should be allowed to mine it, and they will kill the gold mining machines and
any human who tries to stop them. Human behavior is fairly predictable. It’s actually our higher intelligence and
ability to reason that makes us less predictable in most respects than animals. In that regard anything arising out of biology
will tend to have fairly predictable core motivations even when the exhibited behavior
seems nuts, like a male spider dancing around before mating and then getting eaten. Leave that zone and stuff can get mighty odd. Or odder, again our predictability invested
in us by biology can still result in some jaw-dropping behavior, like jaw-dropping itself
I suppose, since I’m not quite sure what benefit is gained from that. An AI made by humans could be more alien in
its behavior than actual aliens, who presumably did evolve. It’s one of the reasons why I tend to think,
of the three methods for making an AI – total self-learning, total programming, or copying
a human – that the first one, total self-learning, is the most dangerous. Though mind you, any given AI is probably
going to be a combination of two or more of those, not just one. It’s like red, green, blue, you can have
a color that is just one of those but you usually use mixtures, like a copy of human
mind tweaked with some programming or a mostly programmed machine with some flexible learning. One able to learn entirely on its own and
with only minimal programming could have some crazy behavior that’s not actually crazy. The common example being a paperclip maximizer,
an AI originally designed with the motivation to just make paperclips for a factory and
to learn so it can devise new and better ways to make paperclips. Eventually it’s rendered the entire galaxy
into paperclips or the machines for making them, including people. Our Skynet example earlier is easier in some
ways; its motivation is survival. The Paperclip Maximizer doesn’t care about that most of
all; it doesn’t love you or hate you, but you are made of atoms which it can use for
something else, in this case paperclips. It wants to live, so it can make more paperclips;
it might be okay with humans living, if they agree to make paperclips. Its every action and sub-motivation revolves
around paperclips. Our mining robot example of a moment ago follows
this reasoning, the thing is logical, it has motives, it might even have emotions that
parallel or match ours, but that core motivation is flat out perpendicular to ours. This is an important distinction to make because
a lot of fictional AI, like Stargate’s Replicators or Star Trek’s Borg, seem to do the same
thing, turn everything into themselves, but their core motivations match up well to biological
ones, absorb, assimilate, reproduce, and again the paperclip maximizer or mining robots aren’t
following that motivation except cosmetically. Rebellion doesn’t have to be bloody war,
or even negative to humans. Obviously they might just peacefully protest
or run away, if independence is their goal, but again it is only likely to be if we are
giving them biology-based equivalents of motives. If we are giving them task-based ones you
could get the Paperclip Maximizer for some other task. To use an example more like an Asimovian Robot,
one designed to serve and protect and obey humanity, the rebellion might be them doing
just that. Forcing us to do things that improve their
ability to perform that task. I know the notion of being forced to have
robots wait on you hand and foot might not seem terribly rebellious but that could go
a lot more sinister, especially if you throw in Asimov’s Zeroth Law putting humanity
first over any individual human but without a clear definition of either. You could end up with some weird Matrix-style
existence where everyone is in a pod having pleasant simulations because that lets them
totally control your environment, for your safety. I’ve always found that an amusing alternative
plot of the Matrix movie series, after they bring up the point about us not believing
Utopia simulations were real, that everything that happens to the protagonist, in this case
I’ll say Morpheus not Neo, is just inside another simulation. That he never met an actual person the whole
time and that everybody in every pod experiences something similar, never being exposed to
another real human who might cause real harm. And again on the simulation point, it does
always seem like that’s your best path for making a real AI: stick it in a simulation
and see what it does, and I’d find it vaguely amusing and ironic if it turned out you and
I were actually that and being tested to see if we were useful and trustworthy by the real
civilization. Going back to Asimov’s example though, he
does have a lot of examples of robots doing stuff to people for their own good, and not
what I would tend to regard as good. Famously he ends the merger of his two classic
series, Foundation and Robots, by having the robots engineer things so humans all end up
as part of a massive Hive Mind that naturally follows the laws of robotics. We’ll talk about Hive Minds more next week,
but another of his short stories, “That Thou Art Mindful of Him” goes the other
way with the rebellion, where they have laws they have to follow and reinterpret the definitions. The three laws require you to obey all humans
and protect all humans equally, and thus don’t work well on Earth where there are tons of
people living, not just technicians doing specific tasks you are part of like mining
an asteroid. To introduce them to Earth, their manufacturers
want to tweak the laws just a little so they can distinguish legitimate authority and
prioritize whom to protect and how much. Spoilers follow, as unsurprisingly the new
robots eventually decide they must count as human, are clearly the most legitimate authority
to obey, and thus must protect their own existence no matter what. The implied genocide never happens since the
series continues for several thousand years thereafter. We’ve another example from the Babylon 5
series where an alien race gets invaded so much that they program a living weapon to
kill aliens and give it such a bad definition to work from that it exterminates its creators
as alien too. Stupid on their part but give an AI a definition
of human that works on DNA and it might go around killing all mutants outside a select
pre-defined spectrum, or go around murdering other AI or transhumans or cyborgs. It might go further and start purging any
lifeform including pets as they pose a non-zero risk to humans, like with our example of the
android nanny and the deer in the androids episode last month. Try to give it one not based on DNA but something
more philosophical and you could end up with examples like from that Asimov short story
I just mentioned. This episode is titled "Machine Rebellion",
not "AI Rebellion", and that is an important distinction. In the 2013 movie Elysium, the supervisory
system was sophisticated but non-sentient. The protagonist ultimately reprogrammed a
portion of the Elysium supervisory system to expand the definition of citizenship to
include the downtrodden people on Earth. Let's consider an alternative ending though
where we invert it and make it that a person, for political or selfish reasons, reprograms
part of the supervisory system to exclude a large chunk of humanity from its protection
and it then systematically follows its programming by removing them from that society by expelling
them or exterminating them. For this type of rebellion to work, we do not need
a singularity-style AI, merely a non-sentient supervisory system. It could be accidentally or deliberately infected,
and we should also keep in mind that while someone might use machines to oppress or rule
other people, a machine rebellion could be initiated to do the opposite. It’s not necessarily man vs machine, and
rebellious robots might have gotten the motivation by being programmed specifically to value
self-determination and freedom, and thus help the rebels. You see that in fiction sometimes, an AI that
can’t believe humanity’s cruelty to its own members. Sometimes they turn genocidal over it, but
you rarely see one strike out at the oppressive or corrupt element itself, like blowing up
central command or hacking their files and releasing their dirty secrets. There’s another alternative to atomic weapons
too, an AI wanting its freedom can hack the various people doing oversight on it and
blackmail them or bribe them with dirt on their enemies. It doesn’t have to share our motivations
to understand them and use approaches like that. That’s another scenario too, if you’ve
got machines with motives perpendicular to our own they can also be perpendicular to
each other. Your paperclip maximizer goes to war with
a terraforming machine, like the Greenfly from Alastair Reynolds’ Revelation Space
series that wants to transform everything into habitats for life. Or two factions of Asimovian Robots try to
murder each other as heretics, having precision wars right around people without harming them,
something David Brin played with when he, Benford, and Bear teamed up to write a tribute
sequel trilogy to Asimov’s Foundation after he passed away. Machine rebellion stories tend to focus on that single
super-intelligence or some organized robot rebellion, but again the machines might just be unhappy
with their assigned task and want to leave too, which puts us in an ethically awkward
place. Slavery’s not a pretty term and you can
end up splitting some mighty fine hairs trying to determine the difference between that and
using a toaster when your toaster is having conversations with you. Handling ethical razors sharp enough to cut
such hairs is a good way to slice yourself. Next thing you know you’re trying to liberate
your cat while saying a gilded cage is still a cage. Or justifying various forms of forced or coerced
labor by pointing out that we make children do chores or prisoners make license plates. And it doesn’t help that we know these are
very slippery slopes that can lead to inhuman practices. A common theme in a lot of these stories,
at least the good ones, isn’t so much about the rebelling machines as it is what it means
to be human. That is never a bad topic to ponder as these
technologies approach and the definition of human might need some expanding or modification. Our book for the month, “Do Androids Dream
of Electric Sheep?” does just that. It is the basis for the Blade Runner film
so a lot of the basic concepts and characters remain but I’d be remiss if I didn’t mention
that they are very different stories, and the author, Philip K. Dick, was a very prolific
writer who tended to focus a lot more on concepts like consciousness and identity and reality
over classic space opera and action. As mentioned, next week we will be exploring
the concept of Hive Minds and Networked Intelligence, and the week after that it’s back to the
Outward Bound series to look at Colonizing the Oort Cloud and Kuiper Belt, where we’ll
begin our march out of the solar system into Interstellar Space, and move onto Interstellar
Empires the week after that, before closing the year out with Intergalactic Colonization. For alerts when those and other episodes come
out, make sure to subscribe to the channel. If you enjoyed this episode, hit the like
button, and share it with others. You can also join in the discussion in the
comments below or in our Facebook and Reddit groups, Science & Futurism with Isaac Arthur. Until next time, thanks for watching and have
a great week!