If every neuron in a human was accurately simulated in a computer, would it result in human consciousness?
A human is not a mind. A mind is not a brain. A brain is not a machine. These three facts are obvious and should be beyond contestation. This is why mechanists are always forced toward arguing in the alternative that a machine brain which does all our thinking for us will be superior to our given equipment, specifically in that, ultimately, it will give us power and pleasure immeasurably beyond what we can attain as humans.
In fact, they must, and do, go further. Not only are such power and pleasure the key to a satisfaction that being merely human can never deliver; they are the only thing that will save us. Our given humanity is not just ugly and confining, but unsustainable. It is a ticking time bomb, a freak accident, an untenable transitional phase in the evolution of consciousness. Like it says in the comic books, the choice, which must be made right now, is between annihilation and superman. Kal-El must be loaded into the spaceship and fired far, far away from Krypton, which a "natural cataclysm" is about to destroy utterly and forever.
Machiavelli, actually, makes a somewhat similar point. Time, he says (speaking more as the last ancient Roman than the first modern Italian), is cyclical; the ultimate appeal against oppressive government is to the nature which periodically and eventually plows civilizations under, leaving not even a trace in the memory of those who live on. He also notes, however, that a civilization unable to develop the military science necessary to its protection and defense (e.g. the Etruscans) will be similarly effaced both from the earth and from memory. Taken together, the thrust of these observations is that the competitive development of military science continuously forces us away from the ground of natural science, i.e. philosophy, and therefore away from the cyclical time of the ancients and into linear, progressive, cumulative time. In other words, our hatred of oblivion whips our intelligence and industriousness into a sort of war against nature, one in which our existence and our identity come to depend more on our technological extensions of ourselves than on our given human being.
It's apparent right now that the dilemma inherent in this development is coming to a head. Our mechanical prostheses of body and mind have already swamped the world, to the degree that all begin to grasp that only our digital devices, and not any human individual, group, or entity, can hold sway over the entire world. These devices (including programs and algorithms) have rapidly outstripped human capabilities at the most primal level: they possess invisibility and ubiquity, they can fly almost unimpeded through the air, and so on. It is only natural that we develop a poignant yet infuriating envy toward these machines. The obvious or instinctual salve for such envy is to control the machines, to reassert ourselves as their indisputable masters. But it appears few actually have any hope of achieving this outcome. The fallback position, psychologically, is the one our mechanists say is the only tenable position: we must become one with the machines; we must be the ultimate machines, even more powerful, more total, more godlike than the pseudo-divine entities we have already created.
From a normie perspective this determination is indisputably at odds with Christianity (not to mention other monotheistic religions). But from the strengthening perspective of not-so-normal people, it is actually unclear; that is, they harbor an increasingly open desire to fuse Christianity and eschatological mechanism into a sort of super-religion for supermen. Normie or no, one confronts the inescapable conclusion that the question of technological development today is inseparable, down to its fundamental components, from the reality of spiritual war.
-James Poulos, executive editor of The American Mind
A few years ago, Elon Musk made a comment to the effect that there is only a one-in-a-trillion chance that we are not living in a computer simulation. His reasoning was that, in the future, AI will have gotten so advanced that it will spin out representations of every possible reality ever. Furthermore, Musk says that we had better hope he's correct, because otherwise civilization will have broken down to the point that "Moore's Law"—the observation that computer chip processing power grows exponentially, roughly doubling every couple of years—no longer holds.
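As an aside on the quantitative claim just glossed, here is a minimal sketch (added for illustration, not anything from Musk or the original piece) of what that steady doubling amounts to in plain numbers, assuming the commonly cited doubling period of about two years:

# Minimal sketch of the growth "Moore's Law" describes: capability
# doubling roughly every two years. The two-year period is an assumption,
# simply the figure most often cited.
def moores_law_factor(years: float, doubling_period_years: float = 2.0) -> float:
    """Return the multiplicative growth after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

for years in (10, 20, 40):
    print(f"after {years} years: ~{moores_law_factor(years):,.0f}x")
# after 10 years: ~32x; after 20 years: ~1,024x; after 40 years: ~1,048,576x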
Well, Elon Musk surely gets the best drugs, so who's going to gainsay his insights? The problem with most talk about living in a computer simulation is that it's unfalsifiable: how could you prove or disprove it either way? The locus classicus for most people on this issue is The Matrix, which proffered the possibility of waking up from the simulation—a hopelessly optimistic, Cartesian perspective for which there is no room in Musk's dark vision.
Rather than worrying about whether or not an advanced computer can create consciousness, we are better off trying to develop consciousness in the soft machines we already inhabit. "Now beacon, now sea," wrote Samuel Beckett about "unfathomable mind," referencing the dual nature of consciousness as both that which is and that which perceives. Trippy riddles are amusing, but what happens when we die?
-Seth Barron, managing editor of The American Mind
I am inclined to answer “no” to this particular question, as phrased: the fact of having existed continuously through time, not only remembering experiences but actually having lived them and bearing the scars of them, makes something meaningfully different from simply having been called into being fully formed, with memories that refer to no actual event in time. The old “ship of Theseus” problem—if I replace the hull, then the keel, then the rudder of a ship, is it still the same ship?—is no conundrum if we look at the world the way Aristotle did: what it means to be “something” (ti) is to have both form and matter, to be an abstraction (“man”) molded into particular stuff across a continuous stretch of time. What makes Theseus’ ship the same ship even as all the parts are replaced one by one is this unfolding across time: the fact that it has both a specific essence and a recorded history of embodying that essence at particular times and places.
So if you simulated every one of my neurons right now in a computer, it would not be me—I am not certain it wouldn’t quickly go insane. It would be unmoored from the memories communicated to it by my neurons, and since the question does not indicate that it would have a human body, those memories would not relate in any discernible way to its present reality. They would also relate in a highly uncertain and destabilizing way to whatever “experiences” or “input” the machine then proceeded to have. It would be an abomination, an aberration, and potentially an act of cruelty toward the machine—since, if it were not a human consciousness, it might still in some horrifying sense be conscious.
More perplexing to me is what would happen if we could synthesize out of raw materials a sperm and egg that might develop then into a purely “synthetic” human with a real history of embodiment. That would be a pretty problem indeed, and one whose implications we had better get to pondering right quick.
-Spencer Klavan, associate editor of The American Mind