Braaaaains. Are squishy.
If you don't know about the whole Singularity woo-woo, just walk on; I'm not going to supply links. Go read Pharyngula. Christopher Chabris, co-author with Daniel Simons of the famous inattentional blindness experiment and the book "The Invisible Gorilla" (NYT review here), wrote in the 16 October 2011 New York Times Book Review about some new books on how the brain works. Or might work. Or might not work. (Oddly, the same review is titled "Think Again" in the print Book Review but "Is the Brain Good at What It Does?" in the online version of the same text -- makes for tricky searching.) If these studies, and Chabris's take on them, hold any water, it's bad news for people who think we're thisclose to uploading our brains into immortal silicon-and-stainless bodies.
The... conversation... you'll usually end up in when talking to a Singularist centers on their unshakeable belief that the brain is not just metaphorically similar, in some ways, to a computer, but at its base really is a computer. They swear it's just a matter of time before chips of sufficient processing power arrive, so that we can duplicate enough 0/1 switches while also mapping the brain (which they think is simply 0/1 switches) and transfer the one to the other (footnote: the data from the most precise brain scans today runs to thousands of petabytes per human brain -- and that's not counting the storage and computation needed to transform that data into a map. And then you just have the map, without even getting into the rules, logic, behavior. The data storage is certainly a problem of engineering, but it's still a big problem. And remember, the brain isn't a network; it's a network of networks.). It's a weird, transcendental materialism, but one that relies on faith in something ineffable. Their time frame is usually 20 years out. It's always 20 years out. Coincidentally, I'm beginning to think that the average person thinks they'll live for about 20 more years.
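To put that footnote's scale in perspective, here's a back-of-envelope sketch. The neuron and synapses-per-neuron counts are rough, commonly cited figures; the bytes-per-synapse number is purely my assumption, and none of this comes from Chabris or the books he reviews:

    # Back-of-envelope estimate of storing just a static wiring diagram.
    # Counts are rough literature figures; bytes_per_synapse is a guess.
    NEURONS = 86e9             # ~86 billion neurons, the usual rough figure
    SYNAPSES_PER_NEURON = 7e3  # order-of-magnitude estimate
    BYTES_PER_SYNAPSE = 64     # assumed: two IDs, weight, type, position

    synapses = NEURONS * SYNAPSES_PER_NEURON
    raw_bytes = synapses * BYTES_PER_SYNAPSE
    print(f"synapses: {synapses:.1e}")                     # 6.0e+14
    print(f"raw map:  {raw_bytes / 1e15:.0f} petabytes")   # 39 petabytes

And that's tens of petabytes for the frozen map alone -- the raw scan data you'd distill it from runs orders of magnitude larger, which is where the "thousands of petabytes" comes in.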
There are other logical fallacies at the heart of Singularism, such as projecting out Moore's Law (see the IEEE on how that's more a rough prediction than a law, and one that's not infinitely extensible; uncertain breakthroughs are needed even now to "add at least a half-dozen years" to the viability of Moore's prediction), but I want to be clear: skepticism about the Singularist braincase-transfer brigade does not rely on, nor imply, spiritualism. As for how this movement misunderstands the biology of the brain, PZ Myers (see the Pharyngula link above) is on top of that. Science is needed here.
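The extrapolation fallacy fits on the back of a napkin. The doubling rule below is the textbook "every two years" formulation, and the 2011 baseline transistor count is an illustrative round number, not a sourced datapoint:

    # Naive Moore's Law extrapolation: transistor counts double every
    # two years, forever. Baseline is illustrative, not sourced.
    base_year, base_transistors = 2011, 2e9

    def moores_law(year):
        """Transistor count if the doubling continued without limit."""
        return base_transistors * 2 ** ((year - base_year) / 2)

    for year in (2011, 2031, 2051):
        print(year, f"{moores_law(year):.1e}")
    # 2011 2.0e+09
    # 2031 2.0e+12   <- a thousandfold in twenty years
    # 2051 2.1e+15   <- the math is trivial; the physics isn't

The curve is easy to draw; the IEEE's point is that nothing guarantees the physics and economics keep supplying the doublings the curve assumes.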
And speaking of science, back to one of the books Chabris reviewed: "Brain Bugs: How the Brain's Flaws Shape Our Lives" by Dean Buonomano. The book goes into detail on a number of "bugs" in how our brains tend to work; of particular interest is that a large part of what goes on up there seems to be due to an "associative architecture", which files away much of how we perceive and remember facts about the world in a relational -- and sloppy -- way.
It's not a matter of the limitations of language (e.g., the word "bank" is burdened with manifold contexts and meanings), nor is it a question of having to structure a relational database, which any FileMaker user could do. One experiment showed that subjects exposed to, say, polite words behaved more politely afterward, while an Ig Nobel-winning study (http://www.improbable.com/ig/winners/#ig2011) by Mirjam Tuk and Luk Warlop found that subjects who really, really needed to pee had more self-control than subjects who'd gone before they started the test. This kind of leaky, sloppy associative mechanism could go toward explaining why advertising and appeals to emotion work (sometimes -- of course we can train ourselves out of this), and it's also a huge stumbling block, as well as a moral question, for the Singularists.
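If you want a cartoon of what "leaky" means here, a toy spreading-activation model does the job. To be clear, this is my own illustrative sketch -- invented graph, invented weights -- not anything from Buonomano's book:

    # Toy spreading-activation model of a leaky associative memory.
    # The graph, weights, and depth are invented for illustration.
    associations = {
        "polite":  {"patient": 0.6, "wait": 0.5},
        "patient": {"wait": 0.4},
        "bank":    {"money": 0.7, "river": 0.3},  # ambiguity leaks both ways
    }

    def prime(word, strength=1.0, depth=2, activation=None):
        """Activating one node bleeds activation into its neighbors."""
        if activation is None:
            activation = {}
        activation[word] = activation.get(word, 0.0) + strength
        if depth > 0:
            for neighbor, weight in associations.get(word, {}).items():
                prime(neighbor, strength * weight, depth - 1, activation)
        return activation

    print(prime("polite"))
    # {'polite': 1.0, 'patient': 0.6, 'wait': 0.74}
    # Reading "polite" has quietly made "wait" more available --
    # the priming effect, in caricature.

The leak isn't a bug in the code; it's the whole mechanism. Anything linked to what you just experienced gets a little more available, whether or not that's useful.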
If we were able to map the brain, map the connections in each network, map the connections between each one-to-many network, we would still face the qualitatively different question of how to create the rules governing those connections. How would we deal with leaky associations? I suppose an engineer could concoct a detailed set of rules for each language -- a descriptive task -- and build hard-linked "leakages" into the system. But these leaks are unpredictably dynamic, affected not only by what we're exposed to every second of our existence but by second- and higher-order associations; a traumatic exposure to cabbage might link associatively to a grandmother's cooking, then to a grandfather's smoking, then to ash found on a carpet, and who knows what else. Or it might not, depending on other situational contexts or, as we saw with the politeness experiment, what else happened that day.
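And here's a cartoon of why the engineer's hard-linked table goes stale: let the weights drift a little every time two things co-occur. The update rule is a crude Hebbian caricature I'm assuming for illustration, not a claim about how neurons actually learn:

    # Why a hard-coded association table goes stale: every co-occurrence
    # nudges the weights, so the map drifts out from under the mapper.
    from collections import defaultdict

    weights = defaultdict(float)
    weights[("cabbage", "grandmother")] = 0.8  # the snapshot the engineer took

    def experience(a, b, rate=0.1):
        """Co-experiencing two things strengthens their link a little."""
        weights[(a, b)] += rate * (1.0 - weights[(a, b)])

    # One afternoon's worth of second- and higher-order associations:
    for pair in [("grandmother", "smoking"), ("smoking", "ash"),
                 ("cabbage", "grandmother")]:
        experience(*pair)

    print(dict(weights))
    # The pre-afternoon snapshot is already wrong, and every waking hour
    # pushes the weights further away from it.

There's no principled place for the rulebook to stop, because the thing being described rewrites itself as fast as you can describe it.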
The point is, the programmer would have to make choices not only about the initial conditions of the artificial brain structure he's planning on beaming himself into, but about how the process will unfold. Would you program out depressive associations, or a fear reaction to heights?
This speaks to the larger conundrum facing the whole cyborg-brain project. More and more research is showing that how we operate, who we are, is largely based on bugs in the system. (Granted, this should come as no surprise to anyone who has dealt with less-than-perfect people -- that is, people -- for more than five minutes.) I won't get into the moral quagmire that's still being plumbed with GMOs, designer-gene IVF, etc., but will ask instead: would you really want to deal with someone, or be someone, who's been "fixed"? It seems to me it'd be like being stuck with someone who totally doesn't get your sense of humor. More Human than Human? Not fun at parties.
(Of course, Derek Parfit would say that it doesn't matter, as we're never who we were, moment by moment. But that's a whole different book I haven't read yet.)