A great deal of philosophical and metaphysical thought is devoted to the topic of mind uploading. We are moving into an age in which the emulation of human brains in software will be possible, and strong artificial intelligence will clearly result from that work, even if it is not first achieved through other means.
There is considerable overlap between supporters of longevity science and supporters of work on strong AI. A large contingent views mind uploading - making a copy of their mind and then running it in software - as a perfectly valid approach to achieving radical life extension. Look at the 2045 Initiative, for example, as a determined outgrowth of this community. This appears fine if you believe that a copy of you is you, but that is not the case. A copy is a copy, its own entity. There are also other rather important existential issues inherent in existing as software rather than hardware: are you a continuous being, or are you just a sequence of disconnected, momentary beings, each destroyed an instant after its creation? That would be a shadow of life and an ongoing atrocity of continual murder, not actual life.
So the details of implementation matter. Replace your neurons as they die, gradually, with long-lasting machinery that serves the same purpose in hardware, and you are still you. Nothing is different as you transition continuously from flesh to machine. But to copy the brain and throw it away, replacing it instantly with that same end result, is death. So far as I can see, there is no near-future technology of gradual machine replacement that is likely to provide radical life extension on the same timeframe as work in rejuvenation medicine. Artificial neurons suitable for gradual replacement are a long way off in comparison to the implementation of the SENS vision for the reversal of human aging.
In any case, here is a little philosophical reading on mind uploading, with links to much more in the way of thought on the subject. It might not be terribly relevant to our future, but that doesn't stop it from being interesting:
A couple of years ago I wrote a series of posts about Nicholas Agar's book Humanity's End: Why we should reject radical enhancement. The book critiques the arguments of four pro-enhancement writers. One of the more interesting aspects of this critique was Agar's treatment of mind-uploading. Many transhumanists are enamoured with the notion of mind-uploading, but Agar argued that mind-uploading would be irrational due to the non-zero risk that it would lead to your death. The argument for this was called Searle's Wager, as it relied on ideas drawn from the work of John Searle.
This argument has been discussed online in the intervening years. But it has recently been drawn to my attention that Agar and Neil Levy debated the argument in the pages of the journal AI and Society back in 2011-12. Over the next few posts, I want to cover that debate. I start by looking at Neil Levy's critique of Agar's Searlian Wager argument.
The major thrust of this critique is that Searle's Wager, like the Pascalian Wager upon which it is based, fails to present a serious case against the rationality of mind-uploading. This is not because mind-uploading would in fact be a rational thing to do - Levy remains agnostic on that issue - but because the principle of rational choice Agar uses to guide his argument fails to be properly action-guiding. In addition, Agar ignores considerations that affect the strength of his argument and treats certain others inconsistently.