Transhumanism

Posted on 01 March 2026 in the Essays section ❖ 1909 words

Transhumanism is broadly understood as a philosophy of life, or a movement, interested in using various types of technology to evolve humanity past its current form, to help people become something more than human—posthumans, as a transhumanist would say. Transhumanists wish to overcome biological limits, improving a person’s health, life expectancy, physical and mental capability, etc. As I understood it before doing any research into the topic, transhumanism was simply about technologically enhancing the human body, and, to the best of my knowledge, that is an important part of transhumanism—though, as I’ve come to realise, not the only one.

Transhumanism piqued my interest because I could not, and still cannot, conclusively decide what kinds of technological enhancements I would be okay with, and what kinds I would deem unethical. On the surface, transhumanism sounds great—using cool technology to improve lives? That’s great, right? Well, as I’ve found, there are many issues with transhumanism and its devout followers, and I still cannot figure out where the line should be drawn on the ethics of technological enhancement (though now I have more of an idea).

I was surprised to learn that transhumanism is not as simple as just a desire to augment the body. The transhumanist movement, as a philosophy, advocates for continuous improvement, a perpetual striving for progress, and the overcoming of difficulty. Transhumanists are not interested in reaching some single point of perfection and idly remaining there—they want to continually improve humanity’s circumstances. What also struck me as interesting was that technology (that transhumanists would see used for improvement) is not limited to the most common understanding of the term—it involves the obvious things such as electronics and medicine, but also things that are less obviously “technology,” like political systems, social arrangements, and other similar inventions.

I used to have a vague idea that people perceived transhumanism and transhumanists negatively, or at least warily. This struck me as bizarre, since improving lives with technology, which is what I thought transhumanism to (only) be, seemed rather obviously good. However, after reading some transhumanist works, I find myself sharing this wariness that first so surprised me. The answers that the Transhumanist FAQ, a document presumably written by supporters of transhumanism, provides to some common criticisms of transhumanism are concerning at best, and genuinely appalling at worst. I am not certain that I did not misread their views, so I’d encourage you to take a look at the document yourself, but I myself was rather disgusted by a large portion of what I read.

Though this, obviously, doesn’t necessarily imply anything about transhumanism as a whole, I do think it is interesting to note that the person who coined the term, Julian Huxley, was a eugenicist. Though contemporary transhumanists usually deny any support of eugenics, and I do not mean to suggest that they are lying, I did find the general “vibe” around transhumanist messaging to be… off, so to speak. Of course, the description of transhumanism provided at the beginning of this essay is largely benign, and I wouldn’t want to write off the philosophy as a whole, but now I definitely understand the attitudes towards transhumanism I saw and see, and I will certainly approach any works by self-proclaimed transhumanists with caution and distance.

Possibly my biggest concern with transhumanism is the effective lack of choice that it leaves individuals with regarding whether or not they wish to enhance themselves with technology. If a certain enhancement became ubiquitous, it would become almost impossibly difficult to live a good life without said enhancement. Imagine if most people got a chip that enhances their intelligence, but someone didn’t want to do so, for whatever reason. Their life would be miserable; they would have immense trouble applying to academic institutions or jobs—because who would hire or admit someone who lacks a major mental enhancement which almost everyone else has—and they would also, simply speaking, likely be quite unhappy, living surrounded by people who are all technologically superior to them. I believe everyone should have the right to choose whether or not they wish to apply some particular piece of technology to themselves, and I believe that a choice between reluctant conformity and lifelong misery is no choice at all.

I am also quite concerned about the possible, and frankly likely, amplification of social divides that transhumanist enhancement could cause. For the sake of an example, let’s imagine scientists develop the aforementioned intelligence-improving chip, though this applies to most other technologies as well. In a perfect world, the first people to gain access to this technology would be the ones most in need of it. Unfortunately, ours is not a perfect world, and the first people to gain access to the chip would be the wealthy, the powerful, or both. The chip would then allow them to solidify their position, and make it even more difficult for people of lower social class to move up the social ladder. I don’t find it unlikely that this could eventually lead to a society where a class of physically superior (by virtue of technological improvements) people rules over what is little more than a class of slaves, forbidden from accessing any enhancements.

Connected to this is the idea of equality. Our sense of every person being equal is drawn, at least in part, from the notion that we are all, in a meaningful sense, humans. There is something that we call our humanity, and it is by realising that (almost) everyone shares that humanity that we (hopefully) conclude that we all deserve equality. So, if transhumanists were to rid themselves of that humanity, by becoming posthumans, they might deem themselves better, “more equal,” than the un-enhanced humans. And from there, there is a slippery slope leading to extreme forms of discrimination, and who knows what else.

Shifting the focus from societal dangers posed by transhumanism to those on a more personal scale, we have to consider the ultimate unpredictability of enhancements. What makes us human is an extremely complex and at times seemingly random combination of various factors. This means that it is effectively impossible to predict what a certain enhancement or modification would do to us, how it would affect our psyche, our emotions, our sense of purpose, or even more down-to-earth things, like our physical health. Even when an enhancement is seemingly entirely benign and beneficial, it might, potentially, lead to disastrous consequences, which seems to imply that any transhumanist technology is extremely dangerous—and I’m inclined to agree with that conclusion.

When I considered my attitude towards specific technological enhancements, it became clear that I was most concerned about any that had to do with the mind. This line of thought led me to a wonderful article by David DeGrazia, in which he examines the relation of anti-depressants to the authenticity of the self; I am not certain as to the extent to which I agree with the thoughts laid out there, but they have certainly helped me clarify my own.

If we assume the view that our self is something static, something given to us, and something that we discover, then any technological modification of the mind poses a clear threat to the sanctity and authenticity of that self, and could even be called a violation of it. However, I am inclined to believe that the self is instead something that we have a relatively active role in creating. Some parts are, of course, affected by various external factors, but the formation of the self is an active process of creation in which we partake. Under that view, mental modifications are nothing more than a technological aid in the self-creation process.

However, one might argue that such means of modifying the self are not entirely natural, and therefore not entirely desirable. Though I most certainly have nothing against people using contemporary forms of modification of the mind, such as anti-depressants—on the contrary, I believe they are quite beneficial, and that anyone who needs them should get them—I am personally rather reluctant to apply such technologies to myself, and I find myself wary of the more invasive forms of technological mind modification that may one day become possible.

I am less worried about modifications to the body, such as prosthetics, but I am nonetheless worried. I am deeply supportive of technologies meant to help people with disabilities, whether from birth or from injuries, but going a step further and removing any threat of injury and death, or all of our body’s needs, seems much more dangerous and perhaps even undesirable. It is difficult for me to say why, but I do, intuitively, believe that the ever-looming threat of death and the endless needs of our body are in some way important and contribute to making our lives meaningful.

In general, I am open to all technologies that would heal or help circumvent various illnesses, disabilities, or similar afflictions, but am reluctant to accept anything more. Of course, finding the line between the two is extremely difficult and likely even impossible, but this is the best distinction I’ve been able to come up with.

I can’t put my finger on why, but, emotionally and intuitively, it seems to me that making ourselves no longer human is wrong. There is something beautiful about the fact that we all partake in the mysterious thing that is consciousness and humanity, and messing with that is not something I, and surely many other people, want. I am reminded of the experience machine thought experiment, in which, to share my potentially limited recollection, one is asked whether one would like to be hooked up to a machine that perpetually generates pleasurable, but fake, scenarios or sensations for one’s mind to experience. Many people, myself included, would refuse this offer, which seems to imply that there indeed is something more to life than just maximising pleasure and minimising suffering.

Ultimately, which enhancements one would be okay with is a matter of personal choice; far be it from me to make asinine claims about someone’s choice to implant a magnet in their hand being an immoral violation of their humanity. This question seems to come down to the notoriously difficult and very subjective question of life’s meaning. Everyone must decide for themselves where the line is for them, and what they themselves would want to do to their body and mind.

Unfortunately, full freedom of choice is not an ethical option here. As I’ve outlined above, when a large group of people decides to adopt some enhancement, all others will be strongly coerced, if not forced, to use the same enhancement or be left behind; social divides will expand; and, likely, a plethora of other terrible consequences will follow. What, then, should be the line?

This investigation left me with more questions than answers. In light of the apparent impossibility of letting everyone choose for themselves, what choice should be made for the general public? Who would make such a choice? Is it even necessary? Maybe transhumanism is just good? I do not have the answer to any of those questions—all I have is more questions. I will certainly continue pondering this highly interesting topic, and I would encourage any and all readers to do the same.