The Intrinsic, Psychopathic Essence of Software Engineering
Chapter 1: The Nature of Creation
It’s hard to find a software engineer who hasn’t written code simply because they could. If you think you haven’t done this, give it time—you likely will. As I reflect on the evolution of AI, the rise of generative language models, the cutting-edge robotics from Boston Dynamics, and the breakthroughs in quantum computing, I can’t shake the feeling that the “omnipotence” I experienced when I crafted my first IF statement in C goes beyond a mere innocent desire to create.
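For anyone who hasn't felt that spark yet, it is almost embarrassingly small. A first IF statement in C looks something like this (a hypothetical reconstruction, not my actual first program):

```c
#include <stdio.h>

int main(void) {
    int answer = 42;

    /* The machine obeys a condition I wrote: a tiny taste of omnipotence. */
    if (answer == 42) {
        printf("I made the computer decide something.\n");
    }
    return 0;
}
```

Trivial as it is, the machine is now making a decision because you told it how to, and that feeling scales.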
Let’s take a moment to consider this notion of creative impulse. Why do humans feel the need to create? What drives us to produce something entirely new? Is this drive innocent, or does it stem from a deeper, perhaps darker, instinct? Do we secretly aspire to be deities, and what might that imply for the field of software engineering? Recent advancements in technology offer insights into these questions. So let’s begin with a crucial inquiry.
As a software engineer, do you assess the societal impact of your code before you begin, or is your primary goal to demonstrate a concept through your work?
I suspect that the chance to create a proof of concept will often take precedence over any other considerations. However, if you’ve found yourself in a scenario where you conducted a human impact analysis beforehand, I invite you to share your insights in the comments—many readers, including myself, would appreciate your perspective.
We code because we can...
This isn’t just a cliché; it’s a reality. Every piece of technology we use today, from microwaves to the latest gadgets, originated from an “I did it because I could” mindset. The innate desire to experiment—like writing code to track and translate license plates of speeding cars—mirrors the curiosity that led to earlier innovations, such as OCR, and now propels us toward advancements like ChatGPT.
“What if I tried this, or that?” is a question every software engineer encounters repeatedly throughout their day. Often, these inquiries are contextual, such as pondering whether to use a ternary operator instead of a switch statement. Yet, many of us progress to more existential questions: How much can I create using code? This leads to a series of attempts and failures, but similar to other scientific disciplines, computer science reveals a universal truth: the more you try, the greater your chances of success.
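To make those contextual questions concrete, here is a minimal C sketch (hypothetical names, not from any real codebase) of the kind of micro-decision I mean: the same value mapping written once with a switch statement and once with the ternary operator.

```c
#include <stdio.h>

/* One way: map a score to a label with a switch statement. */
const char *grade_switch(int score) {
    switch (score / 10) {
        case 10:
        case 9:  return "excellent";
        case 8:
        case 7:  return "good";
        default: return "needs work";
    }
}

/* The other way: the same mapping as chained ternary operators. */
const char *grade_ternary(int score) {
    return score >= 90 ? "excellent"
         : score >= 70 ? "good"
         :               "needs work";
}

int main(void) {
    printf("%s\n", grade_switch(85));   /* prints "good" */
    printf("%s\n", grade_ternary(85));  /* prints "good" */
    return 0;
}
```

Neither form is universally better; the point is that we weigh such alternatives dozens of times a day, long before the bigger question of how much we can create ever surfaces.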
I haven’t encountered a software engineering challenge I couldn’t eventually resolve. This isn’t arrogance; it’s simply a matter of probability. And I’m confident I’m not alone in this regard. Compared to many other engineers I've met, I consider myself merely average. I don’t believe I possess extraordinary talents. What I do have, however, is tenacity. As software engineers, we understand that as long as the necessary hardware is available, whatever we envision can and will become reality. And when that hardware doesn’t exist, we either wait for it to be developed or create it ourselves.
But what does this mean for humanity?
This is where things get unsettling. Notably, I never posed the question, “But should I?” in the previous section. As with other technological advancements, most of this experimentation happens within a so-called sandbox: a personal computer, a private GitHub repository, or even a lab. Code is written, evolves over time, and ultimately demonstrates a concept. Most often, these proofs of concept are designed to validate, not to refute.
This sandbox environment resembles the garage of Steve Jobs and Steve Wozniak, or someone tinkering away in their shed on a theoretical flux capacitor. It’s a presumed “safe space” with no immediate ramifications for humanity—until it works. That’s when the creator typically contemplates the potential impact of their invention. Yet, this inner voice of caution is often drowned out by the creator’s excitement, leading to a cycle where the ethical implications of software engineering are pushed aside in favor of innovation.
And what creator wishes to destroy their own masterpiece?
Listening to Geoffrey Hinton, a pioneer in artificial intelligence, and observing the developments by OpenAI, it becomes evident that the approach to AI has largely been “code first, consequences later.” This brings me to the concept of psychopathy.
Psychopathy, you ask? Yes, really.
I recognize how this may sound—perhaps accusatory or at least unflattering. To clarify, I’m not labeling software engineers as psychopaths or sociopaths; rather, I’m examining the psychopathic nature inherent in software engineering itself. In our modern world, it represents one of the most formidable skills available, finding applications across various industries—from cleaning floors to launching spacecraft.
According to Wikipedia, psychopathy affects about 1% of the global population and is characterized by persistent antisocial behavior, a lack of empathy and remorse, and traits that are bold, disinhibited, and egotistical. One percent of roughly 8 billion people is no insignificant number: approximately 80 million, about 12 times the number of COVID-19 related deaths since 2020.
In a previous article discussing PyScript, I noted that, according to Peter Wang, there were around 25 million software developers worldwide in 2022. While I’m not suggesting a direct correlation between the two figures, the lack of remorse displayed by many AI founders regarding the societal consequences of their creations is concerning.
What often emerges is a picture of relaxed, excited innovators showcasing their creations. When questions about societal repercussions arise, the responses tend to be ill-prepared or dismissive, deferring to lawmakers to address those concerns. This attitude reflects a tendency to push boundaries without weighing the potential fallout, and it does little to inspire confidence that these innovators will steer clear of catastrophic outcomes.
It’s important to clarify that I’m not targeting specific individuals; rather, I’m highlighting the perilous path of software engineering, where the desire for creation frequently overshadows ethical considerations.
In truth, we didn’t develop AI out of necessity. We required fire, the wheel, and other essential tools. However, artificial intelligence emerged from our desire to create, driven by an insatiable need to replicate ourselves at any cost—even at the expense of our well-being. This should alarm each of us.
Moving forward, perhaps the mantra of software engineering ought to be: “Just because you can, doesn’t mean you should.”
Chapter 2: The Consequences of Creation
[Embedded video: “We Need to Talk About Kevin”, clip 1, on the implications of creation and its effects on the human psyche.]
[Embedded video: “We Need to Talk About Kevin”, clip 2, on the ethical considerations surrounding what we create.]