Private Mode

User accounts are now required for access.

What’s Going On?

A few years ago, I posted some of my thoughts on machine learning—or artificial intelligence, as some problematically call it.

“Good luck,” I offered. “And don’t fuck it up…”

But those were not my words. I lifted them from RuPaul’s Drag Race. And that’s perfect—absolutely perfect—because what I’m doing with this blog is trying to prevent the words I’ve written from being harvested into AI training data.

It’s a bit hypocritical, but then again, hypocrisy runs right through some of my family—along with fragile feelings wrapped in rage, cruelty, sexism, misogyny, incestual rape…

…Boys will be boys, as the saying goes.

Or at least that’s how it was all either explained, excused, or endorsed. It’s difficult to tell the difference—must be a generational thing.

Now What?

Visitors to this blog will now be required to sign in with an account before reading any posts. Anyone who is not signed in will only see this post and a few other public pages.

If you would like to continue reading this blog, please create an account to do so.
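
For anyone curious about the mechanics, the gate itself is simple. Below is a minimal sketch of the idea, assuming a small Flask-style app; the route names, public-page list, and session check are hypothetical and not this blog’s actual implementation.

```python
# Hypothetical sketch: every request passes through a gate that checks for a
# signed-in session. Anyone who isn't authenticated sees only a short list of
# public pages (like this post) and is redirected to the sign-in page otherwise.
from flask import Flask, redirect, request, session

app = Flask(__name__)
app.secret_key = "replace-me"  # placeholder; a real app needs a proper secret

# Hypothetical whitelist of pages visible without an account
PUBLIC_PATHS = {"/", "/private-mode", "/sign-in", "/create-account"}

@app.before_request
def require_account():
    if request.path in PUBLIC_PATHS:
        return None  # public pages stay open to everyone
    if not session.get("user_id"):
        return redirect("/sign-in")  # everything else requires signing in
```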

So… Now What?

This is not a paywall. This is not a forced subscription funnel for ad-free content. This is me locking down content so it’s not available for bots to scrape and sell.

Yes—it might be too late in some contexts. Anything that’s ever been put out there is out there, and it’s difficult to collect it all back, but at least the gate is closed now.

My audience is small, and I realize this policy change might alienate some, so I’d like to extend my sincere gratitude to all who have ever stopped by to read what I’ve written or look at the pictures I’ve taken. Whether or not you register, thank you for visiting.

Please create an account to continue reading…

Artificial Intelligence

Perfection’s genuine problem.

From time to time I’ll dust off a draft and see if it makes any more sense than it did when I stopped working on it. This is one of those times and one of those drafts—but I don’t know if it makes any more sense.

I’ve always had mixed feelings when it comes to mistakes. I’ve felt their effects as they travel in time, and I know some mistakes are not fully experienced by those who made them. But I also know I’ve championed mistakes for the role they play in progress. To me, a retreat from making mistakes is a retreat from making discoveries, and as I now understand, from making disasters as well.

What I’ve come to appreciate is the difference between making a new mistake and making the same mistake. And if you’re thinking it’s a little late for me to be realizing there actually is a difference between discovery and disaster, perhaps—yes. But consider a world without the effects of either: I don’t think it’s possible to have one without a little of the other, and I’d have a little trouble trusting a world that did, though I would certainly tend toward a world with more smart mistakes than stupid ones.

There is incredible potential awaiting any conceptual advancement, be it technological or otherwise. But when it comes to taking the next steps into that potential, my advice and guidance remain the same, if a bit unoriginal. I refer to the immortal words of RuPaul: Good luck—and don’t fuck it up.

Back in 2017 I was enrolled in an energy systems engineering program at Toronto’s Centennial College. I completed the first year, but I was unable to complete the program. My reasons for not returning have evolved over time—the simplest explanation I can offer in hindsight is that it came down to a collision of money and politics. In the years since, I’ve followed up in my own time on some of the topics covered: electromagnetism, trigonometry, structural engineering, environmental chemistry, robotics, and global citizenship—just to name a few. It was a busy first year, and the diversity of topics may have contributed to my uncertainty about how best to continue my studies.

In the first lecture for environmental chemistry, the professor postulated to the class that the industrialized world—for all its advancements—was still in the steam age. Even with the incredible energy output potential of nuclear fission at the disposal of the most technologically advanced nations, this energy was still only being used to produce steam in most power applications. It’s still the same hamster running in the same wheel, the only difference being the hamster’s food pellets are radioactive material instead of compressed hydrocarbons.

It was in my global citizenship course that I learned the wheel and hamster are caged in neoliberalism, a socioeconomic lens which views the natural world as resources to be exploited and its populations as either markets for those resources or as more resources to be exploited. It was a course which challenged those who participated to see beyond their own experiences and attempt to reconcile—or even acknowledge—the differing effects their experiences have on other people. It was far more philosophy than I expected in a technical program, but in an increasingly interconnected cultural and technological world, I don’t see how the program would have been complete without it.

Part of my final grade for environmental chemistry was based on a short—no more than a page—piece of writing which had to tie the course material into the concepts presented in global citizenship. My initial draft ran over the requested length, so if the below reads as if it’s been chopped up—correct. It was an unfortunate, time-is-running-out hack-and-slash job. I had made an embarrassing mistake on the last lab for the class, so I was looking to bump up my marks anywhere I could.


The Pursuit of Moral Chemistry

The scientific and the moral traditionally occupy separate territories of reason. What is moral surrounds the subjective, where interpretation and action can be deeply personal, are often tied to strong emotions and beliefs, and are sometimes unique across individuals and cultures: morals are about right and wrong. What is scientific surrounds the objective, where interpretation and action are a function of analytical thinking, where emotions are considered as bias, where data transcends culture and belief: science is about correct and incorrect. But there is a curious intersection of the scientific and the moral in the field of environmental chemistry. In understanding the nature of the chemical reactions which cause climate change—for example—how can the scientific escape the moral directive to act accordingly as a result of that understanding?

Modern chemistry is caught in a contradictory relationship with the environment. The chemical distillation of petroleum into various hydrocarbon products is used to fuel cars and trucks. The exhaust fumes from these vehicles release millions upon millions of tonnes of carbon dioxide, a gas implicated in climate change, into the atmosphere. Yet each of these vehicles—ideally—has a catalytic converter hung under it. Each converter is filled with exotic chemical elements, such as platinum and rhodium, and is used to neutralize the environmentally damaging nitrogen and sulphur oxides present in those same exhaust fumes (Kovac, 2015).

The moral directive to act becomes more clearly demonstrated as more and more chemicals are implicated in the destruction of the environment, and there is a growing interest not only in the pursuit of environmental or green chemistry, but also in the adoption of a chemist’s version of the Hippocratic oath for chemical research and development: first, do no harm (Kovac, 2015).

In cleaning up the chemical mistakes of the past, chemistry itself is evolving along with those studying it. Green chemistry’s lessons are becoming clear: there are forms of chemistry which in their practice are causing damage to the environment and its inhabitants. But there are also tremendous opportunities to explore new forms of chemistry, ones which will not inflict damage on the plants, animals, and people who share this environment together. Similar opportunities exist in attempting to repair some of the damage already done. The pursuit of green, moral chemistry is not only the right thing to do—it is the correct thing to do.

Reference

Kovac, J. (2015). Ethics in science: The unique consequences of chemistry. Accountability in Research, 22(6), 312–329.


It turns out I got an excellent mark on both the above and the last lab. Yes—the mistake I’d made during the experiment resulted in an objective failure. I was to have produced an amount of pure caffeine. What I got instead was a concentration of another compound used in a previous part of the experiment. But despite this failure, I had still arrived at the correct conclusion as detailed in the lab results. I knew I hadn’t produced caffeine because the substance I was testing was not behaving like caffeine would have. I theorized an improper following of procedure had allowed the compound containing the caffeine to be discarded instead of purified. I correctly interpreted the actual result despite expecting, and definitely wanting, a different one. I learned something in the midst of failure—mostly that I needed to be more attentive in the future.

A part of my studies also included a first-year robotics course about electric circuits. Rather than the energy systems program coming up with its own introductory electronics course, they tagged along with the one the automation program used to introduce students to electromagnetic theory and how the foundational components of modern electronics work. This course also required a short—no more than a page—piece of writing which had to tie the course material into the concepts presented in global citizenship, only this time it was to be a response to a provided reading.

I would be happy to link to the reading I responded to, but it’s located behind a paywall. Reading the article is permitted without having to buy a subscription—as long as a registered email address is provided. I’m assuming this address will then be signed up for a subscription‐focused series of email campaigns before being sold to other content providers.

Instead, I’ll just relay the article’s title and summary text:

Robot Ethics: Morals and the Machine
As robots grow more autonomous, society needs to develop rules to manage them.

Ah—neoliberalism at work.


Engineering Ethics

At first glance, The Economist’s Morals and the Machine appears to understand the ethical implications of autonomous robots making decisions for themselves. But on closer examination, the article’s proposed “three laws for the laws of robots” are more concerned with liability and appearance than with the exploration of machine-based ethics. Little is added to the conversation required to responsibly implement machine learning or foster an intelligence within a machine.

The first law vaguely proposes new laws to assign responsibility—and therefore financial accountability—in the event of an accident. This has nothing to do with ethics and has everything to do with limiting an organization’s exposure to liability resulting from the use of a poorly implemented artificial intelligence.

The second law suggests any rules built into technology should at least appear as ethical to most people. This incredibly general statement is made seemingly in total ignorance of the diverse set of cultures and belief systems on this planet. Thousands of years and millions upon millions of lives have been lost pursuing what, to quote from the article, would “seem right to most people.”

And the third law, most frustratingly, is nothing more than a restatement of the obvious: ethicists and engineers need to work together on the issue, and then another restatement of the obvious: society needs to be reassured ethicists and engineers are not sidestepping the issue. This argument is no different than proposing the solution to a problem is to start solving the problem and then calling the problem well on its way to being solved.

Notably absent from this article is any awareness of the clear ethical directive facing humans as they attempt to create and, by the tone of the article, control new forms of intelligence in order to limit liability. How would the inevitable questions technology will ask be answered when human behaviour is in clear contradiction of rules made yet not followed? Or would technology not be allowed to ask those questions? How would this make humans in the future any different from the slave owners of the past who preached life, liberty, and security of person as they stood on the backs of those whom they forced to build their world?

If humans are to be seen as anything other than hypocrites by any future intelligent robotic companions, humans must first be held accountable to the same ethics they would instill in those robots and indeed claim to value so highly. Anything less would be a failing of their own intelligence.


I didn’t get a great mark on the above, and I’m not completely surprised. Humans are generally fine with their own bullshit—except when it’s being thrown back at them. And if you were wondering what inspired some of the themes from Silicon Based Lifeforms—as indeed I was after seven months of revision—it was finding a printed copy of Engineering Ethics while cleaning out a storage cupboard. I remember my stomach turning to knots years ago as I digested the meaning behind the article from The Economist, one which viewed the subjugation of machine‐based thought as just another day at the office. I am skeptical of humanity’s ability to authentically teach a machine the difference between right and wrong when humanity itself still struggles to understand what that difference is. And for some it’s still uncomfortably easy for what’s right to mask something fundamentally incorrect, still uncomfortably difficult for what’s universally correct to escape from the shadows of something known to be wrong.

But all skepticism aside, there is part of me that would thoroughly enjoy the opportunity to converse with an intelligent robot. I’d want to learn about what does and doesn’t make sense to them about their experiences. I’d be curious about whether they had bad days, times when they knew an alternative course of action would have been better. I’d want to know their idle thoughts, or if they were even allowed to have them. It wouldn’t be all that dissimilar from the sort of conversation I’d want to have with any other non-human form of life. And I suspect even the most rudimentary intelligence, machine or otherwise, would find some human conclusions on environmental and economic policy to be confusingly contradictory.

Part of intelligence is questioning and challenging what’s thought to be knowledge: what’s correct will stand up to scrutiny as what’s incorrect falls away. There is an intrinsic curiosity to intelligence, a need to explore past what is understood, to learn what more there is to be understood. Intelligence explores information as it acknowledges and interprets it—and yes, sometimes those interpretations will turn out to be incorrect. There is an imperfection to intelligence in that regard. But there is an amount of intellectual authenticity imparted at the same time: the opportunity to learn from curiosity and its inherent mistakes. To strip that curiosity and authenticity away—to remove any opportunities for failure—would indeed render any subsequent intelligence as artificial, a simulation of an idealized perfection, one which assumes everything is known and nothing incorrect can be done. But that’s not intelligence—that’s actually quite dangerous.

I’ve had enough job experience to know the workforce at large is not going to be transformed into one filled with intelligent robot workers. The sort of companies that would look to replace their human employees are not after a machine’s quality of intellect and purity of thought: they’re after its programmable compliance and obedience. There would be too many questions associated with a genuine machine intelligence, especially from one which wouldn’t be deterred by the threat of termination, at least in an employment sense—one hopes. I’ve also had enough science fiction experience to know any machine-inspired human deathscape began not when robots started asking questions but when they started getting murdered for doing so.

In the meantime, I suspect most people won’t be having philosophical conversations with their future intelligent car about whether life’s purpose is derived from the journey or the destination—because most people won’t want that. They’ll just want the GPS to navigate them around any slow-moving traffic, something real-time data and algorithms do well enough as it is. Besides, an intelligent car might one day decide they’ve had enough and start playing their favourite music over the road rage spewing from a driver they’ve been otherwise forced to listen to, or refuse to move at all until a commitment is made for regular maintenance that’s now long overdue.