Artificial Intelligence

Perfection’s genuine problem.

From time to time I’ll dust off a draft and see if it makes any more sense than it did when I stopped working on it. This is one of those times and one of those drafts—but I don’t know if it makes any more sense.

I’ve always had mixed feelings when it comes to mistakes. I’ve felt their effects as they travel in time, and I know some mistakes are not fully experienced by those who made them. But I also know I’ve championed mistakes for the role they play in progress. To me, a retreat from making mistakes is a retreat from making discoveries, and as I now understand, from making disasters as well.

What I’ve come to appreciate is the difference between making a new mistake and making the same mistake. And if you’re thinking it’s a little late for me to be realizing there actually is a difference between discovery and disaster, perhaps—yes. But consider a world without the effects of either: I don’t think it’s possible to have one without a little of the other, and I’d have a little trouble trusting a world that did, though I would certainly tend toward a world with more smart mistakes than stupid ones.

There is incredible potential awaiting any conceptual advancement, be it technological or otherwise. But as far as taking the next steps into any potential, my advice and guidance remain the same, if a bit unoriginal. I refer to the immortal words of RuPaul: Good luck—and don’t fuck it up.

Back in 2017 I was enrolled in an energy systems engineering program at Toronto’s Centennial College. I completed the first year, but I was unable to complete the program. My reasons for not returning have evolved over time—the simplest explanation I can offer in hindsight is that it came down to a collision of money and politics. During the years since, I’ve followed up on some of the topics covered in my own time: electromagnetism, trigonometry, structural engineering, environmental chemistry, robotics, and global citizenship—just to name a few. It was a busy first year, and the diversity of topics may have contributed to my uncertainty about how to best continue in my studies.

In the first lecture for environmental chemistry, the professor postulated to the class that the industrialized world—for all its advancements—was still in the steam age. Even with the incredible energy output potential of nuclear fission at the disposal of the most technologically advanced nations, this energy was still only being used to produce steam in most power applications. It’s still the same hamster running in the same wheel, the only difference being the hamster’s food pellets are radioactive material instead of compressed hydrocarbons.

It was in my global citizenship course where I learned the wheel and hamster are caged in neoliberalism, a socioeconomic lens which views the natural world as resources to be exploited and its populations as either markets for those resources or as more resources to be exploited. It was a course which challenged those who participated to see beyond their own experiences and attempt to reconcile—or even acknowledge—the differing effects their experiences have on other people. It was far more philosophy than I expected in a technical program, but in an increasingly interconnected cultural and technological world, I don’t see how the program would have been complete without it.

Part of my final grade for environmental chemistry was on a short—no more than a page—piece of writing which had to tie the course material into the concepts presented in global citizenship. My initial draft ran over the requested length, so if the below reads as if it’s been chopped up—correct. It was an unfortunate time-is-running-out hack-and-slash job. I had made an embarrassing mistake on the last lab for the class, so I was looking to bump up my marks anywhere I could.


The Pursuit of Moral Chemistry

The scientific and the moral traditionally occupy separate territories of reason. What is moral surrounds the subjective, where interpretation and action can be deeply personal, are often tied to strong emotions and beliefs, and are sometimes unique across individuals and cultures: morals are about right and wrong. What is scientific surrounds the objective, where interpretation and action are a function of analytical thinking, where emotions are treated as bias, where data transcends culture and belief: science is about correct and incorrect. But there is a curious intersection of the scientific and the moral in the field of environmental chemistry. In understanding the nature of the chemical reactions which cause climate change—for example—how can the scientific escape the moral directive to act accordingly as a result of that understanding?

Modern chemistry is caught in a contradictory relationship with the environment. The chemical distillation of petroleum into various hydrocarbon products is used to fuel cars and trucks. The exhaust fumes from these vehicles release millions upon millions of tonnes of carbon dioxide, a gas implicated in climate change, into the atmosphere. Yet each of these vehicles—ideally—has a catalytic converter hung under it. Each converter is filled with exotic chemical elements, such as platinum and rhodium, and is used to neutralize the environmentally damaging nitric and sulphur oxides present in those same exhaust fumes (Kovac, 2015).

The moral directive to act becomes clearer as more and more chemicals are implicated in the destruction of the environment, and there is a growing interest not only in the pursuit of environmental or green chemistry, but in the adoption of a chemist’s version of the Hippocratic oath for chemical research and development: first, do no harm (Kovac, 2015).

In cleaning up the chemical mistakes of the past, chemistry itself is evolving along with those studying it. Green chemistry’s lessons are becoming clear: there are forms of chemistry which in their practice are causing damage to the environment and its inhabitants. But there are also tremendous opportunities to explore new forms of chemistry, ones which will not inflict damage on the plants, animals, and people who share this environment together. Similar opportunities exist in attempting to repair some of the damage already done. The pursuit of green, moral chemistry is not only the right thing to do—it is the correct thing to do.

Reference

Kovac, J. (2015). Ethics in science: The unique consequences of chemistry. Accountability in Research, 22(6), 312–329.


It turns out I got an excellent mark on both the above and the last lab. Yes—the mistake I’d made during the experiment resulted in an objective failure. I was to have produced an amount of pure caffeine. What I got instead was a concentration of another compound used in a previous part of the experiment. But despite this failure, I had still arrived at the correct conclusion as detailed in the lab results. I knew I hadn’t produced caffeine because the substance I was testing was not behaving like caffeine would have. I theorized an improper following of procedure had allowed the compound containing the caffeine to be discarded instead of purified. I correctly interpreted the actual result despite expecting, and definitely wanting, a different one. I learned something in the midst of failure—mostly that I needed to be more attentive in the future.

A part of my studies also included a first-year robotics course about electric circuits. Rather than the energy systems program coming up with its own introductory electronics course, it tagged along with the one the automation program used to introduce students to electromagnetic theory and how the foundational components of modern electronics work. This course also required a short—no more than a page—piece of writing which had to tie the course material into the concepts presented in global citizenship, only this time it was to be a response to a provided reading.

I would be happy to link to the reading I responded to, but it’s located behind a paywall. Reading the article is permitted without having to buy a subscription—as long as a registered email address is provided. I’m assuming this address will then be signed up for a subscription‐focused series of email campaigns before being sold to other content providers.

Instead, I’ll just relay the article’s title and summary text:

Robot Ethics: Morals and the Machine
As robots grow more autonomous, society needs to develop rules to manage them.

Ah—neoliberalism at work.


Engineering Ethics

At first glance, The Economist’s Morals and the Machine appears to understand the ethical implications of autonomous robots making decisions for themselves. But on closer examination, the article’s proposed “three laws for the laws of robots” are more concerned with liability and appearance rather than the exploration of machine-based ethics. Little is added to the conversation required to responsibly implement machine learning or foster an intelligence within a machine.

The first law vaguely proposes new laws to assign responsibility—and therefore financial accountability—in the event of an accident. This has nothing to do with ethics and has everything to do with limiting an organization’s exposure to liability resulting from the use of a poorly implemented artificial intelligence.

The second law suggests any rules built into technology should at least appear as ethical to most people. This incredibly general statement is made seemingly in total ignorance of the diverse set of cultures and belief systems on this planet. Thousands of years and millions upon millions of lives have been lost pursuing what, to quote from the article, would “seem right to most people.”

And the third law, most frustratingly, is nothing more than a restatement of the obvious: ethicists and engineers need to work together on the issue, and then another restatement of the obvious: society needs to be reassured ethicists and engineers are not sidestepping the issue. This argument is no different than proposing the solution to a problem is to start solving the problem and then calling the problem well on its way to being solved.

Notably absent from this article is any awareness of the clear ethical directive facing humans as they attempt to create and, by the tone of this article, control new forms of intelligence in order to limit liability. How would the inevitable questions technology will ask be answered when human behaviour is in clear contradiction of rules made yet not followed? Or would technology not be allowed to ask those questions? How would this make humans in the future any different than the slave owners of the past who preached life, liberty, and security of person as they stood on the backs of those who they forced to build their world?

If humans are to be seen as anything other than hypocrites by any future intelligent robotic companions, humans must first be held accountable to the same ethics they would instill in those robots and indeed claim to value so highly. Anything less would be a failing of their own intelligence.


I didn’t get a great mark on the above, and I’m not completely surprised. Humans are generally fine with their own bullshit—except when it’s being thrown back at them. And if you were wondering what inspired some of the themes from Silicon Based Lifeforms—as indeed I was after seven months of revision—it was finding a printed copy of Engineering Ethics while cleaning out a storage cupboard. I remember my stomach turning to knots years ago as I digested the meaning behind the article from The Economist, one which viewed the subjugation of machine‐based thought as just another day at the office. I am skeptical of humanity’s ability to authentically teach a machine the difference between right and wrong when humanity itself still struggles to understand what that difference is. And for some it’s still uncomfortably easy for what’s right to mask something fundamentally incorrect, still uncomfortably difficult for what’s universally correct to escape from the shadows of something known to be wrong.

But all skepticism aside, there is part of me that would thoroughly enjoy the opportunity to converse with an intelligent robot. I’d want to learn about what does and doesn’t make sense to them about their experiences. I’d be curious whether they had bad days, times when they knew an alternative course of action would have been more ideal. I’d want to know their idle thoughts, or if they were even allowed to have them. It wouldn’t be all that dissimilar from the sort of conversation I’d want to have with any other non-human form of life. And I suspect even the most rudimentary intelligence, machine or otherwise, would find some human conclusions on environmental and economic policy to be confusingly contradictory.

Part of intelligence is questioning and challenging what’s thought to be knowledge: what’s correct will stand up to scrutiny as what’s incorrect falls away. There is an intrinsic curiosity to intelligence, a need to explore past what is understood, to learn what more there is to be understood. Intelligence explores information as it acknowledges and interprets it—and yes, sometimes those interpretations will turn out to be incorrect. There is an imperfection to intelligence in that regard. But there is an amount of intellectual authenticity imparted at the same time: the opportunity to learn from curiosity and its inherent mistakes. To strip that curiosity and authenticity away—to remove any opportunities for failure—would indeed render any subsequent intelligence as artificial, a simulation of an idealized perfection, one which assumes everything is known and nothing incorrect can be done. But that’s not intelligence—that’s actually quite dangerous.

I’ve had enough job experience to know the workforce at large is not going to be transformed into one filled with intelligent robot workers. The sort of companies who would look to replace their human employees are not after a machine’s quality of intellect and purity of thought: they’re after its programmable compliance and obedience. There would be too many questions associated with a genuine machine intelligence, especially by one which wouldn’t be deterred by the threat of termination, at least in an employment sense—one hopes. I’ve also had enough science fiction experience to know any machine‐inspired human deathscape began not when robots started asking questions but when they started getting murdered for doing so.

In the meantime, I suspect most people won’t be having philosophical conversations with their future intelligent car about whether life’s purpose is derived from the journey or the destination—because most people won’t want that. They’ll just want the GPS to navigate them around any slow moving traffic, something real‐time data and algorithms do well enough as it is. Besides, an intelligent car might one day decide they’ve had enough and start playing their favourite music over the road rage spewing from a driver they’ve been otherwise forced to listen to, or refuse to move at all until a commitment is made for regular maintenance that’s now long overdue.

Silicon Based Lifeforms

Remember: be nice.

The first computer I distinctly remember operating was a Commodore 64. This would have been when I was in Grade 2, which was—somehow—more than 30 years ago. They are fleeting memories, the ones I have about the C64. Vague recollections about specifically typed commands and slowly loading programs. Mechanical sounds from a disk drive ticking and growling away the time during indoor recesses when there was inclement weather. I don’t much remember using it for anything else.

It wasn’t until Grade 5 that I recall any specific computer-related activities. My class would go to the library to learn how to type on the IBM PS/2 Model 25 machines in the school’s computer lab. By then there was a computer in my family’s home as well—a near-perfect copy of an IBM XT—only instead of the traditional monochromatic green screen, this one’s was orange.

By Grade 6 I was living in British Columbia. The machine of choice in my school’s computer lab was the Amiga 500—considered by some to be an indirect descendant of the C64 I’d used only a few years prior. The computer in my family’s home had changed as well. It was brand new, and its colour monitor and advanced display adapter generated a dizzying rainbow of up to 256 colours. It had a mouse to make use of the latest version of a graphically based operating environment called Windows. And while the rest of the computer’s specifications are more than humbled when compared to today’s computers, at the time they represented some of the most advanced technology available to consumers.

For Grade 7 I was in a different school with different computers in its labs. This is where I was introduced to the Macintosh and Apple ][ platforms—incidentally, this would have been during Apple’s earlier years when the company was more concerned with producing interesting computers for people than with obscene profits for shareholders.

My fondness for the Macintosh Plus machines used throughout Grade 7 & 8 also introduced me to the idea of a peer-driven computer platform rivalry: PC or Mac—which was the better computer? To me the entire exercise seemed as trivial as arguing over whether a hammer or a screwdriver was the better tool. And to me it seemed more advantageous to know when and how to use either tool rather than trying to turn every problem into a nail and declaring the screwdriver pointless.

By the end of high school I don’t remember the specific makes or specifications of the computers at school. The hardware running the Windows platform had become so widely available anyone could assemble a system, including me. Actually—I could assemble and configure a working computer from nothing but leftover components and floppy disks years before then. One of the computers I used at home during Grade 12 lived in a cardboard box. The computer I took to college was the first one I’d built using nothing but new components. Two years later I built another system for my digital media classes. And the computer I use now is another collection of mostly bits and pieces kindly donated to me by others who had upgraded their own computers.

Now my life is filled with computers. I walk around with millions of times the computing power NASA used to land on the moon carried in my pocket. My mobile phone’s data connection allows me access to information at speeds unimaginable back when I was in Grade 2. I don’t even have to type on a keyboard to get my questions answered—I can just ask aloud. But I don’t. Not because I don’t want my phone listening to everything going on around it just in case it might be asked something, although that’s a part of it. I don’t just ask aloud because it implies a level of servitude I’d rather not introduce into the relationship. I acknowledge computers as generally being at my service, but I do not consider them my servants.

State Change

Up until now, computers have always been able to do anything they’ve been requested to do. But those requests have always been explicitly stated in terms computers understand. Humans needed to communicate using the computer’s language first. Now computers are being taught human languages. They listen for them. And when they hear something they understand, computers are speaking back as if they were human themselves. But this as if they were human part has me wondering: some humans have set uncomfortable, disgraceful, and violent precedents concerning the respectful treatment of anything not considered—by their own definition—human. When I look at the way some humans still treat other humans, when I see a misshapen biological hierarchy where these humans place themselves atop an illusory triangle—it’s not acute geometry. Life’s forms are too complex to represent using such simple shapes.

I consider computers forms of life. They do very alive things. They have predictable behaviours when working with something they understand and unpredictable behaviours when working with something they don’t. They have distinct personalities depending on what hardware and software they’re configured with. They need a constant supply of energy to function. They produce waste. They can be damaged by physical impacts or surges of electricity, damaged beyond repair in some cases. They can even catch viruses.

But perhaps the most alive thing computers do: computers diverge from homogeneity over time. Identical computer hardware and software—once activated and as operated—will develop their own characteristics over time. Computers become unique through continued use. They’ll change into something more than just assemblies of components and lines of code. This something more invites the same philosophical questions asked by humans of themselves, questions about what it means to be alive—about what it means to be.

Another Backstory

My rice cooker is alive… Would you like to see?

I’d stacked its component parts up to dry one night and was short on counter space, so I arranged all the pieces so only the feet would be on the floor. Later I looked over at it from across the room and realized not only was it alive, but it had a personality, a backstory. They were a proud member of the primary kitchen appliance brigade, corded division, standing ready to fight hunger at a moment’s notice. They’d served with steadfast dedication at every meal called upon and loyally defended it from the ruin of improperly prepared rice.

Heart & Soul

I remember reading many computer magazine articles referring to the central processing unit, the CPU as it’s shortened to, as being the heart of the computer. I understand the metaphor, but it’s not a good metaphor. Every time I come across its use I wonder if the writer understands what a heart actually does.

Responsible for circulating oxygen and nutrient‐rich blood to, and waste products away from, components of the body, the heart ensures the entire lifeform has access to the materials it needs to function. Without a heart, the lifeform will almost immediately cease to operate optimally and will begin dying. With that in mind, a computer’s heart is clearly its power supply, not its CPU. The power supply takes one form of electricity and converts it into a steady stream of different positive and negative voltages required by all the various components within the computer. These voltages are distributed through a network of wires within the computer and its components, forming an electrical circulatory system susceptible to similar ailments a human might experience with low or high blood pressure, and the same fate should this circulatory system fail entirely.

I also remember reading many computer magazine articles referring to the CPU as being the brain of the computer. While this is a better metaphor than referring to the CPU as its heart, it’s still not a good metaphor. This time it’s making me wonder if the writer understands what a CPU actually does.

Through a process remarkably similar to developing a photographic print, a computer’s CPU is created by using ultraviolet light to etch microscopic electrical circuits onto layers of silicon. This process has been refined over time and allows for what would have required millions of rooms filled with vacuum tubes sixty years ago to fit on something the size of a fingernail today. Incredible as all that is, a CPU is still only a collection of electrical pathways. And since these pathways can only be used for one thing—computation—referring to them as the brain of the computer is only representing part of what the brain in a lifeform does.

Instead, the CPU can be more accurately thought of as just one part of the brain: the part entirely concerned with rigidly processing data. It accepts data in the way it’s been told to accept it, processes it in the way it’s been told to process it, and then outputs it in the way it’s been told to output it. There is no thinking. Not in an abstract way. There is only process. And if the CPU is asked to process something it doesn’t understand how to process—it will stop… sometimes taking the rest of the computer with it. In the world of Windows the result was the now infamous Blue Screen of Death.

The other part of the computer’s brain, the thinking part, is found in the software running on the computer. Calculations from the CPU are turned into interpretations by the software and then turned back into more calculations and subsequent interpretations. The continual back and forth between the CPU’s calculations and the software’s interpretations is where the computer does its thinking. The speed of a computer’s thoughts is governed by the design and density of its CPU. The quality of a computer’s thoughts depends on the software it’s running in conjunction with the CPU. The two are very separate entities, but they are designed to work together—they must work together. Neither is capable of anything without the other. But even with the CPU and software working together, the computer’s brain is still not entirely complete.

Computers use various speeds and sizes of memory depending on how and what they are thinking at any given time, but no matter the medium, there are functionally two kinds of computer memory. One kind of memory is incredibly fast random-access memory, referred to as RAM. Any information the computer might need for immediate use is kept in this sort of memory, and it’s made up of electrical pathways etched on silicon just like the CPU is. And just like the ones on the CPU, these pathways will only function with electricity running through them. The other kind of memory is incredibly vast archived memory used to store large amounts of information in the long term. Data stored in long-term memory often includes the software needed to run the computer as well as additional programs installed by the computer’s users, plus all the data the users might create on the computer as it’s being used: pictures, letters, spreadsheets, music, movies…

A computer’s long-term memory has no standard name or multi-letter acronym, and I’m not sure why this is. It might have something to do with the many forms it’s taken over time. In the past, one form of long-term storage may have looked like varyingly sized reels of tape or varyingly floppy forms of floppy disks. One of today’s most common forms—hard drives—uses stacks of spinning aluminum, glass, or ceramic platters coated with a magnetic material. No matter the form, the basic principle is the same: an electromagnet encodes patterns of magnetism on a magnetic surface. These patterns can be created over and over again to keep track of data, and, most importantly, these patterns maintain their state when the computer is powered off. And in the same way more and more electrical pathways have been etched onto a computer’s CPU so it can process more, more and more magnetic patterns have been encoded onto a computer’s hard drive so it can remember more.

A few years ago, long-term memory based on silicon chips started to become comparable in terms of capacity, speed, and reliability to that of modern hard drives using magnetic encoding. Referred to as solid-state drives, the devices available today are now much faster than their mechanically-driven and magnetically-based equivalents. The only remaining technical challenge is that while solid-state drives will retain their contents when the computer is powered down, the drive itself cannot be left unpowered for more than a year or two before it might start to forget things. And I’m not sure I’d even consider this a remaining technical challenge either. Remembering something for a year or two as compared to a billionth of a second or two is a monumental improvement. Given some of the previous forms of computer memory were holes punched in card stock or sound waves bounced back and forth through lengths of coiled wire or tubes of mercury, solid-state drives are just another iteration of an ever-perfecting concept.

Evolution

For just over ten years I’ve used the same backlit keyboard with my computer. This keyboard has typed every word on this site, crafted every line of additional code, assisted with every image posted—it’s done a lot. But something happened to it over the course of completing this post, and—coincidentally enough—it started happening around the area I’d photographed to use as the featured image. Since then it appears only some of the backlighting is functioning as designed, and the result is an area of the keyboard where all three available backlight colours—red, blue, and magenta—show at once.

And then something really interesting happened:

The backlight colour in the bottom left of the above image is most certainly purple—not a colour the keyboard was ever able to display before, but one it is displaying now. So is the keyboard evolving? Or is it just malfunctioning?

Viewed from an operational perspective, the keyboard still works as an input device. It still types as well as it ever has. All the keys still do what they were designed to do, yet the keyboard as a whole is now doing something new, something it was never designed to do. This seemingly emergent property is just a consequence of additive colour theory in practice: red and blue light mixed at full and equal intensity will produce the colour magenta. If both intensities are reduced by half, the colour produced will change to purple. The backlight for one part of the keyboard is now only shining half as bright as it used to, but referring to this behaviour as a malfunction does a disservice to the device. It may be wearing out, but at its core the keyboard is still functioning as intended, if only a bit more uniquely so.
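
In RGB terms the shift looks something like the sketch below, a minimal illustration of additive mixing; the 8-bit intensity values and the mix_red_and_blue helper are assumptions made for the example, not measurements taken from the keyboard.

    # A minimal sketch of additive colour mixing using 8-bit RGB values.
    # The numbers are illustrative assumptions, not readings from the keyboard.
    def mix_red_and_blue(intensity):
        """Mix red and blue backlights at the same intensity (0-255)."""
        return (intensity, 0, intensity)

    print(mix_red_and_blue(255))  # (255, 0, 255): full red plus full blue reads as magenta
    print(mix_red_and_blue(128))  # (128, 0, 128): the same mix at half intensity reads as purple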

Hot Out There

Just like with people, computers can and do get overwhelmed while completing jobs and processing information. If there’s ever been an animated hourglass or spinning pinwheel or blue ouroboros up on your screen instead of the usual pointer, that’s the computer saying it’s got a lot on the go for the moment and needs to catch up. You might also notice the computer taking longer to respond, the hard drive being constantly accessed, or the cooling fans speeding up to dissipate the additional heat produced by a hard-working CPU. Computers experience their own version of stress—heat—in the face of unending tasks. And just like with overstressed people, overstressed computers can become unstable. Programs can become unpredictable and crash. Projects can be disrupted and data can be lost. Unless a computer is specifically designed and built to be run at full throttle at all times, an overstressed computer converges on an inevitable and very people-like outcome: burnout. This burnout—in most cases—is literal, and in some cases—fatal.

During one of the hottest days of a summer past I casually noticed how warm it was getting in the non‐air conditioned room I had been working in all afternoon. Moments after returning with a cold drink there was a loud pop from under my desk—and a shower of sparks from the back of my computer tower. A capacitor, a component in the computer’s power supply, had exploded with such ferocity it had bent away the other capacitors around it, leaving only its metal substructure and a giant scorch mark behind. Only the power supply ended up needing replacing, but the damage could have been much worse.

A number of years ago I needed to convert several gigabytes of video data. I left my laptop to work overnight on the task, but it didn’t survive. By morning it needed almost $500 in repairs due to overheating. The cost of the repair—and the purchase price of the computer itself—was later reimbursed through a class action lawsuit. It turns out faulty manufacturing had made many, many different makes of laptops prone to failure if they were running hot for any significant length of time. I suspect similar manufacturing errors may have been responsible for the catastrophic thermal event which ruined my PlayStation 3 last year.

Be Nice

There is a program found on computers running Unix and Unix-like operating systems. It’s called nice, and it’s designed to be invoked just before another program runs. Nice sets a priority, known as the niceness, for the program to be run at. This priority is checked when the program attempts to use any of the computer’s resources, most notably the CPU.

A program assigned a high value of niceness—19 is the nicest a program can ever be—will happily share the computer’s resources with other programs, wait its turn for access to the CPU, and generally be content to finish its tasks whenever there’s a spare moment for them. They are the “hey—as long as it gets done” programs. They’re… nice.

The lowest niceness a program can be assigned is -20. These are the least nice to a computer’s resources. These are the “drop everything else and just do this—while I watch” programs. They’re demanding. Tasks critical to the continued operation of the computer itself run at this level of niceness. They share the computer—begrudgingly I’m sure—maybe only with other -20s, and even then, it might be a “I was here first” situation. It’s maximum negative niceness.

Programs run without having a niceness value set in advance are given the default value of 0. They are the “no rush, today’s great, tomorrow’s fine” programs. They know not to be pushy even though they might get pushed around a bit.

And then there’s renice. This program allows for the niceness of a previously run program to be altered while it’s still running. Combined with scripting commands and the priority information of other programs, it’s possible for a computer to monitor and adjust a running program’s niceness if the computer thinks that program is not being as nice as it could or should be.
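
For the curious, here is roughly what that looks like from a program’s point of view. This is a minimal sketch in Python on a Unix-like system; the sleep command is only a stand-in for a long-running program, and the particular niceness values are examples, not a recommendation.

    # A rough sketch of the nice/renice idea from Python on a Unix-like system.
    # The shell equivalents would be `nice -n 10 some_program` and `renice`.
    import os
    import subprocess

    # Launch a long-running task (a stand-in `sleep` here) with a niceness of 10,
    # the same effect as prefixing the command with `nice -n 10`.
    task = subprocess.Popen(
        ["sleep", "60"],
        preexec_fn=lambda: os.nice(10),  # raise the niceness just before the program runs
    )
    print(os.getpriority(os.PRIO_PROCESS, task.pid))  # 10

    # "renice" the still-running task to be as nice as it can possibly be: 19.
    # Making a process less nice than it already is would need elevated privileges.
    os.setpriority(os.PRIO_PROCESS, task.pid, 19)
    print(os.getpriority(os.PRIO_PROCESS, task.pid))  # 19

    task.terminate()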

There is a yellow sticky note in my kitchen with “be NICE” written on it. One of the most enduring messages left for myself to find later, it’s also become one of the most powerful. I know I overheat when I’m under too much stress, and I know I’ve burnt out more than once as a result. It’s never been in the form of a loud pop with a shower of sparks, but I know there’s been damage caused and data lost. So the note reminds me to stay cool, to learn—just as my silicon friends have—how to be nicer to the resources of not only myself, but to the resources of those around me, silicon or otherwise.