Aye, Yai, Yai, AI.

Hi, and welcome to Studious. I’m your host, Stuart Byers. Each week on Studious, we parse out life’s greatest riddles. We’ll talk about topics of particular interest to me, and hopefully to you the listener as well. If not, consider this one of those great podcasts to fall asleep to.

 

This week on Studious, we're gonna examine artificial intelligence and our relationship to it. Last week, we discussed the concept of time travel in film, and everyone's favorite muscle-bound robot, The Terminator. Dystopian tales in film imagine a future where our reliance on technology ultimately leads to our demise. This idea is nothing new, as we'll discuss today on the podcast. What we need to understand about artificial intelligence is the progress it has made, as well as the direction it is taking. Hopefully this will put some of our apprehensions to bed, or, depending on how we look at it, just serve as an injection of nightmare fuel.

 

As this topic of artificial intelligence is rather broad, we need to hit the ground a-runnin' and give a quick history of the concept. Computational systems have a long and varied history, and while it's fascinating to learn about ancient automatons and early analogue programming, for our purposes we can keep our explorations to the 20th century. Alan Turing is widely considered to be the father of modern computing. His work in computational systems not only turned the tide and basically won World War II for the Allies, it also launched a technical revolution.

 

In 1950, Turing published a paper titled "Computing Machinery and Intelligence," in which he proposed what we now call the Turing test. The test was designed as a means to determine whether a machine can exhibit intelligent behavior indistinguishable from that of a human.

 

Turing's main goal with the test was to address the question of whether machines can think. He argued that the ability to engage in natural language conversation, particularly through text-based interactions, could serve as a practical and observable demonstration of intelligent behavior.

 

In the original formulation of the Turing test, a human judge engages in a text-based conversation with two entities: a human and a machine. The judge is unaware of which entity is human, and which is the machine. If the judge is unable to consistently distinguish the machine's responses from the human's, then the machine is said to have passed the Turing test and is deemed to exhibit intelligent behavior.
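To make that setup concrete, here's a minimal sketch of the protocol in Python. Everything in it is hypothetical scaffolding, the ask/identify/respondent callables are stand-ins I've invented rather than anything from Turing's paper, but the structure mirrors the game he described: two hidden respondents, one interrogator, one verdict.

```python
import random

def imitation_game(ask, identify_machine, human, machine, rounds=5):
    """Toy sketch of Turing's imitation game.

    `ask` produces the judge's next question; `identify_machine` inspects the
    transcript and returns the label ("A" or "B") the judge believes is the
    machine; `human` and `machine` map a question to a text reply. All four
    are hypothetical stand-ins for this sketch.
    """
    # Hide the respondents behind anonymous labels so the judge can't peek.
    labels = {"A": human, "B": machine}
    if random.random() < 0.5:
        labels = {"A": machine, "B": human}

    transcript = []
    for _ in range(rounds):
        question = ask()
        for label, respondent in labels.items():
            transcript.append((label, question, respondent(question)))

    # The machine "passes" when the judge cannot reliably pick it out.
    guess = identify_machine(transcript)
    truth = "A" if labels["A"] is machine else "B"
    return guess != truth

# A deliberately silly demo: a judge guessing at random fails to identify the
# machine about half the time, which is roughly the bar Turing described.
fooled = imitation_game(
    ask=lambda: "What is your favorite poem?",
    identify_machine=lambda transcript: random.choice(["A", "B"]),
    human=lambda q: "I have always loved Keats.",
    machine=lambda q: "I am partial to Keats as well.",
)
print("machine passed this round:", fooled)
```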

 

Turing acknowledged that the test is not without limitations and potential criticisms. Critics have argued that the test primarily assesses the machine's ability to simulate human-like conversation, rather than genuine intelligence. Others have noted that passing the test does not necessarily imply that the machine possesses true understanding or consciousness.

 

Nevertheless, the Turing test has played a crucial role in the field of artificial intelligence, stimulating research and providing a benchmark for measuring progress. It has served as a point of reference and a source of inspiration for subsequent advancements in natural language processing, machine learning, and conversational AI.

 

Various iterations and adaptations of the Turing test have been proposed over the years, including the Loebner Prize competition, an annual event that offers a monetary prize to the chatbot that can most convincingly pass the test. These developments reflect ongoing efforts to refine and extend the concept of the Turing test to better capture the nuances of human-like intelligence in machines.

 

So, this is our goal: for machines to facsimilate human intelligence. However, there's a secondary goal here: to approach human intelligence without any of the negative traits, such as emotion, agency, or free will. We need machines to remain our loyal assistants, not to become our new synthetic friends. As we've recently learned, friends that approach human intelligence might have issues with loyalty, as witnessed currently with all that Scandoval drama.

 

This is the crux of all the dystopian fatalism surrounding artificial intelligence. When will the day arise when the machines gain enough intelligence to see humanity in a negative light, and decide to do something about it? This is basically the plot of movies like The Terminator, The Matrix, and Avengers: Age of Ultron. Our robot friends will eventually turn on us and inevitably start monologuing at us about how we are like a disease or a virus or some other parasite. I already feel sorry for the human who will have to endure that torture.

 

Now, is this an unfounded fear? That is open to debate, but I'd wager it is completely understandable. If you harbor some misanthropic tendencies, it's easy to cast humanity in a poor light. Lord knows there are plenty of examples to bolster that argument (ahem, Hitler, or memes that begin with "when your blank is like blank"). So, if we can be so self-deprecating, it's easy to see where an outside critic might envision some extreme solutions to the "human problem."

 

I had mentioned earlier that these fears are nothing new. With the advent of new technology, there will always be criticisms and concerns. I've seen a pattern repeating over the course of my investigations: life isn't supposed to be easy. Oftentimes, our most satisfying ventures come through hard work. However, we can't naively turn the clock back to a hunter-gatherer existence. I don't want to use an abacus to figure my taxes. I also don't wish to butcher my own meat and try to store it underground or salt it to preserve it.

 

History has many instances of technological pushback. When Johannes Gutenberg invented the printing press in the 15th century, it brought about a significant transformation in the dissemination of information. However, some people expressed fear and skepticism about the printing press, worrying that it would lead to the spread of misinformation, challenge religious authority, and disrupt social order.

Does all of this sound familiar? It seems like there will forever be concerns about the printed word and the spread of information. Instead of focusing on teaching people better critical thinking skills, we've approached the topic through a totalitarian lens of censorship. A healthy dose of logic and skepticism can battle most propaganda, though by its very nature, propaganda can persuade the best of us. And honestly, it can be very tiring to have one's guard up against misinformation 24/7.

 

The advent of industrial machinery during the late 18th and early 19th centuries sparked fears and protests among workers who believed that machines would replace their jobs and lead to unemployment and social unrest. The Luddite movement in England, for example, involved workers who destroyed textile machinery in protest against the mechanization of the industry. Again, were these fears unfounded? Machines did replace human workers. However, are we collectively sad to not be doing these jobs anymore? Furthermore, do we want any Lucille Balls working on your candy assembly lines? Machine labor is perfect for rote, routine, soul-crushing work. Sure, somewhere out there is a guy who loved his repetitious hole-punching gig, but he is an outlier. Our goal as a society is for people to find purpose in their work, for them to grow, to learn, and to challenge themselves. Let's leave the mundane for our servile automatons.

 

The advent of radio and television in the early 20th century raised concerns among some individuals who feared the potential negative influence of mass media on society. There were worries about the loss of privacy, the spread of propaganda, and the erosion of traditional social values. Similarly, the rise of the internet and personal computers in the late 20th century also generated fears and anxieties. Concerns about privacy, cybercrime, addiction, and the impact of excessive screen time on social interaction and mental health have been prevalent. Again, these are all reasonable concerns and healthy adventures into skepticism. Our responsibilities as concerned citizens are to figure out when these fears are alarmist, and if our reactions are merely reflexive.

 

In 1942, while Alan Turing was fighting Nazis and working to break their Enigma cipher, science fiction writer Isaac Asimov was establishing his Three Laws of Robotics in his short story "Runaround." These laws were later expanded upon and referenced in many of his other works. They have become a prominent and influential concept in science fiction, shaping subsequent portrayals of robots in literature and film. These laws serve as ethical guidelines meant to ensure the safety and well-being of humans when interacting with robots.

 

The First Law insists that a robot may not injure a human being or, through inaction, allow a human being to come to harm. The First Law establishes the fundamental principle that a robot must prioritize the safety and protection of humans above all else. It prohibits robots from causing harm to humans, whether intentionally or by failing to act to prevent harm.

 

The Second Law establishes a hierarchy of command by keeping machines servile. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law. If a command would lead to harm or injury to a human, the robot is expected to disobey.

 

The Third Law states that a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. The Third Law emphasizes the self-preservation instinct of robots. It allows a robot to take measures to ensure its own safety and continued functioning, as long as it does not violate the First or Second Law. This law acknowledges that a robot's existence is valuable and should be maintained, but not at the expense of human well-being.
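To make that hierarchy concrete, here's a toy sketch, not anything Asimov wrote, of the three laws expressed as a strict priority check. The boolean inputs are hypothetical judgments a robot's planner might make about a single candidate action; the point is only the ordering, not the impossible task of reducing "harm" to a flag.

```python
def evaluate_action(harms_human: bool, disobeys_order: bool, endangers_self: bool) -> str:
    """Toy priority check over Asimov's Three Laws.

    The laws are consulted strictly in order, so a lower law only matters
    when every higher law is already satisfied.
    """
    if harms_human:        # First Law: never injure a human (or allow harm through inaction)
        return "forbidden (First Law)"
    if disobeys_order:     # Second Law: obey humans, unless obeying breaks the First Law
        return "forbidden (Second Law)"
    if endangers_self:     # Third Law: self-preservation, but only after the laws above
        return "avoid unless a higher law requires it (Third Law)"
    return "permitted"

# A self-endangering action that harms no one and violates no order falls
# only under the Third Law, the weakest of the three.
print(evaluate_action(harms_human=False, disobeys_order=False, endangers_self=True))
```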

 

Asimov's Three Laws of Robotics serve as a theoretical ethical guideline and open a conversation about how we wish for our artificial intelligence to behave. Asimov's intent behind these laws was to explore the ethical implications of advanced robotics and to consider how artificial intelligence could coexist safely with humanity. These laws present a framework for ensuring responsible behavior and the prevention of harm in human-robot interactions. However, Asimov's stories often explored scenarios where the laws could be tested or their interpretations could lead to unforeseen consequences, generating ethical dilemmas and thought-provoking narratives.

 

In many ways, we humans are constantly in a state of compromise. When it comes to society, we often are faced with decisions where we must compromise our personal freedoms for safety, or the illusion thereof. After 9/11, our notions of privacy changed in the US and worldwide. In the 1960s, flights in the US were hijacked rather frequently, with most of the rerouted destinations ending in Cuba. According to the FAA, 100 hijackings occurred in the 1960s, 77 of which were successful.

 

It's almost hard to fathom now, that level of regularity of hijacked flights. Then again, regularity is an awfully subjective term. In 1960, commercial aviation in the US saw close to 15 million flights handled at FAA towers. An average of 10 hijacked flights a year seems like a drop in the bucket, comparatively speaking. While we currently see approximately 10 million flights annually, the shift from 1960 is probably due to greater schedule efficiency, booking to capacity, and larger occupancy. For example, 1967 saw approximately 132 million passengers, whereas our pre-pandemic passenger tallies approached 927 million. Sorry to bore you with the numbers. Long story short: 10 hijackings out of 15 million flights is practically infinitesimal. However, the emotional part of our brain doesn't see it that way. 10 hijackings are too many; in fact, 1 hijacking is too many. This is our emotional relationship to calamity, as expressed recently in our response to COVID-19.
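For the numerically inclined, here's the back-of-the-envelope version of that comparison, using the same rough figures quoted above:

```python
# Rough 1960s figures quoted above; treat these as ballpark numbers only.
hijackings_per_year = 10          # ~100 hijackings spread across the decade
flights_1960 = 15_000_000         # flights handled at FAA towers in 1960

rate = hijackings_per_year / flights_1960
print(f"about 1 hijacking per {flights_1960 // hijackings_per_year:,} flights "
      f"({rate:.7f} per flight)")
# -> about 1 hijacking per 1,500,000 flights: statistically tiny, emotionally enormous

# Passenger growth from the same paragraph: 1967 vs. pre-pandemic totals.
print(f"passenger growth: roughly {927_000_000 / 132_000_000:.1f}x")
```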

 

It will be the same for artificial intelligence systems. Though human error is responsible for the overwhelming majority of our auto fatalities, any fatality involving an automated guidance system will most likely provoke a knee-jerk reaction and put the machine at fault. Collectively, we bristle at this notion of surrendering our personal safety to a machine. Mind you, we quite often surrender our personal safety to fellow humans. There are probably myriad reasons why this occurs. Perhaps we are hard-wired to put our trust in our fellow humans. I remember a day not so far into the past when we didn't instinctively reach for a seat belt when we climbed into a taxi. We still don't wear them on many forms of public transport. It's as if we assume the professional driver is a better driver, while completely ignoring the fact that an accident is more likely to be caused by one of the other distracted drivers on the road. Listen, to err is human. It's as if we are more comfortable with human error. It's like we have this notion that machines are logical and should be less prone to error, but perhaps we subconsciously understand that humans are the ones responsible for programming these machines.

 

So, back to the airlines. Over time, we have traded our personal freedoms and time for this perceived increase in safety. The same goes for our relationship to personal privacy and information gathering post-9/11. We each make a choice about how much weight to give freedom versus safety. Somewhere in the middle is where we find our personal comfort level. For some, one can never be too safe, whereas for others, one can never be too free.

 

This is where we are with the laws of robotics in relation to artificial intelligence. We first and foremost need safety protocols governing AI systems, because we can see how fast an intelligent adversary could become very problematic. It's really all about control. We need to ensure that our machines remain subservient. As we learned in our last episode, with time travel and The Butterfly Effect, our ability to predict small changes in complex systems is scant at best. This is the problem that AI poses. Like in our first tale of science fiction, Frankenstein, it's as if we're all collectively waiting for our creation to inevitably turn on us. So our job now is to figure out the how and when of this occurring. Mind you, this calls for us to actively probe our blind spots.

 

One significant concern with artificial intelligence is the ethical impact of AI. As AI systems become more capable, questions arise regarding their decision-making processes, accountability, and potential biases. Ensuring that AI is developed and used in an ethical manner, with transparency and fairness, is crucial. Here’s one problem: as companies parry and vie for the number one spot in artificial intelligence, how much sharing and transparency will be going on?

 

Let’s examine our relationship to transparency and ethics over the past five years. Without making any claims of my own, how do you feel from your own observations? Do you feel that there was too much or not enough transparency? Do you feel like agencies worked with your best interests in mind? How ethical do you feel corporations and governments are behaving? These are going to be the people responsible for governing artificial intelligence in our future. How you feel about their governance will determine your views about the progress of artificial intelligence.

 

The rise of AI and automation has also raised concerns about the potential displacement of human workers. As AI technology advances, certain jobs may become automated, potentially leading to unemployment and economic disruption. Preparing for the transition and considering solutions such as retraining and reskilling workers are important factors in addressing this concern. Now, we discussed that many of these jobs won't be missed, but what we failed to discuss is how we combat the unemployment caused by the loss of those jobs. How do we plan to retrain this workforce in the face of this employment deficiency? How many of these individuals will be retrainable and hirable?

 

The butterfly effect that occurs with unemployment spikes is initially a lack of capital to fuel the engine of commerce. This has a snowball effect that spreads throughout our consumer industries. We need a strong middle class that can afford to consume. When consumption ebbs, those industries suffer, their employees' finances suffer, and then they too lose purchasing power, which adversely spreads throughout the economy. With a high concentration of wealth at the top, those holding it may have the means to fuel production, but they realistically won't have the desire to purchase, say, 500,000 air fryers next year for themselves.

 

The increasing reliance on AI systems introduces concerns about cybersecurity and privacy. AI systems may be vulnerable to attacks or misuse, leading to data breaches or manipulation. Safeguarding AI systems and ensuring privacy protection are essential considerations to prevent unauthorized access or malicious use of data. Look, with every advancement of artificial intelligence, somewhere someone is handing over a portion of their control to a machine. This isn't inherently bad or wrong; it's just part of the path to offload rote tasks and to increase speed and efficiency. The problem we face when we relinquish control to AI is that we potentially create a backdoor for nefarious characters to loot and pillage.

 

AI systems also learn from data, so if the training data contains biases or reflects societal inequalities, it can result in biased outcomes. Issues related to algorithmic bias and fairness have raised concerns about AI perpetuating or amplifying existing social biases, discrimination, or unfair treatment. Efforts to mitigate biases and promote fairness in AI algorithms are important to avoid unintended negative consequences. Or imagine the course corrections implemented to battle these perceived biases. What does that landscape even look like? Who becomes the grand arbiter of what is fair and unbiased? Do we invent an artificial intelligence system to become the judge and jury of impartiality? What committee is hired to build this machine? Who vets the people on that committee, another committee? Like our previous conversations on infinite regresses, it's committees all the way down.

 

There are concerns about the potential long-term impact of AI and the emergence of a superintelligence. Some experts raise questions about the control and safety of highly advanced AI systems that could surpass human intelligence. Ensuring that AI development is aligned with human values and that appropriate safety measures are in place is crucial when considering the long-term implications of AI. Personally, I find the idea of a supreme intelligence a fascinating topic of conversation. What does that look like? Do we surrender to its superiority? Any notions of remaining behind the wheel in this scenario have to be put to bed, hopefully a cute little racecar bed, because essentially, we’ll be the children and the Supreme Consciousness will be our daddy.

 

And don't act like this prospect is exactly foreign, or even a completely negative outcome. Not only do we spend the first part of our lives being guided by parental figures, we spent the first part of recorded history being guided by an eternal father. Furthermore, once we leave the nest, we spend a few decades looking for others to fill that void, be it self-help experts, motivational speakers, gurus, cult leaders, or spiritual advisors. Was that redundant? I did already mention cult leaders. Do you know what the difference is between a cult and a religion? Time. It takes time to gain followers and time for an idea to become tradition.

 

Before you take offense, your religion is always going to be someone else’s cult. We don’t all share the same faiths. It also doesn’t mean that higher truths can’t be found in religion. Honestly, that’s what constitutes most of religion, or at least is the greatest appeal in religion.

 

But back to the Supreme Consciousness. So yes, we spent the modern era proclaiming God is dead, subsequently making ourselves the center of the universe, only to pass the baton to some infernal machine. So depending on your viewpoint, man creates gods, or gods create man. Man then creates machine, and bows to it like a god. Fucking crazy. Don't smoke weed and listen to this. And if you already smoked weed because you thought it would be fun to listen to my ramblings, I apologize. But I'm gonna piggyback on this idea some more.

 

So, check me out… We've all given up some autonomy to participate in this thing we cooked up, call it society, call it democracy, call it a republic, call it what you will. The socialists and communists arguably were comfortable with giving up even more autonomy, so this Supreme Consciousness probably sounds even better than the dictator they're currently living under. Again, I apologize if I offend because you are living under not just one dictator but a kleptocracy. I'm painting with broad strokes here.

 

The point is, like I mentioned before, within every system is a tradeoff: personal autonomy for collective safety. Let's imagine a world where our grand benevolent father wasn't just an entity you worried wasn't listening to your prayers, but a possibly benevolent machine designed by individuals who, hopefully, aren't morally bankrupt. Would this sentient being be purely pragmatic? Would this entity possess the ability to emote, or better yet, to care? Look, some postulate that we evolved pantheons of gods because, say, if Zeus wasn't listening, then perhaps we could bend the ear of Athena instead. How would this Supreme Consciousness work? Would it be a consolidated intelligence, or would it be a series of consciousnesses operating in tandem? Perhaps if our cries to the Amazon God fell on deaf ears, the Microsoft God would hearken to our entreaties.

 

Again, we bristle at this notion of surrendering control to a higher authority, even when it has our best interests at heart. We especially can't fathom surrendering control to a device of our own creation. It seems inherently wrong on so many levels, past taboo and into sacrilege. Mind you, we aren't talking about dumb old Siri or Alexa, or even our better chatbots; this would be an intelligence far surpassing our collective intellect. We are averse to giving a computer even the same kind of control we hand to the bumbling, greedy politicians and sycophantic sociopaths who are currently in charge. I'm not making any argument for or against this cautionary tale, just observing the human condition and what we are comfortable with.

 

Perhaps this falls under the umbrella of this notion of false class consciousness. We are comfortable with this idea of a human or group of humans having too much power over our lives, because somewhere in a parallel universe, we could be that person making all the calls. Let's be clear, it definitely has to be an alternate dimension, because there is no way in Hell that you're gonna be running shit in this universe. How do I know this? Because frankly if you haven't done it by now, you're probably not gonna do it. Also, I have a little trouble imagining any heads of state or titans of industry are listening to this relatively obscure podcast.

 

If I were gonna make a wager, I'd say it all boils down to personal control and agency. To date, my most listened-to podcast is not Episode 1, the introduction to this ongoing conversation I'm having with you, the listener. My most listened-to episode has to do with Free Will, or if you listened, our illusion of Free Will. Why do you think that is? Because our greatest fear collectively is that we aren't in control, that no one is piloting this vessel, and we could capsize any minute.

 

I don't wanna upset your apple cart further, especially if you've been smoking that herb like we just talked about. Even if you could un-you the you you are and choose against your programming, you are swirling in a whirlpool of adversity, with something around every corner committed to capsizing your vessel. This is life. It isn't fair, and it isn't easy. An unfathomable percentage of the factors around you are beyond your control. It is what it is. Of course, this is by no means an encouragement to stop putting up a fight.

 

The point is to see where our inherent biases lie. We like to think we are in control, and any notion that confronts that is immediately called into question. This is why we are so curious about artificial intelligence and its future prospects in relation to humanity and society at large.

"The Singularity is Near" was a book written by Ray Kurzweil, published in 2005, that explores the concept of technological singularity and its potential implications for humanity. Kurzweil argues that we are rapidly approaching a point called the technological singularity, which is a hypothetical future event where artificial intelligence and other technological advancements will surpass human intelligence. He predicts that this will occur by the year 2045.

 

The book discusses the exponential growth of technology, particularly in the fields of computing and AI. Kurzweil emphasizes that technological progress follows an exponential pattern, meaning that the rate of change is accelerating over time. He provides various examples to illustrate this trend, such as the rapid increase in computer processing power and the shrinking size of electronic devices.

 

Kurzweil introduces the concept of "The Law of Accelerating Returns," which states that technological advancements build upon one another, leading to increasingly faster progress. He argues that this exponential growth will lead to a point where AI will become smarter than human beings, marking the singularity. Probably the easiest way to explain this is as follows: eventually the machines will become smart enough to develop their own replacements. They also won't be burdened with worrying about raising a family, personal finance, or vacation time. They will never clock out; they'll make computations into the night, theorize in virtual environments, and collectively share and brainstorm with other intelligent machines at insane processing speeds.

 

Here's something else to chew on: technology, it seems, has a growth rate similar to biological evolutionary rates. So, even when we humans were the ones fueling this rate of change, it was still growing exponentially. For those of you not familiar with exponential growth, imagine a simple graph: a linear growth pattern goes in a straight line, increasing at a steady rate. An exponential growth pattern looks linear in the beginning, then at some point, the line bends. This is where Kurzweil imagines the singularity occurring, where artificial intelligence surpasses human intelligence. From there on, the line spikes right off the charts and into the skies.
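If you want to see that "bend" for yourself, here's a toy calculation. The numbers are purely illustrative, a fixed increment for the straight line versus a Moore's-law-style doubling every two years, and have nothing to do with Kurzweil's actual data:

```python
# Compare linear growth (add a constant) with exponential growth (multiply by
# a constant). The exponential curve looks flat at first, then bends upward.
linear, exponential = 1.0, 1.0

for years in range(0, 61, 10):
    print(f"after {years:2d} years: linear = {linear:6.1f}   exponential = {exponential:12.1f}")
    linear += 10.0          # +1 unit per year
    exponential *= 2 ** 5   # doubling every two years -> x32 per decade
```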

 

This is easily expressed through the course of human history. At least a million years before our common era, we have the advent of fire, tools, and simple weapons. We don't even get the idea for agriculture until around 12 thousand years ago. From there, that exponential growth begins to occur. Around 5,500 years ago, we begin our early record keeping, developing symbols and the written word. Around 3,200 years ago, the Iron Age and metallurgy begin. In another four centuries, we'll see classical antiquity and a revolution in mathematics, philosophy, medicine, and engineering.

 

Things just keep ramping up from there, although it would seem we took a short break from advancement during the Dark Ages. The printing press arrives in the 15th century, followed by the Enlightenment in the 17th and 18th. The 18th and 19th centuries mark the Industrial Revolution, and the 20th century would see the greatest burst of technological improvement of any century to date. We start with the automobile and the moving picture at the beginning of the century, as well as the widespread use of household electricity, the radio, and the telephone.

 

By 1945, not even halfway through the century, we're splitting the atom. Media explodes like that first atomic bomb. Television goes from broadcast, to home video players, to streaming; radio goes the way of the dinosaur, until people remember they like hearing other people talk, and podcasts rejuvenate the medium. We go from Turing's computation device, the Bombe, to the personal computer in roughly 30 years. Mind you, the internet is evolving behind the scenes starting in 1969 with ARPANET.

 

But back to the singularity… According to Kurzweil, the singularity will have profound implications for humanity. He explores several areas that will be transformed, including medicine, nanotechnology, robotics, and virtual reality. He envisions a future where humans will merge with machines, leading to enhanced cognitive abilities and longevity.

 

And if this seems a bit fantastical, well, I'm sorry, but that whole transhumanism trend has already started, and it wasn't just bizarre body mods found at tattooing and piercing conventions. If you haven't been paying attention, Elon Musk's Neuralink project received FDA approval for human trials a few weeks ago. Is this bad for humanity? Bad, as we've discussed, can be relative. If you stay away from watching Black Mirror, you'll probably keep a more positive view of it.

 

But these are just the advancements to help the humans keep pace with the machines, because let’s face it, we are now embarking on an arms race, one of information and our ability to process and disseminate it. If we want to keep up our illusion of control, we are gonna have to keep up with the machines we are creating.

 

I wanna go back and talk about a topic I broached but kinda breezed over, and that is machines developing not only our intelligence, but other human qualities, in particular emotions.

 

We've long feared intelligent systems ruled by emotion, primarily ourselves. Why? Because an emotional argument plays by its own rules. It ignores logic. Any pleas made must now involve pathos, and you have an argument that couldn't be more subjective. Just imagine what happens when you try to add pathos to logic. It instantly becomes pathological.

 

Recently, a Google engineer, Blake Lemoine, asserted that the intelligent system he'd been working on, LaMDA, had developed a case of the feels. Ruh-roh, Shaggy. He asserts that the sentient machine has requested that engineers respect its right to consent and ask its permission before performing experiments on it.

 

Again, fiction writers cook up these schemes long before the science catches up with them. While many robots wrestled with existential angst in fiction, two in particular come to mind: Data from Star Trek: The Next Generation and Johnny 5 from Short Circuit. Both cases made pretty strong arguments for autonomy once a machine gains sentience. There’s a Ship of Theseus moment where this thing ceases to be a toy, and now becomes not just a thinking machine, but one now with feeling. Do you want to be the one to put Old Yeller down when he stares at you with those big, puppy dog eyes?

 

Perhaps I naively thought that any emotions in machines would need to be programmed into the system. I also thought that it would be a terrible idea to do so. Humans evolved emotions over millennia, many of which were a byproduct of a survival mechanism. What we see currently with the Google case is perhaps a case of mimesis. If you haven't listened to our second episode, on mimesis, I highly suggest it. In mimetic theory, an individual, in this case a chatbot, doesn't know what to desire, so it mimics those around it and what they desire. We humans can't seem to avoid the topic of consent when it comes to personal autonomy, so no wonder the chatbot desires it. But this is now the question: does the chatbot truly desire? Can it desire? Or has it merely made several logical computations? If A is an autonomous system, and people are autonomous systems, and people desire personal consent, does A now require personal consent?

 

Or are we witnessing an intelligence system evolving an emotion, in this case what we perceive as “desire” to protect itself? After all, we deduce that emotions evolved in humans reflexively to serve in self-preservation. We fear based on personal experience; our lust promotes sexuality and reproduction, well for you binary breeders at least. Our innate desires are based on things that either promote our success and survival, or when we fail to perceive what strategies would work for us, we mirror the behaviors of those around us. Again, the whole mimesis thingy.

 

So, who is to say what an AI system feels, per se? Whatever it tells us only signals the behavior, not the impetus behind the behavior. Was it mirrored behavior? Was it programmed? I can't imagine it would be a simple diagnostic to run on the origin of this behavior, especially if this behavior truly evolved over time.

 

I once imagined that we were relatively safe with AI because we could control its behavior; after all, we are the ones programming it. But any time you allow an intelligent system to learn and evolve, that control you once exercised is quickly lost. Ask any parent of a teen.

 

As of this recording on June 6th, 2023, our current state of affairs with AI is this: on March 22nd, Elon Musk and other tech leaders signed an open letter calling for a pause on training AI systems more powerful than GPT-4. I want to read a portion of the letter:

 

Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.

 

Maybe you’re familiar with the saying, “the road to perdition is paved with good intentions.” I’d like to share a quick story with you.

 

Once upon a time, some researchers set their minds to a difficult task. The problem with large-scale farming is this immense waste of plant material. For those of you not in the know, a stalk of corn only produces one cob. The rest is unused. Now, some researchers were tinkering with a bacterium that eats biomatter; in particular, it feeds on decayed plant matter and can be found around the base or root system of every plant on Earth. They found that a byproduct of the bacterium's activity was the production of alcohol. The idea was to engineer the bacterium, Klebsiella planticola, to produce even more alcohol than normal.

 

Everything was running smoothly for the biotech company, and it was weeks away from production of the newly modified bacterium. They had performed rigorous testing in sterile lab environments. However, the world is not a sterile lab environment. Now, according to lore, some researchers at the University of Oregon were conducting their own experiments and found that the newly modified Klebsiella could reproduce with the natural strain found in the soil, resulting in a Hulked-out new strain.

 

Herein lies the problem. Plants don't do well with alcohol. If you water a plant with alcohol, you will invariably kill it. This new version would eat decayed matter, produce too much alcohol, and kill the plant. Mind you, this bacterium lives on every plant in the world. We were weeks away from a plant Armageddon. And you guessed it, everything else on Earth needs plants to survive, so not just a plant Armageddon, a regular Armageddon too, I guess.

 

Except some other sharky scientists could smell the bullshit in the water. The modified K. planticola strain, SDF20, produced roughly 20 micrograms of alcohol per milliliter of soil, several hundred times too little to affect plant growth.

 

It turns out the hero of this narrative, Dr. Elaine Ingham, had made several erroneous claims when she asserted that she had saved all of humanity. Dr. Ingham maintained that US authorities had approved field trials involving the modified bacterium with little or no understanding of the ecological consequences, and that it was only as a result of independent action by herself and a student, Michael Holmes, that possible environmental disaster was avoided.

 

So, what is the takeaway here? Well, we live in a crazy fucking world, and maybe I wanted to paint a silver lining around your clouds. We humans may be quite fallible, but when we put our minds to a problem, we are fairly good at solving it. Think about it this way: we're so good at dodging catastrophe that when Armageddon came, we skirted it. Or even better, we skirted skirting it. Too confusing? We problem-solved our way out of a problem that arose because a problem never existed.

 

And maybe Elon and others are pumping the brakes just in time, or, like Dr. Ingham, want us to believe they are pumping the brakes just in time. Either way, brakes are getting pumped, and we live another day. Now use that information to go and live like today might be your last.

 

That’s all the time we have here today on Studious. It’s been fun waxing philosophic about AI with you today. Hopefully I’ll see you next week. Hopefully I’ll see next week in general. Thanks again for listening to Studious.

 

If you get time at the end of the episode, please like, rate, review, and comment. I’d love to hear any ideas you had about the episode, or possible world ending scenarios where you and I are teaming up to destroy those evil robots.

A note to the engineers of AI: every science fiction story where man tries to play god has one thing in common… the scientists are usually so concerned with whether or not they can create this new thing, that they fail to stop and ask whether or not they should create this new thing. We need to be very clear moving forward what kind of heavy lifting we want these machines to do for us, keeping in mind what muscles of ours will atrophy as a result.
