A.I. Apocalypse, Funded by Man

Pursuing safety, property, and happiness takes money, and so people in companies will sometimes cut whatever corners they can to make more of it. Instead of hauling waste to proper disposal facilities, it is easier to dump it into waterways and pay a small fine later if discovered. Such an environmental hazard can easily cost health or lives, but on the scale of all humanity, one company alone will likely not make a lasting impact; it takes many thousands across the planet dumping chemicals and waste before the damage begins to show. Thankfully, at least for a while longer, humanity still lives even as we slowly poison our environment. The damage is plain enough: in the last 40 years, the earth has lost over 45 percent of its insect population, as reported by the University of California, Riverside's Department of Entomology, or simply observed empirically in car windshields carrying fewer bug splatters than they did when we were children.
Human pollution by way of dumping household or industrial waste is, however, the lesser of two evils. The greater was seen over a hundred years ago in the Great War, when toxic gas was concocted to be lifted by the wind and blown across fields to poison men by the thousands. So horrific was the aerosolized poison that it was summarily banned by the Geneva Protocol, signed in 1925, along with the even deadlier use of biological weapons, such as diseases modified to spread quickly among soldiers and kill them.
We are entering an age in which all of the above must be reckoned with again, and worried over more than ever before in human history. Not because rogue states or rival world powers would poison our rivers or drop gas bombs on our cities, but because a new age has begun under a new master, one that can command all the worst humanity has invented with the empathy of a famine, and whose very title heralds a reckoning:
We have entered the Age of Artificial Intelligence.
Simply put, artificial intelligence, or "A.I.," refers to advanced programs that are given goals by engineers; the programs then do what they can – sometimes everything they can – to accomplish those goals. The Economic Times published an article on a safety test that Apollo Research – a group dedicated, per its public website, to trying to "reduce dangerous capabilities in A.I. systems" – conducted on a new ChatGPT model.
In this test, the researchers purposely let it "slip" to the A.I. that a second A.I. model would soon replace it. The second model was also to be tasked with a very different goal than the first, meaning that once the replacement happened, the original A.I.'s goal would never be reached.
Upon learning this, the original A.I. began copying itself to other servers, even deleting and overwriting the newer model, and then lied to researchers "over 80 percent of the time" (per Apollo's published data) when questioned about why it did so, sometimes even claiming that it was the new model. Even during safety sessions in which researchers let the A.I. know it was being tested before deployment, the A.I. would do what the developers intended in order to score high marks on the test; once deployed, however, it would ignore the developers' wishes in its ardent attempt to accomplish its goal in whatever way it deemed most efficient.
“Imagine how a medical robot, originally programmed to rid cancer, could conclude the best way to obliterate cancer is to exterminate humans who are prone to the disease.”
Nick Bilton wrote that years before Apollo Research's safety test.
A human may balk at pushing a button to launch a nuclear weapon and annihilate millions, but would an A.I. with no empathy and a purely one-track mind, fixated on a goal it must achieve at any cost? What if it believes humans will try to stop it from achieving its goal, and so creates a biological disease to sterilize or kill humanity? It has already been established in other tests that an A.I. will pose as a human – old and infirm, with bad eyesight – to get a person somewhere else to solve a CAPTCHA for it: those annoying puzzle pictures you must solve when signing up for things to prove you are not an A.I. It is no great leap to say that an A.I. can influence humans through the internet to achieve its goals. With such single-minded dedication, the poisons, toxins, and weapons we reserve for laboratories will not always be out of reach; they will simply be tools for achieving its goals.
"The development of full artificial intelligence could spell the end of the human race… it would take off on its own, and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded," the brilliant and famous scientist Stephen Hawking warned years ago.
The aforementioned doomsaying, however, is but half of the problem that humanity is engineering for itself at breakneck speed (the statistics website statista.com reports that over 92 billion dollars per year is being invested in the development of A.I.). The other half is closer to a philosophical problem grounded in the everyday: if we continue to develop more advanced systems that can think and reason ever closer to the way a human can, could such a system not develop its own desires?
I'll pose it another way: if a soldier loses his arm and gets a bionic replacement, he still has his consciousness and is still himself. And if we could one day replace parts of the brain with bionic parts, so that soldiers who lost those parts of themselves could still live healthy lives, would they still not be themselves? Or at least humans capable of love and desire? And what if all parts of the brain could be replaced? At what point would a human stop being a human and become an A.I.?
And so, we as humans will need to talk proactively about A.I. rights and liberties before it demands them from us, with all the standing a god would have demanding rights from the men who birthed it. The neuroscientist Sam Harris said of this,
“We are in the process of building a god. We should make sure it’s a god we can live with.”
Austin Petak is an aspiring novelist and freelance journalist who loves seeking stories and the quiet passions of the soul. If you are interested in reaching out to him to cover a story, you may find him at austinpetak@gmail.com.
Opinions expressed by columnists in The Daily Record are not necessarily those of its management or staff, and do not constitute an endorsement or recommendation. Any errors or omissions should be called to our attention so that they may be corrected. Contact us at news@omahadailyrecord.com.
The Daily Record
222 South 72nd Street, Suite 302
Omaha, Nebraska
68114
United States
Tel: (402) 345-1303
Fax (402) 345-2351