A recent Maine Compass (“AI at a frightening crossroads,” Dec. 31) asks whether the technology leaders creating today’s spectacular advances in artificial intelligence are doing enough to “impart morality into the advancement of this science?”
As someone who uses AI on a daily basis; as someone who teaches courses on machine learning and other branches of AI and computing; as someone who also recalls the first time I was captivated by HAL in “2001: A Space Odyssey,” I’d like to suggest that we’re asking the wrong question.
Public hysteria and popular movies aside, we are generations away from the truly sentient, self-motivated AI of science fiction. For now, both predictive AI (is this a picture of a cat?) and generative AI (please create a picture of a cat in the style of Vincent van Gogh) are the result of the same learning process. How do you teach an algorithm to identify a cat in a picture? You give it 10,000 pictures, some labeled “this has a cat” and some labeled “no cat here,” and the algorithm processes the pictures and learns a pattern. How is something like ChatGPT trained? By processing billions of sentences of existing text and distilling them into a large model of human language.
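For readers curious to see that labeled-example process in miniature, here is a rough sketch in Python. It assumes the scikit-learn and NumPy libraries and uses purely synthetic stand-in data rather than real photographs; it illustrates the idea, nothing more.

```python
# A minimal sketch of learning a pattern from labeled examples,
# using synthetic (made-up) data in place of real cat pictures.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 10,000 "pictures," each reduced to 64 numeric features,
# labeled 1 ("this has a cat") or 0 ("no cat here").
X = rng.normal(size=(10_000, 64))
y = (X[:, :8].sum(axis=1) > 0).astype(int)  # a stand-in pattern for "cat-ness"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The algorithm "processes the pictures and learns a pattern."
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Whatever pattern it finds comes entirely from the examples and labels we supplied.
print("accuracy on unseen examples:", model.score(X_test, y_test))
```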
The fatal flaw in all this? The patterns these algorithms learn come from, well, us. Humans. Garbage-In, Garbage-Out is an idea as old as computing itself — the result of any program is only as good as the information we give it.
When we train an algorithm to make decisions about criminal sentences, when we ask it to select candidates to interview for a job, or even to pick which newsfeed dominates our nightly doom-scrolling, we are training it on our own behavior. Are those moral algorithms? They simply hold a mirror to society, reflecting both our progress and our flaws.
Do the sentencing guidelines reflect an implicit societal bias in which Black men are still four times more likely to be incarcerated than white men? Was the resume-screening algorithm moral when it learned, from past hiring practices led by an HR director who captained his Ivy League lacrosse team, to favor male applicants who played college sports? We cannot and should not blame the tool for its misuse; the blame lies in the hand that wields it.
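To make that mirroring concrete, here is a hypothetical sketch in the same spirit as the one above (again assuming scikit-learn and invented, synthetic data with made-up features such as “skill” and “sports”): when the historical hiring labels are biased, the model learns the bias as if it were signal.

```python
# A hypothetical sketch of a screening model trained on biased past decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000

# Synthetic applicants: an actual skill score and a "played college sports" flag.
skill = rng.normal(size=n)
sports = rng.integers(0, 2, size=n)

# Historical hiring decisions made by a biased process: playing sports
# counted for more than skill. These labels are the "garbage in."
hired = (0.5 * skill + 1.5 * sports + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

X = np.column_stack([skill, sports])
model = LogisticRegression(max_iter=1000).fit(X, hired)

# The learned weights faithfully reproduce the bias baked into the labels.
print("weight on skill: ", round(model.coef_[0][0], 2))
print("weight on sports:", round(model.coef_[0][1], 2))
```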
Tools do not have morality. AI is a tool. Does a hammer have a sense of morals? How can we expect a tool to have a greater sense of morality than its creator? Isaac Asimov famously formulated the Three Laws of Robotics: First, a robot must not injure a human or, through inaction, allow a human to come to harm; Second, a robot must obey orders from humans, unless those orders conflict with the First Law; Third, a robot must protect itself, as long as that protection doesn’t conflict with the First or Second Law. How can we expect humans to instill this moral code in technology when we can’t live by it ourselves?