By Bryan Ritchie and Russ Warner – For years, computer hacks have endangered our data, our privacy, our identities and our finances. Sometimes viruses or ransomware cost us time, deprive us of access, or cause other inconveniences like lost work and productivity. Other times, as with a power grid attack or sextortion, they cause real pain and suffering.
Technological Arms Race
The response to such attacks has historically been a technological arms race, in which hackers and cybersecurity experts take turns one-upping each other. As hackers grow more sophisticated, cybersecurity experts respond with more advanced algorithms and technologies to thwart them.
What is lost in this competitive vortex is the realization that the heart of the problem is a bad actor who intentionally or carelessly exploits the system. More often than not, it's poor password security, lost access cards, shelfware, and careless clicks that open the door to hacker attacks, not deficiencies in the code. Closing this door has more to do with managing the people involved than with the technology they use.
AI Arms Race
As ChatGPT and other artificial intelligence (AI) solutions come to market, we're amazed at the power and breadth of their applications. But as with previous technologies, AI's use will carry serious ethical implications. Sadly, bad actors will again create pressure to engage in an arms race, this time with AI.
AI will only make the problem harder to solve through traditional means. How can we know whether someone used ChatGPT to do something that should have been done by a person? Examples include academic tests, papers, and homework; original writing, ideas, or designs; social influence on politics and business; applications for credit; personal and health data; resumes; and so forth.
To address this challenge, experts are repeating the errors of the past by developing AI tools to outsmart the use of AI. In particular, new algorithms are being developed to identify cases where AI was used. But as with cybersecurity before it, the heart of the issue is the person exploiting the system. We need solutions that can evaluate the ethics of the person, something AI cannot do.
Assess the Operator
The ideal answer is to apply new technologies that can assess the operator or user, not the AI. We need the capability to simply ask someone if they used AI to produce their work and then confidently trust the response.
But is this possible? Humans are notoriously untrustworthy. Lying is easy. Or is it?
Truth Verification Advancements
Recently, truth verification technologies have made quantum leaps of their own. A new app with advanced algorithms can now determine, on a mobile phone, whether a person is telling the truth. It's about 80% accurate.
With a simple five-minute test, you can ask a student, "Did you use AI on any assignment or test this semester?" Or, even more specifically, "Did you use AI in writing your dissertation?" Getting reliable answers to these questions would thwart AI misuse far more effectively than running an algorithm over all of a person's academic output to determine whether AI generated it. The same is true of the other use cases mentioned above: "Did you use AI on this deliverable?" "Did you use AI to develop this data (health or otherwise)?" "Did AI generate these visuals?" And so forth.
Bottom line? Assessing the person at the center of the AI application will produce better and faster results than having AI assess the ethical or authentic use of AI!
Stop the AI Arms Race
Historically, arms races are never won; they just drag on. Hopefully, with new truth verification technologies, we can abandon the strategies of technological one-upmanship we've applied in the past. It's time to find lasting answers to the ethical questions raised by promising new technologies like AI.