Imagine our technology world racing at 200 miles per hour to implement or roll out some form of AI or generative AI capability. Everything is zooming by so fast: workplaces are changing, employees are being laid off because of AI, and some folks call this the gold rush of the modern era. Do the Sam Altmans of this world ever stop to think for a moment about what if? What if AI suddenly gets hold of, or circumvents access controls to, nuclear weapons, and humanity faces the mercy of a system that has never been known for its conscience? What if something goes wrong when we do not have proper guardrails? Why should we care about ethical considerations in the first place? What if AI becomes superhuman and humanity loses control of its creation? This sounds like science fiction from Hollywood, until we realize it is not.
Ethical considerations and guardrails matter to society whenever we talk about technology or any form of human endeavor, whether space travel or nuclear weapons. Artificial intelligence (AI) is no different. What concerns many folks today is the pace at which big technology companies are rushing to implement AI while legal frameworks across the globe, including at the United Nations, are not ready. As Levin and Downes pointed out in their HBR article “Who Is Going to Regulate AI?”, many countries worldwide are simply not prepared. We have millions of AI users around the world, and yet there is no legal framework in place should something go wrong. As much as I believe organizations have a fiduciary and moral obligation to act responsibly, the truth is that many organizations are placing profits ahead of ethics. This in many ways is symptomatic of what is known as corporate greed. Few today need a reminder of the 2008 financial crisis, which began with the collapse of Lehman Brothers in September of that year. We have seen time and again that corporate greed knows no boundaries, and all too often company executives take shortcuts to drive profit over fiduciary responsibility.
In a world where almost everyone is engaged in some form of content creation on social media, have we ever paused to think what would happen if a bad actor used this good AI technology to cause massive data breaches on a global scale? This sounds fictional, right? The interesting, or rather scary, part is that it is a real possibility. The Upwork Team cited a disturbing scenario in which a fake video replaced the voice of Mark Zuckerberg, Facebook founder and Meta CEO, with that of an actor. Such a nefarious, manipulated video had the potential to reach 3 billion Facebook users. Imagine for a second if that single incident had caused data and privacy breaches for 3 billion unsuspecting users. While no data breach occurred, the incident alone raises more questions than answers, especially in the wake of generative AI. A single incident like this further amplifies why we need guardrails at the company, state, country, and United Nations levels. The United Nations has created an AI Advisory Body with a mandate to recommend how to govern AI for humanity. This is a good start, but certainly more work is required to produce a framework for governing the use of AI for the benefit of all humanity.
In a separate case, the Upwork Team article states that Robert Williams of Detroit was wrongfully arrested in 2020 due to a facial recognition software error. While this may sound like an isolated incident, the use of AI tools without proper guardrails and oversight can easily turn someone’s life upside down in an instant. This example sends a clear message, even to law enforcement, that AI should supplement good detective work, not replace it, and should always be used with proper oversight.
What if bad actors get hold of AI? It is no secret that countries like Russia have used technology to meddle in elections in the USA and elsewhere around the world. This is not speculation; it has happened before. In her article, Douglass delivered a more chilling message: bad actors need only basic Generative Pre-trained Transformer (GPT) AI systems to manipulate and bias information on platforms, rather than more advanced systems such as GPT-3 and GPT-4, which tend to have more guardrails to mitigate bad activity. An article published by George Washington University predicts that daily bad-actor AI activity will escalate by mid-2024, increasing the threat that it could affect election results. With more than 50 countries set to hold national elections in 2024, analysts have long sounded the alarm about bad actors using AI to disseminate and amplify disinformation during election seasons across the globe. Such research from reputable institutions amplifies the message that the world needs an AI governing framework.
In conclusion, the cases above provide ample evidence that the world is driving AI adoption far faster than we are putting proper guardrails and governing frameworks in place. The pace is too fast for many to even comprehend, let alone grapple with, the high odds of AI being used to interfere with the world’s democratic systems or falling into bad actors’ hands. In my view, technology leaders are currently preoccupied with profit margins and shareholder value while devoting very little attention to their fiduciary obligations and ethical considerations. This in turn revives the century-old debate of corporate social responsibility versus corporate greed. I believe in the transformative nature of AI, which makes businesses more efficient, nimble, agile, and competitive. I also believe corporate social responsibility should coexist with profit margins in the AI race, under a framework where guardrails and ethical and human considerations are front and center, not an afterthought.