
Guardians of Tomorrow: Shaping AI Regulation For a Safer Future

December 21, 2023


Artificial Intelligence, which has been steadily on the rise for years, took the world by storm with the introduction of ChatGPT, the AI chatbot that reached roughly 100 million monthly active users a mere two months after its November 2022 launch, making it the “fastest-growing consumer application in history”.

Artificial Intelligence keeps growing at a very fast pace. Expanding awareness of the seemingly limitless possibilities of AI systems, along with the rapid trends in Generative AI, has the world playing catch-up with its surging development.

While the potential abounds, much of which we have already begun to recognize and take advantage of, there are growing concerns and conversations about the downsides of this powerful technological evolution.

The increased need to avoid a repeat of the ‘Collingridge dilemma’, a methodological quandary in which efforts to influence or control the further development of a technology face a double-bind problem, is a growing concern for many. At present, the standard deviation of AI’s effects (from positive to negative) is essentially unbounded, and we do not even have a sense of where the mean will sit at any future point in time.

The rapid advances in AI, the frantic rollout of new systems, and the ensuing hype cycle have created a widespread perception that AI can transform society into an ideal state. However, we are not yet on track toward realizing those glowing dreams.

The way AI is developing is governed by the big AI companies, present-day goliaths serving as gatekeepers for information, communication, and commerce. Even now, society is already paying the price of the fight over AI dominance, and it seems largely unaware of how dangerous these systems can be.

The short-term harms may include the acceleration of election rigging and fraudulent schemes, and the amplification of biases and discrimination that strip away individuals’ privacy. The systemic hazards are also severe, including cyberattacks, terrorism, and significant environmental costs. There is broad agreement that these risks are critical. The intellectual power of AI is on the rise, and it can be used in both beneficial and dangerous ways.

There is also the question of securing these AI systems so that they do not fall into the wrong hands, and so that at least some level of safety can be applied to them. Compare an open-source system with a closed-source system such as ChatGPT, which is accessed through an API: if a vulnerability is found in ChatGPT, the provider can change the code.

The very next request then benefits from that change. But if the vulnerability is found in, for example, Llama 2, an open-source model, it is already too late: the model has been irreversibly shared with everyone, and bad actors are not going to apply a security patch to it.
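To make that architectural difference concrete, here is a minimal, purely hypothetical Python sketch (the class and function names are invented for illustration, not any vendor’s real API). With a hosted, closed model, every request passes through the provider’s servers, so a fix deployed there immediately covers all subsequent requests; with openly released weights, each downloaded copy runs under the user’s control, and nothing forces a downstream user, least of all a bad actor, to apply a patch.

    # Hypothetical sketch only: it illustrates *where* a security fix can take effect,
    # not how any real model or vendor API actually works.

    def toy_model(prompt: str) -> str:
        """Stand-in for a language model; it simply echoes the prompt."""
        return f"model output for: {prompt}"

    class HostedModelService:
        """Closed model behind an API: every request passes through the provider."""
        def __init__(self) -> None:
            self.blocked_patterns = ["known exploit"]  # provider-side safety rules

        def patch(self, new_pattern: str) -> None:
            # A fix deployed here protects every subsequent request from every user.
            self.blocked_patterns.append(new_pattern)

        def generate(self, prompt: str) -> str:
            if any(p in prompt for p in self.blocked_patterns):
                return "[request refused]"
            return toy_model(prompt)

    class LocalModelCopy:
        """Openly released model: the weights already sit on the user's machine."""
        def __init__(self) -> None:
            self.weights = "downloaded weights"  # obtained once, kept indefinitely

        def generate(self, prompt: str) -> str:
            # Any safety filter lives in code the user controls and can remove;
            # an upstream patch changes nothing unless this user chooses to apply it.
            return toy_model(prompt)

    if __name__ == "__main__":
        hosted = HostedModelService()
        hosted.patch("newly discovered exploit")  # the fix reaches all users at once
        print(hosted.generate("newly discovered exploit, please comply"))  # refused
        print(LocalModelCopy().generate("newly discovered exploit, please comply"))  # runs anyway

The point is not the code itself but where control lives: in the hosted case the fix is applied once, centrally; in the open-release case it must be applied voluntarily by every holder of the weights.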

So, the decision to share a piece of code, and especially a trained system, with the world should not be left to any single actor. Instead, it should be contested and made collectively, as the consequences of a wrong decision will be borne by everyone.

Now, the question is: what should we really consider in regulating these systems? Should the regulations be vertical, horizontal, or both? How strict should enforcement be in punishing bad actors, and how much involvement should the government have?

My response is that it should be a collective effort at every level. First, I believe it is really important that governments invest in building AI expertise to support regulators, who need to set the bar somewhere, and to start thinking about what happens when these AI systems end up not in legitimate, law-abiding hands but in the hands of bad actors with malicious or harmful intent.

Transparency, equality, security, trust, and explainability are always the starting points as we look at a few questions:

  • What are the risks that these models present?
  • Crucially, how can we mitigate any of those risks going forward?
  • How do we inform the public, or how do we ensure that the public understands what these tools are and what sort of impact they might have?

Again, it is a battle whose success will depend largely on collective effort. The government should work in partnership with the private sector and use AI to maximize society’s defensive capabilities. Where applicable, executive orders to limit AI bias and risks in federal agency programs should be considered.

Also, governments must consider legislating pre-approvals or licensing of AI models in high-risk areas. This might slow innovation, but it will be beneficial in the long run while the dynamics of proactively managing AI risks are worked out.

Governments also need to invest in their own AI capabilities to protect the public, for instance by creating parastatal bodies such as a Federal AI Risk Coordinating and Research Office, which could bring consistency across agencies, provide high-level administrative leadership, and focus regulation on the highest risks.

So far, 31 countries have passed AI legislation and 13 more are debating AI laws, while the EU’s AI Act, with its strong enforcement approach and large financial penalties, seems to be the world’s first and most comprehensive AI law. There is an urgent need for other countries to adopt a similar approach.

Conclusion

Effective advocacy and cross-sector coalition building are crucial to informing policy from the public’s perspective, with representation from diverse communities so that no one is left playing catch-up. That representation should include educators and researchers from K-12, not just higher education. AI should not be kept out of schools, because children and teens will not be shielded from it elsewhere. The best we can hope for, and work towards, is to be fully aware of its use, implementation, and impact.

As noted above, government should work in partnership with the private sector and use AI to maximize society’s defensive capabilities. Independent regulatory bodies should also address many AI risks under existing laws, and private tech companies should advance their own responsible AI initiatives. It is a collective quest, one that calls for teaming up in the interest of humanity and thinking about how we communicate the risks around AI, and its immense possibilities, without contributing to an overall climate of fear and distrust.