Can the US effectively regulate AI use?
While the US is still solidifying domestic AI legislation, global bodies such as the EU are finding common ground.
Artificial intelligence, once a new frontier associated with science fiction and futurism, is rapidly becoming commonplace.
Although some tech innovators tout the benefits of AI, many global representatives across the political spectrum remain wary.
Countries with adversarial relationships are even hosting joint summits to tackle the issue head-on. Concerns range from threats to national security to more domestic issues such as cyberbullying in schools.
The Biden administration's proactive efforts have included airing , announcing a whitepaper for AI use called the "," and creating a
However, countries may still struggle to track and penalize AI misuse, especially abuse, within their own borders.
Although federal courts have sentenced criminals for creating , many victims of such materials are left to seek their own path to justice as legislation solidifies at the state and federal levels.
However, reaching common ground and passing enforceable laws is possible, as shown by the European Union's vote on AI.
The EU recently passed legislation outlining best practices for AI development and use, along with financial penalties for those who violate its policies.
The EU also created a risk-classification framework to help developers determine the level of risk an AI program could pose before introducing it into the EU's market. The risk could range from minimal, such as video games or spam filters, to limited, such as informing users when they're interacting with a chatbot.
High-risk uses include "automated processing of personal data to assess various aspects of a person's life, such as work performance, economic situation, health, preferences, interests, reliability, behaviour, location or movement," while prohibited risk includes "social scoring" and "compiling facial recognition databases."
Reaching a global consensus on AI use may be an unrealistic goal, but communities should continue to voice their concerns and questions so that AI's current blind spots come into focus for future use.