Can the US effectively regulate AI use?

While the US is still solidifying domestic AI legislation, global bodies such as the EU are finding common ground.

Generative AI has officially entered the chat, and it has brought with it all sorts of new questions and complications about how it can be used and abused. To make AI use safe and ethical for everyday life, lawmakers, creators and users may all have to work together before the technology outpaces us all.

The Biden-Harris administration has introduced a number of guidelines around the use of AI. There's the executive order, which provides standards for safety while fostering innovation. There's the Blueprint for an AI Bill of Rights, which, by the way, is a white paper, not a binding piece of government policy. There's AI.gov, which includes job listings in the field, and the National AI Advisory Committee, which is tasked with advising the president on all things AI. And that's just at the national level. Many states have proposed or even enacted legislation as well. The Biden administration's materials take an optimistic but cautious approach to dealing with AI while aspiring to protect those who use it, or whose data is used by it, every day.

The use of AI has global implications as well. The EU has passed an act outlining its approach to AI, and there have been a few summits between countries that typically haven't found common ground.

What are some of the issues these policies attempt to prevent? Some have begun to question the quality of data going into shaping AI for public use, citing embedded gender and ethnicity biases in AI-generated content, which could make using AI for nondiscriminatory hiring practices a potential challenge. To make matters worse, generative AI makes the creation of deepfakes significantly easier. Victims of deepfakes have had to advocate for themselves in a legal arena that's still unformed. For instance, at a high school in New Jersey, girls were targets of cyberbullying via AI-created nude photos and videos. They were left without any direct recourse.

Then there are the questions around copyright, creativity, academic integrity, disinformation, misinformation, fraud, and perhaps some areas we humans haven't even yet foreseen. Even though legal action for AI-related crimes could take time, legislation is in the works. A bipartisan task force in the US House is working on ways to add guardrails to AI use, like increased civil and criminal punishments for crimes committed with AI, such as imitating someone's voice. Another potential model for the US could mimic the EU's AI Act, which hits companies with financial penalties for violating its policies. As AI becomes increasingly commonplace, legislators will need to work even faster to outline its limits for public use.
Artificial intelligence, once a new frontier associated with science fiction and futurism, is rapidly becoming commonplace. Although some tech innovators may tout the benefits of AI, many global representatives across the political spectrum are reticent. Countries with adversarial relationships, like China and the U.S., are even hosting joint summits to tackle the issue headfirst. Concerns range from global threats to national security to more domestic issues such as cyberbullying in schools.

The Biden administration's proactive efforts have included airing congressional hearings, announcing a white paper for AI use called the "AI Bill of Rights," and creating a bipartisan task force. However, countries may still struggle to track and penalize AI misuse, especially abuse, within their own borders. Although the FBI has pursued cases that led to sentences for creating generative AI porn, many victims of deepfake materials are left to seek their own path to justice as legislation further solidifies at the state and federal levels.

However, reaching common ground and passing actual enforceable laws is possible, as shown by the European Union's vote on AI. The EU recently passed a landmark act outlining best practices for AI development and use, in addition to financial penalties for those who violate its policies. The EU also created a Compliance Checker to help developers determine the level of risk an AI program could pose before introducing it into the EU's market. The risk could range from minimal, such as video games or spam filters, to limited, such as informing users when they're interacting with a chatbot. High-risk includes "automated processing of personal data to assess various aspects of a person's life, such as work performance, economic situation, health, preferences, interests, reliability, behaviour, location or movement," according to the EU, while prohibited risk includes "social scoring" and "compiling facial recognition databases."

Reaching a global consensus on AI use may be an unrealistic goal, but communities should continue to voice their concerns and questions so that AI's current blind spots sharpen into view for future use.
