No, Russia and China are not on the list
By Lindsay Clarke
September 6, 2024
The US, EU, UK, and other nations have signed a legally binding treaty setting out a framework for the implementation of AI that is underpinned by human rights and democratic values.
The agreement follows two years of talks involving more than 50 countries, which also included Canada, Israel, Japan, and Australia. It sets out accountability for harm and discrimination resulting from the application of AI in business and society.
Speaking to the Financial Times, a Biden administration official said the US was "committed to ensuring that AI technologies support respect for human rights and democratic values."
The new framework agreed by the Council of Europe commits parties to collective action to manage AI products and protect the public from potential misuse.
The agreement was signed against a backdrop of high expectations from governments, which see AI as likely to boost productivity and, for example, increase cancer detection rates - despite concurrent concerns from industry over hallucinations and inaccuracy. On the regulatory side, fears persist that AI could also spread misinformation or produce biased automated decision-making.
The UK's lord chancellor and justice secretary, Shabana Mahmood, who signed the agreement, said the technology has the capacity to radically improve the responsiveness and effectiveness of public services and "turbocharge" economic growth.
"This convention is a major step to ensuring that these new technologies can be harnessed without eroding our oldest values, like human rights and the rule of law," she said.
Representatives including European Commission vice-president for values and transparency Věra Jourová signed the Framework Convention on Artificial Intelligence during a conference of ministers of justice in Vilnius, Lithuania.
The European Commission, the executive arm of the EU, said the new convention was consistent with the recently introduced EU AI Act, including a number of overlapping concepts such as a risk-based approach and key principles for trustworthy AI.
The Commission said the convention is set to apply to activities within the life cycle of AI systems undertaken by public authorities or the commercial sector acting on their behalf.
"As regards private sector actors, while they still must address risks and impacts from AI systems in a way that aligns with the Convention's goals, they have the option to either apply the Convention's obligations directly, or implement alternative, appropriate measures," it said in a statement. ®
Source: theregister.com