
Adobe, IBM and Nvidia are among eight new companies that have signed up to US President Joe Biden’s voluntary AI governance scheme, the White House has announced.

Originally aimed at the big AI labs OpenAI, Anthropic and Google DeepMind, the initiative is designed to encourage signatories to uphold specific governance standards, including watermarking AI-generated content and “facilitating third-party discovery and reporting of vulnerabilities in their AI systems”. The news comes as a new study reveals business leaders are delaying AI projects due to a lack of guidance and clear regulation.

Adobe is one of the new signatories to the White House voluntary code that, among other provisions, commits the company to watermarking AI content. The agreement also holds signatories to safety commitments such as testing output for misinformation and security risks, sharing information on ways to reduce risk with the wider community, and investing in cybersecurity measures. Each of the organisations agreed to begin implementation immediately in work described by the administration as “fundamental to the future of AI”.

The White House said at the time of the initial announcement that the initiative underscores three important principles for the development of artificial intelligence: safety, security and trust. This mirrors similar principles outlined in the UK AI White Paper published earlier this year. “To make the most of AI’s potential, the Biden-Harris Administration is encouraging this industry to uphold the highest standards to ensure that innovation doesn’t come at the expense of Americans’ rights and safety,” the White House declared. “The track record of AI shows the insidiousness and prevalence of these dangers, and the companies commit to rolling out AI that mitigates them.”

It is thought that better regulation and governance will help speed up the adoption of AI across the marketplace. Some companies, including IBM, are working on tools to improve transparency and reportability, including tracking the data used to train models and reporting on each stage of training. However, the Fact Sheet also urges companies to avoid publishing model weights and specifics until all security risks have been addressed. The new voluntary code is seen by the White House as the foundation of a global agreement on AI development.

Similar commitments to release models for third-party testing have been secured by the UK government, and the principles outlined in the Biden administration’s voluntary scheme are similar to those the UK has set out as priorities for debate at its upcoming Bletchley AI Safety Summit. These light-touch approaches contrast with the more prescriptive attitude towards AI adopted by other jurisdictions, like the EU in its upcoming AI Act.

An investigation by global legal practice DLA Piper found that the current paucity of regulatory frameworks is a major reason for this anxiety among business leaders. Assembled from interviews with 600 senior executives at companies with an average annual turnover of $900m or more, the report also reveals that over a third of those surveyed were not confident that their firms are complying with current AI law, with another 39% unclear on how regulation is evolving. Nearly half of respondents, meanwhile, said that AI projects have been interrupted, paused or rolled back due to data privacy issues and the lack of a governance framework.

Responding to DLA Piper’s report, Jeanne Dauzier, intellectual property and technology co-lead of the firm’s Global AI Practice Group, urged enterprises to be more cautious in rolling out new AI products. “Two clear messages ring out from this research,” she said. “First, there is an urgency to adopt AI – this is not an area where businesses feel able to wait and see.”
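To make the watermarking commitment discussed above concrete, the sketch below shows a toy provenance scheme: a piece of generated content is tagged with a signed manifest declaring it AI-made, which a verifier can later check against the content. This is purely illustrative; the field names, shared-key HMAC signing and manifest layout are simplifications of the author's choosing, not the mechanism Adobe or the White House code actually specifies (real systems such as C2PA-style content credentials use certificate-based signatures embedded in the file itself).

```python
# Toy sketch of AI-content provenance tagging. Hypothetical scheme:
# real content-credential systems are far more elaborate.
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # illustrative; real schemes use PKI, not a shared secret


def attach_provenance(content: bytes, generator: str) -> dict:
    """Build a signed manifest recording that `content` was AI-generated."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "ai_generated": True,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_provenance(content: bytes, manifest: dict) -> bool:
    """Check that the manifest matches the content and carries a valid signature."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    if claimed["content_sha256"] != hashlib.sha256(content).hexdigest():
        return False  # content was altered after tagging
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])


image = b"\x89PNG...fake image bytes"
m = attach_provenance(image, "example-model-v1")
print(verify_provenance(image, m))        # True
print(verify_provenance(b"tampered", m))  # False
```

The design point the code illustrates is the one the voluntary code gestures at: provenance travels with the content, so any downstream consumer can check whether material is AI-generated and whether it has been tampered with since it was tagged.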
