8 Companies Join Voluntary White House AI Standards

Eight more companies have joined the White House's voluntary AI standards, among them Nvidia, Adobe, IBM, Salesforce, and Palantir. Here is the full list:
Adobe
Cohere
IBM
Nvidia
Palantir
Salesforce
Scale AI
Stability AI

These standards aim to ensure ethical development of artificial intelligence (AI). The agreement includes requirements to disclose AI-generated content, share vulnerabilities, and conduct external testing before releasing AI products.

This addition brings the total number of companies adhering to these standards to 15. Alphabet, Meta Platforms, Microsoft, and OpenAI, the creator of ChatGPT, were among the first to join.

Nvidia, a chipmaker, led a group of eight companies in agreeing to these standards, emphasizing voluntary disclosure, safety, and security for AI tools and services in development.

The White House has actively engaged with the AI industry, organizing meetings with top executives and technology leaders. Meanwhile, lawmakers and regulators are considering the necessary rules as AI becomes increasingly integrated into society.

Companies like Adobe, which markets AI tools through Photoshop, and Stability AI, known for its “Stable Diffusion XL” image-generation model, have joined the commitment. One key aspect of these standards is clear labeling of AI-generated content, such as with a watermark.

Additionally, Palantir, a provider of data-analytics services to government agencies that has credited AI for its recent success, has also joined.

Information sharing across the industry and with various entities, including government agencies, academics, and risk management organizations, is another important provision.

Other companies focused on generative AI development, like Cohere, specializing in large language models, and Scale AI, providing data for AI training, have also joined.

These companies must report on their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use under these standards.

IBM and Salesforce, both components of the Dow Jones Industrial Average, have entered into the agreement, with their own AI platforms in development.

The voluntary commitment requires these companies to prioritize research aimed at minimizing potential harm from AI tools, including addressing security challenges, mitigating harmful biases, and safeguarding privacy.

The agreement is set to take immediate effect, obligating all participating companies to conduct internal and external security testing of their AI systems before release.

It also places significant emphasis on safety and security, including addressing insider threats, and encourages the discovery and reporting of AI vulnerabilities by third parties.

The White House has also taken further steps to enhance AI safety and security, including the proposal of a “Blueprint for an AI Bill of Rights” to safeguard Americans’ rights.

Additionally, the Office of Management and Budget (OMB) is developing a policy that will establish rules governing government workers’ use of AI.
