
Is the new Coalition for Secure AI just a publicity stunt?

To address the growing concerns around artificial intelligence (AI) security, major tech companies including Google, IBM, Intel, Microsoft, NVIDIA, and PayPal have joined forces to launch the Coalition for Secure AI (CoSAI).

Announced on July 18 at the Aspen Security Forum, this open-source initiative aims to develop standardised practices and tools for creating secure AI systems, potentially reshaping the landscape of AI development and deployment.

The Coalition for Secure AI (CoSAI), hosted by the OASIS global standards body, purports to address the fragmented landscape of AI security, where developers often grapple with inconsistent guidelines and standards. However, critics might argue that this fragmentation is partly due to the very companies now claiming to solve it.

“CoSAI’s establishment was rooted in the necessity of democratising the knowledge and advancements essential for the secure integration and deployment of AI,” said David LaBianca, Google’s representative and CoSAI Governing Board co-chair.

The initiative brings together a diverse range of stakeholders, including industry leaders, academics, and experts. In addition to the founding Premier Sponsors, other tech giants such as Amazon, Anthropic, Cisco, OpenAI, and several others have joined as founding Sponsors.

CoSAI’s scope is comprehensive, addressing various aspects of AI security including:

  1. Securely building, integrating, deploying, and operating AI systems
  2. Mitigating risks such as model theft, data poisoning, prompt injection, scaled abuse, and inference attacks
  3. Developing security measures that address both classical and unique risks associated with AI systems

To kickstart its efforts, CoSAI has announced three initial workstreams:

  1. Software supply chain security for AI systems
  2. Preparing defenders for a changing cybersecurity landscape
  3. AI security governance

Omar Santos, Cisco’s representative and CoSAI Governing Board co-chair, emphasised the collaborative nature of the initiative: “We are committed to collaborating with organizations at the forefront of responsible and secure AI technology. Our goal is to eliminate redundancy and amplify our collective impact through key partnerships that focus on critical topics.”

This collaborative spirit, whilst commendable, seems at odds with the competitive nature of these companies in the AI race. It remains to be seen whether they can truly set aside their individual interests for the greater good of AI security.

Industry backed?

The announcement of CoSAI has been met with enthusiasm across the tech industry, with participating companies emphasising the importance of collaboration in addressing AI security challenges.

Paul Vixie, Deputy CISO at Amazon Web Services, stated, “As a sponsor of CoSAI, we’re excited to collaborate with the industry on developing needed standards and practices that will strengthen AI security for everyone.”

Anthropic’s Chief Information Security Officer, Jason Clinton, highlighted the alignment with their company’s mission: “As a safety-focused organisation, building and deploying secure AI models has been core to our mission from the start. We’re proud to partner with other industry leaders to help foster a secure AI ecosystem.”

The formation of CoSAI comes at a critical time when AI technologies are rapidly advancing and being integrated into various sectors. As AI systems become more complex and influential, ensuring their security becomes paramount.

“We’ve been using AI for many years and see the ongoing potential for defenders, but also recognize its opportunities for adversaries. CoSAI will help organisations, big and small, securely and responsibly integrate AI – helping them leverage its benefits while mitigating risks,” said Heather Adkins, Vice President and Cybersecurity Resilience Officer at Google.

This balanced view is commendable, but it also underscores the immense power these companies wield in shaping the future of AI – a power that some argue requires more independent oversight.

The initiative is expected to have far-reaching implications for the AI industry:

  1. Standardization: By developing common frameworks and best practices, CoSAI could help create a more unified approach to AI security across the industry.
  2. Democratization: The open-source nature of the initiative aims to make secure AI practices accessible to organizations of all sizes, not just tech giants.
  3. Trust-building: As AI systems become more prevalent in critical applications, standardized security practices could help build public trust in these technologies.
  4. Innovation acceleration: With clearer security guidelines, companies may be able to innovate more quickly and confidently in the AI space.
  5. Regulatory influence: The standards developed by CoSAI could potentially inform future regulations and policies surrounding AI technologies.

The initiative promises a lot on paper, but these lofty goals raise questions: will the standards truly be inclusive, or will they primarily benefit the big players? And can these companies be trusted to self-regulate effectively?

As CoSAI moves forward, it invites contributions from the wider tech community.

But will smaller players have a meaningful say, or will the agenda be dominated by the founding giants?
