Responsible AI? Suggests the models we have now are irresponsible

The general public are responsible for the growing sophistication of LLMs through the provision of their data. But do they reap any of the rewards? Big tech firms are benefiting from the deployment of AI within their businesses, but a broader discussion around the responsible and ethical use of AI is needed.

“We need to balance the potential of AI with the responsibility we have towards society.”

Speaking at Workday Rising EMEA in November, Sayan Chakraborty provided insight into one of the less talked-about facets of AI – its ethics.

There is no doubt that generative AI will be the phrase of the year. Since ChatGPT’s launch just over a year ago, the use of large language models has ballooned to an extent no one ever expected.

“[AI] is probably as close to a generalised problem solver as human beings have ever invented. But the LLMs we are seeing today have their limitations,” Chakraborty said during an executive panel discussion.

Appearing alongside Carl Eschenbach and Angelique de Vries-Schipperijn, the Workday executives homed in on a core theme of their conference – thoughtfulness in applying AI, and specifically the ethics and regulatory considerations that have to be part of the discussion.

The ethics of LLMs

Much of the debate around the use of AI has been about the march toward efficiency and innovation. But as companies increasingly embed AI into their core processes, the need for a nuanced understanding of its ethical implications becomes paramount.

Central to this ethical quandary is the issue of bias and fairness. The AI of today, a product of its training data, often mirrors the societal biases ingrained in its inputs. In sectors where AI’s judgment carries weight, such as recruitment or customer interactions, the risk of perpetuating existing inequalities is significant. Here lies a profound responsibility for businesses: to diligently counter these biases, ensuring that AI systems are not just intelligent but also equitable.

The narrative of AI’s ethical integration is further complicated by the opacity of its decision-making processes. The complexity inherent in AI algorithms often veils the logic behind their conclusions, particularly in critical sectors like healthcare and finance. This lack of transparency necessitates a pursuit of explainable AI, where decision-making is not only effective but also decipherable and accountable to those it impacts.

Privacy and data security concerns are inextricably linked to this narrative. With AI’s voracious appetite for data, companies face the ethical imperative to safeguard sensitive information. This is not merely a matter of regulatory compliance – though adherence to frameworks like GDPR is non-negotiable – but of maintaining the trust and confidence of customers and the public.

Moreover, the issue of accountability in AI-driven decisions stands at the forefront. As AI takes on more autonomous roles in business operations, the lines of responsibility for its actions, particularly in cases of error or negative impact, become blurred. Ensuring that AI integration does not abdicate human accountability is a challenge businesses must meet head-on.

And finally, the integration of AI in business is not a narrative confined to corporate walls. Its ripples are felt across the workforce and society at large. The automation potential of AI, while driving efficiency, also raises concerns about job displacement and the need for skill adaptation. Furthermore, the societal impacts of AI – from influencing public opinion to potentially exacerbating social divides – place an additional layer of ethical responsibility on businesses.

Questions unanswered

Chakraborty emphasised the gravity of these considerations during the panel discussion, where he articulated a vision of AI that is not just about harnessing computational power but also about navigating the societal ramifications with a deep sense of responsibility.

“We’re at the very beginning stages of understanding its full potential and challenges,” he said. This infancy stage of AI technology, where models are rapidly growing in size and complexity, brings forth a spectrum of considerations.

Large language models, with their ability to process and generate human-like text, have the potential to influence public opinion, cultural narratives, and even political discourse. Chakraborty’s emphasis on “balancing the potential of AI with the responsibility we have towards society” underscores the need for developers and businesses to be acutely aware of the power these models wield.

The amplification of challenges as AI technology spreads widely is a critical point of concern. For instance, Chakraborty highlighted the phenomenon of ‘model collapse’, where AI models trained on the outputs of other AI models rapidly lose their effectiveness. This raises questions about the reliability and long-term sustainability of AI technologies in practical applications.

The efficacy of large language models hinges on the vast datasets they are trained on, which often include personal and sensitive information. This raises significant privacy concerns, especially in scenarios where the data’s provenance and the consent of the data subjects are murky. While a consumer may be happy to share their information with a service provider, it is unclear whether that also means their information can be used further by their service provider’s partners.

Who benefits was the biggest question. Members of the public are providing huge swathes of data to help train large language models – but do they benefit? Not as much as, say, the OpenAIs or Microsofts of the world, who turn these insights into revenue.

Chakraborty’s stress on thoughtful consideration in this area reflects the need for stringent data governance policies, ensuring that the privacy rights of individuals are not compromised in the pursuit of technological advancement.

Moreover, the ethical adoption of AI also involves addressing the potential biases inherent in these models. Since they learn from existing data, there’s a risk of them perpetuating and amplifying societal biases and stereotypes. This aspect of AI ethics is particularly critical as it touches upon issues of fairness and justice. The responsibility lies in actively identifying and mitigating these biases to ensure that AI systems are not only accurate but also equitable and non-discriminatory.

Chakraborty’s perspective brings to the forefront the idea that ethical AI implementation is a multi-dimensional challenge. It is about creating systems that not only advance technological boundaries but also align with ethical standards, societal values, and respect for individual privacy.

This balanced approach requires ongoing vigilance, a commitment to ethical principles, and a proactive stance in addressing the multi-faceted challenges posed by AI technologies. In essence, it’s about steering the AI revolution in a direction that maximises its benefits while minimising its risks to society and individuals.

Regulation must move faster

The panellists placed significant emphasis on the critical importance of data privacy and regulatory compliance, especially in the evolving field of AI. This focus is not merely a matter of adhering to existing standards but reflects a deeper commitment to responsible innovation in a space where technology constantly intersects with personal data.

The EU, with its AI Act, is setting a global precedent for comprehensive AI regulation. The Act categorises AI systems based on risk levels, banning applications deemed an unacceptable risk and demanding transparency for AI-generated content.

Key aspects of the EU AI Act include:

  • Unacceptable Risk: Certain AI systems considered a threat to people will be banned. These include systems that manipulate behaviour, perform social scoring, or use real-time and remote biometric identification, such as facial recognition.
  • High Risk: AI systems that could negatively affect safety or fundamental rights are categorised as high risk. These include AI in medical devices, law enforcement and other specified areas, and such systems must be registered in an EU database.
  • Transparency Requirements for Generative AI: AI systems like ChatGPT must disclose that content is AI-generated and be designed to prevent generating illegal content.
  • Limited Risk AI Systems: These systems must comply with minimal transparency requirements to inform users they are interacting with AI. This includes systems like deepfakes.

Conversely, the US approach is more fragmented, lacking a unified federal framework akin to the EU’s. Instead, various US agencies like NIST, FTC, and FDA are independently issuing guidelines and frameworks to address specific AI concerns, such as privacy and sector-specific regulations.

Recent developments in the US include:

  • National Institute of Standards and Technology (NIST): Released the AI Risk Management Framework (RMF) as a voluntary guide for managing AI risks, promoting trustworthy and responsible AI development and use. The RMF outlines trustworthiness characteristics for AI systems and provides a framework for managing unique AI risks, such as privacy concerns and data quality issues.
  • Federal Trade Commission (FTC): The FTC has signalled increased scrutiny of AI use, issuing warnings to businesses about unfair or misleading AI practices.
  • Food and Drug Administration (FDA): Announced plans to regulate AI-powered clinical decision support tools as devices, indicating sector-specific regulatory intent.

Workday’s approach to data privacy and regulation involves a proactive engagement with both EU and US regulatory frameworks. Chakraborty underscored this point, noting that Workday has “always been engaged with regulators or regulation”.

Such engagement is not just a compliance strategy but also a foundational aspect of Workday’s operational philosophy. The company deals with a vast array of personally identifiable information, making it crucial to integrate strong privacy and security measures at the core of their software and services. Chakraborty mentioned that these regulatory models are “built into the very heart of the software,” highlighting a commitment to data protection that goes beyond surface-level compliance.

The rapid advancement of AI technologies presents new challenges and opportunities, necessitating a dynamic approach to regulatory compliance. The panellists emphasised the need for a framework that allows for technological innovation while ensuring that the data privacy rights of individuals are not compromised. This balance is particularly crucial as AI systems increasingly process and analyse personal data in more sophisticated and potentially intrusive ways.
