Nov 28, 2024
 

In an earlier post, I discussed the importance of governance in Artificial Intelligence (AI) and how, arguably, aside from the initial hurdle of getting started, governance is one of the most significant barriers to adoption, particularly in large enterprises. Concerns such as liability, intellectual property and the risk of introducing incorrect or biased information into AI models are often cited as the biggest impediments to AI integration at scale.

My previous advice encouraged experimentation, emphasizing the importance of gaining momentum, learning from efforts and celebrating small wins. However, as promised, this follow-up aims to define what governance in AI really means. The first paragraph above provides some context, but let’s dive deeper.


 

Governance in AI refers to the set of practices, principles and processes that an organization establishes to develop, deploy and manage its Artificial Intelligence systems. In practice, this encompasses all systems that provide data to the AI, all outputs and outcomes generated by the AI and all stakeholders – individuals, teams, departments – whose jobs, roles and successes are influenced by AI. Because AI is fundamentally built on data, its governance necessarily reaches across nearly the entire organization.

AI is still relatively new to wider society and not fully understood. It is imperative that the governance framework an organization adopts is designed with a clear end-goal in mind and implemented transparently, with widespread knowledge across the organization. This approach helps AI initiatives gain acceptance throughout the organization.

This does not imply that organizations should become paralyzed by over-analysis, as failing to implement AI would likely mean falling behind in today’s competitive landscape. The key to success lies in balancing careful governance with agile action. Trust is a vital component of AI adoption, and proper governance fosters trust by ensuring transparency and accountability.

Additionally, AI systems must be regularly monitored and evaluated to ensure they continue to function as intended, without introducing unforeseen risks or biases. This ongoing governance is essential for maintaining the public’s trust in AI technologies, as well as ensuring compliance with evolving regulations.
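
To make "ongoing governance" concrete, below is a minimal sketch in Python of what such monitoring might look like. The class names, the confidence threshold and the idea of a model-reported confidence score are illustrative assumptions, not any particular vendor's API; a real deployment would plug into the organization's own logging and evaluation stack.

    # Minimal, hypothetical sketch of ongoing AI output monitoring.
    # Names and thresholds are illustrative assumptions only.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class OutputRecord:
        prompt: str
        response: str
        confidence: float  # model-reported or estimated quality score (assumed)
        timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    class OutputMonitor:
        """Logs every AI output and flags low-confidence ones for human review."""

        def __init__(self, confidence_floor: float = 0.6):
            self.confidence_floor = confidence_floor
            self.records: list[OutputRecord] = []
            self.flagged: list[OutputRecord] = []

        def log(self, record: OutputRecord) -> None:
            self.records.append(record)
            if record.confidence < self.confidence_floor:
                self.flagged.append(record)  # queue for human evaluation

        def review_rate(self) -> float:
            # A rising review rate over time can signal drift or a data problem.
            return len(self.flagged) / len(self.records) if self.records else 0.0

    monitor = OutputMonitor()
    monitor.log(OutputRecord("Summarize this contract.", "Summary text...", confidence=0.42))
    print(f"Flagged for review: {monitor.review_rate():.0%}")

The design point is simply that every output is recorded and a measurable signal (here, the review rate) exists to tell you when behaviour is drifting.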

AI governance is multifaceted, but it is both possible and practical. Keeping a human in the loop provides a check against unintended consequences. Diverse stakeholders need to focus on long-term goals, and organizations must engage now to harness the full potential of AI while minimizing risk and fostering trust.

 

Things That Need To Go Away: The ‘AI Can Wait’ Attitude

Nov 13, 2024
 

I speak with buyers on a mission to procure the right products and services for their projects. They include decision-makers figuring out the best applications for their companies.

 

They are all interested in AI, ranging from those experimenting with LLMs (Large Language Models) and ML (Machine Learning) in silos to those eager to unleash AI for all their employees to take advantage of a range of possibilities. More and more, every one of these conversations conveys a simple concept: AI is becoming pivotal to everything we will do.

Did you know that 91% of info/tech companies have mentioned ‘AI’ in their earnings calls at least once so far this year (2024)? Yet, a common concern arises consistently in conversations with enterprise management:

  • They are concerned about AI governance.
  • The compliance team is worried about data integrity and inputs.
  • The security team is daunted by the task of implementing AI that impacts their products.
  • Senior executives are uneasy about the potential implications if anything goes wrong during customer and end-user interactions with the technology.

These are legitimate concerns.

We will address and define what governance in AI is in a subsequent post. For now, it is important to remember that there are no absolutes; we must have the same expectations of AI as we have of any other piece of technology. Put another way, it makes sense to hold AI to the same standard we expect of the Internet or SaaS applications. Having said that, there are actions that technology custodians can and must take.

For AI governance to be done right, we need to meticulously follow responsible data practices. These include:

  • Documenting the origin of the data and educating the user base is a start.
  • Similar transparency is required about where the data will be deployed and which use cases it covers.
  • Abiding by applicable laws is a must and non-negotiable. These include the EU Artificial Intelligence Act, which entered into force on August 1, 2024. Additional regulations are en route from federal and state jurisdictions, such as Canada's Artificial Intelligence and Data Act. These legal frameworks cover the points above, but they will continue to evolve, so keep an eye on their implementation.
  • Keep a Human In The Loop. In other words, designating a person to interact with the LLM ensures human oversight, allowing for timely intervention when needed. The underlying models are getting better and algorithms learn and improve, but HITL preserves the option of human intervention in any case (see the sketch after this list).
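
To illustrate the human-in-the-loop bullet above, here is a minimal sketch in Python of an approval gate that holds an LLM's draft answer until a designated reviewer signs off, and that carries documented data provenance alongside the draft. The generate_draft function, the provenance fields and the console prompt are hypothetical placeholders; they stand in for whatever model, data catalog and review tooling an organization actually uses.

    # Minimal, hypothetical human-in-the-loop (HITL) gate.
    # generate_draft() and the provenance fields are illustrative
    # assumptions, not any specific vendor's API.

    def generate_draft(prompt: str) -> dict:
        """Stand-in for a real LLM call; returns a draft plus data provenance."""
        return {
            "draft": f"Draft answer to: {prompt}",
            "provenance": {
                "data_origin": "internal-knowledge-base-v3",  # documented origin
                "approved_use_cases": ["customer-support"],   # documented deployment scope
            },
        }

    def human_review(draft: str) -> bool:
        """A designated reviewer approves or rejects before anything is released."""
        answer = input(f"Release this response? [y/N]\n{draft}\n> ")
        return answer.strip().lower() == "y"

    def answer_with_oversight(prompt: str) -> str | None:
        result = generate_draft(prompt)
        if human_review(result["draft"]):
            return result["draft"]  # released only after human sign-off
        return None                 # withheld; the human intervened in time

    if __name__ == "__main__":
        print(answer_with_oversight("What is our refund policy?"))

The specific code matters less than the control point it creates: nothing reaches the end user without a recorded human decision in between.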

In tandem, we know that LLMs from IBM, Meta and others allow for scrutiny, and some peace of mind for the user community, because their code and licensing are open source. Other models strive for the same credibility by offering access to their foundation models. This does not imply perfection. It does, however, imply scrutiny and a level of credibility.

Being concerned and diligent is warranted. Not moving forward due to fear of the technology, however, is a recipe for falling behind. To stay competitive, remain informed about AI developments, begin experimenting and consider a low-risk use case for an initial quick win.

 

Things That Need To Go Away: AI At Any Cost And No AI At Any Cost