I speak with buyers on a mission to procure the right products and services for their projects. They include decision-makers figuring out the best applications for their companies.
They are all interested in AI, ranging from those experimenting with LLMs (Large Language Models) and ML (Machine Learning) in silos to those eager to unleash AI for all their employees to take advantage of a range of possibilities. More and more, every one of these conversations conveys a simple concept: AI is becoming pivotal to everything we will do.
Did you know that 91% of info/tech companies have mentioned ‘AI’ in their earnings calls at least once so far this year (2024)? Yet a common set of concerns arises consistently in conversations with enterprise management:
- They are concerned about AI governance.
- The compliance team is worried about data integrity and model inputs.
- The security team is daunted by the task of implementing AI in ways that touch their products.
- Senior executives are uneasy about the potential implications if anything goes wrong during customer and end-user interactions with the technology.
These are legitimate concerns.
We will address and define what governance in AI is in a subsequent post. For now, it is important to remember that there are no absolutes: we must hold AI to the same expectations we hold for any other piece of technology. Put another way, it makes sense to think of AI the way we think of the Internet or SaaS applications. That said, there are actions that technology custodians can and must take.
For AI governance to be done right, we need to meticulously follow responsible data practices. These include:
- Documenting the origin of the data and educating the user base are a start.
- Similar transparency is required about where the data will be deployed and which use cases it covers; a sketch of what such a record might look like follows below.
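As an illustration, here is a minimal sketch of a provenance record that captures both points in one auditable place. The `DatasetProvenance` structure and its field names are assumptions made for this example, not an established standard; the takeaway is simply that origin, approved use cases, and deployment targets get written down where compliance can review them.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetProvenance:
    """Illustrative record of where data came from and how it may be used."""
    name: str
    source: str                    # where the data originated (vendor, internal system, public corpus)
    collected_on: date             # when the data was gathered
    license: str                   # licensing terms governing the data
    approved_use_cases: list[str] = field(default_factory=list)  # use cases the data is cleared for
    deployment_targets: list[str] = field(default_factory=list)  # where models built on it may run

# Example: a record the compliance team can review alongside the model itself
support_corpus = DatasetProvenance(
    name="support-tickets-2023",
    source="internal CRM export",
    collected_on=date(2024, 1, 15),
    license="internal use only",
    approved_use_cases=["support chatbot fine-tuning"],
    deployment_targets=["customer support assistant"],
)
```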
Abiding by applicable laws is non-negotiable. These include the EU Artificial Intelligence Act, which entered into force on 1 August 2024. Additional regulations are en route from federal and regional jurisdictions, such as Canada’s Artificial Intelligence and Data Act. The legal framework covers the practices above, yet both the rules and their implementation will keep evolving, so keep an eye on how they develop.
- Keep a Human In The Loop (HITL). Designating a person to interact with the LLM ensures human oversight, allowing for timely intervention when needed. The underlying models are getting better and their algorithms learn and improve, but HITL keeps human intervention available in any case; see the sketch below.
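Here is a minimal sketch of that review gate, assuming a simple console workflow. `generate_draft` and `send_to_customer` are hypothetical placeholders for the model call and the delivery channel; the pattern is only that a designated reviewer approves, edits, or rejects each response before it reaches the end user.

```python
def generate_draft(prompt: str) -> str:
    """Hypothetical placeholder for the call to the underlying LLM."""
    return f"Draft answer to: {prompt}"

def send_to_customer(message: str) -> None:
    """Hypothetical placeholder for the delivery channel (email, chat, etc.)."""
    print(f"Sent: {message}")

def human_in_the_loop(prompt: str) -> None:
    """Route every model draft through a designated human reviewer before release."""
    draft = generate_draft(prompt)
    print(f"Model draft:\n{draft}\n")
    verdict = input("Approve (a), edit (e), or reject (r)? ").strip().lower()
    if verdict == "a":
        send_to_customer(draft)
    elif verdict == "e":
        send_to_customer(input("Enter the corrected response: "))
    else:
        # Timely intervention: nothing reaches the end user
        print("Draft rejected; no response sent.")
```

The same gate generalizes beyond a console: the review step could be a ticket queue or an approval UI, as long as a human sits between the model and the customer.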
In tandem, LLMs from IBM, Meta, and others allow for scrutiny and give the user community a degree of peace of mind because they are open source in code and licensing. Other models strive for the same credibility by offering access to their foundation models. This does not imply perfection, but it does invite scrutiny and confer a level of credibility.
Being concerned and diligent is warranted. Refusing to move forward out of fear of the technology, however, is a recipe for falling behind. To stay competitive, remain informed about AI developments, begin experimenting, and consider a low-risk use case for an initial quick win.
Things That Need To Go Away: AI At Any Cost And No AI At Any Cost