
There’s been some expert chatter over the last couple of years about how what everyone calls AI (artificial intelligence) today isn’t really. Rather, it’s an LLM (Large Language Model) that is essentially a correlation engine. In other words, instead of coming up with brilliant or novel ideas or practical suggestions, today’s AI gobbles data to spit out an answer that’s probably correct because the LLM has seen it enough times. Speaking of correlation engines, and as if AI wasn’t giving the masses enough concern, here comes a lawsuit against Eightfold AI’s job applicant screening system, claiming it discriminates and violates United States privacy laws. The suit seeks class certification (and participants).
My view is that it will fail because screening and ranking job seekers isn’t new. It’s mainstream. People ‘voluntarily’ share their information with LinkedIn (whose revenue relies on selling it; been pestered to “verify” yourself recently?), and, perhaps most importantly, any existing laws regarding privacy or discrimination are currently barely being enforced. Two things: first, Eightfold claims it does not scrape data from social media; second, I put quotation marks around ‘voluntarily’ because it isn’t exactly voluntary when opting out would practically exclude you from the job market.
Putting aside whether the suit attracts plaintiffs and gets certified, I’ve argued before, and still believe, that the barn door has been left open and the horse has long since bolted. While valuing privacy and safeguarding your information remains important, the best approach is to actively engage with AI to enhance productivity, gain knowledge and deepen our understanding of it so we can master it, or perhaps just grapple with it, effectively.
It is a constant fight not to become a commodity and, these days, an express path to commoditization is not understanding or using AI.
Back to the topic of Eightfold AI and the suit: Ontario’s new Working for Workers Four Act stipulates that employers with 25+ employees must disclose in public job postings whether AI is used for screening, assessing or selecting candidates. Left unsaid: how the government will supervise compliance. More pertinently, it leaves the door open to the machine processing of applications and the unfairness it purports to eliminate. After all, if there is anything wrong with the practice, shouldn’t the government have banned the use of AI ranking instead of merely asking companies (my assessment: every company with 25+ employees) to simply state it? That would take regulating or banning AI (for this specific use case), and my cynicism tells me that is not in the cards. In essence, the government is saying: “Hey, it is wrong to do it, so make sure you tell people you are doing it!” Or perhaps the government is saying that not telling people is the only wrong. In effect, everyone will now add a sentence telling applicants they are doing what they have always done, and what everyone knew they were doing. In the meantime, AI will continue correlating you with success, hard work, introversion or extroversion, and more.
Real change would have been banning the practice and taking ghost jobs with it. Given that real action is not forthcoming, go ahead and master AI, or it will master you by rendering you a mere data point.
Things That Need To Go Away: Change That Is Not Change

