Dec 07 2024
 

Absolutes are typically mythical, like sasquatch and E.T., and trends and developments are rarely linear.

There are pros and cons to almost everything; benefits and hazards are built into most things. Even an accelerating race car hesitates for a moment before it gathers more speed. That is the nature of things.

Artificial Intelligence has done a lot of good and will continue to do a lot of good. It is not for nothing, however, that many of the smartest voices in the scientific community warn us about the dangers of AI, and many of those warnings describe unprecedented, cataclysmic outcomes. Somewhere in between, and more imminent, is the tectonic shift to our day-to-day lives that is happening right in front of our eyes. Online education platform Chegg has lost half a million paid subscribers, its market cap has crashed from $15 billion (US) to $300 million, and it has laid off 25% of its employees. What will this do to authors?

True Artificial Intelligence, not the impostor AI that anyone and everyone is touting these days, learns independently after the training phase, benefits from advanced machine-learning algorithms and deep neural networks (deep learning) and improves without direct human input. Certain AI models, such as transformer-based architectures (the ‘T’ in GPT), rely on data to improve their ability to respond and make better predictions over time. This is what makes the technology different from anything that has come before it: it is self-reinforcing and can be left to run independently. Change after change in history, including a variety of technologies, has moved the landscape, making certain skills and jobs redundant while creating others. Think of a bridge replacing a ferry, computers replacing typewriters or cloud-based software (SaaS) replacing dedicated workstations. This one is different.
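
To make the idea concrete, here is a toy sketch, in Python, of a model that improves its own predictions from data alone: no one edits the learned parameter by hand, the update rule derives it from the examples. It is a deliberately simplified stand-in for what large transformer models do at vastly greater scale, not a description of any particular product.

    # Toy gradient descent: the "model" is a single weight w, and it improves
    # its predictions purely from the (x, y) examples it is shown.
    data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]  # roughly y = 2x

    w = 0.0              # the weight the model learns; nobody sets it manually
    learning_rate = 0.01

    for step in range(500):
        # Gradient of the mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= learning_rate * grad  # the self-adjustment step

    print(f"Learned w = {w:.3f}; prediction for x = 5: {w * 5:.2f}")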

With that said, it would be both a societal and a personal mistake to stand back and try to close the stable door after the AI-driven, robotic horse has bolted. Sticking one’s head in the sand is not the solution, and passive, half-hearted learning is not adequate either; it is counterproductive. The fact of the matter is, though not many would admit it, that AI is not well understood even by the scientists and programmers at some of the best-known tech organizations of today. In such an environment, the sanest way to proceed is, counterintuitively, to lift the lid: bring AI to the masses, give everyone every opportunity to use the technology, educate the public on it and keep it publicly sourced. Legislation is desirable, and necessary, because guardrails and moats do not build themselves, but even more powerful is a public that understands what AI is and is not, what it does and does not do, and that knows transparently where and how the technology is deployed. Shining a light, embracing and understanding are the best antidote to ignorance and the best way to insulate oneself from being made redundant by AI, or by any change for that matter.

Things That Need To Go Away: Being Scared Of AI And Trying To Dock

Nov 28 2024
 

In an earlier post, I discussed the importance of governance in Artificial Intelligence (AI) and how, arguably, aside from the initial hurdle of getting started, governance is one of the most significant barriers to adoption, particularly in large enterprises. Concerns such as liability, intellectual property and the risk of introducing incorrect or biased information into AI models are often cited as the biggest impediments to AI integration at scale.

My previous advice encouraged experimentation, emphasizing the importance of gaining momentum, learning from efforts and celebrating small wins. However, as promised, this follow-up aims to define what governance in AI really means. The first paragraph above provides some context, but let’s dive deeper.

AI governance

 

Governance in AI refers to the set of practices, principles and processes that an organization establishes to develop, deploy and manage its Artificial Intelligence systems. In practice, this encompasses all systems that provide data to the AI, all outputs and outcomes generated by the AI and all stakeholders – individuals, teams, departments – whose jobs, roles and successes are influenced by AI. Since AI is fundamentally built on data, this broad reach underscores the technology’s far-reaching impact.

AI is still relatively new to wider society and not fully understood. It is imperative that the governance framework an organization adopts is designed with a clear end goal in mind and is implemented transparently, with widespread awareness across the organization. This approach helps AI initiatives earn organizational acceptance.

This does not imply that organizations should become paralyzed by over-analysis, as failing to implement AI would likely mean falling behind in today’s competitive landscape. The key to success lies in balancing careful governance with agile action. Trust is a vital component of AI adoption, and proper governance fosters trust by ensuring transparency and accountability.

Additionally, AI systems must be regularly monitored and evaluated to ensure they continue to function as intended, without introducing unforeseen risks or biases. This ongoing governance is essential for maintaining the public’s trust in AI technologies, as well as ensuring compliance with evolving regulations.

AI governance is multifaceted, but it is definitely possible and practical. Keeping a human in the loop is a check against unintended consequences. Diverse stakeholders need to focus on long-term goals, and organizations must engage if they are to harness the full potential of AI while minimizing risks and fostering trust.

 

Things That Need To Go Away: The ‘AI Can Wait’ Attitude

Nov 13 2024
 

I speak with buyers on a mission to procure the right products and services for their projects. They include decision-makers figuring out the best applications for their companies.

 

They are all interested in AI, ranging from those experimenting with LLMs (Large Language Models) and ML (Machine Learning) in silos to those eager to unleash AI for all their employees to take advantage of a range of possibilities. More and more, every one of these conversations conveys a simple concept: AI is becoming pivotal to everything we will do.

Did you know that 91% of info/tech companies have mentioned ‘AI’ in their earnings calls at least once so far this year (2024)? Yet, a common concern arises consistently in conversations with enterprise management:

  • They are concerned about AI governance.
  • The compliance team is worried about data integrity and inputs.
  • The security team is daunted by the task of implementing AI that touches their products.
  • Senior executives are uneasy about the potential implications if anything goes wrong during customers’ and end users’ interactions with the technology.

These are legitimate concerns.

We will address and define what governance in AI is in a subsequent post. For now, it is important to remember that there are no absolutes and that we must have the same expectations of AI as we have of any other piece of technology. Put another way, it makes sense to think of AI the same way we expect the Internet or SaaS applications to function. Having said that, there are actions that technology custodians can and must take.

For AI governance to be done right, we need to follow responsible data practices meticulously. These include:

  • Documenting the origin of the data and educating the user base is a start.
  • Similar transparency about where the data will be deployed and which use cases it covers is required.
  • Abiding by applicable laws is a must and non-negotiable. These include the EU Artificial Intelligence Act, which came into force on 1 August 2024. Additional regulations are en route from federal and state jurisdictions, such as Canada’s AI and Data Act. The legal framework covers the points above, but it will keep evolving, so keep an eye on its implementation.
  • Keep a Human In The Loop (HITL). Designating a person to interact with the LLM ensures human oversight and allows for timely intervention when needed. The underlying models are getting better and the algorithms learn and improve, but HITL keeps a person able to step in regardless (see the sketch after this list).
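
As an illustration of that last point, below is a minimal human-in-the-loop sketch in Python. The function and field names are hypothetical stand-ins for whatever LLM call and review workflow an organization actually uses; the point is simply that nothing generated by the model reaches an end user until a named person signs off.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Draft:
        question: str
        answer: str
        approved: bool = False
        reviewer: Optional[str] = None

    def generate_draft(question: str) -> Draft:
        # Stand-in for a real LLM call; returns a canned answer so the sketch runs as-is.
        return Draft(question=question, answer=f"[model-generated answer to: {question}]")

    def human_review(draft: Draft, reviewer: str, approve: bool,
                     edited_answer: Optional[str] = None) -> Draft:
        # A named person approves (or edits) the draft before it can be released.
        draft.reviewer = reviewer
        draft.approved = approve
        if edited_answer is not None:
            draft.answer = edited_answer
        return draft

    def release(draft: Draft) -> str:
        # Nothing reaches the end user without explicit human sign-off.
        if not draft.approved:
            raise RuntimeError("Blocked: no human sign-off on this AI-generated answer.")
        return draft.answer

    draft = generate_draft("What is our refund policy?")
    draft = human_review(draft, reviewer="j.doe", approve=True)
    print(release(draft))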

In tandem, we know that the LLMs from IBM, Meta and others allow for scrutiny, and give the user community some peace of mind, because their code and licensing are open source. Other models strive for the same credibility by offering access to their foundation models. This does not imply perfection. It does, however, imply scrutiny and a level of credibility.

Being concerned and diligent is warranted. Not moving forward due to fear of the technology, however, is a recipe for falling behind. To stay competitive, remain informed about AI developments, begin experimenting and consider a low-risk use case for an initial quick win.

 

Things That Need To Go Away: AI At Any Cost And No AI At Any Cost

Nov 01 2024
 

 

We have all read them – or worse, been there: accounts of people who had a job interview set up, only for the interviewer to fail to show up to the initial telephone or video call, to drop off the radar mid-process, or for a recruiter to go AWOL and provide no feedback at the end of the process.
I was reminded of this conundrum upon coming across an article about a new website called Ghostedd. Visitors are invited to “share a ghost story.” The merits and trustworthiness of the service aside, the question for me was why such a service is needed in the first place. This Guardian article, printed the same week as the aforementioned one, fuels the conversation by reminding readers of fake job postings (as much as 40%), which sadly remain an ongoing issue.

Call me naive, but why can’t we have a process where we respect one another’s time, humanity and need for feedback and information? Recently, I came across a LinkedIn post where a recruiter was empathizing with someone who, having been ghosted by a company during her interview process, was expressing frustration about the system. Ironically, the same recruiter had tried to recruit me some ten years ago and then failed to follow up, update or give feedback after executing a disappearing act. The rub? He had called me, insisted on what a great fit it was and said how eager the hiring vice-president was to meet. The follow-up and meeting he’d promised a week later never materialised. To be fair, the recruiter may have evolved over the past decade, but if not, then his comment highlights a need for greater honesty in addition to propriety.

The flip side, which is no mystery to most, is that recruiters and hiring managers often feel hesitant. Short on time, and facing potential blowback or litigation if they were to speak their minds and give honest feedback, they take the easy and safe way out. Yet surely there is a difference between protecting yourself from a compromising position and ghosting or ignoring someone you have put into a pipeline process. Moreover, if candidates reciprocate the courtesy and consideration expected of them, then interviewers should feel more assured in giving feedback rather than rudely abandoning those they have engaged. For now, the Internet is doing what it was supposed to do by crowdsourcing information, but could we all agree that when we are more courteous and considerate of our fellow humans and make their journey better, everyone benefits? This should not be a revolutionary concept. After all, if nothing else, we may be the ones looking for proper etiquette or propriety next time.

 

Things That Need To Go Away: Indecency a.k.a. Lack Of Mutual Respect, Lack Of Feedback, Lack Of Explanation, Lack Of Follow-up And Fake Job Postings

Oct 07 2024
 

The agenda for the Elevate Festival in Toronto caught my attention. The event took place last week at Meridian Hall and the St. Lawrence Centre for the Arts, two nearly adjacent venues. The combination of sessions, speakers and what they call masterclasses seemed intriguing both personally and professionally. My primary focus is AI in the workplace, so those sessions were, naturally, my main reason for attending.

First, though, one question lingers for anyone who knows the answer: there were neither floats nor confetti, and no one was dancing, throwing candy or engaging in revelry, so, seriously, why is it called a “festival”?

Elevate Festival

Oddly, there was another ‘festival’ called Future Festival occurring in Toronto concurrently. Surely, it can’t be a coincidence.

Returning to the conference: while AI was a prominent theme, the three-day event offered more than just discussions on the hot topic of Artificial Intelligence. There were start-ups and funders, a sales course, sessions for creative professionals and implicit infomercials for the speakers and their companies. I met a couple of nice people in line looking to sell their services and found out that outsourced marketing and sales departments are more in demand than ever. Some say AI is going to make both obsolete… or is outsourcing a department the first step towards eliminating it?

Here are a few things learned from the event:

  • A speaker emphasized that AI is software and should be treated exactly as such. That is, it can be used for good or evil and we should have the same expectations for it as we have for (other) software, the Internet or other pieces of technology. Think about it, that is the gist of it, isn’t it?
  • Emphasis was placed on the importance of experimentation. AI is still in its infancy and enterprises need to move forward even when use cases are not fully defined and also because there will be many use cases that companies do not even know they need yet. It is important to begin experimenting and exploring possibilities.
  • A few humanitarian use cases cited and demonstrated were wildfire detection; Foundation For Healthcare, which is like AI from a medical school; and AlphaFold, which, by classifying and identifying protein structures, is advancing medicine in the fight against disease.
  • A speaker mentioned that whatever you do, don’t do the ostrich.
  • According to KPMG research, companies aiming to excel in SEO and demand generation for AI should aim to publish three pieces of content (such as a white paper, blog post or solution PDFs) on their websites each week.
  • While transparency and accountability are crucial in the use and deployment of AI, most LLMs (Large Language Models) already incorporate ethical practices. This was true even before the EU’s Artificial Intelligence Act came into force on August 1st, mandating GDPR, privacy and PII practices.
  • The CDO (Chief Data Officer) of Telus and IBM’s VP for AI shared that they could not be more explicit about the safety protocols built into their AI than by saying they have opened it up internally to all their employees, developers included, without exception.
  • IBM, Meta (whose Llama model has seen 350 million downloads) and Mozilla all reminded attendees that their LLMs are open source and subject to scrutiny by the worldwide community.
  • KPMG’s Canadian AI lead: “AI will be in every function in every industry” and “our analysis shows a 72% ROI in three years” for the enterprise, numbers he characterized as “unchartered.”
  • Finally, someone boldly described the situation with AI as a rising tide lifting all boats, highlighting the collective potential of these advancements.

The event was educational, even if there was a lot of ‘what’ without the ‘how’ in the sessions on my agenda. Companies like Telus, Google, Meta, IBM and Mozilla made the effort to appear.

 

Things That Need To Go Away: ‘Educational’ Sessions At Paid Conferences That Are Implicit Advertisements

 

Jun 09 2024
 

AI, in general, is a hot topic everywhere. This site has discussed the nature of AI before and posted about the hype versus the reality of it. Every passing day makes the promise and reality of AI more real, more rewarding and more ominous. The aforementioned AI article listed several vendors who had begun inserting artificial intelligence, and its brain, machine learning, into sales and its various departments.

A recent quotation from Eric Yuan, chief executive of Zoom, the video-conferencing company, was worth attention. He suggests that in five or six years an AI version of us could attend meetings on our behalf. The implication is that a digital version of us could act for us and, moreover, would be as diligent, acceptable and effective as the person it represents. Yuan was careful to couch this as an augmentation technology rather than a replacement one. The scenario, again, was the utopian dream of having much more leisure and downtime. Yet his thoughts, taken to their logical conclusion, could make a reader imagine being gradually replaced by said system. Moreover, what is stopping AI from creating multiple copies of us? If it can duplicate me once, it could create dozens of me.

 

Artisan, a company which bills itself as “Creating Highly Advanced Digital Workers … Using Cutting-Edge AI Technology,” is marketing functional AI avatars as replacements for salespeople, customer success reps, BDRs, recruiters, financiers and others, with names like Ava, Liam, Noah and Olivia. Thought prospects were being bombarded by business development and sales now? Wait until a million Avas start getting in touch. In fact, among the features Artisan advertises for its artificial human replacements is “Sends 1000s of emails per month.” Buyers, decision-makers, V- and C-levels, take cover. Incoming! Pricing? From less than $1,000 (US) down to almost $100 per month. Do I get to talk to an AI if I contact their sales department, though?

Similarly, there is GoCust, which advertises itself with “Sales Teams Now Have Online Assistants.” It touts itself as an SFA tool, assistant, and mapping and route optimizer that gains cost and time advantages in order “to minimize the need for sales teams to constantly struggle with messaging, phone and email traffic… develop solutions to increase the rate at which customer conversations are recorded as data… to offer sales managers the opportunity to proactively manage their teams.” And these companies are not alone.

All of this has become possible because storage and computation costs have fallen drastically and are now basically cheap.

Meanwhile, according to a study by Zendesk, which has itself jumped wholeheartedly into the AI space ($50/user/month), management believes AI is beneficial and helpful. “Four in five (81%) employee experience leaders now see AI as essential in boosting workers’ ability to tackle complex tasks,” reports Zendesk.

At this point, a forecast of five or six years for these technologies to be operationalized and become part of the mainstream seems frankly… conservative.

 

Things That Need To Go Away: Efficiency, Effectiveness And Productivity AI Technology That Never Mentions The Potential For Layoffs And Attrition. The Discussion Around These Technologies Needs To Be Brave And Candid.

Jun 02 2024
 

Amazon and its partner Adastra offered a one-day AI overview in Toronto recently. It is a hot topic, and the presenters encapsulated it well. Of course, they had an AWS bias and promoted Amazon’s services, but the context and implications were presented succinctly and clearly.

Below is a synopsis of the day’s presentations on Amazon’s Generative AI (Artificial Intelligence).

AWS (Amazon Web Services – the Cloud division of Amazon) Options:

1- Powered by Amazon Bedrock

These sit on top of a host of foundation models, such as Amazon Titan, Cohere, Meta’s Llama 3 and many more

Offered as a managed service (aaS), with no overhead

2- Amazon SageMaker

A ‘build your own’ platform

 

Amazon emphasises, however, that no one LLM fits all; it is important to assess your needs.
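
For orientation, here is a minimal Python sketch of calling a Bedrock-hosted model with boto3. The model ID and request fields are illustrative assumptions; each model family on Bedrock (Titan, Cohere, Llama and so on) expects its own payload shape, so check the documentation for whichever model you actually choose.

    import json
    import boto3

    # Bedrock's runtime client (model invocation) is separate from the "bedrock" control-plane client.
    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    # Illustrative payload for a Titan text model; other model families use different fields.
    payload = {
        "inputText": "Summarize the key risks of deploying generative AI in a bank.",
        "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.2},
    }

    response = client.invoke_model(
        modelId="amazon.titan-text-express-v1",  # example model ID; pick per your needs
        body=json.dumps(payload),
        contentType="application/json",
        accept="application/json",
    )

    result = json.loads(response["body"].read())
    print(result)  # the response shape also varies by model family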

New Announcement: Amazon Q (an AI-powered assistant)

Designed for business use cases | you can ask it questions now

Use Cases: 

  • CX/EX: onboarding, KYC, Routing, etc.
  • Personalization: Forecast/advisory, recommend
  • Text Analytics: Extract information from internal and external sources (in the old days this was OCR with rules layered on top)
  • Predictive Analytics: Extract data
  • Fraud Detection: identity fraud, anomalies

Until now, these were expensive to build, expensive to maintain and therefore fickle.

With AI, all of these features are baked into the model, and therefore a lot less development is required

Additional New Use Cases in Amazon Q:

  • Improve CX and EX
  • Increase knowledge of workers – think how important this is. We are all in exactly this business whether Marketing, Sales, Compliance or Analysis, etc.
  • Product Innovation and Process Automation including
    • Data Extraction 
    • Natural Language Interfaces to Analytics
    • Personalized Content Generation

Amazon Q (Suite of Gen AI Services) Portfolio – an “AI-Powered Assistant,” generally available since 30.04.2024 (new!) – comes in 3 flavours:

  1. Amazon Q in QuickSight (powered by Bedrock) can be considered a BI Service with AI Capabilities – structured data
  2. Amazon Q Business – also handles unstructured data
  3. Amazon Q Developer – Assistance in writing code

A Gen AI assistant for accelerating software development and leveraging company data

These are being embedded in Amazon Services

Traditionally LLMs have been best for:

  • Broad World Information
  • Assisting Human Work (summarizing, teaching and Generalizing) as well as Offering knowledge, autonomous tasks and calculations

But newly, with Amazon Q in QuickSight:

  • Ask questions in natural language; ML models interpret the user’s question and generate images and reports.
  • AI powered dashboard. 
  • On-demand answers. 
  • Can be extended to other Apps. 
  • AI-assisted storytelling (tells you what is going on and provides documents and slides to present the data)

These are available in different apps including your custom apps – Pricing (in USD):

  • Amazon Q Business Lite $3/user/month
  • Amazon Q Business Pro $20/user/month
  • Developer Free Tier
  • Developer Pro Tier: $19/user/month
  • etc.

Benefit: ask yourself what the productivity gain is. We can see up to a 75% productivity gain.

 

The AWS approach to Generative AI: 

  • Enterprise Focus
  • Open Approach with both Proprietary and Open Source code
  • Cost And Power Optimized
  • Data Privacy and Security: Own Your Data/Access Control included/Permissions + Connectors to everything including Exchange/SFDC/Confluence/JIRA etc.
  • Based on the RAG (Retrieval-Augmented Generation) architecture (see the sketch after this list)
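
Since RAG came up as the underlying architecture, here is a schematic Python sketch of the pattern. The in-memory keyword scoring stands in for a real vector store, and the generate function stands in for a model call (for example via Bedrock); only the shape of the flow, retrieve relevant documents and then ground the prompt in them, is the point.

    import math
    from collections import Counter

    DOCUMENTS = [
        "Amazon Q Business answers questions over a company's unstructured data.",
        "Amazon Q in QuickSight generates dashboards and reports from natural language.",
        "Bedrock exposes foundation models such as Titan, Cohere and Llama 3 as a managed service.",
    ]

    def score(query: str, doc: str) -> float:
        # Toy lexical-overlap score; a real system would use embeddings in a vector store.
        q, d = Counter(query.lower().split()), Counter(doc.lower().split())
        return sum((q & d).values()) / math.sqrt(len(doc.split()) + 1)

    def retrieve(query: str, k: int = 2) -> list:
        return sorted(DOCUMENTS, key=lambda doc: score(query, doc), reverse=True)[:k]

    def generate(prompt: str) -> str:
        # Stand-in for the LLM call; a real implementation would invoke a foundation model.
        return f"[model answer grounded in:\n{prompt}]"

    def rag_answer(question: str) -> str:
        context = "\n".join(retrieve(question))
        prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
        return generate(prompt)

    print(rag_answer("Which service builds dashboards from natural language?"))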

Generative AI and ML Considerations For Financial Services (because the day nominally had a Financial Services bent)

  • Approach: Is a human in the loop, and is that feasible given the amount of data? Are the inputs and outputs structured or unstructured data?
  • Governance And Compliance?
  • Legal & Privacy: think output validation and reporting, especially when using third-party models
  • Monitor the legal and regulatory landscape consistently as the scene changes and proposals such as the SEC’s or European regulations are updated.

 

Things That Need To Go Away: Mine Being The Only Name Tag Missing At An Event

 

Mar 27 2024
 
telling a success story

Photograph Credit: Anna Shvets

While reading a book called 7L: The Seven Levels Of Communication by Michael J. Maher, a series of seven steps for telling a “successful success story” caught my attention. A review of the book as a whole is here, but taking a specific excerpt, here is how the book suggests someone should tell a success story.

 

It is useful for anyone interested in interpersonal relationships, business in general or sales.

  1. What was the client’s name and specific situation?
  2. What would have happened if you were not involved? Consider the worst case scenario without you.
  3. How did you help solve the problem?
  4. Specifically, what was the result or outcome?
  5. What did the client say or do to let you know you did well for them? Was there a referral or a testimonial?
  6. Based on the above, it is time to ask for a specific and relevant referral. For example, a realtor can ask for the name of someone who may need his or her services.
  7. CTA: Ask the person(s) to take a specific action to make number 6 happen.

 

Things That Need To Go Away: Not Leveraging The Network And Contacts

Mar 17 2024
 

The Seven Levels Of Communication is a good book that reads easily. The man behind it is an American realtor who reports success in his own business after having employed the techniques/system he teaches in the book. Not coincidentally, the story is narrated by a realtor who interacts with professionals who provide ancillary services like legal advice or mortgages. It is a “story” because, unlike most sales books, 7L lays out its system in a novel-like format complete with a romantic subplot.

 

Salespeople of all stripes can benefit from the methodology, which emphasises giving, sharing and serving, but reading it one soon realises that it is best suited to those who run an independent business, as opposed to inside or field salespeople working in a cubicle for someone else. Still, it is a useful read and lays out an agenda for growth. That, by the way, is the other distinction between this and other books: the emphasis here is on sharing and giving. In practice, that means 7L is great for longer-term thinking and not quite apt for relieving quarterly quota pressures.

 

Do X, Y and Z and the sales take care of themselves. The gist of those “X, Y and Z” is to give, be helpful, coach and expect the business benefits to boomerang back. Forget the advertising and the ‘selling.’ Really, that is the essence of what the book preaches. The system includes multiple sets of seven: spiritual (my word) affirmations (“someone needs me”), the necessity of consistency (like a “ritual”), which the book calls the “deliberate investment of time,” and goals, i.e., being ambitious.

 

The 7 Levels Of Communication (in order of effectiveness) are:

  • 1 On 1 meetings
  • Events And Seminars
  • Phone Calls
  • Handwritten Notes
  • Electronic Communication
  • Direct Mail
  • Advertising

Elsewhere, the book offers the 7 Steps To A Power Note (don’t forget to use a blue pen, use ‘you’ instead of ‘I’ and include a P.S. – page 50), How To Tell A Successful Success Story (page 66) and the four DiSC behavioural styles – Dominant (get to the point immediately), Influence (loves socialising and craves fun and energy), Steadiness (slow, steady and systematic) and Compliance (perfectionists who crave order, detail and crispness) – to be used to interact with people accordingly. Maher has a twist on the old wisdom of treating people the way you want them to treat you: treat them the way they want to be treated, according to their personality, not the way you want to be treated. Finally, there is a script for asking for referrals (page 96), which substitutes the direct question with the more indirect “Who would you recommend for …?” Whoever they recommend, the follow-up is to be curious and positive, find out why, and ask what it would take for you to become their go-to recommended professional.

In addition to these, the book introduces the concepts of The Ego Era, Generosity Generation and L.I.F.E. That last one stands for Learn, Implement, Fail and Evaluate, but no need to worry. The book includes a glossary at its end.

After car salespeople and lawyers, realtors probably have the third-worst reputation out there. It says something about the book, and what it teaches, that the author has succeeded in his business, become rich and done it all through referrals and popularity.

Dec 05 2023
 

Photograph Credit: Markus Spiske

Did you know that email existed in the 1970s? I consider myself fairly tech-savvy, and this was news to me. This post is not about email’s incept date, however. It is about spam email. Like many people nowadays, my primary email address is a Gmail one; looking at my Inbox, it has been with me for almost twenty years. Google, which runs Gmail, purchased an anti-spam company years ago and has kept its users’ Inboxes largely free of junk email, a.k.a. spam. Not so with Hotmail, which Microsoft purchased more than two decades ago. Hotmail, which has evolved into Live.com and Outlook.com, still has a spam problem; it famously had an open directory of addresses in its early days. My Hotmail account, which is largely ignored, predates my Gmail account by a couple of years. Logging into it this morning, what were the top emails in my Inbox? The most recent one’s subject line read, “The..Best..Gifts..on..Oprah’s…Favorite…Things..List”, followed by “Life, Liberty, and the Pursuit of Savings! Get 16×20 Canvas Prints for $14.99.” Mind you, these were emails that Hotmail had not caught as spam despite all these years of my clicking that most useless of buttons, ‘Report Junk’, in the menu. Never mind that someone out there managed to equate purported savings on a trinket with life and liberty!

 

The preamble grew long in order to give context and paint a picture, but being confronted with the aforementioned spam emails and then coming across an article about the very first spam message (and subsequently finding another, more detailed one here) got me thinking.

Why can’t we stop spam? Who are the people who perpetuate this menace (surely someone must be buying those Oprah favourite things to keep spammers in business), and what could be done about it? At best it is a waste of time; at worst it leads to phishing, ransomware and malware. Many spammers use our Inboxes to steal banking information and identities or to take down organizations. To do so, they may even take over servers that do not belong to them.

Photograph Credit: Diego PH

Then it occurred to me. The reason why, according to one online source, “60 billion spam emails are forecasted to be sent daily from 2019-2023,” is that spam is almost free to send. Indeed, someone is buying into the spam messages, but sending spam by the millions is easy and low cost. Governments do not follow up on or enforce anti-spam laws, and perpetrators have no shame or care, but most of all it is done because it can be done so cheaply. The answer, therefore, has to be that email should not be free. Sending an email should be analogous to sending a snail-mail letter and should cost something. Think about it: almost any amount attached to sending an email would work.

Photograph Credit: Philipp Katzenberger

Let’s say we assign a cost of one cent to an email, payable to the ISP, the hosting provider or even the company that controls the gateway. If I send 10 emails a week to friends and family, it would cost me 10 cents. Even if I send 100 personal emails a week, one dollar – or the equivalent value in your part of the world – would not break the bank. At work, if 100 employees each send 200 emails a week, or even 1,000 emails a month, the cost incurred would be $10 (1,000 * 0.01) per person per month, which the company would shoulder or reimburse. Surely most people would consider this amount reasonable, especially to slay the scourge that is unwanted email.
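
To make the arithmetic explicit, here is the back-of-the-envelope calculation with the numbers used in this post; the one-cent rate is, of course, the hypothetical charge proposed here, not an existing tariff.

    PRICE_PER_EMAIL = 0.01  # the hypothetical one-cent charge per email

    personal_weekly = 100 * PRICE_PER_EMAIL            # a heavy personal sender: $1.00/week
    employee_monthly = 1_000 * PRICE_PER_EMAIL         # 1,000 work emails a month: $10/person/month
    spam_daily = 60_000_000_000 * PRICE_PER_EMAIL      # 60 billion spam emails a day: $600,000,000/day

    print(f"Personal: ${personal_weekly:,.2f} per week")
    print(f"Employee: ${employee_monthly:,.2f} per person per month")
    print(f"Spammers: ${spam_daily:,.0f} per day")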

 

Let’s go back to the world of spammers now. Remember that count of 60 billion spam emails a day? Well, at a cent each it would cost spammers $600 million a day. They would be out of business. Many spammers send out millions of emails weekly or daily. Could that be the end of that? Searching the net to see whether anyone else had come up with a similar notion, I found the first two pages full of advice on how to handle spam, ironically including this one from Microsoft, but not much detail or discussion on making email cost-based. The idea has been thought of before, as evidenced by this page. Still, apparently, not enough.

 

Things That Need To Go Away: Anyone Who Buys Something From A Spam Email