Ask AI contributor Chris McLellan breaks down his top picks of the legal, societal, and technological issues facing the Canadian and international artificial intelligence sector in 2024 and beyond.
Editor's note: This post includes the opinions and assumptions of its author. While effort has been made to include links to related information, readers are encouraged to undertake their own investigations into the issues raised in this post.
1. AI vs. Copyright
ChatGPT was primarily trained on data scraped from the public internet (websites, forums, etc.) prior to the year 2020. While OpenAI is certainly not the first to copy data in this way, it is the first AI company to translate the large-scale collection of public information into a business model that stands to make a single organization billions of dollars.
As a result, a slew of lawsuits have emerged claiming that OpenAI is profiting off protected works. In late 2023, The New York Times, one of the highest-profile publishers in the world, filed a claim for billions in damages that will play out in 2024 and beyond.
This is set to be a defining case in artificial intelligence, and one that is likely to have major repercussions for the future training and operationalization of AI models, including the GPT Marketplace.
2. AI vs. personal data ownership
The first major AI-related event of 2024 will be the January launch of the GPT Marketplace, which will see thousands, perhaps millions, of ordinary people and organizations develop niche AI solutions by leveraging OpenAI's proprietary Generative AI technology.
Basically, any person or organization with a spreadsheet or database will be able to send a copy of their information to a giant brain in the cloud and launch a chatbot.
Sounds like a democratized approach to AI innovation, right?
Perhaps, but once a citizen, nonprofit, or municipality shares a copy of their dataset with an AI vendor like OpenAI, and encourages their users to share even more data via prompts, there's no way to regain control of that information, or of how it is used by third parties who pay OpenAI to access it.
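The mechanics of this are simple to sketch. Below is a minimal, hypothetical Python example (the spreadsheet contents, prompt wording, and model name are all invented for illustration) showing how a dataset gets folded into a prompt and shipped to a hosted model, at which point the vendor holds a copy of it:

```python
import csv
import io

# Hypothetical spreadsheet exported as CSV (invented data for illustration).
SALES_CSV = """customer,email,total
Acme Corp,ops@acme.example,1200
Globex Inc,billing@globex.example,740
"""

def build_system_prompt(csv_text: str) -> str:
    """Fold every row of the spreadsheet into the chatbot's system prompt.

    Once this string is sent to a hosted model, the vendor's servers
    have received a full copy of the underlying data.
    """
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    lines = [", ".join(f"{k}: {v}" for k, v in row.items()) for row in rows]
    return "You are a support bot. Company data:\n" + "\n".join(lines)

prompt = build_system_prompt(SALES_CSV)

# Sending it to a hosted model would then look roughly like this
# (not executed here; requires an API key and network access):
#
# from openai import OpenAI
# client = OpenAI()
# client.chat.completions.create(
#     model="gpt-4",
#     messages=[
#         {"role": "system", "content": prompt},
#         {"role": "user", "content": "Who is our biggest customer?"},
#     ],
# )
```

Note that every subsequent user question travels the same route, which is why prompts themselves become a second stream of data flowing to the vendor.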
These "bot marketplaces" will inevitably cause a feeding frenzy for personal and other sensitive information driven by commercial developers trying to improve their offerings.
Copyright lawsuits related to AI training data notwithstanding, this year we should expect to see a rise in popular demand for technologies, regulations, and laws that give individuals increased control of their personal data, including, and perhaps especially, the data generated by their children.
3. Regulatory turf wars
The potential for AI technology to impact virtually every corner of our society and economy has not gone unnoticed by regulators. Unsurprisingly, the EU was the first to pass a comprehensive framework, one that other jurisdictions, including Canada, are now attempting to align with.
This might make it sound like AI is mostly under control, but the truth is that all AI regulations are still in their infancy, and we should expect things to get a lot more complicated before they get simple.
In Canada, for example, Federal legislators are focusing their efforts on Bill C-27 (which includes the Artificial Intelligence and Data Act) while Provincially-based Privacy Commissioners are simultaneously drafting their own sets of frameworks. We are also seeing professional bodies, standards agencies, trade unions, and individual businesses getting involved.
But what should not be lost in all of this rule-making activity are priorities. The potential for AI to cause negative disruption and even personal harm in areas like warfare, policing, education, elections, and healthcare far outweighs our need to restrict its use in other areas.
4. Open Source AI vs. Proprietary AI
The development of foundational AI models is incredibly complex and requires a level of brain power, computational power, and data collection that only tech giants can realistically support.
It's also important to note that AI/ML is not a category of software but an entire class of technology, so concentrating such power in so few hands is something we might wish to avoid for obvious reasons.
Furthermore, as regulators and end users demand greater protection and control of the data used in AI training and operationalization, many organizations, particularly those in highly regulated sectors such as healthcare and finance, will simply be unable to use proprietary Generative AI models like ChatGPT, or will be unable to afford the self-hosted versions that would enable them to comply with data protection demands.
Open sourcing is one way to help balance the AI landscape, but given the development challenges stated above, foundational models have proven very difficult to crowdsource in the way that personal computer operating systems, for example, have been.
Recently, there has been some positive movement in open source AI, notably from Meta, whose Llama 2 is an open source language model known for its adaptability and versatility. But here we reach yet another issue: whether we can manage the risk of open sourcing AI at all. This technology is so powerful that it is probably closer to nuclear fission than to operating systems in terms of its potential for destructive use.
Watch this space!
5. AI and the disruption of education
Public schools, colleges, and universities were some of the first organizations to be impacted by Generative AI in a major way. From teachers using AI tools to plan lessons to students using ChatGPT to write their homework, the disruption caused by AI in the classroom has been fast and very serious.
It is already difficult to imagine a future where K-12 pupils, undergraduates, and post-graduate students do not have instant access to increasingly sophisticated AI models to support their studies. The question is whether they will be turning to these tools as limitless tutors to help them learn, or omniscient ghost-writers to help them cheat.
The bottom line is that the unfolding crisis over the use of Generative AI in our schools, colleges, and universities provides a unique opportunity for educators at all levels to re-evaluate the relationship between teaching, learning, and technology.
It will be fascinating to see how (and how fast) we respond to this challenge as a society.
6. AI and Autonomous Warfare
One could argue that all the concerns that we have about AI as a society actually pale in comparison to its potential to fundamentally change how sovereign states conduct warfare.
While it is true that there has been a level of autonomy within military systems for many years, it has generally been applied to navigation to a predefined target (as in guided missiles) or to target identification (e.g., cameras mounted on loitering drones used to identify tanks).
However, the decision to "pull the trigger" (or engage the kill switch) has been left up to a human operator.
But has this started to change? Are AI-powered systems being used to automate the decision to kill? Reports coming out of the Ukraine-Russia War would seem to indicate that this might already be the case.
Will 2024 be the year we see at least one country make the leap from signing declarations to passing a law that makes fully autonomous weapons systems illegal?
7. AI and the automation of white collar work
For generations, blue collar workers have demonstrated remarkable resilience in the face of technological advancement.
With some exceptions, when a machine has taken over a manual task (such as weaving, painting, or assembly), workforces have proven adept at making the transition to become machine operators and repairers, or at shifting to other areas of the economy that are less vulnerable to automation, such as transportation, healthcare, and food services.
Throughout these cycles of disruption, office workers have had it pretty easy. The introduction of the mainframe computer in the 1960s, the desktop computer in the 1980s, software like CRMs and ERPs in the 1990s, and the SaaS and social media explosion of the past 20 years have all created more office jobs than they have taken away.
But the cozy relationship between white collar workers and digital technology changed dramatically in 2023, when OpenAI let loose the power of Generative AI on an unsuspecting world.
Suddenly, anyone who makes their living with a keyboard or mouse became keenly aware that their career might not be as stable or secure as they once thought. Interestingly, it was script writers in the North American entertainment industry who were the first to push back in a meaningful way. Their profession, unlike most of the knowledge economy, is unionized, and so taking industrial action was a familiar tactic.
So where does that leave the millions of marketers, analysts, customer support reps, financial planners, lawyers, and IT workers who are the focus of thousands of new Generative AI tools that are seeking to "augment" their productivity with automation?
It's difficult to say, but it's likely that this year will see trade unions start to find more receptive audiences in the glass towers that have kept them at bay for so long.
Further reading:
How the AI revolution is different: It threatens white-collar workers
DeepMind's cofounder warns that AI is a 'fundamentally labor-replacing' tool
Duolingo lays off staff as language learning app shifts toward AI
IMF warns AI to hit almost 40% of jobs worldwide and worsen overall inequality
AI is on a collision course with white-collar, high-paid jobs — and with unknown impact
8. Collaborative Intelligence vs. Collective Intelligence
Data may be the "new oil" that's powering the 4th industrial revolution, but it's intelligent information that's the rocket fuel that will power organizations to new heights of efficiency and innovation.
After all, "two silicon brains are better than one" and the resulting pursuit of intelligence will inevitably lead organizations to seek more and more sources of knowledge, both artificial AND human, in order to produce smarter products and services.
The organizations that thrive will be those that achieve this without requiring data contributors to give up control of their information. This approach, known as "data collaboration", generates "collaborative intelligence", and closely mirrors how humans developed technologies like language and agriculture.
Many contribute, many benefit, but all retain control.
The opposite of this is "collective intelligence", supported by "data sharing", which is another way of saying "copy and paste". This one-way exchange of intelligence was made famous in Star Trek by the Borg, an alien collective whose hive mind forced every contributor to serve only the central entity.
In 2024, look for an increasing number of AI models and smart solutions that are powered by "data collaboration" in one form or another.
About Ask AI
Since 2017, the volunteers at the Ask AI nonprofit have been committed to raising understanding and awareness of Canada's world-class artificial intelligence sector, including its innovations, investments, and warning signs. We produce a popular podcast, informative newsletter, and open research.
Our Advisory Committee includes leaders from some of Canada’s most influential organizations, including the Vector Institute, Mila, AMII, AInBC, and the Responsible AI Institute.
Visit our website to subscribe and learn more about volunteering, guest posting, and sponsorship opportunities.