You might be living under a rock if you haven’t heard that Generative AI is the new “greatest thing since sliced bread.” According to Forbes columnist Lance Eliot, “You can absolutely expect that the topic of Generative AI is going to grab ahold of headlines throughout 2023. No doubt about it.” Look no further than the title of this article for evidence of Eliot’s claim. However, if you’re skeptical, you have every right to be. Change comes with consequences; and, in regard to AI, it would be unwise not to tread lightly.
Right now, however, Generative AI is sitting pretty, primed to be a major business venture in the New Year. Entrepreneurs will be rushing to create or utilize AI-powered products, looking to put another notch on their respective resume-belts, and 2023 is definitely the year to do it. It isn’t all set in stone, of course, but looking at the relatively recent success of companies like Jasper or Copy.ai, it’s hard not to see a potential gold mine within Artificial Intelligence.
While no one can fully predict the future of AI, we can surely take a whack at it. After all, forward-thinking is a necessary skill in the life of an entrepreneur. And, to catch a vision of something great, all one needs to do is pay attention.
Lance Eliot comes across as a man who has been paying attention. Globally recognized as an expert on AI, Eliot is highly regarded in the worlds of technology and business; and, in his most recent article, featured in Forbes’ Innovation column, he shares twenty-five of his 2023 predictions for Generative AI. You will find his predictions listed below or, for the full article, follow the link here. My personal favorites are #13 and #16. Enjoy!
Dr. Lance Eliot’s 2023 Predictions for Generative AI:
1) Text-to-Art gets more sensibly artistic
Text-to-art generative AI will get better at producing artistic outputs. Discerning whether a piece was made by a human artist or by AI is going to be nearly impossible. Debates about whether this art is “true art” will arise anew. Cries that this is going to put human artists out of work will persist. One contention will be that this is art without a soul; another, that this is art without any semblance of creativity due to having been devised by AI. The counterargument will be that art is art, generally suggesting that any semblance of a soul is in the eye of the beholder and not in how the art was generated or produced. Creativity will be hotly debated too, since the randomness and computational complexity of generative AI will produce art that, in the eye of the beholder, might seem just as creative as, if not more so than, the work of some human artists. Let the artists’ philosophical games ensue.
2) Text-to-Photorealistic-Image gains deeper fakery
You undoubtedly already know that there is an enormous amount of handwringing about the advent of deepfakes. People opt to edit a photo of a real person and make it look as though the person is doing something that they didn’t actually do. This raises all manner of disinformation, misinformation, and potentially defamatory concerns. Generative AI will up the ante. You will be able to merely enter a text prompt that names the celebrity or other person and indicates what you want the imagery to depict, and the AI will produce a photorealistic image for you. You can then tell the AI to refine it, doing so until it is a perfected deepfake. Hurrah for AI (assuming the deepfake is made for positive and beneficial purposes), or perhaps yet another miserable and altogether exploitable use of AI (assuming the deepfake is made for nefarious purposes).
3) Text-to-Essay overcomes some hallucinations and guffaws
One of the most notable downsides of today’s generative AI is that it can produce erroneous outputs. For example, suppose a produced essay about the life of Lincoln indicated that he used to fly around the country in his private jet. You and I know that this is silly and patently incorrect. The thing is, people reading the outputted essays won’t necessarily know that somewhere in the narrative there could be false statements. Sometimes the errors stem from how the AI originally did its computational pattern matching across the Internet; in other cases, other factors come into play. When the AI goes a bit mathematically awry, the AI field tends to call this an AI hallucination, coined terminology that I earnestly disagree with; I have said we should avoid this kind of false anthropomorphizing.
The key point is that we are going to have to contend with generative AI that produces misleading or outright false outputs. In some cases, the produced essay might contain a subtle and marginally false claim, while in other instances it could be drastically incorrect. Imagine asking a generative AI app to produce a recipe for pumpkin pie, and the generated essay includes a step that tells you to add poison to the batch. The person who follows the instructions might not realize that poison is the indicated ingredient if the item is listed under some other name. Not good.
Disturbingly, generative AI might be a fast path to producing vast amounts of insidiously embedded disinformation and misinformation. It gets worse too. Here’s how. Assume that people will generate all manner of essays via generative AI. They proceed to post those essays onto the Internet. Nobody has especially screened those essays to make sure they are free of errors. The clutter we add to the Internet begins to multiply manyfold because people can so easily use generative AI to create textual content for them. Ultra-massive amounts of disinformation and misinformation pile onto the piles we already have as made directly by human hands. Yikes, the Internet gets even worse than it already is in terms of suspicious content.
I’ll somewhat shift gears and bring up a pertinent aspect specifically about ChatGPT. As I’ve discussed in my other posts about ChatGPT, there was a concerted effort by the AI developers to try to reduce objectionable outputs. For example, they used a variant of what is known as RLHF (Reinforcement Learning from Human Feedback), whereby, before releasing the AI to the public, they hired humans to examine various outputs and indicate to the AI whether anything was wrong with them, such as showcasing biases, foul words, and the like. By providing this feedback, the AI app was able to adjust computationally and mathematically toward emitting less of such content. Note that this isn’t a guaranteed, ironclad method, and there are still ways such content can be emitted by the AI app.
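The feedback loop behind RLHF can be caricatured in a few lines of Python. To be clear, this is a toy sketch, not the actual training pipeline: real RLHF trains a neural reward model and then optimizes the generator against it, whereas here the "reward model" simply penalizes words that human raters flagged, and "steering generation" is reduced to picking the best of several candidates.

```python
# Toy sketch of the RLHF human-feedback loop (illustrative only).
# Human raters label outputs as acceptable or not; a crude "reward
# model" learns to score new candidates; generation then prefers
# higher-scoring candidates.

def build_reward_model(rated_outputs):
    """rated_outputs: list of (text, is_acceptable) pairs from human raters."""
    flagged = set()
    for text, acceptable in rated_outputs:
        if not acceptable:
            flagged.update(text.lower().split())

    def reward(candidate):
        # Penalize each word that appeared in a human-flagged output.
        return -sum(1 for w in candidate.lower().split() if w in flagged)

    return reward

def pick_best(candidates, reward):
    """Stand-in for steering generation toward higher-reward outputs."""
    return max(candidates, key=reward)

ratings = [("you are wonderful", True), ("you are a dolt", False)]
reward = build_reward_model(ratings)
best = pick_best(["what a dolt", "nice question"], reward)
```

As the article notes, this kind of scheme is not ironclad: a candidate with no previously flagged words sails through regardless of what it actually says.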
You might find it of interest that ChatGPT is based on a version of a predecessor AI app known as GPT-3. ChatGPT is considered a modest next step, referred to as GPT-3.5. It is anticipated that GPT-4 will likely be released in the spring of 2023. Presumably, GPT-4 is going to be an impressive step forward, producing seemingly even more fluent essays, going deeper, and being an awe-inspiring marvel in the compositions it can produce.
I bring this up because there is a potential Achilles heel for these better and bigger generative AI apps. If any AI vendor makes available a generative AI app that spews out foulness, this could dash the hopes of those AI makers. The spillover could also give all of generative AI a serious black eye. People will indubitably get quite upset at foul outputs, which has happened many times already and led to boisterous societal backlashes against AI.
4) Text-to-Video becomes the next Big Thing
I earlier herein discussed text-to-video. As mentioned, this is being pursued in research labs and you can expect to see some quite interesting and attention-grabbing announcements in mid-2023. The better stuff will likely be unveiled toward the end of 2023.
5) Text-to-X transmutes Into Multi-X Multi-Modal all in one
I earlier herein discussed the notion of having generative AI that can go to and from a multitude of output or input modes, which I’m calling multi-X or multi-modal generative AI. These will be rolling out in 2023. I’d guess that this will cause quite a splash of interest and generate more buzz about AI.
6) Art-to-Text gets abundantly descriptive
As earlier mentioned, we will see heightened AI capabilities for taking art as input and then producing an essay that describes the inputted artwork. The essay can be somewhat customized by the person using the generative AI. For example, you could tell the AI app to produce a summary of the artwork or instead instruct the AI to be overtly profuse and generate a lengthy, gushing elaboration.
7) Photorealistic-Image-to-Text catches the essentials
As mentioned earlier, we will also have generative AI that produces essays about inputted photos. These first versions will not be quite as impressive as the art-oriented ones. Don’t worry, these AI apps will be markedly improved and do better in 2024.
8) Essay-to-Text does remarkable recaps
Many people using generative AI apps do not realize that most of these AI apps provide a feature wherein you can feed an essay into the AI and get a summary of it as output. For example, you can take a lengthy article that someone has written, feed it as a prompt into the AI app, and ask the AI app to produce a recap or summary. Not all generative AI apps do this, plus some have restrictions on the length of the inputs. In any case, the odds are that by the end of 2023 people will regularly be using generative AI to produce summaries for posting on the Internet or for use in other ways.
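To make the input/output shape concrete, here is a toy extractive summarizer in Python. It is only a stand-in: the generative AI apps described above summarize abstractively with large neural models, whereas this sketch merely keeps the sentence carrying the most frequent words.

```python
from collections import Counter
import re

def summarize(essay, max_sentences=1):
    """Toy extractive recap: rank sentences by how many frequent
    words they contain, then keep the top-ranked ones."""
    sentences = re.split(r"(?<=[.!?])\s+", essay.strip())
    freq = Counter(re.findall(r"[a-z]+", essay.lower()))

    def score(sentence):
        return sum(freq[w] for w in re.findall(r"[a-z]+", sentence.lower()))

    top = set(sorted(sentences, key=score, reverse=True)[:max_sentences])
    # Emit the chosen sentences in their original order.
    return " ".join(s for s in sentences if s in top)

essay = ("Lincoln led the nation through the Civil War. "
         "The war was long. Pie is tasty.")
recap = summarize(essay)
```

Note that a real generative AI recap would rephrase rather than excerpt, which is exactly why the hallucination concern from prediction #3 applies to summaries too.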
9) Video-to-Text makes impressive baby steps
I earlier mentioned that we’ll be seeing some video-to-text generative AI apps. I’d bet that once these get relatively good at producing appropriate textual essays about an inputted video, a lot of people will eagerly make use of this functionality. I say this because, rather than having to watch an hour-long video, it would be handy to have a written description of what the video conveys, such that you can breeze through the written essay and then decide whether you want to laboriously view the video. Humans do this type of written depiction by hand right now; in 2023 and into 2024, we will increasingly use generative AI to do this for us.
10) Multi-X Multi-Modal tries to do reverse splits
I earlier mentioned this capability of doing multi-X or multi-modal as the input, for which then the generative AI app reverse engineers the input and can split things out for us. Suppose I provide a drawing of Lincoln as input, and I ask to get this turned into a video about the life of Lincoln, along with an essay that goes along with the video. Nifty.
11) Prompt engineering establishes footholds
The manner in which you enter a text prompt can produce a radically different essay as output. In a sense, there are good ways and not-so-good ways to write a text prompt. Some pundits are proclaiming that we will need to train humans in how to write good prompts, for which they will have the vaunted title of prompt designer or prompt engineer. Though this might occur in the short term, in the medium and long term the AI will be enhanced to do the handholding when people enter prompts. The days of humans having that task will be numbered, mark my words.
12) Chain Of Thought protocol advances toward convention
When you enter a text prompt into a generative AI app, sometimes the AI is set up to allow you to create a kind of thread of discussion with the AI. You enter a prompt. The AI responds with some output. You then refer to the output and ask or indicate to do something else with it. This goes on repeatedly. For example, I ask the AI app to produce a life story about Lincoln. Upon seeing the essay produced, I enter a subsequent prompt that says to focus the essay on the Civil War. A new essay is generated. I then tell the AI app to only cover the Gettysburg Address. Etc.
In some instances, this prompt-upon-prompt can materially alter the essays being generated. Though I don’t like the naming, due to the anthropomorphizing involved, many AI insiders tend to refer to this as a chain-of-thought protocol (in my view, the even worse moniker is chain-of-thought “reasoning,” as though akin to human reasoning). Anyway, I do believe this chain-of-thought approach has some interesting technological possibilities, and I am anticipating more AI work advancing it in 2023.
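The prompt-upon-prompt mechanics described above boil down to carrying the conversation history along with each new prompt. Here is a minimal Python sketch of that threading; the `generate` callable is a hypothetical stand-in for a real generative AI call (here a dummy lambda that just reports how many user turns it has seen).

```python
# Minimal sketch of the threaded, prompt-upon-prompt pattern: each new
# prompt is sent along with the accumulated history, so the AI can
# refine its earlier output rather than start from scratch.

class ChatThread:
    def __init__(self, generate):
        self.generate = generate  # callable: full context string -> reply text
        self.history = []         # alternating (role, text) turns

    def ask(self, prompt):
        self.history.append(("user", prompt))
        context = "\n".join(f"{role}: {text}" for role, text in self.history)
        reply = self.generate(context)
        self.history.append(("ai", reply))
        return reply

# Dummy generator: replies with a version number based on user turns seen.
thread = ChatThread(lambda ctx: f"essay v{ctx.count('user:')}")
thread.ask("Write a life story of Lincoln.")
second = thread.ask("Focus it on the Civil War.")
```

The design point is that the "chain" lives entirely in the accumulated context; the underlying model is stateless between calls.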
13) Real-time Internet-connected generative AI blooms
Some generative AI is based on scanning the Internet only up to a particular cutoff date; for ChatGPT, the cutoff was established in 2021. There are several reasons for this. One is that the computational effort to access the Internet in real time and feed this into the generative AI for producing real-time results can be onerous. People are expecting to get their generated results in seconds, whereas real-time computational scanning of the Internet could push this into minutes, hours, or even days. Another concern is that foul content in real-time Internet-accessed info might not be as readily caught, whereas with a generative AI that is stopped in time you have a better chance of cornering those aspects during training. And so on.
The good news is that where there is a will, there is a way. All kinds of computational trickery and cleverness can be used to contend with a desire to do real-time Internet-connected generative AI. You’ll see this start happening in 2023.
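The cutoff-versus-live tradeoff described above amounts to a routing decision. Everything in this sketch is hypothetical: the cutoff date is illustrative and `live_search` stands in for whatever real-time Internet access an AI maker might eventually bolt on.

```python
from datetime import date

CUTOFF = date(2021, 12, 31)  # illustrative training cutoff, per the 2021 example

def answer(question, topic_date, live_search=None):
    """Sketch of the tradeoff: serve answers from the frozen training
    snapshot when the topic predates the cutoff; otherwise fall back
    to a slower, less-vetted live lookup (a hypothetical capability)."""
    if topic_date <= CUTOFF:
        return f"[from training snapshot] {question}"
    if live_search is None:
        return "Sorry, that topic is past my training cutoff."
    return f"[live, unvetted] {live_search(question)}"

offline = answer("Who won the 1860 election?", date(1860, 11, 6))
online = answer("Any AI news today?", date(2023, 1, 5),
                live_search=lambda q: "stub result")
```

The "[live, unvetted]" tag reflects the second concern above: content fetched in real time has not been through the training-time screening.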
14) Sensible coupling of Internet search and generative AI flourishes
I previously covered in one of my columns on generative AI and ChatGPT that some are loudly sounding an alarm that Google and other search engine companies will be forced out of business due to generative AI ostensibly taking over the Internet search chore. I pointed out that this is one of those Mark Twain moments whereby the death of search engines is quite prematurely being proffered. My viewpoint is that we will have a side-by-side coupling of search engines and generative AI. Recall too that I’ve pointed out the unnerving facet of generative AI producing so-called AI hallucinations and other foul outputs. We don’t expect our search engines to do this, and thus it makes sense for now to keep generative AI in the sidekick role, such that it doesn’t taint an already well-respected, highly trusted, and hugely ad-revenue-generating search engine.
15) Zero-Shot generative AI glimmers and simmers
Most of today’s generative AI was crafted by doing extensive scanning across the Internet. This takes gobs of computational processing. Generally, if you bring up a topic in your text prompt that was not previously covered by some scanned content, you will get either a brisk and potentially vacuous output or simply an indication that the generative AI has nothing to say about that topic. Another approach entails what is sometimes referred to as zero-shot. This suggests that an AI app can pontificate on a topic without necessarily having been extensively pre-trained on that topic. You can expect to see zero-shot generative AI getting a glimmer and simmering into something substantive during 2023.
16) Personalization and cascading of generative AI is the next mighty hook
Most of the generative AI apps tend to be generic with respect to the person using the AI app. The AI app doesn’t know you. Anything you enter is treated the same as if entered by anyone else. Some of the generative AI apps do allow you to save a thread that you can return to later on, thus, in a modest way, allowing for a modicum of awareness of your presence. I’m expecting that in 2023 we will see a personalization capacity added to generative AI. Your particular interests and style of prompting will become a pattern tracked by the AI app and be used to hone responses to how you prefer them to be composed. Also, you can expect that the cascading of one generative AI’s output into another generative AI will become relatively popular and commonplace in 2023.
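One way such a personalization layer might work is to track recurring topic words across a user's prompts and fold the top interests into each new request. This Python sketch is entirely hypothetical, since the prediction above concerns a capability that does not broadly exist yet, and the length-based word filter is a crude stand-in for real topic extraction.

```python
class PersonalizedSession:
    """Hypothetical personalization layer: remember what a user keeps
    asking about and prepend those interests to future prompts."""

    def __init__(self):
        self.interest_counts = {}

    def personalized_prompt(self, prompt):
        for word in prompt.lower().split():
            if len(word) > 4:  # crude stand-in for real topic extraction
                self.interest_counts[word] = self.interest_counts.get(word, 0) + 1
        top = sorted(self.interest_counts,
                     key=self.interest_counts.get, reverse=True)[:3]
        # Fold the user's recurring interests into the request.
        return f"[user interests: {', '.join(top)}] {prompt}"

session = PersonalizedSession()
session.personalized_prompt("Tell me about Lincoln and trains")
styled = session.personalized_prompt("More Lincoln details please")
```

The same prefix-augmentation idea also illustrates cascading: one AI's output (here, the interest summary) becomes part of another AI's input.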
17) Breakthroughs appear for generative AI speed and efficiencies
A thorny issue facing the AI makers that are allowing their generative AI to be used by the general public is the question of the costs involved. In the instance of ChatGPT, the cost is currently being eaten by the AI maker during this freebie sampling period. Part of the indicated basis for having opted to cut off the sign-ups of ChatGPT at a million people was that the cost per transaction is notable and chewing up the dough. In addition, as these generative AI apps get bigger and tussle with more and more data, along with possibly being real-time Internet-connected, there is going to be a fervent need for speed. From a computer scientist purist perspective, finding ways to make generative AI faster and more computationally efficient is exciting and handy. The same kinds of breakthroughs in this particular domain can likely apply to a wide variety of other computing platforms and systems. Expect this to play out in 2023.
18) Synthetic data emerges from the shadows and does good
There is real data and there is synthetic data. An example of real data would consist of scanning the Internet for information such as the life of Lincoln. Synthetic data is when you essentially make up data for the purposes of training your AI. Rather than bearing the cost and effort of scanning for real data, you sometimes do things to create data that will be plentiful at the push of a button. In a sense, it is faked data, though usually based on some grounding that is real. The use of synthetic data for aiding the training and use of generative AI will be an emerging trend during 2023.
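The make-up-data-at-the-push-of-a-button idea can be illustrated with a template-based generator. This is a sketch with illustrative facts and templates; real synthetic-data pipelines are far more sophisticated, but the essence is the same: plentiful fabricated examples grounded in a small kernel of real information.

```python
import random

# Sketch of synthetic-data generation: rather than scanning the
# Internet for real sentences, fabricate plentiful training examples
# from templates grounded in a few real facts.

FACTS = [("Lincoln", "was the 16th U.S. president"),
         ("Lincoln", "delivered the Gettysburg Address")]
TEMPLATES = ["Did you know that {who} {fact}?",
             "Historians note that {who} {fact}."]

def synthesize(n, seed=0):
    """Produce n synthetic training sentences; seeded for reproducibility."""
    rng = random.Random(seed)
    return [rng.choice(TEMPLATES).format(who=who, fact=fact)
            for who, fact in (rng.choice(FACTS) for _ in range(n))]

samples = synthesize(100)
```

Two facts and two templates already yield four sentence variants on demand; scale the fact and template lists up and the "push of a button" claim becomes clear.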
19) Flimsy generative AI starts to spoil the barrel
This is a sad face topic about generative AI. Now that generative AI has gotten its fifteen minutes of fame via the likes of ChatGPT, a lot of other AI makers want to get into the same game. To make things abundantly clear, there are indeed already many bona fide generative AI apps that have been kept quietly under wraps, often because the tech vendor worried it would get into trouble if the AI’s potential propensity to sometimes produce foulness was revealed once put into public use. Those generative AI apps are soon going to be marketed so that everyone will know there is more than one mover and shaker in town. The limelight will shine upon many.
This, though, will also have a downside. Some generative AI will be rushed into the public eye. These flimsy versions are going to be ripe for producing foul outputs. People will get upset. Whether society can distinguish one maker’s generative AI from another’s will be a big question. The flimsy versions could spoil the whole barrel. We will need to wait and see how this plays out in 2023.
20) Wild mishmash of generative AI apps with scams included
I have more than just a sad face on this one; it is a tooth-grinding, grimacing face. In an upcoming column, I will be discussing how generative AI can be used for evildoing, such as having the AI produce malware for you. All you need to do is tell the generative AI to do so, even if you have no clue how to code up malware on your own, and the generative AI app will produce the devious code. I realize that maybe this seems techie-nerdish, so let’s consider other evil acts. Suppose you want to try to scam somebody, such as with those emails that tell people you are a prince with lots of money and all you need is their bank account number so you can send them a zillion dollars to hold for you. Generative AI can help you come up with and devise such essay-based scams. I guess that’s why we can’t have any new toys.
21) Monetization of generative AI struggles for dough
I have an important question for you. How will people be able to make money off of providing generative AI apps? We don’t yet know for sure that these are truly money-making apps. Would you be willing to pay a transaction fee or a subscription fee to have access to a generative AI app? Maybe yes, maybe not. Some people are playing with generative AI just for kicks, so the cost would presumably need to be on par with other forms of online fun such as online games. Others are trying to use generative AI more seriously for work-related tasks. For example, in my AI Lab, we have been experimenting with and adapting generative AI for use by attorneys in performing legal tasks such as putting together a legal brief. Lots and lots of ideas are floating around about how to leverage generative AI to make a buck. The odds are that 2023 is going to be the show-me-the-money year as to whether there are viable ways to turn generative AI into real-world money-makers. Follow the money, as they say.
22) Adverse carbon footprint undercuts generative AI accolades
I’ve previously discussed in my columns that one worry about the burgeoning use of AI is that devising and running these computationally intensive apps consumes a lot of computer processing power, see my analysis at the link here. To the surprise of many, there is a carbon footprint associated with AI. We need to weigh the benefits of AI against the societal costs of the carbon footprint. Expect to see AI Ethics and AI Law rising to bring greater awareness about the AI carbon footprint, including potentially enacting laws about the need to report on and publicly disclose carbon production regarding AI and what is being done to mitigate it. Nothing in life is free.
23) Generative AI toxic transgressions bode for grand condemnations
I’ve already mentioned several times herein that the generative AI of today can produce foul outputs. All it will take is for some of the generative AI in 2023 to produce outrageously biased commentary or other foulness and a societal backlash might suddenly erupt. When this happens, and it will, I am at least hoping that added attention to AI Ethics will be a kind of silver lining in that cloud. You can also bet that the impetus to forge new AI-related laws will likely be sparked by these unsavory occurrences. Regulators and legislators will get riled up.
24) European Union AI Act (AIA) enacts with ballyhoo and gotchas
I’ve written extensively about the EU AI Act (AIA) that is being drafted and revised. This will be by far the most significant new law about AI and will have monumentally sweeping effects. I am betting it will finally get enacted in 2023. Among the many controversies about this law is that it takes a risk-based approach to classify AI systems. In brief, there are four classifications consisting of (a) unacceptable risk, (b) high risk, (c) limited risk, and (d) minimal risk. Some believe that this is the best way to cope with AI from a legal perspective. Others disagree and assert that the risk framework is going to be untenable and create all manner of confusion and trickery by those that make or field AI. I have my own opinions on this, as discussed in my column postings. In any case, if indeed the EU AIA passes in 2023, you can certainly anticipate that there will be a whole lot of ballyhoo involved. We will all be waiting with bated breath to see how things go. Will this law aid in putting a lid on AI For Bad, or will it become an unintended killer of AI For Good, or end up somewhere in between? Stay tuned to 2023.
25) USA Algorithmic Accountability Act sits but stirs into consciousness
The United States has been slowly and gradually tussling with a bill in Congress that would be a large-scale AI law, known as the Algorithmic Accountability Act. I’ve discussed the draft, and also covered other associated federal and state AI-related legislative efforts. You might especially find of interest my analysis of the AI Bill of Rights that was released by the White House in 2022, see the link here. If the EU AIA passes in 2023, the odds are that this will awaken and fuel the US legislative efforts. At the same time, some will press for waiting to see how things go with the EU AIA before proceeding headlong into a USA AI law. In part, the US push would be accelerated if any big-time generative AI or other notable AI snafus caught widespread attention across the country. All in all, my prediction is that though the US effort will be stirred, I don’t see much movement forward until after the 2024 elections. Until then, the hustle and bustle of dealing with a large-scale AI law won’t seem worthwhile, unless of course some demonstrative bad thing happens with AI and an outcry makes the pursuit a sudden hot priority.