What's coming in AI in 2024

We’ve picked a few more specific trends. Here’s what to watch out for in 2024. (Come back next year and check how we did.)

We boldly stepped into the unknown last year and forecast the direction of technology. How did we do? We anticipated the emergence of multimodal chatbots in 2023, such as Google DeepMind's Gemini and OpenAI's GPT-4, which can work with text, images, and audio. We also anticipated stricter regulation, which the EU's AI Act and Biden's executive order delivered. On open-source startups taking on big tech, we were only partly right: open models gained ground, but big players like OpenAI and Google DeepMind continued to dominate. And though there are not yet any AI-developed drugs on the market, the pharmaceutical industry's transformation is still in its early stages.

Now we're doing it again, looking ahead to 2024. We're skipping the obvious trends, such as regulators' evolving role and the continued dominance of large language models. Instead, we're focusing on specific developments to watch.

First off, personalized chatbots are taking off. In 2024, leading companies like Google and OpenAI are promoting user-friendly platforms that let anyone build a customized chatbot without knowing how to code, which could cause the number of niche AI applications to soar. Imagine a customized AI model helping a real estate agent write property listings with ease. The catch is dependability: these models can be hacked and frequently produce false or biased output.

In 2024, the spotlight is on tech giants who've poured funds into generative AI. Now it's time for them to show that the investment pays off. Leading the charge, Google and OpenAI are taking a similar approach: making it easier for everyday folks to jump into the AI world. Both companies have rolled out user-friendly platforms that let people tweak sophisticated language models and whip up their own mini chatbots, tailored to their personal needs. The best part? You don't need to be a coding whiz to do it. They've even launched web tools that turn anyone into a budding AI app creator.

This year could be a game-changer for generative AI, especially for those who aren’t tech-savvy. We’re expecting to see a surge in people experimenting with a plethora of tiny AI models. Today’s advanced AI models, like GPT-4 and Gemini, are not just about words—they’re about visuals and videos too. Think about the endless possibilities this opens up. Take a real estate agent, for example. They could upload texts from old property listings, tweak a robust AI model to churn out similar content with just a click, and even add videos and photos of new properties, asking the AI to craft a compelling description.

However, the real test is in the reliability of these models. Often, language models can get creative and not in a good way—they make stuff up. Plus, they come with their own set of biases and are surprisingly easy to hack, especially with internet access. Tech companies haven’t quite figured out how to tackle these challenges yet. Once the initial excitement dies down, they’ll need to come up with ways to help users navigate these issues.


Generative AI’s next big thing is expected to be video.

It’s incredible how quickly the extraordinary becomes the norm. Just last year, the first generative models capable of creating lifelike images burst onto the scene, quickly turning from groundbreaking to everyday tech. Tools like OpenAI’s DALL-E, Stability AI’s Stable Diffusion, and Adobe’s Firefly have been churning out stunning images that range from fashion-forward popes to award-winning artwork. However, it’s not all rosy; alongside the creative wonders, we’ve also seen a rise in less savory content like derivative fantasy art and problematic stereotypes.

Now, the cutting edge of this technology is shifting from static images to dynamic videos. Imagine everything we’ve seen in image generation, but on a grander, more complex scale.

Just a year ago, we saw the first attempts at this with generative models piecing together short video clips. The initial results were a bit rough—think blurry and uneven. But the technology is evolving rapidly.

Runway, the startup behind these generative video models and co-creator of Stable Diffusion, is constantly updating its offerings. Its latest tool, Gen-2, can produce videos that are just a few seconds long, but with impressive quality. Some of the results are almost on par with what you might expect from a studio like Pixar.

Runway’s not stopping there; they’ve launched an AI film festival, featuring experimental films created with AI. This year’s festival has a prize pool of $60,000, and the top 10 films will be showcased in New York and Los Angeles.

Major film studios, including big names like Paramount and Disney, are paying attention. They’re integrating generative AI into their workflows, from syncing actors’ lip movements in different languages to pushing the boundaries of special effects. For instance, the 2023 movie “Indiana Jones and the Dial of Destiny” featured a remarkably younger-looking Harrison Ford, thanks to deepfake technology.

Beyond Hollywood, deepfake technology is making waves in marketing and training. Synthesia, a company based in the UK, has developed tools that can transform a single acting performance into an endless array of deepfake avatars, each delivering any script at the push of a button. Impressively, nearly half of the Fortune 100 companies now use their technology.

However, this rapid advancement raises significant concerns, especially for actors. The use and potential misuse of AI in studios were central to last year’s SAG-AFTRA strikes. As Souki Mehdaoui, an independent filmmaker and co-founder of Bell & Whistle, points out, “The craft of filmmaking is fundamentally changing.” This evolution in technology is not just about new tools; it’s reshaping the entire industry.

AI-generated election disinformation will be everywhere 

As recent elections show, deepfakes and AI-generated disinformation are becoming a serious problem, and that is especially worrying because the 2024 elections are projected to draw a record number of voters. We've already seen these tools put to use in campaigns. In Argentina, for example, two presidential candidates used AI to create images and videos attacking their opponents. In Slovakia, the race heated up when deepfaked audio clips appeared to capture a liberal, pro-European candidate making disparaging remarks, including about beer prices. On the American political stage, Donald Trump has cheered on a group that uses AI to generate racist and sexist memes.

It’s hard to say how much these strategies have changed the outcome of elections, but the fact that they are becoming more common is scary. It’s getting harder and harder to tell the difference between real and fake online, which is a worrying trend in a political climate that is already tense and split.

A few years ago, only people with serious technical skills could make a deepfake. These days, generative AI has made it surprisingly easy and accessible, and the results are getting more and more convincing. The technology works so well that it can even trick news organizations you trust: AI-generated images purporting to show the war between Israel and Gaza have flooded stock image databases such as Adobe's.

This year will be a pivotal one for people trying to stop the spread of disinformation. The tools we have to find and handle deepfakes are still very new: watermarking schemes such as Google DeepMind's SynthID are optional and not always accurate, and social media sites are often slow to remove false information. Detecting and rooting out AI-generated disinformation will be a significant, real-time challenge.

Robots that multitask

Roboticists are drawing on recent progress in generative AI to build robots that can handle a wider range of tasks.

In recent years, AI development has shifted from many smaller models, each specialized for a different job like drawing, labeling, or identifying images, to big models that can do all of these things and more. For example, researchers have been able to adapt OpenAI's GPT-3 to many new tasks by providing just a few specific examples, a technique known as few-shot learning. This has let it solve coding problems, write movie scripts, and even do well on high school biology tests. More advanced models like GPT-4 and Google DeepMind's Gemini similarly handle both language and image tasks.
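The few-shot adaptation described here boils down to prepending a handful of worked examples to the model's prompt. A minimal sketch of the idea, using chat-style messages (the helper function and example listings below are hypothetical illustrations, not any vendor's API):

```python
# Few-shot prompting: prepend worked (input, output) examples so a
# general-purpose model picks up a specific task. The examples and
# helper below are illustrative only.

def build_few_shot_messages(system_prompt, examples, query):
    """Assemble a chat-style message list from (input, output) example pairs."""
    messages = [{"role": "system", "content": system_prompt}]
    for user_text, assistant_text in examples:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": query})
    return messages

examples = [
    ("Listing: 2-bed flat, river view",
     "Charming two-bedroom flat with sweeping river views..."),
    ("Listing: 3-bed house, large garden",
     "Spacious three-bedroom family home with a generous garden..."),
]
messages = build_few_shot_messages(
    "You write compelling property descriptions.",
    examples,
    "Listing: studio, city centre",
)
# The resulting list can then be sent to a chat-completion endpoint,
# e.g. (assumption: OpenAI Python client v1+ installed, API key set):
#   from openai import OpenAI
#   reply = OpenAI().chat.completions.create(model="gpt-4", messages=messages)
```

The point is that no retraining happens: the examples ride along in the prompt, and the model generalizes from them on the fly.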

This all-in-one model method is now being used in robotics. This means that robots might not just be able to do one thing, like flipping pancakes or opening doors, but could do a lot of different things in the future. In 2023, there were big changes in this area.

DeepMind took a step in this direction with RoboCat, released in June, an improvement on Gato from the previous year. The model learns to control different robotic arms through trial and error, generating its own training data, rather than being programmed for one specific set of hardware.

In October, DeepMind released RT-X, a new general-purpose robot model, along with a large new training dataset built with the help of 33 university labs. The RAIL lab at the University of California, Berkeley, and other leading research groups are exploring similar technology.

The lack of data is a big problem in this field. Generative AI can draw on huge datasets of text and images from the internet, but robot learning has no comparable source. To fix this, Lerrel Pinto of New York University and his team are developing ways for robots to learn by trial and error, collecting their own training data as they go. In a more grassroots project, Pinto has even recruited volunteers to collect video data from around their homes using iPhones mounted on trash grabbers. Meanwhile, big companies like Meta have started releasing large datasets for robot training, such as Ego4D.

The approach is already paying off in self-driving cars. Startups like Wayve, Waabi, and Ghost are pioneering a new wave of self-driving technology that controls a vehicle with a single, all-encompassing model instead of several smaller, task-specific ones. This has let smaller companies compete with established players like Cruise and Waymo; Wayve, for instance, is now testing its self-driving cars in London's complicated traffic. If it works there, robots in many fields could be about to take a big step forward.
