Zebra’s AI trend predictions for 2024

AI will be a dominant theme at LEAP 2024, which Zebra Technologies will attend (stand H1A.G30, with partner Ingram Micro) to share its eight AI predictions for the year.

ChatGPT made waves last year, and other large language models (LLMs) also hit the headlines, showing seemingly human-like engagement and multimodal content generation across video, images, audio and text. Meanwhile, other LLM leaders showed that generative AI could run on-device, eliminating the need for costly cloud services.

Google DeepMind launched the powerful multimodal Gemini model and GNoME for materials discovery. We also heard increased discussion around AI ethics, with high-profile summits and legislation aiming to guide the development and use of AI.

The next 12 months will certainly see some of last year’s advances move forward in sophistication and real-life use cases. And we’ll see new things around research and application that we haven’t even thought about yet. The art of the possible is becoming the art of the real at greater speed.

2024 will be the year of the AI agent and the self-improving AI agent. However, as AI scientist Dr. Ian Watson has noted, the AI agent isn’t new, with important papers on the topic dating back as far as 1995. What is new are the capabilities that AI agents will gain thanks to LLMs. AI agents are tools that possess a level of autonomy beyond that of machine learning models or traditional computer programmes: they can sense, learn, respond and adapt to new situations, and make decisions with little human intervention.

Open-source frameworks like LangChain allow LLMs to interact with software and databases to complete tasks. OpenAI has released an Assistants API that serves as an agent, and has launched GPTs, a platform for creating custom AI agents, along with a GPT Store. Meanwhile, a research team at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) has recently developed an approach that uses AI models to conduct experiments on other systems and explain their behaviour. Their method uses agents built from pretrained language models to produce intuitive explanations of computations inside trained networks.
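
To make the tool-use pattern behind these agent frameworks concrete, here is a minimal sketch using the OpenAI Chat Completions API with function calling. The query_inventory tool, the SKU example and the model choice are illustrative assumptions, not anything Zebra or LangChain prescribes.

```python
# Minimal tool-using agent sketch (the query_inventory tool, its SKU lookup
# and the model choice are hypothetical examples for illustration).
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def query_inventory(sku: str) -> dict:
    """Hypothetical backend call the agent is allowed to invoke."""
    return {"sku": sku, "units_in_stock": 42}

tools = [{
    "type": "function",
    "function": {
        "name": "query_inventory",
        "description": "Look up the current stock level for a product SKU",
        "parameters": {
            "type": "object",
            "properties": {"sku": {"type": "string"}},
            "required": ["sku"],
        },
    },
}]

messages = [{"role": "user", "content": "How many units of SKU A-100 are in stock?"}]
response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages, tools=tools)
message = response.choices[0].message

if message.tool_calls:  # the model decided to act rather than answer directly
    call = message.tool_calls[0]
    result = query_inventory(**json.loads(call.function.arguments))
    messages.append(message)
    messages.append({"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)})
    # Second pass: the model turns the tool result into a natural-language answer
    final = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages, tools=tools)
    print(final.choices[0].message.content)
```

The same sense–decide–act loop generalises to databases, device APIs or business systems; frameworks such as LangChain and the Assistants API mainly add orchestration, memory and tool management around it.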

Could 2024 be the year when everyone gets an AI copilot? Generative AI use cases will continue to become mainstream across industries. Like GitHub’s Copilot for developers and Microsoft 365 Copilot for desk workers, we’ll see more copilots come to market, serving the needs of front-line workers in retail, healthcare and manufacturing, for example.

When it comes to LLMs, we’ll see small, fine-tuned, task-specific LLMs outperform general-purpose models like GPT, PaLM and Claude for most enterprise use cases. This trend is expected to continue, potentially leading to the emergence of an industrial copilot that leverages LLMs to streamline and optimise industrial processes. By fine-tuning models to understand and generate the language of industrial workflows, these LLMs can serve as intelligent assistants or copilots, offering valuable insights, automating routine tasks and enhancing overall workforce efficiency.

The integration of task-specific LLMs into enterprise software and hardware holds potential for diverse applications across industries. In manufacturing, these models could aid in quality control and predictive maintenance. In retail, they might augment retail assistants’ product knowledge, help generate compelling product descriptions, improve online customer interactions and provide personalised shopping recommendations based on individual preferences and trends.
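
As a rough illustration of what ‘small and task-specific’ means in practice, the sketch below fine-tunes a small open causal language model on a plain-text corpus of domain documents using Hugging Face Transformers. The model name, corpus file and hyperparameters are assumptions for illustration, not Zebra’s recipe.

```python
# Minimal fine-tuning sketch for a small, task-specific causal LLM
# (model name, corpus file and hyperparameters are illustrative assumptions).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "distilgpt2"  # assumption: any small open base model would do
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2-style models have no pad token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Assumption: a plain-text file of domain documents (e.g. maintenance logs, SOPs)
dataset = load_dataset("text", data_files={"train": "industrial_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="industrial-copilot",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # Pads batches and copies input_ids into labels for the causal LM loss
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("industrial-copilot")
```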

We’ll see the rise of on-device and on-premises generative AI processing, with smaller models that require less power, cut cloud costs and improve privacy, and with AI deployment and monitoring services becoming commoditised. The shorter training time of smaller models also allows faster iteration through different experiments: swiftly identifying effective approaches and refining models becomes more feasible, contributing to the improvement of LLMs. And cloud cost will continue to be the biggest profit-and-loss line item, with cost control a major focus of the CTO office.
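
For a sense of how lightweight on-device inference can be, here is a minimal sketch that runs a small open model locally with the Hugging Face pipeline API. The model and prompt are illustrative; in practice a quantised, domain-tuned model would typically be used.

```python
# Minimal on-device text generation sketch (model name and prompt are illustrative).
from transformers import pipeline

# Loads a small open model and runs entirely on local hardware: no cloud API call.
generator = pipeline("text-generation", model="distilgpt2")

prompt = "Troubleshooting steps for a jammed label printer:"
print(generator(prompt, max_new_tokens=60)[0]["generated_text"])
```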

Some legal disputes about generative AI and IP will continue and others will emerge, but so will more standards, including from technology companies like Microsoft, to guide the ethical development and use of generative AI. Alignment and regulation of AI will be in the spotlight, with safety debates becoming more mainstream. The short-term risks of generative AI, like disinformation and deepfakes, will become more apparent, particularly in regions with elections, conflicts and other turmoil. Addressing these risks will be prioritised over the long-term, extinction-level risks from AI suggested by some AI and technology leaders.

Larger companies will increasingly realise that they must accelerate their AI strategy to stay relevant and competitive. This will result in a consolidation of AI start-ups, particularly those that have been around for three to eight years and operate in the more mature areas of AI.

Companies will also make the democratisation of AI and machine learning a strategic priority. Whether a desk worker, retail assistant, engineer, finance professional, developer or designer, workforces will be upskilled and given learning resources and ready-made, easy-to-use AI tools that take on some tasks and support workers in other areas of their role. Eventually, this approach will be standard rather than a differentiator in the battle for talent and skilling the workforce. Those who can introduce and use new copilots will put themselves at an advantage.

There will be advances in multimodal AI, with increasingly ‘perfect’ content across audio, video, image and text, along with the integration of multimodal systems into robots and vehicles, as seen with the likes of Volkswagen and BMW, which are introducing LLMs into car systems. Voice- and LLM-based operating systems will create new and improved ways to interact with devices, such as hands-free operation and smarter voice assistants. We’re already seeing early signs of this with the recently launched Humane Ai Pin and Rabbit R1.

Open-source AI will mature, particularly around generative AI. The open-source landscape has grown considerably in the past 12 months, with powerful examples like Meta’s Llama 2 and Mistral AI’s Mixtral models. This could shift the dynamics of the AI landscape in 2024 by giving smaller, less-resourced entities access to sophisticated AI models and tools that were previously out of reach.

Transparency will become a pivotal advantage for open-source LLMs. With open source, we gain security from the collaborative efforts of the community: the scrutiny of many eyes on the code facilitates the swift detection and resolution of security vulnerabilities. Open-source approaches can also encourage ethical development, as that same scrutiny makes it more likely that biases and bugs are identified. However, experts have expressed concerns about the misuse of open-source AI to create disinformation and other harmful content. In addition, building and maintaining open source is difficult even for traditional software, let alone for complex and compute-intensive AI models.
