Artificial intelligence (AI) technology trends evolve quickly, and staying on top of these trends is a critical part of planning for those developing, adopting, or investing in AI. From our research and conversations with many AI and machine learning stakeholders, we've identified six key trends that we expect will have a significant impact on the capabilities of AI over the next several years.
These include the following:
1. Graph machine learning is unlocking insights from complex, interconnected datasets
Compared to traditional relational databases, graph databases are better suited to modeling highly interconnected data defined by the relationships between data points (called nodes), such as networks. Graph machine learning goes a step further and trains machine learning models on graph data to enable a variety of use cases, such as categorizing nodes or predicting new relationships between nodes. These models are quickly moving out of research settings and starting to have an impact on complex real-world applications. For instance, Google Maps is using them to model road networks to improve the accuracy of its real-time traffic predictions. Clients should expect to see many other graph machine learning applications on the horizon, whether for knowledge graphs, content recommendation systems, or modeling protein-protein interaction networks, among many others.
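The core idea behind many graph machine learning models can be sketched in a few lines: each node updates its representation by aggregating features from its neighbors. The tiny graph and feature values below are made up for illustration and do not reflect any vendor's implementation:

```python
import numpy as np

# Hypothetical 4-node graph: edges 0-1, 1-2, 2-3; each node has a 2-d feature.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.array([[1.0, 0.0],
              [0.9, 0.1],
              [0.1, 0.9],
              [0.0, 1.0]])

def propagate(A, H):
    """One round of neighbor averaging: each node's new feature is the
    mean of its own feature and its neighbors' features."""
    A_hat = A + np.eye(len(A))             # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)
    return (A_hat / deg) @ H               # row-normalized aggregation

H1 = propagate(A, H)
```

Stacking several such aggregation rounds, with learned weights in between, is roughly what graph neural networks do to produce node embeddings for classification or link prediction.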
2. TinyML is bringing machine learning to lightweight sensors and devices
TinyML, or tiny machine learning, is a subset of edge computing focused on AI-enabled sensors and devices. It consists of machine learning hardware and software capable of performing on-device sensor analytics at low power, typically in the mW range and below. TinyML is attracting rapidly growing interest and is starting to unlock new applications and processes thanks to its minimal computing and power requirements. For example, Shoreline IoT is developing a plug-and-play predictive maintenance system that can run for years on small batteries, reducing the installation time and upkeep requirements compared to traditional predictive maintenance systems. Companies like Reality AI and Edge Impulse are also developing end-to-end platforms to help customers build and deploy custom TinyML models.
3. Transformer neural network architectures are transforming NLP and starting to impact other areas of AI
Transformer neural networks for natural language processing (NLP) process words in relation to all the other words in a sentence, rather than one by one in order, and have produced state-of-the-art results on many tasks over the past few years. These models are already significantly improving search engines and question-answering systems and are enabling the development of powerful language models like GPT-3. While these models will continue to enhance NLP systems, they have also proven to be a valuable general-purpose architecture for modeling many other types of sequences; Transformer-based systems have recently been used in applications like computer vision, multimodal text-image applications, and drug discovery.
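The mechanism that lets every word attend to every other word is scaled dot-product attention. A minimal numpy sketch is below; the toy "token" embeddings are random and purely illustrative, and real Transformers add learned projections, multiple heads, and many stacked layers:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: every position is compared against
    every other position, and the output is a weighted mix of values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # pairwise similarity between positions
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V, weights

# Toy example: 3 "tokens" with 4-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out, w = attention(X, X, X)             # self-attention: Q = K = V = X
```

Because the same all-pairs comparison works for any sequence of embeddings, not just words, the architecture transfers naturally to image patches, molecules, and multimodal inputs, which is what is driving its spread beyond NLP.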
4. Hybrid AI systems are combining the best of machine learning models and physics-based models while minimizing their weaknesses
While sometimes seen as competing approaches, machine learning and physics-based models often have complementary strengths and weaknesses. For example, machine learning models can operate with an incomplete system characterization and have minimal computational costs during inference, while physics-based models require a complete system characterization and carry heavy computational costs to run. On the other hand, physics-based models are more easily interpretable and can make predictions outside of the historical data range, while predictions from machine learning models are often difficult to interpret and are limited to what the model has seen in its historical training data. Researchers and engineers are still exploring best practices for combining the two techniques (e.g., using physics-based models to generate training data for machine learning models), but companies like Cognite and Beyond Limits are early adopters bringing hybrid AI to industries like oil and gas.
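One hybrid pattern mentioned above, using a physics-based model to generate training data for a fast machine learning surrogate, can be sketched simply. The "physics" here is a deliberately trivial textbook formula (projectile range), and the surrogate's hand-crafted feature happens to reproduce the physics exactly; real systems use expensive simulators and imperfect surrogates:

```python
import numpy as np

# Physics-based model: range of a projectile on flat ground (illustrative only).
def physics_range(v, angle_deg, g=9.81):
    theta = np.radians(angle_deg)
    return v**2 * np.sin(2 * theta) / g

# Step 1: generate training data by running the physics model.
rng = np.random.default_rng(1)
v = rng.uniform(5, 50, size=200)
angle = rng.uniform(10, 80, size=200)
y = physics_range(v, angle)

# Step 2: fit a cheap ML surrogate (least squares on engineered features)
# that can replace the physics model at inference time.
features = np.column_stack([v**2 * np.sin(2 * np.radians(angle)),
                            np.ones_like(v)])
coef, *_ = np.linalg.lstsq(features, y, rcond=None)
pred = features @ coef
```

The payoff in practice is speed: once trained, the surrogate answers in microseconds where a full physics simulation might take minutes, while the physics model remains available for cases outside the training distribution.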
5. MLOps or machine learning operations tools are enabling organizations to more efficiently develop, deploy, and monitor machine learning models in production
As we discussed in our report on scaling AI, more than 85% of machine learning models developed inside organizations never make it into production, where they can start adding value. MLOps tools address this challenge by helping organizations reduce friction in each part of the AI model life cycle, including development (e.g., tools to help track model training experiments), deployment (e.g., tools to help serve machine learning models), and monitoring (e.g., tools to identify bias or data drift). The number of MLOps offerings is growing quickly, and clients developing machine learning models internally should start seeking out vendors to solve their team's specific MLOps needs.
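As a concrete sketch of the monitoring side, a common drift check compares a feature's live distribution against its training distribution with the population stability index (PSI). The data below is synthetic, and the 0.1/0.25 thresholds are widely used rules of thumb rather than a standard:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population stability index between two 1-d samples.
    Rule of thumb: < 0.1 stable, > 0.25 significant drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)   # avoid log(0) in empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Synthetic example: training data vs. stable and drifted production data.
rng = np.random.default_rng(42)
train = rng.normal(0.0, 1.0, size=5000)
stable = rng.normal(0.0, 1.0, size=5000)
drifted = rng.normal(0.8, 1.0, size=5000)   # the feature's mean has shifted
```

MLOps platforms automate exactly this kind of check across every model input, alerting teams before a silently drifting feature degrades predictions in production.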
6. Low-code, no-code, and AutoML tools are steadily democratizing machine learning capabilities
As the AI and machine learning space has matured, the underlying tools have become easier to use by a wider audience. While capabilities like AutoML don't fully remove the need for data scientists, companies like DataRobot have gained substantial traction by offering drag-and-drop AutoML tools for tabular data. Likewise, cloud players like Google Cloud as well as startups like Primer and Levity are starting to offer no-code machine learning tools for creating NLP and computer vision models. Related to this, companies like Streamlit and Plotly Dash offer low-code tools for rapidly building front-end interfaces to create sharable web apps for data and machine learning applications. Together, these tools are lowering the barriers to creating and sharing machine learning applications, and clients looking to bring AI capabilities to more parts of their organization should start exploring such tools.
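At its core, what AutoML tools automate is a search-and-validate loop: try candidate models, score each on held-out data, and keep the best. The toy version below searches only over polynomial degree on synthetic data; commercial tools search over features, architectures, and hyperparameters with far more sophistication, but the loop is the same idea:

```python
import numpy as np

# Synthetic dataset with a quadratic ground truth plus noise.
rng = np.random.default_rng(7)
x = rng.uniform(-3, 3, size=120)
y = 0.5 * x**2 - x + rng.normal(0, 0.3, size=120)

# Hold out part of the data for validation.
x_train, y_train = x[:80], y[:80]
x_val, y_val = x[80:], y[80:]

def val_error(degree):
    """Fit a candidate model (polynomial of given degree) and score it
    on the held-out validation set."""
    coefs = np.polyfit(x_train, y_train, degree)
    return float(np.mean((np.polyval(coefs, x_val) - y_val) ** 2))

# The "AutoML" step: pick the candidate with the lowest validation error.
best_degree = min(range(1, 6), key=val_error)
```

Wrapping this loop in a drag-and-drop interface, plus automated feature engineering and deployment, is essentially what lets non-specialists produce serviceable models.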
Of these trends, Transformer-based NLP, MLOps, and democratized machine learning tools are more mature, and clients should plan on adopting these tools for near-term impact. Graph machine learning, TinyML, and hybrid AI are less mature but present an opportunity for companies to gain an edge by tracking, investing in, and adopting these technologies early on. Additionally, while this list focused on trends that are just beginning to emerge, technologies that Lux has previously covered, such as novel AI chips and privacy-focused AI techniques, are also poised to have a significant impact in the next few years and should remain on your radar.