
Extracting Value from AI, from the Data Scientist Through the CEO: Pitfalls & Best Practices

Lux Research Digital Team
April 23, 2021

Artificial intelligence (AI) remains a hot topic and priority for a massive community that includes corporations, startups, governments, VCs, and more. There is no arguing its impact, as evidenced by tremendous leaps in areas like facial recognition and language translation. At Lux, we work with many players in the physical industries that are aiming to use AI to transform operations, products, and business models. As we reflect on 2020, it's fair to say that AI has not yet had the promised impact in these industries that more traditional tech companies have seen.

Having worked with many companies and observed the dynamics of their projects and initiatives, we remain convinced that this lack of return is not because the technology isn't impactful or suitable, but more often because of a poor understanding of the bottlenecks to deployment and poorly set expectations and goals. In fact, we have seen some companies more interested in using AI for its own sake than in identifying the challenges AI is best positioned to solve. Indeed, you'd be surprised at how much money has been invested in AI without this clarity.

We have written a lot about what AI can do, how to choose the right problems, and how to scale it. At this stage, it's useful to consolidate these learnings into a concise set of pitfalls to avoid and best practices to employ, to level-set expectations and approaches at all levels of any organization. They are:

  1. If your data is dirty (i.e., inaccurate, incomplete, inconsistent), your results will be too (expect data wrangling to occupy at least 45% of your data scientists' time).

  2. Small data sets compromise the insights available.

  3. The lack of in-house data scientists/ML experts to develop and deploy models is a big barrier, as "plug and play" AI solutions are rarely that and often require a lot of customization.

  4. Accuracy is not guaranteed, so you must understand where errors may arise and how to deal with them.

  5. Explainability is a huge obstacle but critical to earning trust and buy-in from skeptics or novices.

  6. AI doesn't work for every system but does where the past is a good predictor of the future (i.e., not the stock market).

  7. AI does not always mean real-time insights, which require considerable technical investment to achieve.

  8. Bias can creep into an AI system over time, so deployments must be reviewed consistently and tweaked or retrained as needed (see the sketch after this list).
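
To make the last point concrete, here is a minimal sketch of the kind of recurring review we mean: compare the feature distributions a model was trained on against recent production data and flag drift that may warrant retraining. The column names, thresholds, and the choice of a two-sample Kolmogorov-Smirnov test are illustrative assumptions, not a prescription.

```python
# Minimal drift check: flag numeric features whose recent (production)
# distribution has shifted away from the training distribution.
# Thresholds and the KS test are illustrative choices, not a prescription.
import numpy as np
import pandas as pd
from scipy.stats import ks_2samp


def drifted_features(train: pd.DataFrame, recent: pd.DataFrame,
                     p_threshold: float = 0.01) -> list:
    """Return numeric columns whose recent values no longer look like training."""
    flagged = []
    for col in train.select_dtypes(include=np.number).columns:
        result = ks_2samp(train[col].dropna(), recent[col].dropna())
        if result.pvalue < p_threshold:  # distributions likely differ
            flagged.append(col)
    return flagged


# Synthetic example: 'pressure' drifts upward in production data.
rng = np.random.default_rng(0)
train = pd.DataFrame({"temperature": rng.normal(70, 5, 5000),
                      "pressure": rng.normal(30, 2, 5000)})
recent = pd.DataFrame({"temperature": rng.normal(70, 5, 1000),
                       "pressure": rng.normal(33, 2, 1000)})

print(drifted_features(train, recent))  # typically flags ['pressure']
```

In practice, a check like this runs on a schedule alongside accuracy monitoring, and a flagged feature triggers investigation before any retraining.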

For practitioners, some of this may be second nature; however, making AI successful clearly requires a unified vision of where it makes sense to use AI and how to maximize the chances of positive results, for experts and non-experts alike. This means everyone from the data scientist to the CEO to the board needs to know what it takes to be successful with AI and to have the right expectations for the outcome.