US companies are failing in their AI data strategies, according to research by Hitachi Vantara. Nearly two in five (37 percent) US and Canadian organizations identified data as their “top concern” when implementing AI projects, but few IT leaders are taking steps to ensure proper data quality and management, jeopardizing the success of AI initiatives.
This is worrying for AI outcomes generally, given that the US is supposedly the world's most advanced market for the adoption of commercial AI projects.
The research, conducted among 250 firms, found that almost three-quarters (74 percent) are testing and iterating on AI models in real time, without controlled environments, leaving room for “significant risk” and “potential vulnerabilities,” said Hitachi Vantara. Only 3 percent report using sandboxes to manage AI experimentation, which raises concerns around the potential for security breaches and flawed data outputs.
With AI leading to a dramatic increase in the amount of data storage that organizations require, managing and tagging data to ensure quality for use in AI models is becoming increasingly difficult.
Highlighting the disconnect in proper data management, only 38 percent of survey respondents said that data is available when they need it “the majority of the time.” Even fewer (33 percent) said the majority of the outputs of their AI models are accurate.
This may not be surprising, given that few are taking steps to improve their data quality. Nearly half (47 percent) don’t tag data for visualization, only 37 percent are enhancing training data quality to explain AI outputs, and more than a quarter (26 percent) don’t review datasets for quality.
The research also found that 61 percent of organizations are focused on developing general, larger LLMs rather than smaller specialized models, despite large-scale models being far more resource-intensive to train, consuming up to 100 times more power.
Not surprisingly, therefore, AI strategies often lack ROI analysis and sustainability considerations. Only 30 percent of respondents are prioritizing ROI, and just 32 percent consider sustainability a priority in AI implementation.
“Many people are jumping into AI without a defined strategy or outcome in mind because they don’t want to be left behind, but the success of AI depends on several key factors, including going into projects with clearly defined use cases and ROI targets,” said Simon Ninan, senior vice president of business strategy at Hitachi Vantara. “Success also means investing in modern infrastructure that is better equipped for handling massive datasets in a way that prioritizes data resiliency and energy efficiency. In the long run, infrastructure built without sustainability in mind will likely need rebuilding, to adhere to future sustainability regulations.”
In other news, Hitachi Vantara is collaborating with Virtana, a specialist in AI-powered monitoring and observability for hybrid infrastructure environments, to help customers reduce costs in complicated hybrid cloud environments.
The partnership integrates the Virtana Platform suite with the Hitachi EverFlex infrastructure-as-a-service (IaaS) portfolio to deliver “advanced” consumption-based infrastructure solutions for workload management and hybrid cloud operations, said the pair.
“By integrating AI-driven insights with IaaS capabilities, Hitachi Vantara with Virtana is streamlining operations and reducing costs, while helping businesses enhance their cloud efficiency and agility,” promises Hitachi.