The Tabular AI Gap: Why Generative AI Is Leaving Value on the Table

Generative AI gets the headlines. Predictive AI gets the ROI. The problem is that building prediction models on structured data has always been slow and expensive, one bespoke pipeline per use case. Tabular foundation models change the equation: a single pretrained model that generalises across tasks. The foundation moment that transformed language and vision has just arrived for tabular data.
March 12, 2026 | Business
Theo Marcolini

If you work in tech, finance, or any data-driven industry, you've probably noticed something strange over the past three years. Every conference, every board meeting, every LinkedIn post treats AI as if it means one thing: generative AI. ChatGPT, Midjourney, coding assistants, content generators.

But here's the thing: generative AI is a subset of AI. A loud, visible, well-funded subset. And conflating the two is costing companies real money.

There's another branch of AI that has been quietly delivering measurable ROI. It doesn't write poetry. It doesn't generate images. It predicts what's going to happen next, and it does so using the structured, tabular data that powers virtually every critical business decision.

It's called Predictive AI. And it's about to get a massive upgrade.

What is tabular data?
Tabular data is structured information organised in rows and columns, the records living in your ERP, CRM, and data warehouse. It is the primary language of business operations, and it drives the prediction tasks that generate measurable EBIT impact.

The Great AI Confusion

Let's start with what the data actually says.

McKinsey's 2025 Global Survey reports that 88% of organisations now use AI regularly in at least one business function, up from 78% the previous year. That sounds impressive until you dig into S&P Global's Voice of the Enterprise data, which shows that 60% of organisations have implemented generative AI, while only 51% use pattern-recognition (predictive) models.

Generative AI, which has existed in its current form for roughly three years, has already surpassed predictive AI in adoption rates, despite predictive models having a 15-year head start.

Menlo Ventures throws the picture into even sharper relief: enterprises invested $37 billion in generative AI in 2025, a 3.2x surge from $11.5 billion in 2024. Meanwhile, the global predictive analytics market reached just $22 billion that same year, growing a modest 16% YoY. GenAI has not only overtaken Predictive AI in absolute spending but is outpacing it with triple the growth rate, fuelled by aggressive adoption of LLMs and Copilots.

The result is a strange inversion. The AI that captures the most attention and capital is not the AI that captures the most value.

Five Reasons GenAI Ate the AI Narrative

Understanding why this happened matters, because it reveals where the opportunity lies.

Generative AI created a consumer market. Predictive AI never did. Before ChatGPT, AI was infrastructure. It ran fraud detection, recommendation engines, demand forecasting. Useful, but invisible. ChatGPT was the first time regular people could interact with AI directly, and that turned it into a cultural phenomenon, not just a technology shift. There are now more than ten generative-AI-native applications generating over $1 billion in annual recurring revenue (Menlo Ventures, 2025). No consumer has ever knowingly interacted with a predictive model.

Generative AI produces visible outputs. Predictive AI produces invisible decisions. When a generative model writes an email or creates an image, the output is tangible, shareable, tweetable. When a predictive model correctly flags a fraudulent transaction or forecasts demand within 2% accuracy, nobody notices. The decision just happens. Generative AI gets attention for both its successes and failures. Predictive AI gets attention for neither.

The venture capital narrative machine is structurally biased. Of the $252.3 billion in total corporate AI investment in 2024 (Stanford AI Index), $33.9 billion went specifically to GenAI startups. VCs need narratives that excite LPs (Limited Partners). "We built an AI that writes code" raises a Series B. "We improved stock return prediction by 2%" doesn't.

The consulting complex reinforces it. Every major consultancy publishes annual GenAI reports that become reference material for C-suite strategy. McKinsey, Deloitte, and BCG all frame their AI surveys almost entirely around generative and agentic AI. When the people advising CEOs only talk about generative AI, CEOs only invest in it.

Even failure feeds the narrative. GenAI failures are visible, public, and reputationally costly, and that visibility keeps the technology in constant public debate. Predictive AI rarely earns a headline either way.

The ROI Reality Check

This is where it gets uncomfortable for GenAI advocates.

MIT's NANDA initiative published a report in August 2025 based on 150 interviews, 350 employee surveys, and 300 public AI deployments. Their finding: only roughly 5% of generative AI pilot programs achieve rapid revenue acceleration. The vast majority stall with little to no measurable P&L impact.

That's not an outlier finding. The broader AI project failure rate sits at 70-85% according to research from MIT and RAND Corporation (Fullview, 2025). Perhaps more telling: 42% of companies abandoned most of their AI initiatives in 2025, up sharply from 17% in 2024. Companies aren't just failing to scale Generative AI. They're actively pulling back.

Deloitte's 2025 survey of 1,854 executives across Europe and the Middle East captured the mood perfectly. One telecom executive told them: "Everyone is asking their organisation to adopt AI, even if they don't know what the output is. There is so much hype that companies are expecting it to just magically solve everything."

Meanwhile, McKinsey's own data shows that only 39% of organisations report enterprise-level EBIT impact from their AI initiatives.

Now compare that with predictive AI's track record. According to DemandSage's 2025 analysis, financial institutions that adopted predictive analytics saw ROI of 200-500% within the first year. 77% of financial institutions now use some form of predictive analytics, up from 37% the prior year. In healthcare, 71% of nonfederal acute care hospitals have integrated predictive AI into their electronic health records (Datagrid, 2025).

The pattern is clear. GenAI captures imaginations. Predictive AI captures value.

Why Tabular Data Is Where the Real P&L Impact Lives

To understand why this gap exists, you need to understand how enterprise data actually works.

The commonly cited statistic is that 80-90% of enterprise data is unstructured (Gartner, IDC). Emails, documents, Slack messages, meeting recordings, images, videos. This is the territory where generative AI excels: processing and generating unstructured content.

But here's what nobody says out loud. The remaining 10-20% of enterprise data, the structured, tabular data in your databases, warehouses, and spreadsheets, is where the decisions that move your P&L actually live. Financial forecasting. Demand planning. Risk scoring. Pricing optimisation. Churn prediction. Predictive maintenance. Clinical trial analysis.


These aren't flashy use cases. You'll never see a viral LinkedIn post about a demand forecasting model. But they're the decisions that determine whether a company hits its targets or misses them.

Said differently: generative AI is optimised for the 80% of data with indirect business impact. Predictive AI is optimised for the 20% with direct P&L impact.

And until recently, that 20% was being served by the same ML infrastructure we've used for over a decade: XGBoost, LightGBM, random forests. Good tools. Proven tools. But tools that require significant manual effort: feature engineering, hyperparameter tuning, pipeline maintenance, specialised ML teams.

That's changing.

Enter Tabular AI

Deep learning has already reshaped entire industries. Natural language processing, computer vision, speech recognition. In every case, the pattern was the same: train a large model on diverse data, let it learn general patterns, then apply it to specific problems with minimal adaptation. That "foundation model" approach replaced years of domain-specific engineering with systems that generalise out of the box.

But one major data modality was left behind: tables.

Structured, tabular data, the rows and columns that live in every enterprise's databases, never got its foundation model moment.

The reason is architectural. Large language models process sequences: one token after another, predicting what comes next. That works brilliantly for text, where meaning is carried by syntax and context. But a table encodes information differently. Relationships between columns are non-sequential and non-linear. A customer's churn risk isn't a narrative. It's a multi-dimensional intersection of transaction frequency, contract terms, support history, and regional economic conditions.

As Annie Lamont, Co-Founder and Managing Partner at Oak HC/FT, put it: "Structured, relational data has yet to see the benefits of the deep learning revolution" (VentureBeat, Feb 2026).

That revolution is now arriving.

In January 2025, researchers published a paper in Nature demonstrating a tabular foundation model that outperformed all previous methods on datasets with up to 10,000 samples, achieving in 2.8 seconds what traditional ensemble methods needed 4 hours to match.

The paper has since accumulated over 427,000 accesses and 415 citations. At ICML 2024, a position paper argued that tabular foundation models should be an explicit research priority, noting the systematic underinvestment in the domain relative to its real-world importance.

Why Tabular AI Is the Next Platform Layer in Enterprise ML Infrastructure

Large language models were built to process language, not tables. When you try to make an LLM work with a spreadsheet, you face real constraints. You have to serialise the table into text, losing structural information in the process. You're limited by context windows, meaning most enterprise datasets (millions or billions of rows) vastly exceed what an LLM can handle in a single pass. And you lose the properties that make tabular analysis valuable in enterprise settings: reproducibility, precise aggregation, rigorous handling of missing values and outliers.

Traditional ML has the opposite problem. As Neuralk's CEO Alexandre Pasquiou put it: "Under current traditional ML methods, each new use case requires its own pipeline, its specific model, its own team, and numerous adjustments, often for mixed results. The problem is therefore not solely the accuracy of these models, but the cost and effort required to develop and maintain them at enterprise scale."

That gap is exactly the opportunity. Tabular foundation models are architected from the ground up to understand rows, columns, data types, and the relationships between variables. They can be applied out of the box. The result is models that can:

  • Deliver predictions on new datasets without task-specific retraining
  • Handle messy real-world data (missing values, outliers, mixed types) natively
  • Scale to enterprise data volumes
  • Produce reproducible, auditable results

For enterprises that have been running predictive models for years, this represents a step change. Instead of building and maintaining dozens of bespoke models, each requiring its own engineering pipeline, a single foundation model can generalise across prediction tasks. As Pasquiou explained: "A single pre-trained ('foundation') model can be deployed across dozens of use cases, while achieving better accuracy than a specialised model."
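The calling pattern behind that claim can be sketched in miniature. The toy below is not a real tabular foundation model: real TFMs such as TabPFN run a transformer pretrained on large numbers of synthetic tables, whereas here a simple 1-nearest-neighbour rule stands in for the pretrained predictor. The function name `predict_in_context` is hypothetical. The point is purely the interface: labelled rows are passed as context at prediction time, so one callable serves tasks with different schemas and there is no task-specific training step.

```python
# Toy stand-in for a tabular foundation model's calling pattern:
# labelled rows are supplied as context in the prediction call itself,
# so one "model" answers many tasks without a per-task training step.
# The 1-nearest-neighbour rule is only a placeholder for the real
# pretrained predictor.
import math

def predict_in_context(train_rows, train_labels, query_rows):
    """Label each query row with the label of its closest context row."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [
        train_labels[min(range(len(train_rows)),
                         key=lambda i: dist(q, train_rows[i]))]
        for q in query_rows
    ]

# Task 1: a churn-style classification with two features per row.
churn_preds = predict_in_context(
    [(1.0, 0.2), (0.1, 0.9)], ["stay", "churn"], [(0.9, 0.3)]
)

# Task 2: a fraud-style task with a different schema.
# Same callable, no retraining, no bespoke pipeline.
fraud_preds = predict_in_context(
    [(10.0, 1.0, 0.0), (500.0, 0.0, 1.0)], ["ok", "fraud"], [(480.0, 0.1, 0.9)]
)

print(churn_preds, fraud_preds)  # ['stay'] ['fraud']
```

The contrast with the XGBoost-era workflow is the absence of a `fit` phase per use case: the economics described above come from moving the learning into pretraining, leaving deployment as a single inference call.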

For enterprises that haven't yet adopted predictive AI because of the complexity barrier, tabular foundation models lower the entry point dramatically.

What This Means for Enterprise AI Strategy

Rebalance your AI portfolio, don't just add more AI.

The tension between generative and predictive AI running through this article is not an argument for abandoning one in favour of the other. It's an argument for proportionality. Use GenAI for your unstructured data: it can drive real productivity gains, especially in content, communication, and document workflows. But approach it with governance, realistic ROI expectations, and a clear eye on where it tends to stall. The 95% pilot failure rate is a cost of misalignment, not a verdict on the technology.

For the 20% of data that actually moves your P&L, rethink your tooling.

Most Heads of Data Science know generative AI isn't the answer for their prediction pipelines. What they need to understand is how tabular foundation models change the economics of predictive AI itself. Compared to XGBoost and LightGBM pipelines, TFMs offer a fundamentally different value proposition:

  • More use cases in production: a single TFM generalises across prediction tasks, removing the need to build and maintain separate bespoke models for each one.
  • Faster time to production: TFMs reduce or eliminate task-specific feature engineering and retraining cycles.
  • Lower maintenance burden: fewer pipelines mean less drift monitoring, fewer failure points, and reduced operational overhead.
  • Smaller team requirements: the complexity barrier drops significantly, allowing smaller data science teams to cover more ground.
  • Performance gains: TFMs like NICL have demonstrated accuracy advantages over traditional ensemble methods.

Three things to consider:

  1. Audit your data landscape honestly. What percentage of your most impactful business decisions are driven by structured, tabular data? For most enterprises, the answer is "the vast majority." Your AI investment should be proportional to that.
  2. Don't confuse visibility with value. Generative AI is visible because it produces things you can see and share. Predictive AI is invisible because it produces decisions and forecasts that get embedded into operations. The fact that nobody tweets about your demand forecasting model doesn't mean it isn't your highest-ROI AI initiative.
  3. Watch the TFM space closely. Fundamental's $255M Series A raise is a signal, not an anomaly. The combination of academic validation (Nature publications, ICML position papers), venture capital conviction (unicorn valuations), and growing enterprise demand means this category will move fast. The companies that understand tabular foundation models early will have a significant advantage over those still treating all AI as a generative AI question.


If you're a Head of Data Science / Research responsible for prediction pipelines at scale, let's talk. I'm happy to show you what a single TFM looks like in practice. Connect with me on LinkedIn.

At Neuralk AI, we build tabular foundation models for structured data prediction. We work with enterprises in finance, industry, and beyond to deploy predictive AI that delivers measurable results on the data that actually runs their business. If you're exploring how TFMs can fit into your AI strategy, get in touch.

Sources:

  • McKinsey, "The State of AI in 2025" (Global Survey, Nov 2025)
  • S&P Global, "Voice of the Enterprise: AI & Machine Learning, Use Cases 2025"
  • Menlo Ventures, "2025: The State of Generative AI in the Enterprise" (Jan 2026)
  • DemandSage, "Predictive AI Statistics 2025" (Jan 2026)
  • MIT NANDA Initiative, "The GenAI Divide: State of AI in Business 2025" (Aug 2025)
  • Deloitte, "AI ROI: The Paradox of Rising Investment and Elusive Returns" (Oct 2025)
  • Stanford HAI, "AI Index Report 2025" (Apr 2025)
  • Hollmann et al., "Accurate predictions on small data with a tabular foundation model," Nature 637 (Jan 2025)
  • TechCrunch, "Fundamental raises $255M Series A with a new take on big data analysis" (Feb 2026)
  • VentureBeat, "Beyond the lakehouse: Fundamental's NEXUS bypasses manual ETL" (Feb 2026)
  • BusinessWire, "Fundamental Announces $255M in Funding" (Feb 2026)
  • Datagrid, "26 AI Agent Statistics: Adoption + Business Impact" (Mar 2025)
  • Gartner / IDC estimates on structured vs. unstructured enterprise data