Excel is not a data platform

Data foundation

CONTENTS

  • The spreadsheet that slowly replaces the system
  • Why spreadsheets are so attractive
  • When flexibility breaks scalability
  • The hidden fragmentation of data
  • Analysis that starts from scratch every time
  • The difference between analysis and architecture
  • Why data platforms work differently
  • The role of Capture

The spreadsheet that slowly replaces the system

In almost every industrial organization, there is a spreadsheet that has become more important than anyone intended.

It usually starts small. Someone wants a quick overview of downtime by line, or a comparison between shifts. Data gets pulled from different systems, dropped into a table, enriched with a few formulas. It works. Colleagues start using it. New tabs appear. A few months later, the spreadsheet has become the reference point for a weekly meeting.

And before anyone notices, something strange has happened. The spreadsheet is no longer just a tool alongside the system landscape. It has started replacing the system itself.

Why spreadsheets are so attractive

The appeal is easy to understand. Spreadsheets offer immediate control. Anyone can import data, adjust formulas, and build charts without depending on IT or integration projects. They are flexible, fast, and intuitive. Ideal for exploration and quick analysis.

But that same flexibility makes them unsuitable as the foundation for industrial data.

The moment a spreadsheet starts playing a structural role in how an organization understands performance, problems emerge that are not immediately visible.

When flexibility breaks scalability

The first problem appears when the same analysis needs to run across multiple sites.

Every spreadsheet contains implicit assumptions. Column names are chosen locally. One plant may define a short stop as anything under five minutes, while another uses ten. The spreadsheet works perfectly within its original context.

But the moment teams try to compare performance across sites, confusion starts to appear. The numbers look comparable. The definitions behind them are not.
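The effect of those local definitions can be sketched in a few lines. The thresholds (five and ten minutes) follow the example above; the stop durations are invented for illustration:

```python
# The same five stop events, classified by two plants with different
# local definitions of a "short stop".

stops_minutes = [3, 7, 12, 4, 9]  # durations of the same stop events

def count_short_stops(durations, threshold):
    """Count stops strictly shorter than the plant's local threshold."""
    return sum(1 for d in durations if d < threshold)

plant_a = count_short_stops(stops_minutes, threshold=5)   # plant A: under 5 min
plant_b = count_short_stops(stops_minutes, threshold=10)  # plant B: under 10 min

print(plant_a, plant_b)  # identical data, different short-stop counts
```

Both counts are "correct" inside their own spreadsheet; they only stop being comparable once they meet in the same report.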

The hidden fragmentation of data

A second problem arises when spreadsheets start multiplying. In many organizations, several versions of the same analysis circulate simultaneously: one for production, one for quality, one for maintenance. Each contains slightly adjusted calculations and local corrections.

At that point, multiple versions of the truth start to coexist.

One team discusses an OEE figure of 82 percent, while another arrives at 79 percent for the same line. Not because of errors in the data, but because of subtle variations in calculation logic. One type of downtime was included in one spreadsheet and excluded in another.
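A minimal sketch of how that divergence arises, using the availability component of OEE. The numbers and downtime categories are invented; the point is that the two spreadsheets disagree only on whether changeover time counts as downtime:

```python
# One line, one shift, two spreadsheets with slightly different calculation logic.

planned_minutes = 480
downtime = {"breakdown": 45, "changeover": 30, "material_wait": 25}

def availability(planned, downtime_minutes):
    """Availability as the share of planned time actually spent running."""
    return (planned - downtime_minutes) / planned

# Spreadsheet 1 counts every category as downtime.
avail_all = availability(planned_minutes, sum(downtime.values()))

# Spreadsheet 2 treats changeovers as planned time, not downtime.
loss_without_changeover = sum(v for k, v in downtime.items() if k != "changeover")
avail_partial = availability(planned_minutes, loss_without_changeover)

print(round(avail_all * 100, 1), round(avail_partial * 100, 1))  # 79.2 vs 85.4
```

Neither sheet contains a data error; the gap comes entirely from one category being included in one calculation and excluded from the other.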

Because spreadsheets do not enforce centralized logic, every analysis grows organically in its own direction. The result is not just fragmentation of data, but fragmentation of interpretation.

Analysis that starts from scratch every time

Another issue appears when teams want to answer new questions using existing data. Understanding how process parameters influence performance across multiple lines requires data from several sources: production orders, sensor data, quality measurements, downtime records.

In an architecture built around spreadsheets, every new question requires the same preparation all over again. Engineers export data, combine it in new tables, and rebuild calculations from scratch.
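The recurring assembly step looks roughly like this: separate exports, and the link between them rebuilt by hand on a shared key. The field names and records are invented for illustration:

```python
# Two exports that each cover part of the picture.
downtime_export = [
    {"machine": "L1", "start": "2024-05-01T08:10", "minutes": 12},
    {"machine": "L2", "start": "2024-05-01T09:05", "minutes": 7},
]
batch_export = [
    {"machine": "L1", "start": "2024-05-01T08:10", "batch": "B-1042"},
    {"machine": "L2", "start": "2024-05-01T09:05", "batch": "B-1043"},
]

# The link between downtime and batch is reconstructed by matching keys manually.
batches = {(r["machine"], r["start"]): r["batch"] for r in batch_export}
combined = [
    {**event, "batch": batches.get((event["machine"], event["start"]))}
    for event in downtime_export
]

print(combined[0])  # the context exists, but only after it is reassembled
```

Every new question repeats this step, because the connection between the sources lives in the analyst's head rather than in the data itself.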

Data never becomes an infrastructure that analysis can build on. It remains a collection of fragments that have to be assembled every single time.

The difference between analysis and architecture

This is the fundamental problem. Spreadsheets are excellent for analysis: exploring data, testing hypotheses, finding patterns. But analysis is not the same as architecture.

Architecture determines how data is organized, connected, and defined. When spreadsheets take over that role, an organization loses shared data context. Every analysis creates its own interpretation. Every comparison requires additional validation.

At small scale, that remains manageable. At multi-site scale, it becomes a structural limitation.

Why data platforms work differently

A data platform separates analysis from architecture. It first defines how data is structured, how assets are identified, and how events are recorded. Analyses, dashboards, and reports then build on top of that structure.

That means a downtime event remains linked to the machine on which it occurred, the production batch that was active, and the process parameters that applied at that moment. When an engineer performs an analysis later, that context is already there. No reconstruction needed.
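As a sketch of that idea, a downtime event can be modeled so that its context travels with it instead of being flattened into a standalone table. All names and values here are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class DowntimeEvent:
    machine_id: str    # the asset the event belongs to
    batch_id: str      # the production batch active at the time
    minutes: float
    parameters: dict = field(default_factory=dict)  # process values at that moment

event = DowntimeEvent(
    machine_id="line-1/filler",
    batch_id="B-1042",
    minutes=12.0,
    parameters={"temperature_c": 78.5, "speed_rpm": 1200},
)

# A later analysis reads the context directly; nothing has to be re-joined.
print(event.machine_id, event.batch_id, event.parameters["temperature_c"])
```

The difference with the spreadsheet approach is not the data itself but where the relationships live: in the structure, once, rather than in every analysis separately.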

Analysis becomes faster, more consistent, and reproducible.

The role of Capture

Capture is built on that principle. The platform organizes industrial data around a consistent structure in which assets, processes, and events remain connected. Data from different systems comes together in a shared context.

Dashboards, reports, and, yes, even spreadsheets can then operate on top of that structure without losing the underlying meaning of the data.

Excel remains a useful tool for analysis. It just stops being the place where industrial data takes its final form.

And that difference often determines whether an organization uses data as a tool, or as a foundation for real insight.