
Manufacturing’s toughest problems need a new kind of AI

Image by Pawel Czerwinski

11 Mar 2026

Karolina Bogacka, CEO & Co-Founder

Industrial reality check

In many ways, GenAI has already changed both how people work and how much they can get done. In software engineering, tools like Cursor and Copilot have reshaped everyday workflows. In media, the impact is just as visible, with more AI-assisted content in film and on social media. This uptake is not surprising – both domains have short feedback loops, and mistakes are usually cheap to fix (although Replit’s CEO may have a different take).


GenAI disruption by industry, Source: MIT NANDA 2025


Still, there’s a statistic you may have seen echoed across blog posts and thinkpieces: 95% of GenAI pilots fail (MIT/NANDA 2025). What gets quoted far less is the sector-by-sector breakdown buried in the same report, which shows Professional Services and Media at the forefront of GenAI pilot activity, with Advanced Industries and Energy & Materials trailing near the back. A closer read shows some predictive maintenance pilots in Advanced Industries, though without any major reshaping of supply chains, while in Energy & Materials even pure experimentation remains extremely limited.


Why is that? Is it because there aren’t many good GenAI use cases in “deep” industrial settings?


Not really. Take one very specific example: predictive maintenance in high-rate manufacturing – detecting when a machine part should be replaced so the production line stays healthy. It’s an old problem that still costs companies a lot of money. ABB cites survey results in which most decision makers say unplanned downtime costs at least $10,000 per hour, and many estimate it can reach $500,000 per hour. Even for larger manufacturers, this is a measurable, recurring pain point.


Is it then because manufacturing doesn’t compete on operational excellence, so there’s little reason to invest?


Also no. Manufacturing is famously competitive, especially in Europe, where labour costs are high, specialized SMEs defend narrow niches, and large players invest aggressively. Siemens, for example, reinvests around 8.3% of its revenue in R&D (about €6.3B yearly). Geopolitical pressure – strategic autonomy, supply-chain resilience, industrial capacity – pushes manufacturing companies in the same direction, that is, toward modernization, not away from it.


So what’s actually blocking progress? Reuters put it bluntly: manufacturers are slowing GenAI rollout because they’re worried about response accuracy and cost. In industrial settings, those aren’t abstract concerns. A hallucination in a chatbot is annoying. A hallucination on a manufacturing plant floor can cost the company hundreds of thousands of dollars.


Then there’s the issue of data. To do predictive maintenance well, you need to effectively integrate many sources (sensor streams, maintenance logs, MES context) and reason over them quickly. That’s a hard problem to solve with simple GenAI agents, especially once you add real-time constraints, conflicting sources of truth, and the need for auditability. As a result, McKinsey points to data management and data quality as major blockers to scaling AI in manufacturing.


And that’s where neurosymbolic AI comes in, seamlessly combining what ML is good at (user experience, flexibility, pattern recognition) with what industrial systems demand (constraints, traceability, and verifiable decision paths). Symbolic layers help integrate heterogeneous data, enforce rules, and support safety-aware workflows, while machine learning fills in what can’t be fully specified upfront. In the rest of this post, I will break down how neurosymbolic AI can revolutionize smart manufacturing.

Current problems

Smart manufacturing systems include many sources of heterogeneous, multi-rate data that spans everything from the shop floor to the business stack. At the lowest level, you have real-time PLC / SCADA signals from edge devices (describing alarms and states) and high-frequency sensor streams (e.g., vibration/acceleration and temperature) capturing how equipment behaves moment-to-moment. On top of that is the system-of-record layer (Manufacturing Execution System or MES, Manufacturing Operations Management or MOM, and ERP). It adds meaning to shop-floor data by tying each production event to what was supposed to happen, like the order being run or where the materials came from.
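To make the integration problem concrete, here is a minimal sketch of one common pattern: mapping raw records from different layers into a shared event schema before any reasoning happens. Everything here is a hypothetical illustration – `PlantEvent`, the raw field keys, and the source names are made up for this example, not taken from any specific PLC, MES, or CMMS product.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical unified event schema; field names are illustrative.
@dataclass
class PlantEvent:
    timestamp: datetime  # when the reading or record was produced
    source: str          # e.g. "plc", "sensor", "mes", "cmms"
    asset_id: str        # the machine or line the event refers to
    kind: str            # e.g. "alarm", "vibration", "production_order"
    payload: dict        # source-specific fields, kept as-is

def from_plc_alarm(raw: dict) -> PlantEvent:
    """Map a raw PLC alarm record into the shared schema."""
    return PlantEvent(
        timestamp=datetime.fromtimestamp(raw["ts"], tz=timezone.utc),
        source="plc",
        asset_id=raw["machine"],
        kind="alarm",
        payload={"code": raw["code"], "state": raw["state"]},
    )

def from_mes_order(raw: dict) -> PlantEvent:
    """Map an MES production-order record into the shared schema."""
    return PlantEvent(
        timestamp=datetime.fromisoformat(raw["started_at"]),
        source="mes",
        asset_id=raw["line_id"],
        kind="production_order",
        payload={"order_id": raw["order_id"], "material": raw["material"]},
    )
```

The point of the shared schema is that downstream layers (anomaly detection, the knowledge graph, audit logs) only ever see `PlantEvent`, so adding a new source means writing one more mapper rather than touching the whole stack.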


But structured data is only part of the picture. Plants also depend heavily on semi-structured and unstructured sources. Computerized Maintenance Management System (CMMS) work orders, event logs, and operator notes capture the causes behind the signals: what changed, what was attempted, and what failed. This messy but crucial context is often what drives the next engineering decision.


Traditional 5-layer industrial automation architecture, Source: Corbett, Felix. "How Industry 4.0 is Changing the Architecture of Industrial Automation", TTI Europe


Today’s solutions typically cover slices of the data landscape rather than the whole. You’ll see asset tracking and condition monitoring (with hard thresholds, alarms, dashboards), higher-level analytics (with OEE/downtime reporting and trend analysis), computer vision models built for narrow inspection or safety tasks, and physics-based models where equipment behavior can be explicitly represented. In practice, scaling any of these approaches beyond a narrow scenario runs into bottlenecks caused by data quality and readiness. Manufacturing data is often fragmented, inconsistently labeled, and difficult to standardize quickly enough for broad ML deployment – something McKinsey has highlighted as a central roadblock to scaling AI in manufacturing. This leads many “smart factory” stacks to end up as a patchwork of separate solutions clunkily stitched together instead of a complete end-to-end monitoring and decisioning system.


In smart manufacturing, there are four clearly identifiable major use cases for neurosymbolic AI. Root cause analysis links faults and problems with output quality to the operating conditions that triggered them, for example by connecting alarms with sensor patterns and production context. Predictive maintenance focuses on forecasting failures early enough to plan interventions and avoid unplanned downtime, typically by combining sensor signals describing the production line as it is now with maintenance history. Process automation pushes beyond reporting into real-time decision support (and, in controlled settings, automated actions) by turning streams and logs into trustworthy recommendations and even instant reactions. Finally, process optimization automatically adapts parameters to workload changes, minimizing the high costs of manually adjusting a production line.

Neurosymbolic AI in manufacturing – the story so far

Neurosymbolic ideas have been in “smart manufacturing” for far longer than the current GenAI wave – they just weren’t called neurosymbolic at the time. A good example is Siemens’ long-running use of constraint-based (symbolic) technologies to solve industrial configuration problems at scale. These technologies have been used in production for decades, with explicit constraints and deductive reasoning generating correct, consistent configurations in complex domains.


Timeline of the most popular approaches to predictive maintenance in smart manufacturing. Source: Hamilton, Kyle, and Ali Intizar. "Neuro-symbolic AI for Predictive Maintenance (PdM)--review and recommendations." arXiv preprint arXiv:2602.00731 (2026). Link


Looking at the historical context, common approaches to predictive maintenance have slowly evolved from a reactive “if it breaks, fix it” mindset into a comprehensive stack of neural and symbolic methods. Early deployments leaned heavily on explicit thresholds and rules (“if temperature exceeds 90 degrees, stop the line”) coupled with numerical physics- or engineering-driven models where degradation can be described with known equations. As sensor coverage expanded, teams adopted more data-driven solutions, which could pick up patterns directly from operational signals. ML approaches were also able to surpass hand-built rules on pure predictive accuracy, at least as long as no large shifts in the input data occurred. Unfortunately, in real plants conditions rarely remain unchanged. New product variants, seasonal effects, and gradual wear all influence the measurements. ML models can and will regularly face out-of-distribution situations – something the model hasn’t seen before – right when you most need them to behave safely.
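That evolution can be illustrated in a few lines: a hard engineering threshold of the classic kind, a simple data-driven anomaly score, and a conservative combination of the two. The threshold value, baseline statistics, and combination logic below are illustrative assumptions, not numbers from a real plant.

```python
# Illustrative sketch: explicit rule + data-driven score.
# All thresholds and statistics here are made-up examples.

TEMP_LIMIT_C = 90.0  # explicit engineering rule: alert above 90 °C

def rule_based_alert(temperature_c: float) -> bool:
    """The classic 'if temperature exceeds 90 degrees' rule."""
    return temperature_c > TEMP_LIMIT_C

def anomaly_score(readings: list[float],
                  baseline_mean: float, baseline_std: float) -> float:
    """Simple data-driven score: mean absolute z-score of recent readings."""
    if baseline_std == 0 or not readings:
        return 0.0
    return sum(abs(r - baseline_mean) / baseline_std
               for r in readings) / len(readings)

def should_flag(temperature_c: float, readings: list[float],
                baseline_mean: float, baseline_std: float,
                score_limit: float = 3.0) -> bool:
    # Conservative combination: the explicit rule always wins,
    # while the learned score can raise earlier warnings.
    return (rule_based_alert(temperature_c)
            or anomaly_score(readings, baseline_mean, baseline_std) > score_limit)
```

Note the asymmetry this sketch encodes: the rule is a hard guarantee that survives distribution shift, while the statistical score is only as trustworthy as the baseline it was computed from – exactly the trade-off the historical shift from rules to ML exposed.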


An example of a fan system failure diagram, Source: Hamilton, Kyle, and Ali Intizar. "Neuro-symbolic AI for Predictive Maintenance (PdM)--review and recommendations." arXiv preprint arXiv:2602.00731 (2026). Link


That’s where symbolic reasoning fills in the gaps. A useful example of the symbolic side is a fault tree – a structure that starts with a top event like “fan system failure” and branches into contributing causes connected by logic gates. Since the fan might fail because of power issues, control faults, or blockage, a fault tree helps engineers narrow down what to check first and makes the reasoning easy to audit. Rules, constraints, and causal structure provide a well-tested way to avoid arbitrary decisions and to encode “must-hold” requirements that don’t change with the dataset (safety interlocks, maintenance procedures). Predictive maintenance isn’t one task anyway – it’s a workflow: forecasting health (e.g., remaining useful life), detecting abnormal conditions, and recommending interventions. Combining neural models with knowledge graphs provides scalable guardrails and serves as next-level data infrastructure, giving you both flexibility (learning from data) and robustness (staying safe and interpretable when reality shifts).
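A fault tree of this kind can be sketched directly as code. The gate structure and event names below are illustrative assumptions loosely based on the fan example, not the exact model from the cited paper.

```python
from dataclasses import dataclass, field

# Minimal fault-tree sketch: AND/OR gates over basic events.
# Event names and structure are illustrative, not a real model.
@dataclass
class Gate:
    name: str
    kind: str                       # "AND", "OR", or "LEAF"
    children: list = field(default_factory=list)

    def fires(self, observed: set[str]) -> bool:
        """Does this (sub)tree's top event occur, given the set of
        basic events currently observed?"""
        if self.kind == "LEAF":
            return self.name in observed
        results = [child.fires(observed) for child in self.children]
        return all(results) if self.kind == "AND" else any(results)

# Top event: fan system failure, caused by power issues, a control
# fault, or blockage (OR gate). Power failure itself requires both
# mains and backup supply to be down (AND gate).
fan_failure = Gate("fan_system_failure", "OR", [
    Gate("power_failure", "AND", [
        Gate("mains_down", "LEAF"),
        Gate("backup_down", "LEAF"),
    ]),
    Gate("control_fault", "LEAF"),
    Gate("blockage", "LEAF"),
])
```

Because the tree is an explicit data structure rather than learned weights, every conclusion it produces can be traced back through named gates – which is precisely the auditability property the prose above calls out.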


However, bridging the gap between research and production is easier said than done. Neurosymbolic AI may be the clearest direction emerging from current research, but promising ideas alone are not enough to meet the demands of real-world deployment. Most papers validate the approach on narrow use cases, without tackling the data infrastructure, system design, and real-time performance needed for production-scale industrial environments. And while combining neural models with knowledge graphs is a powerful foundation, it does not by itself resolve the challenges of speed, integration, and scalability. In the end, execution is what determines success. To deliver real value, the solution must be engineered from the ground up to connect data sources, neural learning, and symbolic reasoning in a way that is seamless, reliable, and fast at scale.

Future solutions

The long-term vision is an agentic neurosymbolic maintenance system that runs like a constrained decision layer on top of plant data. Here, a concrete neurosymbolic maintenance agent can be described as four tightly connected layers. First, a neural model produces a risk estimate – say, the probability of a bearing failure within the next 200 hours – often incorporating information about the operating regime (load, speed, temperature) and pairing it with uncertainty. The model can also output intermediate artifacts that are useful for reasoning, such as health embeddings, anomaly scores, or degradation trajectories. Second, the symbolic layer performs diagnosis. It grounds those neural outputs in a knowledge graph that links tags to assets, assets to subsystems, and subsystems to known failure modes, then combines this structure with causal models (like fault trees or dependency graphs) and logical reasoning to rank hypotheses and perform root-cause analysis across components. Third, a constraint-aware planning layer turns the diagnosis into an executable intervention plan by applying temporal and operational constraints, including mutual-exclusion rules (“don’t shut down redundant pumps simultaneously”), ordering constraints (“inspection must precede replacement”), time windows (“replacement must occur within 24 hours of inspection”), and resource constraints (permits, staffing, tools, spare-part availability). Finally, an explanation layer produces a human-readable plan with an auditable justification that links the chosen actions back to the sensor evidence, the hypotheses considered, and the rules and constraints that were satisfied.
With the knowledge graph acting as a shared backbone, the agent can also reason at system scale – choosing sequences of actions that minimize downtime given inventory and staffing, determining what must be isolated before intervention, and accounting for how failures propagate across connected components – so the output is not just a forecast, but a decision you can verify, explain, and safely execute.
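As a rough sketch of how those four layers could hand off to one another, the toy pipeline below wires stand-in versions of each layer together. Every function, threshold, and data structure here is a hypothetical placeholder: a real system would back these with trained models, a knowledge graph store, and a constraint solver.

```python
# Toy end-to-end sketch of the four-layer agent described above.
# All logic is a stand-in; thresholds and names are illustrative.

def neural_layer(sensor_window: list[float]) -> dict:
    """Layer 1: toy risk estimate with a fixed uncertainty band."""
    risk = min(1.0, sum(sensor_window) / (len(sensor_window) * 100.0))
    return {"failure_prob_200h": risk, "uncertainty": 0.1}

FAILURE_MODES = {  # tiny stand-in for a knowledge graph
    "bearing_7": ["bearing_wear", "misalignment"],
}

def symbolic_layer(asset: str, risk: dict) -> list[str]:
    """Layer 2: rank failure-mode hypotheses for the asset."""
    if risk["failure_prob_200h"] < 0.5:
        return []
    return FAILURE_MODES.get(asset, [])

def planning_layer(hypotheses: list[str]) -> list[str]:
    """Layer 3: apply an ordering constraint – inspection
    must precede replacement."""
    if not hypotheses:
        return []
    return ["schedule_inspection", "reserve_spare", "replace_part"]

def explanation_layer(asset: str, risk: dict,
                      hypotheses: list[str], plan: list[str]) -> str:
    """Layer 4: human-readable, auditable justification."""
    return (f"{asset}: p(failure, 200h)={risk['failure_prob_200h']:.2f}; "
            f"hypotheses={hypotheses}; plan={plan}")

def maintenance_agent(asset: str, sensor_window: list[float]) -> str:
    risk = neural_layer(sensor_window)
    hypotheses = symbolic_layer(asset, risk)
    plan = planning_layer(hypotheses)
    return explanation_layer(asset, risk, hypotheses, plan)
```

Even in this toy form, the structure shows the key property of the design: the neural estimate never reaches an action directly – it must pass through the diagnosis and planning layers, which is where the constraints and the audit trail live.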


We see agentic neurosymbolic maintenance as a defining capability for the next era of European manufacturing, and a direction that is shaping NeverBlink’s platform from the ground up. Our ambition is to turn fragmented data, engineering knowledge, and operational constraints into trusted, executable decisions that strengthen resilience, maximize uptime, and improve operational performance at scale. We welcome conversations with domain experts who share this view, and where there is alignment, we would be excited to explore co-development opportunities.

