Separating Hype from Reality

The engineering automation space has a clarity problem. Marketing materials for engineering AI tools make broad claims — "AI that designs for you," "automated engineering at the push of a button" — that obscure what these systems actually do and what they require from engineering teams in order to work.

The reality is more nuanced and, for practical purposes, more useful. AI tools are solving specific, bounded problems in engineering automation with genuine effectiveness. Those specific applications deserve investment. The broader claims — AI that replaces engineering judgment, autonomous design systems, self-building configurators — remain either experimental or dependent on conditions so specific that they don't apply to most manufacturing contexts.

The key question isn't "should we use AI?" but "at which specific steps in our engineering process does AI deliver reliable, measurable value?" The answer to that question is tractable today.

Think of AI in engineering automation not as a system that designs or decides, but as a system that accelerates specific tasks that previously required significant human time: rule extraction, document parsing, pattern recognition, and output checking. These are meaningful — just narrower than the marketing suggests.

Where AI Adds Value: Rule Drafting

The highest-friction step in building engineering automation is rule capture — extracting the design logic that experienced engineers carry in their heads and formalising it as explicit, unambiguous rules that a configurator or automation system can execute.

This process traditionally involves weeks of interviews, workshops, and iterative drafting. An AI-assisted approach changes the dynamic in two ways.

First, large language models can parse engineering documentation — design standards, calculation sheets, legacy configuration guides, product specifications — and draft initial rule sets from that documentation. The drafts require expert review and correction, but starting from a draft is significantly faster than starting from a blank page.

Second, LLMs can assist in formalising tacit knowledge captured in interview transcripts or notes. An engineer explains their decision-making process; the AI drafts it as structured conditional logic. The engineer validates and refines. The collaboration is faster than either the human doing it alone or the AI doing it without human oversight.
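To make "structured conditional logic" concrete, here is a minimal sketch of what a drafted rule set might look like after the AI pass. The `DraftRule` schema, rule names, and parameters are all invented for illustration; the point is that each drafted rule records its source and an approval flag, and only expert-approved rules are allowed to fire.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical schema for an AI-drafted rule: the LLM proposes these from
# documentation or interview notes; an engineer reviews each one and sets
# approved=True before the configurator may execute it.
@dataclass
class DraftRule:
    name: str
    condition: Callable[[dict], bool]  # predicate over configuration parameters
    action: str                        # what the configurator should do
    source: str                        # where the draft came from
    approved: bool = False             # set True only after expert review

def applicable_actions(rules: list[DraftRule], params: dict) -> list[str]:
    """Return actions from approved rules whose conditions hold."""
    return [r.action for r in rules if r.approved and r.condition(params)]

rules = [
    DraftRule(
        name="heavy_duty_bearing",
        condition=lambda p: p["shaft_load_kN"] > 50,
        action="select bearing series HD",
        source="interview transcript 2024-03-12",
        approved=True,
    ),
    DraftRule(
        name="stainless_housing",
        condition=lambda p: p["environment"] == "washdown",
        action="use 316L housing",
        source="design standard DS-104 (draft, unreviewed)",
    ),
]

print(applicable_actions(rules, {"shaft_load_kN": 72, "environment": "washdown"}))
# ['select bearing series HD'] — the washdown rule matches but is unapproved
```

Keeping the drafts as reviewable data rather than free text is what makes the validate-and-refine loop fast: the engineer edits conditions, not paragraphs.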

In practice, this approach reduces the rule-capture phase of a configurator project by 30-50% — a significant time saving on a step that typically takes 2-4 months.

Generative Design: Real but Bounded

Generative design — algorithms that explore design spaces and produce geometries optimised against specified objectives — is real, mature, and available in mainstream CAD platforms (Autodesk Generative Design, Siemens NX Topology Optimisation, Altair OptiStruct). But the marketing often obscures the conditions required for it to work.

Generative design works well for:

  • Mass reduction in structural components where complex geometry is manufacturable (additive manufacturing or high-precision machining)
  • Heat exchanger and fluid flow optimisation where topology changes improve performance
  • Bracket and connector optimisation for aerospace and automotive, where weight is a premium constraint

Generative design does not replace engineering judgment on:

  • Load case definition — the algorithm optimises against the loads you specify; wrong loads produce wrong geometry
  • Manufacturing constraint specification — geometry that the algorithm produces without manufacturing constraints may be unbuildable
  • Functional requirements — the algorithm doesn't know your assembly requirements, your customer's maintenance access needs, or your manufacturing floor's capabilities
Weight reductions of 30-70% are achievable with generative design on structural components, but only when load cases are correctly defined, manufacturing constraints are properly specified, and the resulting geometry is manufacturable in the target process.

AI-Assisted Output Validation

Automated design systems can generate outputs at volume — hundreds or thousands of drawing variants, BOM configurations, or product specifications. Validating these outputs has traditionally required human review, which becomes a bottleneck at scale and reintroduces the error risk that automation was meant to eliminate.

AI-based validation tools address this by learning what correct outputs look like from historical data, then flagging anomalies in new outputs for human review. This is qualitatively different from rule-based validation (checking that specific values are within defined ranges) — it catches deviation from expected patterns that rule-based systems don't anticipate.

Practical applications include:

  • Drawing completeness checking: detecting missing dimensions, absent GD&T callouts, or incomplete title blocks
  • BOM anomaly detection: flagging material quantities or component selections that deviate significantly from historical norms for similar configurations
  • Dimensional consistency checking across assemblies
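As a simplified stand-in for the learned-pattern approach, the BOM check can be illustrated with a plain statistical baseline: flag any quantity more than a few standard deviations from its historical norm, and route components with no baseline to a reviewer. Component names and the threshold are illustrative.

```python
import statistics

def flag_bom_anomalies(historical, new_bom, z_threshold=3.0):
    """Flag components whose quantity deviates strongly from historical norms.

    historical: dict of component -> quantities seen in past configurations
    new_bom:    dict of component -> quantity in the output under review
    Returns (component, reason) pairs to route to a human reviewer.
    """
    flagged = []
    for component, qty in new_bom.items():
        past = historical.get(component)
        if past is None or len(past) < 2:
            flagged.append((component, "no historical baseline"))
            continue
        mean = statistics.fmean(past)
        stdev = statistics.stdev(past)
        if stdev == 0:
            if qty != mean:
                flagged.append((component, f"always {mean:g} before, now {qty:g}"))
        elif abs(qty - mean) / stdev > z_threshold:
            flagged.append((component, f"z-score {(qty - mean) / stdev:.1f}"))
    return flagged

history = {"M8 bolt": [24, 24, 26, 24, 25], "gasket": [2, 2, 2, 2]}
review_queue = flag_bom_anomalies(history, {"M8 bolt": 80, "gasket": 2, "seal kit": 1})
print(review_queue)  # M8 bolt and seal kit are flagged; gasket passes
```

A production tool would learn joint patterns across components and configurations rather than per-component thresholds, but the reviewer-routing structure is the same.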

These tools don't eliminate human review entirely, but they direct reviewer attention to the outputs most likely to contain errors — significantly reducing the total review time required.

Pattern Recognition on Historical Design Data

Engineering organisations that have been automating for several years accumulate valuable data: thousands of generated configurations, with their input parameters and output specifications. AI tools can extract patterns from this data that aren't visible to human analysts.

Common applications:

  • Demand pattern analysis: Which configurations are actually ordered? Which options are always selected together? This informs simplification of the product structure — eliminating options that are never ordered reduces configurator complexity significantly.
  • Pricing optimisation: Historical quote data with outcomes (won/lost, margin achieved) enables analysis of where pricing rules are too aggressive or too conservative.
  • Design knowledge extraction: Historical CAD files from experienced engineers can be analysed to extract informal design rules — the undocumented preferences and patterns that senior engineers apply consistently but have never written down.
  • Failure mode correlation: If field failure data is linked to design parameters, AI analysis can identify which configuration combinations have elevated failure rates — informing rule updates that prevent those combinations from being generated.
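The "always selected together" part of demand pattern analysis reduces to co-occurrence counting. A minimal sketch over a toy order history (all option names invented):

```python
from collections import Counter
from itertools import combinations

# Toy order history: each order is the set of options the customer selected.
orders = [
    {"HD bearing", "steel housing", "IP65 seal"},
    {"HD bearing", "steel housing"},
    {"standard bearing", "aluminium housing"},
    {"HD bearing", "steel housing", "IP65 seal"},
]

option_counts = Counter(opt for order in orders for opt in order)
pair_counts = Counter(
    pair for order in orders for pair in combinations(sorted(order), 2)
)

# Options that always appear together: the pair's count equals each
# option's own count, so neither ever appears without the other.
always_together = [
    (a, b) for (a, b), n in pair_counts.items()
    if n == option_counts[a] == option_counts[b]
]
print(always_together)
```

Pairs that always co-occur are candidates for merging into a single option or bundle, which removes rules and choices from the configurator.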

What AI Cannot Do in Engineering Today

It's equally important to be clear about where AI falls short in engineering contexts, particularly because vendor marketing in this area is aggressively optimistic.

  • Structural calculations: LLMs cannot reliably perform structural analysis. They produce plausible-sounding numbers that may be completely wrong. All structural calculations in engineering automation systems must use validated calculation engines — not AI-generated values.
  • Autonomous design decisions: AI cannot decide what a product should look like, what it should be made of, or how it should perform. These decisions require engineering judgment rooted in physical understanding that AI systems do not have.
  • Hallucination-free output: LLMs generate confident-sounding incorrect information. In engineering contexts — where an incorrect dimension or an incorrect material specification has safety implications — this is disqualifying for direct output generation. AI output in engineering must be validated by engineering expertise.
  • Regulatory compliance determination: AI cannot reliably determine whether a design meets a specific code or standard. Compliance checking requires exact interpretation of regulatory documents — something LLMs handle inconsistently.
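The contrast on structural calculations is worth making concrete. A validated calculation engine returns the same checkable number every time. For example, the standard Euler-Bernoulli closed form for an end-loaded cantilever, with illustrative inputs:

```python
def cantilever_tip_deflection_mm(force_N, length_mm, E_MPa, I_mm4):
    """Tip deflection of an end-loaded cantilever: delta = F * L^3 / (3 * E * I).

    A deterministic closed-form result from beam theory. This is the kind
    of validated, reproducible calculation that must back any structural
    number an automation system emits; an LLM's plausible-sounding figure
    is not a substitute.
    """
    return force_N * length_mm**3 / (3 * E_MPa * I_mm4)

# Steel cantilever: F = 500 N, L = 300 mm, E = 210,000 MPa, I = 8,000 mm^4
delta = cantilever_tip_deflection_mm(500, 300, 210_000, 8_000)
print(round(delta, 2))  # 2.68 (mm)
```

The formula is textbook; the inputs are made up. The point is architectural: the automation system calls a function like this, and any AI involvement is limited to drafting the rules that decide when to call it.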

A Practical Starting Point

For engineering teams looking to introduce AI tools without over-investing in capabilities that aren't ready, the most practical starting points are documentation and rule drafting assistance.

These applications have the best risk profile: the AI accelerates a task that a human is doing anyway, the human reviews and validates the AI output before it's used, and the failure mode (an incorrect draft rule that needs revision) is manageable. The value is real and measurable; the risk is contained.

From there, AI-assisted validation of automated outputs is a natural second step — again, one where a human reviews AI-flagged anomalies rather than trusting AI conclusions directly. These two applications deliver practical value today while your team builds the understanding and data infrastructure to exploit more advanced AI capabilities as they mature.