The hardest part of validating novel technology isn’t the testing itself. It’s deciding what to test.
With established technology, validation methodology is largely inherited. You know what failures look like because they’ve happened before. Standards exist because someone already figured out the edge cases. The test plan is mostly a matter of selecting the right clauses from the right documents and demonstrating compliance.
First-of-a-kind technology doesn’t work that way. When I was developing test infrastructure for a utility-scale flow battery system, there was no industry standard to cite, no failure mode library to draw from, and no adjacent technology close enough that its methodology translated cleanly. The validation framework had to be built from first principles — which meant first figuring out what “validated” actually meant for this specific system.
Start With What Failure Looks Like
The most useful first question isn’t “what do we need to test?” It’s “what does this system need to not do?”
This sounds like a small distinction, but it changes the shape of the problem considerably. Requirements are often written in terms of what a system should do — performance targets, efficiency thresholds, operational ranges. But validation is fundamentally about bounding risk, which means understanding failure modes before designing tests to detect them.
For each subsystem, work through the failure taxonomy: what can fail, how it would fail, what observable signal that failure produces, and at what point in the product lifecycle that failure is most likely to manifest. Some failures are immediate and obvious. Others are latent — they develop over time or under specific operating conditions that don’t appear in short-duration bench testing.
The output of this exercise isn’t a test plan. It’s a risk map. The test plan comes from the risk map.
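To make that concrete, a risk map can be as lightweight as one structured record per failure mode. The sketch below is hypothetical, not from any specific program; the field names and the flow battery entries are invented purely to illustrate the taxonomy described above.

```python
from dataclasses import dataclass
from enum import Enum

class LifecyclePhase(Enum):
    COMMISSIONING = "commissioning"
    EARLY_OPERATION = "early operation"
    LONG_TERM = "long-term operation"

@dataclass
class FailureMode:
    subsystem: str                # which subsystem can fail
    mechanism: str                # how it would fail
    observable_signal: str        # what measurable signal the failure produces
    likely_phase: LifecyclePhase  # when in the lifecycle it is most likely to appear
    latent: bool                  # True if it develops slowly or only under specific conditions

# Hypothetical entries for a flow battery program, illustrative only.
risk_map = [
    FailureMode(
        subsystem="electrolyte pump",
        mechanism="seal degradation leading to leakage",
        observable_signal="flow rate drift at constant pump speed",
        likely_phase=LifecyclePhase.LONG_TERM,
        latent=True,
    ),
    FailureMode(
        subsystem="stack",
        mechanism="membrane crossover",
        observable_signal="capacity fade above the expected baseline rate",
        likely_phase=LifecyclePhase.EARLY_OPERATION,
        latent=True,
    ),
]

# Latent modes are the ones most likely to need long-duration testing or field monitoring.
latent_modes = [fm for fm in risk_map if fm.latent]
```

Deriving the test plan then becomes a matter of walking the map rather than brainstorming tests from scratch.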
Separate Engineering Questions From Compliance Questions
In regulated industries, there's constant pressure to conflate these two categories, and giving in to that pressure creates significant waste.
Engineering questions are about understanding system behavior: How does this parameter affect that output? What’s the actual failure threshold, not the conservative specification limit? How does performance degrade under stress?
Compliance questions are about demonstrating that the system meets requirements: Does this unit pass the spec? Can we show the regulator that we’ve characterized this failure mode?
These require different test designs. Engineering tests should be designed to generate information — often that means testing to failure, exploring parameter space, and running experiments that might not produce passing results. Compliance tests should be designed to generate evidence — they need to be repeatable, documented, and defensible.
Running compliance-style tests to answer engineering questions is expensive and slow. Running engineering-style tests to generate compliance evidence creates documentation headaches and sometimes produces results that are difficult to interpret in a regulatory context. Keep them separate from the beginning.
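One lightweight way to keep the separation from day one is to tag every test with its purpose and enforce the different expectations at the point of entry. The sketch below is illustrative only; names like TestPurpose and TestRecord are invented, and any real program would carry far more metadata.

```python
from dataclasses import dataclass, field
from enum import Enum

class TestPurpose(Enum):
    ENGINEERING = "engineering"  # generate information; may run to failure, may not "pass"
    COMPLIANCE = "compliance"    # generate evidence; repeatable, documented, defensible

@dataclass
class TestRecord:
    test_id: str
    purpose: TestPurpose
    question: str                           # the engineering question or requirement clause addressed
    pass_fail_criteria: str | None = None   # required for compliance tests only
    notes: list[str] = field(default_factory=list)

def check_record(record: TestRecord) -> None:
    """Compliance tests need explicit, defensible pass/fail criteria up front;
    engineering tests are allowed to be open-ended."""
    if record.purpose is TestPurpose.COMPLIANCE and not record.pass_fail_criteria:
        raise ValueError(f"{record.test_id}: compliance test is missing pass/fail criteria")
```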
Build the Methodology Iteratively, Not Upfront
The instinct when facing novel technology is to specify the complete validation methodology before testing begins. Resist this.
Early in the program, you don’t know enough about the technology to write a methodology that won’t need to change. Writing it too early means either writing something so general it provides no guidance, or writing something specific enough to be useful but wrong in ways you’ll have to unwind later.
A better approach: specify the methodology one phase ahead of where you are. Enough rigor to guide current work, enough flexibility to incorporate what you’re learning. The methodology is a living document that gets more precise as the technology gets better understood — not a contract signed before the first test.
This requires discipline to resist organizational pressure for premature certainty. Stakeholders want to know the validation plan. The honest answer early in a novel technology program is that the plan is being developed in parallel with the understanding of the system, and that this is correct engineering practice, not a planning failure.
Document Decisions, Not Just Results
Standard test documentation captures what was tested and what the results were. For novel technology, this isn’t sufficient — you also need to document why the test was designed the way it was, what alternative designs were considered, and what the test was and was not intended to demonstrate.
This documentation becomes critical when questions arise later. With established technology, the rationale for test design is often implicit — everyone knows why you test a motor at rated load. With novel technology, the rationale is specific to the program and lives in the heads of the engineers who made the decisions. When those engineers move on, or when a customer asks why a particular test condition was chosen, the rationale needs to exist somewhere other than institutional memory.
It also protects against scope creep. When the methodology is documented at the decision level, it’s much easier to evaluate whether a proposed test addition is filling a genuine gap or duplicating coverage that already exists.
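A decision-level record does not require heavyweight tooling. Something as plain as the structure below, kept alongside the test procedures, captures the rationale, the alternatives considered, and the intended scope. The fields and the example entry are hypothetical, a minimal sketch of what documenting at the decision level can look like.

```python
from dataclasses import dataclass

@dataclass
class TestDesignDecision:
    test_id: str
    rationale: str             # why the test was designed this way
    alternatives: list[str]    # designs considered and rejected, with reasons
    demonstrates: str          # what the test is intended to show
    does_not_demonstrate: str  # explicit scope limits, to prevent over-claiming later

# Hypothetical example entry, illustrative only.
decision = TestDesignDecision(
    test_id="THERM-07",
    rationale="Bounds worst-case heat rejection at maximum rated ambient temperature",
    alternatives=[
        "Full seasonal soak test (rejected: schedule)",
        "Thermal model only (rejected: model itself unvalidated)",
    ],
    demonstrates="Thermal margin at rated power and 45 C ambient",
    does_not_demonstrate="Long-term degradation of thermal interface materials",
)
```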
Accept That Some Validation Is Impossible Before Deployment
For long-lifetime products, some failure modes simply cannot be validated before first deployment. Accelerated life testing can compress time, but it introduces assumptions about the equivalence of accelerated and real-world stress that are themselves unvalidated for novel technology.
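To make that assumption concrete: temperature-accelerated testing typically leans on an Arrhenius-type model, and the acceleration factor it yields is only as good as the activation energy assumed for the dominant degradation mechanism. The sketch below shows the standard calculation; the 0.7 eV value is a placeholder, and for a novel chemistry the model form itself is one of the unvalidated assumptions.

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_acceleration_factor(t_use_c: float, t_stress_c: float,
                                  activation_energy_ev: float) -> float:
    """Standard Arrhenius acceleration factor between use and stress temperatures.

    The result is only meaningful if the dominant degradation mechanism actually
    follows Arrhenius kinetics with the assumed activation energy, which is
    precisely what is unproven for a first-of-a-kind system.
    """
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((activation_energy_ev / BOLTZMANN_EV)
                    * (1.0 / t_use_k - 1.0 / t_stress_k))

# Placeholder numbers: 0.7 eV is a commonly assumed value, not a measured one.
af = arrhenius_acceleration_factor(t_use_c=25.0, t_stress_c=60.0,
                                    activation_energy_ev=0.7)
# One year of accelerated testing then "represents" af years of field life,
# but only under the assumed model.
```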
The honest engineering answer is to identify which failure modes are covered by pre-deployment validation, which are monitored through field instrumentation, and which are accepted as residual risk with a plan for response if they manifest. This is not a comfortable answer to give to customers or executives, but it’s more defensible than claiming complete validation coverage that doesn’t actually exist.
Building the monitoring infrastructure to detect early-stage manifestations of unvalidatable failure modes is often a more productive investment than designing tests that cannot actually show what they are claimed to demonstrate.
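That instrumentation does not have to be sophisticated to be useful. A baseline established at commissioning plus a drift threshold on the observable signals already identified in the risk map catches many early-stage manifestations. The class below is a hypothetical sketch assuming a simple rolling-average comparison, not a description of any particular monitoring stack.

```python
from collections import deque
from statistics import mean

class DriftMonitor:
    """Flags when a monitored signal drifts away from its commissioning baseline.

    Deliberately simple: compare a short rolling average against a fixed baseline.
    The thresholds here are placeholders; real limits come from the risk map and
    engineering review, not from this sketch.
    """

    def __init__(self, baseline: float, relative_threshold: float, window: int = 24):
        self.baseline = baseline
        self.relative_threshold = relative_threshold
        self.samples: deque[float] = deque(maxlen=window)

    def update(self, value: float) -> bool:
        """Add a sample; return True if drift exceeds the threshold."""
        self.samples.append(value)
        if len(self.samples) < self.samples.maxlen:
            return False  # not enough data yet for a stable rolling average
        drift = abs(mean(self.samples) - self.baseline) / abs(self.baseline)
        return drift > self.relative_threshold

# Example: watch round-trip efficiency for a 3% drift from its commissioning value.
monitor = DriftMonitor(baseline=0.75, relative_threshold=0.03)
```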
The through-line in all of this is epistemic honesty: being precise about what is known, what is unknown, and what the validation work actually demonstrates. Novel technology development has enough genuine uncertainty without adding false confidence from methodology that isn’t fit for purpose.