Validation Scope and Known Limitations
This page exists so that the validation section does not claim more than the project has actually demonstrated.
What the current validation section supports reasonably well
From the present test suite, the strongest documented confidence is in:
- linear elastic beam response for standard benchmark cases
- reaction recovery and force equilibrium
- internal force recovery for simple beam cases
- basic rigid-jointed frame behaviour under lateral and vertical loading
- parts of the Eurocode evaluation pipeline for simple members and small frames
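As an illustration of what "benchmark case" means here, a deflection check of this kind compares the solver's result against a closed-form value with a stated tolerance. The sketch below uses the textbook mid-span deflection formula delta = P*L^3 / (48*E*I); the numeric values and the tolerance are illustrative, not taken from the Armatura test suite.

```python
def midspan_deflection(P: float, L: float, E: float, I: float) -> float:
    """Analytic mid-span deflection (m) of a simply supported beam
    under a central point load: delta = P*L^3 / (48*E*I)."""
    return P * L**3 / (48.0 * E * I)

# Illustrative steel beam: 10 kN at mid-span of a 6 m span.
P = 10e3      # N
L = 6.0       # m
E = 210e9     # Pa (steel)
I = 8.356e-5  # m^4 (roughly an IPE 300, strong axis)

expected = midspan_deflection(P, L, E, I)
solver_result = expected  # stand-in for the FE solver's prediction

# A validation page would state the tolerance explicitly:
assert abs(solver_result - expected) <= 1e-6 * abs(expected)
print(f"mid-span deflection: {expected * 1e3:.3f} mm")
```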
What is only partially covered
Some areas have tests, but are not yet documented as broad validation evidence:
- section orientation and orientation-vector behaviour
- end releases and mechanism detection
- reciprocity / symmetry checks
- energy consistency checks
- second-order sensitivity
- wind loading and national annex wiring
These are important, but they need their own benchmark pages before they should be cited as a documented validation set.
What should not yet be claimed
The docs should avoid implying that Armatura has already been comprehensively validated for:
- geometric nonlinearity
- P-Δ / P-δ production workflows
- shell, plate, or solid behaviour
- large benchmark suites against commercial analysis packages
- broad design-code verification across many section families and load cases
That may come later, but it is not what the current public validation pages establish.
Recommended next additions
The next strongest validation pages to add from the existing tests would be:
- a simply supported beam with a center point load page, from SimplySupportedBeam_CenterPointLoad_CorrectDeflection
- a column compression and buckling page, from Column_AxialCompression_TriggersCompressionAndBuckling
- a beam-column interaction page, from BeamColumn_CombinedAxialAndBending_InteractionCheck
- a portal frame evaluation page, from PortalFrame_TwoColumns_OneBeam_AllMembersEvaluated
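For the column compression and buckling page, the natural reference value is the Euler critical load, P_cr = pi^2 * E * I / (K*L)^2. The sketch below is a minimal version of that check, assuming a pin-ended column (K = 1) and illustrative section properties; it does not reproduce the actual Column_AxialCompression_TriggersCompressionAndBuckling test.

```python
import math

def euler_critical_load(E: float, I: float, L: float, K: float = 1.0) -> float:
    """Euler critical load (N): P_cr = pi^2 * E * I / (K*L)^2."""
    return math.pi**2 * E * I / (K * L) ** 2

E = 210e9     # Pa (steel)
I = 3.692e-5  # m^4 (illustrative; weak-axis I of a mid-size H-section)
L = 4.0       # m, pin-ended, so K = 1

P_cr = euler_critical_load(E, I, L)
print(f"Euler critical load: {P_cr / 1e3:.1f} kN")
```

A published page would pair this reference value with the solver's reported buckling factor and a stated tolerance.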
Those would expand the validation story without inventing new benchmarks from scratch.
Honest reading of the current state
The current codebase already has enough tests to justify a real validation section.
What it does not yet have is a polished, complete benchmark handbook. The right next step is to publish a small number of strong cases with real numbers and real tolerances, then grow the section from there.