## Validation of software [Software]

Dear kalesushil!

» Validation of software is nothing but the IQ/OQ of the software.
                                ^^^^^^^
                                Oh no, it isn’t!

Installation qualification (IQ) essentially ensures that all parts of the software have been installed on the target host as intended – nothing more. If the software contains a bug giving 2×2 as 5, IQ will not detect it.
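To make the point concrete, here is a toy sketch in Python (my own illustration, not from any vendor’s kit): an IQ-style checksum comparison happily passes on a correctly *installed* file, while only a functional (OQ/PQ-style) test exercising the code catches the deliberately planted 2×2 = 5 bug.

```python
import hashlib

# A buggy 'multiply' as shipped by a hypothetical vendor -- gives 2×2 = 5
buggy_source = "def multiply(a, b):\n    return a * b + (1 if a == b == 2 else 0)\n"

# IQ: verify the installed file matches the vendor's checksum -- it does!
vendor_checksum = hashlib.sha256(buggy_source.encode()).hexdigest()
installed_checksum = hashlib.sha256(buggy_source.encode()).hexdigest()
assert installed_checksum == vendor_checksum   # IQ passes; the bug is invisible

# Functional testing: actually run the routine against a known result
namespace = {}
exec(buggy_source, namespace)
print(namespace["multiply"](2, 2))   # → 5, not 4; only execution reveals it
```

The checksum proves the installation is faithful to what the vendor shipped – including the vendor’s bugs.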
In the strictest sense it would mean that a validation concept is developed in parallel to the software itself, from the lowest coding level up to the user interface… This goes far beyond the common software life cycle.
Therefore – against all claims – even the ‘oldest’ industry-standard software is not validated. I don’t believe that Carl Metzler worried about validation back in the 60s of the last century when he sat down in Kalamazoo and started coding the first lines of NONLIN in FORTRAN 66. I guess the core was not touched again since then, but carefully ported from one version to the next, transcompiled into cryptic C (C++, M\$ .NET Framework?), etc.
I would expect the same for the core routines of SAS from 1976 – I don’t think that the SAS Institute trashed the code and started from scratch again when the FDA’s ‘Blue Book’ was published in 1983.

I give you the FDA’s definition:

Software validation is a part of the design validation for a finished device, but is not separately defined in the Quality System regulation. For purposes of this guidance, FDA considers software validation to be “confirmation by examination and provision of objective evidence that software specifications conform to user needs and intended uses, and that the particular requirements implemented through software can be consistently fulfilled.” In practice, software validation activities may occur both during, as well as at the end of the software development life cycle to ensure that all requirements have been fulfilled. Since software is usually part of a larger hardware system, the validation of software typically includes evidence that all software requirements have been implemented correctly and completely and are traceable to system requirements. A conclusion that software is validated is highly dependent upon comprehensive software testing, inspections, analyses, and other verification tasks performed at each stage of the software development life cycle. Testing of device software functionality in a simulated use environment, and user site testing are typically included as components of an overall design validation program for a software automated device.

In the pharmaceutical industry hardly any software is white-box validated (from the bottom up); instead it is black-box validated (from the top down): you feed data sets to the system and assess the output – which is ‘known’ from somewhere else. In such an approach you can only try to challenge the software at its boundaries (real numbers instead of integers, text instead of numbers, negative numbers to catch square-root errors, zero input to catch division errors, missing values, extreme numeric ranges challenging the optimizer in mixed models, ‘flat’ input leading to local minima instead of the global one, …) – but you can never be sure. Very helpful are the Statistical Reference Datasets (StRD) offered by the National Institute of Standards and Technology (NIST), or the Data Generators at the UK National Physical Laboratory.
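A minimal sketch of such boundary challenges in Python (my own toy example – the `geometric_mean` routine stands in for any black-box system under test, and the expected value 4.0 plays the role of a reference value ‘known from somewhere else’):

```python
import math

def geometric_mean(values):
    # the 'system under test' -- a stand-in for any black-box routine
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Reference-value check against an output known from elsewhere:
assert abs(geometric_mean([2, 8]) - 4.0) < 1e-12

def challenge(func, data):
    # feed a boundary case and record whether/how the black box fails
    try:
        return ("ok", func(data))
    except Exception as e:
        return ("error", type(e).__name__)

print(challenge(geometric_mean, [1.0, 2.0, 3.0]))  # normal input → ok
print(challenge(geometric_mean, [0.0, 1.0]))       # zero input → ValueError
print(challenge(geometric_mean, [-1.0, 2.0]))      # negative input → ValueError
print(challenge(geometric_mean, []))               # empty → ZeroDivisionError
print(challenge(geometric_mean, ["a", "b"]))       # text, not numbers → TypeError
```

Each challenge probes one boundary named above; passing all of them still proves nothing about inputs you did not think of – which is exactly the limitation of black-box validation.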

Serious white-box validation is performed in, e.g., the aerospace and automotive industries and, of course, in the military sector…
To give you an idea of almost error-free software:

A friend of mine works as a software engineer on collision-prevention systems for high-speed trains. There are two teams working in parallel and independently, each of them developing a different version of the software itself and tools to validate every level. They started from scratch, and are working until the current lowest defect level of 1:10⁵ is reached. Lower defect levels are not feasible any more because the efforts for validation would be higher than the development costs of the software itself. At certain milestones a supervisor compares the results of both teams but intervenes only if they are using similar concepts in solving a problem. By this it’s guaranteed that finally there will be two pieces of validated software working at the same error level but with entirely different algorithms and routines. In the locomotive both systems will be running in parallel (even on different hardware!) and both will be ‘authorized’ to stop the train. If they arrive at different ‘decisions’ the one opting to stop the train prevails. Therefore, the overall error rate is expected to be 1:10¹⁰ (or 1:5×10¹¹ if both locomotives are using it)!

I would suggest going through the linked documents and implementing them as far as feasible; most inspectors I know will judge a piece of software with a large user base differently from homebrew. But any kind of push-the-button-install-and-qualify-to-use-validation offered by the software vendor is definitely not enough!
The SAS SOP you suggested is no better than the one given in this post for WinNonlin’s ‘Validation Kit’. A vendor comes up with undocumented software and an undocumented test system. Then you are allowed to click some buttons, or execute some commands – and everybody is happy (by believing). BTW, I’m using the term ‘undocumented’ in the sense of ‘proprietary, not accessible code’.

I would recommend these references:

Edit: Links corrected for the FDA’s new site structure. [Helmut]

Dif-tor heh smusma 🖖
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes