Posting: # 1551
We recently purchased SAS software. How do we validate it, and what are the USFDA's requirements for SAS validation? Is there any material you could suggest?
Posting: # 1648
I am looking for the same information. Please let me know when you find anything.
Posting: # 1927
Validation of software is essentially the IQ/OQ of the software. For more information use the link below.
You can find the appropriate SAS IQ/OQ SOP on the website.
To generate the SAS OQ report, use commands at the DOS prompt.
Posting: # 1928
» Validation of software is nothing but the IQ/OQ of the software.
Installation qualification essentially ensures that all parts of any software have been installed on the target host as intended – nothing more. If the software contains a bug giving 2×2 as 5, IQ will not detect it.
In the strictest sense it would mean that a validation concept is developed in parallel to the software itself, from the lowest coding level up to the user interface… This goes far beyond the common software life cycle.
Therefore – against all claims – even the ‘oldest’ industry-standard software is not validated. I don’t believe that Carl Metzler worried about validation back in the 1960s when he sat down in Kalamazoo and started coding the first lines of NONLIN.
I would expect the same for the core routines of SAS from 1976 – I don’t think that the SAS Institute trashed the code and started from scratch again when the FDA’s ‘Blue Book’ was published in 1983.
I give you the FDA’s definition:
Software validation is a part of the design validation for a finished device, but is not separately defined in the Quality System regulation. For purposes of this guidance, FDA considers software validation to be “confirmation by examination and provision of objective evidence that software specifications conform to user needs and intended uses, and that the particular requirements implemented through software can be consistently fulfilled.” In practice, software validation activities may occur both during, as well as at the end of the software development life cycle to ensure that all requirements have been fulfilled. Since software is usually part of a larger hardware system, the validation of software typically includes evidence that all software requirements have been implemented correctly and completely and are traceable to system requirements. A conclusion that software is validated is highly dependent upon comprehensive software testing, inspections, analyses, and other verification tasks performed at each stage of the software development life cycle. Testing of device software functionality in a simulated use environment, and user site testing are typically included as components of an overall design validation program for a software automated device.
In the pharmaceutical industry hardly any software is white-box validated (from the bottom up); it is black-box validated (from the top down): you feed data sets to the system and assess the output – which is ‘known’ from somewhere else. In such an approach you can only try to challenge the software at its boundaries (real numbers instead of integers, text instead of numbers, negative numbers to catch square-root errors, zero input to catch division errors, missing values, an extreme numeric range challenging the optimizer in mixed models, ‘flat’ input leading to local minima instead of the global one, …) – but you never can be sure. Very helpful are the Statistical Reference Datasets (StRD) offered by the National Institute of Standards and Technology (NIST), or the Data Generators at the UK National Physical Laboratory.
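To make the black-box idea concrete, here is a minimal sketch (Python used purely for illustration – the same approach applies to SAS or any other package): a routine under test is fed a NIST StRD dataset with certified values (NumAcc1: certified mean 10 000 002, certified standard deviation 1) plus a couple of the boundary challenges mentioned above. The tolerances are illustrative, not an official acceptance criterion.

```python
# Black-box validation sketch: compare a routine's output against
# independently certified reference values (NIST StRD 'NumAcc1').
import math

def sample_mean_sd(values):
    """Routine under test: sample mean and sample standard deviation."""
    n = len(values)
    if n < 2:
        raise ValueError("need at least two observations")
    mean = sum(values) / n
    var = sum((x - mean) ** 2 for x in values) / (n - 1)
    return mean, math.sqrt(var)

# NumAcc1: large offset with a tiny spread -- a classic numerical-
# accuracy challenge for naive one-pass algorithms.
numacc1 = [10000001.0, 10000003.0, 10000002.0]
mean, sd = sample_mean_sd(numacc1)
assert abs(mean - 10000002.0) < 1e-9   # certified mean
assert abs(sd - 1.0) < 1e-9            # certified standard deviation

# Boundary challenges from the text: empty and too-short input must be
# rejected, not silently produce a number.
for bad in ([], [1.0]):
    try:
        sample_mean_sd(bad)
        raise AssertionError("boundary case not caught")
    except ValueError:
        pass

print("black-box checks passed")
```

The point is that the reference values come from outside the system under test; passing such checks builds confidence at the boundaries you thought of, but – as stated above – it never proves the absence of errors elsewhere.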
Serious white-box validation is performed in, e.g., the aerospace and automotive industries and, of course, in the military sector…
To give you an idea of almost error-free software:
A friend of mine works as a software engineer on collision-prevention systems for high-speed trains. Two teams work in parallel and independently, each developing its own version of the software and tools to validate every level. They started from scratch and keep working until the currently lowest feasible defect level of 1:10⁵ is reached. Lower defect levels are not feasible any more because the effort for validation would exceed the development costs of the software itself. At certain milestones a supervisor compares the results of both teams but intervenes only if they are using similar concepts to solve a problem. This guarantees that in the end there are two pieces of validated software working at the same error level but with entirely different algorithms and routines. In the locomotive both systems run in parallel (even on different hardware!) and both are ‘authorized’ to stop the train. If they arrive at different ‘decisions’, the one opting to stop the train prevails. Therefore, the overall error rate is expected to be 1:10¹⁰ (or 1:5×10¹¹ if both locomotives are using it)!
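The fail-safe voting rule described above can be sketched in a few lines (a toy illustration, not the actual system): two independently developed channels each reach a decision, and any disagreement resolves toward the safe state.

```python
# Toy sketch of a two-channel fail-safe voter: disagreement always
# resolves toward the safe state (stopping the train).
from enum import Enum

class Decision(Enum):
    CONTINUE = 0
    STOP = 1

def vote(channel_a: Decision, channel_b: Decision) -> Decision:
    # Fail-safe rule: if either independently developed channel
    # opts to stop, the train stops.
    if Decision.STOP in (channel_a, channel_b):
        return Decision.STOP
    return Decision.CONTINUE

# A dangerous miss requires BOTH channels to fail simultaneously;
# with independent channels at 1:10^5 each, the joint rate is 1:10^10.
assert vote(Decision.CONTINUE, Decision.STOP) is Decision.STOP
```

The multiplication of error rates only holds because the two channels were developed independently with different algorithms – common-mode failures would break the independence assumption.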
I would suggest going through the linked documents and implementing them as far as feasible; most inspectors I know will judge a piece of software with a large user base differently from homebrew. But any kind of push-the-button-install-and-qualify-to-use-validation offered by the software vendor is definitely not enough!
The SAS SOP you suggested is no better than the one discussed in this post for WinNonlin’s ‘Validation Kit’. A vendor comes up with undocumented software and an undocumented test system. Then you are allowed to click some buttons, or execute some commands – and everybody is happy (through believing). BTW, I’m using the term ‘undocumented’ in the sense of ‘proprietary, not accessible code’.
I would recommend these references:
Edit: Links corrected for the FDA’s new site structure. [Helmut]
The quality of responses received is directly proportional to the quality of the question asked. 🚮
Posting: # 1997
There is no end to discussions on the validation of computer systems. SAS is a very large system, and its validation requires a great deal of time and effort. Yes, as HS said, there are buttons which will give you the IQ/OQ aspects after you install SAS on your computer. For PQ, the wise thing is to do it according to the task requirements: if you are using SAS for a particular task, it is better to validate that part and use it. If you are using SAS for ANOVA and CI calculations, it is easy to validate SAS for this process. As I mentioned earlier, SAS is very big; it calculates means and also manages credit-card transactions in banks. It's up to you what you want to do with SAS. Remember that SAS is off-the-shelf software and needs a different approach to validation.

This is one area where there is confusion in people's minds: confusion between 21 CFR Part 11 and the validation of computer systems for clinical use. The two are not the same, but they are inter-related.
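A task-scoped PQ along the lines suggested above might look like this sketch (Python for illustration only; the dataset, the tolerances, and the hand-calculated expected values are my own assumptions, not from any official validation kit): instead of validating the whole package, you check the one procedure you actually use – here a 90% confidence interval for a mean – against values worked out independently by hand.

```python
# Task-scoped PQ sketch: verify a 90% CI for a mean against an
# independent hand calculation. All numbers below are illustrative.
import math
import statistics

data = [98.0, 102.0, 101.0, 99.0, 100.0]            # n = 5, hand-checkable
mean = statistics.mean(data)                         # 100.0 by hand
se = statistics.stdev(data) / math.sqrt(len(data))   # s = sqrt(2.5), se ~ 0.7071
t_90 = 2.1318                                        # t(0.95, df = 4) from tables
ci = (mean - t_90 * se, mean + t_90 * se)

# PQ acceptance criterion: agreement with the independent hand
# calculation (98.493, 101.507) within a stated tolerance.
assert abs(mean - 100.0) < 1e-12
assert abs(ci[0] - 98.4926) < 1e-3
assert abs(ci[1] - 101.5074) < 1e-3
print("task-scoped PQ passed")
```

The design choice here mirrors the post: the scope of validation follows the scope of use, and the reference values come from outside the system under test.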