Missing values [Study Performance]

posted by Helmut – Vienna, Austria, 2008-07-31 16:36 – Posting: # 2122

¡Hola ElMaestro!

❝ But I think the current trend is towards more 'cook book' and less science or judgment.


I came across the term ‘cook book’ for the first time in Lisbon last year. :vomit:

To quote Les Benet:

Even though it’s applied science
we’re dealin’ with,
it still is – science!


❝ It might be difficult to make a recipe for this situation which is not based on some degree of subjectivity (subjectivity- sometimes disguised as 'common sense'- is something that can be tricky to handle for regulators as well as applicants, I think ). […]



When I started learning my first programming language in 1976 (haha, that’s a side note for the youngsters), my instructor told us the story of the ‘apple pie algorithm’ (well known amongst IT people):
The task is to write an algorithm (the ‘apple pie recipe’) which enables anybody (not only cooks) to come up with an apple pie tasting like the ones your granny always baked. At first you think it’s simple, but digging deeper into the problem you realize that you are assuming too much (temperature, duration – OK, but which kind of stove, etc.).
Moral of the story: cookbooks sell well, with nice pictures in them, but:
  1. A kitchen chef doesn’t need them,
  2. an amateur cook will most likely produce a mediocre meal.
If regulators insist on ‘cook books’ I will have to quit my job,
or consider becoming a cook. :crying:

❝ However, I could also imagine regulators thinking "Plan the study well, make sure it is robust towards 'eventualities'. If the loss of two samples (or whatever) is remotely/reasonably possible then of course a study should be planned so that such an event does not screw it all up. If a study is planned well it will not be necessary to fiddle around with data." or something along those lines.


Yesterday’s example is a realistic one. Consider an IR formulation of a drug with a short half-life and very low variability (CVintra 7%). If the study is planned for the minimum sample size (12 in many regulations) plus two reserves allowing for drop-outs (which actually did not happen), you are in the nasty situation of an overpowered study anyhow (the point-estimate-not-within-100% business is likely). Even if you have a lot of sampling points around tmax, missing data will kill you (I’m referring to an example where, due to only two lost samples in one subject, the CV increased to 22%). How to make such a study ‘robust’? Run it in 24 subjects – which will raise ethical questions and, if ‘nothing happens’, will result in a CI like 93–97% (haha). Or follow Martin’s suggestion and exclude a subject if a single value is missing…
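As a rough illustration of the ‘overpowered’ point, here is a minimal sketch – assuming a balanced 2×2 crossover and the usual log-normal model, not anything taken from the study itself: the intra-subject variance on the log scale is ln(CV²+1), the standard error of the treatment difference is √(2·MSE/n), and the 90% CI is PE·exp(±t(0.95, n−2)·SE). The function name ci_2x2 and the point estimate of 0.95 are purely illustrative assumptions; CV 7%, 14 subjects (12 + 2 reserves) and 24 subjects are the figures from the paragraph above.

from math import log, exp, sqrt
from scipy import stats

def ci_2x2(pe: float, cv: float, n: int, alpha: float = 0.05) -> tuple:
    """90% CI for the T/R ratio in a balanced 2x2 crossover (sketch).

    pe : point estimate of the T/R ratio (illustrative, e.g. 0.95)
    cv : intra-subject CV (e.g. 0.07 for 7%)
    n  : total number of subjects (assumed balanced, n/2 per sequence)
    """
    mse = log(cv**2 + 1)                  # intra-subject variance on the log scale
    se = sqrt(2 * mse / n)                # SE of the treatment difference
    t = stats.t.ppf(1 - alpha, df=n - 2)  # two one-sided 5% -> 90% CI
    return exp(log(pe) - t * se), exp(log(pe) + t * se)

# CV 7%, assumed point estimate 95%: minimum sample size vs. 24 subjects
for n in (14, 24):
    lo, hi = ci_2x2(pe=0.95, cv=0.07, n=n)
    print(f"n = {n:2d}: 90% CI {lo*100:.1f}% - {hi*100:.1f}%")

With these illustrative inputs the CI is already narrow at the minimum sample size and narrower still with 24 subjects – which is exactly the situation described above.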

The second example is the never-ending story of omeprazole, single dose, fed state. Recently I saw one of these 100+ subject studies where 40 (!) blood samples (OK, just 1 mL each) were drawn within 24 hours. The sponsor tried to make the study ‘robust’ (i.e., protected against the high variability in Cmax caused mainly by variability in gastric emptying) – unfortunately not robust enough. Maybe next time they will go for even more samples…

❝ N'est-ce pas?


[image: This is not a painting.]


Helmut Schütz
