Statistical Comparative Effectiveness Research (CER): Closing the Gaps in the Consideration of Observational Evidence
Program Code: 391
Date: Wednesday, June 27, 2012
Time: 3:30 PM to 5:00 PM EST
CHAIR:
Dr. Joan Buenconsejo is currently Team Lead supporting the Division of Pulmonary, Allergy and Rheumatology Products at CDER. She has been involved in drug and biologics review for more than 7 years and has also been involved in developing new statistical approaches to handling missing data.
PRESENTER(S):
Dr. LaRee Tracy is a statistical team leader in the Office of Biostatistics, OTS, CDER, FDA. She leads a team of statisticians focused on quantitative safety and pharmacoepidemiology reviews and research. She has been at FDA for 10 years and has been involved in drug development for 17 years.
Douglas Faries is Senior Research Advisor at Eli Lilly, where he has been employed for the past 22 years. He received his Ph.D. in Statistics from Oklahoma State University. He has published extensively in the area of methodology and applications of comparative observational research.
Cynthia Girman, DrPH, Senior Director of Epidemiology, has more than 30 years of experience at Merck, including developing and validating endpoints for clinical trials, epidemiological studies, and comparative effectiveness research. She has published more than 200 scientific articles and is an adjunct Associate Professor in Epidemiology at UNC.
Description
As health care costs continue to rise and new diagnostic and treatment alternatives become available, it is natural to ask which approaches work best. Comparative effectiveness research (CER), also called patient-centered outcomes research, aims to address this question. Ideally, randomized trials could be conducted to formally test superiority or non-inferiority of alternative treatments. However, such trials are not always feasible or ethical; they can be very large, lengthy, and costly; and it is not clear who should be responsible for funding and conducting them. Many of these trials also do not reflect real-world conditions of product use for either patients or health care providers. Interested groups are therefore turning more often to observational data to address relative effectiveness questions. Unlike the primary analysis of a well-designed randomized trial, the operating characteristics of even a similarly well-designed observational study are not known. Randomized pragmatic trials with limited exclusion criteria, no blinding, and observational follow-up can be an alternative, but they remain subject to biases. This session focuses on ways to quantify and improve the reliability and validity of observational evidence generated by CER.
Learning Objectives:
Describe the FDA Partnership in Applied Comparative Effectiveness Science (PACES) Initiative
Review the methodological challenges associated with comparative effectiveness analyses using observational research
Explain guidance for proper interpretation of the strength of evidence from such data, based on quality research principles and in the context of accumulated evidence
Recognize and design higher-quality comparative effectiveness analyses using observational data