Evaluation is the critical examination of a program through the collection and analysis of information about its processes, characteristics, and results. Its purpose is to assess a program's effectiveness and to determine what improvements are needed to achieve the desired results. To evaluate is to monitor a program's progress and highlight both its achievements and its points for improvement. Evaluation is an integral part of Clinical Studies: through it, researchers can judge whether the data, or even the study itself, are reliable.
Clinical Trials are often viewed by the public as vested interests of private pharmaceutical or health companies, existing only to profit from the masses by virtue of being businesses. On the contrary, Clinical Trials are important and consequential precisely because the very businesses under public scrutiny are the ones the public relies on for their health needs. The significance of Clinical Studies became even more visible when the 2020 pandemic arrived; there is a reason this field is a multi-million dollar industry. Many processes must be carried out to ensure not only that investors profit but also that the welfare of consumers (the general public) is protected. So, to guarantee the efficacy and efficiency of new drugs, treatments, or devices, critical evaluation of Clinical Studies is needed.
Clinical Studies are, of course, conducted by experts in the field. A Clinical Trial team may include researchers, scientists, data analysts, information technology experts, and even doctors in specialized fields. However, no matter how noble or brilliant the roster, its members are still human and therefore prone to error. According to Berger and Alperson, in the article “A General Framework for the Evaluation of Clinical Trial Quality” (2009), “misleading evidence in the form of flawed clinical trials is quite troublesome to the public.” In the same paper, Berger and Alperson (2009) discussed how some clinical trials thrive (get funded and certified) despite flawed evaluation, which in turn leads to flawed replication of the study. Furthermore, they strongly emphasized that “flawed or misleading statements are worse than no evidence at all.” This dilemma is alarming and hazardous not only to the medical research community but also to public health, as poor studies often produce poor results. Ultimately, those poor results are consumed by the public and eventually degrade public health.
Questions for Evaluation
An individual attempting to read, review, or evaluate Clinical Literature or Studies might get lost and be overwhelmed by the technical terms, abbreviations, and statistics used constantly in such papers. It can feel like reading a foreign language one knows nothing about. This is especially likely if the individual is a casual reader without proper training or exposure in the medical research field. However, there are guidelines from the experts themselves on how evaluation can be done in a simple yet effective way.
In “A Primer for Evaluating Clinical Trials” (no date), Gary Lyman, MD, MPH, and Nicole Kuderer note that “knowledge of the principles of trial design and conduct is important to assess the validity of results.” Furthermore, they claim that “It is essential that clinician readers of the medical literature and investigators conducting clinical research, as well as reviewers and editors of medical journals, become familiar with the fundamentals of clinical research methods.” In the same paper, Lyman and Kuderer formulated five questions to guide researchers on how to evaluate a Clinical Trial effectively.
- Question Number 1: Are the Study Hypotheses Clearly Stated and Relevant?
- Formulating a clear clinical question or hypothesis is both challenging and critical. A clear statement of the hypothesis should define both the dependent and independent variables. The relevance and importance of the study should also be clearly stated.
- Question Number 2: Is the Study Population Adequately Described?
- The researchers must strike the proper balance between restricting subject eligibility to obtain a uniform group and minimizing eligibility restrictions so that the study results can be generalized.
- Question Number 3: Are the Observed Differences Due to Random Error or Chance?
- It is important to note that observed differences may be due to either random error or systematic error. Random errors arise from variation in biologic or measurement factors, whereas systematic errors arise from biases that can affect the trial. In the analysis, measures of precision should accompany any testing. Random error is evaluated through estimation and statistical testing.
- Question Number 4: Are the Observed Differences Due to Systematic Error?
- The most common types of bias in Clinical Trials are related to selection of participants, measurements of outcomes, and the modification of the true relationship between treatment and other factors.
- Question Number 5: Are the Observed Differences Modified by Other Factors?
- If interactions among variables are present, the researchers must either present separate models for each group or include a product (interaction) term in the model.
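To illustrate the estimation side of Question 3, here is a minimal Python sketch using entirely made-up measurements (the group values and the helper `diff_ci_95` are hypothetical, for illustration only). It computes a 95% confidence interval for the difference in group means using a normal approximation; a CI that excludes 0 suggests the observed difference is unlikely to be due to random error alone.

```python
import math
import statistics

def diff_ci_95(a, b):
    """95% CI for the difference in means of two samples,
    using a normal approximation (z = 1.96)."""
    diff = statistics.mean(a) - statistics.mean(b)
    # Standard error of the difference in means
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    return diff - 1.96 * se, diff + 1.96 * se

# Hypothetical outcome measurements for two study arms
treatment = [5.1, 4.8, 5.6, 5.0, 4.9, 5.3, 5.2, 4.7]
control   = [4.2, 4.5, 4.0, 4.6, 4.3, 4.1, 4.4, 4.2]

lo, hi = diff_ci_95(treatment, control)
significant = not (lo <= 0 <= hi)  # CI excluding 0 -> unlikely to be chance alone
print(f"95% CI for mean difference: ({lo:.2f}, {hi:.2f}); significant: {significant}")
```

In a real trial one would use an appropriate t-distribution and account for study design, but the sketch shows the core idea: present the estimate together with its measure of precision.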
Tips for Evaluating Clinical Studies
Clinical Studies are full of statistical data and empirical observation. Every bit of information is like a puzzle piece that, when put together with the others, forms a big picture. According to Jennifer Gershman, in the article “Five Tips for Evaluating Clinical Studies,” “each study or clinical trial of a drug tells a story, and it is up to us as pharmacists to use our skills just like a detective to critically evaluate and understand the behind-the-scenes statistics. Statistics are tools that evaluate data to provide important study results.” It is the job of the investigators or researchers to translate the data into digestible, understandable, and comprehensive information for the public to consume. Based on the same article, Gershman offers five helpful tips for evaluating Clinical Studies.
- Tip Number 1: Read Beyond the Abstract
- An abstract is a summarized, abridged version of the study. Reading the abstract is helpful, of course, because it is a time-efficient way to learn about the paper without reading the whole thing. However, an in-depth evaluation requires in-depth information, so researchers should also read the body of the study thoroughly.
- Tip Number 2: Determine Whether All Results Were Included
- There are two common ways to analyze results in a study: intention-to-treat and per-protocol analysis. Intention-to-treat analysis includes data from all randomized participants, whether or not they complied with or completed the treatment, because it mimics what actually happens in practice. Per-protocol analysis, on the other hand, excludes data from patients who did not comply with the protocol.
- Tip Number 3: Know the Most Common Types of Statistics
- As the famous saying goes, “numbers don’t lie,” and numbers are the medium through which studies present data as results. Producing those numbers involves statistical methods and processes. The two most basic and handy types of statistics are descriptive and inferential. Many statistical methods can be used in Clinical Study data analysis, but always remember that the goal of a trial is to determine whether the results are statistically significant.
- Tip Number 4: Observational versus Randomized Controlled Trial
- Randomized Controlled Trials aim to study the effectiveness of a medication or treatment through established endpoints and protocols. Observational studies, by contrast, can provide information about associations but cannot by themselves establish cause and effect.
- Tip Number 5: Odds Ratios and Confidence Intervals
- 95% confidence intervals (CIs) are generally used in studies. If the CI for a difference between two treatment groups includes 0, there is no statistically significant difference in efficacy; for a ratio measure such as an odds ratio, the corresponding null value is 1. Odds ratios are used to express the risk of an adverse effect: a value greater than 1 indicates an increased risk, while a value less than 1 indicates a lower risk.
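To make Tip 5 concrete, here is a minimal Python sketch that computes an odds ratio and its 95% confidence interval from a hypothetical 2x2 table of adverse-event counts (the counts and the helper `odds_ratio_ci` are illustrative assumptions, not data from any real trial). It uses Woolf's logit method; because an odds ratio is a ratio, its CI is checked against the null value 1.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table (Woolf's logit method):
    a = treated with event,   b = treated without event,
    c = untreated with event, d = untreated without event."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical adverse-event counts in two study arms
or_, lo, hi = odds_ratio_ci(a=20, b=80, c=10, d=90)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# A CI that includes 1 means no statistically significant difference in odds
print("increased risk" if lo > 1 else
      "not statistically significant" if lo <= 1 <= hi else
      "decreased risk")
```

Note how a point estimate well above 1 can still be non-significant if its CI reaches down to 1, which is exactly why Gershman's tip pairs odds ratios with their confidence intervals.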
To maintain accuracy and reliability not only in the health industry and the medical community but also in society at large, Clinical Trials must always be evaluated. In this way, flawed studies can be minimized or prevented, and the way is opened for further improvement and innovation in the medical research field.