Another Questionable FDA Decision? Not Quite
Comparison between an FDA analysis done right and the Moderna decision
After our recent foray into the drug approval process, we were hoping to get back to vice presidents (especially on President’s Day Weekend), but, in the words of Michael Corleone, “Just when I thought I was out, they pull me back in!”
Saturday morning The Washington Post printed an op-ed by Dr. David Hunter, president of the American Association for Pediatric Ophthalmology and Strabismus and professor of ophthalmology at Harvard Medical School, who criticized the FDA for not approving a drug that treats myopia (nearsightedness) in children.
I checked in with our in-house expert on all things FDA (my wife—who was a biostatistician there for 25 years).
BLUF: After reviewing the publicly available materials, the FDA’s questions about the drug’s efficacy were legitimate. On this issue, your civil servants did their jobs. Unlike the recent Moderna vaccine decision, this was not political.
Going into the details sheds light on how the FDA approval process works (when it’s done right), and public health in general.
A Doctor’s Cri de Coeur
Dr. Hunter’s op-ed has been online for a week, but the recent surfeit of FDA news probably led the editorial team at The Washington Post to put it in print, where I saw it this morning (yeah, I’m old school, I like print—maybe it’s better for the eyes).
Dr. Hunter writes that myopia isn’t just about sight: it is associated with long-term eye-health risks, including macular degeneration and potential blindness. Myopia is becoming increasingly prevalent in the United States. Because atropine has not been approved by the FDA for this purpose, Hunter has been prescribing it off label to slow the progression. Doctors are allowed to do this; the FDA does not regulate the practice of medicine. But off-label atropine isn’t covered by insurance, and patients need to take the prescription to specialized pharmacies that will compound the drug.1
Dr. Hunter writes:
The FDA recently released its October 2025 review letter to Sydnexis regarding SYD-101, a low-dose atropine treatment for pediatric myopia. The letter confirmed what the company had been told from the outset: Two clinical trials were not necessary. One large, well-designed trial would suffice. So, Sydnexis conducted the largest global clinical trial ever completed in pediatric myopia, following over 800 children for three years. The trial met all pre-stated criteria for effectiveness and satisfied every safety requirement the FDA had established.
Yet the FDA rejected the application. Though the agency acknowledged that there were no safety concerns and the improvement in progression met the predetermined criteria, they decided that they needed to see even more improvement than originally specified. They concluded that the data did not support effectiveness because children prescribed low-dose atropine would still have to wear glasses.
Dr. Hunter adds that atropine was recently approved in the European Union and has long been available in India and Asia. Further, he argues that the decision has impacts beyond this product. His criticism parallels the accusation against Dr. Prasad’s decision to refuse to review the Moderna flu vaccine: that the FDA is “moving the goalposts.”
Sounds bad, but let’s dig a bit deeper.
Missing Data and Missed Endpoints
My in-house FDA expert examined the FDA’s Complete Response to Sydnexis and the company’s clinical trial protocol. I will quote from the former at length.
While Study SYD-101-001 achieved its prespecified primary endpoint, multiple factors raise significant concerns regarding the robustness and clinical meaningfulness of the observed treatment effect:
(1) Magnitude of Treatment Effect: The treatment effect is modest (<10%) with the upper limit of the 95% confidence interval for the treatment difference (atropine 0.01% vs. Vehicle) at -1.28%, indicating limited clinical benefit. This effect is even more notably small when the summary measure is the mean change from baseline spherical equivalent at Month 36 [only 0.21 D (95% CI: 0.08, 0.35)].
For non-statisticians (including yours truly) this is a little tricky. First, they are measuring changes to the eye, and less change is better. The <10% is actually a negative number: the eyes of children receiving the low dose of atropine changed, on average, almost 10% less than the eyes of children who received the placebo. Second, statistics deals in ranges rather than precise numbers. From the trial you can infer, with 95% confidence, a range within which the true treatment effect for the wider population falls. The average is just under 10%, but the interval runs from an improvement as small as 1.28% to one as large as 18.24%.
A 10% reduction is not much of an effect, and with the low end of the interval so close to zero, there is a real possibility that the true effect is negligible. This finding is for the lower dose of atropine. They also tested a higher dose, which had no statistically significant effect. That’s strange: if the medicine is effective, the higher dose should have had an effect.
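To make the confidence-interval logic concrete, here is a minimal sketch with made-up group summaries (the sample sizes, means, and standard deviations below are illustrative assumptions, not the trial’s actual data; only the rough size of the effect is borrowed from the FDA’s letter):

```python
import math

# Hypothetical group summaries (NOT the trial's actual data):
# mean 3-year change in spherical equivalent, in diopters (more negative = worse).
n_t, mean_t, sd_t = 300, -1.30, 0.60   # low-dose atropine arm (assumed numbers)
n_p, mean_p, sd_p = 300, -1.51, 0.60   # placebo (vehicle) arm (assumed numbers)

effect = mean_t - mean_p                          # +0.21 D: treated eyes changed less
se = math.sqrt(sd_t**2 / n_t + sd_p**2 / n_p)     # standard error of the difference
lo, hi = effect - 1.96 * se, effect + 1.96 * se   # approximate 95% confidence interval

print(f"effect = {effect:+.2f} D, 95% CI = ({lo:+.3f}, {hi:+.3f})")
# The interval excludes zero, so the effect is "statistically significant" --
# but the clinical question is whether even the top of that range matters.
```

The point of the sketch: a significant result whose interval edge sits near zero is exactly the kind of fragile finding the FDA’s letter describes.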
(2) Missing Data Impact: Substantial missing primary outcome data occurred across all treatment arms (approximately 26% for atropine 0.01%, 22% for atropine 0.03%, and 23% for Vehicle). Sensitivity analyses revealed the primary result lacks robustness. Tipping point analysis demonstrated that statistical significance was lost with a shift parameter of only 0.03 D applied to imputed missing values—representing one-third of the smallest shift parameter evaluated by you.
The missing data jumped out at my FDA source, who says it’s a big red flag. Hunter wrote that the trial followed over 800 children, but almost 200 dropped out. That is a very high dropout rate. Why were so many people dropping out of the study? It could have been a logistical issue. But it could also reflect a problem delivering the medicine (kids don’t like drops in their eyes), discomfort caused by the drops, or difficulty getting measurements on the kids. We don’t know, but it’s an issue the reviewers would need to consider.
Reducing the study population can bias the statistical findings. Statistics is about taking a sample and extrapolating to a wider population. Therefore, in general, the larger your sample, the more robust your findings. Of course there are other issues; you need to be sure your sample reflects the broader population. If you only test a product on men, you don’t really know its effect on women. This seems obvious, but it has been a major issue in medical research.
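The sample-size point can be shown with a quick back-of-the-envelope calculation; the standard deviation below is an assumed value for illustration, not a trial figure:

```python
import math

# Rough illustration: the width of a 95% confidence interval for the
# difference between two equal-sized arms shrinks as the sample grows.
sd = 0.60  # assumed per-arm standard deviation of 3-year change (diopters)

widths = []
for n_per_arm in [50, 200, 400]:
    se = sd * math.sqrt(2 / n_per_arm)   # SE of the difference in means
    width = 2 * 1.96 * se                # full width of the 95% interval
    widths.append(width)
    print(f"n = {n_per_arm:3d} per arm: 95% CI width ~ {width:.3f} D")
```

Losing a quarter of the participants pushes you back toward the wider, less informative intervals, which is one reason heavy dropout weakens a trial even before you ask *why* people dropped out.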
If the treatment effect had been clearly significant and the sensitivity analyses (running the data through different models to examine the effect of the missing data) had been consistent with the primary analysis, they would have offered additional assurance. But since they were not consistent, the missing data further undermines the findings.
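The “tipping point analysis” the FDA mentions can be sketched schematically. The idea: impute the missing outcomes, then shift the imputed values by increasingly pessimistic amounts (delta) and ask how small a shift makes significance disappear. All numbers below are made-up illustrations, not the trial’s figures:

```python
# Schematic tipping-point sensitivity analysis with hypothetical summary
# numbers (effect size, standard error, and missing-data fraction are all
# assumed for illustration; none come from the Sydnexis submission).
effect = 0.21        # estimated treatment difference, in diopters (assumed)
se = 0.10            # standard error of that difference (assumed)
frac_missing = 0.25  # ~25% of treated-arm outcomes missing, as in the quote

tipping_delta = None
for step in range(0, 11):
    delta = step * 0.01  # shift applied to imputed missing values, in D
    # Shifting a quarter of the outcomes by delta moves the mean by delta/4.
    shifted_effect = effect - frac_missing * delta
    lo = shifted_effect - 1.96 * se  # lower edge of the 95% interval
    significant = lo > 0
    print(f"delta = {delta:.2f} D: CI lower bound = {lo:+.4f} "
          f"({'significant' if significant else 'NOT significant'})")
    if not significant and tipping_delta is None:
        tipping_delta = delta
```

A result that “tips” at a tiny delta, as the FDA says this one did at 0.03 D, means only a trivially pessimistic assumption about the dropouts is needed to erase the finding.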
(3) Declining Efficacy Over Time: The treatment difference in mean change from baseline spherical equivalent decreased from 0.24D at Month 12 to 0.21D at Month 36, representing a 12.5% decline in efficacy and raising concerns about long-term sustainability.
(4) Treatment Withdrawal Analysis: No treatment difference was observed between subjects who continued atropine 0.01% treatment versus those for whom treatment was withdrawn, questioning the necessity of continued therapy.
(5) Generalizability Concerns: While Asian studies show robust treatment effects, studies conducted in the United States, European Union, and Australia demonstrate modest effects that often fail to achieve statistical significance. This population-specific variation raises questions about the generalizability of atropine 0.01% efficacy to diverse populations, particularly in the United States where the intended patient population includes significant non-Asian demographics.
These findings collectively suggest limited to no clinical benefit with questionable durability and robustness of the treatment effect.
Points (3) and (4) are pretty clear: whatever effect the atropine had, it was limited and did not continue over time.
My expert took a quick look at some of the Asian studies mentioned in (5) and found that they were all over the place. One indicated an effect, but it wasn’t robust (it was a pretty small study). Another mentioned extensive adverse effects: patients complaining about dry eyes. Some didn’t have controls.
It is true that the product was approved in the EU, where they used different endpoints and a shorter time frame (24 months instead of 36 months for a condition where the drug could be taken for years).
My wife notes that this is not the end of the road for the product. When a company brings a new product to the FDA it very rarely gets approval on the first attempt. Further trials or information are needed.
Back to the Moderna Decision
I quoted extensively from the FDA’s letter to Sydnexis to show why and how the FDA made the judgment it did, but also to compare it with the FDA’s “Refusal to File” letter to Moderna. The entirety of the substantive feedback is contained in a single point:
CBER does not consider the application to contain a trial “adequate and well controlled” and the application is therefore, on its’ [sic] face, inadequate for review. This is because your control arm does not reflect the best-available standard of care in the United States at the time of the study. I note that this determination is consistent with FDA’s advice given to you prior to your study.
Notice the difference (besides the typo)? The letter to Moderna does not give specifics that Moderna can discuss further with the FDA, which is standard practice. It was not written by FDA experts. It appears to have been drafted quickly by CBER director Vinay Prasad (note the use of “I” in the last sentence). He has a track record of rejecting drugs and treatments that do not accord with his ideological leanings. He’s also apparently a terrible boss who yells at his staffers.
My expert told me that one reason the FDA so rarely issues a “Refusal to File” letter is that they give companies the option to withdraw the application. A “Refusal to File” letter is very bad for a company’s reputation and financial position, so the company usually withdraws the application and carefully studies the FDA’s feedback. Again, Prasad’s letter provides no substantive feedback, and it does not appear that Moderna was given the option to withdraw its application.
Doctors and Statisticians
Let’s turn back to Dr. Hunter, who saw his mother go blind from macular degeneration and is clearly passionate about ocular health. He wants to help people, that’s why he became a doctor. I’ll bet he’s a good one.
Dr. Hunter is right that we are facing an ocular health crisis. As more people become myopic, more people will suffer an array of vision challenges. Beyond the individual impacts, it could have huge societal costs. There will be direct expenses for treatment, but also the potential for large numbers of people losing their sight later in life and requiring additional support services.
In their desire to heal, doctors may reach for any solution that appears viable. Some go so far as to become quacks and even con men, preying on the suffering. That is absolutely not the accusation here!
But doctors, looking at patients on a case-by-case basis, are not seeing the whole population. They are collecting valuable data, and their individual observations may be accurate. It is human nature for people to understand the world through their individual experience. This has advantages and pitfalls.
Statistics is about analyzing large quantities of data to get an accurate picture, one that extends beyond individual observation. Many doctors do become deeply familiar with statistical analysis, but this is the exception, not the rule.
Dr. Hunter is right about the problem, and we should seek solutions. Statistics can test whether those solutions actually work. Without this analysis we could invest in things that don’t work while the problem festers. We need both medicine and statistics, working hand in hand, to resolve our most pressing public health and medical challenges.
1. Having pharmacies compound a specialized medication can lead to quality control problems.


