Work With Me - Evidence in the Wild

Your Study Needs a Story Before It Needs a Sample Size

Most statistical consultants will give you a power calculation and a protocol appendix. I'll give you the analytical narrative that explains what your study is actually testing, what assumptions you're making, and what decision you'll be able to make when the data comes in.

The same thinking behind Sample Size Theater and the ISIS-2 breakdown—applied to your trial design challenge.

The Problems I Actually Solve

I don't do "sample size and power analysis." I solve the messy design problems that show up before you even know what sample size question to ask.

Your biomarker showed promise in retrospective studies, but now you need a prospective validation strategy that will actually survive regulatory scrutiny. Not just "statistically significant results"—a qualification pathway that makes strategic sense. I help you design the right validation study, figure out what performance thresholds are defensible, and build the analytical narrative that explains why this approach is the right one for your evidence question.

You're three months from a critical Go/No-Go decision and your Phase 2 data is ambiguous. The primary endpoint missed. The secondary endpoints are mixed. Everyone wants to know "what does this mean?" I help you figure out what story the data is actually telling, what assumptions you'd be making with either decision, and how to frame the evidence honestly—not as success or failure, but as information that narrows your uncertainty.

Your grant application just got triaged at peer review for "inadequate statistical justification." The reviewers wrote "unclear rationale for sample size" or "analysis plan does not match research aims." I help you rebuild your statistical approach so reviewers can see not just what you're doing, but why it's the right design for your question. The math doesn't change—the clarity does.

You need to explain your trial design to stakeholders who don't speak statistics. Your executive team wants to know "how confident can we be?" Your clinical partners want to know "what will this tell us about patient care?" Your investors want to know "what are we actually testing?" I translate your study design into language that makes the assumptions visible and the decisions clear.

Why People Bring Me In

I Build Tools While I Think

I wrote Sample Size Theater because stakeholders need to see the trade-offs they're making, not just trust the math. When I'm working through your trial design, you'll get the same thing—interactive simulations that let you explore what happens if your assumptions are wrong, visual explanations that make the design choices clear, and sensitivity analyses that show you where the risk actually lives.

You won't get a PDF with a power calculation. You'll get a tool that lets your team understand the design.
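To make "sensitivity analyses that show you where the risk actually lives" concrete, here is a minimal base-R sketch of the idea. Every number in it is a hypothetical placeholder, not a recommendation: a planned per-arm sample size gets stress-tested against a grid of true effects the team might actually be facing.

```r
# Minimal sensitivity sketch in base R. All numbers are hypothetical:
# a two-arm trial sized around an assumed standardized effect of 0.5,
# checked against a grid of smaller (and larger) true effects.

n_per_arm    <- 64                  # planned size per arm (hypothetical)
true_effects <- seq(0.2, 0.6, 0.1)  # the "what if we're wrong?" grid

power_at <- function(d, n) {
  power.t.test(n = n, delta = d, sd = 1, sig.level = 0.05)$power
}

sensitivity <- data.frame(
  true_effect = true_effects,
  power       = sapply(true_effects, power_at, n = n_per_arm)
)
print(sensitivity, digits = 2)
# The table makes the risk visible: power erodes fast when the true
# effect is meaningfully smaller than the one the design assumed.
```

In an actual engagement this grid becomes an interactive view your team can play with, but the underlying logic is exactly this simple.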

I've Designed Studies That "Failed" Into Better Questions

I designed a Zelen randomization for an early cancer detection study. Consent rates were terrible—it looked like a methodological disaster. But low consent wasn't random; it was revealing something scientifically important about who was willing to be screened. We pivoted the research question, and the "failure" became the most valuable data in the study.

Sometimes the best statistical advice is "this design will tell you something more interesting than what you asked for."

I Translate Statistical Decisions Into Strategic Choices

The difference between "we need N=250" and "at N=250 we're powered to detect a 15% effect, but if we believe the effect could be smaller, we're making a bet that early evidence of direction matters more than definitive proof" is the difference between math and strategy.

I do the second one. You'll understand not just what the numbers are, but what assumptions you're committing to when you choose them.
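As an illustration of that second framing, here is a hedged sketch. The sentence above doesn't pin down the endpoint or what a "15% effect" means, so the sketch assumes a binary endpoint, a 50% control-arm rate, N = 250 per arm, and "effect" read as an absolute difference in proportions; change any of those assumptions and the numbers move.

```r
# Hypothetical reading of the N = 250 trade-off: binary endpoint,
# 50% control-arm response rate, n = 250 per arm, two-sided alpha 0.05.
n_arm  <- 250
p_ctrl <- 0.50

for (lift in c(0.15, 0.10, 0.07)) {
  pw <- power.prop.test(n = n_arm, p1 = p_ctrl, p2 = p_ctrl + lift,
                        sig.level = 0.05)$power
  cat(sprintf("absolute lift %.2f -> power %.2f\n", lift, pw))
}
# The strategic reading: if the smaller lifts are live possibilities,
# fixing N at 250 per arm is a bet that directional evidence matters
# more than a definitive answer at a much larger N.
```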

Example: When "Standard Sample Size Calculations" Missed the Point

A digital health company needed a sample size for their pilot study. Standard calculations said 120 patients for 80% power to detect a moderate effect. But their real question wasn't "is this statistically significant?"—it was "do we have enough signal to justify building the full product?"
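For what it's worth, the "standard calculation" is easy to reproduce under one plausible reading. "Moderate effect" isn't defined in the story, so the check below assumes a standardized effect (Cohen's d) of roughly 0.52, which is what lands a classical two-sample t-test near 120 patients total.

```r
# Assumed reading of "moderate effect": Cohen's d of about 0.52.
std <- power.t.test(delta = 0.52, sd = 1, sig.level = 0.05, power = 0.80)
cat(sprintf("n per arm: %.0f, total: ~%.0f\n", std$n, 2 * std$n))
```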

We designed an adaptive approach with planned interim looks. If the signal was strong, they could stop at 60 patients with compelling evidence. If it was marginal, they could continue to 180 to clarify the uncertainty. They stopped at 75 with results that were scientifically convincing and strategically decisive.

More importantly, at every interim analysis, the team understood exactly what assumptions they were testing and what decisions each possible result would support. The design matched the actual decision-making process instead of pretending they'd wait for p<0.05.
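To show the shape of that interim logic, here is a toy Monte Carlo in base R. It is not the client's design: the outcome model (unit-variance normal signal per patient), the single interim at 60 patients, and the stopping thresholds are all invented for illustration, and a real version would use proper group-sequential boundaries to control overall type I error.

```r
# Toy Monte Carlo of an interim-look design: stop early for a strong
# signal, stop early for futility, otherwise continue to the maximum N.
simulate_trial <- function(true_effect, n_interim = 60, n_max = 180,
                           z_stop = 2.5, z_futile = 0.5) {
  y  <- rnorm(n_max, mean = true_effect)      # per-patient signal (toy model)
  z1 <- sqrt(n_interim) * mean(y[1:n_interim])
  if (z1 >= z_stop)   return(c(n = n_interim, go = 1))  # compelling: stop early
  if (z1 <= z_futile) return(c(n = n_interim, go = 0))  # futile: stop early
  zf <- sqrt(n_max) * mean(y)                           # marginal: run to n_max
  c(n = n_max, go = as.numeric(zf >= 1.96))
}

set.seed(1)
runs <- t(replicate(5000, simulate_trial(true_effect = 0.3)))
cat(sprintf("expected sample size: %.0f, 'go' rate: %.2f\n",
            mean(runs[, "n"]), mean(runs[, "go"])))
```

The two thresholds (z_stop, z_futile) are where the real design conversation happens: they encode, in advance, how much evidence the team agrees is "compelling" and how much is "futile."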

From Blog to Your Project

The same analytical thinking behind the ISIS-2 trial breakdown is how I'll approach your trial design challenge. The interactive tools I build for the blog become the tools your team uses to understand your own study assumptions. The storytelling that makes my posts readable is how I'll explain your statistical approach to your executive team, your IRB, or your regulators.

What Working Together Actually Looks Like

Trial Design & Strategy (Project-Based): For teams with a specific study that needs design work. I help with endpoint definition, randomization strategy, power and sample size justification, analysis planning, and regulatory narrative. You'll get design documents, simulation tools, sensitivity analyses, and clear explanations of the assumptions you're making.

Ongoing Statistical Partnership (Retainer): For teams that want a statistician who understands their program over time. Regular design reviews, analysis planning, preparation for regulatory or investor discussions, and strategic thinking about your evidence roadmap. This is for when you need someone who knows your science well enough to tell you when your statistical plan doesn't match your actual question.

Trial Rescue / Second Opinion: When a study has produced confusing results, hit unexpected data problems, or needs a post-hoc analysis strategy that's scientifically honest. I help you figure out what went wrong, what's salvageable, and how to frame the findings in a way that's analytically defensible.

For Academic Researchers: Grant application statistical sections, design consulting for investigator-initiated trials, power analysis that reviewers will actually understand, and analytical strategies for exploratory studies where "hypothesis testing" isn't quite the right framework.

Background

I'm a biostatistician with nearly a decade of experience designing, analyzing, and rescuing clinical studies. I've worked on:

  • Adaptive platform trials in oncology
  • Biomarker validation studies for early cancer detection
  • Risk prediction models for cardiovascular and metabolic disease
  • Precision prevention trials in women's health
  • Regulatory submissions where the novel endpoint required custom statistical justification

I build reproducible analysis pipelines (R, SAS, SQL, Git, Quarto, Shiny), write statistical analysis plans that survive regulatory review, and run design meetings where clinicians, product teams, and executives actually understand the trade-offs they're making.

My work has been published in JAMA, The Lancet, and JNCI. I've presented statistical methodology at JSM, ENAR, and SCT. And I write a blog that tries to make trial design less boring than it has any right to be.

Let's Talk About Your Specific Challenge

Book a 30-minute call. I'll ask about your study, you'll tell me what's unclear or not working, and I'll tell you honestly whether I'm the right fit and what approach I'd take. No sales pitch. Just straight talk about your statistical problem.