The Frontier Advantage in 2026 — Why “Don’t Roll Your Own” Often Wins

Spring 2026

The 2026 conversation is often framed as “tokens forever” versus “build private AI.” This keynote challenges that default framing. For many enterprises, the highest-performing and most cost-effective path is staying on the latest hyperscaler frontier reasoning models (OpenAI / Anthropic / Google) while making your company data radically usable as context: retrieval, permissions, provenance, evals, and workflow orchestration.

Dr. Gilmer Valdes will unpack why rolling your own (especially with smaller or open models) can be a false economy once you account for ongoing model quality drift, safety and compliance re-validation, red-teaming, prompt and toolchain regressions, and the organizational cost of sustaining an internal “model company.” He will also address a key technical trap: continual fine-tuning, especially on smaller models, can induce catastrophic forgetting, turning “customization” into a maintenance treadmill. By contrast, modern “frozen model + knowledge injection” approaches (RAG, modular memory, governed retrieval) keep capabilities stable and auditable.
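The “frozen model + knowledge injection” pattern can be sketched in a few lines. This is a toy illustration, not a production system or anything from the talk itself: the document store, the `retrieve` scorer, and the `build_prompt` helper are all hypothetical names, and real deployments would use embedding-based retrieval plus permission filtering rather than keyword overlap.

```python
# Toy sketch of "frozen model + knowledge injection": the model's weights are
# never fine-tuned; company knowledge is injected at query time as retrieved
# context, so there is nothing to catastrophically forget.
# DOCS, retrieve, and build_prompt are illustrative names, not a real API.

DOCS = {
    "hr-001": "Employees accrue 1.5 vacation days per month of service.",
    "it-204": "VPN access requires hardware-token MFA enrollment.",
    "fin-310": "Purchase orders above 10000 dollars need VP approval.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by naive keyword overlap with the query.
    A governed system would add embeddings, ACL filters, and provenance."""
    terms = set(query.lower().split())
    scored = sorted(
        DOCS.items(),
        key=lambda kv: len(terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Assemble a context-grounded prompt for a frozen frontier model,
    citing document IDs so answers stay auditable."""
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in retrieve(query))
    return f"Answer using only the sources below.\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How many vacation days do employees accrue per month?")
```

The key property is that updating the knowledge base (editing `DOCS`) changes the model's answers without touching its weights, which is what keeps behavior stable and re-validation cheap.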

Finally, the discussion will turn to whether frontier reasoning advances are pushing us toward early forms of AGI-like capability, or whether “jagged intelligence” and reliability collapse under complexity remain the real ceiling.

Speakers:

Dr. Gilmer Valdes, a leader in the field of Clinical AI and Machine Learning, is CEO and Founder of OncoBrain, which is pioneering initiatives to integrate AI into clinical practice, with a special focus on improving patient outcomes through innovative machine learning applications.

He previously served as Vice Chair of Machine Learning and Director of Clinical AI at Moffitt Cancer Center, leading the translation of ML research into deployed clinical workflows and building a clinician-facing clinical AI platform (BlueScrubs) that informed OncoBrain’s approach—grounded in transparency, safety guardrails, measurable endpoints, and real-world integration.

Dr. Valdes earned a PhD in Medical Physics (UCLA) and completed fellowship and clinical residency training in Therapeutic Medical Physics (University of Pennsylvania), with faculty experience at UCSF and AI research training through an NIH K08 across UCSF–UC Berkeley–Stanford. His early work focused on ML and advanced modeling for quality assurance, outcome prediction, and treatment personalization (notably in prostate and lung cancer), with an emphasis on robust performance under real-world data constraints.

His research has advanced expert-augmented learning and interpretable additive/boosted modeling approaches (e.g., MediBoost, Conditional Super Learner, Representational Gradient Boosting, additive tree families).
