From Pit Lane to Pipeline: Lessons in Data Strategy from McLaren F1

Back in the early 2000s, I had the opportunity to visit McLaren F1 and witness firsthand how they were beginning to harness vast, disparate data streams to drive performance. Even then, McLaren was ahead of the curve—integrating telemetry, tire wear, weather conditions, and driver behavior into centralized systems to inform race strategy and car development.

What struck me wasn’t just the volume of data, but how they approached it. They weren’t just collecting information—they were thinking in terms of relationships. They were connecting seemingly unrelated signals to uncover patterns and optimize outcomes. In hindsight, it was an early glimpse into what we now call semantic data modeling and real-time analytics.

At the time, I was working at a large pharmaceutical company, facing many of the same data challenges that still persist today—particularly in High Throughput Screening (HTS) and early-stage drug discovery. We were generating massive datasets from compound libraries, assay results, and lab instrumentation. But much of that data was siloed—scattered across systems, formats, and teams. Integrating and interpreting it was a major hurdle. We often relied on manual processes and intuition to connect the dots, but the complexity and fragmentation made it nearly impossible to see the full picture. We knew the insights were there—but without better tools for correlation and context, they remained locked away.

That visit to McLaren sparked a question that stayed with me: what if we could apply the same principles—centralized data integration, real-time feedback loops, and predictive modeling—to accelerate decision-making in pharma?

That experience fundamentally shaped how I began to think about knowledge management, data architecture, and the future of AI/ML in life sciences. It was a clear example of what becomes possible when data is treated not as a byproduct of operations, but as a strategic asset.

Today, we’re finally seeing that vision come to life. Advances in AI and computing power are helping bridge the gap. Natural Language Processing (NLP) and machine learning models can now extract structure and meaning from unstructured content, making it easier to unify data across formats and sources. Foundation models trained on scientific literature and experimental data can identify patterns and relationships that would be nearly impossible to detect manually. And with scalable cloud infrastructure and high-performance computing, we can process and analyze these massive datasets in real time. Together, these technologies are transforming how we approach discovery—turning once-inaccessible data into actionable insight.
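To make the first of those ideas concrete, here is a minimal sketch of pulling structured entities out of a free-text lab note with an off-the-shelf NLP model, so they can later be joined with structured assay data. It is illustrative only: the spaCy general-purpose English model and the sample note are my own assumptions for this example, not anything McLaren or my former employer used, and a domain-tuned model would be the realistic choice for scientific text.

    # Illustrative sketch: extract (text, label) entities from an unstructured note
    # so they can be mapped onto a shared schema alongside structured assay data.
    # Assumes spaCy and its small English model are installed:
    #   pip install spacy && python -m spacy download en_core_web_sm
    import spacy

    # General-purpose English model; a biomedical/domain model would perform better here.
    nlp = spacy.load("en_core_web_sm")

    note = (
        "Compound MC-0417 showed 72% inhibition at 10 uM in the kinase panel "
        "run on 2024-03-12; results flagged for follow-up by the HTS team."
    )

    doc = nlp(note)
    # Each recognized entity becomes a (text, label) pair, e.g. dates and percentages,
    # which downstream code could normalize and link to records in other systems.
    entities = [(ent.text, ent.label_) for ent in doc.ents]
    print(entities)

Even a toy example like this shows the pattern: unstructured content goes in, labeled, joinable pieces come out, and the integration problem starts to look tractable.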

Private Conversations to Public Sharing — Why I’m Posting Again

After several years of staying quiet on LinkedIn and on this blog, I’ve decided to start posting again. It’s been ten years since my last post here.

I’ve had to pause this blog several times in the past because of varying social media policies at the different companies I worked for. And I avoided posting on LinkedIn because it’s changed; it has become a mix of:

  • Sales pitches disguised as thought leadership
  • Personal stories that feel more performative than professional
  • Comment threads that escalate quickly

For a while, that made me hesitant to share on LinkedIn. I stuck to private conversations, industry events, and smaller circle discussions. Recently, peers and colleagues encouraged me to share those insights openly. I’ve realized there’s still a real appetite for substance over noise.

So here’s what I’m committing to:

  • Sharing thoughtful, experience-based insights from the Biotech/Pharma + IT/Digital space
  • Focusing on real-world challenges and innovations
  • Staying grounded, respectful, and open to dialogue

If you’re also craving more signal and less noise on your feed, I hope you’ll follow along. Better yet, join the conversation.