Research Engineer, Foundation Model

Posted:
September 11, 2025

Location(s):
Freiburg im Breisgau, Baden-Württemberg, Germany ⋅ Berlin, Germany ⋅ San Francisco, California, United States ⋅ New York, New York, United States

Experience Level(s):
Junior ⋅ Mid Level ⋅ Senior

Field(s):
AI & Machine Learning

Who we are

Prior Labs is building foundation models that understand tabular data, the backbone of science and business. Foundation models have transformed text and images, but structured data has remained largely untouched. We’re tackling this $600B opportunity to fundamentally change how organizations work with scientific, medical, financial, and business data.

Momentum: We’re the world-leading organization in structured data ML. Our TabPFN v2 model was published in Nature and set a new state-of-the-art for tabular machine learning. Since its release, we’ve scaled model capabilities more than 20x, reached 2.5M+ downloads, 5,500+ GitHub stars, and are seeing accelerating adoption across research and industry. We’re now building the next generation of tabular foundation models and actively commercializing them with global enterprises across Europe and the US.

Our team: We’re a small, highly selective team of 20+ engineers and researchers, chosen from over 5,000 applicants, with backgrounds spanning Google, Apple, Amazon, Microsoft, G-Research, Jane Street, Goldman Sachs, and CERN. We’re led by the creators of TabPFN and advised by world-leading AI researchers such as Bernhard Schölkopf and Turing Award winner Yann LeCun. Meet the team here.

What’s Next: Backed by top-tier investors and leaders from Hugging Face, DeepMind, and Silo AI, we’re scaling fast. This is the moment to join: help us shape the future of structured data AI. Read our manifesto.

Core Areas of Impact

You'll be among the engineers developing an entirely new class of AI models. Our latest breakthrough, TabPFN, outperforms all existing approaches by orders of magnitude, and we're just getting started. This is a rare opportunity to:

  • Work on fundamental breakthroughs in AI, not just incremental improvements

  • Shape the future of how organizations worldwide work with their most valuable data

  • Join at the perfect time: We just received significant funding (announcement coming soon!), have strong traction (2.5M+ downloads), and are scaling rapidly

At Prior Labs, we don't believe in "throwing research over the wall." Our Research Engineers are core members of the science team, contributing to architectural design while ensuring our models scale to the next order of magnitude. As an early team member, you'll have significant technical ownership and the opportunity to grow into a leadership position as we scale. While no single person needs to cover all these areas, these represent the types of challenges you might tackle based on your interests and expertise:

Model Engineering & Implementation

  • Build and improve training pipelines for large-scale tabular foundation models

  • Design modular architectures that support rapid experimentation

  • Optimize training and inference performance

Research Infrastructure & Tooling

  • Improve experiment tracking and evaluation systems

  • Build efficient data processing pipelines for tabular data

  • Maintain clean, documented codebases that the team can build upon

Production & Scale

  • Design scalable serving architecture for our models

  • Implement deployment pipelines

What We're Looking For

  • Strong engineering fundamentals with excellent Python expertise

  • Deep experience with ML frameworks, especially PyTorch and scikit-learn

  • Proven track record of implementing and deploying ML systems

  • Passion for writing clean, maintainable, and well-documented code

  • Demonstrated interest in foundation models and their real-world applications

What Sets You Apart

  • Master's degree or PhD in Computer Science or related technical field

  • Contributions to open-source projects in related fields

  • Experience implementing large language models or foundation models

  • Track record of implementing papers

  • Background in ML infrastructure and tooling

  • Experience with distributed training systems

Location

  • Offices in Freiburg, Berlin, San Francisco, and New York City, with flexibility to work across our locations

Benefits

  • Competitive compensation commensurate with industry experience, plus meaningful equity

  • 30 days of paid vacation + public holidays

  • Comprehensive benefits including healthcare, transportation, and fitness

  • Work with state-of-the-art ML architectures, substantial compute resources, and a world-class team

Our Commitments

  • We believe the best products and teams come from a wide range of perspectives, experiences, and backgrounds. That’s why we welcome applications from people of all identities and walks of life, especially anyone who’s ever felt discouraged by "not checking every box."

  • We’re committed to creating a safe, inclusive environment and providing equal opportunities regardless of gender, sexual orientation, origin, disabilities, or any other traits that make you who you are.