Applied ML Engineer

Posted:
3/17/2025, 6:30:13 PM

Experience Level(s):
Junior ⋅ Mid Level

Field(s):
AI & Machine Learning

Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.  

Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In August, we launched Cerebras Inference, the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.

About The Role

As an applied machine learning engineer, you will take today’s state-of-the-art solutions in various verticals and adapt them to run on the new Cerebras system architecture. You will get to see how deep learning is being applied to some of the world’s most difficult problems today and help ML researchers in these fields to innovate more rapidly and in ways that are not currently possible on other hardware systems.

Responsibilities

  • Be familiar with state-of-the-art transformer architectures for language and vision models (a minimal illustration follows this list).
  • Bring up new state-of-the-art models on the Cerebras System and validate their functionality.
  • Train models to convergence and tune hyper-parameters.
  • Optimize model code to run efficiently on the Cerebras System.
  • Explore new model architectures that take advantage of Cerebras' unique capabilities.
  • Develop new approaches for solving real-world AI problems across various domains.
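The transformer architectures referenced above share a common pattern: attention and feed-forward sub-layers wrapped in residual connections. As a purely illustrative sketch (not Cerebras code; the layer sizes and names below are arbitrary assumptions), a minimal pre-norm transformer block in PyTorch looks roughly like this:

    # Minimal pre-norm transformer block (illustrative sketch only; sizes are
    # arbitrary and nothing here reflects Cerebras-internal code).
    import torch
    import torch.nn as nn

    class TransformerBlock(nn.Module):
        def __init__(self, d_model: int = 512, n_heads: int = 8, d_ff: int = 2048):
            super().__init__()
            self.norm1 = nn.LayerNorm(d_model)
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.norm2 = nn.LayerNorm(d_model)
            self.mlp = nn.Sequential(
                nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
            )

        def forward(self, x, attn_mask=None):
            # Self-attention sub-layer with a residual connection.
            h = self.norm1(x)
            attn_out, _ = self.attn(h, h, h, attn_mask=attn_mask, need_weights=False)
            x = x + attn_out
            # Feed-forward sub-layer with a residual connection.
            return x + self.mlp(self.norm2(x))

    x = torch.randn(2, 16, 512)          # (batch, sequence, features)
    print(TransformerBlock()(x).shape)   # torch.Size([2, 16, 512])

Bring-up work of the kind described above typically starts by checking that blocks like this produce outputs and losses matching a reference implementation before training is scaled up.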

Requirements

  • Master's or PhD in Computer Science or a related field.
  • Familiarity with JAX/TensorFlow/PyTorch.
  • Good understanding of how to define custom layers and back-propagate through them (see the sketch after this list).
  • Experience with transformer deep learning models.
  • Experience in a vertical such as computer vision or language modeling.
  • Experience with Large Language Models such as the GPT family, Llama, or BLOOM.
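To make the custom-layer requirement concrete, here is a minimal, hypothetical sketch of defining an operation with an explicit backward pass in PyTorch using torch.autograd.Function (the operation and values are illustrative only and are not taken from this posting):

    # Hypothetical custom layer with an explicit backward pass (illustration only).
    import torch

    class ScaledReLU(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, scale):
            ctx.save_for_backward(x)
            ctx.scale = scale
            return scale * x.clamp(min=0.0)

        @staticmethod
        def backward(ctx, grad_out):
            (x,) = ctx.saved_tensors
            # Gradient w.r.t. x is scale where x > 0 and zero elsewhere;
            # the scalar scale argument receives no gradient.
            grad_x = grad_out * ctx.scale * (x > 0).to(grad_out.dtype)
            return grad_x, None

    x = torch.randn(4, requires_grad=True)
    y = ScaledReLU.apply(x, 2.0)
    y.sum().backward()
    print(x.grad)   # 2.0 where x > 0, 0.0 elsewhere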

Why Join Cerebras

People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:

  1. Build a breakthrough AI platform beyond the constraints of the GPU.
  2. Publish and open source their cutting-edge AI research.
  3. Work on one of the fastest AI supercomputers in the world.
  4. Enjoy job stability with startup vitality.
  5. Thrive in a simple, non-corporate work culture that respects individual beliefs.

Read our blog: Five Reasons to Join Cerebras in 2025.

Apply today and join the forefront of groundbreaking advancements in AI!


Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.


This website or its third-party tools process personal data. For more details, review our CCPA disclosure notice.

Cerebras Systems

Website: http://cerebras.net/

Headquarters Location: Sunnyvale, California, United States

Employee Count: 251-500

Year Founded: 2016

IPO Status: Private

Last Funding Type: Series F

Industries: Artificial Intelligence (AI) ⋅ Computer ⋅ Hardware ⋅ Software