ML Compiler Stack Engineer

Posted:
8/12/2024, 9:57:26 AM

Location(s):
Ontario, Canada ⋅ Toronto, Ontario, Canada

Experience Level(s):
Junior ⋅ Mid Level ⋅ Senior

Field(s):
AI & Machine Learning ⋅ Software Engineering

Cerebras Systems builds the world’s largest AI chip, 56 times larger than the largest GPU. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to run large-scale ML applications effortlessly, without the hassle of managing hundreds of GPUs or TPUs.
 
Cerebras’ current customers include national labs, global corporations across multiple industries, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields.
 
Job Overview:

We are seeking a highly skilled Compiler Engineer with a passion for optimizing compiler technologies for AI workloads. You will be an integral part of our software compiler stack team, focusing on enhancing our compiler to fully leverage the unique capabilities of our CS3 system. Your work will play a critical role in achieving unprecedented levels of performance, efficiency, and scalability for AI applications.
 
Key Responsibilities:
  • Design, develop, and optimize compiler technologies for AI chips using LLVM and MLIR frameworks.
  • Identify and address performance bottlenecks, ensuring optimal resource utilization and execution efficiency.
  • Work with the machine learning team to integrate compiler optimizations with AI frameworks and applications.
  • Contribute to the advancement of compiler technologies by exploring new ideas and approaches.
 
Qualifications:
  • Bachelor’s, Master’s, or Ph.D. in Computer Science, Electrical Engineering, or a related field.
  • Proven experience in compiler development, particularly with LLVM and/or MLIR.
  • Strong background in optimization techniques, particularly those involving NP-hard problems.
  • Proficiency in C/C++ programming and experience with low-level optimization.
  • Familiarity with AI workloads and architectures is a plus.
  • Excellent problem-solving skills and a strong analytical mindset.
  • Ability to work in a fast-paced, collaborative environment.
 
What We Offer:
  • Competitive salary and benefits package.
  • Opportunities for professional growth and career advancement.
  • A dynamic and innovative work environment.
  • The chance to work on cutting-edge technologies and make a significant impact on the future of AI.


Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.

Cerebras Systems

Website: http://cerebras.net/

Headquarter Location: Sunnyvale, California, United States

Employee Count: 251-500

Year Founded: 2016

IPO Status: Private

Last Funding Type: Series F

Industries: Artificial Intelligence (AI) ⋅ Computer ⋅ Hardware ⋅ Software