Senior Site Reliability Engineer - Internal AI Research Clusters

Posted:
9/11/2024, 9:49:56 AM

Location(s):
Austin, Texas, United States ⋅ Texas, United States ⋅ Westford, Massachusetts, United States ⋅ North Carolina, United States ⋅ Redmond, Washington, United States ⋅ California, United States ⋅ Durham, North Carolina, United States ⋅ Washington, United States ⋅ Massachusetts, United States

Experience Level(s):
Senior

Field(s):
AI & Machine Learning

NVIDIA is the leader in AI, machine learning, and datacenter acceleration, and is expanding that leadership into datacenter networking with Ethernet switches, NICs, and DPUs. NVIDIA has continuously reinvented itself over two decades. Our invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI: the next era of computing. NVIDIA is a “learning machine” that constantly evolves by adapting to new opportunities that are hard to solve, that only we can tackle, and that matter to the world. This is our life’s work: to amplify human imagination and intelligence. Make the choice to join our diverse team today!

As a member of the GPU AI/HPC Infrastructure team, you will provide leadership in the design and implementation of groundbreaking GPU compute clusters that run demanding deep learning, high-performance computing, and computationally intensive workloads. We seek an expert to identify architectural changes and/or completely new approaches for our GPU compute clusters. As a Site Reliability Engineer, you will help us tackle strategic challenges including compute, networking, and storage design for large-scale, high-performance workloads; effective resource utilization in a heterogeneous compute environment; evolving our private/public cloud strategy; capacity modeling; and growth planning across our global computing environment.

What you'll be doing:

In this role you will build and improve our ecosystem around GPU-accelerated computing, including developing large-scale automation solutions. You will also maintain and build deep learning clusters at scale and support our researchers in running their flows on our clusters, including performance analysis and optimization of deep learning workflows. You will design, implement, and support the operational and reliability aspects of large-scale distributed systems, with a focus on performance at scale, real-time monitoring, logging, and alerting. Additional responsibilities include:

  • Troubleshoot, diagnose, and root-cause system failures, isolating the responsible components and failure scenarios while working with internal and external stakeholders

  • Find and fix problems before they occur

  • Build automation for cluster bring-up and scale-up operations

  • Improve operational excellence and processes

  • Write and review code, develop documentation and capacity plans, and debug the hardest problems, live, on some of the largest and most complex systems in the world

What we need to see:

  • Bachelor’s degree in Computer Science, Electrical Engineering, or a related field (or equivalent experience), with a minimum of 5 years of experience designing and operating large-scale compute infrastructure

  • Experience analyzing and tuning performance for a variety of AI/HPC workloads, including workflows that use MPI

  • Working knowledge of cluster configuration management tools such as Ansible, Puppet, or Salt, and experience with advanced AI/HPC job schedulers, ideally Slurm, K8s, RTDA, or LSF

  • In-depth understanding of container technologies such as Docker, Singularity, Shifter, and Charliecloud

  • Experience with development languages such as Python, C++, Rust, PHP/Hack, Go, and Java

  • Experience configuring and running infrastructure level applications, such as Kubernetes, Terraform, MySQL, etc.

  • Proactively create experiments and tooling to detect and diagnose hardware/firmware/software health issues

Ways to stand out from the crowd:

  • Experience with NVIDIA GPUs, CUDA programming, NCCL, and MLPerf benchmarking

  • Background in machine learning and deep learning concepts, algorithms, and models

  • Familiarity with InfiniBand, IPoIB, and RDMA

  • Understanding of fast, distributed storage systems like Lustre and GPFS for AI/HPC workloads

  • Familiarity with deep learning frameworks like PyTorch and TensorFlow

NVIDIA offers highly competitive salaries and a comprehensive benefits package. We have some of the most brilliant and talented people in the world working for us and, due to unprecedented growth, our world-class engineering teams are growing fast. If you're a creative and autonomous engineer with real passion for technology, we want to hear from you.

The base salary range is 148,000 USD - 276,000 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.

You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

NVIDIA

Website: https://www.nvidia.com/

Headquarter Location: Santa Clara, California, United States

Employee Count: 10001+

Year Founded: 1993

IPO Status: Public

Last Funding Type: Grant

Industries: Artificial Intelligence (AI) ⋅ GPU ⋅ Hardware ⋅ Software ⋅ Virtual Reality