Posted:
9/19/2024, 2:20:06 AM
Location(s):
Oregon, United States ⋅ Hillsboro, Oregon, United States ⋅ Durham, North Carolina, United States ⋅ California, United States ⋅ North Carolina, United States
Experience Level(s):
Senior
Field(s):
AI & Machine Learning ⋅ Software Engineering
Workplace Type:
Remote
The PyTorch Team @ NVIDIA is hiring passionate parallel programmers. Join us to design and build the tools used by millions of AI practitioners deploying AI applications that scale to thousands of GPUs. Our team is responsible for the continual delivery of a best-in-class PyTorch experience on NVIDIA hardware. You will collaborate with multi-disciplinary engineering teams across NVIDIA and with the international PyTorch open-source community to deliver the best of NVIDIA software to our customers.
In this position you will learn innovative techniques from NVIDIA's domain experts for efficiently programming the world's most sophisticated computer systems. You will build these techniques into NVIDIA/Fuser (commonly known as "nvFuser"), applying our groundbreaking Parallel Programming Theory so that these optimizations can be applied broadly, automatically, and safely to algorithms written in NumPy and PyTorch. Beyond building nvFuser, you will influence and improve the entire software stack, from users down to the CUDA compiler and the Lightning-Thunder graph compiler, and help shape the design of future NVIDIA hardware platforms. Join our ambitious and diverse team as we strive to lead the field in AI programming.
What you will be doing:
Crafting a code generation system to accelerate portions of a graph collected from a machine learning framework.
Partnering with NVIDIA’s hardware and software teams to improve GPU performance in PyTorch.
Designing, building, and supporting production AI solutions used by enterprise customers and partners.
Optimizing the performance of influential, modern deep learning models from academic and industry research for NVIDIA GPUs and systems.
Collaborating with internal applied researchers to improve their AI tools.
Advising on the design of new hardware generations.
What we need to see:
MS or PhD in Computer Science, Computer Engineering, Electrical Engineering, or a related field (or equivalent experience).
Parallel programming experience writing optimized kernels in the NVIDIA CUDA programming language or similar parallel languages.
4+ years of experience with C++ programming.
Demonstrated experience developing large software projects.
Excellent verbal and written communication skills.
Ways to stand out from the crowd:
Proven technical foundation in CPU and GPU architectures, numeric libraries, and modular software design.
A background in deep learning compilers or compiler infrastructure.
Expertise with optimized distributed parallelism techniques; it's a bonus if that includes parallelizing large language models!
Knowledge of heuristic generation that employs cost models, machine learning, or auto-tuning.
Contributions to PyTorch, NumPy, JAX, TensorFlow, OpenAI Triton, Lightning Thunder, TVM, Halide, or a similar system.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
Website: https://www.nvidia.com/
Headquarter Location: Santa Clara, California, United States
Employee Count: 10001+
Year Founded: 1993
IPO Status: Public
Last Funding Type: Grant
Industries: Artificial Intelligence (AI) ⋅ GPU ⋅ Hardware ⋅ Software ⋅ Virtual Reality