Posted:
10/22/2024, 5:00:00 PM
Location(s):
Santa Clara, California, United States
Experience Level(s):
Senior
Field(s):
AI & Machine Learning ⋅ Software Engineering
As a System Software Engineer (LLM Inference & Performance Optimization) you will be at the heart of our AI advancements. Our team is dedicated to pushing the boundaries of machine learning and optimizing large language models (LLMs) for flawless, real-time performance across diverse hardware platforms. This is your chance to contribute to world-class solutions that impact the future of technology.
What you'll be doing:
Design, implement, and optimize inference logic for fine-tuned LLMs, working closely with Machine Learning Engineers.
Develop efficient, low-latency glue logic and inference pipelines scalable across various hardware platforms, ensuring outstanding performance and minimal resource usage.
Leverage hardware accelerators such as GPUs and other specialized hardware to improve inference speed and support real-world applications.
Collaborate with cross-functional teams to integrate models seamlessly into diverse environments, meeting strict functional and performance requirements.
Conduct detailed performance analysis and optimization for specific hardware platforms, focusing on efficiency, latency, and power consumption.
What we need to see:
8+ years of expert proficiency in C++ with a deep understanding of memory management, concurrency, and low-level optimizations.
M.S. or higher degree (or equivalent experience) in Computer Science, Engineering, or a related field.
Strong experience in system-level software engineering, including multi-threading, data parallelism, and performance tuning.
Proven expertise in LLM inference, with experience in model-serving frameworks such as ONNX Runtime or TensorRT.
Familiarity with real-time systems and performance-tuning techniques, especially for machine learning inference pipelines.
Ability to work collaboratively with Machine Learning Engineers and cross-functional teams to align system-level optimizations with model goals.
Deep understanding of hardware architectures and the ability to leverage specialized hardware for optimized ML model inference.
Ways to stand out from the crowd:
Experience with deep learning hardware accelerators, such as NVIDIA GPUs.
Familiarity with ONNX, TensorRT, or cuDNN for LLM inference on GPU.
Experience with low-latency optimizations and real-time system constraints for ML inference.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
Website: https://www.nvidia.com/
Headquarter Location: Santa Clara, California, United States
Employee Count: 10001+
Year Founded: 1993
IPO Status: Public
Last Funding Type: Grant
Industries: Artificial Intelligence (AI) ⋅ GPU ⋅ Hardware ⋅ Software ⋅ Virtual Reality