Lambda's GPU cloud is used by deep learning engineers at Stanford, Berkeley, and MIT.
Lambda's on-prem systems power research and engineering at Intel, Microsoft, Kaiser Permanente, major universities, and the Department of Defense.
If you'd like to build the world's best deep learning cloud, join us.
What You’ll Do
Remotely provision and manage large-scale HPC clusters for AI workloads (up to several thousand nodes)
Remotely install and configure operating systems, firmware, software, and networking on HPC clusters, both manually and with automation tools
Troubleshoot and resolve HPC cluster issues, working closely with on-site physical deployment teams
Provide context and details to an automation team to further automate the deployment process
Provide clear, detailed requirements back to the HPC design team on gaps and opportunities for improvement, specifically around simplification, stability, and operational efficiency
Contribute to the creation and maintenance of Standard Operating Procedures
Provide regular, clearly communicated updates to project leads throughout each deployment
Mentor and assist less-experienced team members
Stay up-to-date on the latest HPC/AI technologies and best practices
You
Have 10+ years of experience in managing HPC clusters
Have 10+ years of everyday Linux experience
Have a strong understanding of HPC architecture (compute, networking, storage)
Have an innate attention to detail
Have experience with Bright Cluster Manager or similar cluster management tools
Are an expert in configuring and troubleshooting:
SFP+ fiber, InfiniBand (IB), and 100 GbE network fabrics
Ethernet, switching, power infrastructure, GPUDirect, RDMA, NCCL, and Horovod environments
Linux-based compute nodes, firmware updates, driver installation
SLURM, Kubernetes, or other job scheduling systems
Work well under deadlines and structured project plans
Have excellent problem-solving and troubleshooting skills
Have the flexibility to travel to our North American data centers as on-site needs arise or as part of training exercises
Are able to work both independently and as part of a team
Nice to Have
Experience with machine learning and deep learning frameworks (PyTorch, TensorFlow) and benchmarking tools (DeepSpeed, MLPerf)
Experience with containerization technologies (Docker, Kubernetes)
Experience working with the technologies that underpin our cloud business (GPU acceleration, virtualization, and cloud computing)
Keen situational awareness when working with customers, employing diplomacy and tact
Bachelor's degree in EE, CS, Physics, Mathematics, or equivalent work experience
About Lambda
We offer generous cash & equity compensation
Investors include Gradient Ventures, Google’s AI-focused venture fund
We are experiencing extremely high demand for our systems, with quarter-over-quarter, year-over-year profitability
Our research papers have been accepted into top machine learning and graphics conferences, including NeurIPS, ICCV, SIGGRAPH, and TOG
We have a wildly talented team of 150, and growing fast
Health, dental, and vision coverage for you and your dependents
Commuter and work-from-home stipends
401k Plan
Flexible Paid Time Off Plan that we all actually use
Salary Range Information
Based on market data and other factors, the salary range for this position is $170,000-$230,000.
However, a salary higher or lower than this range may be appropriate for a candidate whose qualifications differ meaningfully from those listed in the job description.
A Final Note:
You do not need to match all of the listed expectations to apply for this position.
We are committed to building a team with a variety of backgrounds, experiences, and skills.
Equal Opportunity Employer
Lambda is an Equal Opportunity employer.
Applicants are considered without regard to race, color, religion, creed, national origin, age, sex, gender, marital status, sexual orientation and identity, genetic information, veteran status, citizenship, or any other factors prohibited by local, state, or federal law.