Engineering Manager, Inference Scalability and Capability
The Rundown AI, Inc. - San Francisco, CA
Job Description
About the role:
We are seeking an experienced Engineering Manager to join our Inference Scalability and Capability team. This team builds and maintains the critical systems that serve our LLMs to a diverse set of consumers. As the cornerstone of our service delivery, the team focuses on scaling inference systems, ensuring reliability, optimizing compute resource efficiency, and developing new inference capabilities. The team tackles complex distributed systems challenges across our entire inference stack, from optimal request routing to efficient prompt caching.

Responsibilities:
- Build and lead a high-performing team of engineers through technical mentorship, strategic hiring, and creating an environment that fosters innovation
- Drive operational excellence of inference systems (deployments, auto-scaling, request routing, monitoring) across cloud providers
- Facilitate development of advanced inference features (e.g., prompt caching, constrained sampling, fine-tuning)
- Partner deeply with research teams to productionize new models, with infrastructure teams to optimize hardware utilization, and with product teams to deliver customer-facing features
- Create clear technical roadmaps and execution strategies in a fast-moving environment while managing competing priorities

You may be a good fit if you:
- Have 5+ years of experience leading large-scale distributed systems teams
- Excel at building high-trust environments and helping teams navigate technical uncertainty while maintaining velocity
- Have a demonstrated ability to recruit, scale, and retain engineering talent
- Possess outstanding communication and leadership skills
- Show a deep commitment to advancing AI capabilities responsibly
- Have a strong technical background that enables you to make architectural decisions and guide technical direction

Strong candidates may also have:
- Experience implementing and deploying machine learning systems at scale
- Experience with LLM inference optimization, including batching and caching strategies
- Experience with cloud-native architectures, containerization, and deployment across multiple cloud providers
- Familiarity with high-performance computing environments and hardware acceleration (GPU, TPU, AWS Trainium)

Deadline to apply: None. Applications will be reviewed on a rolling basis.
Created: 2025-03-12