Machine Learning Performance Engineer, London

Location: London, United Kingdom
EU work permit required: Yes
Job Reference: 3602cddd60c0
Posted: 18.02.2025
Expiry Date: 04.04.2025

Job Description:

We are looking for an engineer with experience in low-level systems programming and optimisation to join our growing ML team. Machine learning is a critical pillar of Jane Street's global business. Our ever-evolving trading environment serves as a unique, rapid-feedback platform for ML experimentation, allowing us to incorporate new ideas with relatively little friction.

Your part here is optimising the performance of our models – both training and inference. We care about efficient large-scale training, low-latency inference in real-time systems and high-throughput inference in research. Part of this is improving straightforward CUDA, but the interesting part needs a whole-systems approach, including storage systems, networking and host- and GPU-level considerations. Zooming in, we also want to ensure our platform makes sense even at the lowest level – is all that throughput actually goodput? Does loading that vector from the L2 cache really take that long?

If you've never thought about a career in finance, you're in good company. Many of us were in the same position before working here. If you have a curious mind and a passion for solving interesting problems, we have a feeling you'll fit right in.

There's no fixed set of skills, but here are some of the things we're looking for:

1. An understanding of modern ML techniques and toolsets
2. The experience and systems knowledge required to debug a training run's performance end to end
3. Low-level GPU knowledge of PTX, SASS, warps, cooperative groups, Tensor Cores and the memory hierarchy
4. Debugging and optimisation experience using tools like CUDA-GDB, Nsight Systems and Nsight Compute
5. Library knowledge of Triton, CUTLASS, CUB, Thrust, cuDNN and cuBLAS
6. Intuition about the latency and throughput characteristics of CUDA graph launch, tensor core arithmetic, warp-level synchronisation and asynchronous memory loads
7. Background in InfiniBand, RoCE, GPUDirect, PXN, rail optimisation and NVLink, and how to use these networking technologies to link up GPU clusters
8. An understanding of the collective algorithms supporting distributed GPU training in NCCL or MPI
9. An inventive approach and the willingness to ask hard questions about whether we're taking the right approaches and using the right tools
10. Fluency in English