Job description
48 days ago
We are looking for an engineer with experience in low-level systems programming and optimisation to join our growing ML team.
Machine learning is a critical pillar of Jane Street's global business. Our ever-evolving trading environment serves as a unique rapid-feedback platform for ML experimentation, allowing us to incorporate new ideas with relatively little friction.
Your part here is optimising the performance of our models, both training and inference. We care about efficient large-scale training, low-latency inference in real-time systems, and high-throughput inference in research. Part of this is improving straightforward CUDA, but the interesting part needs a whole-systems approach, including storage systems, networking, and host- and GPU-level considerations.
Zooming in, we also want to ensure our platform makes sense even at the lowest level – is all that throughput actually goodput? Does loading that vector from the L2 cache really take that long?
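To make those questions concrete: they are the kind one answers with a small, targeted microbenchmark. The sketch below is illustrative only – the kernel, working-set size, and access pattern are assumptions, not anything specified in this posting. It is a single-threaded pointer chase over a buffer sized to stay resident in L2, reporting a rough cycles-per-dependent-load figure:

```cuda
// Minimal sketch of an L2 load-latency microbenchmark: one thread chases a
// pointer chain so every load depends on the previous one. Working-set size,
// stride, and iteration counts are illustrative assumptions; tune per GPU.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void chase(const unsigned* next, int steps, unsigned long long* cycles,
                      unsigned* sink) {
    unsigned idx = 0;
    unsigned long long start = clock64();
    for (int i = 0; i < steps; ++i)
        idx = next[idx];                 // dependent load: no latency hiding
    *cycles = clock64() - start;
    *sink = idx;                         // keep the chain from being optimised away
}

int main() {
    // ~2 MiB of 4-byte indices: larger than L1, small enough to sit in the L2
    // of current data-centre GPUs (an assumption; adjust for your architecture).
    const int n = 1 << 19, steps = 1 << 20;
    unsigned* h = new unsigned[n];
    for (int i = 0; i < n; ++i) h[i] = (i + 97) % n;   // single-cycle permutation

    unsigned *d_next, *d_sink;
    unsigned long long* d_cycles;
    cudaMalloc(&d_next, n * sizeof(unsigned));
    cudaMalloc(&d_sink, sizeof(unsigned));
    cudaMalloc(&d_cycles, sizeof(unsigned long long));
    cudaMemcpy(d_next, h, n * sizeof(unsigned), cudaMemcpyHostToDevice);

    chase<<<1, 1>>>(d_next, steps, d_cycles, d_sink);   // warm-up pass populates L2
    chase<<<1, 1>>>(d_next, steps, d_cycles, d_sink);   // measured pass
    cudaDeviceSynchronize();

    unsigned long long cycles = 0;
    cudaMemcpy(&cycles, d_cycles, sizeof(cycles), cudaMemcpyDeviceToHost);
    printf("~%.1f cycles per dependent load\n", (double)cycles / steps);

    cudaFree(d_next); cudaFree(d_sink); cudaFree(d_cycles);
    delete[] h;
    return 0;
}
```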
If you've never thought about a career in finance, you're in good company. Many of us were in the same position before working here. If you have a curious mind and a passion for solving interesting problems, we have a feeling you'll fit right in.
There's no fixed set of skills, but here are some of the things we're looking for:
• An understanding of modern ML techniques and toolsets
• The experience and systems knowledge required to debug a training run's performance end to end
• **Low-level GPU knowledge of PTX, SASS, warps, cooperative groups, Tensor Cores, and the memory hierarchy**
• **Debugging and optimisation experience using tools like CUDA-GDB, Nsight Systems, and Nsight Compute**
• **Library knowledge of Triton, CUTLASS, CUB, Thrust, cuDNN, and cuBLAS**
• Intuition about the latency and throughput characteristics of CUDA graph launches, Tensor Core arithmetic, warp-level synchronisation, and asynchronous memory loads (see the sketch after this list)
• Background in InfiniBand, RoCE, GPUDirect, PXN, rail optimisation, and NVLink, and how to use these networking technologies to link up GPU clusters
• An understanding of the collective algorithms supporting distributed GPU training in NCCL or MPI
• An inventive approach and the willingness to ask hard questions about whether we're taking the right approaches and using the right tools
• **Fluency in English**
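As a concrete example of the launch-overhead intuition mentioned in the list (the CUDA graph bullet above), here is a minimal sketch contrasting per-kernel stream launches with a captured-and-replayed CUDA graph; the kernel, sizes, and iteration count are illustrative assumptions rather than anything specified in this posting:

```cuda
// Minimal sketch: 1000 small kernels launched one by one from the host, then the
// same sequence captured into a CUDA graph and replayed with a single launch.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void axpy(float a, const float* x, float* y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] += a * x[i];
}

int main() {
    const int n = 1 << 14, iters = 1000, block = 256, grid = (n + block - 1) / block;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));

    cudaStream_t s;
    cudaStreamCreate(&s);
    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);
    float ms_loop = 0.f, ms_graph = 0.f;

    // Baseline: one host-side launch per kernel, paying launch latency every time.
    cudaEventRecord(t0, s);
    for (int i = 0; i < iters; ++i)
        axpy<<<grid, block, 0, s>>>(2.0f, x, y, n);
    cudaEventRecord(t1, s);
    cudaEventSynchronize(t1);
    cudaEventElapsedTime(&ms_loop, t0, t1);

    // Capture the identical sequence into a graph, then replay it with one launch.
    cudaGraph_t graph;
    cudaGraphExec_t exec;
    cudaStreamBeginCapture(s, cudaStreamCaptureModeGlobal);
    for (int i = 0; i < iters; ++i)
        axpy<<<grid, block, 0, s>>>(2.0f, x, y, n);
    cudaStreamEndCapture(s, &graph);
    cudaGraphInstantiate(&exec, graph, 0);   // CUDA 12 signature

    cudaEventRecord(t0, s);
    cudaGraphLaunch(exec, s);                // one call replays all captured kernels
    cudaEventRecord(t1, s);
    cudaEventSynchronize(t1);
    cudaEventElapsedTime(&ms_graph, t0, t1);

    printf("per-kernel launches: %.3f ms, graph replay: %.3f ms\n", ms_loop, ms_graph);

    cudaGraphExecDestroy(exec);
    cudaGraphDestroy(graph);
    cudaEventDestroy(t0); cudaEventDestroy(t1);
    cudaStreamDestroy(s);
    cudaFree(x); cudaFree(y);
    return 0;
}
```

The point of the comparison is that graph replay amortises per-launch overhead across all of the captured kernels, which is the kind of latency characteristic the bullet above is asking candidates to have a feel for.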