Software Engineer Jobs at Reflection AI with Visa Sponsorship
Reflection AI builds frontier AI systems, and its Software Engineer roles sit at the intersection of research and production infrastructure. The company sponsors select work visa categories, making it a viable target for international engineers who meet the technical bar for cutting-edge AI development work.
Overview
See all 25+ Software Engineer Jobs at Reflection AI
Sign up for free to unlock all listings, filter by visa type, and get alerts for new Software Engineer Jobs at Reflection AI.
Get Access To All Jobs
Our Mission
Reflection’s mission is to build open superintelligence and make it accessible to all. We’re developing open weight models for individuals, agents, enterprises, and even nation states. Our team of AI researchers and company builders comes from DeepMind, OpenAI, Google Brain, Meta, Character.AI, Anthropic, and beyond.
The Role's Mission
Bridge the gap between research and production by turning cutting-edge algorithms into scalable training systems. You will design and optimize the core infrastructure behind frontier AI models — from reinforcement learning training loops and distributed GPU training to massive-scale data pipelines. Our systems train models across thousands of GPUs and process petabyte-scale datasets. We care deeply about numerical stability, throughput, and reproducibility.
What This Team Does
This team owns and evolves the core infrastructure behind our training systems.
We Focus On
- Reinforcement learning training infrastructure
- Distributed training and inference systems
- Experiment infrastructure and reproducibility
- Large-scale data pipelines
The goal is to build the engineering foundation that allows researchers to iterate quickly while training models at massive scale.
About The Role
You will architect and optimize the core training infrastructure that powers our models. This includes RL training loops, distributed GPU systems, and large-scale data pipelines. You will work closely with researchers to transform new ideas into reliable, scalable training systems.
Responsibilities Include
- Designing and optimizing large-scale training loops and data pipelines.
- Implementing state-of-the-art techniques and ensuring they are numerically stable and computationally efficient.
- Building internal tooling for launching, monitoring, and reproducing complex experiments.
- Diagnosing deep bottlenecks across the training stack (GPU memory issues, communication overhead, dataloader stalls).
- Translating research prototypes into reusable, production-grade infrastructure.
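One of the responsibilities above is diagnosing dataloader stalls. As an illustrative sketch only (not Reflection AI's actual tooling), a first diagnostic step is often to measure how long the training loop waits for each batch relative to the compute time:

```python
import time

def measure_batch_gaps(batch_iter, work_fn):
    """Record the wait time before each batch arrives.

    Gaps that are large relative to the time spent in work_fn
    suggest the dataloader, not the GPU, is the bottleneck.
    """
    gaps = []
    t0 = time.perf_counter()
    for batch in batch_iter:
        t1 = time.perf_counter()
        gaps.append(t1 - t0)   # time spent waiting for this batch
        work_fn(batch)          # stand-in for the training step
        t0 = time.perf_counter()
    return gaps
```

In a real stack this measurement would hang off the training loop or a profiler hook; the point is simply that stall diagnosis starts by separating data-wait time from step time.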
What You'll Work With
Distributed Training
- GPU parallelism (data, tensor, pipeline, expert)
- Large-scale distributed training infrastructure
- Communication optimization (NCCL, RDMA, GPU interconnects)
- FSDP / ZeRO and model sharding
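The FSDP / ZeRO item above can be illustrated with a toy, single-process sketch of optimizer-state sharding: each simulated rank owns and updates only its partition of the parameters, and the full parameter set is then reassembled (in a real system, via all-gather). This is a teaching sketch under simplified assumptions, not any production implementation:

```python
def shard_params(params, world_size):
    """Partition parameter indices round-robin across ranks (the ZeRO-1 idea)."""
    shards = [[] for _ in range(world_size)]
    for i in range(len(params)):
        shards[i % world_size].append(i)
    return shards

def sgd_step_sharded(params, grads, shards, lr=0.1):
    """Each 'rank' applies SGD only to its shard; the merged result
    stands in for the all-gather that real frameworks perform."""
    updated = list(params)
    for idxs in shards:                  # one iteration per simulated rank
        for i in idxs:
            updated[i] = params[i] - lr * grads[i]
    return updated
```

Real FSDP/ZeRO implementations additionally shard gradients and parameters themselves and overlap communication with compute, but the ownership idea is the same.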
Orchestration & Runtime Systems
- Ray, Kubernetes, Slurm
- Distributed runtimes and async systems
- Containerization and sandboxing
Frameworks
- PyTorch
- JAX
- Megatron-style training stacks
- Triton / custom kernels
Data Infrastructure
- Large-scale dataset curation pipelines
- Deduplication and filtering systems
- Tokenization and preprocessing
- Distributed data processing frameworks
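To make the deduplication item above concrete, here is a minimal sketch of the kind of exact-match first pass a curation pipeline might run (illustrative only; production systems typically add near-duplicate detection such as MinHash on top):

```python
import hashlib

def dedup_exact(docs):
    """Drop exact duplicate documents by content hash, keeping the
    first occurrence of each unique document."""
    seen, kept = set(), []
    for doc in docs:
        h = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if h not in seen:
            seen.add(h)
            kept.append(doc)
    return kept
```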
About You
- You are a strong software engineer who speaks the language of machine learning.
- You may not have a PhD, but you know how to implement a research paper.
- You have deep experience in at least one of the following: Distributed Training & Inference or Data Infrastructure.
- You enjoy working at the boundary between:
+ Machine learning algorithms
+ Distributed systems
+ High-performance computing
- You care deeply about performance, numerical stability, and reproducibility.
- You thrive in high-agency environments and enjoy solving hard technical problems.
What We Offer
We believe that to build superintelligence that is truly open, you need to start at the foundation. Joining Reflection means building from the ground up as part of a small, talent-dense team. You will help define our future as a company and the frontier of open foundation models. We want you to do the most impactful work of your career with the confidence that you and the people you care about most are supported.
- Top-tier compensation: Salary and equity structured to recognize and retain the best talent globally.
- Health & wellness: Comprehensive medical, dental, vision, life, and disability insurance.
- Life & family: Fully paid parental leave for all new parents, including adoptive and surrogate journeys. Financial support for family planning.
- Benefits & balance: Paid time off when you need it, relocation support, and more perks that optimize your time.
- Opportunities to connect with teammates: Lunch and dinner are provided daily, and we have regular off-sites and team celebrations.
Tips for Finding Software Engineer Jobs at Reflection AI
Align your portfolio to AI infrastructure work
Reflection AI hires engineers who can operate at the research-to-production boundary. Showcase projects involving large-scale distributed systems, model training pipelines, or inference optimization before you apply. Generic web development experience is unlikely to clear their bar.
Prepare your LCA documentation with your offer letter
For E-3 roles, your employer files a Labor Condition Application (LCA) with the DOL certifying prevailing-wage compliance before your visa can be issued. (TN status does not require an LCA.) Get your formal job title and SOC code confirmed in writing so the filing can move immediately after you accept.
Target Reflection AI roles through Migrate Mate
Filter Software Engineer openings at Reflection AI by the visa type you hold using Migrate Mate. This saves time identifying which roles are actively open to candidates in your immigration status rather than applying blind and discovering sponsorship limitations during screening.
Address USMCA profession evidence proactively for TN
TN status requires your role to fall under a profession on the USMCA professions list (e.g., Engineer or Computer Systems Analyst). For ambiguous titles like Software Engineer, bring documentation linking your specific duties to your Computer Engineering or Computer Science degree to the port of entry.
Reflection AI is hiring Software Engineers across the US. Find yours.
Frequently Asked Questions
Does Reflection AI sponsor H-1B visas for Software Engineers?
Based on available sponsorship data, Reflection AI does not have a documented track record of H-1B sponsorship for Software Engineers. The company sponsors E-3, TN, and F-1 OPT and CPT. If your work authorization depends on H-1B, verify directly with Reflection AI's recruiting team before investing significant time in the application process.
How do I apply for Software Engineer jobs at Reflection AI?
Applications go through Reflection AI's careers page. You can also browse current Software Engineer openings filtered by visa sponsorship type on Migrate Mate, which lets you confirm which roles are open to candidates in your immigration status before applying. Tailor your application to the technical focus of the specific role, whether that's systems, research engineering, or infrastructure.
Which visa types does Reflection AI commonly use for Software Engineer roles?
Reflection AI sponsors E-3 visas for Australian citizens, TN status for Canadian and Mexican nationals, and F-1 OPT and CPT for students on F-1 status. Each category has different requirements: E-3 and TN require a qualifying degree in a relevant field, while OPT and CPT are tied to your academic program and graduation timeline.
What qualifications does Reflection AI expect for Software Engineer candidates?
Reflection AI operates at the frontier of AI development, so Software Engineer roles generally require a strong computer science foundation and experience relevant to AI systems, distributed infrastructure, or model development. A bachelor's degree in Computer Science or a closely related field is typically the minimum, and your degree field needs to align with the role for E-3 or TN sponsorship to be viable.
How do I think about the timeline from offer to start date at Reflection AI?
Timeline depends heavily on your visa category. E-3 consular processing in Australia typically takes two to four weeks once your employer's LCA is DOL-certified, which itself takes seven to ten business days. TN approval at a port of entry can happen the same day. F-1 OPT requires planning further ahead since USCIS recommends filing at least 90 days before your program end date.
See which employers are hiring and sponsoring visas for Software Engineer roles right now.
Search Software Engineer Jobs at Reflection AI