AI Researcher Green Card Jobs
AI Researcher roles qualify for EB-2 sponsorship when the position requires a master's or doctoral degree in machine learning, computer science, or a related field. Employers file a PERM labor certification with DOL before petitioning USCIS, making sponsorship a multi-step process that typically takes two to four years for most nationalities.
INTRODUCTION
At Databricks, we are obsessed with enabling data teams to solve the world’s toughest problems, from security threat detection to cancer drug development. We do this by building and running the world’s best data and AI platform so our customers can focus on the high-value challenges that are central to their own missions.
The Databricks AI Research organization enables companies to develop AI models and agents using their own data, with technologies ranging from post-training open source LLMs to developing advanced multi-agent architectures. Databricks AI does so by producing novel science and putting it into production. Databricks AI is committed to the belief that a company’s AI models and agents are just as valuable as any other core IP, and that high-quality AI should be available to all.
ROLE AND RESPONSIBILITIES
As a Staff Software Engineer, AI Research Infrastructure, you will develop and run the research stack that powers Databricks AI Research. You will design and build services that schedule, orchestrate, and observe large-scale training and inference experiment workloads across thousands of GPUs, improve our dev tooling, and ensure that researchers can iterate quickly without sacrificing reliability, efficiency, or security.
You’ll partner closely with research scientists, ML engineers, and platform teams to turn experimental workloads into robust, repeatable pipelines, and to push the limits of what our infrastructure can support.
The Impact you will have
As a Staff Software Engineer on the AI Research Infra Team at Databricks, you will:
- Design and implement infrastructure that supports large-scale experiments, data processing, and model training (e.g., HPC clusters, GPU fleets, or cloud-based systems)
- Enable researchers to go from idea to large-scale experiment in minutes, not days, by building powerful abstractions for job submission, scheduling, and monitoring.
- Create tooling that improves research developer productivity, such as experiment management systems, CI/testing infrastructure for research code, and workflows that reduce iteration time.
- Influence the long-term roadmap for research computation, shaping how Databricks AI Research trains, evaluates, and ships models to customers.
- Serve as a technical mentor and force multiplier for other engineers working on compute, infra, and AI systems.
BASIC QUALIFICATIONS
- BS/MS or PhD in Computer Science or related field
- 5+ years of software engineering experience, including substantial time working on large-scale distributed systems or infrastructure.
- Have deep experience with building and operating distributed systems, data pipelines, or large-scale backend services, ideally involving GPUs, clusters, or major cloud providers.
- Are proficient in one or more systems programming languages (e.g., C++, Rust, Go, Java, Scala) and can design, implement, and debug complex services.
- Have built or significantly contributed to cluster schedulers, resource managers, or large-scale job orchestration systems (e.g., Kubernetes, Slurm, Ray, custom internal systems).
- Understand modern ML training and inference workflows (e.g., distributed training, model parallelism, fine-tuning, evaluation), even if you’re not primarily a research scientist.
- Can move fast and be pragmatic in getting things done, while caring about operational excellence. Have driven complex systems from prototype to stable, well-owned services.
- Communicate clearly with both researchers and engineers, and enjoy translating between research needs and infra realities.
PAY RANGE TRANSPARENCY
Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.
Local Pay Range
$190,000 — $270,000 USD
ABOUT DATABRICKS
Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow.
BENEFITS
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.
OUR COMMITMENT TO DIVERSITY AND INCLUSION
At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.
COMPLIANCE
If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.

Tips for Finding Green Card Sponsorship for AI Researcher Roles
Document your research contributions before applying
PERM requires your employer to define the role's minimum requirements before filing. Publications, patents, conference presentations, and open-source contributions help establish that the position genuinely requires an advanced degree, which strengthens both the job description and your EB-2 eligibility.
Target employers with dedicated immigration infrastructure
AI Researcher roles at companies with in-house immigration teams move through PERM and I-140 faster than those relying on outside counsel alone. Look for employers whose job postings list sponsorship explicitly and whose offer letters reference green card support, not just H-1B.
Search green card sponsoring employers using Migrate Mate
Filter AI Researcher openings by employers who have filed PERM applications for this specific job category. Migrate Mate surfaces DOL disclosure data so you can identify which companies have sponsored roles at your level before accepting an offer.
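If you prefer to work from the raw data yourself, the filtering step can be sketched as a short script. The column names below are assumptions modeled on recent PERM disclosure file formats, and the rows are fabricated for illustration; actual headers vary by fiscal year, and the real files are published by DOL's Office of Foreign Labor Certification:

```python
import csv
import io

# Fabricated rows mimicking the DOL PERM disclosure file layout.
# Column names are illustrative; real headers vary by fiscal year.
SAMPLE_CSV = """\
EMPLOYER_NAME,JOB_TITLE,SOC_TITLE,CASE_STATUS,PW_WAGE_LEVEL
Acme AI Corp,AI Researcher,Computer and Information Research Scientists,Certified,Level IV
Beta Labs,Data Analyst,Operations Research Analysts,Denied,Level II
Acme AI Corp,Research Scientist,Computer and Information Research Scientists,Certified,Level III
"""

def certified_research_sponsors(csv_text):
    """Return employers with at least one certified PERM filing
    under a research-scientist SOC occupation."""
    sponsors = set()
    for row in csv.DictReader(io.StringIO(csv_text)):
        if (row["CASE_STATUS"] == "Certified"
                and "Research Scientists" in row["SOC_TITLE"]):
            sponsors.add(row["EMPLOYER_NAME"])
    return sorted(sponsors)

print(certified_research_sponsors(SAMPLE_CSV))  # ['Acme AI Corp']
```

The same filter, applied to a full fiscal-year disclosure file, gives you a shortlist of employers with a demonstrated track record of certified filings for this occupation.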
Verify the prevailing wage tier before negotiating salary
DOL assigns a wage level in the prevailing wage determination issued for your PERM application; you can preview likely figures with the OFLC Wage Search. AI Researcher roles often land at Level III or IV. If your offered salary falls below the certified prevailing wage, USCIS will deny the I-140, so confirm the tier early in the offer stage.
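The check itself is simple arithmetic: the offered wage must meet or exceed the certified prevailing wage for the assigned level. The figures below are hypothetical; real numbers come from the OFLC Wage Search for your occupation and metro area:

```python
# Hypothetical prevailing wages by OFLC wage level for one
# occupation and area; look up real figures in the OFLC Wage Search.
PREVAILING_WAGE = {
    "Level I": 120_000,
    "Level II": 145_000,
    "Level III": 185_000,
    "Level IV": 220_000,
}

def offer_clears_wage(offered_salary, wage_level):
    """PERM requires the offered wage to meet or exceed the
    certified prevailing wage for the assigned level."""
    return offered_salary >= PREVAILING_WAGE[wage_level]

print(offer_clears_wage(190_000, "Level III"))  # True
print(offer_clears_wage(190_000, "Level IV"))   # False
```

Note how sensitive the outcome is to the assigned level: the same offer that clears Level III comfortably fails Level IV, which is why confirming the tier before negotiating matters.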
Clarify the EB category and priority date strategy with your employer
EB-2 requires a master's degree or its equivalent and is current for most nationalities. EB-3 covers bachelor's-level roles, and some employers file both categories from the same PERM to keep flexibility as visa bulletin cutoff dates shift. Ask your employer's counsel which category they plan to use and why.
Understand how job duties affect your O*NET classification
PERM filings for AI Researcher positions must match a recognized O*NET occupation code. If your actual duties span multiple categories, like both research and engineering, misclassification can trigger an audit. Review the O*NET description for your role and flag discrepancies to your employer before the labor certification is filed.
AI Researcher Green Card Sponsorship: Frequently Asked Questions
Does an AI Researcher role qualify for EB-2 or EB-3 sponsorship?
Most AI Researcher positions qualify for EB-2 because they require a master's or doctoral degree in computer science, machine learning, or a related field. If the employer defines the role at the bachelor's level, EB-3 applies instead. Some employers file under both categories from the same PERM to keep flexibility as cutoff dates move, which matters most for nationals of India and China, where backlogs are longest.
How does green card sponsorship differ from H-1B for AI Researcher roles?
The H-1B is a temporary status with a three-year initial period and an annual lottery cap, while PERM-based green card sponsorship leads to permanent residency, with no lottery and, for most nationalities, no visa-number backlog. The tradeoff is time: green card processing typically takes two to four years from PERM filing to I-485 approval, compared to H-1B approval in a few months. The process also requires your employer to run a formal recruitment campaign before filing.
What does the PERM labor certification require for an AI Researcher position?
Your employer must conduct a DOL-mandated recruitment process, including job postings, print advertisements, and internal notices, to demonstrate no qualified U.S. workers are available. The job requirements written into the PERM application must reflect what the role genuinely needs, not what would make you uniquely qualified. AI Researcher roles with overly narrow requirements, like a specific framework or proprietary tool, are common audit triggers.
How can I find employers who sponsor green cards for AI Researcher roles?
Use Migrate Mate to filter AI Researcher openings by employers with a PERM filing history for this specific role type. DOL publishes PERM disclosure data, and Migrate Mate surfaces it so you can see which companies have successfully sponsored candidates at this job level before you apply, saving time on employers who only offer H-1B support.
Can I change employers after my PERM is filed but before my green card is approved?
Yes, under AC21 portability rules you can change employers once your I-140 is approved and your I-485 has been pending for at least 180 days, as long as the new role is in the same or a similar occupational classification. For AI Researcher roles this is usually straightforward since the relevant SOC occupational codes are broadly defined, but your new employer should review your case with immigration counsel before you accept an offer.
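The 180-day clock is easy to check yourself. A minimal sketch of the AC21 timing condition (the dates below are hypothetical):

```python
from datetime import date

def ac21_portable(i140_approved, i485_filed, today):
    """AC21 portability: approved I-140 plus an I-485
    that has been pending for at least 180 days."""
    return i140_approved and (today - i485_filed).days >= 180

filed = date(2025, 1, 10)
print(ac21_portable(True, filed, date(2025, 8, 1)))  # True  (203 days pending)
print(ac21_portable(True, filed, date(2025, 5, 1)))  # False (111 days pending)
```

The occupational-similarity requirement still applies on top of this timing rule, so the date math is necessary but not sufficient on its own.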
See which AI Researcher employers are hiring and sponsoring visas right now.
Search AI Researcher Jobs