E-3 Visa AI ML Engineering Jobs
AI ML Engineering roles in the U.S. qualify as E-3 specialty occupations, making them a strong fit for Australian professionals seeking sponsorship. The E-3 has no lottery and no annual cap, so you can apply as soon as you have a qualifying offer from a U.S. employer willing to file a Labor Condition Application.
See all 2,153+ AI ML Engineering jobs
Sign up for free to unlock all listings, filter by visa type, and get alerts for new AI ML Engineering roles.
Get Access To All Jobs
Job Description
WHO WE ARE
Goldman Sachs is a leading global investment banking, securities and investment management firm that provides a wide range of services worldwide to a substantial and diversified client base that includes corporations, financial institutions, governments and high-net-worth individuals. Founded in 1869, it is one of the oldest and largest investment banking firms. The firm is headquartered in New York and maintains offices in London, Bangalore, Frankfurt, Tokyo, Hong Kong and other major financial centres around the world. We are committed to growing our distinctive culture and holding to our core values, which always place our clients' interests first. These values are reflected in our Business Principles, which emphasise integrity, commitment to excellence, innovation and teamwork.
Business Unit Overview
Enterprise Technology Operations (ETO) is a Business Unit within Core Engineering focused on running scalable production management services with a mandate of operational excellence and operational risk reduction achieved through large scale automation, best-in-class engineering, and application of data science and machine learning. The Production Runtime Experience (PRX) team in ETO applies software engineering and machine learning to production management services, processes, and activities to streamline monitoring, alerting, automation, and workflows.
TEAM OVERVIEW
The Machine Learning and Artificial Intelligence team in PRX applies advanced ML and GenAI to reduce the risk and cost of operating the firm’s large-scale compute infrastructure and extensive application estate. Building on strengths in statistical modelling, anomaly detection, predictive modelling, and time-series forecasting, we leverage foundation LLMs to orchestrate multi-agent systems for automated production management services. By unifying classical ML with agentic AI, we deliver reliable, explainable, and cost-efficient operations at scale.
ROLE AND RESPONSIBILITIES
In this role, you will be responsible for launching and implementing GenAI agentic solutions that reduce the risk and cost of managing large-scale production environments of varying complexity. You will tackle production runtime challenges by developing agentic AI solutions that can diagnose, reason, and take action in production environments to improve productivity and resolve production support issues.
What You’ll Do
- Build agentic AI systems: Design and implement tool-calling agents that combine retrieval, structured reasoning, and secure action execution (function calling, change orchestration, policy enforcement) following the Model Context Protocol (MCP). Engineer robust guardrails for safety, compliance, and least-privilege access.
- Productionize LLMs: Build an evaluation framework for open-source and foundation LLMs; implement retrieval pipelines, prompt synthesis, response validation, and self-correction loops tailored to production operations.
- Integrate with runtime ecosystems: Connect agents to observability, incident management, and deployment systems to enable automated diagnostics, runbook execution, remediation, and post-incident summarization with full traceability.
- Collaborate directly with users: Partner with production engineers and application teams to translate production pain points into agentic AI roadmaps; define objective functions linked to reliability, risk reduction, and cost; and deliver auditable, business-aligned outcomes.
- Safety, reliability, and governance: Build validator models, adversarial prompts, and policy checks into the stack; enforce deterministic fallbacks, circuit breakers, and rollback strategies; instrument continuous evaluations for usefulness, correctness, and risk.
- Scale and performance: Optimize cost and latency via prompt engineering, context management, caching, model routing, and distillation; leverage batching, streaming, and parallel tool-calls to meet stringent SLOs under real-world load.
- Build a RAG pipeline: Curate domain knowledge; build a data-quality validation framework; and establish feedback loops and a milestone framework to maintain knowledge freshness.
- Raise the bar: Drive design reviews, experiment rigor, and high-quality engineering practices; mentor peers on agent architectures, evaluation methodologies, and safe deployment patterns.
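The guardrail pattern in the bullets above (tool-calling agents with least-privilege enforcement and a deterministic fallback) can be sketched roughly as follows. This is an illustrative sketch only, not code from the role: `TOOLS`, `call_model`, and `run_agent` are hypothetical names, and `call_model` stubs out what would be a real LLM call returning a structured function-call request.

```python
from typing import Callable

# Hypothetical tool registry: each tool declares the privilege it requires.
TOOLS: dict[str, tuple[Callable[[str], str], str]] = {
    "restart_service": (lambda arg: f"restarted {arg}", "ops:write"),
    "fetch_logs": (lambda arg: f"logs for {arg}: OK", "ops:read"),
}

def call_model(observation: str) -> dict:
    """Stand-in for an LLM call that returns a tool request.

    A production agent would send the observation plus tool schemas to a
    model and parse a structured function call from the response.
    """
    if "error" in observation:
        return {"tool": "restart_service", "arg": "payments"}
    return {"tool": "fetch_logs", "arg": "payments"}

def run_agent(observation: str, granted: set[str]) -> str:
    """One reason/act step with a least-privilege guardrail.

    If the model requests a tool outside the agent's granted privileges,
    fall back deterministically (escalate) instead of executing it.
    """
    request = call_model(observation)
    tool_fn, privilege = TOOLS[request["tool"]]
    if privilege not in granted:
        return f"DENIED {request['tool']}: escalate to on-call"
    return tool_fn(request["arg"])
```

In a real deployment the denial branch would page an on-call engineer and record the blocked request for audit, and the model call would be constrained by MCP-style tool schemas rather than a keyword check.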
Qualifications
A Bachelor’s degree (Master’s/PhD preferred) in a computational field (Computer Science, Applied Mathematics, Engineering, or a related quantitative discipline), with 7+ years of experience as an applied data scientist / machine learning engineer.
Essential Skills
- 7+ years of software development in one or more languages (Python, C/C++, Go, Java); strong hands-on experience building and maintaining large-scale Python applications preferred.
- 3+ years designing, architecting, testing, and launching production ML systems, including model deployment/serving, evaluation and monitoring, data processing pipelines, and model fine-tuning workflows.
- Practical experience with Large Language Models (LLMs): API integration, prompt engineering, fine-tuning/adaptation, and building applications using RAG and tool-using agents (vector retrieval, function calling, secure tool execution).
- Understanding of different LLMs, both commercial and open source, and their capabilities (e.g., OpenAI, Gemini, Llama, Qwen, Claude).
- Solid grasp of applied statistics, core ML concepts, algorithms, and data structures to deliver efficient and reliable solutions.
- Strong analytical problem-solving, ownership, and urgency; ability to communicate complex ideas simply and collaborate effectively across global teams with a focus on measurable business impact.
- Preferred: Proficiency building and operating on cloud infrastructure (ideally AWS), including containerized services (ECS/EKS), serverless (Lambda), data services (S3, DynamoDB, Redshift), orchestration (Step Functions), model serving (SageMaker), and infra-as-code (Terraform/CloudFormation).
YOUR CAREER
Goldman Sachs is a meritocracy where you will be given all the tools to advance your career. At Goldman Sachs, you will have access to excellent training programmes designed to improve multiple facets of your skill portfolio. Our in-house training programme, “Goldman Sachs University”, offers a comprehensive series of courses that you will have access to as your career progresses. Goldman Sachs University has an impressive catalogue of courses which span technical, business and leadership skills.
Salary Range
The expected base salary for this Jersey City, New Jersey, United States-based position is $130,000–$250,000. In addition, you may be eligible for a discretionary bonus if you are an active employee as of fiscal year-end.
Benefits
Goldman Sachs is committed to providing our people with valuable and competitive benefits and wellness offerings, as it is a core part of providing a strong overall employee experience. A summary of these offerings, which are generally available to active, non-temporary, full-time and part-time US employees who work at least 20 hours per week, can be found here.

Tips for Finding E-3 Visa Sponsorship as an AI ML Engineer
Frame your credentials for U.S. specialty occupation
Your Australian three-year bachelor's degree in computer science, data science, or a related field generally satisfies the E-3 degree requirement. Get a credential evaluation before you apply so employers aren't left guessing about equivalency.
Target employers with active LCA filing history
Search DOL's FLAG portal for companies that have filed Labor Condition Applications for AI or ML job titles. Prior LCA filings signal that a hiring team already understands the E-3 process and won't stall at the sponsorship conversation.
Ensure your job description matches your degree field
For AI ML Engineering roles, the position must require a relevant technical degree, not just 'a bachelor's in any field.' If the job description is vague, ask HR to revise it before the LCA is filed to avoid a DOL denial.
Use Migrate Mate's E-3 filing service to manage your LCA
Once you have an offer, use Migrate Mate's E-3 filing service to handle your LCA and visa paperwork end-to-end. This is especially useful when your employer's legal team has no prior E-3 experience and needs a structured process.
AI ML Engineering E-3 Visa: Frequently Asked Questions
How do I find AI ML Engineering jobs with E-3 visa sponsorship?
Migrate Mate is built specifically for this search. It filters roles by E-3 sponsorship eligibility and surfaces employers with a history of filing Labor Condition Applications for technical positions. Searching general job boards for 'visa sponsorship' rarely filters for E-3 specifically, so you end up reviewing roles that only accommodate H-1B candidates.
How much does it cost to get an E-3 visa?
Migrate Mate's E-3 filing service covers the entire process for $499, including the Labor Condition Application, visa document preparation, and consulate appointment guidance. Traditional immigration lawyers charge $2,000–$5,000+ for the same work. The E-3 has less paperwork than most work visas, so paying thousands for legal help is usually unnecessary.
Does AI ML Engineering qualify as an E-3 specialty occupation?
Yes, provided the role requires a bachelor's degree or higher in a directly related field such as computer science, machine learning, data science, or software engineering. Roles framed as 'nice to have a degree' rather than 'degree required' can fail the specialty occupation test, so the job description wording matters before your employer files the LCA with DOL.
How does the E-3 compare to the H-1B for AI ML Engineering roles?
For Australian nationals, the E-3 is significantly more practical for AI ML Engineering. There's no lottery, no annual cap, and no registration window to miss. The H-1B requires entering a randomized lottery with roughly a 25% selection rate, meaning you could go unselected for multiple years. E-3 applications are processed at the consulate and can be completed in weeks once the LCA is certified.
Can I switch AI ML Engineering employers while on an E-3 visa?
Yes, but you need a new LCA and a new visa stamp or change of status approval before you start with the new employer. You can't simply port your E-3 the way some H-1B holders port under portability rules. If you're already in the U.S., your new employer files a fresh LCA with DOL, and you either return to Australia for a new stamp or file a change of status with USCIS.
See which AI ML Engineering employers are hiring and sponsoring visas right now.
Search AI ML Engineering Jobs