STEM OPT AI ML Engineering Jobs
AI ML Engineering roles in computer science, data science, and related STEM fields qualify for the 24-month STEM OPT extension, giving you up to 36 months of total work authorization. Your employer must be enrolled in E-Verify, and you must file a Form I-983 training plan with your DSO to keep your authorization active.
Job Description
WHO WE ARE
Goldman Sachs is a leading global investment banking, securities and investment management firm that provides a wide range of services worldwide to a substantial and diversified client base that includes corporations, financial institutions, governments and high net-worth individuals. Founded in 1869, it is one of the oldest and largest investment banking firms. The firm is headquartered in New York and maintains offices in London, Bangalore, Frankfurt, Tokyo, Hong Kong and other major financial centres around the world. We are committed to growing our distinctive Culture and holding to our core values, which always place our clients' interests first. These values are reflected in our Business Principles, which emphasise integrity, commitment to excellence, innovation and teamwork.
Business Unit Overview
Enterprise Technology Operations (ETO) is a Business Unit within Core Engineering focused on running scalable production management services with a mandate of operational excellence and operational risk reduction achieved through large scale automation, best-in-class engineering, and application of data science and machine learning. The Production Runtime Experience (PRX) team in ETO applies software engineering and machine learning to production management services, processes, and activities to streamline monitoring, alerting, automation, and workflows.
TEAM OVERVIEW
The Machine Learning and Artificial Intelligence team in PRX applies advanced ML and GenAI to reduce the risk and cost of operating the firm's large-scale compute infrastructure and extensive application estate. Building on strengths in statistical modelling, anomaly detection, predictive modelling, and time-series forecasting, we leverage foundation LLMs to orchestrate multi-agent systems for automated production management services. By unifying classical ML with agentic AI, we deliver reliable, explainable, and cost-efficient operations at scale.
ROLE AND RESPONSIBILITIES
In this role, you will be responsible for launching and implementing GenAI agentic solutions aimed at reducing the risk and cost of managing large-scale production environments with varying complexities. You will address various production runtime challenges by developing agentic AI solutions that can diagnose, reason, and take actions in production environments to improve productivity and address issues related to production support.
What You’ll Do:
- Build agentic AI systems: Design and implement tool-calling agents that combine retrieval, structured reasoning, and secure action execution (function calling, change orchestration, policy enforcement) following the Model Context Protocol (MCP). Engineer robust guardrails for safety, compliance, and least-privilege access.
- Productionize LLMs: Build an evaluation framework for open-source and foundation LLMs; implement retrieval pipelines, prompt synthesis, response validation, and self-correction loops tailored to production operations.
- Integrate with runtime ecosystems: Connect agents to observability, incident management, and deployment systems to enable automated diagnostics, runbook execution, remediation, and post-incident summarization with full traceability.
- Collaborate directly with users: Partner with production engineers and application teams to translate production pain points into agentic AI roadmaps; define objective functions linked to reliability, risk reduction, and cost; and deliver auditable, business-aligned outcomes.
- Safety, reliability, and governance: Build validator models, adversarial prompts, and policy checks into the stack; enforce deterministic fallbacks, circuit breakers, and rollback strategies; instrument continuous evaluations for usefulness, correctness, and risk.
- Scale and performance: Optimize cost and latency via prompt engineering, context management, caching, model routing, and distillation; leverage batching, streaming, and parallel tool-calls to meet stringent SLOs under real-world load.
- Build a RAG pipeline: Curate domain knowledge; build a data-quality validation framework; establish feedback loops and a milestone framework to maintain knowledge freshness.
- Raise the bar: Drive design reviews, experiment rigor, and high-quality engineering practices; mentor peers on agent architectures, evaluation methodologies, and safe deployment patterns.
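The tool-calling agent pattern described above can be sketched minimally as a loop in which a model proposes a tool call, a guardrail checks it against a least-privilege allowlist, and the result feeds back as the next observation. This is an illustrative sketch only: the model is replaced by a stub, the tools and host names are hypothetical, and a real system would call a model API and follow MCP for tool discovery and invocation.

```python
# Hypothetical sketch of a guarded tool-calling agent loop.
# The "LLM" is a stub; tools and hosts are illustrative only.

ALLOWED_TOOLS = {"check_disk", "restart_service"}  # least-privilege allowlist

def check_disk(host: str) -> str:
    return f"{host}: disk usage 91%"

def restart_service(host: str) -> str:
    return f"{host}: service restarted"

TOOLS = {"check_disk": check_disk, "restart_service": restart_service}

def stub_llm(observation):
    # Stand-in for a model: plan one diagnostic call, then one action.
    if observation is None:
        return {"tool": "check_disk", "args": {"host": "app01"}}
    if "91%" in observation:
        return {"tool": "restart_service", "args": {"host": "app01"}}
    return {"tool": None}  # nothing left to do

def run_agent(max_steps: int = 5) -> list:
    trace, observation = [], None
    for _ in range(max_steps):
        decision = stub_llm(observation)
        tool = decision.get("tool")
        if tool is None:
            break
        if tool not in ALLOWED_TOOLS:   # guardrail: refuse unknown tools
            trace.append(f"BLOCKED: {tool}")
            break
        observation = TOOLS[tool](**decision["args"])
        trace.append(observation)       # full trace for auditability
    return trace

print(run_agent())
```

The `max_steps` bound and the appended trace correspond to the circuit-breaker and traceability requirements listed above: the loop cannot run unbounded, and every action it takes is recorded.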
Qualifications
A Bachelor's degree (Master's/PhD preferred) in a computational field (Computer Science, Applied Mathematics, Engineering, or a related quantitative discipline), with 5+ years of experience as an applied data scientist / machine learning engineer.
Essential Skills
- 5+ years of software development in one or more languages (Python, C/C++, Go, Java); strong hands-on experience building and maintaining large-scale Python applications preferred.
- 3+ years designing, architecting, testing, and launching production ML systems, including model deployment/serving, evaluation and monitoring, data processing pipelines, and model fine-tuning workflows.
- Practical experience with Large Language Models (LLMs): API integration, prompt engineering, fine-tuning/adaptation, and building applications using RAG and tool-using agents (vector retrieval, function calling, secure tool execution).
- Understanding of different LLMs, both commercial and open source, and their capabilities (e.g., OpenAI, Gemini, Llama, Qwen, Claude).
- Solid grasp of applied statistics, core ML concepts, algorithms, and data structures to deliver efficient and reliable solutions.
- Strong analytical problem-solving, ownership, and urgency; ability to communicate complex ideas simply and collaborate effectively across global teams with a focus on measurable business impact.
- Preferred: Proficiency building and operating on cloud infrastructure (ideally AWS), including containerized services (ECS/EKS), serverless (Lambda), data services (S3, DynamoDB, Redshift), orchestration (Step Functions), model serving (SageMaker), and infra-as-code (Terraform/CloudFormation).
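The vector-retrieval step behind the RAG applications mentioned above can be illustrated with a deliberately minimal sketch: a bag-of-words cosine similarity stands in for a learned embedding model, and a Python list stands in for a vector store. The corpus and query are invented for illustration.

```python
# Minimal sketch of the retrieval step in a RAG pipeline.
# Bag-of-words cosine similarity stands in for learned embeddings;
# a real system would use an embedding model and a vector store.
import math
from collections import Counter

RUNBOOKS = [
    "restart the payment service after a failed deployment",
    "rotate expired TLS certificates on the gateway",
    "clear the message queue backlog during peak load",
]

def embed(text: str) -> Counter:
    # Toy "embedding": token counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list:
    # Rank documents by similarity to the query; return the top k.
    q = embed(query)
    ranked = sorted(RUNBOOKS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

print(retrieve("payment service failed"))
```

Retrieved passages would then be injected into the prompt as context, which is where the prompt-synthesis and response-validation work described in the role begins.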
YOUR CAREER
Goldman Sachs is a meritocracy where you will be given all the tools to advance your career. At Goldman Sachs, you will have access to excellent training programs designed to improve multiple facets of your skill portfolio. Our in-house training program, "Goldman Sachs University," offers a comprehensive series of courses that you will have access to as your career progresses. Goldman Sachs University has an impressive catalogue of courses which span technical, business and leadership skills.
Tips for Finding STEM OPT Authorization in AI ML Engineering
Verify your CIP code before applying
Your STEM OPT eligibility depends on your degree's Classification of Instructional Programs code matching an approved STEM list. Computer science, electrical engineering, and data science CIP codes commonly support AI ML roles, but confirm with your DSO before targeting employers.
Filter employers by E-Verify enrollment status
Only E-Verify-enrolled employers can legally hire you on STEM OPT. Before any application, confirm enrollment through the E-Verify employer search tool. Roles at non-enrolled companies, including many early-stage startups, are off-limits regardless of how strong the offer looks.
Build an I-983 training plan before the offer stage
Drafting your training plan goals for an AI ML role before you receive an offer lets you move faster once one comes. Map your learning objectives to specific ML frameworks, model development responsibilities, and performance benchmarks your employer will sign off on.
Target employers with active H-1B filing history
Companies that have consistently filed H-1B petitions for ML engineers are structurally prepared to support long-term authorization. Use Migrate Mate to filter AI ML Engineering roles by employers with verified sponsorship history, so you're not starting that conversation from scratch.
Align your role title with DOL wage classifications
Job titles in AI and ML vary widely, but DOL wage levels are tied to SOC codes like Software Developers or Computer and Information Research Scientists. Use the OFLC Wage Search to confirm which SOC code your offer maps to before negotiating, since misclassification can delay LCA certification.
Track your 24-month extension window against H-1B cap dates
If your STEM OPT expires before an H-1B petition takes effect, cap-gap protection may bridge the gap, but only if your employer files a timely cap-subject petition while your OPT is still valid; the filing window opens April 1 of the relevant fiscal year. Coordinate your extension end date with your employer's HR team early so filing deadlines don't catch you off guard.
AI ML Engineering jobs are hiring across the US. Find yours.
Frequently Asked Questions
Which STEM degrees qualify for the OPT extension for AI ML Engineering roles?
Degrees in computer science, electrical engineering, statistics, applied mathematics, and data science are among the most common qualifying fields for AI ML Engineering positions. Your degree must carry an approved STEM Classification of Instructional Programs code, and your DSO can confirm whether your specific program qualifies before you begin the extension application through USCIS.
Does my employer have to be enrolled in E-Verify to hire me on STEM OPT?
Yes. E-Verify enrollment is a hard requirement for every employer hiring STEM OPT students. There are no exceptions, even for short contracts or part-time roles. You can verify a company's enrollment status through the E-Verify employer search before accepting any offer. Working for a non-enrolled employer places your immigration status at risk.
What goes into the I-983 training plan for an AI ML Engineering position?
Your I-983 must describe the specific AI and ML skills you'll develop, the projects or responsibilities tied to those goals, how your work connects to your STEM degree, and how your employer will evaluate your progress. For AI ML Engineering roles, this typically includes model development, data pipeline work, framework proficiency, and measurable performance benchmarks signed off by a supervisor.
How does cap-gap protection work if my STEM OPT ends before my H-1B starts?
If your employer files a timely cap-subject H-1B petition on your behalf while your OPT is still valid, requesting a change of status with an October 1 start date, cap-gap automatically extends your work authorization through September 30. Your employer must file before your OPT expires for protection to apply. USCIS provides guidance on cap-gap rules for F-1 students transitioning to H-1B status.
Where can I find AI ML Engineering jobs at employers already set up for STEM OPT students?
Migrate Mate lists AI ML Engineering roles filtered by employers with active E-Verify enrollment and a track record of sponsoring STEM workers. Searching there lets you focus on companies that already understand the I-983 process and are structurally prepared to support your authorization, rather than spending time educating employers who have never hired an OPT student.
See which AI ML Engineering employers are hiring and sponsoring visas right now.
Search AI ML Engineering Jobs