AI Engineer Jobs at Lenovo with Visa Sponsorship
Lenovo hires AI Engineers to work across its intelligent devices, cloud infrastructure, and enterprise AI divisions, with roles spanning model development, MLOps, and applied research. The company has a consistent track record of sponsoring work visas for this function, making it a realistic target for international candidates in the AI space.
INTRODUCTION
We are Lenovo. We do what we say. We own what we do. We WOW our customers. Lenovo is a US$69 billion revenue global technology powerhouse, ranked #196 in the Fortune Global 500, serving millions of customers every day in 180 markets. Focused on a bold vision to deliver Smarter Technology for All, Lenovo has built on its success as the world’s largest PC company with a full-stack portfolio of AI-enabled, AI-ready, and AI-optimized devices (PCs, workstations, smartphones, tablets), infrastructure (server, storage, edge, high-performance computing, and software-defined infrastructure), software, solutions, and services. Lenovo’s continued investment in world-changing innovation is building a more equitable, trustworthy, and smarter future for everyone, everywhere. Lenovo is listed on the Hong Kong Stock Exchange under Lenovo Group Limited (HKSE: 992) (ADR: LNVGY). To find out more, visit www.lenovo.com and read the latest news via our StoryHub.
LATC Summary
Lenovo AI Technology Center (LATC) is Lenovo’s global AI Center of Excellence, fueling the enterprise’s transformation into an AI-first organization. As a world-leading computing company, Lenovo offers a full spectrum of technology products and solutions, covering wearables, Motorola smartphones, ThinkPad/Yoga laptops, PCs, workstations, servers, and end-to-end services. This unparalleled product breadth creates a unique canvas for AI innovation, enabling rapid deployment of cutting-edge foundation models and flexible hybrid-cloud, agentic computing across the entire product portfolio. LATC is building the next wave of AI core technologies and platforms that evolve with the fast-moving AI ecosystem, focusing on novel model and agent orchestration across mobile, edge, and cloud resources. Our mission is to position Lenovo and its customers at the forefront of the global AI generational shift, advancing Lenovo’s Hybrid AI vision and delivering Smarter Technology for All.
Job Summary
As a core member of the LATC Enterprise AI team, you will act as the critical link between Lenovo’s global Business Groups (BGs), external AI ecosystem partners, and the LATC global R&D team. You will be responsible for aligning team goals and technical solutions with department leaders, integrating high-quality external AI resources, and accurately communicating BGs’ business demands to the global R&D team. This role is the strategic window and information hub of the team, ensuring that the R&D direction stays highly consistent with Lenovo’s overall AI strategy and business development priorities.
Responsibilities
- Architect agent systems: Design and own the architecture of production agent systems, including the Agent SDK (LangGraph/Pydantic Graphs), defining patterns and abstractions that the team builds upon.
- Lead orchestration and routing strategy: Define the technical vision for orchestration services, model routing (edge-cloud), and multi-agent coordination patterns. Make key architectural decisions on latency/cost/capability trade-offs.
- Drive cross-team integration: Partner with BU product teams (Qira, Tianxi, UDS IQ) to translate requirements into technical specifications. Coordinate with Infrastructure and Data teams on dependencies.
- Establish reliability and safety standards: Define and enforce guardrail policies, fallback chains, and safety constraints across agent systems. Own incident response and post-mortem processes.
- Build observability infrastructure: Design tracing, logging, and monitoring systems that enable the team to understand agent behavior at scale. Create dashboards and alerting for production systems.
- Mentor and grow the team: Lead technical decisions for the squad, mentor junior engineers, conduct code reviews, and establish engineering best practices and coding standards.
- Shape technical roadmap: Contribute to quarterly planning, identify technical risks, and drive initiatives that improve team velocity and system reliability.
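To make the fallback-chain idea from the responsibilities above concrete, here is a minimal, hedged sketch. The model callers, names, and failure conditions are hypothetical placeholders for illustration, not Lenovo APIs; a production chain would also enforce timeouts and guardrail checks before returning a response.

```python
from typing import Callable

# Hypothetical model callers; in practice these would wrap real edge/cloud inference APIs.
def edge_model(prompt: str) -> str:
    if len(prompt) > 40:  # pretend the on-device model rejects long inputs
        raise RuntimeError("context too long for edge model")
    return f"edge answer to: {prompt}"

def cloud_model(prompt: str) -> str:
    return f"cloud answer to: {prompt}"

def call_with_fallback(
    prompt: str, chain: list[tuple[str, Callable[[str], str]]]
) -> tuple[str, str]:
    """Try each (name, caller) in order; return the first successful result."""
    failures = []
    for name, caller in chain:
        try:
            return name, caller(prompt)
        except Exception as exc:
            failures.append(f"{name}: {exc}")
    raise RuntimeError("all models failed: " + "; ".join(failures))

# Prefer the cheap, low-latency edge model; fall back to the cloud model on failure.
CHAIN = [("edge", edge_model), ("cloud", cloud_model)]
```

Short prompts resolve on the edge model; prompts the edge model rejects transparently fall through to the cloud model, which is the latency/cost/capability trade-off the role description refers to.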
Required Qualifications
- 7+ years in software engineering, with at least 2 years focused on ML/AI systems or LLM-based applications.
- BS/MS in Computer Science or related field; equivalent practical experience considered.
- Track record of technical leadership: owning systems end-to-end, making architectural decisions, mentoring engineers.
- Experience with production incidents, on-call responsibilities, and post-mortem processes.
- Demonstrated ability to influence technical direction beyond immediate team.
- Expert-level Python programming (async patterns, performance optimization, library design) and experience designing APIs and SDKs.
- Deep knowledge of agentic frameworks (LangChain, LangGraph, LlamaIndex, AutoGen) including internals, not just usage.
- Proven track record shipping production agent systems serving real users at scale.
- Strong system design skills: distributed systems, state management, message queues, service mesh patterns.
- Experience with model routing strategies, embedding-based similarity matching, and edge-cloud orchestration.
- Ability to break down ambiguous problems, make architectural decisions independently, and communicate trade-offs clearly.
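The "embedding-based similarity matching" for model routing named in the qualifications can be sketched as follows. This is a toy illustration under stated assumptions: the bag-of-words "embedding", route names, exemplar prompts, and threshold are all invented for the example; a real router would use a learned embedding model.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would call an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical routes: each target model is described by example prompts it handles well.
ROUTES = {
    "edge-small-model": ["set a timer", "toggle wifi", "battery status"],
    "cloud-large-model": ["summarize this quarterly report", "write a project plan"],
}

def route(prompt: str, threshold: float = 0.1) -> str:
    """Send the prompt to the route with the most similar exemplar.

    Defaults to the cloud model when no exemplar clears the threshold.
    """
    q = embed(prompt)
    best_model, best_score = "cloud-large-model", threshold
    for model, examples in ROUTES.items():
        score = max(cosine(q, embed(e)) for e in examples)
        if score > best_score:
            best_model, best_score = model, score
    return best_model
```

A quick device-assistant prompt like "set a timer for ten minutes" matches the edge exemplars and stays on-device, while document-level requests score closer to the cloud exemplars and are routed there.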
Preferred Qualifications
- Experience with MCP (Model Context Protocol) or similar agent communication protocols.
- Background in edge/on-device deployment (mobile, IoT, embedded systems) with latency and memory constraints.
- Contributions to open-source agent frameworks (LangChain, LlamaIndex, etc.).
- Experience building and operating ML platforms or MLOps infrastructure.
- Background in Go, Rust, or other systems languages for performance-critical components.
- Published blog posts, talks, or papers on agent systems or LLM engineering.
We are an Equal Opportunity Employer and do not discriminate against any employee or applicant for employment because of race, color, sex, age, religion, sexual orientation, gender identity, national origin, status as a veteran, disability, or any other federal, state, or local protected class.
Tips for Finding AI Engineer Jobs at Lenovo
Align your credentials to Lenovo's AI stack
Lenovo's AI Engineer roles frequently emphasize edge AI, large language models, and enterprise inference pipelines. Frame your resume around these domains specifically, not general ML work, so your profile matches what Lenovo's hiring teams are actually screening for.
Target roles tied to Lenovo's hardware divisions
Lenovo embeds AI Engineers within product-specific teams like ThinkPad, ThinkSystem, and its Hybrid AI solutions group. Applying to roles anchored in these divisions signals you understand AI at the hardware-software intersection, which differentiates you from software-only candidates.
Search Lenovo's AI Engineer openings on Migrate Mate
Migrate Mate filters job listings by visa sponsorship type, so you can browse Lenovo's active AI Engineer postings and confirm which roles explicitly support H-1B, OPT, or TN candidates before you invest time in an application.
Verify your OPT timeline before your interview
If you're on F-1 OPT, confirm your authorization end date before your first Lenovo interview. STEM OPT extensions give you up to 24 additional months, but your employer must be E-Verify enrolled. Lenovo meets this requirement, so the extension pathway is available.
Prepare for H-1B cap timing in your offer negotiation
H-1B petitions are subject to an annual cap, with a registration lottery window that opens in March each year. If your offer lands outside that window, discuss a start date with Lenovo's HR team that aligns with the next available filing cycle rather than assuming an immediate transition.
Document your specialty occupation evidence thoroughly
USCIS scrutinizes AI Engineer petitions for specialty occupation status. Prepare a clear record showing that your role requires a theoretical and practical application of highly specialized knowledge, including your degree field, job duties, and any patents, publications, or proprietary systems you've contributed to.
Frequently Asked Questions
Does Lenovo sponsor H-1B visas for AI Engineers?
Yes, Lenovo sponsors H-1B visas for AI Engineer roles. The company has an established sponsorship process for this function, covering both new cap-subject petitions and transfers for candidates already holding H-1B status. Because H-1B petitions for AI Engineers require demonstrating specialty occupation, your application materials should clearly connect your degree field to your specific job duties at Lenovo.
How do I apply for AI Engineer jobs at Lenovo?
Apply directly through Lenovo's careers portal, where AI Engineer postings are listed by division and location. Before applying, confirm the role explicitly supports visa sponsorship, as not every open position includes it. Migrate Mate lets you filter Lenovo's AI Engineer listings by visa type so you can identify sponsorship-eligible roles without reviewing each posting individually.
Which visa types does Lenovo commonly use for AI Engineers?
Lenovo hires AI Engineers across H-1B, TN, and J-1 visa categories and supports F-1 OPT and CPT work authorization. H-1B is the primary long-term pathway. F-1 OPT and CPT are common entry points for recent graduates, with STEM OPT extensions available given Lenovo's E-Verify enrollment. TN status is an option for Canadian and Mexican nationals whose roles fall under USMCA-eligible categories.
What qualifications does Lenovo expect for AI Engineer roles?
Lenovo's AI Engineer positions typically require a bachelor's degree or higher in computer science, electrical engineering, or a closely related field. Practical experience with machine learning frameworks, model optimization, and deployment pipelines is expected. Roles tied to Lenovo's hardware divisions place additional weight on edge AI and inference optimization skills, so experience with on-device or embedded ML systems strengthens your profile significantly.
How do I plan the visa process timeline when targeting Lenovo?
The timeline depends on your current visa status. OPT holders can begin work while Lenovo files an H-1B petition, but the annual H-1B registration window runs in March, so an offer received mid-year may require a deferred start. H-1B transfers for candidates already in status can be processed under portability rules, letting you start at Lenovo while USCIS adjudicates the petition. Build at least three to six months of buffer into your job search plan.