Cybersecurity Engineer Jobs at Microsoft with Visa Sponsorship
Microsoft hires Cybersecurity Engineers across cloud security, identity protection, threat intelligence, and compliance engineering. The company has an established visa sponsorship infrastructure that supports international candidates through both nonimmigrant and immigrant pathways, making it a realistic target for skilled security professionals seeking U.S. work authorization.
See All Cybersecurity Engineer at Microsoft Jobs

Overview
Showing 5 of 28+ Cybersecurity Engineer jobs at Microsoft
Overview
Copilot is becoming an agentic system: it can plan, reason, and take actions across tools, data, and services. Securing that kind of system requires more than traditional boundaries or static controls—it demands adaptive defenses, intelligent guardrails, and provable isolation that operate continuously at runtime.
Copilot Security and Privacy is responsible for building those capabilities directly into Copilot. Our work focuses on new security primitives for agentic AI, including real‑time intent validation, workload isolation with verifiable guarantees, AI‑driven guardrails, and offensive security techniques that model how intelligent systems fail under pressure.
We are hiring a Principal Security Engineer to help design and build these systems end‑to‑end. This is a hands‑on engineering role for someone who wants to ship production code into a globally deployed AI platform—developing adaptive defenses, agentic offensive security, and security architectures that scale with autonomy. The problems are technically deep, often unsolved, and central to making advanced AI systems deployable in the real world.
Most security roles focus on protecting existing systems. This role helps define the security architecture for systems that are still being invented.
You will work at the intersection of AI, security, privacy, and distributed systems—building defenses that enable safe autonomy rather than limiting innovation. If you want to shape how agentic AI is secured at global scale, this is a uniquely high‑impact opportunity.
Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week. This expectation is subject to local law and may vary by jurisdiction.
Responsibilities
- Design and build secure, high‑performance platform components that support Copilot’s agentic workflows across cloud and device environments.
- Develop novel security mechanisms for agentic AI systems, including real‑time intent validation, information‑flow controls, isolation boundaries, and abuse‑resistant orchestration.
- Eliminate entire classes of vulnerabilities by creating secure‑by‑default APIs, sandboxing layers, and hardened system interfaces.
- Build and operate offensive security tooling and agents that continuously probe Copilot’s autonomy, reasoning paths, and trust boundaries.
- Partner closely with AI researchers, platform engineers, and product teams to translate research and prototypes into production‑ready security features.
- Write high‑quality, well‑tested code across backend services, platform layers, and AI‑adjacent systems.
- Use telemetry, signals, and data‑driven analysis to detect abuse, anomalous agent behavior, and emerging threat patterns.
- Navigate ambiguity, make sound engineering tradeoffs, and ship iteratively in a fast‑paced product environment.
- Contribute to a culture of high ownership, technical excellence, and inclusive collaboration.
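The real-time intent-validation responsibility above can be sketched as a deny-by-default policy check: before an agent executes a proposed tool action, the action is compared against the scopes implied by the user's validated intent. Everything here (`ToolAction`, `INTENT_SCOPES`, `validate_action`) is a hypothetical illustration of the concept, not a Copilot API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolAction:
    tool: str    # e.g. "email.send", "files.read"
    target: str  # resource the action touches

# Hypothetical map from user-intent categories to the tool scopes
# those intents legitimately imply.
INTENT_SCOPES = {
    "summarize_document": {"files.read"},
    "schedule_meeting": {"calendar.read", "calendar.write"},
}

def validate_action(intent: str, action: ToolAction) -> bool:
    """Deny by default: an action is allowed only if its tool scope
    is implied by the validated user intent."""
    allowed = INTENT_SCOPES.get(intent, set())
    return action.tool in allowed

# A document-summarization intent implies file reads, never sending email.
assert validate_action("summarize_document", ToolAction("files.read", "report.docx"))
assert not validate_action("summarize_document", ToolAction("email.send", "boss@example.com"))
```

The deny-by-default shape is the point: an unrecognized intent yields an empty scope set, so novel or manipulated agent plans fail closed rather than open.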
Qualifications

Required Qualifications:
- Doctorate in Statistics, Mathematics, Computer Science, or related field AND 3+ years of experience OR Master’s Degree AND 4+ years of experience OR Bachelor’s Degree AND 6+ years of experience in security engineering, secure software development, large-scale computing, threat modeling, or applied security analytics, including experience designing or building systems to detect, prevent, or mitigate security threats, or equivalent experience.
Preferred Qualifications:
- Doctorate in Statistics, Mathematics, Computer Science, or related field AND 6+ years of experience OR Master’s Degree AND 5+ years of experience OR Bachelor’s Degree AND 6+ years of experience in security engineering, secure software development, large-scale computing, threat modeling, or applied security analytics, including experience designing or building systems to detect, prevent, or mitigate security threats, or equivalent experience.
- 8+ years of professional engineering experience.
- Solid coding skills in one or more of the following: C, C++, C#, Java, JavaScript, or Python.
- Demonstrated experience designing, building, and operating production systems at scale.
- Experience building or securing large‑scale distributed systems on cloud platforms such as Azure, AWS, or GCP.
- Familiarity with emerging attack classes against AI systems, including prompt‑based exploits, agent misbehavior, information‑flow vulnerabilities, or model‑assisted exfiltration.
- Proven ability to design system interfaces and abstractions that reduce misuse and prevent vulnerabilities by construction.
- Experience with AI platforms, LLM frameworks, or ML pipelines—or the ability to ramp up quickly.
- Background in sandboxing, isolation, container security, or trusted execution environments.
- Solid analytical skills, including working with telemetry, anomaly detection, or ML‑based security signals.
- Ability to decompose ambiguous, unsolved problems into practical engineering plans.
- Clear, effective communication skills across technical and non‑technical audiences.
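As a small illustration of the telemetry and anomaly-detection skills listed above, here is a minimal z-score detector over a behavioral baseline. Real agent-behavior monitoring would use far richer signals; this sketch only shows the shape of the analysis, and all names and numbers are illustrative:

```python
from statistics import mean, stdev

def anomalous(baseline: list[float], observed: float, threshold: float = 3.0) -> bool:
    """Flag `observed` if it sits more than `threshold` standard deviations
    above the baseline mean (one-sided: only spikes are of interest)."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return observed != mu
    return (observed - mu) / sigma > threshold

# Tool-call counts per hour for one agent: a steady baseline, then a spike.
history = [12.0, 15.0, 11.0, 14.0, 13.0, 12.0, 16.0, 14.0]
assert not anomalous(history, 17.0)  # within normal variation
assert anomalous(history, 60.0)      # sudden burst of agent activity
```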
#MicrosoftAI #MAIDPS
Security Operations Engineering IC5 - The typical base pay range for this role across the U.S. is USD $139,900 - $274,800 per year. A different range applies in specific work locations: within the San Francisco Bay Area and the New York City metropolitan area, the base pay range for this role is USD $188,000 - $304,200 per year.
Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here:
This position will be open for a minimum of 5 days, with applications accepted on an ongoing basis until the position is filled.
Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance with religious accommodations and/or a reasonable accommodation due to a disability during the application process, support is available upon request.

Tips for Finding Cybersecurity Engineer Jobs at Microsoft
Align Your Certifications to Microsoft's Security Stack
Microsoft's cybersecurity hiring leans heavily on candidates with hands-on experience in Azure Security Center, Microsoft Defender, and Sentinel. Certifications like AZ-500 or SC-200 signal direct platform fluency and give your application a concrete edge over generalist security profiles.
Target Roles That Specify Visa Sponsorship Availability
Not every Microsoft cybersecurity posting is open to sponsored candidates. Filter listings for sponsorship eligibility before applying. Roles on cloud security, identity, and threat detection teams have historically supported international hires more consistently than compliance-only or government-adjacent positions.
Find Sponsorship-Eligible Listings Through Migrate Mate
Browse Cybersecurity Engineer openings at Microsoft filtered by visa type using Migrate Mate. It surfaces roles where Microsoft has actively sponsored candidates, saving you time spent applying to postings that won't move forward for sponsored applicants.
Understand Which Visa Pathways Microsoft Files For
Microsoft sponsors H-1B, E-3, and H-1B1 visas for engineering roles. If you're an Australian citizen, the E-3 pathway skips the H-1B lottery entirely; citizens of Chile and Singapore can use the H-1B1, which likewise bypasses the lottery. Clarify your nationality situation early so your recruiter routes your file to the right process from the start.
Prepare for Technical Depth in the Interview Loop
Microsoft's cybersecurity interview loop typically includes system design questions around zero-trust architecture, threat modeling scenarios, and coding assessments. Arriving with concrete examples from prior security engineering work, not just policy or audit experience, significantly improves your offer rate.
Confirm the Sponsorship Timeline Before Accepting an Offer
Once you receive an offer, ask Microsoft's HR team directly whether they plan to file with premium processing. USCIS premium processing reduces H-1B adjudication to 15 business days. Knowing this upfront helps you plan your start date and manage any overlap with an existing visa status.
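The 15-business-day premium processing clock can be back-planned with a short script. This sketch counts weekdays only and ignores federal holidays, so treat the result as an earliest-possible estimate, not a guaranteed decision date:

```python
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Advance `start` by `days` business days (Mon-Fri), skipping weekends.
    Federal holidays are ignored for simplicity."""
    current = start
    remaining = days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return current

# A hypothetical petition filed under premium processing on Monday,
# June 1, 2026 would reach the end of the 15-business-day clock on
# Monday, June 22, 2026.
print(add_business_days(date(2026, 6, 1), 15))  # 2026-06-22
```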
Microsoft is hiring Cybersecurity Engineers across the US. Find yours.

Frequently Asked Questions
Does Microsoft sponsor H-1B visas for Cybersecurity Engineers?
Yes, Microsoft sponsors H-1B visas for Cybersecurity Engineer roles. The company has an established immigration team that handles the full filing process, including the Labor Condition Application with DOL and the I-129 petition with USCIS. Sponsorship availability can vary by specific role and team, so confirming eligibility during the recruiter screen is the right move.
Which visa types does Microsoft commonly use for Cybersecurity Engineer roles?
Microsoft files H-1B, E-3, and H-1B1 petitions for Cybersecurity Engineer positions, along with Green Card sponsorship through the EB-2 and EB-3 categories for longer-term employees. Australian citizens can pursue the E-3 pathway, which has no lottery. Citizens of Chile and Singapore are eligible for the H-1B1, which also bypasses the H-1B annual cap.
What qualifications and experience does Microsoft expect for sponsored Cybersecurity Engineer roles?
Microsoft generally expects a bachelor's degree or higher in computer science, information security, or a related field, which satisfies the specialty occupation requirement for H-1B and similar visas. Practically, competitive candidates bring hands-on experience with cloud security platforms, familiarity with Microsoft's own security tooling, and either industry certifications or demonstrated engineering depth in areas like identity, endpoint protection, or threat detection.
How do I apply for Cybersecurity Engineer jobs at Microsoft?
Applications go through Microsoft's careers site, where you can filter by job function and location. For a faster path to roles that actively support visa sponsorship, browse Cybersecurity Engineer listings at Microsoft through Migrate Mate, which filters specifically for sponsored positions. Once you apply, expect a recruiter screen followed by a multi-round technical interview loop covering security engineering scenarios and system design.
How do I plan my timeline when pursuing a sponsored Cybersecurity Engineer role at Microsoft?
The timeline depends on your current status and the visa pathway involved. H-1B filings are tied to the April lottery with an October 1 start date, so back-planning from that deadline matters if you're not already in H-1B status. E-3 and H-1B1 filings aren't cap-bound and can move faster. If you're changing status from OPT or another visa, USCIS processing times and your authorized period of stay both affect when you can realistically start.