TN Visa Data Platform Engineer Jobs
Data Platform Engineer roles qualify for TN visa sponsorship under the USMCA's Computer Systems Analyst category, which covers Canadian and Mexican citizens who design, build, and maintain large-scale data infrastructure. You'll need a relevant bachelor's degree and a U.S. employer willing to document the specialty occupation connection in a support letter.
See All Data Platform Engineer Jobs
Overview
Showing 5 of 188+ Data Platform Engineer jobs
See all 188+ Data Platform Engineer jobs
Sign up for free to unlock all listings, filter by visa type, and get alerts for new Data Platform Engineer roles.
Get Access To All Jobs
About the Role
Join our Data Platform team at 10x Genomics to architect and implement our strategic Unified Data Platform (UDP). This pivotal role is focused on modernizing our data infrastructure, transitioning to a scalable Event-Driven Architecture (EDA), and building the foundation for next-generation AI/ML and self-service analytics.
- Lead the architecture and delivery of the Single Source of Truth (SSOT) for the 10x Intelligent Data Ecosystem.
- Apply advanced software engineering practices to data systems for scalability and reliability.
- Provide hands-on Senior Data Engineering leadership in developing scalable and maintainable ETL/ELT solutions and data systems in a cloud-native environment.
- Drive the systematic reduction of critical technical debt by retiring fragile legacy middleware (e.g., Boomi).
- Partner with engineering and business teams to enable advanced AI capabilities and democratize data access via Natural Language Querying (NLQ).
What you will be doing
- Architect and implement the canonical data layer and Event-Driven Architecture (EDA) using technologies like Apache Iceberg and Kafka to decouple applications and ensure real-time data flow.
- Design, build, and optimize high-volume, code-first data pipelines (real-time and batch) across a large application landscape (e.g., Salesforce, Oracle, Workday).
- Establish Amazon S3 as the Single Source of Truth (SSOT) and govern data using principles like the Medallion Architecture (Silver and Gold layers) and schema evolution.
- Develop, test, and maintain robust and scalable ELT pipelines and data models in Snowflake, including leveraging advanced features like Snowpipes, Streams, and Stored Procedures.
- Develop the data presentation layer for self-service analytics, including the Natural Language Query (NLQ) interface integrated with Generative AI (e.g., Bedrock).
- Lead technical efforts to migrate key business domains off legacy middleware and onto the new platform, eliminating the "Integration Bottleneck".
- Define and enforce data governance, quality, and security standards across the Unified Data Platform.
- Collaborate with the Architecture Review Board (ARB) to promote modern approaches such as serverless computing and Domain-Driven Design.
- Take ownership of the full development lifecycle, from prototyping and design through deployment, monitoring, and operational excellence.
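To make the event-driven ingestion pattern in the duties above concrete, here is a minimal illustrative sketch (not part of the posting) of a consumer that reads change events from a Kafka topic and appends them to an Apache Iceberg table on S3. The broker address, topic, catalog, and table identifiers are hypothetical placeholders, and it assumes the confluent-kafka and pyiceberg Python packages.

```python
# Minimal sketch: consume Kafka events and append them to an Iceberg table on S3.
# Topic, catalog, and table names below are hypothetical placeholders.
import json

import pyarrow as pa
from confluent_kafka import Consumer
from pyiceberg.catalog import load_catalog

consumer = Consumer({
    "bootstrap.servers": "broker:9092",   # assumed broker address
    "group.id": "udp-ingest",             # hypothetical consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["sales.orders"])       # hypothetical topic

# Load an Iceberg catalog backed by S3 (configuration comes from .pyiceberg.yaml or env vars).
catalog = load_catalog("default")
table = catalog.load_table("bronze.sales_orders")   # hypothetical table identifier

batch = []
try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        batch.append(json.loads(msg.value()))
        if len(batch) >= 1000:
            # Append a micro-batch as a new Iceberg snapshot
            # (assumes the event fields match the table schema).
            table.append(pa.Table.from_pylist(batch))
            consumer.commit()
            batch = []
finally:
    consumer.close()
```

A production pipeline would add schema validation, dead-letter handling, and delivery guarantees, but the batch-and-append loop captures the basic decoupling the role describes.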
Minimum Requirements
- Bachelor's degree in Computer Science, Information Management, or a related field, or equivalent experience.
- 5+ years of hands-on experience in software engineering focused on data platform development, distributed systems, or enterprise integrations.
- Proven experience designing and implementing highly scalable data platforms on major cloud environments (e.g., AWS, GCP, or Azure).
- Deep proficiency in one or more general-purpose programming languages (e.g., Python, Java, or similar).
- Strong foundation in computer science fundamentals, including data structures, algorithms, and system design.
Preferred Skills and Experience
- Expertise in message queues and event streaming platforms (e.g., Kafka, RabbitMQ, Pub/Sub) and implementing Event-Driven Architecture.
- Experience with building data lakes/lakehouses using open formats like Apache Iceberg on cloud storage (e.g., Amazon S3).
- Expertise in modern ELT development, data modeling for OLAP/data warehousing using tools like dbt, and advanced Snowflake features (e.g., Snowpipes, Streams, Stored Procedures).
- Familiarity with containerization (Docker, Kubernetes) and Infrastructure-as-Code (IaC) principles.
- Prior experience in migrating an organization off a traditional iPaaS platform or eliminating legacy middleware.
- Experience with Generative AI integration for data access (e.g., NLQ, feature stores).
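As a rough illustration of the Snowflake Streams work mentioned in the preferred skills above, the sketch below creates a stream on a raw table and merges its changes into a curated Silver-layer table. The account, warehouse, and table names are hypothetical, it assumes the snowflake-connector-python package, and the same pattern is often wrapped in a dbt model or a Snowflake Task instead.

```python
# Minimal sketch: use a Snowflake Stream to merge newly landed raw rows into a
# curated (Silver-layer) table. Database, schema, and table names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account",    # placeholder credentials; use a secrets manager in practice
    user="etl_service",
    password="***",
    warehouse="TRANSFORM_WH",
)

statements = [
    # Track inserts into the raw table since the stream was last consumed.
    "CREATE STREAM IF NOT EXISTS raw.orders_stream ON TABLE raw.orders",
    # Consume the stream: upsert changed rows into the curated layer.
    """
    MERGE INTO silver.orders AS tgt
    USING raw.orders_stream AS src
      ON tgt.order_id = src.order_id
    WHEN MATCHED THEN UPDATE SET tgt.amount = src.amount, tgt.updated_at = src.updated_at
    WHEN NOT MATCHED THEN INSERT (order_id, amount, updated_at)
                          VALUES (src.order_id, src.amount, src.updated_at)
    """,
]

with conn.cursor() as cur:
    for stmt in statements:
        cur.execute(stmt)
conn.close()
```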
Core Technologies
- Snowflake
- Python (or Java/Scala)
- Event Streaming (Kafka or similar)
- AWS (or other major cloud platform)
- Apache Iceberg / Amazon S3
- Advanced SQL (including stored procedures, UDFs/UDTFs)
- Modern Orchestration tools (e.g., Airflow)
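For the orchestration layer, a minimal Airflow sketch might schedule a nightly COPY of staged S3 files into Snowflake, as shown below. The DAG id, connection id, stage, and table names are assumptions, and it presumes a recent Airflow 2.x release with the Snowflake provider installed.

```python
# Minimal sketch of a batch ELT DAG: copy staged S3 files into Snowflake nightly.
# The DAG id, Snowflake connection id, stage, and table names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator

with DAG(
    dag_id="udp_orders_elt",                 # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    load_orders = SnowflakeOperator(
        task_id="copy_orders_into_raw",
        snowflake_conn_id="snowflake_default",   # assumed Airflow connection
        sql="""
            COPY INTO raw.orders
            FROM @raw.s3_orders_stage            -- external stage over the S3 bucket
            FILE_FORMAT = (TYPE = PARQUET)
            MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;
        """,
    )
```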
About 10x Genomics
At 10x Genomics, accelerating our understanding of biology is more than a mission for us. It is a commitment. This is the century of biology, and the breakthroughs we make now have the potential to change the world.
We enable scientists to advance their research, allowing them to address scientific questions they did not even know they could ask. Our tools have enabled fundamental discoveries across biology including cancer, immunology, and neuroscience.
Our teams are empowered and encouraged to follow their passions, pursue new ideas, and perform at their best in an inclusive and dynamic environment. We know that behind every scientific breakthrough, there is a deep infrastructure of talented people driving the life sciences industry and making it possible for scientists and clinicians to make new strides. We are dedicated to finding the very best person for every aspect of our work because the innovations and discoveries that we enable together will lead to better technologies, better treatments, and a better future. Find out how you can make a 10x difference.
Individuals seeking employment at 10x Genomics are considered without regard to race, color, religion, national origin, age, sex, marital status, ancestry, physical or mental disability, veteran status, gender identity, sexual orientation, or any other characteristic protected by applicable law.
10x does not accept unsolicited applications submitted by third-party recruiters or agencies. Any resume or application submitted to 10x without a vendor agreement in place will be considered unsolicited and the property of 10x, and 10x will not pay a placement fee.
Please be aware of recruitment scams impersonating 10x Genomics. All recruiting communication will come from email addresses ending in @10xgenomics.com. We encourage you to apply to 10x Genomics positions directly on our careers site, Careers.10xgenomics.com, or from reputable third-party sites such as LinkedIn or Indeed. We will never request payment or sensitive personal information during the recruiting process.
Tips for Finding TN Visa Sponsorship as a Data Platform Engineer
Frame your credentials around infrastructure, not analytics
TN classification for data platform roles hinges on the engineering side: pipeline architecture, distributed systems, and data warehouse design. Organize your degree transcripts and portfolio to foreground those elements before applying to any employer.
Target employers with recent visa filing experience
Companies with recent visa filings already understand the documentation requirements for international hiring and specialty occupations. Filtering for employers experienced with visa sponsorship narrows your list to companies prepared to navigate the TN visa process efficiently—from preparing support letters to coordinating your port of entry presentation or consulate appointment.
Use Migrate Mate to find sponsoring employers fast
Searching for Data Platform Engineer roles at employers experienced with visa sponsorship is faster through Migrate Mate, which surfaces employers by recent visa filings and role type so you're not guessing which companies support work visa candidates.
Get the offer letter's job duties language right
CBP officers at the border review the employer's support letter, not just your degree. The duties section must map explicitly to systems analysis and data engineering, not broadly to 'data work.' Vague language is the most common reason TN requests are questioned.
Plan for consular processing if you're a Mexican national
Canadian nationals can apply for TN status directly at the port of entry, with no visa or cap concerns. Mexican nationals must first obtain a TN visa at a U.S. consulate, so build consulate appointment lead time into your start-date negotiation with the employer.
Verify your degree field maps to Computer Systems Analyst
USCIS and CBP both look for a direct connection between your degree field and the role. Degrees in computer science, information systems, or software engineering align clearly. An unrelated degree, even with years of platform engineering experience, will require a more detailed credential evaluation.
Employers across the US are hiring Data Platform Engineers. Find yours.
Find Data Platform Engineer Jobs
Data Platform Engineer TN Visa: Frequently Asked Questions
Does a Data Platform Engineer role qualify for a TN visa?
Yes, if the role is framed correctly. CBP classifies data platform engineering under the Computer Systems Analyst TN category, which covers professionals who design and implement data infrastructure, build pipelines, and manage distributed systems. The job offer must document those specific duties. Titles alone don't determine eligibility; the actual work description in the employer's support letter does.
How does TN compare to H-1B for Data Platform Engineers?
TN has no lottery and no annual cap, which means you can start work as soon as CBP approves your application, often the same day for Canadians applying at the port of entry. H-1B requires entering a randomized lottery with no guarantee of selection. Mexican nationals must first obtain a TN visa through a U.S. consulate, but that step still avoids the lottery entirely. For most Canadian and Mexican data engineers, TN is the faster and more predictable path.
How do I find employers who will sponsor a TN visa for a Data Platform Engineer?
Migrate Mate is built specifically for this search. It filters job listings by visa type and role, surfacing employers with a history of supporting TN candidates. Because many companies that sponsor data engineering roles don't advertise that fact in job postings, a targeted search tool saves significant time compared to filtering general job listings manually.
What documentation does a Canadian citizen need to enter the U.S. as a TN Data Platform Engineer?
You need a signed employer support letter describing your specific duties and how they qualify under the Computer Systems Analyst category, proof of your qualifying degree, and your Canadian passport. You present these directly to CBP at a land border or pre-clearance location. There's no petition to file with USCIS in advance. Some applicants also bring a credential evaluation if their degree is from outside North America.
Can I switch employers after I start working on a TN visa?
Yes, but the process restarts. A TN visa is employer-specific, so changing jobs requires a fresh support letter from the new employer, and you must either obtain a new TN at the border or have the new employer file a change-of-employer petition with USCIS if you're already inside the U.S. There's no portability provision like H-1B has under AC21, so don't resign before the new TN is confirmed.
See which Data Platform Engineer employers are hiring and sponsoring visas right now.
Search Data Platform Engineer Jobs