Senior Data Science Engineer Visa Sponsorship Jobs in Colorado
Colorado's senior data science engineer market is concentrated in Denver's tech corridor and Boulder's startup ecosystem, with major employers including Google, Palantir, and Lockheed Martin actively hiring. The state's growing aerospace, defense, and health tech sectors create consistent demand for senior-level data science talent that supports visa sponsorship.
See All Senior Data Science Engineer Jobs
Overview
Showing 5 of 53+ Senior Data Science Engineer Jobs in Colorado with Visa Sponsorship
See all 53+ Senior Data Science Engineer Jobs in Colorado with Visa Sponsorship
Sign up for free to unlock all listings, filter by visa type, and get alerts for new Senior Data Science Engineer Jobs in Colorado with Visa Sponsorship.
Get Access To All Jobs
Who we are looking for: We are hiring a Senior Software Engineer on our Platform Data Query team to operate, maintain, scale, and enhance Appfolio's data streaming and data access systems. You must have experience with modern data lake architectures, as you will work directly with Iceberg data lakes, Trino, and real-time streaming using Apache Flink and Kafka. Our data powers customer-facing dashboards, reports, BI integrations, and AI-powered agents. Appfolio supports a significant part of the United States real estate market, and our data unlocks insights for our customers and forms the basis for new tools and capabilities that deliver value to them. The data provides enhanced performance metrics for our 20,000+ customers in the real estate property management industry, letting them spot trends in their operations and act on them to improve and grow their businesses. Our Platform Data Query system provides uniform, robust, and flexible access to data at Appfolio, powering a variety of applications that enhance the lives and businesses of property managers. This role is pivotal to the ongoing operation, scaling, and enhancement of that system, ultimately unlocking tremendous potential for the real estate industry in the coming years.
Responsibilities:
- Build a deep understanding of our data structure and systems - enabling you and your team to maintain, scale, and add on to the existing architecture.
- Maintain, optimize, and scale our robust data access layer on top of our Iceberg data lake, taking ownership of under-the-hood optimizations like data compaction for performance and storage efficiency.
- Design, build, and operate a robust API on top of our data tech stack, ensuring secure data access and seamless integration for downstream applications and platform services.
- Collaborate with Product to understand current operational needs, troubleshoot issues, and design technical add-ons or enhancements to our existing solutions.
- Work in a truly agile fashion to turn scaling challenges and feature enhancements into thinly sliced deliverables and execute quickly against them while limiting work in progress.
- Hold a high bar of engineering excellence and always look for ways to raise it. Adopt our engineering best practices, provide and receive in-depth code reviews, and participate in healthy debate as a team. Evangelize your own expertise and experience among your teammates and the rest of the organization.
- Together with your team, ensure the data flowing through our pipelines is covered by appropriate unit and integration tests so that correct data reaches our customers.
- Together with your team, keep your deliverables well-instrumented: queries and dashboards are easily accessible and regularly used to drive decisions and measure progress.
- Enthusiastically participate in a high-performing, empowered team with high levels of mutual trust and respect. Along with the team, you will take ownership of your problem space - reflecting and growing from our failures and celebrating our successes.
- Operate, optimize, and scale systems responsible for high concurrency access to large data sets, requiring hands-on execution and deep knowledge of data access and query optimization with distributed query engines like Trino and AWS Athena. Identify gaps, deficiencies, and inefficiencies in the system. Propose and implement solutions.
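The compaction work mentioned above can be illustrated with a toy sketch. In practice Iceberg handles this via its own rewrite procedures (e.g. Trino's table OPTIMIZE); the snippet below only shows the underlying bin-packing idea, and the file sizes and 128 MiB target are invented for illustration:

```python
# Toy sketch of the idea behind Iceberg data-file compaction: many small
# files are greedily grouped into batches of roughly the target size, so
# the query engine opens far fewer files per scan. This is NOT the Iceberg
# API; real compaction is done by engine-side rewrite procedures.

TARGET_FILE_SIZE = 128 * 1024 * 1024  # 128 MiB, a common compaction target


def plan_compaction(file_sizes, target=TARGET_FILE_SIZE):
    """Greedily pack small files into groups of at most `target` total bytes."""
    groups, current, current_size = [], [], 0
    for size in sorted(file_sizes):
        if size >= target:
            continue  # already large enough; leave it alone
        if current and current_size + size > target:
            groups.append(current)
            current, current_size = [], 0
        current.append(size)
        current_size += size
    if current:
        groups.append(current)
    return groups


# Example: eight small files (sizes in MiB, made up) collapse into
# three rewrite groups under a 128 MiB target.
small_files_mib = [16, 32, 48, 64, 8, 24, 40, 56]
groups = plan_compaction([s * 1024 * 1024 for s in small_files_mib])
print(len(groups))  # → 3
```

The real scheduling problem also weighs delete files, partition boundaries, and write amplification, but the size-threshold grouping above is the core of why compaction cuts per-query file-open overhead.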
You know you're the right fit if...
- Must have experience operating, scaling, and enhancing data pipelines at a company with large data sets using Apache Flink and Kafka, especially with multi-tenant data in an agile SaaS environment.
- Must have foundational experience operating, tuning, and maintaining Iceberg data lakes, including deep knowledge of table maintenance and data compaction strategies.
- Experience working on platform teams or maintaining platform services, whose customers are other internal teams.
- Proven experience working across all levels of the development stack.
- Proven experience with object-oriented languages (Python, Ruby, JS, Java, C#, etc.).
- Strong SQL proficiency and deep knowledge of data access/query optimization, requiring the ability to optimize query performance and cost efficiency at scale using distributed engines like Trino and AWS Athena.
- Familiarity with core architecture principles of at-scale systems.
- Must have strong familiarity with public cloud infrastructure, particularly AWS (including native tools like AWS Glue, AWS S3, and AWS Athena).
- Strong familiarity with Agile software development processes: Scrum or Kanban.
- Creativity and proactivity - an ability to solve complex scaling and operational problems. You love to learn about and use new tech, but understand the value of continuing to leverage and optimize existing technology when it gets the job done.
- You care about the long-term maintainability of the codebase and advocate for refactoring and code cleanliness. You can identify and resolve code-smells through sensible refactoring.
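The query-cost optimization called for in the list above often starts with partition pruning. The following toy model (partition names and byte counts are invented) shows why a filter on the partition column is the first lever for cost on engines like Trino and Athena, where scanned bytes translate directly into latency and, on Athena, dollars:

```python
# Toy illustration of partition pruning: a filter on the partition column
# lets a partition-aware engine skip whole partitions, so scanned bytes
# drop proportionally. The partition layout below is hypothetical.

partitions = {  # partition value -> bytes stored in that partition
    "2024-01": 40_000_000_000,
    "2024-02": 35_000_000_000,
    "2024-03": 25_000_000_000,
}


def bytes_scanned(partition_filter=None):
    """Bytes the engine reads for a scan, with an optional partition predicate."""
    if partition_filter is None:
        return sum(partitions.values())  # full table scan
    return sum(size for part, size in partitions.items()
               if partition_filter(part))  # pruned scan


full = bytes_scanned()
pruned = bytes_scanned(lambda part: part == "2024-03")
print(full, pruned)  # → 100000000000 25000000000
```

A WHERE clause that the engine cannot map onto the partition column (e.g. a function applied to it) defeats this pruning entirely, which is why predicate shape matters as much as indexing in lake-house SQL tuning.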
Additional Skills and Knowledge:
- 5+ years of experience working in software engineering teams.
- Comfortable working with remote team members.
- Ability to think pragmatically and effectively balance business outcomes with technical goals.
- Ability to establish strong working relationships with peers across other platform development teams.
If you are interested in creating exceptional SaaS products and being part of a successful public company, apply today!
Compensation & Benefits
The base salary that we reasonably expect to pay for this role is $138,400 - $173,000.
The actual base salary for this role will be determined by a variety of factors, including but not limited to the candidate’s skills, education, experience, etc.
Please note that base pay is one important aspect of a compelling Total Rewards package. The base pay range indicated here does not include any additional benefits or bonuses that you may be eligible for based on your role and/or employment type.
Regular full-time employees are eligible for benefits.

Senior Data Science Engineer Job Roles in Colorado
See all 53+ Senior Data Science Engineer Jobs in Colorado
Sign up for free to filter by visa type, set job alerts, and find employers with verified sponsorship history.
Search Senior Data Science Engineer Jobs in Colorado
Senior Data Science Engineer Jobs in Colorado: Frequently Asked Questions
Which companies sponsor visas for senior data science engineers in Colorado?
Several large employers in Colorado have established track records of sponsoring work visas for senior data science engineers. These include Google's Boulder engineering office, Palantir Technologies (headquartered in Denver), Lockheed Martin, and health tech firms like Centura Health and DaVita. Larger enterprises and defense contractors tend to have dedicated immigration support teams that manage the sponsorship process for senior technical hires.
Which visa types are most common for senior data science engineer roles in Colorado?
The H-1B is the most common visa category for senior data science engineers in Colorado, as the role typically qualifies as a specialty occupation requiring at least a bachelor's degree in a directly related field like computer science, statistics, or mathematics. Candidates already holding O-1A visas or those on OPT or STEM OPT extension are also commonly hired into these roles. Some senior candidates pursue EB-2 or EB-3 employment-based green card sponsorship after initial placement.
Which cities in Colorado have the most senior data science engineer sponsorship jobs?
Denver accounts for the largest share of senior data science engineer sponsorship opportunities in Colorado, driven by its concentration of enterprise tech, fintech, and healthcare companies. Boulder is a strong second, particularly for roles at research-driven startups and companies with university ties to CU Boulder. Colorado Springs sees activity primarily through defense and aerospace contractors with active security clearance programs.
How can I find senior data science engineer visa sponsorship jobs in Colorado?
Migrate Mate filters job listings specifically by visa sponsorship availability, making it easier to identify senior data science engineer roles in Colorado without sorting through positions that don't sponsor. You can search by location and role type to surface opportunities at Denver and Boulder employers actively filing H-1B petitions. This is particularly useful given that senior data science roles often appear under varying job titles across different companies.
Are there state-specific factors that affect visa sponsorship for senior data science engineers in Colorado?
Colorado's proximity to major federal research institutions and defense contractors creates a segment of senior data science roles that require U.S. security clearances, which can limit sponsorship eligibility for non-citizens. However, the commercial tech sector in Denver and Boulder does not carry this restriction. The University of Colorado system produces a steady pipeline of MS and PhD graduates in data science and statistics, which means employers in the state are generally familiar with OPT and STEM OPT hiring processes for international candidates.
What is the prevailing wage for sponsored senior data science engineer jobs in Colorado?
U.S. employers sponsoring a visa must pay at least the prevailing wage, which is what workers in the same role, area, and experience level typically earn. The Department of Labor sets this rate to make sure companies aren't hiring foreign workers simply because they'd accept lower pay than a U.S. worker. It varies by job title, location, and experience. You can look up current prevailing wage rates for any occupation and location using the OFLC Wage Search page.
See which senior data science engineer employers are hiring and sponsoring visas in Colorado right now.
Search Senior Data Science Engineer Jobs in Colorado