Data Architect Jobs at Deloitte with Visa Sponsorship
Deloitte hires Data Architects across its consulting, advisory, and technology practices, sponsoring a range of work visas for qualified candidates. If you're targeting a technical architecture role at one of the largest professional services firms in the U.S., Deloitte has an established process for supporting international talent through the sponsorship journey.
Overview
See all 155+ Data Architect Jobs at Deloitte
Sign up for free to unlock all listings, filter by visa type, and get alerts for new Data Architect Jobs at Deloitte.
Get Access To All Jobs
INTRODUCTION
Databricks Data Architect
Reference Code 1847
Country: United States (US)
US Locations: USA - Tampa
Deloitte Global is the engine of the Deloitte network. Our professionals reach across disciplines and borders to develop and lead global initiatives. We deliver strategic programs and services that unite our organization.
ROLE AND RESPONSIBILITIES
The Databricks Data Architect is a senior technical leader responsible for building and optimizing a robust data platform in a financial services environment. In this full-time role, you will lead a team of 10+ data engineers and own the end-to-end architecture and implementation of the Databricks Lakehouse platform. You will collaborate closely with application development and analytics teams to design scalable data solutions that drive business insights. This position demands deep expertise in Databricks (Azure), hands-on experience with PySpark and Delta Lake, and strong leadership to ensure best practices in data engineering, performance tuning, and governance.
Key Responsibilities
- Lead, mentor, and manage a team of 10+ data engineers, providing technical guidance, code reviews, and career development to foster a high-performing team.
- Own the Databricks platform architecture and implementation, ensuring the environment is secure, scalable, and optimized for the organization's data processing needs. Design and oversee the Lakehouse architecture leveraging Delta Lake and Apache Spark.
- Implement and manage Databricks Unity Catalog for unified data governance. Ensure fine-grained access controls and data lineage tracking are in place to secure sensitive financial data and comply with industry regulations.
- Provision and administer Databricks clusters (in Azure), including configuring cluster sizes, auto-scaling, and auto-termination settings. Set up and enforce cluster policies to standardize configurations, optimize resource usage, and control costs across different teams and projects.
- Collaborate with analytics teams to develop and optimize Databricks SQL queries and dashboards. Tune SQL workloads and caching strategies for faster performance and ensure efficient use of the query engine.
- Lead performance tuning initiatives for Spark jobs and ETL pipelines. Profile data processing code (PySpark/Scala) to identify bottlenecks and refactor for improved throughput and lower latency. Implement best practices for incremental data processing with Delta Lake, and ensure compute cost efficiency (e.g., by optimizing cluster utilization and job scheduling).
- Work closely with application developers, data analysts, and data scientists to understand requirements and translate them into robust data pipelines and solutions. Ensure that data architectures support analytics, reporting, and machine learning use cases effectively.
- Integrate Databricks workflows into the CI/CD pipeline using Azure DevOps and Git. Develop automated deployment processes for notebooks, jobs, and clusters (infrastructure-as-code) to promote consistent releases. Manage source control for Databricks code (using Git integration) and collaborate with DevOps engineers to implement continuous integration and delivery for data projects.
- Collaborate with security and compliance teams to uphold data governance standards. Implement data masking, encryption, and audit logging as needed, leveraging Unity Catalog and Azure security features to protect sensitive financial data.
- Stay up-to-date with the latest Databricks features and industry best practices. Proactively recommend and implement improvements (such as new performance optimization techniques or cost-saving configurations) to continuously enhance the platform's reliability and efficiency.
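As a concrete illustration of the cluster-policy responsibility above, a Databricks cluster policy is a JSON document of attribute constraints. This is a minimal sketch; the limits, node types, and tag values below are assumptions for illustration, not values from the posting:

```python
import json

# Sketch of a Databricks cluster policy (illustrative values): caps worker
# count, forces auto-termination, and restricts node types so teams cannot
# create oversized or never-terminating clusters.
cluster_policy = {
    "autotermination_minutes": {"type": "range", "minValue": 10, "maxValue": 60},
    "autoscale.max_workers": {"type": "range", "maxValue": 8},
    "node_type_id": {
        "type": "allowlist",
        "values": ["Standard_DS3_v2", "Standard_DS4_v2"],
    },
    # Fixed tag so every cluster created under this policy is cost-attributed.
    "custom_tags.team": {"type": "fixed", "value": "data-platform"},
}

# The policy would be registered via the Cluster Policies API or workspace UI.
policy_json = json.dumps(cluster_policy, indent=2)
print(policy_json)
```

Registering a policy like this lets platform admins enforce cost guardrails while teams still self-serve clusters within the allowed ranges.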
QUALIFICATIONS
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- 7+ years of experience in data engineering, data architecture, or related roles, with a track record of designing and deploying data pipelines and platforms at scale.
- Significant hands-on experience with Databricks (preferably Azure Databricks) and the Apache Spark ecosystem. Proficient in building data pipelines using PySpark/Scala and managing data in Delta Lake format.
- Strong experience with cloud data platforms (Azure preferred, or AWS/GCP). Familiarity with Azure data services (such as Azure Data Lake Storage and Azure Blob Storage) and with managing resources in an Azure environment.
- Advanced SQL skills with the ability to write and optimize complex queries. Solid understanding of data warehousing concepts and performance tuning for SQL engines.
- Proven ability to optimize ETL jobs and Spark processes for performance and cost efficiency. Experience tuning cluster configurations, parallelism, and caching to improve job runtimes and resource utilization.
- Demonstrated experience implementing data security and governance measures. Comfortable configuring Unity Catalog or similar data catalog tools to manage schemas, tables, and fine-grained access controls. Able to ensure compliance with data security standards and manage user/group access to data assets.
- Experience leading and mentoring engineering teams. Excellent project leadership abilities to coordinate multiple projects and priorities. Strong communication skills to collaborate with cross-functional teams and present architectural plans and results to stakeholders.
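To make the Unity Catalog governance requirement concrete: fine-grained access in Unity Catalog is typically granted with SQL `GRANT` statements run against the workspace. The helper below only builds those statements as strings; the catalog, schema, and group names are hypothetical, for illustration only:

```python
# Hypothetical helper that emits Unity Catalog GRANT statements for read and
# write groups on a table. In a real workspace these strings would be executed
# via spark.sql() or a Databricks SQL warehouse; here we only construct them.
def grant_statements(table: str, read_groups: list[str], write_groups: list[str]) -> list[str]:
    stmts = [f"GRANT SELECT ON TABLE {table} TO `{g}`" for g in read_groups]
    stmts += [f"GRANT MODIFY ON TABLE {table} TO `{g}`" for g in write_groups]
    return stmts

stmts = grant_statements(
    "finance_prod.trades.positions",   # invented three-level UC name
    read_groups=["analysts"],
    write_groups=["data-engineers"],
)
for s in stmts:
    print(s)
```

Generating grants from group lists like this keeps access declarative and reviewable, which suits the audit requirements of a regulated financial-services environment.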
PREFERRED QUALIFICATIONS
- Databricks Certified Data Engineer Professional or Associate, or equivalent certifications in cloud data engineering or architecture (e.g., Azure Data Engineer, Azure Solutions Architect).
- Prior experience in financial services or other highly regulated industries. Familiarity with financial data types, privacy regulations, and compliance requirements (e.g., handling PII and PCI data) is beneficial.
- Exposure to related big data and streaming tools such as Apache Kafka/Event Hubs, Apache Airflow or Azure Data Factory for orchestration, and BI/analytics tools (e.g., Power BI).
- Experience implementing CI/CD pipelines for data projects. Familiarity with Databricks Repos, Jenkins, or other CI tools for automated testing and deployment of data pipelines.
TOOLS & TECHNOLOGIES
- Databricks Lakehouse Platform: Databricks Workspace, Apache Spark, Delta Lake, Databricks SQL, MLflow (for model tracking).
- Data Governance: Databricks Unity Catalog for data cataloging and access control; Azure Active Directory integration for identity management.
- Programming & Data Processing: PySpark and Python for building data pipelines and Spark jobs; SQL for querying and analytics.
- Cloud Services (Azure-focused): Azure Databricks, Azure Data Lake Storage (ADLS Gen2), Azure Blob Storage, Azure Synapse or Azure SQL Database, Azure Key Vault (for secrets).
- DevOps & CI/CD: Azure DevOps (Azure Pipelines) for build/release pipelines; Git for version control (GitHub or Azure Repos); experience with Terraform or ARM templates for infrastructure-as-code is a plus.
- Other Tools: project and workflow management tools (Jira or Azure Boards), monitoring tools (Azure Log Analytics, the Spark UI, Databricks performance monitoring), and collaboration tools for documentation and design (Figma, Visio, Lucidchart, etc.).
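The DevOps tooling listed above typically comes together in an Azure Pipelines definition that deploys Databricks code from Git on merge. The fragment below is a minimal sketch assuming the legacy `databricks-cli`; the variable names and workspace paths are invented, and a real setup might use Databricks Asset Bundles or Terraform instead:

```yaml
# Illustrative azure-pipelines.yml fragment (paths and variable names are
# assumptions, not from the posting): push notebooks from the repo to the
# workspace on every merge to main.
trigger:
  branches:
    include: [main]

steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: "3.11"
  - script: pip install databricks-cli
    displayName: Install Databricks CLI
  - script: databricks workspace import_dir notebooks /Shared/etl --overwrite
    displayName: Deploy notebooks to the workspace
    env:
      DATABRICKS_HOST: $(databricksHost)    # secret pipeline variables
      DATABRICKS_TOKEN: $(databricksToken)
```

Keeping deployment in the pipeline rather than manual workspace edits is what makes the "consistent releases" responsibility above auditable.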
OUR CULTURE
At Deloitte Global, people are valued and respected for who they are, with opportunities to bring their unique perspectives, talents, and passions to business challenges. Our global workspace creates room for individuality and collaboration. Ours is an inclusive, supportive, connected culture with a focus on development, flexibility, and well-being. This culture makes Deloitte Global one of the most rewarding places to work and to transform your career.
PROFESSIONAL DEVELOPMENT
From entry-level employees to senior leaders, we believe in investing in you, helping you identify and hone your unique strengths at every step of your career. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.
BENEFITS
At Deloitte, we value our people and offer employees a broad range of benefits. Our Total Rewards program reflects our continued commitment to lead from the front in everything we do - that's why we take pride in offering a comprehensive variety of programs and resources to support your health and well-being.
Recruiting for this role ends on 08/02/2025.

Tips for Finding Data Architect Jobs at Deloitte
Align your credentials to Deloitte's architecture frameworks
Deloitte's Data Architect roles sit within its consulting delivery model, so certifications in cloud platforms like AWS, Azure, or GCP carry real weight alongside your degree. Make sure your resume maps directly to the data modernization and migration work Deloitte sells to clients.
Target practice areas with active client demand
Deloitte staffs Data Architects against specific client engagements, so roles in Government and Public Services or Financial Services tend to move faster when those practices are busy. Research which sectors Deloitte is winning work in before you apply, and tailor your outreach accordingly.
Clarify sponsorship eligibility before the offer stage
Deloitte's recruiters are experienced with visa sponsorship, but you'll save time by confirming early in the process whether the specific practice area or client-facing role supports H-1B, E-3, or H-1B1 sponsorship. Some client contracts have restrictions that affect eligibility.
Understand how PERM timing affects your green card path
For EB-2 or EB-3 green card sponsorship at Deloitte, PERM labor certification with the DOL is the first step and typically takes the longest. If you're on a nonimmigrant visa with a fixed duration, ask your recruiter when Deloitte typically initiates PERM for Data Architect hires.
Use Migrate Mate to find active Data Architect openings
Deloitte posts roles across multiple business units, making it easy to miss relevant openings. Use Migrate Mate to filter Data Architect positions at Deloitte that are open to visa sponsorship, so you're only spending time on roles that fit your situation.
Prepare for skills-based interviews with delivery context
Deloitte's technical interviews for Data Architect roles often include scenario questions tied to client delivery, not just abstract system design. Practice explaining how you'd architect a data solution within budget and timeline constraints, since that's the consulting context you'd be working in.
Data Architect roles at Deloitte are open across the US. Find yours.
Frequently Asked Questions
Does Deloitte sponsor H-1B visas for Data Architects?
Yes, Deloitte sponsors H-1B visas for Data Architect roles. It's one of the more established sponsors for technical positions in the consulting and professional services sector. That said, sponsorship is tied to the specific role, practice area, and client engagement, so confirming eligibility with the recruiter early in your process is important.
How do I apply for Data Architect jobs at Deloitte?
You can apply directly through Deloitte's careers site or find open Data Architect roles filtered by visa sponsorship eligibility on Migrate Mate. When applying, tailor your resume to highlight cloud data platform experience and any prior consulting or client-facing delivery work, as those are the signals Deloitte's hiring teams prioritize for this role.
Which visa types does Deloitte commonly sponsor for Data Architect roles?
Deloitte sponsors H-1B, H-1B1, and E-3 visas for Data Architect positions, along with employment-based Green Card pathways including EB-2 and EB-3. Australian citizens are eligible to pursue the E-3 route, which has no lottery and a faster path to approval. The right visa type depends on your nationality, current status, and the specific role.
What qualifications does Deloitte expect for a sponsored Data Architect role?
Deloitte typically looks for a bachelor's or master's degree in computer science, information systems, or a closely related field, combined with hands-on experience designing enterprise data platforms. Cloud certifications in AWS, Azure, or Google Cloud are commonly expected. For sponsored roles, USCIS requires the position to qualify as a specialty occupation, which Deloitte's architecture roles generally satisfy given the degree requirements involved.
How long does the visa sponsorship process take at Deloitte for a Data Architect?
Timeline depends on the visa type. H-1B sponsorship through the lottery has a fixed annual cycle, with the registration window in March and an October 1 start date for approved petitions. E-3 and H-1B1 applications typically move faster since there's no lottery. Green Card sponsorship through PERM is a longer process, often taking one to three years from initiation to approval, depending on your priority date and country of birth.
See which Deloitte practices are hiring Data Architects and sponsoring visas right now.
Search Data Architect at Deloitte Jobs