
EazyML, recognized by Gartner, specializes in Responsible AI. Our solutions facilitate proactive compliance and sustainable automation, and the company is associated with breakthrough startups like Amelia.ai.

This is a full-time remote role for a Senior Databricks Engineer with experience in Snowflake.

This is a work-from-home position; candidates may be based anywhere in India.

We're hiring a Senior Databricks Engineer to design, build, and optimize scalable data platforms leveraging Databricks. Experience with Snowflake is mandatory. This role is responsible for delivering reliable, high-performance data pipelines and analytics-ready datasets, while providing technical leadership and mentoring within the data engineering team.

Required Qualifications

  • 6+ years of experience in Data Engineering, ETL Development, or Database Administration
  • Strong hands-on experience with Databricks in production environments
  • Advanced SQL skills and solid expertise in data modeling
  • Proficiency in Python, SQL, PySpark
  • Strong experience with Apache Spark and PySpark
  • Experience working with Delta Lake, schema evolution, and data versioning
  • Experience with cloud platforms (AWS, Azure, or GCP)
  • Experience building scalable, reliable, fault-tolerant data pipelines
  • Solid understanding of distributed data systems
  • Exposure to ML pipelines or feature stores (Databricks Feature Store preferred)

Key Skills

  • Databricks & Apache Spark
  • Snowflake data warehousing
  • Lakehouse and Data Warehouse architecture
  • Advanced SQL and performance tuning
  • Cloud-native data engineering
  • Scalability, reliability, and cost optimization
  • Technical leadership and mentoring

Responsibilities

  • Design and implement scalable data pipelines using Databricks (PySpark, Delta Lake)
  • Develop and optimize ELT pipelines loading data for analytics and reporting
  • Architect and maintain lakehouse and warehouse solutions following Bronze, Silver, and Gold data layer patterns
  • Build batch and streaming pipelines using Databricks Jobs and Spark Structured Streaming
  • Design data models optimized for Snowflake (star/snowflake schemas, dimensional modeling)
  • Optimize Spark jobs and Snowflake queries for performance and cost efficiency
  • Implement data quality checks, monitoring, and data validation across Databricks and Snowflake
  • Integrate Databricks and Snowflake with orchestration tools (e.g., Azure Data Factory)
  • Ensure data security, governance, role-based access control, and compliance standards
  • Collaborate with Data Analysts and Data Scientists to deliver analytics and ML-ready datasets
  • Troubleshoot complex pipeline failures and perform root-cause analysis
  • Mentor junior engineers, conduct code reviews, and enforce engineering best practices
  • Contribute to data architecture decisions, tooling evaluation, and roadmap planning
  • Maintain clear documentation of pipelines, data models, and system architecture

Experience: 6+ years | CS/IT degree preferred
