
Job Details

Sr. Databricks Data Engineer - I

Posted: 2025-07-01 | Company: DATAMAXIS | Location: All cities, AK
Description:

Job Title: Databricks Data Engineer - I

Experience: 5+ years

Location: Remote

Job Type: Full-time with AB2

We are seeking an experienced Databricks Data Engineer to play a crucial role in our Fintech data lake project.

What You Bring

• 5+ years of experience working in data warehousing systems

• 3+ years of strong hands-on programming expertise in the Databricks landscape, including Spark SQL and Workflows, for data processing and pipeline development

• 3+ years of strong hands-on data transformation/ETL skills using Spark SQL, PySpark, and Unity Catalog within the Databricks Medallion architecture (a minimal sketch of this kind of transformation follows this list)

• 2+ years of work experience in one of the cloud platforms: Azure, AWS, or GCP

• Experience using Git version control, and well versed in CI/CD best practices to automate the deployment and management of data pipelines and infrastructure

• Nice to have: hands-on experience building data ingestion pipelines from ERP systems (preferably Oracle Fusion) to a Databricks environment, using Fivetran or any alternative data connector

• Experience in a fast-paced, ever-changing and growing environment

• Understanding of metadata management, data lineage, and data glossaries is a plus

• Must have report development experience using Power BI, SplashBI, or any enterprise reporting tool
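
By way of illustration, here is a minimal PySpark sketch of the kind of bronze-to-silver Medallion transformation referenced above, as it might look in a Databricks notebook. The catalog, schema, table, and column names (fintech.bronze.erp_invoices and friends) are hypothetical placeholders, not part of this posting.

    # Hypothetical bronze-to-silver step in a Medallion architecture.
    # All table, schema, and column names below are illustrative only.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()  # Databricks notebooks provide this session

    # Read raw ERP records from an assumed bronze table registered in Unity Catalog.
    bronze = spark.read.table("fintech.bronze.erp_invoices")

    # Typical silver-layer cleanup: deduplicate, normalize types, drop bad rows.
    silver = (
        bronze
        .dropDuplicates(["invoice_id"])
        .withColumn("invoice_date", F.to_date("invoice_date"))
        .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
        .filter(F.col("invoice_id").isNotNull())
    )

    # Write the cleaned data back as a managed table in the silver schema.
    silver.write.mode("overwrite").saveAsTable("fintech.silver.erp_invoices")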

What You'll Do

• Participate in the design and development of enterprise data solutions in Databricks, from ideation to deployment, ensuring robustness and scalability.

• Work with the Data Architect to build and maintain robust, scalable data pipeline architectures on Databricks using PySpark and SQL

• Assemble and process large, complex ERP datasets to meet diverse functional and non-functional requirements.

• Contribute to continuous optimization efforts, implementing testing and tooling techniques to enhance data solution quality

• Focus on improving performance, reliability, and maintainability of data pipelines.

• Implement and maintain PySpark and Databricks SQL workflows for querying and analyzing large datasets

• Participate in release management using Git and CI/CD practices

• Develop business reports using the SplashBI reporting tool, leveraging data from the Databricks gold layer (see the sketch just below)
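
Again purely illustrative: a short sketch of a silver-to-gold aggregation that a SplashBI or Power BI report might sit on, reusing the hypothetical silver table from the sketch above. Databricks SQL in a notebook would work the same way; spark.sql() keeps the example self-contained.

    # Hypothetical silver-to-gold aggregation feeding an enterprise report.
    # All table and column names are placeholders, not from this posting.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()  # Databricks notebooks provide this session

    # Aggregate the assumed silver table into report-ready monthly figures.
    monthly_revenue = spark.sql("""
        SELECT date_trunc('month', invoice_date) AS invoice_month,
               customer_id,
               SUM(amount) AS total_amount
        FROM fintech.silver.erp_invoices
        GROUP BY date_trunc('month', invoice_date), customer_id
    """)

    # Persist as a gold-layer table for the reporting tool to read.
    monthly_revenue.write.mode("overwrite").saveAsTable("fintech.gold.monthly_revenue")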

Qualifications

• Bachelor's Degree in Computer Science, Engineering, Finance, or equivalent experience

• Good communication skills

Apply for this Job

Please use the APPLY HERE link below to view additional details and application instructions.

Apply Here
