Sr. Databricks Consultant

Posted 2026-05-05
Remote, USA | Full-time | Immediate Start

ECS is seeking a Sr. Databricks Consultant to work remotely. Please Note: This position is contingent upon additional funding.


We are seeking a skilled individual with deep Databricks experience to join our team. The primary responsibility of this role is to consult with customers to design, optimize, and fine-tune their Databricks environments for advanced data analytics, machine learning, and AI-driven insights. Leveraging tools such as Delta Lake, MLflow, and Databricks Machine Learning, the consultant will play a crucial role in enabling clients to maximize the performance, scalability, and value of their data platforms.


This role will support ECS’s Professional Services and Managed Cybersecurity Services programs, with a focus on Databricks solutions. You will specialize in data pipeline optimization, scalable model training, and advanced analytics on Databricks, offering you a unique opportunity to work with leading-edge technologies and collaborate with clients to deliver measurable improvements in data-driven decision-making. If you are passionate about applying data science techniques within the Databricks ecosystem to solve real-world challenges, we encourage you to apply.


Responsibilities



  • Design, build, and maintain scalable ETL/ELT pipelines in Databricks.

  • Optimize workflows and clusters for performance and cost efficiency.

  • Integrate Databricks with cloud platforms (AWS, Azure, or GCP).

  • Implement data governance, security, and compliance standards.

  • Collaborate with data scientists, analysts, and business stakeholders to deliver high-quality datasets and insights.

  • Troubleshoot issues and drive continuous improvement in data infrastructure.


Salary Range: $175,000 - $250,000



Qualifications



  • Active Secret Clearance

  • Proven experience working with Databricks in production environments.

  • Strong proficiency in Python and/or SQL for data engineering.

  • Experience with Apache Spark for distributed data processing.

  • Knowledge of data lakehouse architecture and best practices.

  • Experience with cloud data platforms (AWS S3, Azure Data Lake, or GCP Storage).

  • Hands-on experience building and managing ETL/ELT pipelines.

  • Strong understanding of data modeling, data quality, and performance tuning.

  • Familiarity with CI/CD pipelines and version control (Git).

  • Able and willing to obtain a US Security Clearance.

  • Ability to travel, as needed, to customer sites.

  • Must be a US Citizen per contract.

  • Bachelor’s degree, preferably in Computer Science, Information Security, or a related field; relevant experience will be considered in lieu of a degree.
