Job Summary
As a Sr Data Engineer, you will design and implement robust data architectures for enterprise platforms, including data lakes, warehouses, and lakehouse solutions. You will build scalable data pipelines, ensure data quality and reliability, and drive modernization initiatives. Collaborating closely with data science and analytics teams, you will deliver advanced data solutions while supporting compliance and governance requirements. This role offers opportunities to mentor junior engineers, evaluate emerging technologies, and contribute to technical excellence across the organization.
Key Responsibilities
Design and implement data architecture for enterprise data platforms (data lakes, warehouses, lakehouse architectures).
Build scalable data pipelines using modern ETL/ELT frameworks (e.g., Airflow, dbt); ensure data quality, reliability, and performance.
Implement Change Data Capture (CDC) and real-time streaming solutions using technologies such as Kafka and Spark Streaming.
Optimize data infrastructure for performance and cost, including partitioning, compression, and caching strategies.
Establish data governance practices: data lineage, quality monitoring, schema management, and access controls.
Lead technical initiatives for data platform modernization; evaluate and adopt new technologies and tools.
Mentor junior data engineers, conduct code reviews, and establish best practices.
Collaborate with data science and analytics teams to deliver data solutions for advanced analytics.
Support compliance requirements for data security, privacy, and regulatory needs (e.g., GDPR, SOC 2).
Required Qualifications
5+ years of data engineering experience building large-scale data platforms on cloud infrastructure.
Extensive experience with cloud data platforms: AWS (Redshift, Glue, EMR, S3), Azure (Synapse, Data Factory, ADLS), or GCP (BigQuery, Dataflow, GCS).
Expertise in SQL; proficiency in Python or Scala; experience with Spark for distributed data processing.
Hands-on experience with data pipeline orchestration and transformation tools (Airflow, dbt, Luigi, or similar).
Experience with streaming platforms (Kafka, Kinesis, Pub/Sub) and event-driven architectures.
Deep knowledge of relational databases (PostgreSQL, Oracle, SQL Server) and NoSQL databases (MongoDB, Cassandra, DynamoDB).
Strong skills in dimensional modeling, data normalization, and schema design.
Experience with Infrastructure as Code tools (Terraform, CloudFormation, or similar).
Preferred Qualifications
Master’s degree in Computer Science, Engineering, or related field.
Experience with data compliance requirements (FedRAMP, ITAR, HIPAA).
Experience with lakehouse platforms (Databricks, Delta Lake).
Experience with CDC tools (GoldenGate, Debezium).
Oracle Cloud experience.
Why Join Deltek?
At Deltek, you will be part of a forward-thinking team that values innovation, collaboration, and continuous learning. Our culture emphasizes flexibility, professional growth, and technical excellence. Deltek leverages advanced AI capabilities to empower employees and drive impactful solutions, making it an ideal environment for data engineers seeking to work with cutting-edge technologies and contribute to meaningful projects.