[Remote] Data Engineer
Posted 2026-05-06
Remote, USA
Full-time
Immediate Start
Note: This is a remote position open to candidates in the USA. Abbott is a global healthcare leader that helps people live more fully at all stages of life. The Data Engineer designs, develops, and maintains data pipelines and cloud-based data solutions that support analytics and reporting, collaborating with teams across the company to deliver reliable data solutions.
Responsibilities
- Design, build, and maintain data pipelines to support analytics, reporting, and downstream applications
- Develop and maintain data ingestion solutions on AWS using AWS-native services
- Build and optimize data models using Databricks and AWS data stores such as Redshift, RDS, and S3
- Integrate and assemble large datasets to meet business and analytical requirements
- Extract, transform, and load (ETL/ELT) data into approved tools and frameworks
- Configure and maintain integration tools, databases, data warehouses, and analytical systems
- Process structured and unstructured data into formats suitable for analysis, partnering with analysts as needed
- Collaborate with engineering and technology teams to align data solutions with business needs
- Monitor and improve data pipeline performance, reliability, and scalability
- Write clean, well-documented code and follow established engineering best practices
- Contribute to technical documentation and support existing architecture patterns
- Participate in peer code reviews and team design discussions
- Work cross-functionally with Engineering, Product, QA, and Analytics teams
- Stay current with data engineering tools and industry trends and share learnings with the team
Skills
- Bachelor's degree in Computer Science, Information Technology, or another relevant field
- 1-3 years of recent experience in Software Engineering, Data Engineering, or Big Data
- Ability to work effectively within a team in a fast-paced, changing environment
- Knowledge of or direct experience with Databricks and/or Spark
- Software development experience, ideally with Python, PySpark, Kafka, or Go, and a willingness to learn new languages to meet goals and objectives
- Knowledge of strategies for processing large amounts of structured and unstructured data, including integrating data from multiple sources
- Knowledge of data cleaning, wrangling, visualization and reporting
- Ability to explore alternative approaches to data mining problems, drawing on industry best practices, data innovations, and experience
- Familiarity with databases, BI applications, data quality, and performance tuning
- Excellent written, verbal and listening communication skills
- Comfortable working asynchronously with a distributed team
- Knowledge of or direct experience with the following AWS services desired: S3, RDS, Redshift, DynamoDB, EMR, Glue, and Lambda
- Experience working in an agile environment
- Practical knowledge of Linux
Benefits
- Free medical coverage in our Health Investment Plan (HIP) PPO medical plan
- An excellent retirement savings plan with high employer contribution
- Tuition reimbursement
- The Freedom 2 Save student debt program
- FreeU education benefit - an affordable and convenient path to getting a bachelor’s degree