**Experienced Full Stack Data Entry Specialist – Cloud Application Development and Data Management**
Posted 2026-05-06

Are you a highly skilled and motivated individual looking to advance your career in data management and cloud application development? Do you have a passion for working with large datasets, designing and implementing efficient data pipelines, and collaborating with cross-functional teams to drive business growth? If so, we invite you to join arenaflex, a leading retail company, as a Full Stack Data Entry Specialist.
**About arenaflex**
arenaflex is a global retail leader that is revolutionizing the way people shop and interact with our brand. As a company, we are committed to innovation, customer satisfaction, and employee growth. Our technology team, arenaflex Worldwide Tech, is at the forefront of this revolution, developing cutting-edge solutions that make life easier for millions of people around the world. We are a people-driven, technology-enabled organization that values diversity, inclusion, and collaboration.
**Job Summary**
As a Full Stack Data Entry Specialist, you will play a critical role in designing, implementing, and maintaining data pipelines, data sets, and ETL processes for arenaflex's Sponsored Search Advertising platform. You will work closely with cross-functional teams to ensure data quality, accessibility, and reliability for analysis and reporting purposes. This is a full-time position available for both night and day shifts, offering the flexibility to work remotely.
**Key Responsibilities**
- Design and implement efficient data pipelines for ingesting, transforming, and storing large volumes of data to support marketing on arenaflex's e-commerce sites
- Work on cloud platforms like Azure and Google Cloud for data storage, processing, and analysis
- Develop and deploy large-scale, containerized applications using Docker and Kubernetes on public clouds like Google Cloud Platform (GCP) and Microsoft Azure
- Collaborate with other scrum teams, QA, Product, Program Management, and Partner Operations to ensure seamless integration and delivery of data-driven solutions
- Participate in on-call rotations to investigate production issues across cross-functional teams
- Develop and maintain strong relationships with stakeholders to ensure data quality, accessibility, and reliability
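To make the pipeline responsibilities above concrete, here is a minimal extract-transform-load sketch. The record fields (`sku`, `qty`, `price_cents`), the `sales` table, and the filtering rule are all hypothetical, and sqlite3 stands in for a production data store:

```python
import sqlite3

# Illustrative raw records, as they might arrive from an ingestion layer.
RAW_EVENTS = [
    {"sku": "A100", "qty": "3", "price_cents": "1999"},
    {"sku": "B200", "qty": "1", "price_cents": "4500"},
    {"sku": "A100", "qty": "0", "price_cents": "1999"},  # zero-qty rows are dropped
]

def transform(events):
    """Cast string fields to numbers and filter out empty orders."""
    for e in events:
        qty = int(e["qty"])
        if qty > 0:
            yield (e["sku"], qty, int(e["price_cents"]) * qty)

def load(rows, conn):
    """Create the target table and bulk-insert the transformed rows."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales (sku TEXT, qty INTEGER, total_cents INTEGER)"
    )
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(RAW_EVENTS), conn)
total = conn.execute("SELECT SUM(total_cents) FROM sales").fetchone()[0]
print(total)  # 3 * 1999 + 1 * 4500 = 10497
```

A production pipeline would add schema validation, idempotent loads, and monitoring, but the extract → transform → load shape stays the same.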
**What You'll Bring**
- Demonstrated aptitude in data architecture principles, database design, ETL processes, and data mining
- Proficiency in working with data technologies including SQL, Python, Spark, Scala, Hadoop, and related tools and frameworks
- Experience with ETL tools like Apache NiFi
- Solid skills working with relational databases (e.g., SQL Server) and NoSQL databases (e.g., Cassandra)
- Experience with real-time data processing using Apache Kafka
- Experience in designing and implementing data models for efficient storage and retrieval
- A growth-oriented mindset and a willingness to raise the technical bar, as well as identify potential opportunities for improving existing processes, tools, and frameworks to achieve high scale and efficiency
- Experience working with Docker and Kubernetes, as well as distributed computing services such as Google Cloud Platform (GCP) and Microsoft Azure, and distributed storage frameworks such as Hive and Hadoop
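The real-time processing skill above often boils down to computations like windowed aggregation. Below is a minimal sketch that counts events per tumbling window, assuming events arrive as `(timestamp, key)` pairs; in a real deployment they would be consumed from a Kafka topic rather than a Python list:

```python
from collections import defaultdict

WINDOW_SECONDS = 60  # illustrative tumbling-window size

def windowed_counts(events):
    """Count events per (window, key), where window = ts // WINDOW_SECONDS."""
    counts = defaultdict(int)
    for ts, key in events:
        counts[(ts // WINDOW_SECONDS, key)] += 1
    return dict(counts)

# Simulated event stream: (timestamp_seconds, event_type)
stream = [(5, "click"), (30, "click"), (65, "click"), (70, "view")]
result = windowed_counts(stream)
print(result)  # {(0, 'click'): 2, (1, 'click'): 1, (1, 'view'): 1}
```

Frameworks like Kafka Streams or Spark Structured Streaming provide this windowing logic out of the box; the sketch only shows the underlying idea.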
**Essential Qualifications**
- Bachelor's degree in Computer Science, Information Technology, or related field
- 2+ years of experience in data management, cloud application development, and data engineering
- Strong understanding of data architecture principles, database design, ETL processes, and data mining
- Proficiency in working with data technologies including SQL, Python, Spark, Scala, Hadoop, and related tools and frameworks
- Experience with ETL tools like Apache NiFi
- Solid skills working with relational databases (e.g., SQL Server) and NoSQL databases (e.g., Cassandra)
**Preferred Qualifications**
- Master's degree in Computer Science, Information Technology, or related field
- 5+ years of experience in data management, cloud application development, and data engineering
- Experience with real-time data processing using Apache Kafka
- Experience in designing and implementing data models for efficient storage and retrieval
- Strong understanding of cloud platforms like Azure and Google Cloud
- Experience with containerization using Docker and Kubernetes
- Experience with distributed computing services such as Google Cloud Platform (GCP) and Microsoft Azure, and distributed storage frameworks such as Hive and Hadoop
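Distributed storage frameworks such as Hive and Hadoop typically spread rows across nodes by hashing a record key. Here is a minimal sketch of that partitioning idea; the key names and partition count are made up for the example:

```python
import hashlib

NUM_PARTITIONS = 4  # illustrative cluster size

def partition_for(key: str) -> int:
    """Map a record key to a stable partition id via a hash digest."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

# Each key deterministically lands on one partition, so lookups and
# co-partitioned joins know exactly which node holds the data.
partitions = {k: partition_for(k) for k in ["A100", "B200", "C300"]}
```

The determinism is the point: the same key always maps to the same partition, which is what lets distributed reads and joins avoid scanning every node.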
**Skills and Competencies**
- Strong analytical and problem-solving skills
- Excellent communication and collaboration skills
- Ability to work in a fast-paced, dynamic environment
- Strong attention to detail and quality assurance
- Ability to adapt to changing priorities and deadlines
- Strong understanding of data security and compliance
**Career Growth Opportunities and Learning Benefits**
- Opportunities for career growth and professional development
- Access to cutting-edge technologies and tools
- Collaborative and dynamic work environment
- Opportunities for mentorship and knowledge sharing
- Recognition and rewards for outstanding performance
**Work Environment and Company Culture**
- Remote work option available
- Flexible work hours and schedule
- Opportunities for socialization and team-building
**Compensation, Perks, and Benefits**
- Competitive salary and benefits package
- Opportunities for bonuses and stock options
- Comprehensive health insurance and wellness programs
- Paid time off and vacation days
**How to Apply**
If you are a motivated and skilled individual looking to join a dynamic and innovative team, we invite you to apply for this exciting opportunity. Click the link below to submit your application, and we will be in touch soon.