NK - Sr. Data Engineer - Job2498

Remote: Full Remote

Offer summary

Qualifications:

  • Proficiency in Python for data manipulation and pipeline development.
  • Strong SQL skills for managing relational databases and ensuring data accuracy.
  • In-depth knowledge of data engineering methodologies, including ETL processes and data warehousing.
  • Hands-on experience with AWS services for building and deploying applications in a cloud environment.

Key responsibilities:

  • Design, develop, and maintain robust data pipelines for efficient data flow.
  • Create and optimize data models to support analytical and operational needs.
  • Conduct thorough data analysis to derive insights for decision-making processes.
  • Collaborate with team members to establish best practices in data engineering and implement CI/CD practices.

Taller SME https://taller.us/
201 - 500 Employees

Job description

Job Summary

We are seeking a highly skilled Senior Data Engineer to join our dynamic team at Pillar – Core Foundation. This role is crucial for the development and implementation of data engineering practices that will enhance our data infrastructure and analytics capabilities. The ideal candidate will work closely with David Shin, our Principal Data Engineer, and collaborate with Sandeep Panchal’s Squad, contributing to new projects that will shape the future of our data-driven initiatives.

Location: We are seeking talent from Brazil, Argentina, Peru, Chile, and Colombia.

Job Responsibilities

As a Senior Data Engineer, you will be responsible for:

  • Building Data Pipelines: Design, develop, and maintain robust data pipelines that ensure the efficient flow of data from various sources to our data storage solutions.
  • Data Modeling: Create and optimize data models that support analytical and operational needs, ensuring data integrity and accessibility.
  • Data Analysis: Conduct thorough data analysis to derive insights and support decision-making processes across the organization.
  • Implementing Data Engineering Practices: Collaborate with team members to establish best practices in data engineering, ensuring high-quality data management and processing.
  • Utilizing Cloud Services: Leverage AWS cloud services to build integrated applications in production, ensuring scalability and reliability.
  • Collaboration: Work closely with a team of approximately 15 members, including data scientists, analysts, and other engineers, to deliver high-impact data solutions.
  • CI/CD Implementation: Implement Continuous Integration and Continuous Deployment (CI/CD) practices to streamline the deployment and support of software in production environments.
  • DevOps Responsibilities: Engage in DevOps practices to enhance the efficiency of data operations and ensure smooth deployment processes.

Basic Qualifications

Must-Have Skills
  • Python: Proficiency in Python for data manipulation, pipeline development, and automation tasks.
  • SQL: Strong SQL skills for querying and managing relational databases, ensuring data accuracy and performance.
  • Data Engineering Practices: In-depth knowledge of data engineering methodologies, including ETL processes, data warehousing, and data governance.
  • Real-time Streaming: Experience with real-time data streaming technologies to support immediate data processing and analytics.
  • Experience Building Pipelines: Proven track record of designing and implementing data pipelines that handle large volumes of data efficiently.
  • Data Modeling: Expertise in creating data models that align with business requirements and analytical needs.
  • Data Analysis: Ability to analyze complex datasets and derive actionable insights to inform business strategies.
  • AWS: Hands-on experience with AWS services, particularly in building and deploying applications in a cloud environment.
  • CI/CD: Familiarity with CI/CD tools and practices to automate the deployment process and ensure high-quality software delivery.
Nice-to-Have Skills
  • Databricks: Experience with Databricks for collaborative data engineering and analytics.
  • Airflow: Knowledge of Apache Airflow for orchestrating complex data workflows and managing dependencies.
  • EMR: Familiarity with Amazon EMR for processing large datasets using distributed computing frameworks.
  • Cloud Services Experience: Experience using AWS services such as EC2, ECS, API Gateway, and Lambda to build integrated applications in production.
  • DevOps: Understanding of DevOps principles and tools to enhance collaboration between development and operations teams.



Required profile

Experience

Spoken language(s):
English

Other Skills

  • Collaboration
