
Senior Data Engineer

Remote: Full Remote
Experience: Mid-level (2-5 years)

Offer summary

Qualifications:

3 years of programming experience in Python, 5 years working with SQL Server, strong skills in SQL and data modeling; cloud certifications preferred.

Key responsibilities:

  • Design, develop, and maintain ETL infrastructure
  • Optimize and troubleshoot ADF jobs and ETL processes

Mira Search | Human Resources, Staffing & Recruiting Startup | https://mira-search.ae/
2 - 10 Employees

Job description

Mira Search, an international recruitment agency representing the interests of our client, is seeking a driven Data Engineer to join a dynamic team of data specialists. Your primary responsibility will be to design, develop, and maintain ETL infrastructure and data pipelines.

Responsibilities:

• Implement dimensional modeling concepts and data warehouses (OLTP, OLAP, facts, and dimensions).

• Ensure adherence to best practices in data management, security, and administration in cloud environments.

• Optimize and troubleshoot ADF jobs and ETL processes to enhance their performance.

• Conduct code reviews and manage code versioning using GitHub, as well as deployment through CI/CD pipelines.

• Collaborate on cloud architectures and enterprise-level data migrations.

• Create and optimize ETL pipelines for efficient extraction, transformation, and loading of data.

• Design, implement, and deploy scripts in Python and ETL processes using Azure Data Factory (ADF).

• Work with various types of data: structured, semi-structured, and unstructured.

Requirements

• Minimum of 3 years of programming experience in Python.

• At least 5 years of experience working with SQL Server and large volumes of data.

• Knowledge of event/stream-based data extraction and processing methods.

• Strong skills in SQL, Python, data modeling, and dimensional design.

• Practical experience with cloud architectures and messaging systems.

• Ability to develop and deploy ETL pipelines using Databricks and PySpark.

• Deep knowledge of cloud data warehouses such as Synapse, Redshift, Snowflake, or ADF.

• Familiarity with CI/CD processes and deployment.

• Possession of cloud certifications will be an advantage.

• Experience with Airflow, AWS Lambda, Glue, and Step Functions will also be a plus.

• English language proficiency at least at B2 level.

Benefits

• Work in a highly skilled team in a friendly and informal atmosphere.

• Opportunity to work from a cozy office in Warsaw.

• 20 paid working days of vacation per year with 100% paid sick leave.

• Additional 5 vacation days per year.

• Provision of necessary equipment for work.

• Health insurance (after the probation period).

• Partial reimbursement of educational expenses (courses, certifications, professional events, etc.).

• Twice-weekly English and Polish language classes (online).

Dear Candidates, due to a high volume of applications, only selected candidates will be contacted for interviews. We appreciate your understanding. Thank you for considering a career with us.

Required profile

Experience

Level of experience: Mid-level (2-5 years)
Industry: Human Resources, Staffing & Recruiting
Spoken language(s): English

Other Skills

  • Collaboration
  • Troubleshooting (Problem Solving)
