
Data Engineer (m/w/x)

Benefits: extra holidays, extra parental leave
Remote: Full Remote

Offer summary

Qualifications:

  • Degree in Computer Science or related field
  • Knowledge of cloud platforms and data warehouse concepts
  • Strong Python skills for data pipelines
  • Experience with CI/CD and containerization tools

Key responsibilities:

  • Design and maintain scalable data pipelines
  • Optimize cloud-based data warehouses
sevDesk (Scaleup, 201-500 employees), https://sevdesk.de/

Job description

We at sevdesk:

With our intuitive cloud accounting software, we empower founders, self-employed individuals, and small business owners to focus on what truly matters. Our product gives them a smart accounting tool that saves time in their daily activities. As a company, we aim for excellence, continuously expand our horizons, and strive to improve both our product and our everyday work life. In pursuit of these goals, we rely on our strong culture, innovative spirit, and unwavering passion. In the future, we hope to count you among our dedicated team members.

Your Mission

Your mission is to design, build, and maintain a scalable, reliable, and secure data platform that empowers teams with high-quality data, enabling smarter decisions, streamlined processes, and innovation across the organization.


Here’s how you’ll make an impact:

  • Design and Maintain Scalable Data Pipelines: Build and manage robust data pipelines using modern tools (e.g., Python, Meltano, Singer.io) to support processing and analytics needs.

  • Optimize Cloud-Based Data Warehouses: Ensure high performance, scalability, and data integrity in cloud-based data warehouses (e.g., Snowflake, BigQuery, Redshift).

  • Leverage Cloud Infrastructure for Data Systems: Implement event tracking and processing systems using cloud services (e.g., AWS Kinesis, SQS, EC2) and manage infrastructure as code (e.g., Terraform, CloudFormation).

  • Deploy and Orchestrate Data Services: Use containerization and orchestration tools (e.g., Kubernetes, Docker) to manage and deploy data services and automate workflows (e.g., Dagster, Airflow, Prefect).

  • Enable Data Transformation and Collaboration: Develop efficient, documented code for data processing (e.g., Python, SQL) and collaborate with BI teams to ensure pipelines deliver actionable business insights.


Your Qualifications:

  • Educational Background or Practical Experience: Degree in Computer Science, Information Systems, Mathematics, or a related field, or equivalent hands-on experience.

  • Expertise in Data and Cloud Solutions: Knowledge of data warehouse concepts (e.g., Snowflake, BigQuery) and cloud platforms/tools for event-driven and data-processing architectures (e.g., AWS, GCP, Azure).

  • Proficiency in Automation and Infrastructure: Experience with infrastructure as code, CI/CD pipelines (e.g., GitHub Actions, Jenkins), and containerization/orchestration frameworks (e.g., Kubernetes, Docker).

  • Technical Skills for Data Workflows: Strong Python skills for developing data pipelines and workflows, with an analytical focus on delivering high-quality, reliable data.

  • Collaborative and Communication Skills: Ability to work effectively with cross-functional teams, demonstrating strong communication skills and a stakeholder-focused problem-solving approach.

Nice-to-Have Skills:

  • Experience with event tracking frameworks (e.g., Snowplow, Mixpanel) and orchestration tools for workflow automation (e.g., Dagster, Airflow).

  • Understanding of data sync frameworks and tools for integrating diverse data sources (e.g., Meltano, Stitch).

  • Awareness of best practices in security and compliance for cloud-based data systems.


Required profile

Spoken language(s): English

Other Skills

  • Collaboration
  • Communication
  • Problem Solving
