Data Engineer

Remote: Full Remote

Goldbelly · Information Technology & Services · SME (51 - 200 employees) · https://www.goldbelly.com/

Job description

At Goldbelly, we believe food brings people together. We connect people with their greatest culinary desires within and beyond local communities. We empower food makers of all sizes and deliver their passion to food-lovers around the country.

As a Data Engineer, you will enhance how millions of customers connect with both novel and nostalgic food experiences on our platform. By partnering with business leaders and leveraging state-of-the-art data engineering and analytics resources, you will play a key role in transforming our data infrastructure and pipelines.

Responsibilities
  • Collaborate closely with full stack engineers, machine learning engineers, data analysts, and product managers to design and optimize data pipelines and ETL processes that drive business decisions.
  • Design, develop, and maintain robust, scalable data systems to ensure seamless data integration and high availability.
  • Improve our data infrastructure by optimizing ingestion, storage, and retrieval processes to support analytics, reporting, and machine learning applications.
  • Help define and ensure data quality, governance, and security best practices are followed across all data workflows.
  • Build and maintain data models that support business insights and operational reporting.
  • Apply software engineering best practices, including CI/CD, testing, and version control, to data engineering workflows to ensure reliability and maintainability.
Qualifications
  • 3+ years of experience in data engineering and working on large-scale data systems.
  • Expert-level SQL required; proficiency in Python preferred.
  • Skilled in dimensional and normalized data modeling and transformation using tools like dbt Cloud.
  • Strong understanding of cloud-based data infrastructure (AWS and Snowflake preferred).
  • Experience with BI platforms such as Metabase and Sigma Computing for data visualization and dashboards.
  • Experience with data ingestion and ETL tools such as Fivetran.
  • Familiarity with event-driven architectures and real-time data streaming using tools like Confluent Kafka.
  • Proficient in version control with Git and collaborative development on GitHub or GitLab.

Salary range: $140,000 - $190,000 base salary range (dependent on experience level and interview performance) + equity (incentive stock options, vested over 4 years) + benefits.

Required profile

Experience

Industry: Information Technology & Services
Spoken language(s): English

Other Skills

  • Collaboration
  • Problem Solving
