About us:
SuperAwesome is an award-winning technology company that powers the youth digital ecosystem, helping brands to meet their audience where they are.
We bring together proprietary advertising and gaming products, audience insights, and compliance capabilities to help build a safer internet for the next generation.
Our technology is trusted by hundreds of brands and creators and enables more effective digital engagement with almost half a billion young people worldwide every month.
As we specialise in reaching under-18 audiences, we have to be as curious, fast-paced, and creative as kids and teens. At SuperAwesome, you’ll be encouraged to own your impact, make your team more awesome, and evolve like a kid as you grow into your role.
At our core is the #SAFam, a community where every voice is valued and diversity is celebrated. We prioritise individuality and foster an inclusive workplace where everyone feels they truly belong.
What you’ll do:
● Our teams are growing rapidly and we’re hiring a Data Engineer to take our products to the next level of scale. You’ll use your data engineering experience to help define Data as a Product, improving not just the product itself but the client experience, across the stack from ETL to the presentation layer.
● You will work closely with your Tech Lead and the other engineers in your team to define the appropriate technical approach, metrics, and timelines. You will have your say in the product roadmap and help the team and the Product Manager make informed decisions and break down complex technical deliverables into simple, understandable user stories.
● You will drive collaboration, gather feedback, solve problems, and tackle challenges through testing and learning.
● Quality is key for us, so you will ensure all product components are built to an appropriate level of quality for their stage (alpha/beta/production), deliver products with the appropriate level of testing and monitoring, fail fast, and learn and iterate frequently.
● You will champion continuous improvement and always aim to improve the product your team owns and measure your impact with the appropriate tech, product, or delivery metrics.
Responsibilities:
In this role you will:
● work across the full stack depending on where you can drive the highest impact: from ETL pipelines to data warehousing to visualisations, as well as testing and cloud infrastructure.
● work with your team to design and implement features and services for the data analytics solution, and keep the design choices well-documented and explained
● train/mentor other data engineers on data engineering best practices
● use your experience to create and improve predictive data quality checks
● champion the DataOps culture, support data systems in production, including participation in our out-of-hours on-call rota
● communicate with customers to discuss new use cases or help them identify and fix data issues, and manage the end-to-end setup for them: from raw data analysis, to defining and implementing data ingestion/enrichment pipelines, to creating data visualisations
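To illustrate, the ingestion-to-enrichment flow described in the last bullet could be sketched in miniature as below. This is a hypothetical example for candidates unfamiliar with the pattern; the function names and data are illustrative, not part of SuperAwesome's actual stack.

```python
def ingest(raw_lines):
    """Extract step: parse raw CSV-style lines into structured records."""
    records = []
    for line in raw_lines:
        user_id, country = line.strip().split(",")
        records.append({"user_id": int(user_id), "country": country})
    return records

def enrich(records, country_names):
    """Enrichment step: join each record against a reference lookup table."""
    return [{**r, "country_name": country_names.get(r["country"], "unknown")}
            for r in records]

# End-to-end: raw input -> ingestion -> enrichment, ready for a visualisation layer.
raw = ["1,GB", "2,FR"]
pipeline_output = enrich(ingest(raw), {"GB": "United Kingdom", "FR": "France"})
```

In a production Databricks/PySpark setting, each step would typically be a distributed transformation rather than plain Python functions, but the shape of the pipeline is the same.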
What we are looking for:
Important technologies:
● Good understanding of data pipeline design and implementation using Databricks and Python (or a Python-based framework such as PySpark)
● Good visualisation skills using Sisense and/or other visualisation tools
● Good experience with SQL
● Good understanding of data management and/or data governance (ensuring the data arrives with the expected volume, schema, etc.)
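The volume/schema checks mentioned in the data governance bullet could look something like the following minimal Python sketch. The names (`EXPECTED_SCHEMA`, `validate_batch`) are purely illustrative assumptions, not SuperAwesome's API; real pipelines would more likely use a dedicated framework or Spark-native constraints.

```python
# Hypothetical data quality check: validate that an incoming batch matches
# an expected schema and a minimum row volume.
EXPECTED_SCHEMA = {"user_id": int, "campaign": str, "impressions": int}
MIN_ROWS = 1  # volume threshold; in practice derived from historical baselines

def validate_batch(rows):
    """Return a list of human-readable issues; an empty list means the batch passes."""
    issues = []
    if len(rows) < MIN_ROWS:
        issues.append(f"volume check failed: {len(rows)} rows < {MIN_ROWS}")
    for i, row in enumerate(rows):
        for col, col_type in EXPECTED_SCHEMA.items():
            if col not in row:
                issues.append(f"row {i}: missing column '{col}'")
            elif not isinstance(row[col], col_type):
                issues.append(
                    f"row {i}: column '{col}' is "
                    f"{type(row[col]).__name__}, expected {col_type.__name__}"
                )
    return issues

good_batch = [{"user_id": 1, "campaign": "spring", "impressions": 42}]
bad_batch = [{"user_id": "1", "campaign": "spring"}]  # wrong type + missing column
```

Running `validate_batch` over each batch flags the type mismatch and the missing column in `bad_batch` while letting `good_batch` through.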
Nice to have:
● Good understanding of microservices architecture principles
● Experience with Kedro on Databricks
● Exposure to AWS or another cloud provider
● Exposure to Airbyte for reading from multiple different data sources