10+ years of experience in data engineering and processing.
Proficiency in Databricks, Azure Data Factory, and Snowflake.
Strong SQL skills, including writing complex queries and optimizing performance.
Experience with data governance, quality checks, and security measures.
Key responsibilities:
Design and build data ingestion pipelines using Databricks and Azure Data Factory.
Provide operational support for managing and maintaining enterprise data.
Collaborate with stakeholders to address data-related technical issues.
Implement data quality checks and document ETL processes and best practices.
Unlock Your Potential with Infinity Outsourcing

At Infinity Outsourcing, we're not just a company; we're a platform for growth, learning, and career advancement. Our "Career Column" is a dedicated space where we share insights, guidance, and opportunities to help you excel in your professional journey.

Your Success, Our Mission

At Infinity Outsourcing, we're more than just a recruitment and training company; we're your partners in career growth. Our "Career Column" reflects our commitment to your success. We believe that informed and empowered individuals are better equipped to navigate the ever-evolving professional landscape.

Explore our "Career Column" regularly to stay informed, motivated, and prepared for the opportunities that lie ahead. Your career journey begins here. Unlock your potential with Infinity Outsourcing. Join us on the path to professional excellence.
Designing and building the data ingestion pipeline
Designing, building, and maintaining large, complex data processing pipelines using Databricks and Azure Data Factory in Azure to meet enterprise data requirements
Providing operational and functional support for creating, storing, managing, and maintaining enterprise data, including incorporating policies and procedures for centrally managing and sharing data throughout the data life cycle
Identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, and re-designing for flexibility and scalability
Building the infrastructure required for extraction, transformation, and loading of data from a wide variety of cloud and on-premises data sources.
Working with stakeholders to assist with data-related technical issues and support their data infrastructure needs.
Working closely with data architecture, data governance, and data analytics teams to ensure pipelines adhere to enterprise standards for usability and performance.
Experience in building metadata-driven frameworks using Databricks
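One common shape for a metadata-driven framework is a control table whose rows drive a generic ingestion loop. A minimal sketch in plain Python is below; the field names and paths are illustrative assumptions, not from this posting, and in Databricks the metadata would typically live in a Delta control table rather than a list:

```python
# Sketch of a metadata-driven ingestion plan: each metadata row
# (source name, input path, target table, load type) is turned into
# a generic ingestion step instead of hand-coding one pipeline per source.
from dataclasses import dataclass

@dataclass
class SourceConfig:
    name: str       # logical source name (illustrative)
    path: str       # input location, e.g. an ADLS folder (illustrative)
    target: str     # destination table (illustrative)
    load_type: str  # "full" or "incremental"

def plan_ingestion(configs):
    """Turn metadata rows into an ordered list of ingestion steps."""
    steps = []
    for cfg in configs:
        mode = "overwrite" if cfg.load_type == "full" else "append"
        steps.append(f"read {cfg.path} -> write {cfg.target} ({mode})")
    return steps

configs = [
    SourceConfig("orders", "/raw/orders", "bronze.orders", "incremental"),
    SourceConfig("customers", "/raw/customers", "bronze.customers", "full"),
]
print(plan_ingestion(configs))
```

Adding a new source then becomes a metadata change (one new row) rather than a code change, which is the main appeal of this pattern.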
Experience with Snowpipe and SnowSQL
Write complex SQL queries, stored procedures, and user-defined functions in Snowflake to support data processing and analytics requirements.
Optimize SQL code and query performance by understanding query execution plans, indexing strategies, and query optimization techniques.
Implement query performance tuning and indexing strategies to improve overall system performance.
Data Quality and Governance:
Implement data quality checks and validation processes to ensure the accuracy, consistency, and integrity of data in Snowflake.
Collaborate with data governance teams to enforce data quality standards and compliance with data regulations.
Implement and maintain data security measures in Snowflake to ensure data privacy and compliance.
Document data models, ETL processes, data transformation rules, and best practices for future reference and knowledge sharing.
Participate in code reviews, quality assurance, and documentation reviews.
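The kinds of data quality checks described above (completeness, uniqueness) can be sketched as simple rule functions; the column names here are illustrative, and in practice these rules would run against Snowflake tables, for example via a testing framework, rather than over in-memory rows:

```python
# Sketch of two row-level data quality rules of the kind a pipeline
# would run before publishing data: a not-null check and a uniqueness check.
def check_not_null(rows, column):
    """Return indices of rows where `column` is missing."""
    return [i for i, row in enumerate(rows) if row.get(column) is None]

def check_unique(rows, column):
    """Return values of `column` that appear more than once."""
    seen, dupes = set(), set()
    for row in rows:
        value = row.get(column)
        if value in seen:
            dupes.add(value)
        seen.add(value)
    return sorted(dupes)

rows = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": None},
    {"id": 2, "email": "c@example.com"},
]
print(check_not_null(rows, "email"))  # indices of rows missing an email
print(check_unique(rows, "id"))       # ids that appear more than once
```

Failing rows would typically be quarantined and reported rather than loaded, so downstream consumers only see data that passed validation.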
Required profile
Experience
Industry:
Human Resources, Staffing & Recruiting
Spoken language(s):
English