* 4+ years of experience in software development and data engineering.
* Proficiency in Python/PySpark, Scala, SQL, and Spark/Spark Streaming.
* Experience with Databricks and familiarity with Big Data tools.
* Knowledge of Azure, Kafka, and Linux is preferred.
Key responsibilities:
Maintain and support the application and develop data ingestion pipelines.
Collaborate with Business Analysts, Architects, and Senior Developers to establish application frameworks.
Conduct thorough requirement analysis and ensure detailed unit testing is performed.
Work with QA and automation teams, and manage code merges and releases.
Coders Brain is a global leader in IT services, digital and business solutions that partners with its clients to simplify, strengthen and transform their businesses. We ensure the highest levels of certainty and satisfaction through a deep-set commitment to our clients, comprehensive industry expertise and a global network of innovation and delivery centers.
Our success comes from how closely we integrate with our clients.
* Maintain and support the application and develop data ingestion pipelines; a Databricks background is required.
* Develop and integrate software applications using suitable development methodologies and standards, applying standard architectural patterns, taking into account critical performance characteristics and security measures.
* Evaluate new features and refactor existing code.
* Must be willing to flex work hours to support application launches and manage production outages when necessary.
* Understand the requirements thoroughly and in detail, and identify any gaps in them.
* Ensure that detailed unit testing is done, handle negative scenarios, and document them.
* Work with the QA and automation teams.
* Follow best practices and document the process.
* Manage code merges and releases (Bitbucket).
* Collaborate with Business Analysts, Architects and Senior Developers to establish the physical application framework (e.g. libraries, modules, execution environments).
* Good data analysis skills
Must have the following experience:
Python/PySpark
Scala
SQL
Spark/Spark Streaming
Databricks
Preferred to have the following experience:
Java/C#
Azure
Kafka
Azure Data Factory
Big Data Tool Set
Linux
Job Location – Remote
Years of experience – 4+ years
Required profile
Experience
Spoken language(s):
English