Azure DataOps Engineer

- Data Distribution & CI/CD -

  • Additional health insurance
  • Regular career and feedback meetings
  • Bonus programme
  • Modern office
  • International projects

About Us

We are a dynamic, forward-thinking team focused on delivering scalable, innovative Big Data solutions that give organizations actionable insights. Leveraging advanced cloud technologies, we provide a robust platform for data management and analytics that drives business growth and informed decision-making. Our team specializes in developing and maintaining a state-of-the-art Big Data platform built on Azure PaaS components, and we work closely with stakeholders to understand their needs and ensure the platform's scalability, reliability, and security.

About the Role

We are looking for a detail-oriented and highly motivated professional to join our team as a DataOps Engineer. In this role, you will design and optimize data distribution strategies using cutting-edge tools such as Azure Databricks and Unity Catalog. You'll also take ownership of CI/CD pipelines and contribute to maintaining a seamless and efficient data infrastructure.

Key Responsibilities

  • Design, implement, and maintain data distribution solutions with Azure Databricks, Unity Catalog, and Azure Data Factory.
  • Monitor and optimize data pipelines for performance, accuracy, and scalability.
  • Collaborate with data science teams to address their requirements and develop tailored solutions.
  • Develop, refine, and maintain CI/CD processes for the data distribution pipeline.
  • Troubleshoot and resolve data-related issues, ensuring data quality and reliability.
  • Create and update technical documentation related to data distribution workflows.
  • Evaluate and test new features in tools like Azure Databricks and Azure Data Factory, recommending their adoption when beneficial.

Your Profile

  • Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field.
  • 3-4 years of hands-on experience in a DataOps or related role, using tools like Azure Data Factory, Databricks, and Python.
  • Proficiency in automation and scripting, with experience in Azure DevOps pipelines.
  • Solid understanding of Big Data concepts and strong software engineering skills for pipeline maintenance and optimization.
  • Familiarity with Unity Catalog for efficient data distribution is a plus.
  • Fluent English communication skills, both written and spoken.

Nice to Have

  • Experience with event-driven architectures (e.g., Kafka, RabbitMQ).
  • Hands-on experience with infrastructure as code tools (e.g., Terraform).
  • Familiarity with Data Catalog solutions and Big Data integration.
  • Relevant certifications from Microsoft or similar vendors.
  • Strong team collaboration and communication skills.

If you are interested in this challenging position, we look forward to receiving your comprehensive application for ref. no. 105,013, preferably through our ISG career portal or via email.

Visit isg.com/jobs/search, where you can find new job offers every day.