Description

You will be responsible for optimising ETL pipelines, maintaining over 60 Spark jobs, and building a data lake for data scientists and analysts.

Key Responsibilities
  • Carry out efficient integration with our data providers via various API endpoints and data representation formats.
  • Optimise ETL pipelines and maintain over 60 Spark jobs.
  • Build a data lake for data scientists and analysts.
  • Enable accurate, comprehensive, and reliable data storage in our distributed data warehouses, based on the needs of other teams.