Operating as one of Australia’s leading insurance firms, this company is investing heavily in its technology capabilities and is looking to enhance its customer experience to stand out from competitors.
In this Data Engineer role, you will build Data Platforms in a multi-cloud environment that are aligned to the data strategy and that enable and enhance critical and influential data products.
KEY RESPONSIBILITIES
- Maintain and operate the Data Lake leveraging Airflow, Spark, Kubernetes and Object Storage.
- Develop high-quality, scalable Data Platforms on GCP in line with the domain/product/feature roadmap.
- Design and develop modernised data pipelines for customers, colleagues and partners that are secure, robust and scalable, and deliver them at speed.
- Collaborate with Data SMEs to build self-service capability through the delivery of consistent, reusable and fit-for-purpose data interfaces/APIs.
- Contribute to the maintenance of Confluent Kafka environments including VM and Kubernetes based deployments in a multi-cloud environment.
KEY REQUIREMENTS
- Experience with Kubernetes and public cloud platforms such as GCP or AWS.
- 5+ years of hands-on experience as a Cloud Data Engineer.
- Experience working on or building Big Data platforms (Data Lake, Data Mesh, Data Pipelines and APIs).
- Excellent verbal and written communication skills.
- Excellent stakeholder management skills.
This is the perfect time to join this leading insurer and make your mark on hugely exciting projects. They have a flexible working from home policy and a mature output driven environment. This role can be Sydney, Melbourne or Christchurch based.
This is a great opportunity to take your career to the next level and make a significant impact within the business and the broader industry! Apply now!
How to apply:
Click apply or submit your CV to Eirene Andre
- eirene@sustainrecruit.com
#LI-EA1