Working Arrangements: The role is Canberra-based; however, interstate candidates are encouraged to apply and will be considered. The Agency will consider and negotiate offsite (work-from-anywhere) working arrangements on a case-by-case basis. Where an offsite working arrangement is agreed upon, successful candidates will be required to:
- Attend the Agency's Canberra office for the first week of their engagement to undertake compulsory training and to meet team members and various Agency personnel
- Travel to Canberra at least once per month for a minimum of 4 working days for key stakeholder, team, and organisational events.
About the Role
Client requires a Machine Learning Platform Engineer to build and maintain the Company's Machine Learning Operations (MLOps) platform. This position is well suited to a candidate with strong Software Engineering or Data Engineering expertise who has had exposure to contemporary Machine Learning practices and technologies.
Indicative duties include, but are not limited to:
- Contributing to the design, development, deployment and maintenance of the Company's MLOps platforms and infrastructure
- Contributing to the design, development and integration of secure, robust and performant production ML systems
- Providing guidance to MLOps engineers, software developers and data scientists on our MLOps processes, platforms and systems
- Developing new and improving existing DevSecOps and MLOps pipelines
The ideal candidate will have skills and experience in:
- Machine Learning, software engineering, or DevOps
- Machine Learning production platforms in large organisations
- MLOps tooling such as MLflow, Kubeflow, Seldon, KServe, Docker, Kubernetes, and Git
- Understanding of Machine Learning concepts and algorithms
- Data engineering and data wrangling
- Software engineering principles and best practices
- Problem-solving and troubleshooting
The successful candidate will work under the direction of the AI Engineering Lead to achieve the organisational goals outlined in the Company's Machine Learning and AI Engineering roadmaps.
Essential criteria
- Expertise in developing, deploying and maintaining production grade ML pipelines and systems
- Expertise in cloud-native ML tooling such as Seldon, KServe, Kubeflow and/or other mature cloud-native projects
- Expertise in software development with Python (or a similar language) and software engineering best practices
- Expertise in Docker containers and a container orchestration system such as Kubernetes or OpenShift
- Understanding of, or expertise in, Machine Learning within the Python ML ecosystem
- Experience working with low-level languages for model optimisation, or experience with model optimisation frameworks
- Experience with cloud-native observability tools such as Prometheus, Grafana, Loki and Jaeger
- Experience in developing DevSecOps pipelines with tools such as GitLab CI/CD, GitHub actions or similar
If this role aligns with your skills and aspirations, apply now for immediate consideration. Contact Archna Singh at 02 6245 1708, quoting Job Reference: # 259570
The application deadline for this position is 20 December 2023.
Please note that only candidates meeting the specified criteria will be contacted. Your interest in the position is greatly appreciated.
Diversity and inclusion are strongly supported at Peoplebank. People of all nationalities, gender identities, and cultural backgrounds, including Aboriginal and Torres Strait Islander Peoples, are encouraged to apply.
Acknowledgements
Peoplebank acknowledges the Traditional Owners of Country. We pay our respects to the Aboriginal and Torres Strait Islander cultures, and to elders past and present, whose land we stand upon today.
We welcome all cultures, all religions, all colours, all beliefs, all ages, all sizes, all types, all people.