- 12+12+12 Month Federal Government Contract
- Baseline Clearance minimum requirement, with the ability to obtain a Top Secret Positive Vetting Clearance
- CBR, SYD, MEL, BRI, ADE (highly secure locations)
Our Federal Government Client is looking for suitable candidates who either hold or are willing to obtain a Top Secret Positive Vetting Clearance (Recruitment Hive recommends you research what is required to obtain this clearance before applying) to fill various specialised contract positions:
The Data Warehouse/ETL Developer or Data Analytics Engineer will be responsible for unlocking the value of data by managing, transforming and operationalising data for use by business areas. This includes the development and sustainment of data pipelines utilising Enterprise Data Platforms. They will also manage end-to-end data management processes, using appropriate tools and techniques that conform to agreed process standards and industry-specific regulations. Working in a diverse team environment, this will involve sharing outcomes and experience with others to support development and growth.

This role sits with the Data Engineering team, which supports the Enterprise by providing data that is fit-for-purpose, complete, accurate, and consistent for business intelligence and data analytics use-cases. The team employs contemporary and innovative tools and techniques to automate the ingestion of data into Enterprise Data Platforms. Part of the role will be ensuring that all development is in line with the standard engineering and technical approaches employed by the team, and that risks associated with deployment are adequately understood and documented.
Required skills for suitable candidates:
- Ability to design, implement, and maintain data engineering solutions in SQL and Python.
- Ability to build tooling and techniques that are well tested, well documented and provide consistency in data engineering approaches.
- Ability to fuse data sources using various data pre-processing techniques such as transformation, integration, normalisation, and feature extraction to create useful data structures.
- Ability to carry out data quality checking and remediation.
- Demonstrated ability to develop technical documentation and track work in collaboration tools such as Confluence and JIRA.
- Experience in platform engineering, such as containerisation.
- Experience integrating and building APIs.
- Experience in data storage platforms such as online analytical processing (OLAP) systems, relational databases (RDBMS), Hadoop, document stores or graph databases.
- Experience in implementing data flows in tools such as Apache NiFi.
For a full copy of the job description please contact Jon @ Recruitment Hive on 02 6299 1*** or click the apply button.
Please note these roles close on Friday 8th December, for positions beginning in 2024.
Job ID: JB8618