As a Data Engineer, you will support all aspects of the management and effective operation of our Enterprise Data Platforms, including data ingestion, pipeline management, system integration and platform administration.
You will support the delivery of business outcomes by designing and delivering technical and data solutions, and help identify opportunities to improve or innovate.
You will be flexible and creative in ensuring the best available data is used in each situation, and that data has been appropriately tested, validated and transformed before being distributed.
You will understand how data moves through a business, for example through Extract, Transform, Load (ETL) processes, and will contribute to interrogating, mapping, validating and documenting data transformations between raw and modelled data layers.
This role will primarily focus on the backend of a data warehouse application stack built on AWS using API Gateway, Python Lambda functions, S3 and Redshift. Working with large, live databases, you will be a critical driver of the robustness and efficiency of our data-centric systems.
As a Data Engineer, your role involves leveraging expertise in data handling, curation, and conformity to support solution design and development. You will enable data analysis to drive tangible business benefits and aid colleagues in developing solutions for data capture, curation, and analysis.
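As a rough illustration of how a stack like the one described above often fits together (a sketch only, not a description of this team's actual implementation), the example below shows an S3-triggered Python Lambda that loads a newly landed file into Redshift via the Redshift Data API. The cluster, database, table and IAM role names are placeholders.

```python
import os
import urllib.parse

import boto3

# Placeholder configuration -- these names are illustrative, not real resources.
REDSHIFT_CLUSTER = os.environ.get("REDSHIFT_CLUSTER", "example-cluster")
REDSHIFT_DATABASE = os.environ.get("REDSHIFT_DATABASE", "analytics")
REDSHIFT_DB_USER = os.environ.get("REDSHIFT_DB_USER", "etl_user")
TARGET_TABLE = os.environ.get("TARGET_TABLE", "staging.events")
COPY_IAM_ROLE = os.environ.get("COPY_IAM_ROLE", "arn:aws:iam::123456789012:role/example-copy-role")

redshift_data = boto3.client("redshift-data")


def handler(event, context):
    """Triggered by an S3 PUT event; issues a Redshift COPY for each new object."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        copy_sql = (
            f"COPY {TARGET_TABLE} "
            f"FROM 's3://{bucket}/{key}' "
            f"IAM_ROLE '{COPY_IAM_ROLE}' "
            "FORMAT AS CSV IGNOREHEADER 1;"
        )

        # The Redshift Data API runs the statement asynchronously.
        redshift_data.execute_statement(
            ClusterIdentifier=REDSHIFT_CLUSTER,
            Database=REDSHIFT_DATABASE,
            DbUser=REDSHIFT_DB_USER,
            Sql=copy_sql,
        )

    return {"statusCode": 200}
```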
Required Skills:
To hit the ground running, you will need:
- A Bachelor's degree in Computer Science (or equivalent)
- Hands-on development experience in data warehousing on AWS
- Experience in data integration and data sourcing activities
- Experience developing data assets to support optimised analysis for customer and regulatory outcomes
- Experience using tools and practices to build, verify and deploy solutions efficiently
- Experience in Agile software development, including GitHub, Confluence and Rally
- Solid AWS experience
- AWS Data Engineer certification
- Experience in data warehousing, ETL, data modelling and reporting
- Great communication and interpersonal skills
- Ability to work effectively within a team and autonomously
- Experience with cloud technologies, especially AWS (S3, Redshift, Airflow), and with DevOps and DataOps tools (Jenkins, Git, Erwin)
- Demonstrated experience with SQL and Python
- Knowledge of UNIX environments
Principal Accountabilities:
- Interpret data from disparate sources and analyse results using statistical techniques.
- Design and develop scalable solutions and forecasting models using AI tools, machine learning models and generative AI models.
- Establish and maintain collaboration with research and business teams to converge on the best solutions.
- Develop machine learning models to gain insights and provide information on trends and patterns.
- Build data pipelines to take machine learning models to production.
- Analyse forecast results, pinpoint weaknesses in existing models and experiment with ideas for improving them.
- Coordinate projects with stakeholders across requirements, solution design and implementation.
- Develop new analytics and visualisation prototypes with the business.
- Improve the quality of data models, data flows and data processing across data systems, reporting, analytics and integration solutions.
Note: This job description reflects essential functions; it does not prescribe or restrict the number or types of tasks that may be assigned or reassigned during the conduct of business.
Eligibility
To be eligible to apply for this position you must meet the below eligibility criteria.
- Be an Australian Citizen at the closing date of application.
- Commencement of employment is subject to the successful applicant undergoing and satisfying pre-employment screening, which includes a police history check.
Benefits
- Flexible work options
- Career Development
- Salary Packaging
- Modern Facilities
- Diverse Work Culture