At Telstra, we believe the more connected people are, the more opportunities they have. That's why we help create a brilliant connected future for everyone, every day.
Being part of Networks & IT means you will be part of a team that focuses on extending our network superiority to enable the continued execution of our digital strategy.
The role with us
As a Data Engineering Analyst, you are responsible for ingesting and transforming data into data platforms, so that informed, data-driven decisions can be made to improve business outcomes and gain a competitive advantage. You will play a key role in building Telstra’s data capability with high-quality, reliable and readily available data.
About you
To be successful in the role, you need to have:
Key Responsibilities:
Work with solution design to provide technical input on ingestion, transformation, storage, and reporting, as part of a team driving the successful delivery of features that scale to handle the business’s growing demands.
Participate in agile scrum ceremonies such as sprint planning, backlog grooming, estimation, and review.
Design and automate data pipelines that meet business requirements as well as security and architecture guidelines.
Work across teams to define efficient solutions and implementation strategies, creating cohesive solutions across data products.
Participate in proofs of concept and effectively transition those concepts into production at scale through engineering and deployment.
Drive agile thinking and ways of working to maximise collaboration and value creation.
Must Have
Years of experience in a relevant software development role (essential)
Proficiency in programming with Spark (Python)
REST API Development experience
Experience programming in Python and working with its common libraries
Strong SQL skills
Skills in data modelling across big data, cloud, relational (RDBMS) and/or NoSQL databases
Experience working with data lake architectures and end-to-end testing.
Understanding of ETL mapping specifications and experience writing SQL scripts that implement ETL transformation logic.
Experience in cloud environments, preferably Azure.
Highly desirable:
Working knowledge of Kubernetes
Knowledge of Kafka, NiFi, NetApp (S3), HDFS and YARN
Experience with Apache Spark RDDs, DataFrames and Spark SQL using Python
Knowledge of CI/CD and DevOps practices, and tools such as Git, Bitbucket, Bamboo, Jenkins and Azure DevOps.
A bachelor’s or master’s degree in information technology or a similar field
Why join us?
Your work will expose you to innovative thinking, technologies and global best practice. As we grow, you'll grow, building your own valuable talents and skills here with us.
Interested?
If this opportunity sounds like a perfect fit for you, we'd encourage you to apply!
___________________________
To learn more about how we support our people, including accessibility adjustments we can provide you through the recruitment process, visit tel.st/thrive.
Recruitment Start Date – 30/09/2022
Recruitment End Date – 28/10/2022