Work in a cross-functional team of skilled engineers building data products that turn data into actionable insights, helping to manage and optimise large multi-tenanted Kubernetes clusters and driving the automated CI/CD strategy for the entire data platform. We have a strong focus on empowering engineers to drive the direction of the platform, delivering exciting projects and solving new technical problems. A flexible hybrid working strategy will enable you to work either from home or from the office, whichever suits you best! Opportunities are available for all experience levels, with dedicated learning time and mentoring to help you grow.
What you'll do:
- Grow with the role; roadmaps are built with the expectation that engineers will take time to learn and really understand the technology
- Design and development of end-to-end (E2E) data products through their full lifecycle, including data ingestion, transformation and visualisation layers
- Analysis and application of machine learning algorithms to business use cases such as resource usage/cost forecasting, resource optimisation and anomaly detection
- Work on the observability and optimisation of a strategic, large-scale, multi-tenanted Kubernetes platform that is central to key propositions and runs at global scale
- Be part of a team empowered to design its own solutions for turning data into actionable insights, e.g. KPI analysis
- Delivery of well-tested data pipelines through automated CI/CD
- A diverse set of opportunities across internal Software Engineering, SRE, Data and Security Engineering teams, depending on personal interests
- Involvement in the full lifecycle of evolving ideas into running solutions
What you'll bring:
- A solid understanding of computer science fundamentals and Agile methodologies
- Significant experience designing and delivering the full lifecycle of data products and/or data platforms, including data ingestion, transformation and visualisation. Experience in data governance, data protection and data quality engineering is also welcome.
- Strong experience implementing and testing automated data pipelines using common tools such as Cloud Composer/Airflow, dbt, Looker and BigQuery
- Strong experience using SQL, and experience with cloud-based data warehouses/big data platforms
- Strong data analysis skills, and experience turning data into actionable insights
- Basic programming experience; Python preferred
- Familiarity with data engineering and general engineering best practices: testing strategies, CI/CD, streaming/batch data processing, Kubernetes, Prometheus, etc.
- A desire to work in a close-knit team and find ways of collaborating that bring out the best in yourself and others