KEY RESPONSIBILITIES

Primary Objectives

• Develop ETL/ELT solutions using Azure cloud services and open-source tools to load data from multiple sources into the HSC Cloud Data platform within Databricks.
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, redesigning infrastructure for greater scalability, etc.

Main Responsibilities

• Design and develop data engineering assets and scalable engineering frameworks to support the data demands of various business units and internal data analytics activities.
• Evaluate the importance of data and manage the production of data pipelines.
• Code, test, and document new or modified data models and ETL/ELT tools to create robust and scalable data assets for reporting and analytics.
• Build a data engineering framework to support data migration to Azure cloud technologies and deliver new projects in line with the target architecture within the Azure data cloud service.
• Expand and increase data platform capabilities to resolve new data problems and challenges by identifying, sourcing, and integrating new data.
• Peer review code and promote a DevOps culture within the data team.
• Implement solutions that adhere to architecture best practices.
• Contribute to our ambition to develop a best-practice Data and Analytics platform, leveraging next-generation cloud technologies.
• Define and build the data pipelines that will enable faster, better data-informed decision-making within the business.
• Work on cross-functional solutions focused on business and process improvement.
• Enhance the quality of data following the guidelines provided by the HSC Data Quality Framework.
• Catalogue the data dictionary and maintain it on an ongoing basis.
• Maintain data within the warehouse environment, supporting the delivery of quality data outputs.
• Maintain the platform by performing regular tasks such as user management and auditing, resource utilisation monitoring, alert monitoring, and code reviews.
• Modify existing ETL processes to achieve automation where possible and to accommodate changes in data structure.
• Have a good understanding of data platform best practices to achieve economies of scale, cost reduction, and efficiencies.

JOB REQUIREMENTS

1. Education level
• Bachelor’s degree in information technology, business, or a related field, or an equivalent combination of education and experience, is required.
2. Knowledge & Experiences
• 1-3 years of experience in DevOps, DataOps, or data engineering roles.
• Hands-on experience building ETL/ELT solutions for large-scale data pipelines on cloud platforms, and hands-on experience with SQL Server and Snowflake/Databricks data warehouses or equivalent.
• Hands-on experience in data processing (using Spark and Python) for cloud data platforms, and in scheduling and monitoring ETL/ELT jobs in Azure Data Factory or equivalent tools.
• Hands-on experience in solution architecture, data ingestion, query optimisation, data segregation, ETL/ELT, and CI/CD frameworks.
• Experience using DataOps to develop data flows and support the continuous use of data.
• Experience with cloud-based technologies.
• Experience in developing technical and support documentation.
3. Technical skills
• Ability to write various documents such as functional requirements.
• Good understanding of databases and data structures.
• Understanding of privacy laws and regulations.
• Basic level proficiency with Microsoft Word, Excel, Access, Project, and Outlook.
4. Soft skills
• Strong analytical and time management skills.
• Excellent written and verbal communication skills.

Salary: Negotiable
Department: Operations
Application deadline: 20/05 - 23/06/2024
