Senior Big Data Engineer
Technology-driven supply chain start-up
ETL, Data cleansing & product optimisation
About Our Client
Our client is a technology-driven supply chain company with business across the globe. They are currently hiring an analytical and innovative candidate to perform big data cleansing, engineering and product optimisation.
As Senior Big Data Engineer, you are responsible for
- Integrate large, structured and unstructured data volumes into the existing platforms
- Design, build, deploy and manage end-to-end data pipelines for batch and stream processing that can adequately handle the needs of a rapidly growing data-driven company
- Build out scalable and reliable ETL pipelines and processes to ingest data from a large number and variety of data sources
- Further develop the physical implementation of the logical data model in the data lake
- Implement solutions for reference data and master data management within the context of the mobility data business
- Measure data quality and carry out improvement activities to reach the required quality levels
- Work across various stakeholders to ensure smooth production deployment of data pipelines and adherence to data governance policies
- Represent the Data Architecture team in selected data architecture, data modeling and metadata management work teams inside Mobility
- Proactively identify and suggest solutions to improve data engineering processes
The Successful Applicant
- University degree in Mathematics, Computer Science or related discipline
- At least 5 years of relevant work experience as a Big Data Engineer
- Familiar with cloud technologies, e.g. AWS
- 3+ years of experience with data streaming pipelines (Kafka, Spark, Flume); familiar with Airflow
- Experienced with ETL / ELT scripting, and building and maintaining an integrated enterprise Data Lake system
- Experience working with the Hadoop technical stack and architecture, and building data-intensive applications and pipelines
- Strong experience programming with Scala, Python, Pig, MapReduce and Hive
- Experience in Delta Lake, Schema Registry, Splunk, Impala is a plus
- Have passion and enthusiasm to master new technology
- A lateral thinker: passionate, innovative and creative
- Ability to work in a fast-paced and dynamic environment.
- Able to work effectively in a multi-tasking environment and to prioritize competing tasks
- Passion for building tools and automating everything
What's on Offer
Our client provides an attractive benefits package and sustainable career development opportunities.