*Please note that this is an on-site role based in Kuala Lumpur, Malaysia.
*Candidates may be based anywhere in the world (ideally Europe), as long as they are willing to relocate to Kuala Lumpur, Malaysia.
Are you passionate about using data to drive innovative solutions in a fast-paced environment? We're looking for a Senior Data Engineer to join a cutting-edge technology company based in Kuala Lumpur!
As a Senior Data Engineer, your mission will be to support data scientists, analysts, and software engineers by providing maintainable infrastructure and tooling for end-to-end solutions. You’ll work with terabyte- to petabyte-scale data, supporting multiple products and data stakeholders across global offices.
Key Responsibilities
* Design, implement, operate and improve the analytics platform
* Design data solutions using various big data technologies and low latency architectures
* Collaborate with data scientists, business analysts, product managers, software engineers and other data engineers to develop, implement and validate deployed data solutions
* Maintain the data warehouse with timely and quality data
* Build and maintain data pipelines from internal databases and SaaS applications
* Understand and implement data engineering best practices
* Set, uphold, and teach standards for code maintainability and performance in code submitted and reviewed
* Mentor junior engineers on the job
Qualifications
* Expert at writing and optimising SQL queries
* Proficiency in Python, Java or similar languages
* Familiarity with data warehousing concepts
* Experience in Airflow or other workflow orchestrators
* Familiarity with basic principles of distributed computing
* Experience with big data technologies like Spark, Delta Lake or others
* Proven ability to innovate and lead delivery of complex solutions
* Excellent verbal and written communication - proven ability to communicate with technical teams and summarise complex analyses in business terms
* Ability to work with shifting deadlines in a fast-paced environment
Desirable Qualifications
* Deep expertise in ETL optimisation: designing, coding, and tuning big data processes using Spark
* Knowledge of big data architecture concepts like Lambda or Kappa
* Experience with streaming workflows to process datasets at low latencies
* Experience in managing data - ensuring data quality, tracking lineage, and improving data discovery and consumption
* Sound knowledge of distributed systems - able to optimise partitioning, distribution, and massively parallel processing (MPP) of high-level data structures
* Experience in working with large databases, efficiently moving billions of rows, and complex data modelling
* Familiarity with AWS is a big plus
* Experience in planning day-to-day tasks, knowing how and what to prioritise, and overseeing their execution
Competitive salary and benefits
We'll cover visas, flight tickets, and 1-2 months of accommodation to help you settle in.
What’s Next:
1. Interview with our Talent Acquisition team (virtual or face-to-face)
2. Technical sample test (discussed in the technical round)
3. Final interview with the Hiring Manager (virtual or face-to-face)