DATA ENGINEER (Real-Time)
About the client:
At RemoteStar we are currently hiring for a client who is a world-class iGaming operator offering a range of online gaming products across multiple markets, both through their proprietary gaming sites and through partner brands.
Their iGaming platform is central to their strategy, supporting a growing portfolio of over 25 online brands and serving hundreds of thousands of users worldwide. Our client embraces a hybrid work-from-home model, with the flexibility of working three days in the office and two days from home.
About the Data Engineer role:
In this role, you will contribute to the design and development of real-time data processing applications to fulfil business needs. For any technical data whiz out there, this is the perfect environment to put your skills to the test: building a consolidated data platform with innovative features and, most importantly, joining a talented and fun group of people.
What you will be involved in:
1. Developing and maintaining real-time data processing applications using frameworks and libraries such as Spark Streaming, Spark Structured Streaming, Kafka Streams, and Kafka Connect.
2. Manipulating streaming data: ingestion, transformation, and aggregation.
3. Keeping up to date with research and development of new technologies and techniques to enhance our applications.
4. Collaborating closely with Data DevOps, data-oriented streams, and other multi-disciplinary teams.
5. Working comfortably in an Agile environment across the SDLC.
6. Following the change and release management process.
7. Troubleshooting with an investigative mindset, thinking outside the box when handling problems and incidents.
8. Taking full ownership of assigned projects and tasks while also working well within a team.
9. Documenting processes thoroughly and running knowledge-sharing sessions with the rest of the team.
You're good with:
1. Strong knowledge of Scala.
2. Knowledge of or familiarity with distributed computing frameworks such as Spark and Kafka Streams.
3. Kafka Connect and streaming platforms such as Kafka.
4. Knowledge of monolithic versus microservice architecture concepts for building large-scale applications.
5. Familiarity with the Apache suite, including Hadoop modules such as HDFS, YARN, HBase, Hive, and Spark, as well as Apache NiFi.
6. Familiarity with containerization and orchestration technologies such as Docker and Kubernetes.
7. Familiarity with time-series or analytics databases such as Elasticsearch.
8. Experience with Amazon Web Services, including S3, EC2, EMR, and Redshift.
9. Familiarity with data monitoring and visualisation tools such as Prometheus and Grafana.
10. Familiarity with software versioning tools such as Git.
11. Comfort working in an Agile environment across the SDLC.
12. A solid understanding of data warehouse and ETL concepts; familiarity with Snowflake is preferred.
13. Strong analytical and problem-solving skills.
14. A good learning mindset.
15. The ability to effectively prioritize and handle multiple tasks and projects.