Company Introduction
2038 MR Limited is a cutting-edge company specialising in tech-driven sports experiences, combining virtual reality (VR) with retailtainment to create immersive, interactive activities for global audiences. Through its innovative VR-powered sports platforms, the company offers franchise opportunities worldwide, blending athletic entertainment with high-tech gaming for retail venues, family entertainment centres (FECs), and experiential stores. Its mission is to revolutionise sports entertainment by integrating advanced technology, making sports more engaging and accessible across the globe. The company is currently increasing its investment in algorithm improvement and AI model development, and is looking for talented data engineers to join the team and explore further potential in the quest arena.
Job Function
Technology Research and Development
Job Description
- Design and implement scalable data pipelines for ingesting, processing, and transforming large datasets.
- Develop and maintain data warehouse architecture to support business analytics needs.
- Optimize and improve data architecture and systems for performance, reliability, and scalability.
- Ensure data quality by implementing monitoring, validation, and cleansing procedures.
- Collaborate with cross-functional teams to gather requirements and deliver data solutions.
- Manage ETL/ELT processes using tools such as Apache Spark, Airflow, or other relevant technologies.
- Monitor and troubleshoot production issues to ensure seamless data operations.
- Implement data security best practices to protect sensitive information.
Work Mode
Full-time, Part-time, Mixed-mode (must remain consistent within a calendar month)
Work Location
Lion Rock 72
Preferred Academic Disciplines
Arts & Humanities, Business & Management, Engineering, Media & Communication, Sciences, Technology
Preferred Skills and Knowledge
- Proficiency in programming languages such as Python, Java, or Scala.
- Strong knowledge of SQL and relational databases (e.g., PostgreSQL, MySQL, or Oracle).
- Experience with big data technologies (e.g., Hadoop, Spark, Kafka).
- Familiarity with cloud platforms (e.g., AWS, GCP, Azure) and their data services (e.g., AWS Redshift, GCP BigQuery).
- Hands-on experience with data pipeline and workflow orchestration tools (e.g., Apache Airflow, Luigi).
- Knowledge of NoSQL databases (e.g., MongoDB, Cassandra) is a plus.