About the role

With tens of millions of players sold across many countries, thousands of streaming channels, and billions of hours watched on the platform, building a scalable, highly available, fault-tolerant big data platform is critical to our success. Our Data Engineering team's mission is to build a world-class big data platform that brings value out of data for us, for our partners, and for our customers. Our goal is to democratize data, support Roku's exploding ad business, provide self-service reporting and analytics tools, and fuel existing and new business-critical initiatives.

What you'll be doing

- Build from scratch highly scalable, available, fault-tolerant data processing systems using GCP technologies, HDFS, YARN, MapReduce, Hive, Kafka, Spark, and other big data technologies. These systems must handle batch and real-time processing over tens of terabytes of data ingested every day and a petabyte-scale data warehouse.
- Perform low-level systems debugging, performance measurement, and optimization on large production clusters.
- Participate in architecture discussions, influence the product roadmap, and take ownership of and responsibility for new projects.
- Maintain and support existing platforms and evolve them to newer technology stacks and architectures.
- Build common frameworks that improve data discoverability and boost the productivity of data producers and consumers.

We're excited if you

- Are currently pursuing a Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field, graduating in December 2026 or later.
- Are proficient in at least one object-oriented language; Java, Python, or Scala preferred.
- Have experience with SQL.
- Have experience with GCP (a plus).
- Have experience with open source technologies (a plus).
- Have experience with GenAI tools/frameworks (a plus).