We are looking for a passionate, talented, and inventive Applied Scientist with a strong machine learning background to help build the industry-leading language technology that powers Rufus, our AI-driven search and shopping assistant, which helps customers at every step of their shopping journey. This role focuses on developing conversation-based, multimodal shopping experiences using multimodal large language models (MLLMs), generative AI, advanced machine learning (ML) technologies, and computer vision.

Our mission in conversational shopping is to make it easy for customers to find and discover the best products for their needs by helping with product research, providing comparisons and recommendations, answering product questions, enabling shopping directly from images or videos, providing visual inspiration, and more. We do this by pushing the state of the art in Natural Language Processing (NLP), Generative AI, MLLMs, Natural Language Understanding (NLU), ML, Retrieval-Augmented Generation (RAG), Computer Vision, Responsible AI, LLM Agents, Evaluation, and Model Adaptation.

Key job responsibilities

As an Applied Scientist on our team, you will be responsible for the research, design, and development of new AI technologies that will shape the future of shopping experiences. You will play a critical role in developing multimodal conversational systems tailored to customer needs, in particular systems based on large language models, information retrieval, recommender systems, and knowledge graphs. You will handle Amazon-scale use cases with significant impact on our customers' experiences, and you will collaborate with scientists, engineers, and product partners locally and abroad. Your work will include inventing, experimenting with, and launching new features, products, and systems.

You will:
- Perform hands-on analysis and modelling of enormous multimodal datasets to develop insights into how best to help customers throughout their shopping journeys.
- Use deep learning, ML, and MLLM techniques to create scalable, language-model-centric solutions for shopping assistant systems, built on a rich set of structured and unstructured contextual signals.
- Invent new methods for understanding, extracting, retrieving, and summarising contextual information that enable effective grounding of MLLMs, balancing memory, compute, latency, and quality.
- Drive end-to-end MLLM projects with a high degree of ambiguity, scale, and complexity.
- Build models, run offline experiments and A/B tests, and optimise and deploy your models to production, working closely with software engineers.
- Establish automated processes for large-scale data analysis and generation, ML model development, model validation, and serving.
- Communicate results and insights to both technical and non-technical audiences through presentations and written reports, and publish your work at internal and external conferences.

About the team

You will be part of a dynamic science team based in London, working alongside over 100 engineers, designers, and product managers to shape the future of AI-driven shopping experiences at Amazon. The team works on every aspect of the shopping experience, from understanding multimodal user queries to planning and generating answers that combine text, image, audio, and video.