Delivery Consultant - Machine Learning (GenAI), ProServe SDT North
AWS Professional Services is a unique organization. Our customers are the most advanced companies in the world. For these customers, we build world-class, cloud-native IT solutions that solve real business problems and help them achieve business outcomes with AWS. Our projects are often unique, one-of-a-kind endeavors that no one has done before.
At Amazon Web Services (AWS), we are helping large enterprises build AI solutions on the AWS Cloud. We apply predictive technology to large volumes of data across a wide spectrum of problems. AWS Professional Services works together with AWS customers to address their business needs using AI solutions.
As a Delivery Consultant - ML, you will innovate, (re)design, and build cloud-native, business-critical AI solutions with our customers. You will leverage the global scale, elasticity, automation, and high-availability features of the AWS platform. You will build customer solutions with Amazon SageMaker, Amazon Bedrock, Amazon Elastic Compute Cloud (EC2), AWS Data Pipeline, Amazon S3, AWS Glue, Amazon DynamoDB, Amazon Relational Database Service (RDS), Amazon EMR (Elastic MapReduce), Amazon Kinesis, AWS Lake Formation, and other AWS services.
You will collaborate across the entire AWS organization, with other consultants, customer teams, and partners, on proofs of concept, workshops, and complex implementation projects. You will innovate and experiment to help customers achieve their business outcomes and deliver production-ready solutions at global scale. You will lead projects independently but also work as a member of a larger team. Your role will be key to earning customer trust.
This is a customer-facing role. When appropriate and safe, you will be required to visit our office and travel to client locations to deliver professional services.
Key Responsibilities
1. Invent and build AI solutions that solve complex problems, scale globally, guarantee performance, and enable breakthrough innovations.
2. Use AWS AI/ML services (e.g., Amazon Bedrock), ML platforms (e.g., Amazon SageMaker), and frameworks (e.g., MXNet, PyTorch, Spark ML, scikit-learn) to help our customers build AI/ML solutions.
3. Work with customers and partners, guiding them through planning, prioritization, and delivery of complex transformation initiatives, while collaborating with relevant AWS Sales and Service Teams.
4. Assist customers in delivering AI/data projects from beginning to end, including understanding the business need, aggregating data, exploring data, building and validating predictive models, and deploying completed models with concept-drift monitoring and retraining to deliver business impact.
5. Collaborate with other Professional Services consultants (GenAI, Big Data, IoT, HPC) to analyze, extract, normalize, and label relevant data, and with our Professional Services engineers to operationalize customers' models after they are prototyped.
6. Help customers define their business outcomes and guide their technical architecture and investments.
7. Create and apply frameworks, methods, best practices, and artifacts that will guide our customers; publish and present them in large forums and across various media platforms.
8. Contribute to enhancing and improving AWS services.
BASIC QUALIFICATIONS
1. 3+ years of industry experience as an ML practitioner with hands-on implementation of ML systems, including building, validating, and deploying GenAI models.
2. 7+ years of professional experience in a business environment, including experience with IT platform implementation in a highly technical or analytical role.
3. 3+ years of experience handling large datasets, and strong software development skills with proficiency in Python and at least one additional programming language (e.g., Java, C#).
4. Strong understanding of DevOps practices with practical, hands-on application, and a keen interest in AI/ML solutions. Strong verbal and written communication skills and the ability to lead effectively across organizations.
PREFERRED QUALIFICATIONS
1. 3+ years of technical experience, with knowledge of the AWS AI/ML technology stack and of Generative AI trends, patterns, and anti-patterns.
2. 3+ years of application development experience with serverless technologies, and experience training distributed ML models on CPU and GPU hardware.
3. Experience serving ML models through real-time APIs and deploying production-grade machine learning solutions on public cloud platforms.
4. Knowledge of vertical use cases for large language models in industries like finance, healthcare, manufacturing, etc.
Amazon is an equal opportunities employer. We believe passionately that employing a diverse workforce is central to our success. We make recruiting decisions based on your experience and skills. We value your passion to discover, invent, simplify, and build.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Amazon is committed to a diverse and inclusive workplace. Amazon is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.