Better-performing "25519" elliptic-curve cryptography

Automated reasoning and optimizations specific to CPU microarchitectures improve both performance and assurance of correct implementation.

By Torben Hansen and John Harrison
September 10, 2024

Cryptographic algorithms are essential to online security, and at Amazon Web Services (AWS), we implement cryptographic algorithms in our open-source cryptographic library, AWS LibCrypto (AWS-LC), based on code from Google's BoringSSL project. AWS-LC offers AWS customers implementations of cryptographic algorithms that are secure and optimized for AWS hardware. Two cryptographic algorithms that have become increasingly popular are x25519 and Ed25519, both based on an elliptic curve known as curve25519. To improve the customer experience when using these algorithms, we recently took a deeper look at their implementations in AWS-LC. Henceforth, we use x/Ed25519 as shorthand for "x25519 and Ed25519".

In 2023, AWS released multiple assembly-level implementations of x/Ed25519 in AWS-LC. By combining automated reasoning and state-of-the-art optimization techniques, these implementations improved performance over the existing AWS-LC implementations and also increased assurance of their correctness.
In particular, we prove functional correctness using automated reasoning and employ optimizations targeted to specific CPU microarchitectures for the instruction set architectures x86_64 and Arm64. We also do our best to execute the algorithms in constant time, to thwart side-channel attacks that infer secret information from the durations of computations.

In this post, we explore different aspects of our work, including the process for proving correctness via automated reasoning, microarchitecture (march) optimization techniques, the special considerations for constant-time code, and the quantification of performance gains.

Elliptic-curve cryptography

Elliptic-curve cryptography is a method for doing public-key cryptography, which uses a pair of keys, one public and one private. One of the best-known public-key cryptographic schemes is RSA, in which the public key is a very large integer, and the corresponding private key is the prime factors of that integer. The RSA scheme can be used both to encrypt/decrypt data and to sign/verify data. (Members of our team recently blogged on Amazon Science about how we used automated reasoning to make the RSA implementation on Amazon's Graviton2 chips faster and easier to deploy.)

[Figure: Example of an elliptic curve.]

Elliptic curves offer an alternate way to mathematically relate public and private keys; sometimes, this means we can implement schemes more efficiently. While the mathematical theory of elliptic curves is both broad and deep, the elliptic curves used in cryptography are typically defined by an equation of the form y^2 = x^3 + ax^2 + bx + c, where a, b, and c are constants. You can plot the points that satisfy the equation on a 2-D graph. An elliptic curve has the property that a line that intersects it at two points intersects it at, at most, one other point. This property is used to define operations on the curve.
For instance, the addition of two points on the curve can be defined not, indeed, as the third point on the curve collinear with the first two but as that third point's reflection around the axis of symmetry.

[Figure: Addition on an elliptic curve.]

Now, if the coordinates of points on the curve are taken modulo some integer, the curve becomes a scatter of points in the plane, but a scatter that still exhibits symmetry, so the addition operation remains well defined. Curve25519 is named after a large prime integer -- specifically, 2^255 - 19. The set of numbers modulo the curve25519 prime, together with basic arithmetic operations such as multiplication of two numbers modulo the same prime, defines the field in which our elliptic-curve operations take place.

Successive execution of elliptic-curve additions is called scalar multiplication, where the scalar is the number of additions. With the elliptic curves used in cryptography, if you know only the result of the scalar multiplication, it is intractable to recover the scalar, provided the scalar is sufficiently large. The result of the scalar multiplication becomes the basis of a public key, the original scalar the basis of a private key.

The x25519 and Ed25519 cryptographic algorithms

The x/Ed25519 algorithms have distinct purposes. The x25519 algorithm is a key agreement algorithm, used to securely establish a shared secret between two peers; Ed25519 is a digital-signature algorithm, used to sign and verify data. The x/Ed25519 algorithms have been adopted in transport layer protocols such as TLS and SSH. In 2023, NIST announced an update to its FIPS 186-5 Digital Signature Standard that included the addition of Ed25519. The x25519 algorithm also plays a role in post-quantum-safe cryptographic solutions, having been included as the classical algorithm in the TLS 1.3 and SSH hybrid scheme specifications for post-quantum key agreement.
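The chord-and-tangent addition and repeated addition (scalar multiplication) described above can be sketched in Python. This is a toy illustration only: the prime, curve constants, and base point below are invented for demonstration (and use the special case of the curve equation with no x^2 term); real curve25519 implementations use different, constant-time formulas on the Montgomery form of the curve.

```python
# Toy elliptic-curve point addition and scalar multiplication over a
# small prime field. Illustrative constants -- NOT curve25519.

P = 97            # small prime modulus (illustrative)
A, B = 2, 3       # curve: y^2 = x^3 + A*x + B (mod P)

def add(p1, p2):
    """Add two affine points; None represents the point at infinity."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                      # opposite points
    if p1 == p2:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (s * s - x1 - x2) % P       # third intersection, then reflect
    return (x3, (s * (x1 - x3) - y1) % P)

def scalar_mul(k, point):
    """Double-and-add: the result of k repeated additions of `point`."""
    acc = None
    while k:
        if k & 1:
            acc = add(acc, point)
        point = add(point, point)
        k >>= 1
    return acc
```

For example, with the base point (0, 10) on this toy curve, `scalar_mul(2, (0, 10))` yields another point on the curve, and recovering `k` from `scalar_mul(k, point)` alone is the hard problem on which the real schemes rest.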
Microarchitecture optimizations

When we write assembly code for a specific CPU architecture, we use its instruction set architecture (ISA). The ISA defines resources such as the available assembly instructions, their semantics, and the CPU registers accessible to the programmer. Importantly, the ISA defines the CPU in abstract terms; it doesn't specify how the CPU should be realized in hardware.

The detailed implementation of the CPU is called the microarchitecture, and every march has unique characteristics. For example, while the AWS Graviton 2 CPU and AWS Graviton 3 CPU are both based on the Arm64 ISA, their march implementations are different. We hypothesized that if we could take advantage of the march differences, we could create x/Ed25519 implementations that were even faster than the existing implementations in AWS-LC. It turns out that this intuition was correct.

Let us look closer at how we took advantage of march differences. Different arithmetic operations can be defined on curve25519, and different combinations of those operations are used to construct the x/Ed25519 algorithms. Logically, the necessary arithmetic operations can be considered at three levels:

1. Field operations: Operations within the field defined by the curve25519 prime 2^255 - 19.
2. Elliptic-curve group operations: Operations that apply to elements of the curve itself, such as the addition of two points, P1 and P2.
3. Top-level operations: Operations implemented by iterative application of elliptic-curve group operations, such as scalar multiplication.

[Figure: Examples of operations at different levels. Arrows indicate dependency relationships between levels.]
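As a concrete example of a level-one field operation, the sketch below multiplies two numbers modulo the curve25519 prime. Real implementations work on fixed 64-bit limbs in assembly; Python big integers are used here only to show the reduction idea, which follows from the fact that 2^255 ≡ 19 (mod p), so the high bits of a product can be folded back in with a cheap multiply by 19.

```python
# Sketch of field multiplication modulo the curve25519 prime.
# Illustrative only; not the limb-based assembly used in s2n-bignum.

P25519 = 2**255 - 19

def fmul(a, b):
    t = a * b                  # up to 510-bit intermediate product
    lo = t & (2**255 - 1)      # low 255 bits
    hi = t >> 255              # high part; each unit is worth 2**255 ≡ 19
    r = lo + 19 * hi           # fold the high part back down
    return r % P25519          # final reduction (a conditional subtract
                               # or two in limb-based code)
```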
Each level has its own avenues for optimization. We focused our march-dependent optimizations on the level-one operations, while for levels two and three our implementations employ known state-of-the-art techniques and are largely the same for different marchs. Below is a summary of the different march-dependent choices we made in our implementations of x/Ed25519.

* For modern x86_64 marchs, we use the instructions MULX, ADCX, and ADOX, which are variations of the standard assembly instructions MUL (multiply) and ADC (add with carry) found in the instruction set extensions commonly called BMI and ADX. These instructions are special because, when used in combination, they can maintain two carry chains in parallel, which has been observed to boost performance by up to 30%. For older x86_64 marchs that don't support these instruction set extensions, we use more traditional single-carry chains.
* For Arm64 marchs with improved integer multipliers, such as AWS Graviton 3, we use relatively straightforward schoolbook multiplication, which turns out to give good performance. AWS Graviton 2 has smaller multipliers. For this Arm64 march, we use subtractive forms of Karatsuba multiplication, which breaks down multiplications recursively. The reason is that, on these marchs, 64x64-bit multiplication producing a 128-bit result has substantially lower throughput relative to other operations, making the number size at which Karatsuba optimization becomes worthwhile much smaller.

We also optimized level-one operations that are the same for all marchs. One example concerns the use of the binary greatest-common-divisor (GCD) algorithm to compute modular inverses. We use the "divstep" form of binary GCD, which lends itself to efficient implementation, but it also complicates the second goal we had: formally proving correctness.
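The subtractive Karatsuba approach mentioned above can be sketched in Python, with big integers standing in for limb arrays; the recursion threshold, the splitting, and the way the sign of the middle term is computed here are all illustrative simplifications, not s2n-bignum's actual code. The key point is that one 2k-digit multiply is traded for three k-digit multiplies plus additions and subtractions, and that taking absolute differences (rather than sums) of the halves keeps the middle operands from overflowing their width.

```python
# Sketch of subtractive Karatsuba multiplication. Illustrative only.

def karatsuba(a, b, bits=256):
    if bits <= 64:
        return a * b                      # base case: one hardware multiply
    half = bits // 2
    mask = (1 << half) - 1
    a_lo, a_hi = a & mask, a >> half
    b_lo, b_hi = b & mask, b >> half
    lo = karatsuba(a_lo, b_lo, half)      # low product
    hi = karatsuba(a_hi, b_hi, half)      # high product
    # Subtractive middle term: |a_hi - a_lo| * |b_hi - b_lo| stays within
    # `half` bits, unlike the additive form (a_hi + a_lo) * (b_hi + b_lo).
    mid = karatsuba(abs(a_hi - a_lo), abs(b_hi - b_lo), half)
    # Real code tracks the sign with flag logic; recomputing it from the
    # full differences here is just for clarity.
    sign = -1 if (a_hi - a_lo) * (b_hi - b_lo) >= 0 else 1
    middle = lo + hi + sign * mid         # equals a_lo*b_hi + a_hi*b_lo
    return (hi << bits) + (middle << half) + lo
```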
Binary GCD is an iterative algorithm with two arguments, whose initial values are the numbers whose greatest common divisor we seek. The arguments are successively reduced in a well-defined way, until the value of one of them reaches zero. With two n-bit numbers, the standard implementation of the algorithm removes at least one bit total per iteration, so 2n iterations suffice. With divstep, however, determining the number of iterations needed to get down to the base case seems analytically difficult. The most tractable proof of the bound uses an elaborate inductive argument based on an intricate "stable hull" provably overapproximating the region in two-dimensional space containing the points corresponding to the argument values. Daniel Bernstein, one of the inventors of x25519 and Ed25519, proved the formal correctness of the bound using HOL Light, a proof assistant that one of us (John) created. (For more on HOL Light, see, again, our earlier RSA post.)

Performance results

In this section, we highlight improvements in performance. For the sake of simplicity, we focus on only three marchs: AWS Graviton 3, AWS Graviton 2, and Intel Ice Lake. To gather performance data, we used EC2 instances with matching CPU marchs -- c7g.4xlarge, c6g.4xlarge, and c6i.4xlarge, respectively; to measure each algorithm, we used the AWS-LC speed tool. In the graphs below, all units are operations per second (ops/sec). The "before" columns represent the performance of the existing x/Ed25519 implementations in AWS-LC. The "after" columns represent the performance of the new implementations.
[Figure: Ed25519 signing performance, before and after.]

For the Ed25519 signing operation, the number of operations per second, over the three marchs, is, on average, 108% higher with the new implementations.

[Figure: Ed25519 verification performance, before and after.]

For the Ed25519 verification operation, we increased the number of operations per second, over the three marchs, by an average of 37%. We observed the biggest improvement for the x25519 algorithm. Note that an x25519 operation in the graph below includes the two major operations needed for an x25519 key exchange agreement: base-point multiplication and variable-point multiplication.

[Figure: x25519 performance, before and after.]

With x25519, the new implementation increases the number of operations per second, over the three marchs, by an average of 113%. On average, over the AWS Graviton 2, AWS Graviton 3, and Intel Ice Lake microarchitectures, we saw an 86% improvement in performance.

Proving correctness

We develop the core parts of the x/Ed25519 implementations in AWS-LC in s2n-bignum, an AWS-owned library of integer arithmetic routines designed for cryptographic applications. The s2n-bignum library is also where we prove the functional correctness of the implementations using HOL Light. HOL Light is an interactive theorem prover for higher-order logic (hence HOL), and it is designed to have a particularly simple (hence light) "correct by construction" approach to proof. This simplicity offers assurance that anything "proved" has really been proved rigorously and is not the artifact of a prover bug.

We follow the same principle of simplicity when we write our implementations in assembly. Writing in assembly is more challenging, but it offers a distinct advantage when proving correctness: our proofs become independent of any compiler.
The diagram below shows the process we use to prove x/Ed25519 correct. The process requires two different sets of inputs: the first is the algorithm implementation we're evaluating; the second is a proof script that models both the correct mathematical behavior of the algorithm and the behavior of the CPU. The proof is a sequence of functions specific to HOL Light that represent proof strategies and the order in which they should be applied. Writing the proof is not automated and requires developer ingenuity.

From the algorithm implementation and the proof script, HOL Light either determines that the implementation is correct or, if unable to do so, fails. HOL Light views the algorithm implementation as a sequence of machine code bytes. Using the supplied specification of CPU instructions and the developer-written strategies in the proof script, HOL Light reasons about the correctness of the execution.

This part of the correctness proof is automated, and we even implement it inside s2n-bignum's continuous-integration (CI) workflow. The workflow covered in the CI is highlighted by the red dotted line in the diagram below.

[Figure: CI integration provides assurance that no changes to the algorithm implementation code can be committed to s2n-bignum's code repository without successfully passing a formal proof of correctness.]

The CPU instruction specification is one of the most critical ingredients in our correctness proofs. For the proofs to be true in practice, the specification must capture the real-world semantics of each instruction. To improve assurance on this point, we apply randomized testing against the instruction specifications on real hardware, "fuzzing out" inaccuracies.
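The randomized testing just described can be illustrated with a small sketch: run an instruction model on random inputs and compare it against an oracle. On real hardware, the oracle is the CPU executing the instruction itself; here, a Python check stands in, and the simple add-with-carry model is illustrative rather than s2n-bignum's actual specification or harness.

```python
# Sketch of differential fuzzing of an instruction specification.
# Illustrative model and oracle; not the s2n-bignum test harness.

import random

MASK64 = (1 << 64) - 1

def adc_model(x, y, carry_in):
    """Model of 64-bit add-with-carry: returns (result, carry_out)."""
    total = x + y + carry_in
    return total & MASK64, total >> 64

def fuzz_adc(trials=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        x, y = rng.getrandbits(64), rng.getrandbits(64)
        c = rng.getrandbits(1)
        result, carry = adc_model(x, y, c)
        # Oracle check: the full-width sum must be exactly reconstructible
        # from the modeled result and carry-out.
        assert (carry << 64) | result == x + y + c
    return trials
```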
Constant time

We designed our implementations and optimizations with security as priority number one. Cryptographic code must strive to be free of side channels that could allow an unauthorized user to extract private information. For example, if the execution time of cryptographic code depends on secret values, then it might be possible to infer those values from execution times. Similarly, if CPU cache behavior depends on secret values, an unauthorized user who shares the cache could infer those values.

Our implementations of x/Ed25519 are designed with constant time in mind. They perform exactly the same sequence of basic CPU instructions regardless of the input values, and they avoid any CPU instructions that might have data-dependent timing.

Using x/Ed25519 optimizations in applications

AWS uses AWS-LC extensively to power cryptographic operations in a diverse set of AWS service subsystems. You can take advantage of the x/Ed25519 optimizations presented in this blog by using AWS-LC in your applications. Visit AWS-LC on GitHub to learn more about how you can integrate AWS-LC into your application.

To allow easier integration for developers, AWS has created bindings from AWS-LC to multiple programming languages. These bindings expose cryptographic functionality from AWS-LC through well-defined APIs, removing the need to reimplement cryptographic algorithms in higher-level programming languages. At present, AWS has open-sourced bindings for Java and Rust -- the Amazon Corretto Crypto Provider (ACCP) for Java and AWS-LC for Rust (aws-lc-rs). Furthermore, we have contributed patches allowing CPython to build against AWS-LC and use it for all cryptography in the Python standard library.

Below we highlight some of the open-source projects that are already using AWS-LC to meet their cryptographic needs.

[Figure: Open-source projects using AWS-LC to meet their cryptographic needs.]

We are not done yet.
We continue our efforts to improve x/Ed25519 performance and to pursue optimizations for other cryptographic algorithms supported by s2n-bignum and AWS-LC. Follow the s2n-bignum and AWS-LC repositories for updates.

Research areas: Automated reasoning

Tags: Cryptography, Post-quantum cryptography, Provable security

About the Author

Torben Hansen is an applied scientist with AWS Cryptography.

John Harrison is a senior principal applied scientist in Amazon's Automated Reasoning Group. He is a maintainer of s2n-bignum and the HOL Light theorem prover.
As an Applied Scientist on the team, you will play a critical role in driving the design, research, and development of these science initiatives. The ideal candidate will lead the research on learning and development trends, and develop impactful learning journey roadmap that align with organizational goals and priorities. By parsing the information of different learning courses, they will utilize the latest advances in Gen AI technology to address the personalized questions in real-time from the leadership and associates through chat assistants. Post the learning interventions, the candidate will apply causal inference or A/B experimentation frameworks to assess the associated impact of these learning programs on associate performance. As a part of this role, this candidate will collaborate with a large team of experts in the field and move the state of learning experience research forward. They should have the ability to communicate the science insights effectively to both technical and non-technical audiences. Key job responsibilities * Apply science models to extract actionable information from learning feedback * Leverage GenAI/Large Language Model (LLM) technology for scaling and automating learning experience workflows * Design and implement metrics to evaluate the effectiveness of AI models * Present deep dives and analysis to both technical and non-technical stakeholders, ensuring clarity and understanding and influencing business partners * Perform statistical analysis and statistical tests including hypothesis testing and A/B testing * Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation Sr. Applied Scientist , AFT AI, Amazon Fulfillment Technologies (AFT) US, WA, Bellevue Are you excited about developing cutting-edge generative AI, large language models (LLMs), and foundation models? 
Are you looking for opportunities to build and deploy them on real-world problems at a truly vast scale with global impact? At AFT (Amazon Fulfillment Technologies) AI, a group of around 50 scientists and engineers, we are on a mission to build a new generation of dynamic end-to-end prediction models (and agents) for our warehouses based on GenAI and LLMs. These models will be able to understand and make use of petabytes of human-centered as well as process information, and learn to perceive and act to further improve our world-class customer experience - at Amazon scale. We are looking for a Sr. Applied Scientist who will become of the research leads in a team that builds next-level end-to-end process predictions and shift simulations for all systems in a full warehouse with the help of generative AI, graph neural networks, and LLMs. Together, we will be pushing beyond the state of the art in simulation and optimization of one of the most complex systems in the world: Amazon's Fulfillment Network. Key job responsibilities In this role, you will dive deep into our fulfillment network, understand complex processes, and channel your insights to build large-scale machine learning models (LLMs and Transformer-based GNNs) that will be able to understand (and, eventually, optimize) the state and future of our buildings, network, and orders. You will face a high level of research ambiguity and problems that require creative, ambitious, and inventive solutions. You will work with and in a team of applied scientists to solve cutting-edge problems going beyond the published state of the art that will drive transformative change on a truly global scale. You will identify promising research directions, define parts of our research agenda and be a mentor to members of our team and beyond. You will influence the broader Amazon science community and communicate with technical, scientific and business leaders. 
If you thrive in a dynamic environment and are passionate about pushing the boundaries of generative AI, LLMs, and optimization systems, we want to hear from you. A day in the life Amazon offers a full range of benefits that support you and eligible family members, including domestic partners and their children. Benefits can vary by location, the number of regularly scheduled hours you work, length of employment, and job status such as seasonal or temporary employment. The benefits that generally apply to regular, full-time employees include: 1. Medical, Dental, and Vision Coverage 2. Maternity and Parental Leave Options 3. Paid Time Off (PTO) 4. 401(k) Plan If you are not sure that every qualification on the list above describes you exactly, we'd still love to hear from you! At Amazon, we value people with unique backgrounds, experiences, and skillsets. If you're passionate about this role and want to make an impact on a global scale, please apply! About the team Amazon Fulfillment Technologies (AFT) powers Amazon's global fulfillment network. We invent and deliver software, hardware, and data science solutions that orchestrate processes, robots, machines, and people. We harmonize the physical and virtual world so Amazon customers can get what they want, when they want it. The AFT AI team has deep expertise developing cutting edge AI solutions at scale and successfully applying them to business problems in the Amazon Fulfillment Network. These solutions typically utilize machine learning and computer vision techniques, applied to text, sequences of events, images or video from existing or new hardware. We influence each stage of innovation from inception to deployment, developing a research plan, creating and testing prototype solutions, and shepherding the production versions to launch. Applied Scientist II, Generative AI Innovation Center US, CA, Santa Clara Machine learning (ML) has been strategic to Amazon from the early years. 
We are pioneers in areas such as recommendation engines, product search, eCommerce fraud detection, and large-scale optimization of fulfillment center operations. The Generative AI team helps AWS customers accelerate the use of Generative AI to solve business and operational challenges and promote innovation in their organization. As an applied scientist, you are proficient in designing and developing advanced ML models to solve diverse challenges and opportunities. You will be working with terabytes of text, images, and other types of data to solve real-world problems. You'll design and run experiments, research new algorithms, and find new ways of optimizing risk, profitability, and customer experience. We're looking for talented scientists capable of applying ML algorithms and cutting-edge deep learning (DL) and reinforcement learning approaches to areas such as drug discovery, customer segmentation, fraud prevention, capacity planning, predictive maintenance, pricing optimization, call center analytics, player pose estimation, event detection, and virtual assistant among others. Key job responsibilities The primary responsibilities of this role are to: * Design, develop, and evaluate innovative ML models to solve diverse challenges and opportunities across industries * Interact with customer directly to understand their business problems, and help them with defining and implementing scalable Generative AI solutions to solve them * Work closely with account teams, research scientist teams, and product engineering teams to drive model implementations and new solution A day in the life ABOUT AWS: Diverse Experiences Amazon values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying. 
Why AWS Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating -- that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Work/Life Balance We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud. Inclusive Team Culture Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness. Mentorship and Career Growth We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Applied Scientist, Geospatial Science US, WA, Bellevue The Geospatial science team solves problems at the interface of ML/AI and GIS for Amazon's last mile delivery programs. We have access to Earth-scale datasets and use them to solve challenging problems that affect hundreds of thousands of transporters. We are looking for strong candidates to join the transportation science team which owns time estimation, GPS trajectory learning, and sensor fusion from phone data. You will join a team of GIS and ML domain experts and be expected to develop ML models, present research results to stakeholders, and collaborate with SDEs to implement the models in production. 
Key job responsibilities - Understand business problems and translate them into science problems - Develop ML models - Present research results - Write and publish papers - Write production code - Collaborate with SDEs and other scientists Applied Scientist II, ROW AOP IN, KA, Bengaluru Job Description AOP(Analytics Operations and Programs) team is responsible for creating core analytics, insight generation and science capabilities for ROW Ops. We develop scalable analytics applications and research modeling to optimize operation processes.. You will work with professional Product Managers, Data Engineers, Data Scientists, Research Scientists, Applied Scientists and Business Intelligence Engineers using rigorous quantitative approaches to ensure high quality data/science products for our customers around the world. We are looking for an Applied Scientist to join our growing Science Team in Bangalore/Hyderabad. As an Applied Scientist, you are able to use a range of science methodologies to solve challenging business problems when the solution is unclear. You will be responsible for building ML models to solve complex business problems and test them in production environment. The scope of role includes defining the charter for the project and proposing solutions which align with org's priorities and production constraints but still create impact . You will achieve this by leveraging strong leadership and communication skills, data science skills and by acquiring domain knowledge pertaining to the delivery operations systems. You will provide ML thought leadership to technical and business leaders, and possess ability to think strategically about business, product, and technical challenges. You will also be expected to contribute to the science community by participating in science reviews and publishing in internal or external ML conferences. 
Our team solves a broad range of problems that can be scaled across ROW (Rest of the World, including countries like India, Australia, Singapore, MENA, and LATAM). Here is a glimpse of the problems this team deals with on a regular basis:
* Using live package and truck signals to adjust truck capacities in real time
* HOTW models for Last Mile channel allocation
* Using LLMs to automate analytical processes and insight generation
* Using ML to predict parameters that affect truck scheduling
* Working with global science teams to predict Shipments Per Route for $MM savings
* Deep learning models to classify addresses based on various attributes

Key job responsibilities
1. Use machine learning and analytical techniques to create scalable solutions for business problems
2. Analyze and extract relevant information from large amounts of Amazon's historical business data to help automate and optimize key processes
3. Design, develop, evaluate, and deploy innovative and highly scalable ML models
4. Work closely with other science and engineering teams to drive real-time model implementations
5. Work closely with Ops/Product partners to identify problems and propose machine learning solutions
6. Establish scalable, efficient, automated processes for large-scale data analysis, model development, model validation, and model maintenance
7. Work proactively with engineering teams and product managers to evangelize new algorithms and drive the implementation of large-scale, complex ML models in production
8. Lead projects and mentor other scientists and engineers in the use of ML techniques

As part of our team, the candidate in this role will work in close collaboration with other applied scientists and cross-functional teams on high-visibility projects, with direct exposure to the senior leadership team on a regular basis.

About the team
This team is responsible for applying science-based algorithms and techniques to solve problems in operations and supply chain.
Some of these problems include truck scheduling, LM capacity planning, LLMs, and so on.

Sr. Applied Scientist, Monetization, Sponsored Products
US, WA, Seattle
Amazon continues to invest heavily in building our world-class advertising business. Our products are strategically important to our Retail and Marketplace businesses, driving long-term growth. We deliver billions of ad impressions and millions of clicks daily, breaking fresh ground to create world-class products. We are highly motivated, collaborative, and fun-loving with an entrepreneurial spirit and strong bias for action. With a broad mandate to experiment and innovate, we are growing at an unprecedented rate with a seemingly endless range of new opportunities.

The Sponsored Products Monetization team is broadly responsible for pricing of ads on Amazon search pages, balancing short-term and long-term ad revenue growth to drive sustainable marketplace health. As a Senior Applied Scientist on our team, you will be responsible for defining the science and technical strategy for one of our most impactful marketplace controls, creating lasting value for Amazon and our advertising customers. You will help to identify unique opportunities to create customized and delightful shopping experiences for our growing marketplaces worldwide. Your job will be to identify big opportunities for the team that can help grow the Sponsored Products business, working with retail partner teams, product managers, and software engineers. You will have the opportunity to design, run, and analyze A/B experiments to improve the experience of millions of Amazon shoppers while driving quantifiable revenue impact. More importantly, you will have the opportunity to broaden your technical skills in an environment that thrives on creativity, experimentation, and product innovation.
Key job responsibilities
- Lead science, tech, and business strategy and roadmap for Sponsored Products Monetization
- Drive alignment across multiple organizations for science, engineering, and product strategy to achieve business goals
- Lead and mentor scientists and engineers across teams to develop, test, launch, and improve science models designed to optimize the shopper experience and deliver long-term value for Amazon and advertisers
- Develop state-of-the-art experimental approaches and ML models
- Drive end-to-end machine learning projects that have a high degree of ambiguity, scale, and complexity
- Establish scalable, efficient, automated processes for large-scale data analysis, machine-learning model development, model validation, and serving
- Research new and innovative machine learning approaches
- Recruit scientists to the team and provide mentorship

Applied Scientist I, Artificial Generative Intelligence (AGI)
IN, KA, Bengaluru
The Amazon Artificial Generative Intelligence (AGI) team in India is seeking a talented, self-driven Applied Scientist to work on prototyping, optimizing, and deploying ML algorithms within the realm of generative AI.

Key job responsibilities
- Research, experiment, and build proofs of concept advancing the state of the art in AI and ML for GenAI.
- Collaborate with cross-functional teams to architect and execute technically rigorous AI projects.
- Thrive in dynamic environments, adapting quickly to evolving technical requirements and deadlines.
- Engage in effective technical communication (written and spoken), with coordination across teams.
- Conduct thorough documentation of algorithms, methodologies, and findings for transparency and reproducibility.
- Publish research papers in internal and external venues of repute
- Support on-call activities for critical issues

Applied Science Manager, Private Brands Discovery
US, WA, Seattle
The Private Brands Discovery team designs innovative machine learning solutions to enhance customer awareness of Amazon's own brands and help customers find products they love. This interdisciplinary team of scientists and engineers incubates and develops disruptive solutions using cutting-edge technology to tackle some of the most challenging scientific problems at Amazon. To achieve this, the team utilizes methods from natural language processing, deep learning, large language models (LLMs), multi-armed bandits, reinforcement learning, Bayesian optimization, causal and statistical inference, and econometrics to drive discovery throughout the customer journey. Our solutions are crucial to the success of Amazon's private brands and serve as a model for discovery solutions across the company.

This role presents a high-visibility opportunity for someone eager to make a business impact, delve into large-scale problems, drive measurable actions, and collaborate closely with scientists and engineers. As a team lead, you will be responsible for developing and coaching talent, guiding the team in designing and developing cutting-edge models, and working with business, marketing, and software teams to address key challenges. These challenges include building and improving models for sourcing, relevance, and CTR/CVR estimation, and deploying reinforcement learning methods in production. In this role, you will be a technical leader in applied science research with substantial scope, impact, and visibility. A successful team lead will be an analytical problem solver who enjoys exploring data, leading problem-solving efforts, guiding the development of new frameworks, and engaging in investigations and algorithm development.
You should be capable of effectively interfacing between technical teams and business stakeholders, pushing the boundaries of what is scientifically possible, and maintaining a sharp focus on measurable customer and business impact. Additionally, you will mentor and guide scientists to enhance the team's talent and expand the impact of your work.

Applied Scientist (L5), SP Bidding
CA, ON, Toronto
Amazon Advertising is one of Amazon's fastest-growing and most profitable businesses. As a core product offering within our advertising portfolio, Sponsored Products (SP) helps merchants, retail vendors, and brand owners succeed via native advertising, which grows incremental sales of their products sold through Amazon. The SP team's primary goals are to help shoppers discover new products they love, be the most efficient way for advertisers to meet their business objectives, and build a sustainable business that continuously innovates on behalf of customers. Our products and solutions are strategically important to enable our Retail and Marketplace businesses to drive long-term growth. We deliver billions of ad impressions and millions of clicks and break fresh ground in product and technical innovations every day!

Why you'll love this opportunity
Amazon is investing heavily in building a world-class advertising business. This team is responsible for defining and delivering a collection of advertising products that drive discovery and sales. Our solutions generate billions in revenue and drive long-term growth for Amazon's Retail and Marketplace businesses. We deliver billions of ad impressions and millions of clicks daily, and break fresh ground to create world-class products. We are a highly motivated, collaborative, and fun-loving team with an entrepreneurial spirit -- and a broad mandate to experiment and innovate.
Impact and Career Growth
You will invent new experiences and influence customer-facing shopping experiences that help suppliers grow their retail business, as well as the auction dynamics that leverage native advertising; this is your opportunity to work within one of the fastest-growing businesses across all of Amazon! You will define a long-term science vision for our advertising business, driven fundamentally by our customers' needs, and translate that direction into specific plans for research and applied scientists, as well as engineering and product teams. This role combines science leadership, organizational ability, technical strength, product focus, and business understanding.

Key job responsibilities
As an Applied Scientist on this team, you will:
* Build machine learning models and utilize data analysis to deliver scalable solutions to business problems.
* Perform hands-on analysis and modeling with very large data sets to develop insights that increase traffic monetization and merchandise sales without compromising shopper experience.
* Work closely with software engineers on detailed requirements, technical designs, and implementation of end-to-end solutions in production.
* Design and run A/B experiments that affect hundreds of millions of customers, evaluate the impact of your optimizations, and communicate your results to various business stakeholders.
* Work with scientists and economists to model the interaction between organic sales and sponsored content and to further evolve Amazon's marketplace.
* Establish scalable, efficient, automated processes for large-scale data analysis, machine-learning model development, model validation, and serving.
* Research new predictive learning approaches for the Sponsored Products business.
* Write production code to bring models into production.
Applied Scientist, Sponsored Products
US, WA, Seattle
Amazon is investing heavily in building a world-class advertising business and developing a collection of self-service performance advertising products that drive discovery and sales. Our products are strategically important to our Retail and Marketplace businesses for driving long-term growth. We deliver billions of ad impressions and millions of clicks daily and are breaking fresh ground to create world-class products. We are highly motivated, collaborative, and fun-loving with an entrepreneurial spirit and bias for action. With a broad mandate to experiment and innovate, we are growing at an unprecedented rate with a seemingly endless range of new opportunities.

The Sponsored Products DP Experience and Marketplace org is looking for a strong Applied Scientist who can delight our customers by continually learning and inventing. Our ideal candidate is an experienced Applied Scientist who has a track record of performing deep analysis, is passionate about applying advanced ML and statistical techniques to solve real-world, ambiguous, and complex challenges to optimize and improve product performance, and is motivated to achieve results in a fast-paced environment. The position offers an exceptional opportunity to grow your technical and non-technical skills and make a real difference to the Amazon Advertising business.
As an Applied Scientist on the Blended Widgets team, you will:
* Conduct hands-on data analysis, run regular A/B experiments, gather data, perform statistical analysis and deep dives, and communicate the impact to senior management
* Rapidly design, prototype, and test many possible hypotheses in a high-ambiguity environment, making use of both quantitative analysis and business judgment
* Establish scalable, efficient, automated processes for large-scale data analysis, machine-learning model development, model validation, and serving
* Collaborate with software engineering teams to integrate successful experimental results into large-scale, highly complex Amazon production systems
* Promote the culture of experimentation and applied science at Amazon

Team video: https://youtu.be/zD_6Lzw8raE
We are also open to considering candidates in New York or Seattle.