Intelligence isn’t about parameter count. It’s about time.

As AI models grow larger, they become less insightful, not more. To ensure that they continue to learn, we need to reduce their inference time.

When we prompt a large language model (LLM) to solve a complex polynomial equation, it does not just return an answer but uses its “chain of thought” to work through a solution. In a sense, the LLM behaves like a computer, a machine that computes the solution. But this machine is quite unlike what Alan Turing described as a universal model of computation almost 90 years ago.

Stefano Soatto is a vice president and distinguished scientist in the Amazon Web Services (AWS) Agentic AI organization.
Credit: UCLA Samueli

In what sense can an LLM be thought of as a computer? Can it be universal, that is, able to solve any computable task, as a Turing machine does? If so, how does it learn this ability from finite data?

Current theories of machine learning are of little help in answering these questions, so we need new tools. In an earlier Amazon Science post, we argued that AI agents and the LLMs that power them are transductive-inference engines, despite being trained inductively in the mold of classical machine learning theory. Induction seeks generalization, or the ability to behave on future data as one did on past data. To achieve generalization, one must avoid memorization, i.e., overfitting the training data.

This works in theory, under the condition that both past and future data are drawn from the same distribution. In practice, however, such a condition cannot be verified, and in general, it doesn’t apply to high-value data in business, finance, climate science, and even language. That leaves us with no handle to explain how an LLM might learn how to verifiably solve a general computable task.

With transduction, by contrast, one seeks to reason through past data to craft solutions to new problems. Transduction is not about applying past solutions in the hope that they generalize; rather, it is about being able to retrieve portions of memory that matter when reasoning through new solutions. In transduction, memorization is not a stigma but a value. Using the test data, along with memory, to craft a solution during transductive inference is not overfitting but adaptive, query-specific computation — i.e., reasoning.

Inductive generalization is the kind of behavior one is forced to adopt when pressed for time. Such automatic, reactive behavior is sometimes referred to as “system-1” in cognitive psychology. Transduction instead requires looking at all data and performing query-specific variable-length inference-time computation — chain-of-thought reasoning in an LLM, whose length depends on the complexity of the query. Such deliberative behavior is often referred to as “system-2” and is what we wish to foster through learning. In this sense, transductive learning is a particular form of meta-learning, or learning to reason.

In 1964, Ray Solomonoff described a universally optimal algorithm for solving any problem through transductive inference, if we assume that memory and time are unbounded: execute all programs through a Turing machine, then average the outcome of those that reproduce the observed data. That will give the universally optimal answer — but it will generally take forever. What if we want not just a universally optimal but a universally fast algorithm?
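To make the scheme concrete, here is a minimal Python sketch of Solomonoff-style prediction over a deliberately tiny (and decidedly non-universal) program space: each "program" is just a bit pattern repeated forever. The toy machine, the 8-bit length cap, and the pattern encoding are illustrative assumptions, not part of Solomonoff's construction:

```python
from itertools import product

def run(program, n):
    """Toy "machine": a program is a bit tuple repeated forever.
    (Deliberately non-universal -- just enough to illustrate the scheme.)"""
    return [program[i % len(program)] for i in range(n)]

def solomonoff_predict(observed, max_len=8):
    """Weight every program by 2^-length, keep those that reproduce the
    observed prefix, and average their predictions for the next bit."""
    num = den = 0.0
    for length in range(1, max_len + 1):
        for program in product([0, 1], repeat=length):
            if run(program, len(observed)) == observed:
                weight = 2.0 ** -length  # shorter programs dominate the average
                den += weight
                num += weight * run(program, len(observed) + 1)[-1]
    return num / den  # posterior probability that the next bit is 1

print(solomonoff_predict([0, 1, 0, 1, 0, 1]))  # ≈ 0.04: next bit is almost surely 0
```

Even this caricature shows the essential features: all programs are consulted, agreement with the data is mandatory, and brevity earns weight. It also shows the cost — the loop over programs grows exponentially, which is why the real construction "takes forever."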

In 1973 — in the same paper where he introduced the notion of NP-completeness — Leonid Levin derived such an algorithm. Unfortunately, Levin’s so-called universal search is not viable in practice, nor does it help us understand LLMs; for one thing, it involves no learning. Nonetheless, Levin pointed to the critical importance of time when solving computational tasks. Later, in 1986, Solomonoff hinted at how learning can help reduce that time.
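A runnable caricature of Levin's idea, in Python: interleave all candidate programs, giving each a time budget that shrinks exponentially with its length. Here the "programs" are stubs that need a fixed number of steps to emit an answer — an illustrative assumption standing in for real program execution:

```python
def levin_search(verifier, programs, max_phase=20):
    """Toy Levin search: in phase k, a program of length l gets a time
    budget of 2^(k - l) steps, so short programs are tried first but
    every program eventually receives as much time as it needs."""
    for k in range(1, max_phase + 1):
        for length, run_steps in programs:
            budget = 2 ** (k - length)
            if budget < 1:
                continue
            result = run_steps(int(budget))  # run for at most `budget` steps
            if result is not None and verifier(result):
                return result  # first verified answer wins
    return None

def make_program(answer, steps_needed):
    """Stub standing in for real execution: emits `answer` once its
    budget covers `steps_needed` steps, else times out (None)."""
    return lambda budget: answer if budget >= steps_needed else None

programs = [
    (3, make_program(7, 100)),  # short program, slow, wrong answer
    (5, make_program(42, 8)),   # longer program, fast, right answer
]
print(levin_search(lambda x: x == 42, programs))  # 42
```

The scheduling is the whole trick: a program's share of compute is tied to its length, so the search is optimal up to a multiplicative constant — a constant that, in practice, is astronomically large, which is why universal search is not viable as stated.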

In a new paper, we expand on these ideas and show how reducing inference time induces a trained model to operate transductively — i.e., to reason. In striving to reduce inference time, the model learns not just the statistical structure of the training data but also its algorithmic structure. It can then recombine algorithmic methods it’s learned in an infinite number of ways to address arbitrary new problems.

This insight has implications for how AI models are designed and trained. In particular, they should be designed to predict the marginal value of additional costs at inference time, and their training targets should include complexity costs, to force them to minimize time during inference.

This approach to learning turns classical statistical learning theory on its head. In classical statistical learning theory, the great danger is overfitting, so the goal is to regularize the solution, i.e., to minimize the information that the trained model retains from past data (beyond what matters for reducing the training loss). With transductive inference, on the other hand, the goal is to maximize the information retained, as it may come in handy for solving future problems.

The inversion of scaling laws

LLMs’ performance gains in the past few years have come mostly from scaling: increasing the number of model parameters has improved accuracy on benchmark datasets. This has led many to speculate that further increasing the models’ parameter counts could usher in an age of “superintelligence”, where the cognitive capacities of AI models exceed those of their human creators.


In our paper, we argue the opposite: beyond a certain complexity, AI models enter what we call the savant regime, where learning becomes unnecessary, and better performance on the benchmarks comes with decreased “insight”. At the limit is the algorithm Solomonoff described in 1964, where any task can be solved by brute force.

If scale does not lead to intelligence, what does? We argue that the answer is time.

It’s an answer with some intuitive appeal. The concept of intelligence is fundamentally subjective and environment dependent. But while intelligence is hard to characterize, its absence is less so. Being unable to adapt to the speed of the environment is one among many behaviors that we call traits of non-intelligence (TONIs). TONIs are behaviors whose presence negates intelligence however one wishes to define it.

Many TONIs are time-bound. Taking the same amount of (non-minimal) time and energy to solve repeated instances of the same task, to no better outcome, is a TONI. So is the inability to allocate resources commensurate to the goal, thus spending the same effort for a trivial task as for a complex one. Starting a task that is known to take longer than the lifetime of the universe to render any usable answer would be another TONI.

Given this intuition, how do we quantify the relationship between intelligence and time in AI models? The first step is to assess the amount of information contained in the models’ parameters; then we can see how it’s affected by the imposition of time constraints.

Algorithmic information

The standard way to measure information was proposed by Claude Shannon in a landmark 1948 paper that essentially created the field of information theory. Shannon defined the information content of a random variable as the entropy of its distribution. The more uncertainty about its value, the higher the information content.
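For a concrete sense of Shannon's measure, a few lines of Python suffice (the example distributions are arbitrary):

```python
from math import log2

def entropy(dist):
    """Shannon entropy in bits: the expected surprise of one draw
    from the distribution `dist` (a list of probabilities)."""
    return -sum(p * log2(p) for p in dist if p > 0)

print(entropy([0.5, 0.5]))    # 1.0: a fair coin is maximally uncertain
print(entropy([0.99, 0.01]))  # ≈ 0.08: a near-certain event carries little information
```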

On this definition, however, a given data sample’s information content is not a property of the sample itself; it’s a property of the distribution it was drawn from. And for any given sample, there are infinitely many distributions from which it could have been drawn. If all you have is a sample — say, a string of ones and zeroes — how do you compute its information content?

In the 1960s, Solomonoff and, independently, Andrey Kolmogorov addressed this problem with an alternative notion of information — algorithmic information — which can be used to characterize the information content of arbitrary binary strings. For a given string, one can write a program that, when run on some computer, outputs that string. In fact, one can write infinitely many such programs and run each on many computers.

The shortest possible program that, run through a universal Turing machine, outputs the specific datum is a property of that datum. That program is the algorithmic minimal sufficient statistic, and its length is the algorithmic information (Kolmogorov-Solomonoff complexity) of that datum.
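Kolmogorov complexity itself is uncomputable, but an off-the-shelf compressor gives a cheap upper bound, which is enough to see the idea at work. A sketch using Python's zlib (the specific byte strings are arbitrary):

```python
import os
import zlib

def approx_complexity(s: bytes) -> int:
    """Compressed length as a computable upper bound on a string's
    algorithmic information (the true quantity is uncomputable)."""
    return len(zlib.compress(s, 9))

structured = b"01" * 500       # a short program generates it: "repeat 01"
random_ish = os.urandom(1000)  # no exploitable structure, with high probability

print(approx_complexity(structured))  # tens of bytes
print(approx_complexity(random_ish))  # close to the raw 1,000 bytes
```

The structured string has low algorithmic information despite being 1,000 bytes long, because a very short description suffices; the random bytes resist any shortening.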

In his 1948 paper, Shannon also defined a metric called mutual information, which quantifies the information that can be inferred about the value of one variable by observing a correlated variable. This concept, too, can be extended to algorithmic information theory: the algorithmic mutual information between two data strings measures how much shorter the program for generating one string will be if you have access to the other.
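The same compression trick yields a rough, computable proxy for algorithmic mutual information — how much is saved by describing two strings together rather than separately. This is an approximation for illustration, not the true quantity:

```python
import os
import zlib

def C(s: bytes) -> int:
    """Compressed length: a computable upper bound on algorithmic information."""
    return len(zlib.compress(s, 9))

def approx_mutual_info(x: bytes, y: bytes) -> int:
    """How much shorter (x, y) is to describe jointly than separately --
    a compression-based stand-in for algorithmic mutual information."""
    return C(x) + C(y) - C(x + y)

text = b"the quick brown fox jumps over the lazy dog " * 20
noise = os.urandom(len(text))

print(approx_mutual_info(text, text))   # large: a copy shares everything
print(approx_mutual_info(text, noise))  # near zero: nothing in common
```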

Time is information

If we don’t know the distribution from which a model’s training data was drawn, and we don’t know whether the model’s future inputs will be drawn from the same distribution, how can we quantify the model’s future performance?

In our paper, we assume that most tasks can be solved by combining and transforming — in infinitely many possible ways — some ultimately finite, but a priori unknown, collection of methods. In that case, we can show that optimizing performance is a matter of maximizing the algorithmic mutual information between the model’s training data and future tasks.

Finding the shortest possible algorithm for generating a particular binary string is, however, an intractable problem (for all but the shortest strings). So computing the algorithmic mutual information between a model’s training data and future tasks is also intractable.

Nonetheless, in our paper, we prove that there is a fundamental relation between the speed with which a model can find a solution to a new task and the algorithmic mutual information between the solution and the training data. Specifically, we show that

log speed-up = I(h : D)

where h is the solution to the new task, D is the dataset the model was trained on, and I(h : D) is the algorithmic mutual information between the data and the solution.

This means that, during training, minimizing the time the model takes to perform an inference task will maximize the algorithmic information encoded in its weights. Reducing inference time ensures that, even as models’ parameter counts increase, they won’t descend into the savant regime, where they solve problems through brute force, without any insight or learning.
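As a back-of-the-envelope illustration of that relation: each bit of algorithmic mutual information pins down one bit of the solution's description, halving the brute-force search that remains, so the speed-up grows as 2 raised to I(h : D). The 30-bit description length below is an arbitrary illustrative choice:

```python
# Each bit of mutual information I(h : D) pins down one bit of the
# solution's description, halving the remaining brute-force search.
def search_space(total_bits: int, shared_bits: int) -> int:
    return 2 ** (total_bits - shared_bits)

n = 30  # illustrative description length of a solution, in bits
for shared in (0, 10, 20):
    speed_up = search_space(n, 0) // search_space(n, shared)
    print(shared, speed_up)  # speed-up = 2^shared: 1, then 1024, then 1048576
```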

The value of time

You may have noticed that the equation relating inference time to algorithmic information doesn’t specify any units of measure. That’s because even the value of “time” is subjective. A zebra drinking from a pond does not know a priori how long it will take to be spotted by a predator. If it lingers too long, it ends up prey; if it panics and leaves, it ends up dehydrated.


Similarly, for an AI model, there is no single cost of time to train for and correspondingly no unique scale beyond which LLMs enter the savant regime. For some tasks, such as scientific discovery, the time constant is centuries, while for others, such as algorithmic trading, it’s milliseconds. We expect agents to be able to adapt to their environment, in some cases spawning smaller specialized models for specific classes of tasks, and even then, to provide users (who are part of an agent’s environment) with controls to adjust the cost of time depending on the context and domain of application.

The cost of time is already (partially and implicitly) factored into the process of training LLMs. During pretraining, the cost of time is effectively set to a minimum value, as the model is scored on the output of a single forward pass through the training data. Fine-tuning the model for chain-of-thought reasoning requires annotated data, whose high cost imposes a bias toward shorter “ground truth” reasoning traces. Thus, LLMs already reflect the subjective cost of time to the annotators who assemble the training sets.

However, to enable the user to modulate resources at inference time, depending on the cost of the environment, models should be trained to predict the marginal value of one more step of computation relative to the expected final return. Furthermore, they need to be trained to condition on a target complexity, in order to learn how to provide an answer within a customer-specified cost or bound.
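In code, such a training target might look like the following hypothetical sketch — the function names and the linear cost model are assumptions for illustration, not a description of any production system:

```python
def time_aware_loss(task_loss, num_steps, cost_per_step):
    """Hypothetical training target: the usual task loss plus a tunable
    complexity cost, pushing the model toward shorter inference."""
    return task_loss + cost_per_step * num_steps

def should_continue(predicted_marginal_gain, cost_per_step):
    """Inference-time stopping rule: take one more reasoning step only
    if its predicted marginal value exceeds its cost."""
    return predicted_marginal_gain > cost_per_step

print(time_aware_loss(0.9, 12, 0.25))  # 3.9: twelve steps add 3.0 of cost
print(should_continue(0.05, 0.01))     # True: the next step is worth taking
print(should_continue(0.002, 0.01))    # False: time to answer
```

The user-facing knob is `cost_per_step`: a high value suits millisecond-scale tasks such as trading, a near-zero value suits open-ended research, matching the environment-dependent cost of time discussed above.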

There are growing efforts to teach models the value of time, so they can adapt to the tasks at hand (with or without human supervision). These are certain to yield a better bang-for-the-buck ratio, but the theory predicts that, at some point, factoring in the cost of time will actually improve absolute performance on new tasks. For verifiable tasks, learning to reason comes from seeking the shortest chain of thought that yields a correct (verified) answer. Ultimately, imposing a cost on time should not impair reasoning performance.

A new paradigm for AI coding

Connecting these ideas to modern AI requires rethinking what computation means. LLMs are stochastic dynamical systems whose computational elements (context, weights, activations, chain of thought) do not resemble the “programs” in classical, minimalistic models of computation, such as universal Turing machines.

Yet LLMs are models of computation — maximalist models. They’re universal, like Turing machines, but in many ways they’re the Turing machine’s antithesis, operating through entirely different mechanisms. It’s possible to “program” such stochastic dynamical systems using a two-level control strategy: high-level, open-loop, global planning and low-level, closed-loop feedback control.

That strategy can be realized with AI Functions, an open-source library released this week as part of Amazon’s Strands Labs, a GitHub repository for building AI agents. An existing programming language can be augmented with functions from the library. These are ordinary functions, in the syntax of the language, but their bodies are written in natural language instead of code, and they’re governed by pre- and post-conditions. These enable high-level, open-loop planning and verification, before a single line of code is written by AI, and they engender an automatic local feedback loop if the AI-generated code fails to clear all conditions. Minimizing time, which translates into cost, is at the core of the design and evaluation of the resulting agents.
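The control pattern — pre-conditions checked up front, AI-generated code verified against post-conditions, with a local retry loop on failure — can be sketched in a few lines of Python. This is a hypothetical illustration of the pattern, not the actual AI Functions API; the generator below is a stub standing in for an LLM call:

```python
def ai_function(spec, pre, post, generate, max_attempts=3):
    """Hypothetical sketch (NOT the actual AI Functions API): a function
    whose body is natural language, gated by pre-/post-conditions, with
    a local feedback loop when generated code fails verification."""
    def wrapper(*args):
        assert pre(*args), "precondition violated -- no code is generated"
        feedback = None
        for _ in range(max_attempts):
            impl = generate(spec, feedback)  # in practice, an LLM call
            result = impl(*args)
            if post(result, *args):
                return result  # verified against the post-condition
            feedback = f"post-condition failed for result {result!r}"
        raise RuntimeError("no generated implementation passed verification")
    return wrapper

# Usage, with a stubbed "code generator" in place of a real model:
sort_desc = ai_function(
    spec="sort the list in descending order",
    pre=lambda xs: isinstance(xs, list),
    post=lambda out, xs: out == sorted(xs, reverse=True),
    generate=lambda spec, feedback: (lambda xs: sorted(xs, reverse=True)),
)
print(sort_desc([3, 1, 2]))  # [3, 2, 1]
```

The pre- and post-conditions supply the high-level, open-loop plan and its verification; the retry loop with accumulated feedback is the low-level, closed-loop control.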

About the team AIM is a cross disciplinary team of engineers, product managers, economists, data scientists, and applied scientists with a charter to build scientifically-rigorous causal inference methodologies at scale. Our job is to help customers cut through the noise of the modern advertising landscape and understand what actions, behaviors, and strategies actually have a real, measurable impact on key outcomes. The data we produce becomes the effective ground truth for advertisers and partners making decisions affecting millions in advertising spend.
IN, KA, Bengaluru
RBS (Retail Business Services) Tech team works towards enhancing the customer experience (CX) and their trust in product data by providing technologies to find and fix Amazon CX defects at scale. Our platforms help in improving the CX in all phases of customer journey, including selection, discoverability & fulfilment, buying experience and post-buying experience (product quality and customer returns). The team also develops GenAI platforms for automation of Amazon Stores Operations. As a Sciences team in RBS Tech, we focus on foundational ML research and develop scalable state-of-the-art ML solutions to solve the problems covering customer experience (CX) and Selling partner experience (SPX). We work to solve problems related to multi-modal understanding (text and images), task automation through multi-modal LLM Agents, supervised and unsupervised techniques, multi-task learning, multi-label classification, aspect and topic extraction for Customer Anecdote Mining, image and text similarity and retrieval using NLP and Computer Vision for product groupings and identifying duplicate listings in product search results. Key job responsibilities As an Applied Scientist, you will be responsible to design and deploy scalable GenAI, NLP and Computer Vision solutions that will impact the content visible to millions of customer and solve key customer experience issues. You will develop novel LLM, deep learning and statistical techniques for task automation, text processing, image processing, pattern recognition, and anomaly detection problems. You will define the research and experiments strategy with an iterative execution approach to develop AI/ML models and progressively improve the results over time. You will partner with business and engineering teams to identify and solve large and significantly complex problems that require scientific innovation. You will independently file for patents and/or publish research work where opportunities arise. 
The RBS org deals with problems that are directly related to the selling partners and end customers and the ML team drives resolution to organization level problems. Therefore, the Applied Scientist role will impact the large product strategy, identifies new business opportunities and provides strategic direction which is very exciting.
IN, KA, Bengaluru
Selection Monitoring team is responsible for making the biggest catalog on the planet even bigger. In order to drive expansion of the Amazon catalog, we develop advanced ML/AI technologies to process billions of products and algorithmically find products not already sold on Amazon. We work with structured, semi-structured and Visually Rich Documents using deep learning, NLP and image processing. The role demands a high-performing and flexible candidate who can take responsibility for success of the system and drive solutions from research, prototype, design, coding and deployment. We are looking for Applied Scientists to tackle challenging problems in the areas of Information Extraction, Efficient crawling at internet scale, developing ML models for website comprehension and agents to take multi-step decisions. You should have depth and breadth of knowledge in text mining, information extraction from Visually Rich Documents, semi structured data (HTML) and advanced machine learning. You should also have programming and design skills to manipulate Semi-Structured and unstructured data and systems that work at internet scale. You will encounter many challenges, including: - Scale (build models to handle billions of pages), - Accuracy (requirements for precision and recall) - Speed (generate predictions for millions of new or changed pages with low latency) - Diversity (models need to work across different languages, market places and data sources) You will help us to - Build a scalable system which can algorithmically extract information from world wide web. - Intelligently cluster web pages, segment and classify regions, extract relevant information and structure the data available on semi-structured web. - Build systems that will use existing Knowledge Base to perform open information extraction at scale from visually rich documents. 
Key job responsibilities - Use AI, NLP and advances in LLMs/SLMs and agentic systems to create scalable solutions for business problems. - Efficiently Crawl web, Automate extraction of relevant information from large amounts of Visually Rich Documents and optimize key processes. - Design, develop, evaluate and deploy, innovative and highly scalable ML models, esp. leveraging latest advances in RL-based fine tuning methods like DPO, GRPO etc. - Work closely with software engineering teams to drive real-time model implementations. - Establish scalable, efficient, automated processes for large scale model development, model validation and model maintenance. - Lead projects and mentor other scientists, engineers in the use of ML techniques. - Publish innovation in research forums.
US, WA, Seattle
Unlock the Future with Amazon Science! Amazon is seeking boundary-pushing graduate student scientists who can turn revolutionary theory into awe-inspiring reality for internships in 2026. Join our team of visionary scientists and embark on a journey to harnessing the power of cutting-edge techniques in deep learning and revolutionize the fields of artificial intelligence, data science, speech recognition, text understanding, robotics and more. At Amazon, we don't just talk about innovation – we live and breathe it. You'll conducting research into the theory and application of deep learning. You will work on some of the most difficult problems in the industry with some of the best product managers, scientists, and software engineers in the industry. You will propose and deploy solutions that will likely draw from a range of scientific areas. Throughout your journey, you'll have access to unparalleled resources, including state-of-the-art computing infrastructure, cutting-edge research papers, and mentorship from industry luminaries. This immersive experience will not only sharpen your technical skills but also cultivate your ability to think critically, communicate effectively, and thrive in a fast-paced, innovative environment where bold ideas are celebrated. Join us at the forefront of applied science, where your contributions will shape the future of AI and propel humanity forward. Seize this extraordinary opportunity to learn, grow, and leave an indelible mark on the world of technology. Amazon has positions available for Applied Science Internships in, but not limited to Arlington, VA; Bellevue, WA; Boston, MA; New York, NY; Palo Alto, CA; San Diego, CA; Santa Clara, CA; Seattle, WA. 
Key job responsibilities We are particularly interested in candidates with expertise in: Machine Learning, Deep Learning, Robotics, LLMs, NLP/NLU, Gen AI, Transformers, Fine-Tuning, Recommendation Systems, Programming/Scripting Languages, Reinforcement Learning, Causal Inference and more. In this role, you will work alongside global experts to develop and implement novel, scalable algorithms and modeling techniques that advance the state-of-the-art in areas at the intersection of Reinforcement Learning and Optimization within Machine Learning. You will tackle challenging, groundbreaking research problems on production-scale data, with a focus on developing novel RL algorithms and applying them to complex, real-world challenges. The ideal candidate should possess the ability to work collaboratively with diverse groups and cross-functional teams to solve complex business problems. A successful candidate will be a self-starter, comfortable with ambiguity, with strong attention to detail and the ability to thrive in a fast-paced, ever-changing environment. A day in the life - Develop scalable, efficient, automated processes for large scale data analyses, model development, model validation and model implementation. - Design, development and evaluation of highly innovative ML models for solving complex business problems. - Research and apply the latest ML techniques and best practices from both academia and industry. - Think about customers and how to improve the customer delivery experience. - Use and analytical techniques to create scalable solutions for business problems.