Journal of Artificial Intelligence Research

The premier open access journal for the AI research community, established in 1993

About us

The Journal of Artificial Intelligence Research (JAIR) is dedicated to the rapid dissemination of important research results to the global artificial intelligence (AI) community. The journal’s scope encompasses all areas of AI, including agents and multi-agent systems, automated reasoning, constraint processing and search, knowledge representation, machine learning, natural language, planning and scheduling, robotics and vision, and uncertainty in AI.

Website: www.jair.org
Industry: Book and Periodical Publishing
Company size: 51-200 employees
Type: Nonprofit
Founded: 1993


Updates

  • Great post by Gillian K. Hadfield on a recent paper in JAIR's AI & Society track about public opinion and online polarization.

    Social media makes it look like the public is deeply divided. But what if most of the public just isn’t speaking? In a new paper with Atrisha S., just published in the Journal of Artificial Intelligence Research, we show that the polarization we observe online can emerge even when nobody’s opinions have changed. We call it rational silence.

    Most research on polarization assumes something is pushing people’s views apart: echo chambers, filter bubbles, algorithmic radicalization. Our model shows something different. We don’t change anyone’s opinions. We just let individuals weigh the costs and benefits of speaking up.

    When rhetoric heats up, two things squeeze moderates out. Allies speaking loudly substitute for your own voice, reducing your incentive to add to the chorus. And intense rhetoric from opponents shrinks the reward you get from expressing your view. Either way, moderates lose the reason to speak. The people who remain are those whose views are extreme enough that the reward from expression still outweighs the cost.

    It gets worse. We show that ideological media organizations (partisan outlets and political influencers) amplify the effect by signaling that the other side is more extreme than it really is. That makes speaking up feel even less worthwhile for moderates and pushes more of them into silence. Platform recommender systems, optimizing for engagement, then sort people into communities where the loudest voices dominate.

    Here’s what I worry about. Policymakers and legislators increasingly look to social media to gauge where the public stands. If expressed opinion is systematically skewed toward the extremes, our democratic institutions are navigating with a broken compass. And AI models trained on internet data inherit the same distortion, producing outputs that reflect who spoke up rather than what opinions people actually hold. For every company building AI products on top of that data, this is a problem.

    We identify practical interventions: platform moderation that accounts for the intensity of the underlying opinion, not just the intensity of the rhetoric, and recommender strategies that prioritize participatory content over ideological content for users with strong views. One surprise from the model: blanket moderation that raises the cost of rhetoric for everyone can backfire, discouraging moderates more than extremists.

    The fix isn’t changing what people think. It’s changing who gets heard. https://lnkd.in/ekCnpq6d
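
    The core mechanism is simple enough to run as a toy simulation. Below is a minimal sketch (our illustration, not the paper's model): the payoff form, cost, and parameters are all assumptions, and only the opponents'-rhetoric channel is modelled, but it shows how expressed opinion can skew extreme while held opinion never moves.

```python
# Toy sketch of "rational silence" (illustrative assumptions, not the
# paper's model): opinions are fixed; only the decision to speak changes.
import numpy as np

rng = np.random.default_rng(0)
held = rng.normal(0.0, 0.4, size=100_000).clip(-1, 1)  # true opinions

def expressed(held, heat, cost=0.10):
    # Hypothetical payoff: the reward of speaking scales with |opinion|
    # and shrinks as opposing rhetoric heats up; speak only if it beats
    # a fixed cost. (The allies-substitute-for-your-voice channel from
    # the post is omitted here for brevity.)
    reward = np.abs(held) / (1.0 + heat)
    return held[reward > cost]

for heat in (0.0, 1.0, 4.0):
    e = expressed(held, heat)
    print(f"heat={heat}: {len(e) / len(held):.0%} speak; "
          f"mean |held| = {np.abs(held).mean():.2f}, "
          f"mean |expressed| = {np.abs(e).mean():.2f}")
```

    Nobody's opinion moves in this toy, yet the average expressed view grows more extreme as rhetoric heats up, which is exactly the measurement problem the post describes.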

  • Journal of Artificial Intelligence Research reposted this

    Lion Schulz · Bertelsmann SE & Co. KGaA

    How can agents defend against attacks by bad-faith others without explicitly being able to model their scheming? 👹 😇 🤖

    Really happy to see this joint work on the intersection of AI safety, reinforcement learning, and theory of mind out in the Journal of Artificial Intelligence Research - an amazing capstone for a brilliant collaboration during my Ph.D. with the one and only Nitay Alon (with Stefan Sarkadi, Jeff Rosenschein, Joe Barnby and Peter Dayan). Nitay's post (linked below) does a much better job of explaining the details, but to summarise, we asked: how can agents shield themselves against malignant deceptive behaviour? In particular, how can they remain robust in a setting where an opponent explicitly models their thinking and reasoning (à la Theory of Mind, ToM)? Such ToM-based deception exploits a structural asymmetry in recursive reasoning, where agents with deeper cognitive models can manipulate those with shallower ones ("I know what you know").

    To work against this asymmetry, we contribute:

    🔎 Behavioural verification via counterfactual anomaly detection, which enables agents to detect when observed behaviour violates model-based expectations, even without identifying the precise deception strategy (a minimal sketch follows this post).

    👹 An out-of-belief punitive policy - a deterrence mechanism that shifts equilibrium behaviour once manipulation is suspected. In essence, we endow agents with the ability to make a credible threat which can rein in an attacker.

    Conceptually, we combine model-based ToM reasoning with a model-free “circuit breaker,” offering a more robust foundation for agentic AI systems interacting under strategic uncertainty. Full paper: https://lnkd.in/eJj9Hw95
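
    To make the first contribution concrete, here is a minimal sketch of behavioural verification as an anomaly test. The surprise statistic, threshold, and two-action toy game are our assumptions for illustration; the paper's actual detector is defined over the agent's model-based beliefs and is richer than this.

```python
# Sketch: flag deception when observed behaviour is too surprising under
# our model-based prediction of an honest opponent (assumed test form).
import math

def surprise(model_probs, action):
    # Negative log-likelihood of the observed action under the
    # counterfactual "honest opponent" prediction.
    return -math.log(max(model_probs[action], 1e-12))

def detect_deception(observations, threshold):
    # observations: (predicted action distribution, observed action) pairs.
    # Alarm as soon as cumulative surprise crosses the threshold, even
    # though we never identify *which* deception strategy is in play.
    total = 0.0
    for model_probs, action in observations:
        total += surprise(model_probs, action)
        if total > threshold:
            return True  # hand control to the punitive "circuit breaker"
    return False

# Toy usage: we predict cooperation with p=0.9 from an honest partner.
honest = [({"coop": 0.9, "defect": 0.1}, "coop")] * 10
shady  = [({"coop": 0.9, "defect": 0.1}, "defect")] * 10
print(detect_deception(honest, threshold=5.0))  # False
print(detect_deception(shady, threshold=5.0))   # True
```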

  • Ian Gent (along with Toby Walsh) published one of the first papers in JAIR back in 1993, and we were happy to see Ian's post about his latest JAIR article (with Charlie Blake) on how winnable patience games (i.e. solitaire) are. Thanks for the kind words about JAIR, Ian!

    I basically never post here, but I'm making an exception today to boast about my recent paper with Charlie Blake, published in the Journal of Artificial Intelligence Research. A lot of my friends are not on other social media and I would like them to know about it. It is about how winnable patience games (aka solitaire games) are.

    Three things make it special for me. One is that it's about Patience, the game my mother taught me to love; we spent many happy hours playing together (possibly happier for me, as I wouldn't let her move until I had worked out the best play). The second is that it draws on lots of the work I've done throughout my career, so it feels a bit like putting it all together at last. The third is that it is in a journal which has been 100% free to both authors and readers ever since its inception in 1993 - and I also published the third paper in it, so I have published in the same journal more than 32 years apart. https://lnkd.in/e-q2jrGS

  • Recently Published: Label-Aware Pseudo-Training Sample Generation for Text Classification

    I am pleased to share that our paper, “Label-Aware Pseudo-Training Sample Generation for Text Classification”, has been accepted and published in the Journal of Artificial Intelligence Research (JAIR), a well-recognized Q1 journal in the field of artificial intelligence. The article is available here: 🔗 https://lnkd.in/df_Zc544

    I would like to express my sincere gratitude to my advisor, Dr. Seyed AbolGhasem Mirroshandel, for his guidance and support throughout this work. I am also thankful to Prof. Owen Rambow for his valuable insights during the research process.

    In this paper, we introduce a label-aware, embedding-space data augmentation approach that injects learnable artificial tokens into input sequences. Rather than generating new surface-level text, the method operates within the latent space of a frozen LLM to produce diverse, coherent, and label-consistent training samples. Our experiments across multiple benchmark datasets show consistent improvements over standard fine-tuning, highlighting the effectiveness and scalability of this augmentation strategy.
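
    For readers curious what "injecting learnable artificial tokens" in embedding space might look like, here is a rough sketch based only on the summary above. The class name, shapes, and wiring (soft tokens prepended to a frozen LM's input embeddings, PyTorch-style) are our assumptions, not the paper's code.

```python
# Sketch of label-aware soft tokens in a frozen LM's embedding space
# (assumed wiring based on the abstract, not the authors' implementation).
import torch
import torch.nn as nn

class LabelAwareSoftTokens(nn.Module):
    def __init__(self, num_labels, num_soft_tokens, hidden_dim):
        super().__init__()
        # One bank of learnable "artificial token" embeddings per label.
        self.soft = nn.Parameter(
            torch.randn(num_labels, num_soft_tokens, hidden_dim) * 0.02
        )

    def forward(self, token_embeds, labels):
        # token_embeds: (batch, seq, hidden) from the frozen LM's embedding
        # layer; labels: (batch,). Prepending the label's soft tokens yields
        # a diverse, label-consistent pseudo-sample in latent space rather
        # than new surface-level text.
        prefix = self.soft[labels]               # (batch, k, hidden)
        return torch.cat([prefix, token_embeds], dim=1)

# Usage sketch: only the soft tokens (plus a classifier head) receive
# gradients; the LM backbone stays frozen throughout.
aug = LabelAwareSoftTokens(num_labels=4, num_soft_tokens=8, hidden_dim=768)
embeds = torch.randn(2, 16, 768)   # stand-in for frozen-LM embeddings
labels = torch.tensor([0, 3])
print(aug(embeds, labels).shape)   # torch.Size([2, 24, 768])
```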

  • Journal of Artificial Intelligence Research reposted this

    Nitay Alon · The Hebrew University of…

    I’m proud to share my latest (and final PhD) paper: א-IPOMDP: Mitigating Deception in a Cognitive Hierarchy with Off-Policy Counterfactual Anomaly Detection, now published in the Journal of Artificial Intelligence Research (JAIR). Joint work with my wonderful colleagues: Joe Barnby, Stefan Sarkadi, Lion Schulz, Jeff Rosenschein and Peter Dayan. In this work, we propose a general framework to address one of the most persistent issues in multi-agent systems: coping with deceptive others.

    The Problem: A "Cognitive Judo" Move
    Our work builds on Theory of Mind (ToM) - the trait that allows us to infer the latent cognitive processes of others (intent, beliefs, thoughts). In previous research, we showed how agents use this to deceive, acting like a "friend" to manipulate a victim’s beliefs. Deception exploits the Achilles' heel of ToM: the victim’s own model of others. Think about nested reasoning for a minute: "I know that you know that I know..." Eventually, this spiral hits a limit (say, Level 3). If you interact with someone at Level 4, you're in trouble. The deceiver can "read" the victim’s thoughts and manipulate them, while the victim falsely assumes they are the one in control. It’s like a cognitive judo move: the deceiver uses the victim’s own mind against them.

    Contribution 1: Behavior Verification
    Previously, my work showed that this asymmetry leaves those with shallower ToM exposed. But no more. What if we can detect that something is wrong, even if we can’t explain exactly why? Imagine a "fair" coin toss. If you lose 10 rounds in a row, you don't need a complex proof to feel the game is rigged. Our model endows ToM agents with an anomaly detection mechanism. If it walks and quacks like a duck, it should probably swim too. If an agent’s behavior fails to meet "counterfactual expectations," it triggers an alarm.

    Contribution 2: The "Out-of-Belief" Policy
    Once you know you're being cheated, how do you respond? We introduce a punitive policy - a "wrath and fury" response like the Grim Trigger. Because a sophisticated deceiver has the "mental blueprints" of the victim, they know that the victim is now tracking their behavior and will go ballistic if they sense a bluff. This realization alone is often enough to deter nefarious behavior and force a more honest equilibrium (see the sketch after this post).

    Why This Matters for Agentic AI
    We propose a combination of model-based and model-free mechanisms: use ToM when it works, and have a "circuit breaker" for when it harms you. As we move into the age of autonomous, agentic AI, finding ways to prevent agents from manipulating one another is crucial. Our work provides a formal path toward making these interactions more robust and secure.

    As always, I’m deeply grateful to my wonderful collaborators and advisors who made this possible. Full Paper: https://lnkd.in/dXWgemHf

    #ArtificialIntelligence #MultiAgentSystems #TheoryOfMind #MachineLearning #PhD #JAIR #DeceptiveAI
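
    As a toy rendering of the out-of-belief policy described above, in the spirit of the Grim Trigger: the interface and the stand-in detector below are illustrative assumptions, not the paper's formulation, which is defined over the agent's belief machinery.

```python
# Toy "out-of-belief" punitive policy (illustrative; the paper defines
# this over the agent's beliefs, which this sketch deliberately ignores).
class OutOfBeliefPolicy:
    def __init__(self, cooperative_policy, punitive_policy, detector):
        self.cooperative = cooperative_policy
        self.punitive = punitive_policy
        self.detector = detector   # e.g. an anomaly test on observed play
        self.triggered = False

    def act(self, observation):
        # Once manipulation is suspected, switch permanently to punishment:
        # the credible threat that reshapes the deceiver's equilibrium.
        if not self.triggered and self.detector(observation):
            self.triggered = True
        policy = self.punitive if self.triggered else self.cooperative
        return policy(observation)

# Usage in a repeated two-action game with a naive stand-in detector:
agent = OutOfBeliefPolicy(
    cooperative_policy=lambda obs: "coop",
    punitive_policy=lambda obs: "defect",
    detector=lambda obs: obs["opponent_action"] == "defect",
)
print(agent.act({"opponent_action": "coop"}))    # coop
print(agent.act({"opponent_action": "defect"}))  # defect (trigger trips)
print(agent.act({"opponent_action": "coop"}))    # defect (grim: stays punitive)
```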
