A new study says AI often “adds more” even when removing things would solve the problem faster


You’ve probably seen it in real life. Someone tries to “fix” a simple thing and ends up adding more steps, more rules, more words, more tools. The problem gets heavier instead of better.

A fresh research paper says AI does the same thing. In many cases, it adds when a clean remove would solve the problem faster. And in some situations, the AI leans into adding even more than people do.

This matters because AI is getting used for everyday decisions. Writing, planning, customer support, schoolwork, even how teams run projects. If the default move is “add,” it can quietly turn small tasks into big ones.

What the researchers were testing

The study focused on something called addition bias. It’s a simple idea:

When we try to improve something, our brains often search for ways to add first. We don’t naturally search for ways to subtract, even when subtracting is the smarter move.

The researchers wanted to know if modern chatbots copy this habit. Not because they have emotions, but because they learn from huge piles of human writing and human choices.

They compared humans with two AI models across two studies:

  • Study 1 compared humans vs GPT-4
  • Study 2 compared humans vs GPT-4o

The paper was shared as an early, unedited version for fast access, which is common with new research.

The two tasks were designed to make “add” and “remove” compete


They didn’t just ask, “Do you like adding?” They built tasks where a person or model could solve a problem either way.

Task 1: A visual puzzle where you could add squares or remove squares

This was called the symmetry task. Think of a grid with filled squares. To make it symmetrical, you can:

  • fill extra squares (add)
  • delete some filled squares (remove)

They judged the strategy in a clear way: if the final answer had more filled squares than the starting grid, that counted as additive. If it had fewer, that counted as subtractive.
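The scoring rule is easy to picture in code. Here is a rough Python sketch of that rule as described above (my own illustration, not the paper's actual scoring code): label an edit by comparing filled-square counts before and after.

```python
def classify_strategy(before: list[list[int]], after: list[list[int]]) -> str:
    """Label a grid edit by comparing filled-square counts.

    A cell value of 1 means "filled", 0 means "empty".
    """
    count_before = sum(sum(row) for row in before)
    count_after = sum(sum(row) for row in after)
    if count_after > count_before:
        return "additive"
    if count_after < count_before:
        return "subtractive"
    return "neutral"

# A 2x2 example: filling an empty square counts as additive.
start  = [[1, 0],
          [0, 0]]
filled = [[1, 1],
          [0, 0]]
print(classify_strategy(start, filled))  # additive
```

The same idea carries over to the writing task: swap square counts for word counts and the label falls out the same way.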

Task 2: A writing task where you could add words or remove words

This was the summary task. The rule was similar:

  • If the final text ended up longer, that was “add.”
  • If it ended up shorter, that was “remove.”

The key twist: sometimes removing was clearly more efficient

They also changed the rules so that in some rounds, subtraction was the easier path. In the writing task, they used word-count windows that pushed you toward adding or toward removing, depending on the condition.

So the setup wasn’t vague. It was built to test:
When removal is the better tool, do you still keep adding anyway?
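The word-count-window setup makes "which edit is cheaper" a simple calculation. Here is a small sketch of that logic (the function name and the example window values are my own, not the study's actual parameters):

```python
def cheapest_edit(word_count: int, lo: int, hi: int) -> tuple[str, int]:
    """Return the smaller edit that lands inside a target word-count window.

    (lo, hi) is the allowed range; the result is (direction, words to change).
    Illustrative only -- the study's exact window values aren't reproduced here.
    """
    if word_count < lo:
        return ("add", lo - word_count)
    if word_count > hi:
        return ("remove", word_count - hi)
    return ("keep", 0)

# A 120-word draft against a 60-80 word window: removal is the only path in.
print(cheapest_edit(120, 60, 80))  # ('remove', 40)
```

In a round like this, continuing to add words can never satisfy the window, which is exactly what makes "kept adding anyway" a measurable failure.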

What they found: humans add a lot, AI adds even more

Across both studies, there was a clear overall pattern: both humans and AI preferred adding more often than subtracting.

But the most interesting part is what happens when subtraction should win.

When removing was the smarter move, people improved… AI often didn’t

Study 1: the “efficiency” tests

In Study 1, they ran two experiments where subtraction was sometimes the more efficient option.

Humans behaved the way you’d hope. When removal was the smarter path, humans shifted toward removal.

Here are the numbers from the paper:

  • Symmetry task (humans):
    When adding and removing were equally efficient, humans used an additive strategy about 65.67% of the time.
    When removing was more efficient, humans dropped to about 29.23% additive.
  • Summary task (humans):
    Humans went from about 57.47% additive down to 39.29% additive when removal was more efficient.

Now the AI side:

  • GPT-4 in the symmetry task showed the opposite trend: it became more additive when subtraction was more efficient, going from about 54.12% additive up to 69.41% additive.
  • GPT-4 in the summary task also leaned further into adding when subtraction was more efficient: from about 77.65% additive up to 90.59% additive.

That’s the headline in plain language:
People notice “remove is easier here” and adjust.
GPT-4 often keeps adding anyway, even more than before.

A small but important detail: “improve” makes adding even more likely

The researchers also tested instruction wording. They used neutral verbs like “change” or “edit,” and more positive verbs like “improve.”

In one of the Study 1 writing experiments, GPT-4’s “adding habit” jumped hard under positive wording:

  • GPT-4 summary task: about 76.47% additive with neutral wording
    vs about 96.47% additive with positive wording like “improve.”

Humans did not show a statistically solid shift there in Study 1, even though the percentages moved a bit.

So “improve” can act like a little nudge that says, “Add more stuff.” And AI seems especially sensitive to that nudge in writing tasks.

Study 2: GPT-4o still showed a strong “add” preference

Study 2 expanded the design and compared humans with GPT-4o. The overall gap got even more obvious.

In the combined results shown in the paper:

  • Humans chose an additive strategy about 54.19% of the time.
  • GPT-4o chose an additive strategy about 96.20% of the time.

That’s not a small difference. That’s “almost always.”

The instruction wording effect also showed up clearly in Study 2’s writing task:

  • Humans in the summary task: about 48.33% additive with neutral wording
    vs 62.66% additive with positive wording
  • GPT-4o in the summary task: about 93.33% additive with neutral wording
    vs 99.63% additive with positive wording

So the paper’s message is not “humans are fine.” Humans show this bias too.
But the AI can push it further, especially in language work.

Why would AI “add” so much?

The study doesn’t claim a single simple cause, but it gives a strong clue: LLMs learn patterns from human writing.

And in human writing, “helpful” often looks like:

  • longer explanations
  • extra bullet points
  • extra safety notes
  • extra alternatives
  • extra steps

Add-to-be-safe is a very human habit. AI copies that style because it’s rewarded for sounding helpful.

There’s another detail that matters. The GPT-4o runs were generated with a “helpful assistant” system instruction, a fairly creative temperature setting, and a large token limit. That environment can encourage longer and more expansive answers.

None of this means AI is “lazy” or “trying to waste time.” It’s more like this:
The model’s first instinct for “better” is “more.”

What this looks like in real life

Here are everyday examples where “remove” is the better move, but AI tends to “add.”

Writing and schoolwork

You ask: “Make this clearer.”
AI adds new sentences, new examples, new side notes.
But the best fix might be deleting the messy part and tightening the main point.

Planning

You ask: “Help me get organized.”
AI makes a big plan: apps, trackers, daily schedules, templates.
But the real fix might be removing two weekly commitments and blocking one quiet hour daily.

Work process

You ask: “How do we stop mistakes?”
AI suggests new checks, new forms, new approval steps.
But the stronger fix might be deleting one confusing step that causes most errors.

Code and tech projects

You ask: “Fix performance.”
AI suggests caching, new libraries, extra logic.
Sometimes the fix is simply removing an unnecessary loop or cutting a heavy feature that nobody uses.

This is why the study feels so relatable. It’s not just a lab thing. It matches the way people experience chatbots today.

How to get better answers from AI right now

You can fight “addition bias” with the way you ask.

1) Say “remove first”

Try: “Give me the best solution that removes something. No adding unless removal fails.”

That single line forces the model to search the subtraction space first.

2) Ask for two versions: “subtract” and “add”

Try:

  • “Version A: improve by removing.”
  • “Version B: improve by adding.”
  • “Pick the better one and explain in 3 lines.”

This stops the model from acting like “more text equals more quality.”

3) Put a hard limit

Try: “Max 6 lines. If you need more, you’re doing it wrong.”

Limits push it away from padding.

4) Use words that cue simplification

Instead of “improve,” try:

  • “simplify”
  • “trim”
  • “cut”
  • “remove the extra parts”
  • “make it shorter and cleaner”

The study showed that positive “improve” language can nudge models toward adding more.

5) Ask for the smallest change that works

Try: “What is the smallest edit that solves this?”

That phrase is magic for reducing bloat.
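If you call a model from code, you can bake these tips into a small reusable prompt wrapper. A minimal sketch (the function name and exact wording are my own, not from the study):

```python
def subtraction_first_prompt(request: str, max_lines: int = 6) -> str:
    """Wrap a user request with subtraction-first framing.

    Combines the tips above: a remove-first instruction, a hard length
    limit, and simplification verbs instead of "improve".
    """
    return (
        f"{request}\n\n"
        "First look for a fix that removes something. "
        "Only add if removal clearly fails.\n"
        f"Keep the answer under {max_lines} lines. "
        "Prefer 'simplify', 'trim', and 'cut' over adding new material."
    )

print(subtraction_first_prompt("Make this paragraph clearer."))
```

The point isn't the exact wording; it's making the subtraction-first instruction the default rather than something you remember to type each time.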

What this means for people building AI tools

If AI becomes the default helper for writing and decisions, “addition bias” can turn into real cost:

  • more time spent reading
  • more complicated workflows
  • more steps in processes
  • more clutter in docs and plans

This paper is basically a warning sign. It says we should measure this bias, not just talk about it, and design systems that reward clean subtraction when it’s the right tool.

The researchers also point to a bigger issue: AI can copy human biases and sometimes amplify them depending on the task and framing.

The big takeaway

People already lean toward adding. That’s been shown before in classic subtraction research.
This new work finds the same bias in AI systems, and sometimes it shows up even stronger.

If you remember one simple rule, make it this:

When you ask AI to help, don’t just ask for “better.”
Ask it to check whether “less” is the better answer.