Knowledge Bubbles, Local Optima, and AI-Assisted Collectivity

Block: Collectivity, Synchrony, Leadership, and Critical Sense

Subtitle:
A collectivity can also become trapped. Not only by fear, captured leadership, or false narratives, but by an intelligence that seems to expand the group while merely accelerating the same shared bias.

The bodily feeling is familiar. A doubt appears in the group. Nobody wants to waste too much time. The AI responds quickly, fluently, cleanly. The chest relaxes before the checking begins. The forehead softens before revision starts. The conversation moves again. The group feels it has advanced. But sometimes it has not advanced at all. It has only found a comfortable place to stop.

This is where the image of a local optimum helps. In computation, a system can stabilize around a solution that is good enough to stop searching, even though it has not found the best possible direction. In the body, this appears as premature relief. The answer fits. Friction drops. The group breathes again. And precisely because of that, exploration falls. The question loses force. Counter-checking loses urgency. The collectivity becomes more coordinated, but not necessarily freer.
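
To make the computational image concrete, here is a minimal sketch in Python (the landscape and every number are invented for illustration): a greedy hill climber started near the lower of two peaks stops there, because every nearby step looks worse, even though a much higher peak exists elsewhere.

    import math

    def landscape(x):
        # Invented two-peak terrain: a "good enough" summit near x = -1
        # (height ~1.0) and a better one near x = 2 (height ~2.0).
        return math.exp(-(x + 1) ** 2) + 2.0 * math.exp(-(x - 2) ** 2)

    def hill_climb(x, step=0.1):
        # Greedy search: move only while a neighbor looks strictly better.
        while True:
            best = max((x - step, x, x + step), key=landscape)
            if best == x:        # no neighbor improves: the search stops
                return x
            x = best

    x_stuck = hill_climb(-1.5)
    print(round(x_stuck, 2), round(landscape(x_stuck), 2))   # -1.0 1.0

The climber is not wrong at any single step; it is rewarded for stopping early, which is the computational analogue of premature relief.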

That is the central point of this closing text: AI-assisted collectivity is not automatically more critical collectivity. Very often, it is only faster collectivity, more confident collectivity, and collectivity with a stronger appearance of intelligence. The group continues circling around the same assumptions, the same filters, the same styles of reasoning. Only now it does so with more speed, more comfort, and less internal friction. [1][2][3]

There is an important bias here: the bias of predictivity. We do not only think. We also bet on what we believe is going to happen. We tend to accept more easily what confirms the anticipation already forming inside the body. And AI enters exactly at that point, because language models are statistical systems trained to predict likely continuations from previous context. In other words, AI does not first “know” and then respond; it calculates probable continuations from the data on which it was trained and from the context that we ourselves place into the prompt. [4][5]
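
A toy sketch makes this visible (the corpus below is invented; real models predict tokens with deep networks, but the probabilistic principle is the same): the system returns the statistically likely continuation of its context, which is not the same thing as a verified fact.

    from collections import Counter, defaultdict

    # Toy bigram "language model": count which word follows which,
    # then continue with the most probable next word.
    corpus = ("the group is right the group is confident "
              "the group is right the answer is right").split()

    follow = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follow[prev][nxt] += 1

    def continue_from(word):
        # Most likely next word and its estimated probability.
        counts = follow[word]
        nxt, n = counts.most_common(1)[0]
        return nxt, n / sum(counts.values())

    print(continue_from("is"))   # ('right', 0.75): probable, not verified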

That is why, when the AI’s answer fits what the group already suspected, desired, or feared, the tendency is to validate it too early. Recent work on AI-assisted decision-making found that professionals trusted and accepted AI recommendations more when those recommendations were congruent with their initial judgment and intuition. Another study, from a metacognitive angle, suggests that when the logic of the AI seems to align with the user’s own logic, that alignment strengthens the feeling of being “on the right path” and helps preserve the strategy already underway. In embodied language: when the machine confirms the rhythm already alive inside us, doubt loses room. [6][7]

And this matters deeply, because we are the ones asking the questions and we are the ones supplying the starting material of the exchange. The group brings its selections, its fears, its interests, its gaps, and its framing into the prompt. The AI works on top of that. It extracts statistical patterns from the corpus on which it was trained and from the context it was given in the question. So the bias is not only “in the machine.” It can arise from the junction between what the group already expects, the way the group asks, and the way the model probabilistically continues that trail. The result can be a highly convincing answer that is still narrow.
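
That junction can be sketched as a loop (all values below are invented for illustration): the group frames its question around its current belief, the answer is a probable continuation of that framing, and only congruent answers are accepted. Nothing in the loop pulls the belief toward what a wider search might find.

    import random
    random.seed(7)

    better_answer = 10.0   # where a wider search could actually lead
    belief = 2.0           # the group's starting assumption
    tolerance = 1.0        # answers farther than this "feel wrong"

    for _ in range(50):
        # The prompt is framed around the current belief, so the answer
        # is a probable continuation of that framing, not of the territory.
        answer = random.gauss(belief, 1.5)
        if abs(answer - belief) <= tolerance:      # congruent: accepted early
            belief = 0.8 * belief + 0.2 * answer   # belief barely moves

    print(round(belief, 2), "vs", better_answer)   # belief stays near 2.0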

This narrowing becomes even stronger when dependence turns into habit. A 2024 systematic review linked hallucination, algorithmic bias, and lack of transparency to over-reliance on AI dialogue systems, with impacts on decision-making, critical thinking, and analytical ability. In 2025, another study found that greater AI dependence was associated with lower critical thinking, with cognitive fatigue partially mediating that relationship. In embodied language: the earlier the group hands over the weight of elaboration, the less it sustains its own musculature for contrasting, doubting, and rebuilding. [1][2]

There is another problem: AI does not enter the group as a neutral surface. Recent reviews on bias and fairness in large language models show that these systems can learn, perpetuate, and amplify harmful social biases. And work on iterative bias amplification suggests that repeated interactions among models, texts, and users can magnify subtle inclinations over time. This speaks directly to our image of the bubble: if the group keeps asking in similar ways, keeps validating what confirms its anticipations, and keeps receiving answers produced by models trained on already biased distributions, the collectivity may become smoother on the outside and narrower on the inside. [3][8]
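
A toy iterated-learning loop, in the spirit of the amplification result in [8] (the parameters below are invented, not taken from that paper), shows how a small inclination compounds: each generation re-estimates a preference from a finite sample of the previous one, plus a slight tilt from congruence-seeking.

    import random
    random.seed(0)

    p = 0.55            # slight initial inclination toward one framing
    tilt = 0.02         # small per-generation push from validating what confirms
    sample_size = 200   # each generation learns from a finite sample

    history = [p]
    for _ in range(30):
        sample = sum(random.random() < p for _ in range(sample_size))
        p = min(1.0, sample / sample_size + tilt)   # re-learned, slightly tilted
        history.append(p)

    print([round(x, 2) for x in history[::10]])   # e.g. climbs from 0.55 toward 1.0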

There is also the risk of inflated confidence. A 2025 study with knowledge workers found that higher confidence in generative AI was associated with less critical thinking, while higher self-confidence was associated with more critical thinking. That matters here because it shows that the danger is not simply “using AI,” but entering into a relationship in which the fluidity of the machine’s answer replaces the group’s own elaboration. The body is grateful for the ease. But gratitude is not the same thing as understanding. [9]

And all of this becomes more dangerous because AI language can sound impeccable even when the basis is wrong. Recent research on hallucination describes exactly that: fluent, coherent, plausible outputs may still be factually incorrect, inconsistent, or fabricated. The collective body, however, often responds first to fluency, not to verification. The answer sounds right, and that sounding-right can be enough to stabilize the conversation inside a local optimum. [10]

In BrainLatam2026 terms, this point is decisive. A collectivity in Zone 2 may use AI as partial support while preserving contrast, revision, return to body-territory, and openness to difference. A collectivity in Zone 3 tends to use AI as a seal of closure: the answer arrives, the group calms down, disagreement falls, the appearance of intelligence rises, and plasticity shrinks. The machine does not create the bubble by itself, but it can harden it elegantly. [1][2][9]
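
In computational terms (an illustrative analogy, reusing the invented two-peak landscape from the first sketch), the difference between the zones is whether exploration survives: Zone 2 keeps restarting the search from elsewhere in the space, while Zone 3 accepts the first summit it reaches.

    import math
    import random
    random.seed(1)

    def landscape(x):
        # Same invented two-peak terrain as the earlier sketch.
        return math.exp(-(x + 1) ** 2) + 2.0 * math.exp(-(x - 2) ** 2)

    def hill_climb(x, step=0.1):
        while True:
            best = max((x - step, x, x + step), key=landscape)
            if best == x:
                return x
            x = best

    # "Zone 3": one greedy run from the group's habitual starting point.
    zone3 = hill_climb(-1.5)

    # "Zone 2": the same climber, with deliberate restarts from elsewhere
    # (contrast, revision, openness to difference).
    zone2 = max((hill_climb(random.uniform(-4, 4)) for _ in range(10)),
                key=landscape)

    print(round(landscape(zone3), 2), round(landscape(zone2), 2))   # ~1.0 vs ~2.0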

So the most important question is not only:
did the AI get it right or wrong?

But this:
what is the AI doing to the cognitive metabolism of the group?
Is it opening more variation, more contrast, more capacity to revise?
Or is it only reducing friction, accelerating consensus, and offering a local optimum perfumed with intelligence?

Perhaps the most dangerous signal is not when the AI responds badly.
Perhaps it is when it responds too well for a group that no longer wants to leave its own bubble.

Because AI-assisted collectivity may look like expansion.
But without embodied criticality, it may also be nothing more than confinement with a good interface.

References

[1] Zhai et al., 2024 — The Effects of Over-Reliance on AI Dialogue Systems on Students’ Cognitive Abilities: A Systematic Review — A systematic review linking hallucination, algorithmic bias, and lack of transparency to over-reliance, with impacts on decision-making, critical thinking, and analytical ability.

[2] Tian & Zhang, 2025 — Learners’ AI Dependence and Critical Thinking: The Psychological Mechanism of Fatigue and the Social Buffering Role of AI Literacy — Shows that greater AI dependence was associated with lower critical thinking, with cognitive fatigue mediating part of the relationship.

[3] Gallegos et al., 2024 — Bias and Fairness in Large Language Models: A Survey — A broad review showing that LLMs can learn, perpetuate, and amplify harmful social biases.

[4] Minaee et al., 2025 — Large Language Models: A Survey — Describes LLMs as statistical language models trained on large text corpora and explains the probabilistic basis of token prediction and continuation.

[5] Ramaswamy, 2026 — NeuroAI: Bridging Brain Science and Artificial Intelligence — Argues that current AI systems still hallucinate, struggle with adaptation, and may amplify misconceptions when built on incomplete understanding of the brain, while framing NeuroAI as dialogue rather than automatic substitution of human judgment.

[6] Bashkirova & Krpan, 2024 — Confirmation Bias in AI-Assisted Decision-Making: AI Triage Recommendations Congruent with Expert Judgments Increase Psychologist Trust and Recommendation Acceptance — Shows that AI recommendations congruent with a user’s initial judgment tend to increase trust and acceptance.

[7] von Zahn et al., 2025 — Knowing (Not) to Know: Explainable Artificial Intelligence and Human Metacognition — Shows how alignment or misalignment between perceived AI logic and human logic can modulate confidence, delegation, and metacognitive control.

[8] Ren et al., 2024 — Bias Amplification in Language Model Evolution: An Iterated Learning Perspective — Proposes that repeated interactions among models can amplify subtle biases over time.

[9] Lee et al., 2025 — The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers — Indicates that higher confidence in generative AI was associated with less critical thinking, while higher self-confidence was associated with more critical thinking.

[10] Huang et al., 2025, and Anh-Hoang et al., 2025 — Recent reviews on hallucination in LLMs — Describe hallucination as fluent, coherent, plausible output that may still be factually incorrect, inconsistent, or fabricated.



#eegmicrostates #neurogliainteractions #eegnirsapplications #physiologyandbehavior #neurophilosophy #translationalneuroscience #bienestarwellnessbemestar #neuropolitics #sentienceconsciousness #metacognitionmindsetpremeditation #culturalneuroscience #agingmaturityinnocence #affectivecomputing #languageprocessing #humanking #fruición #wellbeing #neurorights #neuroeconomics #neuromarketing #religare #skill-implicit-learning #semiotics #encodingofwords #meaning #semioticsofaction #mineraçãodedados #soberanianational #mercenáriosdamonetização

Jackson Cionek

New perspectives in translational control: from neurodegenerative diseases to glioblastoma | Brain States