Jackson Cionek

How can near infrared spectroscopy be informative about the brain activation of children using a hearing implant? Preliminary case-control findings

Brain Latam 2026 comments about:

How can near infrared spectroscopy be informative about the brain activation of children using a hearing implant? Preliminary case-control findings

Close your eyes for 10 seconds and picture a simple scene: you’re sitting still, and someone who loves you speaks close to your ear—“home voice,” warm prosody, that living melody of speech. Notice what your body does before you “think” anything: does your breath drop? does your chest soften? does your jaw unclench? Or do you become alert, slightly tightened, scanning?

That tiny bodily shift is the best doorway into the core question of this paper.

The question (in living language)

On the very day a cochlear implant is activated, does the brain of a child with profound hearing loss already show a cortical response pattern (measured as oxygenation with fNIRS) similar to hearing children matched by “auditory age” (time of effective experience with sound)?
And a second layer: does what we see in the cortex connect—over time—to what clinicians observe in hearing skills and language development?

This is not a “sound in, cognition out” story. In BrainLatam2026 terms, perception is not a channel—it is a bodily state. And this study tries to touch that state with a stimulus that is already embodied: the mother’s voice.

The experimental design (so you can feel it)

The design is a case–control comparison:

  • One child with a cochlear implant is measured on the activation day (day 0).

  • Three hearing children serve as controls, matched by auditory age—so the comparison isn’t “same chronological age,” but “similar time living with sound.”

Now the detail that makes your body recognize the experiment: the child is not in a sterile lab posture. The child is sitting on the mother’s lap. The auditory stimulus is not synthetic beeps. It is the mother speaking in spontaneous, child-directed speech, with natural prosody—voice as belonging, not voice as mere acoustic input.

The task is structured in blocks you can almost breathe with:

  • About six blocks of roughly 10 seconds of mother’s voice alternating with 10 seconds of silence, around 60 dB SPL.

If you want to embody the protocol: imagine 10 seconds of someone calling you into the world, then 10 seconds of quiet. Over and over. Your nervous system will start predicting, relaxing, preparing—this is already Mente Damasiana at work (interoception + proprioception + action-situation).
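The alternating paradigm described above can be sketched as a simple event timeline. The exact counts and durations are the approximate values stated in the text (about six blocks, roughly 10 s of voice and 10 s of silence); this is an illustrative reconstruction, not the authors' actual stimulus script.

```python
# Sketch of the block paradigm described above.
# Assumed values taken from the text: ~6 blocks, ~10 s voice / ~10 s silence.
N_BLOCKS = 6
VOICE_S = 10.0
SILENCE_S = 10.0

def block_timeline(n_blocks=N_BLOCKS, voice_s=VOICE_S, silence_s=SILENCE_S):
    """Return (onset_s, duration_s, condition) triples for a voice/silence design."""
    events, t = [], 0.0
    for _ in range(n_blocks):
        events.append((t, voice_s, "mother_voice"))
        t += voice_s
        events.append((t, silence_s, "silence"))
        t += silence_s
    return events

events = block_timeline()
```

A timeline like this is what gets convolved with a hemodynamic response model when the fNIRS data are analyzed block-wise.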

What was measured (without mysticism)

They use fNIRS (functional near-infrared spectroscopy)—a noninvasive method that tracks changes in oxygenated and deoxygenated hemoglobin (HbO/HbR). It doesn’t “read thoughts.” It captures local metabolic demand as a proxy for neural activation.
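Concretely, fNIRS recovers HbO/HbR changes by measuring optical-density changes at two wavelengths and inverting a small linear system (the modified Beer-Lambert law). The sketch below shows only the shape of that computation; the extinction coefficients, distance, and pathlength factor are illustrative placeholders, not calibrated values for any real device.

```python
import numpy as np

# Modified Beer-Lambert law: dOD(wavelength) = eps(wavelength, chromophore) * dC * d * DPF.
# All numbers below are ILLUSTRATIVE placeholders, not calibrated constants.
EPS = np.array([[1.5, 3.8],   # wavelength 1: [eps_HbO, eps_HbR]
                [2.9, 1.1]])  # wavelength 2: [eps_HbO, eps_HbR]
D = 3.0    # assumed source-detector separation (cm)
DPF = 6.0  # assumed differential pathlength factor

def hb_changes(d_od):
    """Invert the 2x2 system to get [dHbO, dHbR] from optical-density changes."""
    return np.linalg.solve(EPS * D * DPF, np.asarray(d_od, dtype=float))

d_hb = hb_changes([0.02, 0.015])  # one dHbO/dHbR estimate per channel
```

The point of the exercise: "activation" in this paper means exactly this kind of concentration change, channel by channel, not anything more mentalistic.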

Instrumentation details matter because they define what “cortex” means here: a system with many channels covering broad regions (frontal, temporal, parietal, occipital; both hemispheres). The pipeline includes practical safeguards typical of real child data (motion correction, removing bad channels, filters, baseline windows). In other words: not perfect, but honest about the body moving.
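A minimal sketch of those safeguards, under assumed parameters (sampling rate, filter band, outlier threshold are my illustrative choices; real child pipelines also add motion correction such as spline or wavelet methods, which is omitted here):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 10.0  # assumed sampling rate (Hz)

def bandpass(x, low=0.01, high=0.5, fs=FS, order=3):
    """Band-pass filter to keep slow hemodynamics, reject drift and cardiac noise."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def flag_bad_channels(data, thresh=10.0):
    """Flag channels whose variance is a robust (median/MAD) outlier."""
    v = data.var(axis=1)
    med = np.median(v)
    mad = np.median(np.abs(v - med)) + 1e-12
    return np.abs(v - med) / mad > thresh

def baseline_correct(data, baseline_samples):
    """Subtract each channel's mean over the baseline window."""
    return data - data[:, :baseline_samples].mean(axis=1, keepdims=True)

rng = np.random.default_rng(0)
data = rng.standard_normal((8, 600))   # 8 channels x 60 s at 10 Hz (synthetic)
data[3] *= 50                          # simulate one noisy channel
bad = flag_bad_channels(data)
clean = baseline_correct(bandpass(data[~bad]), baseline_samples=50)
```

Each step maps to a safeguard named above: filtering, bad-channel removal, baseline windows; the honest part is that a moving child makes all three necessary.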

The clinical context (the body in Zone 1)

The implanted child had bilateral profound sensorineural hearing loss and had tried hearing aids and speech therapy early, without meaningful gains. The child then received a cochlear implant in the right ear at about 1 year and 4 months of age, and on activation day used a bimodal configuration (implant on one side, hearing aid on the other).

On activation day, there were already behavioral signs: attention to voice, response to name, detection of some speech sounds (e.g., vowels), while others were still absent—exactly what you’d expect when a new sensory ecology is being installed in the body.

Findings (what the study claims)

Statistically, the headline result is: no significant difference between the implanted child’s cortical response pattern and the matched hearing controls on the activation day.

Brain maps suggest engagement of temporal regions (intuitively tied to auditory processing) and also parieto–occipital areas in the context of natural speech. Clinically, the child shows improvement over follow-up in hearing and language measures.

The important BrainLatam2026 reading: this is not “the brain is fixed.” It’s early plasticity visible as metabolic participation—an opening.

BrainLatam2026 interpretation (without turning it into ideology)

Mente Damasiana: The mother’s voice is not just a signal; it is a regulatory event. It changes interoceptive tone (safety vs threat), proprioceptive orientation (approach vs freeze), and action readiness (engage vs withdraw). The same “voice” can be Zone 2 (fruition + reorganization) in a safe lap, or Zone 3 (hypervigilance + rigidity) in a threatening context.

Tensional Selves (Eus Tensionais): Think of “the self that hears” as a learned bodily organization. Before activation, that self has no stable sensory support. On activation day, the system begins to rebuild a hearing-self as a workable habit—an embodied stance toward the world.

Zones 1–2–3:

  • Zone 1: functional tensions—therapy routines, adaptation demands, attention effort.

  • Zone 2: moments where the body can relax into learning—voice as belonging, exploration, curiosity.

  • Zone 3: what we try to prevent—chronic threat and rigidity that silence interoception and block corrective prediction.

QSH / Jiwasa: Even in a single-child setup, the design hints at collective modulation. The mother’s lap + voice is a two-body nervous system. A next-step study could measure this directly via dyadic signals (e.g., hyperscanning mother–child, or adding autonomic measures like HRV/respiration).

What I would improve in the next experiment (to test the mechanism, not the narrative)

  1. Longitudinal fNIRS: pre-activation (baseline), day 0, 1 month, 6 months—because Zone 2 is a process, not a photo.

  2. Condition contrasts: mother’s voice vs unfamiliar voice; speech vs non-speech; predictable vs unpredictable timing—to separate “belonging/regulation” from “acoustic detection.”

  3. Add physiology: respiration + HRV to track whether voice-driven learning rides on bodily safety, not just cortical oxygenation.
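For point 3, the kind of physiology measure meant here can be as simple as RMSSD, a standard short-term HRV index computed from beat-to-beat (RR) intervals. The intervals below are made-up illustrative values; in practice they would come from an ECG or PPG trace recorded alongside the fNIRS.

```python
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive differences between RR intervals (ms)."""
    diffs = np.diff(np.asarray(rr_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

# Illustrative values only: a regulated rhythm varies beat to beat,
# a threat-rigid rhythm barely does.
calm = [850, 870, 840, 880, 860, 875]
tense = [600, 602, 601, 603, 602, 601]
assert rmssd(calm) > rmssd(tense)
```

Tracking an index like this alongside the voice blocks would let the study ask directly whether cortical engagement rides on bodily safety.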

The organic link to DREX Cidadão (as functional inference, not propaganda)

If a mother’s voice on a safe lap helps the nervous system exit threat and enter learning, then public systems that reduce metabolic threat should increase the time people can live in Zone 2—flexible, creative, critically awake.

Here DREX Cidadão enters as a retail CBDC-based financial ecology in which money/credit is generated in the citizen, not in banks or in the speculation of the super-rich. In the BrainLatam metaphor, it is the Body of the State feeding, protecting, and stabilizing its citizens so they can remain in Zone 2 longer, like cells that, once formed and energized, can produce, cooperate, and innovate. That's "policy as bodily ecology": reducing chronic threat to prevent the social slide into Zone 3.

Why this paper matters (even with small N)

It’s small (one case, few controls), but it gets something big right: it puts neuroscience back inside the lap—inside belonging. And it reminds us that in early life, “brain,” “voice,” and “territory” are the same landscape—just described through different instruments.



#eegmicrostates #neurogliainteractions #eegnirsapplications #physiologyandbehavior #neurophilosophy #translationalneuroscience #bienestarwellnessbemestar #neuropolitics #sentienceconsciousness #metacognitionmindsetpremeditation #culturalneuroscience #agingmaturityinnocence #affectivecomputing #languageprocessing #humanking #fruición #wellbeing #neurorights #neuroeconomics #neuromarketing #religare #skill-implicit-learning #semiotics #encodingofwords #meaning #semioticsofaction #mineraçãodedados #soberanianational #mercenáriosdamonetização