The Cow, The Barn, and the Lab: Why AI Is Moving From “Tool” to “Researcher”
For a long time, one of the sub-categories in our scoring system was “Authentic”—essentially a measure of how much of a paper was the work of a real human rather than generated by AI. A year ago, a low Authentic score was a red flag. Our Advisory Board of physicians and scientists was, understandably, skeptical: hallucinations were common, and there was a general feeling that “artificial intelligence” meant “inferior.”
The Barn Door is Wide Open
Things have changed. Recently we began wondering if the Authentic score had as much impact as we originally imagined. When we asked our advisors what they thought, the consensus was surprising: it’s a “nice to have,” not a “must-have.” As one advisor put it: “The cow has already left the barn. Trying to close the door now isn’t going to help.”
This shift in the lab is why two articles caught my eye this week, offering two very different visions of the future.
The first, from Times Higher Education (THE), argued that AI should support research but can never replace human accountability.
It’s a familiar refrain: AI can summarize a paper or suggest a hypothesis, but the “real” research, the judgment, the responsibility, must remain human.
The second, an interview in MIT Technology Review, revealed that OpenAI (maker of ChatGPT) is currently pouring its resources into building a “fully automated researcher.” Not an assistant. A researcher.
From Assistant to Autonomous Researcher
The authors of the THE piece argue that full automation isn’t realistic or desirable. I’d argue they’re missing the point: full automation is coming regardless, so the question of whether it’s “desirable” is fast becoming moot.
The THE perspective assumes that research is a static human “job” that AI is trying to mimic. But as we’ve seen with our own board, the definition of the “job” is what’s actually changing.
We need to stop viewing AI as a sophisticated filing cabinet and start seeing it as a co-worker.
A lot of the fear surrounding autonomous AI researchers is rooted in the “Skynet” apocalypse of the Terminator films—the idea that if we let go of the reins, the machine will inevitably drive us off a cliff. But history suggests otherwise.
When the printing press arrived, people feared the death of memory. When the calculator arrived, they feared the death of mathematics. In both cases, the “standard” simply evolved. We didn’t stop thinking; we started thinking about bigger things.
Building Better Barns
The truth is, we don’t know exactly what a world of AI-human hybrid research looks like yet, but the apocalyptic predictions usually fail to materialize. Why? Because we aren’t passive observers. As AI evolves, we evolve. We build new safeguards. We develop new cross-checks.
This is exactly why Siensmetrica focuses on T.E.S. (Transparency, Explainability, Significance). If an autonomous AI agent produces a breakthrough in oncology, we shouldn’t care if a human “wrote” it. We should care if the data is transparent, if the methodology is explainable, and if the results are significant.
If an AI can pass those rigorous tests, does it matter if it’s “authentic” human work? Our advisors are starting to say “no.”
We shouldn’t be trying to shove the cow back into the barn. Instead, we should be focused on building better barns: stronger frameworks for accountability that don’t depend on who did the work, but on how the work stands up to scrutiny.
AI is here. It’s moving from “task-taker” to “role-player.” Let’s get over our prejudices, stop worrying about the “artificial” label, and focus on the science. We might be pleasantly surprised at what our new “colleagues” can discover.
~ # ~