AI and Research Ethics: Who Draws the Line?

As artificial intelligence becomes an integral part of modern research, the question facing ethics boards, policymakers, and researchers alike is not just what we should do, but who decides.

From machine learning models diagnosing disease to AI tools generating participant responses in behavioural studies, we are witnessing a fundamental shift in how research is conducted. Yet the ethical frameworks that govern this work were designed in a pre-AI world — one in which researchers were human, decisions were traceable, and accountability was clearer.

The collision between AI and research ethics is no longer theoretical. It is happening now, across disciplines, with increasing speed and scale.

The power — and opacity — of AI in research

AI offers scale, speed, and insights beyond human capacity. However, it also introduces new layers of opacity, including black-box decision-making, automated data handling, and systems that even developers may struggle to explain.

Ethics committees are mandated to protect participants, ensure informed consent, and uphold fairness, yet these principles become difficult to apply when the tools they approve can change behaviour autonomously after review.

Who is accountable when an AI model introduces bias in a study? What happens if AI harvests sensitive data that participants never consented to share? These are no longer edge cases — they’re pressing issues.

The regulatory lag

The rapid pace of AI advancement clashes with the slow pace of ethical oversight. Institutional Review Boards (IRBs) and research ethics committees often lack the technical expertise to evaluate AI-augmented protocols properly. Meanwhile, global standards for AI in research are patchy at best — and absent in many contexts.

Without clear rules, researchers are left to interpret ethical boundaries on their own — or, worse, assume that technical feasibility equals ethical permissibility.

Shared responsibility and scalable solutions

It’s time for a reset. AI research ethics cannot rest solely on the shoulders of researchers or review boards. It must become a collaborative effort, bringing together technologists, ethicists, regulators, participants, and the public.

Practical tools can support this shift. Infonetica’s Ethics RM is a research ethics compliance platform designed to help institutions:

  • Manage ethics applications and approvals digitally, with clearly documented review trails
  • Configure workflows to integrate AI-specific oversight steps (see the sketch after this list)
  • Ensure transparency, auditability, and adaptability across the research lifecycle
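
To make the workflow point concrete, here is a minimal, hypothetical sketch of what an AI-specific oversight step might look like when expressed as code. It is written in Python rather than in Ethics RM's actual configuration interface, and every name, step, and field below is an illustrative assumption, not the product's API.

```python
# Hypothetical sketch (not the Ethics RM API): how a configurable review
# workflow might route AI-assisted studies through an extra oversight step.
from dataclasses import dataclass, field


@dataclass
class Application:
    """An ethics application as submitted for review."""
    title: str
    uses_ai: bool                      # declared use of AI/ML in the study
    ai_model_documented: bool = False  # model card / data-handling statement attached
    audit_trail: list[str] = field(default_factory=list)


def route(app: Application) -> list[str]:
    """Return the ordered review steps for an application,
    logging each routing decision to the audit trail."""
    steps = ["completeness_check", "committee_review"]
    if app.uses_ai:
        # AI-specific oversight: block submissions that lack model
        # documentation, and insert a technical review before the
        # committee sees the application.
        if not app.ai_model_documented:
            app.audit_trail.append("returned: missing AI model documentation")
            return ["return_to_applicant"]
        steps.insert(1, "ai_technical_review")
        app.audit_trail.append("routed via AI technical review")
    app.audit_trail.append(f"workflow assigned: {steps}")
    return steps


if __name__ == "__main__":
    app = Application("Chatbot-mediated survey study", uses_ai=True,
                      ai_model_documented=True)
    print(route(app))       # ['completeness_check', 'ai_technical_review', 'committee_review']
    print(app.audit_trail)  # each routing decision is recorded for auditability
```

The design choice worth noting is that the routing decision itself is written to the audit trail, so a reviewer can later reconstruct why an application did or did not pass through the technical review step.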

By embedding ethics into the design and review process, rather than treating it as an afterthought, such systems make oversight scalable and responsive in the age of AI.

Drawing the line — together

So, who draws the line?

Ideally, all of us. But wanting ethical research is not enough; we need the tools, training, and frameworks to act on that intent. That means smarter platforms, cross-disciplinary dialogue, and a culture that values ethical rigour as much as technical advancement.

In AI-powered research, the question isn't merely what can be done; it's what should be done. Drawing that line requires judgment that is both collective and technology-enabled.