As part of the Philosophy Book Fair 2025, the Faculty of Philosophy of Universitas Gadjah Mada (UGM) held a discussion entitled “AI for Media, AI for Journalists” on Friday, 31 October 2025. The event featured three speakers: Prof. Dr. Rr. Siti Murtiningsih, Dean of the Faculty of Philosophy UGM; Eko S. Putro, Director of Pandangan Jogja; and Haris Firdaus, Head of the Central Java–Yogyakarta Bureau of Kompas.
In her opening remarks, the Dean of the Faculty of Philosophy raised ethical concerns arising from the use of artificial intelligence in newsrooms. She questioned the increasingly blurred boundary between human-produced and machine-generated work, and described a scenario in which a newsroom produces news automatically, without human reporting. “Imagine a situation close to a deadline where a machine merely assembles a script and uploads it. When such content is reproduced continuously and accepted by the public as knowledge, what kinds of problems will emerge?” she asked.
According to her, the greatest dilemma lies not merely in technology itself, but in ethical responsibility. If journalists no longer conduct reporting and news content is produced solely from prompts and unverified data, the public risks being exposed to misleading information. Nevertheless, she emphasized that resisting technological change is not a viable option.
“News that is repeated over time can eventually be perceived as truth. That is frightening if we fail to anticipate it,” she warned. “The only way forward is adaptation. This technology should be treated as a collaborator, not an adversary. In doing so, our humanity can actually be enhanced rather than diminished,” she stressed.
A similar view of AI as part of a broader transformation was expressed by Eko S. Putro. He rejected the notion of AI as merely a technical tool, arguing instead that AI represents a new way of life. He compared the emergence of AI to previous technological revolutions—electricity, computers, and the internet—that fundamentally reshaped human life.
In the context of journalism, Eko argued that AI should serve to strengthen public trust. Writing news with the assistance of AI is possible, but it must never bypass verification processes. “If AI is used while verification is abandoned, that is no longer journalism. The main message is not efficiency, but improving the quality of work and of human beings,” he stated. He added that when AI is used wisely, “humans can actually work in more humane ways.”
Meanwhile, Haris Firdaus offered a concrete overview of how AI is already used in journalistic practice. According to him, artificial intelligence has long assisted journalists, even before the era of large language models. “AI was already in use long before ChatGPT—for transcription, research, document translation, and data summarization,” he explained. He cited an example from Kompas Data Journalism, where AI helped journalists review hundreds of court rulings totaling thousands of pages.
Haris also described international media practices, such as corruption-detection algorithms in Peru and technologies for identifying illegal mining in Ukraine using satellite imagery. In Indonesia, several media organizations are currently testing chatbots based on their own reporting. Nonetheless, he cautioned against the danger of journalists relying entirely on AI for the writing process.
“Large language models have a high potential for hallucination because they do not understand texts in the same way humans do,” he noted. When reporters simply input interview recordings and ask AI to write the news, crucial field context is lost—intonation, body language, the atmosphere of the setting, and even journalistic intuition. “Journalism is not merely about writing information. It carries context, empathy, and a commitment to the public, especially the marginalized. That will not happen if everything is handed over to AI,” he emphasized.
Haris concluded by underscoring the importance of clear guidelines for the use of AI in journalism. Indonesia, he noted, is fortunate that the Press Council has issued guidelines affirming that humans must remain responsible for supervising and verifying the entire process. “AI should add value, not reduce quality,” he concluded.