UNAIR NEWS – The Faculty of Advanced Technology and Multidiscipline (FTMM) at Universitas Airlangga (UNAIR) hosted a thematic discussion on the implementation of Artificial Intelligence (AI) with delegates from Miami Dade College (MDC), based in Florida, USA, on Tuesday, May 28, 2024. One of the speakers, Muhammad Noor Fakhruzzaman, SKom, MSc, a lecturer in the Data Science Technology study program, outlined the use of AI to address media bias in the post-truth era.
Information bias in the post-truth era
Fakhruzzaman, familiarly known as Ruzza, opened the discussion by noting that many media outlets are influenced by particular interests, leading to biased reporting intended to shape public opinion. He referred to this phenomenon as information bias.
Today, the public finds it difficult to detect information bias, a problem worsened by the widespread use of social media as a primary information source. Ruzza explained that social media recommendation systems compound the problem by predominantly displaying content users are likely to favor. This creates selective exposure, where users engage only with information that reinforces their existing beliefs, fostering the post-truth era.
In this era, social media users may perceive biased information as truth, potentially leading to societal disintegration or division. Therefore, the ability to detect information bias is crucial.
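The selective-exposure mechanism Ruzza describes can be illustrated with a toy recommender. This is a hypothetical sketch, not any platform's actual algorithm: when items are ranked purely by similarity to a user's past preferences, the resulting feed converges on one viewpoint.

```python
# Toy illustration of selective exposure (hypothetical; real recommendation
# systems are far more complex). Each article leans from -1.0 (one camp) to
# +1.0 (the other); the recommender ranks articles by closeness to the
# user's average historical leaning.

def recommend(articles, history, k=2):
    """Return the k articles whose leaning is closest to the user's history."""
    user_leaning = sum(history) / len(history)
    return sorted(articles, key=lambda a: abs(a["leaning"] - user_leaning))[:k]

articles = [
    {"title": "Policy X is a disaster", "leaning": -0.9},
    {"title": "Policy X has flaws",     "leaning": -0.4},
    {"title": "Weighing policy X",      "leaning":  0.0},
    {"title": "Policy X shows promise", "leaning":  0.5},
    {"title": "Policy X is working",    "leaning":  0.9},
]

# A user who clicked on two critical pieces...
history = [-0.8, -0.6]
feed = recommend(articles, history)

# ...is shown only critical coverage, reinforcing the initial belief.
print([a["title"] for a in feed])
# → ['Policy X is a disaster', 'Policy X has flaws']
```

Neutral and opposing articles exist in the pool, but the ranking never surfaces them: the user sees a consistent, one-sided picture without any deliberate editorial choice being made.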

Detecting bias through AI
As the volume of information grows, detecting bias becomes increasingly difficult. To tackle this challenge, Ruzza proposed using AI. He explained the design and technology behind the AI to the MDC delegates and other participants.
“We use Large Language Models (LLMs) to train the AI. Several open-source LLMs, like LLaMA or BERT, can identify patterns in data or information that indicate bias,” Ruzza said.
However, Ruzza acknowledged that AI is not always accurate, though he emphasized that it can serve as an early warning system for information bias.
“As we know, AI can be fallible. For instance, GPT can generate errors, but it is crucial as an early warning tool for identifying bias or misinformation,” he explained.
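The early-warning framing Ruzza describes can be sketched as a thin wrapper around a bias classifier. The article names LLaMA/BERT-style models but gives no implementation details, so `score_bias` below is a hypothetical placeholder standing in for a fine-tuned LLM; the point of the sketch is the wrapper's design, which flags content for human review rather than issuing a verdict.

```python
# Early-warning wrapper around a bias classifier (hypothetical design sketch;
# score_bias is a placeholder standing in for a fine-tuned LLM classifier).

LOADED_PHRASES = ["everyone knows", "so-called", "disaster", "radical"]

def score_bias(text: str) -> float:
    """Placeholder bias score in [0, 1]; a real system would query an LLM here."""
    hits = sum(phrase in text.lower() for phrase in LOADED_PHRASES)
    return min(1.0, hits / 2)

def early_warning(text: str, threshold: float = 0.5) -> dict:
    """Flag text for human review instead of declaring it biased outright:
    since the model is fallible, the output is a warning, not a judgment."""
    score = score_bias(text)
    return {"score": score, "flagged": score >= threshold}

print(early_warning("The so-called experts caused this disaster."))
print(early_warning("The committee published its annual report."))
```

Keeping a human in the loop behind the threshold is what makes the fallibility Ruzza mentions acceptable: a false positive costs a reviewer a few minutes, while a missed flag leaves the status quo unchanged.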
Author: Elsa Hertria Putri
Editor: Khefti Al Mawalia





