KUALA LUMPUR: The Online Safety Act 2025 (ONSA), which took effect on January 1, introduces a comprehensive regulatory framework aimed at digital platforms in Malaysia, focusing significantly on managing online harm and enhancing child protection measures.
According to BERNAMA News Agency, the Act, passed by Parliament in December 2024 and gazetted in May 2025, complements existing legislation like the Communications and Multimedia Act 1998. It imposes specific obligations on platforms that host or distribute user-generated content, shifting regulatory attention towards systemic risk management rather than isolated content decisions, with child safety being a primary focus.
The Act seeks to rectify inconsistent platform responses to harmful content, which historically depended on each platform's internal policies. Even a small number of flagged items left unremoved can expose large numbers of users to online harm.
The enactment of ONSA follows a notable increase in reported online harm in Malaysia, with police recording RM2.7 billion in reported losses from online scams between January and November 2025. On October 24, authorities announced the dismantling of a criminal network linked to child sexual abuse material (CSAM), resulting in 31 arrests and the seizure of over 880,000 digital files.
Since 2022, regulatory or enforcement actions have facilitated the removal of 38,470 items related to cyberbullying and online harassment. However, harmful material persists despite ongoing platform moderation efforts.
Data from the Malaysian Communications and Multimedia Commission (MCMC) shows that major platforms removed about 92 percent of the 697,061 posts flagged as harmful between January 2024 and November 2025. Yet, approximately 58,104 posts remained accessible online, a challenge compounded by the increasing use of automated tools and AI-generated content like deepfakes, which complicates detection and enforcement.
ONSA establishes a regulatory framework for application service providers and content application service providers, applicable to both local and foreign platforms operating in or targeting the Malaysian market. The Act covers categories of harmful content such as CSAM, online scams, financial fraud, obscene material, harassment, content linked to violence or terrorism, material encouraging self-harm among children, content promoting hostility, and content associated with dangerous drugs.
Rather than imposing criminal liability on individual users for lawful expression, the law emphasizes platform-level risk management systems, covering content distribution and recommendation mechanisms, platform governance, and operational practices.
Under ONSA, platforms are mandated to identify and manage risks arising from harmful content on their services, reducing users' exposure to 'priority harms'. They must ensure certain content categories are inaccessible and provide reporting and user support mechanisms. Platforms are also required to prepare and submit an online safety plan, with enforcement measures available for non-compliance, subject to procedural requirements.
The Act specifies measures like age-related safeguards and access restrictions to limit children's exposure to harmful content, influencing the design of default settings, content discovery tools, and interaction features for younger users.
However, ONSA does not extend to private one-to-one messaging, authorize general user monitoring, or create new offences for lawful speech or political expression. Safeguards include notice requirements, opportunities for representation, public records of regulatory directions, and access to appeal mechanisms and judicial review.
While establishing a formal regulatory structure for online safety, ONSA does not replace existing enforcement, education, or prevention efforts. Issues such as digital literacy, parental involvement, and online behavior norms remain outside its scope, with its practical impact depending on platform adaptation and regulatory oversight over time.