Artificial Intelligence From An Indonesian Law Perspective
Currently, AI systems are not recognized as legal subjects under Indonesian law. As a result, liability cannot attach to the AI system itself but rather to the parties who design, deploy, or use it.
Indonesia has yet to issue any laws or regulations that specifically define artificial intelligence (“AI”). The only guidance to date has been issued by the Minister of Communications and Digital (“MOCD”, formerly the Minister of Communications and Informatics) through Circular Letter No. 9 of 2023 on Artificial Intelligence Ethics (“CL 9/2023”). CL 9/2023 defines AI as “a form of programming on a computer device to carry out accurate data processing and/or analysis.” While this definition may not comprehensively capture “artificial intelligence,” CL 9/2023 further states that AI includes subsets such as machine learning, natural language processing, expert systems, deep learning, robotics, neural networks, and other related fields. This indicates the absence of a specific and uniform legal definition of AI.
However, AI could potentially fall within the definition of an ‘electronic agent,’ which Law No. 11 of 2008 on Electronic Information and Transactions, as amended by Law No. 1 of 2024 (“EIT Law”), and Government Regulation No. 71 of 2019 on the Provision of Electronic Systems and Transactions (“GR 71”) define as a device within an electronic system created to perform certain actions on specific electronic information automatically, operated by a person. Forms of electronic agents include (i) visual (e.g., the graphic display of a website), (ii) audio (e.g., a telemarketing service), (iii) electronic data (e.g., electronic data capture (EDC), barcode recognition), and (iv) other forms. Although this definition may not perfectly fit AI, as AI often operates autonomously rather than purely automatically and may function as its own electronic system, the broad language of the EIT Law and GR 71 allows for the possibility of including AI under this definition.
Currently, AI systems are not recognized as legal subjects under Indonesian law. As a result, liability cannot attach to the AI system itself but rather to the parties who design, deploy, or use it. Under the EIT Law, civil liability may arise for a person who causes harm to others. Liability may result from an unlawful act committed by an electronic system operator or user that causes loss to another party. Liability can also stem from contractual obligations, where a party may be held liable for damages resulting from a breach of contract or negligence, depending on the terms of the agreement.
In terms of criminal liability, the general principle under Article 55 of the Indonesian Criminal Code applies. Only those who commit, order the commission of, or participate in the unlawful act can be held criminally responsible. This requires identifying a human actor connected to the offense. For instance, if an AI system is designed in a way that disrupts another electronic system or causes it to malfunction, this may violate Article 33 of the EIT Law, which prohibits acts that disrupt or cause an electronic system to operate improperly. This offense is subject to criminal sanctions, including imprisonment of up to 10 years or a maximum fine of IDR 10 billion. Under the EIT Law, criminal liability applies to any person who intentionally commits such an act without authority. If the AI system is deliberately designed to carry out these actions, the liability would rest with the operator. Conversely, if the AI system is abused by a user to commit the offense, then the user would bear responsibility. Some offenses under the EIT Law may give rise to additional criminal liability if they result in material losses to another person. The existence of criminal liability does not eliminate potential civil liability.
AI is seeing growing use in Indonesian workplaces, particularly in areas such as recruitment, where it is used for tasks like CV screening and initial candidate selection, as well as in employee monitoring, performance evaluation, employee training and development, and the automation of routine operational tasks.
Key legal issues to consider when using AI in such circumstances include:
(i) Data protection: If an AI system processes employees’ personal data, employers must ensure compliance with Law No. 27 of 2022 on Personal Data Protection (“PDP Law”). This includes obtaining valid consent where required, implementing appropriate security measures, and ensuring that data subjects are informed of the purpose, scope, and duration of data processing activities. Employers must also ensure that personal data is not used beyond its original purpose without proper justification and safeguards.
(ii) Automated decision making: The PDP Law grants individuals the right to object to decisions made solely through automated processing, including profiling, if such decisions have legal or significant effects on them. This means that if AI is used to automatically screen candidates, rank employees, or trigger disciplinary actions, employers must ensure that such systems do not operate without human oversight. Individuals must be given an opportunity to seek clarification or challenge the outcome.
(iii) Ethical use of AI: Employers are encouraged to comply with CL 9/2023, which emphasizes the ethical use of AI. Adhering to these principles helps ensure that AI is used in a manner that respects employees’ rights and promotes fairness in the workplace.
The development and use of AI may also raise significant privacy issues, primarily at the training stage, which may involve collecting and processing personal data without an appropriate lawful basis and without transparency. Data is often scraped from public sources, which may inadvertently include identifiable or sensitive personal data collected without the relevant individual’s knowledge. The lack of transparency around how AI models are trained and how personal data is used poses further privacy risks. The development and use of AI must therefore comply with the principles of personal data protection under the PDP Law.
Lastly, the MOCD, as the policymaker and regulator for digital infrastructure and the digital ecosystem, oversees the implementation of AI. This responsibility falls under the Directorate of Artificial Intelligence and Emerging Technology Ecosystems within the MOCD.
Disclaimer – The views expressed in this article are the personal views of the author and are purely informative in nature.