
Government Regulations and the Advancement of Artificial Intelligence Technology: Gaps & opportunities for development

This study examines trends in the development of laws or government regulations in foreign countries, including Thailand, with a focus on controlling AI to ensure fair decision-making, particularly regarding gender fairness.

Published on May 08, 2024

Theetach Kaewtubtim

Introduction

The increasingly important role of artificial intelligence (AI) technology in human life is undeniable, given capabilities that surpass those of humans in certain respects. AI's strengths include the ability to learn repeatedly and automatically, to analyze extensive and complex data using deep artificial neural networks, and to achieve remarkable accuracy through those networks. AI also makes efficient use of existing data to optimize processes (SAS Institute Inc.). It is applied in many activities of daily human life, such as repetitive production tasks that require precision and consistency, which continue unchanged until the production process itself changes; here the application of AI can increase productivity and reduce errors. AI is also used to enhance existing products, adding diverse functions that meet evolving human needs (Thai Programmers Association, 2018).

Although AI plays a significant role for humans, its development has historically lacked control, particularly in terms of fairness. Many cases reflect this lack of fairness in AI's decision-making processes, leading to disparities across dimensions such as gender, race, and social group. These disparities are prevalent in today's society, as evidenced by news articles highlighting unethical AI practices, including bias in job applicant selection, customer targeting, and gender-discriminative image cropping. Such issues are often attributed to underlying data that is inherently biased; for example, the data used for training may not be representative of diverse societal groups. Furthermore, biases in data often stem from the prejudices of those involved in the AI development process. In essence, the outcomes of AI's decision-making are a reflection of society, revealing biases that the creators themselves may not recognize.

This occurs because individuals involved in the AI development process often have personal preferences, which may influence the selection of information or models to meet their own needs or those of their associates (sometimes even unconsciously), a tendency that is normal for every human being. Consequently, AI developed by certain groups may not align with broader societal goals. For example, AI developed predominantly by males may not yield beneficial outcomes for females, or AI created by white individuals might adversely affect people of color. To address these disparities, government policies, laws, and various regulations are essential for creating a balance between the needs of different groups. This has prompted research into government policy guidelines that support and promote the development and application of AI in ways that foster societal justice, especially in terms of gender fairness. This study will examine trends in the development of laws or government regulations in foreign countries, including Thailand, with a focus on controlling AI to ensure fair decision-making, particularly regarding gender fairness. It aims to identify existing gaps in laws and provide recommendations for future improvements.

Theoretical Framework

The central concept in this study is the concept of artificial intelligence, or AI, which refers to machines equipped with functions that enable them to understand, learn, reason, and solve problems. Machines that possess these abilities are considered to embody AI itself. AI is classified into three levels according to their abilities or intelligence, as follows.

  1. Narrow AI refers to AI with specialized abilities that surpass those of humans. For example, AI-assisted robotic surgery may perform surgical procedures with greater skill than human doctors.

  2. General Artificial Intelligence (General AI) is AI that possesses the same level of ability as humans, capable of performing any task that a human can, with comparable efficiency.

  3. Strong Artificial Intelligence (Strong AI) is AI that exhibits superior abilities compared to humans in many areas.

In addition, AI is divided into three categories: 1) Artificial Intelligence, 2) Machine Learning, and 3) Deep Learning. Artificial Intelligence, defined in the 1950s, is the science of programming machines to solve human problems. This discipline arose from scientists' interest in how a computer could solve problems autonomously. Essentially, AI enables computers to exhibit human-like qualities and capabilities, effectively mimicking human skills. Machine Learning, a subset of AI, focuses on training machines to recognize patterns in the data they are given. It is based on the principle that all data exhibit patterns which can suggest possible outcomes. For example, machine learning can be used to predict future stock prices based on historical price data. Deep Learning, a further subset of Machine Learning, involves machines using multiple layers to process and learn information. The complexity of the model increases with the number of layers. For instance, Google's GoogLeNet model, which uses a total of 22 layers, is employed to analyze and understand images. In deep learning, the learning phase is structured through a neural network, often described as a layered architecture, where each layer builds upon the previous one (Thai Programmers Association, 2018). Over time, what is known as AI has evolved from merely replicating human abilities to becoming tools with superhuman capabilities (Figure 1).
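The layered architecture described above can be illustrated with a minimal sketch. This is a toy example only: the network shape and weights below are invented for illustration and are unrelated to any real model. It shows the core idea that each layer transforms the output of the previous one, so stacking more layers yields a deeper model.

```python
def relu(values):
    # Simple activation: negative values become 0, positives pass through.
    return [max(0.0, v) for v in values]

def dense(inputs, weights, biases):
    # One fully connected layer: each output is a weighted sum of all inputs plus a bias.
    return [sum(i * w for i, w in zip(inputs, row)) + b
            for row, b in zip(weights, biases)]

# Hypothetical parameters for a 2-layer network (2 inputs -> 3 hidden units -> 1 output).
w1 = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
b1 = [0.0, 0.1, -0.1]
w2 = [[1.0, -1.0, 0.5]]
b2 = [0.2]

x = [1.0, 2.0]
hidden = relu(dense(x, w1, b1))   # layer 1 builds a representation of the input
output = dense(hidden, w2, b2)    # layer 2 builds on layer 1's output
print(output)
```

Deep models such as the 22-layer image network mentioned above follow the same principle, only with many more layers and parameters learned from data rather than written by hand.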


Figure 1: Evolution of AI. Source: Thai Programmers Association, 2018

In the past, numerous studies on AI management laws have primarily focused on the application of these laws to protect the rights of artificial intelligence entities. However, more recently, as issues concerning the application of AI in various activities have led to consumer unfairness, the focus has shifted toward the legal aspects of liability when consumer damages occur, as well as the protection of personal data rights. For instance, legal concepts from Phumin Butin’s article (2018) discuss the legal dimensions involved in regulating AI, which encompass three main areas:

  1. Protection of the rights of artificial intelligence: This involves various types of intellectual property laws covered under both the Civil and Commercial Code and intellectual property law.

  2. Liability when damage occurs to consumers: This applies when artificial intelligence, whether in the form of industrial robots or various automated work systems, causes damage. Affected parties can pursue legal action under related laws, including the Civil and Commercial Code and the Act on Liability for Damages Occurring from Unsafe Products B.E. 2551 (2008).

  3. Protection of rights to personal information: This is governed by the Personal Data Protection Act B.E. 2562 (2019), along with the Official Information Act B.E. 2540 (1997) and the Telecommunications Business Act.

In terms of the concept of gender fairness, this study will adhere to the international framework provided by the United Nations, specifically Sustainable Development Goal 5, which aims to achieve gender equality and empower all women and girls. This goal encompasses ending discrimination and eliminating all forms of violence against women and girls, recognizing and valuing unpaid care and housework, ensuring full and effective participation in leadership and decision-making at all levels, and providing universal access to sexual and reproductive health. Policy-wise, the goal emphasizes the necessity to implement reforms ensuring that women have equal rights to economic resources, among other facets of gender equality. It also advocates for increased use of technology to empower women and girls (United Nations).

This study will examine trends in the development of laws and government regulations in foreign countries, including Thailand, that aim to ensure AI systems make fair decisions. The focus will be on legal issues related to liability when AI-caused errors primarily result in consumer harm. The definition of AI used in this study is broad, encompassing both machine learning and deep learning. This approach will help reveal existing legal gaps and suggest ways to improve these frameworks in the future.

Methodology

The research method will focus on reviewing relevant literature, including articles and research studies published on the internet. These materials pertain to trends in legal developments and government regulations in foreign countries, including Thailand, that aim to ensure AI exhibits fair decision-making behavior. Special attention will be given to fairness between genders. The findings will be compiled and synthesized into a research article format.

Results

From a review of articles and research related to gender injustice resulting from AI, due to the current regulations' inability to effectively control it, it was found that such problems occur both domestically and internationally. While these issues have not been widely reported in Thailand, there are some interested parties who have addressed ethical concerns in their publications. For example, an article by Chanthaporn Sriphon (2018) examined the legal challenges of creating liability for AI infringements, specifically highlighting the use of driverless cars on public roads. This scenario poses a potential risk where car operators are aware that certain accidents are unavoidable. When such accidents occur, the question arises: who is legally liable? In Thailand, the Civil and Commercial Code specifies the liability for damages caused by a person or an animal. If AI is considered analogous to an animal with an owner, then the owner or the person maintaining it is responsible for compensating any damages caused, unless they can demonstrate that they exercised reasonable care in accordance with the animal’s nature and type or other circumstances, or that the damage would have occurred regardless.

However, during the industrial revolution, machines were developed to replace humans; machines differ from AI in that they are not intelligent. Under the Civil and Commercial Code, strict liability is stipulated: if damage is caused by machinery, the law presumes that the owner or controller of the machinery is liable for damages, unless it can be proven that the damage was caused by force majeure or resulted from the fault of the injured party. This means the injured party does not bear the burden of proof; instead, the owner or controller of the machinery must prove one of these defenses to avoid liability. If AI is considered inherently dangerous, or potentially hazardous by mechanism or design, the same principles apply as with damage caused by machinery. Besides the Civil and Commercial Code, other relevant regulations can govern AI liability. For example, in cases where AI is produced on a large scale, the Act on Liability for Damages Occurring from Unsafe Products B.E. 2551 (2008) can be applied, which holds all entrepreneurs jointly liable, regardless of whether the damage was caused by intentional or negligent actions. Moreover, the law broadens the base of liable entrepreneurs to include manufacturers regardless of their location, so injured parties can sue manufacturers located abroad. Currently, the Civil Procedure Code Amendment Act (No. 28) B.E. 2558 (2015) allows foreign defendants to be served by post, which is more convenient and faster than the previous requirement of using diplomatic channels.

In addition, the Act on Liability for Damages Occurring from Unsafe Products stipulates compensation not included in the Civil and Commercial Code, such as punitive damages. These are applicable in cases where an entrepreneur has produced, imported, or sold goods either knowing they were unsafe, through gross negligence, or knowing the product was unsafe and failing to take any reasonable action to prevent damage. The Act also allows compensation for mental anguish in cases where unsafe products cause death, entitling the injured person's spouse, parents, or descendants to damages for psychological harm. This aligns with Phumin Butin's 2018 article, which addresses liability when consumer damages occur due to artificial intelligence. Under Section 437 of the Civil and Commercial Code, anyone who owns or controls inherently dangerous property or machinery must compensate for any damages that occur. For instance, in the case of smart cars, the law shifts the burden of proof to the controller or possessor, differing from general litigation where the plaintiff must initially prove the defendant's fault. However, this can pose practical challenges, since the liability imposed by law may fall on an individual who, while possessing the AI, is not the direct cause of the damage, such as in incidents involving driverless smart cars developed by others, which shift liability to the motorist. Besides the Civil and Commercial Code, Phumin Butin also discusses violations under the Act on Liability for Damages Occurring from Unsafe Products B.E. 2551 (2008), which holds all entrepreneurs accountable, whether manufacturers, those hired to produce, importers, or sellers of products with unidentifiable manufacturers. An entrepreneur who markets a product in a way that suggests they are the manufacturer or importer is likewise liable for damages. Therefore, if damage arises from an unsafe product, the product manufacturer can be sued directly. However, if the AI in question falls outside the scope of this law, injured parties cannot rely on its protections.

In his 2019 analysis, Muean Sukmat examined the dimension of criminal liability of AI by applying John Stuart Mill's idea that individual freedom is limited primarily by harm to others. According to this view, even if an action poses a danger only to oneself, the state should not impose restrictions unless it harms others. Therefore, when AI harms humans, establishing criminal liability becomes essential to regulate AI behavior and actions. However, determining criminal liability for AI differs from that for humans, which considers both external elements (actions) and internal elements (intent), as well as modes of action. From a legal perspective, machines cannot be equated with humans; currently, AI's capabilities are not developed enough to qualify it as a perpetrator of crimes. By analogy, AI might be compared to children or legally incompetent persons who lack the mental state required for criminal liability. Legally, when an offense is committed through an 'innocent agent,' the agent itself bears no criminal liability. However, the law recognizes the concept of indirect offenders, such as programmers who design AI software that facilitates criminal activity and AI users who exploit AI for personal gain. Yet, both programmers and users typically do not themselves perform the actions that constitute the definition of an offense. Moreover, in considering the form of liability, one must assess the ability of the programmer or user to foresee potential wrongdoing: liability arises only if the fault is consequential and the outcome was foreseeable. For instance, if a programmer negligently programs a system without adequate safeguards, resulting in human death, the programmer could be held liable for negligent homicide. Although current AI is not sufficiently advanced to fulfill both the external and internal elements required for criminal liability, establishing criminal liability for AI itself should be considered if AI systems reach a higher level of sophistication.

From the analysis provided by the aforementioned author, it is evident that there is a problem with the obsolescence of Thai laws, which cannot keep pace with the development of AI technology. Additionally, the analysis suggests possible ways to adapt existing laws to AI, which can be summarized as follows:

  1. Establish a clear definition of AI, ensuring it has a status consistent with types specified by law, such as machinery, dangerous property, or unsafe products, thereby allowing specific laws to be enforced effectively.

  2. Enact special legislation to clearly define the legal status of AI, potentially classifying it as an 'electronic person,' though the rights and duties assigned to AI may differ from those of an ordinary person or a juristic person currently recognized by law.

  3. Require business operators to manage the risks associated with AI, including providing insurance for damages incurred and creating a compensation fund through contributions from relevant stakeholders, such as entrepreneurs.

  4. Design laws specifically to control AI behavior that aligns with ethical principles.

  5. Develop laws regarding AI registration to establish a system for monitoring and controlling its use by organizations in accordance with professional standards.

  6. As for criminal liability, laws should apply to programmers or users as indirect wrongdoers who utilize AI to commit crimes. This can be viewed in terms of the natural form of liability. Even if the programmer or user did not intend for the AI to commit an offense, failing to exercise reasonable care in controlling the AI, which then commits a criminal offense, constitutes negligence.

  7. Guidelines for legal development should be established in tandem with the advancement of AI technology. This will support the enforcement of laws against advanced AI systems in the future, which will possess processing and decision-making capabilities comparable to those of a human being. Offenses committed by such AI could be treated as criminal because they would satisfy both the external and internal elements of an offense.

From the discussion above, it is evident that Thai legal scholars have proposed guidelines to prevent and mitigate damage caused by AI exhibiting unethical behavior. However, while ethical issues generally attract considerable interest, there has been limited awareness in Thailand about designing laws to address gender inequality or damage caused by gender biases. Despite the negative effects stemming from such biases—where AI's biased decision-making processes may contribute to sexual harassment or make it less likely for women to obtain jobs, thereby increasing their risk of unemployment and poverty—there has not been significant action taken to hold responsible parties accountable or to compensate those who suffer losses.

As the European Union continues to progress in developing AI regulations, it is increasingly focusing on gender equality issues. Gender equality has been a fundamental value of the European Union for many decades, supported by regulations first developed concretely in 2000. During this period, European Union members collaboratively established rules stipulating fair practices, such as Council Directive 2000/78/EC, which established guidelines for equal treatment in employment and other areas (Robin Allen QC and Dee Masters, 2020). However, although European law emphasizes equal treatment, the growing role of AI technology in business and daily life presents both threats and opportunities. This has led EU member countries to prioritize the design of policies and regulations that support the development of AI in ways that foster gender equality. In 2018, the 'AI Strategy' was articulated in three areas: enhancing technological and industrial capabilities, preparing to support changes from AI in both the economy and society, and creating an appropriate ethical and legal framework for AI. Moreover, in 2020, the 'White Paper on Artificial Intelligence: A European Approach to Excellence and Trust' was prepared. This policy proposal discusses the role of AI in decision-making processes that create inequality, particularly in sectors such as education and job recruitment, and addresses AI liability legislation that could serve as a foundation for Member States to improve national laws (European Commission).

In addition, within the European Union there are numerous regulations focused on creating equality and preventing unfair discrimination. For example, the Fundamental Rights Agency of the European Union emphasizes basic human rights related to AI, and the Treaty of Amsterdam amended the EC Treaty to address fair treatment across multiple dimensions, including gender, race, skin color, religion, belief, and age. Specifically in matters of gender fairness, the Charter of Fundamental Rights of the European Union provides guidelines. These various regulations demonstrate the strong emphasis these countries place on creating equal opportunities (Robin Allen QC and Dee Masters, 2020). Most recently, in 2021, the European Union introduced draft regulations governing the use of artificial intelligence (the Proposal for a Regulation on Artificial Intelligence) (European Commission). These regulations aim to oversee the scope of companies' use of AI, addressing numerous challenging issues, including ethics, safety standards, and the protection of personal information. The draft regulations were crafted to ensure that AI applications in various activities are safe, respect fundamental rights principles, and provide legal certainty for developers, investors, users, and the government sector. The regulations classify AI by risk: unacceptable risk (AI that violates people's rights and safety), high risk (which must be supervised throughout the development process and meet specific obligations), limited risk (e.g., chatbots, whose developers must clearly disclose their AI nature to users), and minimal risk (other types of AI with no regulatory requirements, though developers should remain aware of impacts on users and society). Furthermore, violations of these regulations could result in fines of up to 6 percent of a company's total worldwide annual turnover. A European Artificial Intelligence Board will also be established to assist countries in implementing these draft regulations.

However, even though such regulations provide a good starting point for issuing more stringent AI regulations and may serve as an important model that could set the framework for AI technology development in many countries, there are still loopholes that could hinder effective enforcement. For instance, an overly broad framework regarding the purpose of AI could complicate proving intent. Additionally, numerous diversity issues remain unaddressed, such as the collection of information on gender, sexual orientation, and ethnicity, which is considered unethical and unacceptable. Moreover, there are unclear definitions, such as what qualifies as high risk, which AI systems should be classified as high risk, and what the requirements for constant control and supervision, including compliance with specified obligations, entail. If such risks are defined too stringently, the rules may impede the development of new innovations. Furthermore, the method of evaluating risk levels lacks consistency between the government and technology developers. Another concern is the creation of a European Artificial Intelligence Board as a separate entity from committees overseeing similar matters, leading to potentially inefficient supervision. Additionally, the framework includes exceptions regarding government use; for example, there are exemptions for using AI to analyze biometric data under certain conditions to aid in crime investigation and suspect identification (The Matter, 2021). These loopholes represent significant obstacles that may prevent the regulations from being as effective as they should be.

Therefore, although the European Union has made the most progress among groups of countries in enacting regulations governing AI, its regulatory approach still contains gaps that create practical obstacles. This situation highlights that the mere existence of laws to control AI is not sufficient. The underlying issue lies in the inconsistency of attitudes and values among different groups in society, which leads to practical challenges. Regulators in the government sector must address not only the problems caused by AI but also design strategies to balance the needs of various societal groups. This approach will facilitate smoother implementation of the law. The steps that may be required include the following:

  1. To address the problem of conflicting values within society, such as disagreements over the disclosure of personal information, it is necessary to take significant measures. For instance, AI developers need access to information that represents all societal groups to avoid biases in the decision-making process. It is essential to consider which data should be collected for AI training and which should not. Moreover, if data is stored and subsequently causes an impact, mechanisms must be established to mitigate these effects and compensate those harmed. Therefore, a consistent framework across all related laws is crucial.

  2. Regarding risk assessment problems, it may be beneficial to involve a third party, not just the AI developers themselves. This third party should consist of individuals knowledgeable across various fields, given that AI pertains to many activities. Such expertise is necessary to support the accurate evaluation of different AI types.

  3. The conflict between strict supervision and the drive to innovate in the field of AI needs to be balanced through behavioral models developed by government agencies, which should find an optimal point for supervision. Enhancing the level of strictness in AI regulation is feasible, but the challenge lies in determining the optimal balance that neither stifles AI development incentives nor allows AI advancements to harm society. This challenge requires cooperative efforts.

Conclusion

Government regulations abroad, including those in Thailand, focus on controlling AI to ensure that its decision-making behavior is ethical, with a particular emphasis on gender fairness. To identify existing legal gaps and suggest improvements for future legislation, this study reviewed relevant literature, such as articles and research published on the internet, examining trends in legal developments and government regulations in foreign countries, including Thailand, that aim to ensure AI decision-making is fair, especially in terms of gender equality.

A review of the current regulations in Thailand highlights the issue of outdated Thai laws that have not kept pace with the development of AI technology, along with potential ways to adapt existing laws to AI. While there is considerable interest in general AI ethics, awareness of designing laws to address gender inequality or harm caused by gender bias in Thailand remains limited. In contrast, the European Union has made significant progress in developing regulations to control AI. Most recently, in 2021, the European Union introduced the Proposal for a Regulation on Artificial Intelligence, drafted to ensure that AI applications in various activities are safe, respect fundamental rights principles, and support clear laws that aid developers, investors, users, and the government sector. These regulations address risks associated with AI and include guidelines to prevent problems and outline penalties. However, despite these provisions being a good starting point for more stringent AI regulation and potentially serving as an important model for establishing frameworks for AI technology development in many countries, there are still gaps that may hinder effective enforcement. Thus, it is evident that the mere existence of laws to regulate AI is not the core issue; rather, the problem lies in the inconsistency of attitudes and values among various groups in society, which can lead to practical challenges. The regulatory government sector must not only address the problems posed by AI but also design guidelines to balance the needs of different societal groups. This approach will ensure smoother law enforcement and include robust methods to prevent and resolve issues through cooperation among individuals with genuine expertise from various sectors.

Policy Recommendations

The following policy recommendations are based on a comprehensive review of various sources, both domestic and international:

  1. It is necessary to accelerate the introduction of legislation to regulate AI. This includes establishing a clear definition of AI so that it aligns with classifications specified by law. Additionally, special legislation should be enacted to clearly define the legal status of AI, potentially designating it as an 'electronic person.' However, the rights and duties assigned to AI may differ from those typically associated with ordinary individuals and juristic persons.

  2. A comprehensive law should be established requiring business operators to manage the risk of damages caused by AI, including providing insurance for such damages. This law should also mandate the creation of a fund to compensate for various damages, funded by contributions from those involved.

  3. There should be a law that specifically governs AI behavior in accordance with ethical principles.

  4. Legislation regarding AI registration is necessary to establish a system for monitoring and controlling its use by organizations, ensuring adherence to professional standards.

  5. Criminal liability laws should be applied to programmers or users as indirect wrongdoers using AI as a tool to commit offenses. Liability should be considered even if the programmer or user did not intend for AI to commit a crime but failed to exercise reasonable care in controlling the AI, leading to a criminal offense. Consequently, the programmer or user would be guilty of negligence.

  6. Guidelines for legal development should be established in parallel with the advancement of AI technology to support its future applicability. AI with processing and decision-making capabilities as sophisticated as those of humans could be implicated in criminal offenses because it would satisfy both the external and internal elements of an offense.

  7. To address the issue of conflicting societal values that hinder law enforcement—such as disagreements over disclosing personal information to AI developers—it is crucial to ensure that the information encompasses the characteristics of all societal groups. Often, the data used to train AI does not include certain groups, leading to bias in decision-making processes. Therefore, it is essential to carefully consider which data should be collected for training AI and which should not. Additionally, if data storage results in any impact, there must be measures to mitigate this impact and ways to compensate those harmed. Thus, it is necessary to draft laws that are consistent across all related legal frameworks.

  8. To address problems related to risk assessment, it may be beneficial to involve a third party in the evaluation process, rather than relying solely on AI developers to evaluate themselves. This third party should consist of individuals with expertise across various fields, given that AI is related to many activities. Therefore, it is necessary to have experts to support the evaluation of each type of AI.

  9. The conflict between the strictness of supervision and the incentives for AI development and innovation should be addressed through behavioral models that must be developed by government agencies to find appropriate points for supervision. It can be said that increasing the strictness in regulating AI is not difficult; however, the challenge lies in identifying the optimal point that will not stifle the incentives for AI development nor cause the AI development to be overly detrimental to society. Therefore, this challenge requires the cooperation of individuals with expertise in these matters.

References

European Commission. (n.d.). Advisory Committee on Equal Opportunities for Women and Men. Retrieved from https://ec.europa.eu/info/sites/default/files/aid_development_cooperation_fundamental_rights/opinion_artificial_intelligence_gender_equality_2020_en.pdf

European Commission. (n.d.). Proposal for a Regulation laying down harmonised rules on artificial intelligence. Retrieved from https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence

Allen, R., & Masters, D. (2020). Regulating for an Equal AI: A New Role for Equality Bodies. Retrieved from https://equineteurope.org/wp-content/uploads/2020/06/ai_report_digital.pdf

SAS Institute Inc. (n.d.). What is artificial intelligence? Retrieved from https://www.sas.com/th_th/insights/analytics/what-is-artificial-intelligence.html

The Matter. (2021). Justice that lags behind may not be in time: When the world moves forward to discuss creating laws for AI. Retrieved from https://thematter.co/science-tech/the-1st-ai-law/141295

Sriphon, C. (2018). Trends in law and artificial intelligence in ASEAN. Retrieved from https://lawforasean.krisdika.go.th/Content/View?Id=350&Type=1

Butin, P. (2018). Law and artificial intelligence. Retrieved from https://so05.tci-thaijo.org/index.php/tulawjournal/article/download/195250/135716/

Thai Programmers Association. (2018). What is artificial intelligence (AI)? Retrieved from https://www.thaiprogrammer.org/2018/12/whatisai/

Sukmat, M. (2019). Criminal liability of artificial intelligence. Retrieved from http://www.niti.ubru.ac.th/lawjournal/fileuploads/2-2562/4.%E0%B8%9A%E0%B8%97%E0%B8%84%E0%B8%A7%E0%B8%B2%E0%B8%A1%20-%20%E0%B9%80%E0%B8%AB%E0%B8%A1%E0%B8%B7%E0%B8%AD%E0%B8%99%20%E0%B8%AA%E0%B8%B8%E0%B8%82%E0%B8%A1%E0%B8%B2%E0%B8%95%E0%B8%A2%E0%B9%8C.pdf

United Nations. (n.d.). Sustainable Development Goal 5: Achieve gender equality and empower all women and girls. Retrieved from https://thailand.un.org/th/sdgs/5
