
AI Regulatory System for Gender Equality

The Case of Thailand

Published on Apr 30, 2024

Executive Summary

In recent years, artificial intelligence has continually evolved and become embedded in a wide range of economic activities. While advances in AI technology have brought widespread societal benefits, it cannot be denied that AI also poses a formidable threat to human society, especially through biased practices. Because AI is developed and used by groups with diverse interests, its impacts can be biased. To balance the needs of all parties, government regulation and private-sector guidelines are crucial for overseeing AI activities.

From the literature review, gender equality has become a consistent societal focus, and the government has begun prioritizing regulations to supervise various socioeconomic activities and prevent gender inequality. For example, ethical guidelines for trustworthy AI development, established by the National Digital Economy and Society Commission, guide the research, design, development, and application of AI and data science technologies based on international principles, including fairness, non-discrimination, and non-segregation. Under the guidelines, however, challenges remain in holding AI accountable for damages. Existing laws, such as those on product liability and consumer protection, have limitations in addressing harms caused by AI. The legal landscape lacks specific provisions to handle AI-related issues comprehensively, particularly where AI poses a danger to human life. It is essential to continue developing and refining laws to address the dynamic challenges posed by AI technology.

Given the limitations of existing regulations, a recent development is the draft Act on the promotion and support of AI innovation in Thailand. This Act incorporates the principles of fairness, non-discrimination, and non-segregation into the legal framework. It outlines tools to support oversight, including the establishment of AI innovation testing centers, data sharing, the setting of AI standards and certifications, risk assessment systems for AI usage, and standards for agreements between providers and users of AI products or services. However, the proposed legislation contains notable loopholes that may hinder the effectiveness of AI oversight in achieving gender equality. A key vulnerability is the potential misalignment of the activities it governs, which focus more on technology development than on the core issue of oversight. Details are also unclear, particularly regarding gender equality. A further issue is the need for robust enforcement throughout the value chain of AI development and usage, particularly in managing risks and establishing clear guidelines for penalties and compensation.

We therefore propose recommendations for refining the legislation, focusing on key issues within the existing draft Act:

Proposed Amendment Regarding the Definition of "AI Operators"

Recommendation: The legislation concerning AI operators should specify the connections between activities in various economic sectors and the explicit use of AI. This will clarify who is covered, encompassing economic sectors, activities within each sector, and related professions. In-depth research is needed to illustrate these connections, and the findings should be incorporated into the definition. This additional information will support enforcement of the law in other areas.

Establishment of "AI Innovation Testing Centers" for Testing AI Technologies

Recommendation: If the legislation aims to control AI so that it behaves in line with gender equality and other international principles, the establishment of AI innovation testing centers should be geared towards providing clarity about the details of AI systems. This facilitates audit processes and risk prevention in subsequent stages. The testing procedures should be integrated into Category 3, which specifies AI standards, with details legislated under Category 3 to ensure consistency in standards setting, review, testing, and certification. However, if innovation testing primarily supports AI research and development, it should not be singled out as a specific provision in the legislation; instead, it should be enacted as secondary legislation to support and promote the main provisions of the Act.

Empowering the Electronic Transactions Development Agency for Direct Regulatory Oversight

Recommendation: The legislation should strengthen the role of oversight agencies by explicitly assigning responsibilities for evaluation and inspection, the imposition of penalties, and the restitution of compensation. These powers should be clearly outlined in the draft legislation, and secondary legislation should be promptly enacted to detail the procedural aspects of the entire process, including powers for on-site inspections, reporting obligations, and the authority to impose penalties. The legislative provisions should be practical and enforceable, not reliant on voluntary action.

Establishing Data Sharing

Recommendation: In this legislation, data sharing should aim to support AI oversight by promoting transparency and providing information to stakeholders involved with AI and to those impacted, directly or indirectly, by AI decision-making, rather than serving solely to support future technology development.

Within this draft legislation, the content on data sharing should therefore emphasize sharing for the purpose of educating those involved with AI, in order to minimize AI's direct and indirect impacts. The stakeholders involved in data sharing should encompass oversight agencies, business partners, and those using AI throughout the value chain. The legislation may define the scope of data sharing and specify the benefits expected from it, ensuring that sharing serves a purpose and does not result in unnecessary data leaks. Further study may be required to assess the value of data sharing in each format, contributing to the detailed regulations in subsequent secondary legislation.

Certification of Standards for AI Systems' Operations in Various Dimensions, including Diversity, Non-Discrimination, and Fairness

Recommendation: Under this Act, it is crucial to explicitly define the seven standards outlined in Article 23, particularly regarding the promotion of diversity, non-discrimination, and fairness, and to detail how these principles will be implemented. Furthermore, to adapt international best practices to the Thai context, public opinion from Thai citizens nationwide should be considered when refining these standards. It is also essential to promptly establish the behavioral practices associated with these principles across all economic sectors, to prevent gaps in enforcement and the injustices that could result.
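One way a standard on non-discrimination could be made auditable is a selection-rate comparison across gender groups, along the lines of the "four-fifths rule" used in US employment-discrimination practice. The sketch below is illustrative only; the threshold, function names, and data are assumptions, not provisions of the draft Act.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the approval rate per group from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def passes_four_fifths(decisions, threshold=0.8):
    """Flag a disparate-impact warning if any group's approval rate
    falls below `threshold` times the highest group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return all(rate >= threshold * best for rate in rates.values())

# Hypothetical credit-approval log: (gender, approved?)
log = [("F", True), ("F", False), ("F", False), ("F", False),
       ("M", True), ("M", True), ("M", True), ("M", False)]
print(selection_rates(log))      # {'F': 0.25, 'M': 0.75}
print(passes_four_fifths(log))   # False -> would trigger an audit finding
```

A certification regime could require such metrics in a standardized audit report, with the exact threshold and protected groups fixed in secondary legislation.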

Establishment of a Risk Assessment System for AI System Usage and the Development of an Audit Checklist for Assessment and Risk Management as Operational Guidelines

Recommendations:

  • The current draft of the legislation lacks a clear definition of "risk from AI system usage." It is essential to expedite the study of AI-related risks across all economic sectors and promptly enact secondary legislation specifying preventive measures and risk resolution methods. This will empower public agencies and relevant entities to use this information for supervision, control, and mitigation of adverse effects.

  • The current draft does not grant supervisory agencies sufficient authority to enforce the law to its fullest extent. In particular, concerning the discovery of risks or issues arising from AI operations, the law states only that the "Office shall promptly assess the impact and notify relevant agencies to take immediate action." This loophole might result in a return to previously ineffective bureaucratic procedures, allowing risks to escalate uncontrollably. Hence, it is advisable to enhance the role of agencies in proactive and stringent risk management, allowing them to issue immediate cessation orders and potentially impose penalties where the identified risks lead to significant negative consequences. However, if the current draft does not directly empower supervisory agencies to impose penalties, an article may be included authorizing these agencies to act as representatives of the affected parties, allowing them to file lawsuits under civil or criminal law so that enforcement measures are carried out effectively.

In addition, subordinate laws should be designed simultaneously and include at least the following details:

  1. Announcement on the Classification of AI Business Activities

    • In-depth studies must be conducted on the correlation between each economic and occupational sector and artificial intelligence technology, to understand the varying levels of connection, including the connections between different types of businesses and the potential societal impacts. The research process must analyze the opinions of relevant individuals in various fields. Regarding gender equality, it is necessary to draw on the opinions of women working in various economic sectors to supplement these studies.

  2. Announcement on the criteria and methods for defining the standards for conducting artificial intelligence business

  • Technical standards for artificial intelligence must be established based on the seven international principles, including the promotion of diversity, non-discrimination, and fairness, and may incorporate ethical guidelines for artificial intelligence practices. At the same time, feedback from relevant stakeholders in each field should be collected to refine the standards to align with the current societal context. Standards related to gender equality must give women working in various AI-related economic sectors the opportunity to contribute information for enhancing the standards on this issue. Finally, the standards may include hierarchical levels based on the degree of connection to artificial intelligence and the potential negative impacts on society, ranging from lenient to stringent.

  3. Announcement on the criteria and procedures for assessing risks associated with the use of artificial intelligence systems

    • In-depth studies are needed on the risks and levels of impact arising from the use of artificial intelligence in each economic sector, developed into foundational models that all parties can adapt and learn from to prevent and manage future risks. The responsibilities of regulatory bodies should be clearly defined, including following up widely on the use of risk-prone artificial intelligence to control and prevent future harm. Where different standards are set for each business, the certification mark should indicate the standards that have been endorsed. Risk management procedures should be clearly defined, linking the impacts of artificial intelligence in each case to risk management measures. Furthermore, supervisory bodies must be assigned responsibility for continuously monitoring and controlling AI risks and for updating information and models in response to dynamic changes. It is advisable to involve stakeholders in the development of the risk assessment system from the outset. In addition, regulatory bodies should establish a standardized format for risk reports that AI service providers can adopt; the reporting process should clearly reflect the providers' ability to align their operations with the aforementioned principles.

  4. Announcement on the criteria and methods for testing artificial intelligence (instead of an announcement on the establishment of an artificial intelligence testing center)

    • The operations of the testing center should aim to test artificial intelligence against all seven international principles, and this process should be linked to the risk assessment and management system. Testing should be categorized by compliance with each standard, and relevant stakeholders should participate in testing each one. Where testing concerns standards related to gender equality, women working in various AI-related economic sectors should be involved in the testing process. Participants in testing and evaluation should include individuals expected to be impacted by the commercial use of the artificial intelligence. In addition, the legislation should explicitly state that if risks arise during testing, the service provider must promptly rectify the AI system; failure to act within the specified time frame will result in a prohibition on deploying the tested AI for commercial purposes.

  5. Announcement on Criteria and Methods for Addressing AI-Induced Harms

    • Model studies of risks and impacts should be used to establish clear risk management procedures. These procedures should link the impacts of artificial intelligence in each case to risk management measures, which may take the form of penalties, compensation, or damage mitigation. Where entities associated with artificial intelligence fail to comply with standards, or act so negligently as to cause harm, the legal framework should prescribe methods of penalty or compensation in a separate law (not solely withdrawal of certification). A committee should be responsible for investigating and imposing penalties. Where the impacts relate to gender inequality, the legal committee under this law should invite groups of women affected by those impacts to participate in the deliberation process.
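The standardized risk-report format recommended above for AI service providers could be machine-readable so that supervisory agencies can process filings consistently. The field names and risk levels below are illustrative assumptions, not taken from the draft Act:

```python
from dataclasses import dataclass, field, asdict

# Illustrative risk tiers, from lenient to stringent oversight (an assumption).
RISK_LEVELS = ("minimal", "limited", "high", "unacceptable")

@dataclass
class AIRiskReport:
    """A hypothetical standardized risk report an AI service provider
    could file with the supervisory agency."""
    system_name: str
    economic_sector: str
    risk_level: str                      # one of RISK_LEVELS
    principles_assessed: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)

    def __post_init__(self):
        # Reject filings that use an undefined risk tier.
        if self.risk_level not in RISK_LEVELS:
            raise ValueError(f"unknown risk level: {self.risk_level}")

report = AIRiskReport(
    system_name="credit-scoring-model",
    economic_sector="financial",
    risk_level="high",
    principles_assessed=["fairness", "non-discrimination", "transparency"],
    mitigations=["bias audit before deployment", "quarterly selection-rate review"],
)
print(asdict(report)["risk_level"])  # "high"
```

Fixing a schema like this in secondary legislation would let reports be validated automatically and compared across sectors.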

Table of contents

Executive Summary

Background

Conceptual framework

Research results

Gender equality issue

Accountability issue

Future development trend of regulations to control artificial intelligence to support gender equality

Suggestions for improving future legislation in a direction that emphasizes controlling artificial intelligence to support gender equality

Summary

References

Background

In the past few years, Artificial Intelligence (AI) has continuously developed and gained influence on human economic activities. This has led to interconnected impacts across various sectors of the economy. According to an analysis based on the business value chain of Thai entrepreneurs from the Bank of Thailand (cited in the Ministry of Higher Education, Science, Research, and Innovation, and the Ministry of Digital Economy and Society, 2022), it is found that AI plays a crucial role in reducing obstacles in business operations and enhancing competitiveness in the business sector. Notably, businesses in Thailand that are linked to the development of AI and have invested in such technology include the telecommunications sector, healthcare sector, and manufacturing sector. Among the ASEAN countries, the telecommunications sector, financial sector, transportation business, and the healthcare and medical sector (as shown in Figure 1) are the industries that have embraced AI technology the most. Survey data from the Thai Artificial Intelligence Association aligns with this information, highlighting that the business sector most actively applying AI technology is the healthcare and medical sector, followed by the education sector, agriculture, and manufacturing industries.

Figure 1: AI adoption level by sector and country

Source: the Bank of Thailand (cited in the Ministry of Higher Education, Science, Research, and Innovation, and the Ministry of Digital Economy and Society, 2022)

Furthermore, when considering the overall ecosystem of artificial intelligence, a significant number of individuals are involved. This includes technology developers, AI users, service providers and those overseeing AI (as shown in Figure 2). All these stakeholders are part of the AI ecosystem and have direct or indirect connections with artificial intelligence.

Figure 2 AI Ecosystem

Source: the Bank of Thailand (cited in the Ministry of Higher Education, Science, Research, and Innovation, and the Ministry of Digital Economy and Society, 2022)

Although the progress of artificial intelligence technology has brought widespread benefits to society, it cannot be denied that this technology is also becoming a formidable threat to human society. Globally, there have been instances of problematic AI behavior resulting in negative impacts on certain groups within society. This has led various sectors to place increasing emphasis on controlling AI, and academics and mass media have voiced concerns about its frightening aspects. As noted in an article titled "AI isn't dangerous, but human bias is," one way AI becomes a threat is by reflecting the human biases ingrained in the data it is fed. A clear example is gender bias in the credit approval process, which is widespread globally: because biased input data is embedded in AI algorithms, the AI learns to make decisions with inherent biases, leaving women and minority groups with lower chances of credit approval than other groups. Many other problems highlight AI bias, such as the case of COMPAS, an AI tool analyzing the likelihood that a criminal will reoffend. Through the data fed into its learning system, it assigned high reoffense scores to women of color while giving white men low scores, even when the initial offense for the women involved unauthorized bicycle parking while the men's involved property damage with a weapon (The Standard, 2019). There are also notable statistics from Berkeley Haas finding that implementing artificial intelligence has a chance of introducing gender bias of up to 44% and racial bias of 26%. Stanford research reveals that speakers with darker skin are up to 16% more likely to be misunderstood by AI automated speech recognition than white individuals. If AI is used in the interview process, it may lead to unfair scoring: Bavarian Broadcasting studied the use of AI in online interviews (video-hiring AI) and found that the tool's decision-making was embedded with bias. For example, applicants received higher scores if they sat in a well-lit room or in front of a bookshelf, while scores were instantly lowered if they wore hats or head coverings, because the AI had learned from past data that successful applicants tended not to wear anything on their heads. Although clothing has no impact on job performance, this bias led to immediate score reductions. Another instance is the use of AI in the job application selection process by the e-commerce company Amazon. The AI checked all applicant data and assigned scores from 1 to 5 stars, and the result was a biased selection favoring male over female applicants. The bias stemmed from the input data used during the AI's development, which contained far more male than female data; consequently, the AI automatically deducted points when it saw the word "female" or noticed that an applicant had graduated from a women's educational institution (HREX.asia, 2023).

The factors contributing to these problems may stem from the fact that those involved in the development of artificial intelligence often choose data or models that align with their own preferences or biases (sometimes unknowingly), because each individual tends to prioritize themselves and their close associates over people from other groups. The development of AI by certain groups may therefore conflict with the goals of an inclusive society. For example, AI developed by a male group may fail to present favorable results for females; similarly, AI developed by a group with fair skin may negatively impact individuals with darker skin. When the preferences of different groups lead to such diverse impacts, what can create balance among them is the policies, laws, and regulations of both the public and private sectors. These can be applied to oversee the entire process from AI development to implementation, addressing negative impacts and mitigating the effects on individuals harmed by the biases of artificial intelligence.
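The mechanism described above, a system reproducing the skew of the data it is fed, can be illustrated with a toy frequency-based "model"; the historical approval data below is invented purely for illustration:

```python
from collections import Counter

def train(history):
    """'Learn' an approval probability per group from historical decisions.
    Real models are far more complex, but they inherit skew the same way."""
    seen = Counter(g for g, _ in history)
    approved = Counter(g for g, ok in history if ok)
    return {g: approved[g] / seen[g] for g in seen}

# Invented history in which men were approved far more often than women.
history = [("M", True)] * 8 + [("M", False)] * 2 \
        + [("F", True)] * 2 + [("F", False)] * 8

model = train(history)
print(model)  # {'M': 0.8, 'F': 0.2}: the model repeats the past bias
```

Nothing in the training step inspects fairness; the bias arrives entirely through the data, which is why the oversight mechanisms discussed in this paper target data and testing, not only the final system.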

Conceptual framework

The key concepts used as a framework in this study relate to artificial intelligence and its governance. According to information from SAS Institute Inc., the field of artificial intelligence emerged in 1956, though in its early stages it did not gain much popularity. Research in the 1950s focused on problem-solving methods and symbolic patterns. Around the 1960s, work began on training computers to simulate human thought processes, and by the 1970s advanced research programs had begun to plan and guide the development of more explicit artificial intelligence. Intelligent personal assistant systems were created in 2003, leading to virtual assistants such as Siri, Alexa, and Cortana. This early research paved the way for decision support systems and intelligent search systems designed to complement and enhance human abilities. Each era of artificial intelligence development is summarized in Figure 3.

Figure 3: Artificial intelligence development

Source: SAS Institute Inc.

Due to the tremendous potential of artificial intelligence, it has become a crucial tool for various human activities. According to information from SAS Institute Inc., artificial intelligence plays important roles as follows:

  • Artificial intelligence learns automatically and studies through data, capable of processing large quantities of data accurately and efficiently through computer systems.

  • Artificial intelligence possesses intelligence that allows it to interact with humans.

  • It can learn and develop decision-making processes based on past results, creating more accurate and precise decision-making processes.

  • Artificial intelligence can analyze data more comprehensively and deeply using neural networks with multiple layers.

  • It can achieve remarkable accuracy through neural networks, as in interactions with virtual assistants like Alexa, Google Search, and Google Photos, image classification, object recognition, and even cancer detection in MRI scans.

  • Artificial intelligence can maximize the benefits of available data, turning learned algorithms into valuable intellectual property and giving data owners a competitive edge.

In addition to the aforementioned concepts related to artificial intelligence, in designing the guidelines for governing artificial intelligence, the research approach of the Electronic Transactions Development Agency (2022) will be applied. This study has investigated the development of oversight systems in different countries, as well as existing criteria in Thailand, to serve as a framework for designing a new legislative framework for future artificial intelligence governance. Furthermore, the study leads to the formulation of conceptual guidelines for developing new laws aligned with international standards and foreign governance practices, as follows:

  1. International Standards: Examining both international legal frameworks and relevant industry standards to consider minimum standards

  2. Problem-Solving Approaches: Addressing oversight and promotion while considering the balance of economic development, technological security and human rights. The content scope includes:

    1. Goals of oversight and policy frameworks

    2. Nature of oversight

    3. Oversight organizations

    4. Effectiveness of oversight

The findings from this study propose suitable governance measures for Thailand, divided into two groups: first, the establishment of subordinate legislation under the Electronic Transactions Act B.E. 2544; second, the creation of a new Act to govern artificial intelligence technology, following the framework of Section 5 (5) of that Act. In the long term, the focus is on developing a new Act for the direct oversight of artificial intelligence technology. To this end, the Electronic Transactions Development Agency (2022) has proposed principles for drafting such an Act, addressing two main issues: amending laws to align with international standards, and addressing existing legal gaps and obstacles. The summary is as follows:

  1. Amending laws to align with international standards: essential considerations include:

  • Privacy: Amendments should align the Data Protection Act with Article 22 of the General Data Protection Regulation (GDPR).

  • Cybersecurity: Amendments should enhance the Cybersecurity Act and the Personal Data Protection Committee's announcement on cybersecurity measures in 2022 to address cyber threats and add continuous monitoring and assessment criteria.

  • Trustworthiness: Legal principles must be established under the Cybersecurity Act of 2019 to enhance the characteristics of cyber threats and augment criteria for continuous supervision and post-monitoring of cybersecurity, aiming to foster trust and confidence.

  • Fairness and Non-Discrimination: Specific legislation addressing fairness and non-discrimination issues related to AI is currently lacking in Thailand. Adopting approaches and case studies from the United States is recommended for incorporation into future legislation.

  • Transparency and explainability: Clear legislation on this matter is currently absent in Thailand; it is advisable to amend future laws accordingly.

  • Liability: Although existing Thai laws hold manufacturers liable for damages caused by unsafe products, they may not be suitable for addressing AI-related issues. Liability must be accurately assigned, covering various AI applications.

  • Human control: Existing laws in Thailand related to AI are contradictory and need to be harmonized. It is necessary to proceed with the amendment of relevant legal details by enacting a new version of the law, in order to improve the details of supervision and ensure consistency.

  2. Addressing Legal Gaps and Existing Obstacles:

From the study of Electronic Transactions Development Agency (2022), issues related to ambiguity and conflicts in various aspects were identified. These issues could hinder the successful development of laws in the future, and the summary is as follows:

  • Ambiguity in the legal status of AI technology, both as products and services, necessitates the establishment of clear definitions for AI technology. Future legislation should distinctly categorize AI types.

  • Conflicts in the legal foundations for protecting rights, especially regarding data protection and intellectual property, pose challenges.

  • Issues related to licensing agreements and standards that fail to adequately protect consumer rights when problems arise from AI need attention. Current consumer protection laws, such as the Consumer Protection Act, are inadequate to address these challenges.

  • Challenges in international electronic transactions persist due to the lack of clear standards as minimum requirements for international agreements.

From the conceptual framework for designing the draft legislation to regulate artificial intelligence technology mentioned above, it is evident that there is a connection to various issues related to gender equality, particularly in the context of fairness and non-discrimination. This includes other issues relevant to preventing and mitigating the impact of gender bias. Therefore, it is possible that in the future, if the legislation for regulating AI technology is applied impartially, there might be provisions addressing the control of AI to support gender equality within that legislation. Alternatively, secondary legislation specific to this issue may be enacted. The design of methods to control AI to support gender equality would necessarily require consideration of the aforementioned principles and further study about the existing supervision systems.

Research results

From the principles outlined in the Electronic Transactions Development Agency's framework for regulating artificial intelligence, it is apparent that the concept of legislating to regulate AI connects with the issues currently under research. From our perspective, the issues of gender equality and accountability for biased actions are most closely related to designing a framework for controlling AI under the concept of Feminist AI, both in supporting gender equality from the outset and in addressing or mitigating problems of bias. However, the study finds that Thailand does not yet have clear laws specifically addressing these problems. While Thailand has no accountability laws covering certain uses of AI, there are existing laws in other dimensions, not directly related to AI, that can be studied and analyzed for their connections to future AI laws.

Gender equality issue

As countries around the world increasingly prioritize sustainable development, gender equality has become a prominent societal issue. Gender equality and the empowerment of women and girls are identified as Sustainable Development Goal 5, which focuses on promoting gender equality and enhancing the role of women and girls. This goal aims to eliminate actions leading to unjust treatment of women, such as physical and mental violence and sexual harassment. It also encourages respect for the human dignity of women, ensuring their right to choose partners at an appropriate age and granting the right to develop themselves and access various forms of knowledge. Furthermore, importance is given to women's equal participation in politics and work. Under the overarching goal, there are sub-goals relevant to developing AI governance systems that support gender equality, including increasing the use of information and communication technology to enhance the role of women, and adopting and strengthening policies and regulations to promote gender equality and enhance the role of women and girls at all levels (National Economic and Social Development Council).

From this development perspective, governments worldwide have started to emphasize regulations overseeing people's activities in society to prevent gender-based injustice. In Thailand, the Gender Equality Act of 2015 promotes fairness in support of these development goals. This law aims to eliminate discriminatory practices between genders, covering any act of separation, discrimination, or restriction of benefits, whether direct or indirect, on the basis of an individual's gender or an expression that differs from the gender assigned at birth. The Ministry of Social Development and Human Security is the main enforcement agency, with administrators from various government agencies participating as committee members. There is also a committee for adjudicating cases of gender-based discrimination, which aims to resolve disputes and enforce legal penalties and remedies for victims (summary from the Royal Gazette, 2015).

In this regard, considering the Gender Equality Act of 2015 in connection with artificial intelligence, this law may be applicable, to some extent, to protecting individuals who have not been treated fairly and to creating positive incentives for developers or owners of AI. The law clearly prohibits acts of segregation, discrimination, or restriction of benefits, whether direct or indirect, based on an individual's gender or on an expression that differs from the gender assigned at birth. Therefore, those who develop or own AI and wish to avoid legal violations must design or use AI in a way that does not lead to such outcomes. However, applying this law to AI may still face certain challenges, especially regarding the definition of wrongdoers. Under the Gender Equality Act of 2015, wrongdoers are limited to state agencies, private organizations, or individuals; the Act does not explicitly cover wrongs committed by AI operating within designed automated systems. Consequently, individuals who suffer negative impacts from AI decision-making processes may not be able to file complaints directly against the AI. In such cases, legal action may need to be taken against the developers or owners of the AI, yet the law has not established clear procedures for considering and proving liability in such cases. The issue of liability is discussed in more detail in the next section.

In addition to the specific laws on gender equality that have been enforced for many years, there are also policies and guidelines designed to support fairness in artificial intelligence. In 2019, the Committee on Information Technology, Communication and Telecommunication of the National Legislative Assembly passed a policy to promote and support autonomous vehicles, robots and automated systems. It was intended to ensure continuous support and promotion of AI technology, requiring both the government and private sectors to provide ongoing and sincere support. The government should implement fair and practical plans, whether in policy development or in related planning: creating awareness, providing accurate knowledge and understanding to the public, developing a knowledgeable and skilled workforce to support the development of autonomous vehicles, robots and automated systems, and fostering collaboration in a networked manner. Additionally, there should be a well-structured legal oversight system (Office of the Secretary-General of the House of Representatives, 2019). Simultaneously, the Ministry of Digital Economy and Society has collaborated with technology experts from Mahidol University and Microsoft Corporation to develop ethical guidelines for AI. These guidelines aim to ensure that AI is developed and used responsibly, and they are considered a crucial step in creating transparency, trustworthiness and security for Thailand's AI systems. The guidelines are divided into six categories (Techsauce, 2019).

  • Competitiveness and sustainable development

  • Compliance with laws, ethics, and international standards

  • Transparency and accountability

  • Security and privacy

  • Equality, diversity, inclusivity, and fairness

  • Trustworthiness, consideration of accuracy, completeness of information, and openness to feedback

Not long ago, the National Science and Technology Development Agency developed ethical guidelines for artificial intelligence, considering existing regulations both in Thailand and internationally. These include the "Ethics Guidelines for Trustworthy AI" by the High-Level Expert Group on Artificial Intelligence of the European Commission and the "Thailand AI Ethics Guideline" by the Office of the National Digital Economy and Society Commission, Ministry of Digital Economy and Society. These guidelines are intended for use as a practice framework in the research, design, development, application and dissemination of AI and data science technologies driven by AI or algorithms. They are also based on seven international principles, which are:

  • Privacy: Ensuring respect for individuals' privacy.

  • Security and Safety: Prioritizing the security and safety of AI systems.

  • Reliability: Ensuring the reliability of AI systems.

  • Fairness and Non-discrimination: Promoting fairness and preventing discrimination.

  • Transparency and Explainability: Ensuring transparency and explainability of AI systems.

  • Accountability: Holding individuals and organizations accountable for AI systems.

  • Human Oversight and Human Agency: Upholding human control over AI for the sustainable well-being of humanity.

These guidelines aim to set ethical standards for AI development and usage, aligning with global principles (National Science and Technology Development Agency, 2023). Under these guidelines, efforts have been made to complement existing laws and regulations, particularly in defining AI clearly and comprehensively, encompassing all types of AI. The potential risks arising from various forms of AI are addressed, defining those involved in AI, including researchers, developers, designers, and users. The defined activities related to AI cover the entire spectrum, from the initial development stages to widespread use.

In terms of ethical considerations, it is specified that AI should be designed and utilized to promote fairness, equality, diversity, inclusivity and justice for all groups in society without bias or discrimination. This includes providing equal opportunities for all members of society such as disadvantaged groups, people with disabilities and the differently-abled to benefit from AI on an equal and comprehensive basis. The guidelines emphasize avoiding social discrimination based on race or skin color.

Enforcement of various standards is outlined, depending on the stage of the AI development process. The standards related to ethical considerations under these guidelines include:

  • There must be methods or mechanisms to avoid creating inequality and bias in the AI system, taking into account the data input into the system and the design of algorithms. This includes using diverse training and testing datasets of high-quality and trustworthy information that can represent the population the system is intended to serve. This helps detect biases in the results and allows testing with new, unseen data.

  • Diverse data should be selected for training algorithms, encompassing diversity in user backgrounds, participants in testing, and researchers in the AI development team. This aids in reducing the risk of unfairness in the system. Furthermore, all user groups should participate in testing the AI system.

  • AI should be designed and developed to be accessible, with a universal design that accommodates users of all groups. There should be various options for users to achieve their goals.

  • Documentation should be prepared to illustrate the design, workflow and guidelines for using the AI system or knowledge in different situations. This accommodates users with varying levels of knowledge and understanding.

  • Consideration should be given to the involvement of relevant stakeholders in the development and use of the AI system.
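
The first two criteria above amount to measuring whether an AI system's outcomes differ across groups. As an illustration only, not part of the guidelines, a minimal Python sketch of one common check, the disparate impact ratio between group selection rates, might look like this (the four-fifths threshold and the toy audit data are assumptions for the example):

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the rate of favourable outcomes per group.

    `decisions` is a list of (group, outcome) pairs, where outcome is
    1 for a favourable decision (e.g. loan approved) and 0 otherwise.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.

    A common rule of thumb (the "four-fifths rule") treats a ratio
    below 0.8 as a signal of potential bias worth investigating.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: (gender, decision) pairs.
audit_log = [("F", 1), ("F", 0), ("F", 0), ("F", 1),
             ("M", 1), ("M", 1), ("M", 1), ("M", 0)]
rates = selection_rates(audit_log)
ratio = disparate_impact_ratio(rates)
print(rates)        # e.g. {'F': 0.5, 'M': 0.75}
print(ratio < 0.8)  # True here: the gap is flagged for review
```

In practice such a check would run on the diverse testing datasets the guidelines call for, and a flagged ratio would trigger the human review and stakeholder reporting described above.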

However, the aforementioned criteria are not entirely prescriptive: some are considered best practices, others are recommended and some are obligatory. Moreover, these guidelines are applicable to the various stakeholders involved in artificial intelligence in their different capacities. For AI developers, there are specific control guidelines, including methods or mechanisms to avoid creating inequality and bias in AI systems, using diverse data for training algorithms, designing AI to be accessible and accommodating to all user groups, and providing various options for users to achieve their goals. Meanwhile, users of AI systems, particularly those in the widespread implementation phase, face more extensive criteria, including documentation illustrating system design, workflow and guidelines for system deployment, as well as the involvement of relevant stakeholders (see Figure 4).

Figure 4 Fairness and non-discrimination practices

Source: National Science and Technology Development Agency (2023)

In addition to the aforementioned regulatory oversight, government agencies in Thailand have developed policies and regulations aimed at supporting the ethical development of artificial intelligence (AI), including principles of morality and fairness. While some of these policies and regulations may not explicitly address gender equality, they can contribute to ongoing development in this area. For example, the Thailand Artificial Intelligence Guidelines 1.0 (TAIG 1.0), developed by the Legal Research and Development Center, Faculty of Law, Chulalongkorn University, with support from leading private companies in the country, focuses on presenting principles and practices related to AI that align with government policies. The "Government AI Framework" is another regulation that emphasizes guidelines for the public sector's use of AI based on ethical principles. Additionally, the "National AI Development Action Plan for Thailand," jointly developed by the Ministry of Higher Education, Science, Research and Innovation and the Ministry of Digital Economy and Society, serves as a guide for overseeing AI to ensure it benefits society as a whole.

Accountability issue

While there is no specific legislation in Thailand directly addressing liability for the actions of artificial intelligence, there are studies proposing regulatory guidelines based on existing law. Consider, for instance, an autonomous vehicle on a public road whose AI control system recognizes that it cannot avoid an accident; if the accident occurs and there are victims, the question arises of who is legally responsible. Thailand's Civil and Commercial Code defines personal and animal liability, and attributing liability for AI can be likened to owning a pet: if an animal causes harm, the owner or custodian is liable, unless it can be proven that the harm resulted from an irresistible force, from the nature or behavior of the animal, or from other circumstances. In the industrial revolution era, machinery was developed to replace human labor, but machinery lacks intelligence. If AI is considered a vehicle driven by machinery, the Civil and Commercial Code prescribes strict liability: if damage occurs due to machinery, the owner or controller is liable, unless it can be proven that the damage resulted from an irresistible force or from the fault of the injured party. The injured party therefore does not bear the burden of proof; it is up to the owner or controller to prove otherwise in order to escape liability. If AI is treated as an object that is dangerous by nature or by design, similar liability rules would apply, akin to those for damage caused by machinery (Jantaphon Sriphon, 2018).

Apart from the Civil and Commercial Code, other laws can be applied to control liability for artificial intelligence. In cases where AI is mass-produced, the Liability for Unsafe Products Act of 2008 can apply. This law imposes strict liability on all operators, making them jointly liable without the need to prove intent or negligence. Moreover, it widens the scope of operators to include manufacturers regardless of their location, allowing the injured party to sue foreign manufacturers; the amended Civil Procedure Code of 2015 further allows for the service of legal documents through postal mail, which is more convenient and faster than the traditional diplomatic channels. The Liability for Unsafe Products Act also introduces the concept of punitive damages, which is absent from the Civil and Commercial Code. Punitive damages can be claimed where the operator knowingly manufactures, imports or sells unsafe products, acts with gross negligence, or, upon becoming aware of a product's danger, remains passive without taking appropriate action to prevent harm. Additionally, compensation for mental distress is introduced in cases where unsafe products lead to death, injury or damage to the mental well-being of the injured party, or of the spouse, descendants or heirs of the person affected (Jantaphon Sriphon, 2018).

Poomin Butrain (2018) found that if artificial intelligence causes harm, legal action can be pursued under Section 437 of the Civil and Commercial Code, which states that "... anyone who owns or controls a vehicle or property that causes harm is liable for the resulting damage..." In the case of smart cars, for instance, the law places the burden of proof on the controller and owner, differing from general litigation where the plaintiff must initiate the evidence-gathering process against the disputed party. However, under Section 437, practical challenges may arise because legal responsibility falls onto the controller or owner. The owner may not be the one causing harm but could instead be the direct victim, as in accidents involving self-driving cars: the root cause of the damage may be attributable to the developers of the smart car, yet the existing law shifts the burden of liability to the vehicle user. In addition to the Civil and Commercial Code, Poomin Butrain also discussed the Liability for Unsafe Products Act of 2008, which holds all operators liable for damage caused by unsafe products, whether they are manufacturers, employers of manufacturers, importers, or sellers of products whose manufacturer or employer cannot be identified. Therefore, if harm is caused by an unsafe product, one can sue the product manufacturer directly. However, if the AI in question falls outside the scope of this law, the injured party will not be protected by this legislation.

Furthermore, looking at the dimension of criminal liability through the ideas of John Stuart Mill, it can be argued that if artificial intelligence poses a threat to humans, criminal liability should be established to control the behavior and actions of AI. However, when considering the required external (actions) and internal (intent) elements of an offense, along with the various forms of criminal liability, it is found that, from a legal perspective, machines cannot become humans. Nevertheless, the law also addresses accomplices: in the case of AI, the accomplice may be the programmer who designs software for AI to commit wrongdoing, and a user who employs AI for personal benefit may likewise be considered an accomplice. In both scenarios, however, programmers and users do not themselves manifest actions that align with the definition of wrongdoing; they do not satisfy the external elements of an offense unless they have a clear intention and the resulting harm is foreseeable. For example, if the programmer of an automated pilot program maliciously programs the computer system without exceptions to protect human life, resulting in the death of a human pilot, the programmer is held liable for the intentional act of using AI to kill a human pilot. Furthermore, if AI develops in the future to be as intelligent as humans, it would itself satisfy the external and internal elements that lead to criminal liability, and at that point the criminal liability of AI could be defined under existing laws (Sukmataya, 2019).

In addition to establishing liability under the traditional laws of Thailand, as mentioned earlier, this issue is also addressed in regulations developed by agencies directly involved in the development of artificial intelligence. In 2022, the National Science and Technology Development Agency issued a proclamation disseminating ethical practices in AI that defines liability clearly. It states that research, design or development of AI must include mechanisms that ensure confidence in responsibility for the impact of AI by those involved in research, design, development and implementation. These mechanisms should be verifiable, with clear identification of those responsible, and there should be mechanisms to address and take responsibility for potential harm according to each party's duties. Additionally, awareness of the crucial role of everyone involved in designing, developing and implementing AI is emphasized. Those who stand to gain or lose in these processes should consult with each other regarding the appropriate functioning of AI, including planning for risk management or for long-term impacts that may occur. The criteria under the practice guidelines for liability are as follows:

  • Defining the roles and responsibilities of all participants involved in the AI project, each within their own scope of duties, to build AI users' confidence that responsibility will be taken for potential impacts arising from the AI.

  • Ensuring the auditability of the AI system by allowing it to be examined, utilizing results or data from the system's traceability mechanisms, along with other data collection that the system cannot perform itself, to support the verification process, such as documentation, recording of operational data, etc.

  • Implementing a risk assessment process for the AI system that considers both direct and indirect stakeholders, providing opportunities for these stakeholders to report defects, risks or biases that may occur in the AI system.

  • When users face danger or experience harm from the AI system, the owner of the AI system should have mechanisms in place to take responsibility for the impact. In all cases, the supervising organization should emphasize and oversee appropriate accountability measures.
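
The auditability criterion above calls for traceability mechanisms whose records can support later verification. As a hypothetical illustration only (the class, model identifier and scoring rule below are invented for the example, not drawn from the guidelines), a Python sketch of a decision function wrapped so that every call leaves an audit record might look like this:

```python
import datetime
import json

class AuditedModel:
    """Wrap a decision function so every call leaves an audit record."""

    def __init__(self, decision_fn, model_id):
        self.decision_fn = decision_fn
        self.model_id = model_id
        self.audit_trail = []  # in practice: an append-only external store

    def decide(self, applicant):
        outcome = self.decision_fn(applicant)
        self.audit_trail.append({
            "model_id": self.model_id,
            "timestamp": datetime.datetime.now(
                datetime.timezone.utc).isoformat(),
            "input": applicant,
            "outcome": outcome,
        })
        return outcome

    def export_log(self):
        """Serialize the trail for an external auditor."""
        return json.dumps(self.audit_trail, indent=2)

# Hypothetical scoring rule standing in for a real model.
def toy_rule(applicant):
    return 1 if applicant["income"] >= 30000 else 0

model = AuditedModel(toy_rule, model_id="credit-v1")
model.decide({"income": 45000})
model.decide({"income": 20000})
print(len(model.audit_trail))  # 2 recorded decisions
```

Records of this kind are what would let an overseeing organization identify who is responsible and reconstruct how a contested decision was reached.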

The criteria mentioned above are not uniformly mandatory: some are good practices that are not enforceable, others are recommendations to be followed, and some are requirements to be adhered to. These criteria are applicable to the various stakeholders involved in AI, and all of them apply during the testing phase and the phase in which AI is deployed widely. During the development phase, however, only the criteria related to defining the roles and responsibilities of project participants and to ensuring the auditability of the AI system are enforced. The level of enforcement for each criterion is summarized in Figure 5.

Figure 5 Accountability practice

Source: National Science and Technology Development Agency (2023)

Future development trend of regulations to control artificial intelligence to support gender equality

From the results of the above study, it can be seen that although various organizations, both directly and indirectly related to artificial intelligence, continuously focus on establishing guidelines that support gender equality, these existing guidelines cannot be applied directly to control AI and prevent gender bias without loopholes. For example, although the Gender Equality Act of 2015 emphasizes promoting gender equality, covers actions that create gender injustice, and establishes a committee to adjudicate cases of gender-based discriminatory practices, it is not explicitly designed to handle cases of AI bias. Therefore, if a user experiences negative impacts from an AI decision-making process, they may not be able to file a complaint directly against the AI; instead, they may need to file a complaint against the developer or owner of the AI, and the law has not yet established clear procedures for examining and proving such cases. While the ethical guidelines for trustworthy AI focus on directly controlling AI by emphasizing fairness and non-discrimination and by specifying clear responsibilities, their practical application still has limitations. Control of AI development with fair decision-making processes can be enforced only for AI developed by projects or researchers under the supervision of government agencies; it may not cover AI developed internationally or by private sector entities unrelated to government support. Additionally, some of the established criteria are not strictly enforced but are left to voluntary compliance. Furthermore, even though the burden of responsibility is discussed, there is still no legal mechanism for compensation, relief or punishment in the event of unfairness.

Owing to the efforts of Thai government agencies in designing and implementing artificial intelligence oversight methods over the past decade, the enforcement of governance standards has become increasingly fair. Recently, the Electronic Transactions Development Agency, an organization under the supervision of the Ministry of Digital Economy and Society, introduced the draft Act on the Promotion and Support of Artificial Intelligence Innovation in Thailand to complement the oversight of AI based on existing criteria. Principles of fairness, equality and non-discrimination are incorporated into this law, and guidelines for the development of tools to support oversight are outlined. These include the development of AI innovation testing centers, support for data sharing, the establishment of standards and certification for AI, the establishment of a risk assessment system for AI system usage, and the formulation of standards for agreements between service providers and users of AI products or services, or of those involving AI components (Office of the Council of State, 2023).

Upon analyzing the draft Act on the Promotion and Support of Artificial Intelligence Innovation in Thailand, it is found that within this law, there are several provisions aimed at enhancing the efficiency of AI control to promote gender equality. These provisions include:

  1. Clear definitions of "artificial intelligence," "AI innovation," and "AI operators" are provided to enable the law's direct application to the target group.

  2. The establishment of AI innovation testing centers tasked with testing technologies to clarify the processes of AI operations.

  3. Expert technology agencies, specifically the Electronic Transactions Development Agency are designated to directly oversee and enforce the law, covering standards setting, issuing announcements, preparing documents to guide various operations and evaluating law enforcement results.

  4. The promotion of data sharing for the benefit of AI innovation development.

  5. Certification of standards for algorithms used in AI concerning possibilities for human intervention (Human agency and oversight), technical robustness and safety, privacy and data governance, transparency, societal and environmental well-being and accountability. Criteria, methods and conditions are defined, taking into account the differences in business size, nature and type.

  6. Standards are set for agreements between AI service providers and users regarding roles, responsibilities towards users, service standard guarantees, service fees and ensuring no discriminatory provisions against users.

  7. A system for assessing risks arising from the use of AI systems, considering their influence on changing behavior, potential harm or impacts on social and economic stability. A risk assessment and management list is required as a guideline for action. In cases where there is a credible risk of using AI systems, the agency must promptly assess and inform relevant units for immediate action.
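
The risk assessment and management list required under point 7 could, in practice, take the form of a simple risk register. The sketch below is a hypothetical Python illustration, not a format prescribed by the draft Act; the likelihood and impact scales, the escalation threshold, and the example entries are all assumptions made for the example:

```python
from dataclasses import dataclass, field

@dataclass
class RiskItem:
    """One entry in an AI risk assessment and management list."""
    description: str
    likelihood: int   # assumed scale: 1 (rare) .. 5 (almost certain)
    impact: int       # assumed scale: 1 (negligible) .. 5 (severe)
    mitigation: str = ""

    @property
    def score(self):
        return self.likelihood * self.impact

@dataclass
class RiskRegister:
    items: list = field(default_factory=list)
    escalation_threshold: int = 15  # assumed policy threshold

    def add(self, item):
        self.items.append(item)

    def to_escalate(self):
        """Risks that should be reported promptly to the relevant units."""
        return [i for i in self.items if i.score >= self.escalation_threshold]

register = RiskRegister()
register.add(RiskItem("Biased credit scoring against women", 4, 5,
                      "Retrain on balanced data; fairness audit"))
register.add(RiskItem("Chatbot exposes personal data", 2, 3,
                      "Add output filtering"))
urgent = register.to_escalate()
print([r.description for r in urgent])  # only the high-scoring risk
```

A register of this shape would give the "promptly assess and inform relevant units" duty in the draft Act something concrete to operate on: each credible risk carries a score, a mitigation, and a clear escalation rule.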

The highlighted seven points within the aforementioned draft Act reflect the government's efforts to enhance the efficiency of public sector organizations in controlling artificial intelligence. However, a deeper analysis reveals some weaknesses embedded in the legislation that may limit the effectiveness of AI control more than desired. The key points are summarized as follows:

  1. While the law empowers relevant agencies in the field of AI technology to oversee the development process and directly regulate target groups, including AI entities and service providers in all forms, its scope may be too broad to control comprehensively. The provision defining "AI operators" as individuals or legal entities engaged in the sale of goods or services related to AI, regardless of whether they profit or are registered for value-added tax, may encompass individuals unaware that their activities fall under this law. At the same time, such lack of clarity may prevent effective supervision from extending to all parties involved with an artificial intelligence system.

  2. The separation of AI innovation testing centers as a distinct category, primarily for technical guidance, regulations or other assistance to AI operators, may create a perceived segregation from the establishment and evaluation of standards. This separation might lead to a lack of integration in practical implementation.

  3. While there is a clear designation of authoritative oversight agencies with direct expertise, the role of these agencies seems to focus primarily on documentation and on providing recommendations for implementation. With respect to issuing announcements that regulate standards in AI services, it remains unclear how stringent the oversight will be. If control is limited to voluntary compliance, as with the trustworthy AI ethics guidelines of the National Science and Technology Development Agency, the enactment of this law may not yield significant benefits. Furthermore, the responsibilities of these agencies appear to conclude with the preparation of reports assessing the enforcement of this law for future amendment, improvement or cancellation; such a summary of the law's use is no different from a research study process.

  4. The issue of data sharing for the benefit of AI innovation development should not be categorized separately in this Act, because data sharing depends largely on individual operators and has little relevance to the seven international principles. If data sharing provisions are included in this law, they should instead be aimed at supporting oversight and enhancing transparency, by providing information to those involved with artificial intelligence, both directly and indirectly, to enable them to use that information in preventing and managing risks.

  5. The draft Act on the Promotion and Support of AI Innovation in Thailand specifies the need to establish standards that promote diversity, non-discrimination and fairness, but no further details are provided. The draft law merely mentions these international principles without delving into specifics, leaving the responsibility for designing further details to the overseeing agencies or to entrepreneurs. In other words, this Act only outlines the international principles without offering details, no different from the approach to regulations and recommendations followed over the past period.

  6. Under the risk assessment framework for the use of artificial intelligence systems, there is still no clear definition of "risk from the use of artificial intelligence." Such risk assessments should be well defined early on so that businesses can prepare and adapt. Importantly, the law does not grant regulatory authorities ultimate power, especially over issues discovered during AI operation. The law merely states that "the office must promptly assess and inform relevant agencies to take immediate action," leaving a potential implementation gap. This returns government operations to the familiar pattern of "delegating responsibility to others," which may allow risk to spread and become difficult to control.

Suggestions for improving future legislation in a direction that emphasizes controlling artificial intelligence to support gender equality

While Thai government agencies are currently advocating new laws to enhance the oversight of artificial intelligence, the laws under preparation still exhibit shortcomings that may hinder effective regulation and fail to promote gender equality. A significant gap lies in legislating various activities with objectives that diverge from oversight, emphasizing technology development more than regulatory concerns. There are also deficiencies in clearly defining details, particularly in addressing gender equality. Another issue is that enforcement of the law still lacks specificity, particularly in how to manage risk, impose penalties and provide compensation. Consequently, the researchers have formulated recommendations for improving the law, categorized according to the key issues present in the existing draft Act, as follows:

The Definition of "AI Operators"

Recommendation: The legislation concerning AI operators should specify the connections between activities in various economic sectors and the explicit use of AI. This will provide clarity on who is involved, encompassing economic sectors, activities within each sector and possibly related professions. In such cases, in-depth research studies are necessary to illustrate these connections and incorporate the research findings into the definition. This additional information will support law enforcement in other areas.

Establishment of "AI Innovation Testing Centers" for Testing AI Technologies

Recommendation: If the legislation aims to emphasize the control of AI to promote behavior that supports gender equality and the other international principles, the establishment of AI innovation testing centers should be geared towards providing clarity about the details of AI, to facilitate audit processes and risk prevention in subsequent stages. The testing procedures should be integrated into Category 3, which specifies AI standards, with details legislated under Category 3 to ensure consistency in standards setting, review, testing and certification. However, if innovation testing still primarily supports AI research and development, it should not be singled out as a specific provision in the legislation; instead, it should be enacted as secondary legislation supporting and promoting the main provisions of the Act.

Empowering Electronic Transactions Development Agency for direct regulatory oversight, the establishment of standards, issuing announcements, creating documents to propose various actions, and evaluating the enforcement of laws

Recommendation: The legislation should strengthen the role of oversight agencies by explicitly assigning responsibilities in the evaluation and inspection process, penalty imposition, and compensation restitution. These powers should be clearly outlined in the draft legislation, and secondary legislation should be promptly enacted to detail the procedural aspects throughout the entire process, including powers for on-site inspections, reporting obligations, and the authority to impose penalties. The legislative provisions should be practical and enforceable, not reliant on voluntary action.

Establishing Data Sharing for Innovation Development Purposes in AI

Recommendation: In this legislation, data sharing should be aimed at supporting AI oversight to promote transparency, providing information to stakeholders involved with AI and affected directly or indirectly by AI decision-making processes, rather than solely at supporting future technology development. The content related to data sharing in this draft should therefore be amended to emphasize sharing for the purpose of educating those involved with AI, so as to minimize the direct and indirect impacts of AI. Additionally, the stakeholders involved in data sharing should encompass oversight agencies, business partners and those involved in the use of AI throughout the value chain. The legislation may define the scope of data sharing and specify the benefits expected from it, ensuring that data sharing serves a purpose and does not result in unnecessary data leaks. Further study may be required to assess the value of data sharing in each format, contributing to the detailed regulations in subsequent secondary legislation.

Certification of Standards for AI Systems' Operations in Various Dimensions, including Diversity, Non-discrimination, and Fairness

Recommendation: Under this Act, it is crucial to define explicitly the seven standards outlined in Article 23, particularly the promotion of diversity, non-discrimination, and fairness, and to detail how these principles will be implemented. Furthermore, to adapt international best practices to the Thai context, public opinion from Thai citizens nationwide should be considered when refining these standards. It is also essential to promptly establish the behaviors associated with these principles across all economic sectors, to prevent gaps in enforcement and to avoid future injustices resulting from uneven application of the law.
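As a purely illustrative sketch, the seven Article 23 standards could be tracked as an explicit checklist, so that a certification can state exactly which principles a system has been assessed against and which remain open. All identifiers below are hypothetical and are not terms from the draft Act.

```python
# Illustrative checklist for the seven international principles cited in
# the draft Act. The identifier names are hypothetical paraphrases.

SEVEN_PRINCIPLES = (
    "human_oversight",
    "technical_security_and_resilience",
    "privacy_and_data_governance",
    "transparency",
    "diversity_non_discrimination_fairness",
    "societal_and_environmental_wellbeing",
    "accountability",
)

def certification_mark(assessment: dict) -> dict:
    """Summarize which principles are endorsed and which remain open.

    `assessment` maps each principle to True (passed) or False/absent.
    """
    endorsed = [p for p in SEVEN_PRINCIPLES if assessment.get(p, False)]
    outstanding = [p for p in SEVEN_PRINCIPLES if not assessment.get(p, False)]
    return {
        "endorsed_standards": endorsed,
        "not_yet_aligned": outstanding,
        "fully_certified": not outstanding,
    }

# A system passing everything except the gender-equality-related principle
# would be visibly flagged rather than broadly "certified".
mark = certification_mark({p: True for p in SEVEN_PRINCIPLES
                           if p != "diversity_non_discrimination_fairness"})
```

A checklist of this shape makes the later recommendation on certification marks concrete: the mark can list endorsed standards individually instead of implying blanket compliance.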

Establishment of a Risk Assessment System for AI System Usage and the Development of an Audit Checklist for Assessment and Risk Management as Operational Guidelines

Recommendations:

  • The current draft of the legislation lacks a clear definition of "risk from AI system usage." It is essential to expedite the study of comprehensive AI-related risks across all economic sectors and to promptly enact secondary legislation specifying preventive measures and risk resolution methods. This will allow public agencies and other relevant entities to use the information for supervision, control, and mitigation of adverse effects.

  • The current draft does not grant supervisory agencies sufficient authority to enforce the law to its fullest extent. In particular, where risks arising from AI operations are discovered, the law states only that the "Office shall promptly assess the impact and notify relevant agencies to take immediate action." This loophole could mean a return to previously ineffective bureaucratic procedures, allowing risks to escalate uncontrollably. It is therefore advisable to strengthen the agencies' role in proactive, stringent risk management, allowing them to issue immediate cessation orders and potentially impose penalties where the identified risks lead to significant negative consequences. If, however, the current draft of the legislation on promoting and supporting AI innovation in Thailand does not directly empower supervisory agencies to impose penalties, an article may be added authorizing those agencies to act as representatives of the affected parties and to file lawsuits under civil or criminal law, ensuring that enforcement measures are carried out effectively.
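To make the proposed escalation concrete, the supervisory workflow recommended above could be sketched as a mapping from assessed risk level to regulatory action. The risk levels, action names, and function below are assumptions for illustration only, not terms from the draft Act.

```python
# Illustrative sketch of a supervisory escalation rule: map an assessed
# risk level for a deployed AI system to a regulatory action. The levels
# and actions are hypothetical, chosen only to mirror the recommendation
# that agencies be able to go beyond "assess and notify".

def supervisory_action(risk_level: str) -> str:
    """Return the regulatory action for an assessed risk level."""
    actions = {
        "low": "monitor_and_report",           # routine oversight
        "medium": "notify_relevant_agencies",  # the current draft's only step
        "high": "immediate_cessation_order",   # proposed stronger power
        "severe": "cessation_and_penalty",     # proposed penalty power
    }
    try:
        return actions[risk_level]
    except KeyError:
        raise ValueError(f"unknown risk level: {risk_level}")

action = supervisory_action("severe")
```

The point of the sketch is that each risk tier has a predefined, enforceable response, rather than every discovery routing through the same "notify and wait" step.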

In addition to amending the content of the draft legislation as described above, analysis of the gaps in existing regulations and of future regulatory trends reveals the need to design secondary legislation under the draft law on promoting and supporting AI innovation in Thailand at the same time. This gives all parties access to in-depth details and allows them to prepare promptly for urgent adjustments. The secondary legislation, crucial for driving the development of AI that supports gender equality, should include at least the following laws:

  1. Announcement …. on the classification of artificial intelligence business activities.

  2. Announcement …. on the criteria and methods for defining the standards for conducting artificial intelligence business.

  3. Announcement …. on the criteria and procedures for assessing risks associated with the use of artificial intelligence systems.

  4. Announcement …. on the criteria and methods for testing artificial intelligence. (Instead of Announcement regarding the establishment of an artificial intelligence testing center)

  5. Announcement …. on the criteria and methods for mitigating damages arising from artificial intelligence.

Under each of the aforementioned categories of subordinate law, the details that should be enacted are summarized in the table below.

Table 1. Details that should be enacted in subordinate laws under the draft Act on the Promotion and Support of AI Innovation in Thailand

1) Announcement …. on the classification of artificial intelligence business activities

  1. In-depth studies must be conducted on the correlation between each economic/occupational sector and artificial intelligence technology, to understand the varying degrees of connection.

  2. The connections between the different types of businesses that use artificial intelligence and their potential societal impacts must be studied, and the findings integrated into the risk assessment system. The research should analyze the opinions of relevant individuals in the various fields.

  3. On gender equality, data from women working in the various economic sectors must be used to supplement the studies above.

2) Announcement …. on the criteria and methods for defining the standards for conducting artificial intelligence business

  1. Technical standards for artificial intelligence must be established based on the following seven international principles:

  • Feasibility of human intervention in operations

  • Technical security and resilience against attacks

  • Preservation of privacy and data governance

  • Transparency of the system

  • Promotion of diversity, non-discrimination, and fairness

  • Improvement of societal well-being and the environment

  • Responsibility for potential risks or damages (it is not advisable to let AI service providers determine these guidelines on their own, as this may lead to overly simplistic guidelines and metrics, underestimated risks, and ineffective risk management systems in the future)

  2. The seven standards above may incorporate ethical guidelines for artificial intelligence practice; at the same time, feedback from relevant stakeholders in each field should be collected to refine the standards to fit the current societal context.

  3. Standards related to gender equality must give women working in the various AI-related economic sectors the opportunity to contribute information that strengthens the standards on this issue.

  4. The specified standards may be set at hierarchical levels, from lenient to stringent, based on the degree of connection to artificial intelligence and the potential negative impacts on society.

  5. Where different standards are set for each business, the certification mark should indicate which standards have been endorsed, broken down by the standards the business complies with, making clear which international principles a given business does not yet align with.

3) Announcement …. on the criteria and procedures for assessing risks associated with the use of artificial intelligence systems

  1. In-depth studies are needed on the risks, and the levels of impact, arising from the use of artificial intelligence in each economic sector, covering impacts on the public, society, and the economy. These studies should be developed into foundational models that all parties can adapt and learn from to prevent and manage risks in the future.

  2. The responsibilities of regulatory/certification bodies must be clearly defined, including auditing the artificial intelligence standards for which businesses seek certification, and the authority to follow up on widely used, risk-prone artificial intelligence in order to control and prevent future harm. (Risk management processes should not be confined to the internal management of the AI service provider's organization.)

  3. Risk management procedures must be clearly defined, linking the impacts of artificial intelligence in each case to risk management measures.

  4. Under this announcement, supervisory bodies must be assigned responsibility for continuously monitoring and controlling risks from artificial intelligence and for updating information and models in response to dynamic change.

  5. Stakeholders expected to be affected by the use of artificial intelligence should be involved in developing the risk assessment system from the outset, not only in the review process after issues have been identified.

  6. Regulatory bodies should establish a standardized format for the risk reports that AI service providers prepare. Reports must specify the assessment methods and results, categorized according to the relevant international standards, and should clearly reflect the providers' ability to align their operations with those principles.

4) Announcement …. on the criteria and methods for testing artificial intelligence

  1. The testing center's operations should aim to test artificial intelligence against all seven international principles, and the standard testing process should be linked to the risk assessment and management system.

  2. Testing should be broken down into tests for compliance with each standard, with the relevant stakeholders participating in the test of each standard.

  3. Where testing concerns standards related to gender equality, women working in the various AI-related economic sectors should be involved in the testing process, so that the results best reflect the societal context.

  4. Participants in testing and evaluation should include individuals expected to be affected by the commercial use of artificial intelligence (the deployment stage). Where that use leads to outcomes inconsistent with the seven fundamental principles, members of the affected group should be involved in suggesting improvements that bring the AI system into line with those principles (not merely invited as participants expected to be impacted during the testing process).

  5. The legislation should explicitly state that if risks arise from the use of artificial intelligence during testing, the service provider must promptly improve and rectify the AI system. Failure to act within the specified time frame should result in a prohibition on deploying the tested AI for commercial purposes (not merely the termination of testing).

5) Announcement …. on the criteria and methods for mitigating damages arising from artificial intelligence

  1. Clear risk management procedures must be established, linking the impacts of artificial intelligence in each case to measures that may take the form of penalties, compensation, or damage mitigation. Where entities associated with artificial intelligence fail to comply with the standards, or act negligently to the extent of causing harm, the legal framework should prescribe methods of penalty or compensation in a separate law (not solely the withdrawal of certification).

  2. A committee should be responsible for investigating violations and imposing penalties.

  3. Where the impacts relate to gender inequality, the committee under this law should invite the groups of women affected to participate in the deliberation process.
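The standardized risk-report format proposed in Table 1 could, purely as an illustration, take a shape like the record below. All field names and the "pass"/"fail" vocabulary are hypothetical; the actual Announcement would define the real format.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical risk-report record for AI service providers. Field names
# are illustrative only; results are keyed by international principle.

@dataclass
class RiskReport:
    provider: str
    ai_system: str
    assessment_method: str
    results_by_standard: Dict[str, str] = field(default_factory=dict)

    def unresolved_standards(self) -> List[str]:
        """Principles whose assessment did not pass."""
        return [s for s, r in self.results_by_standard.items() if r != "pass"]

report = RiskReport(
    provider="ExampleCo",                 # hypothetical provider
    ai_system="resume-screening-model",   # hypothetical system
    assessment_method="independent third-party audit",
    results_by_standard={
        "transparency": "pass",
        "diversity_non_discrimination_fairness": "fail",
    },
)
```

Structuring reports this way gives supervisory bodies a machine-checkable record of which principles, including those tied to gender equality, remain unresolved for each deployed system.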

Summary

Artificial intelligence poses a formidable threat to human society, especially where biased practices are concerned. Despite regulatory efforts, controlling AI remains challenging because of the limitations of the various regulations governing its oversight. A recent development, the draft Act on the promotion and support of AI innovation in Thailand, incorporates principles of fairness, non-discrimination, and non-segregation into the legal framework; however, it contains notable loopholes that may hinder the effectiveness of AI oversight in achieving gender equality. A key vulnerability is the potential misalignment of the activities governed by the legislation, which focuses more on technology development than on the core issue of oversight. The details are also unclear, particularly regarding gender equality. A further issue is the need for robust enforcement of the law throughout the value chain of AI development and usage, particularly in managing risks and establishing clear guidelines for penalties and compensation. The draft legislation should therefore be amended, particularly to specify operational details in accordance with international standards and to create clarity in the risk management system. In addition, subordinate laws should be designed simultaneously, so that those associated with artificial intelligence can use the information to prepare and adapt.

