Companies and organizations that utilize artificial intelligence will be required to take measures to reduce the risk of overreliance on the technology, draft guidelines by a Japanese government panel showed Saturday.
The draft guidelines obtained by Kyodo News also call on AI developers to be careful not to use biased data for machine learning, while urging them to maintain records of their interactions with the technology, to be provided in the event of any issues.
The panel, which is tasked with discussing the country's AI strategy, is expected to finalize the guidelines by the end of the year. Japan, this year's chair of the Group of Seven industrialized nations, is also working with other members on drawing up international guidelines for AI developers.
The draft outlines 10 basic rules for AI-related businesses, such as ensuring fairness and transparency, protecting human rights, and preventing personal information from being passed to third parties without an individual's consent.
The rules also ask that information be provided about how data is acquired from an individual or entity and how it is then used by related parties.
Companies that develop AI platforms, providers of services that utilize the technology, and users will all be required to share some degree of responsibility.
The guidelines set out principles by business category. Developers are asked to ensure that data used for AI purposes is accurate and up to date, and, where possible, to adopt measures preventing access to information that has not been approved for use.
Providers of AI-based services, meanwhile, will be asked to warn users against inputting personal information they do not want accessed by third parties, and to ensure their services are limited to their intended use so that bad actors cannot employ the technology for malign purposes.