Google Artificial Intelligence (AI) Principles
1. Be socially beneficial
The expanding reach of new technologies increasingly touches society as a whole. Advances in artificial intelligence will have transformative impacts in a wide range of fields, including health, safety, energy, transportation, manufacturing, and entertainment. As we consider the potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the likely overall benefits substantially exceed the foreseeable risks and downsides. AI also enhances our ability to understand the meaning of content at scale. We will strive to make high-quality and accurate information available using AI, while respecting the cultural, social, and legal norms of the countries in which we operate. And we will continue to thoughtfully evaluate when to make our technologies available on a non-commercial basis.
2. Avoid creating or reinforcing unfair bias
AI algorithms and datasets can reflect, reinforce, or reduce unfair biases. Bias varies widely across societies and cultures, and we recognize that distinguishing unfair bias from fair judgment is not always simple. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.
3. Be built and tested for safety
We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm. We will design our AI systems to be appropriately cautious and develop them in accordance with best practices in AI safety research. Where appropriate, we will test AI technologies in constrained environments and monitor their operation after deployment.
4. Be accountable to people
We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control.
5. Incorporate privacy design principles
We will incorporate our privacy principles into the development and use of our AI technologies. We will provide opportunity for notice and consent, encourage architectures with privacy safeguards, and ensure appropriate transparency and control over the use of data.
6. Maintain high standards of scientific excellence
Technological innovation is rooted in the scientific method and a commitment to open inquiry, intellectual rigor, integrity, and collaboration. AI tools have the potential to unlock new realms of scientific research and knowledge in critical domains such as biology, chemistry, medicine, and environmental sciences. We aspire to high standards of scientific excellence as we work to advance AI development. We will work with a range of stakeholders who bring thoughtful, scientifically rigorous, and multidisciplinary approaches to this field. And we will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.
7. Be made available for uses that accord with these principles
Many technologies have multiple uses. We will work to limit potentially harmful or abusive applications. As we develop and deploy AI technologies, we will evaluate likely uses in light of the following factors:
Primary purpose and use: the primary purpose and likely use of a technology and application, including how closely the solution it offers is related to, or adaptable to, a harmful use.
Nature and uniqueness: whether the technology we make available is unique or suited to more general use.
Scale: whether the use of this AI technology will have significant impact.
Nature of Google's involvement: whether we are providing general-purpose tools, integrating tools for users, or developing custom solutions.
Artificial Intelligence Applications We Will Not Implement
Beyond the objectives above, we will not design or deploy AI in the following application areas:
Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
Weapons or other technologies whose principal purpose is to cause or directly facilitate injury to people.
Technologies that gather or use information for surveillance in ways that violate internationally accepted norms.
Technologies whose purpose is to violate widely accepted principles of international law and human rights.
We want to be clear that while we are not developing AI for use in weapons, we will continue to work with governments and the military in many other areas. These include cybersecurity, education, military recruitment, veterans' healthcare, and search and rescue. And we will actively seek more ways to augment the critical work of these organizations and keep their service members and civilians safe.