Today, Artificial Intelligence (AI) has a lasting effect on people's lives. Particularly in critical application areas (working life - human resources, health, etc.), increased requirements in terms of ethics, legal conformity, and technical robustness can be expected in the near future to ensure that AI applications do not lead to negative social effects and are trustworthy overall (Trustworthy AI Framework). Both government regulation and voluntary certification requirements will shape the AI market in the future, increasing the pressure on companies to develop AI applications that do not discriminate, or that discriminate only to an extent that is unavoidable and is disclosed transparently.
One of the biggest unresolved challenges is how to avoid discrimination of user groups by AI already during the development phase, i.e., before AI is deployed in society and markets. Avoiding socially and technologically discriminatory bias requires incorporating ethical considerations and social science methods, which have so far not been part of AI development and are not reflected in its conventional processes and test procedures. As a result, AI developers and AI users (e.g., companies, public institutions) currently lack application-oriented tools that cover the entire AI development process and allow undesired discrimination of certain user groups by AI applications to be avoided, recognized, and prevented at an early stage.
This shortcoming is addressed by the project fAIr by design, which aims to develop a novel generic procedural model and a corresponding method toolbox for the development of fair, non-discriminatory AI, involving different user groups and five use cases. Using open innovation methods, data scientists, AI experts, social scientists, legal experts, and the respective application experts from companies and other organizations will develop modules and risk-reduction strategies for different discrimination risks, which will then be consolidated and further developed into a generic process model and a method toolbox for the prevention of discrimination. Through the use cases, the focus is on AI with a direct impact on people in society and work, e.g., in the areas of education, human resources, performance assessment, media, and health; overall, however, the aim is to create a process and a toolbox that can be applied as broadly as possible across all diversity and discrimination dimensions.