1. Responsible AI - "5. Application of AI Technology"



What is responsible AI?

The coexistence of terms such as “responsible AI”, “trustworthy AI” (the term used by the EU Commission) and “ethical AI” creates a definitional challenge.


Despite the different labels, these terms are generally understood to refer to AI systems and applications that are designed and operated in ways that address the problems and concerns associated with their use.


The definition of “responsible AI” typically includes five characteristics concerning how AI models are designed and deployed:


1. Fairness: The processes and practices that ensure AI does not make discriminatory decisions or recommendations (a simple fairness metric is sketched after this list).


2. Robustness: The ability to ensure that AI is not vulnerable to attacks and that its performance is not degraded.


3. Transparency: Sharing the information collected during development that describes how the AI system is designed and built, as well as the tests conducted to check its performance and other characteristics.


4. Explainability: The ability of AI systems to explain to users and other interested parties what caused the model to produce a given output. This is critical to building trust in AI among users, auditors, and regulators (a model-agnostic example also follows the list).


5. Privacy: Ensuring that AI is developed and used in a way that protects users’ personal information.
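
To make the fairness characteristic more concrete, here is a minimal sketch of one common statistical check, the demographic parity difference, which compares the rate of positive decisions across two groups. The function name, data, and numbers are illustrative assumptions, not part of the original post; a real fairness audit would combine several complementary metrics.

import numpy as np

def demographic_parity_difference(y_pred, group):
    # Absolute difference in positive-decision rates between two groups.
    # y_pred: array of 0/1 model decisions; group: array of 0/1 group labels
    # (e.g., a protected attribute). A value near 0 means both groups receive
    # positive decisions at similar rates, on this one metric.
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Made-up decisions and group labels, purely for illustration.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(decisions, groups))  # 0.75 - 0.25 = 0.5

A check like this is usually run per deployment context, since a metric that looks acceptable on one population can hide disparities on another.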
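
For the explainability characteristic, the sketch below uses permutation importance, a simple model-agnostic technique: shuffle one input feature at a time and measure how much the model’s accuracy drops. The synthetic data and the use of scikit-learn’s LogisticRegression are assumptions made purely for illustration; the post does not prescribe any particular explanation method.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data for illustration: feature 0 drives the label, feature 1 is noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)

model = LogisticRegression().fit(X, y)
baseline = model.score(X, y)

# Shuffle each feature in turn; a large accuracy drop means the model relies
# heavily on that feature, which gives a rough, model-agnostic explanation
# of what drives its outputs.
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    print(f"feature {j}: accuracy drop {baseline - model.score(X_perm, y):.3f}")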
