
Ethical AI / Responsible AI

Objective: Capture the fundamentals of AI ethics and responsible AI from a principles, processes, standards, guidelines, ecosystem, and regulation/risk standpoint.

My Articles/Blog References

Categories

| Category | Description |
|----------|-------------|
| Risk in deployment | <ul><li>Bias (e.g., the dataset does not reflect reality, so facial recognition does not work properly for some groups)</li><li>Fairness: does the historical dataset reflect reality?</li><li>Unethical aspects or unfair usage</li></ul> |
| Regulatory aspects | <ul><li>Region-specific needs (e.g., GDPR)</li></ul> |
| Provide as much clarity as possible | <ul><li>What is happening from Step 1 to Step N</li><li>Features used during the feature engineering process</li><li>Any information on feature importance / top N features (as applicable)</li></ul> |
| How to approach bias in AI | <ul><li>Gather more diverse datasets</li><li>Include labels from a wider range of judges</li><li>Monitor the output of models/experiments/algorithms</li><li>Focus on small categories and edge cases</li><li>Laws and regulation protocols may be required to address bias</li></ul> |
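One practical way to act on the "gather more diverse datasets" and monitoring points above is to reweight under-represented groups during training. A minimal sketch, assuming inverse-frequency reweighting as the mitigation technique (the `inverse_frequency_weights` helper is illustrative, not part of this repo):

```python
from collections import Counter

def inverse_frequency_weights(groups):
    """Weight each sample inversely to its group's frequency so that
    under-represented groups contribute equally during training."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

# Group B is under-represented, so its samples get higher weight.
weights = inverse_frequency_weights(["A", "A", "A", "B"])
print(weights)  # [0.666..., 0.666..., 0.666..., 2.0]
```

The resulting weights can typically be passed to a training routine via a `sample_weight` argument (supported by most scikit-learn estimators).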

| Category | DOs (AI Should) | DON'Ts (AI Should Not) |
|----------|-----------------|------------------------|
| <ul><li>Principles</li><li>Processes/Methods</li><li>Standards/Guidelines</li><li>Regulation</li></ul> | <ul><li>Incorporate privacy design principles</li><li>Incorporate regulation principles</li><li>Be accountable to end users / people using the AI solution for what it generates</li><li>Uphold a high standard of scientific excellence in the AI solution</li></ul> | <ul><li>Create solutions likely to cause overall harm to end users</li><li>Pursue solutions whose principal objective is to direct injury</li><li>Aid surveillance that violates international guidelines</li></ul> |

Principles from the Ethical Institute

The Ethical Institute has recommended the following principles:

  • Human Augmentation
  • Bias Evaluation
  • Explainability by Justification
  • Reproducible Operations
  • Displacement Strategy
  • Practical Accuracy
  • Trust by Privacy
  • Security Risks

Please check here

As per HBR (Harvard Business Review), ethical frameworks for AI aren't enough. Check here

Machine Learning Roadmap

ML Roadmap

  • Focus on every stage of the ML journey
  • It is critical to detail the tasks performed at each step

References

Why do we need ML Interpretability?

  • Are there questions about model bias, fairness, or ethics?
  • Do we check the causality of features? Would more data help make better decisions?
  • Do we have the ability to debug and learn more specifics?
  • Are there any associated regulatory requirements that need to be understood in detail?
  • Do we trust the model's outcomes, and to what extent?
  • Can we define a segregation of critical vs. non-critical domains?
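Several of these questions (feature causality, debugging, feature importance) can be probed with model inspection tools. A sketch using scikit-learn's permutation importance on a synthetic dataset, where only the first two features actually drive the label (the dataset and model choice are illustrative assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
# Only features 0 and 1 determine the label; 2 and 3 are noise.
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: drop in score when a feature is shuffled.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f}")
```

The noise features should show near-zero importance, which is exactly the kind of evidence these interpretability questions call for.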

Human-Centered Design for AI/Data Science

  • Lex Fridman's lecture on Human-Centered Artificial Intelligence: MIT 6.S093
  • Stanford Human-centered Artificial Intelligence research
  • Google's People + AI research (PAIR) Guidebook

Bias - Different Types

This research paper describes six different types of bias in AI:

  • Historical Bias
  • Representation Bias
  • Measurement Bias
  • Aggregation Bias
  • Evaluation Bias
  • Deployment Bias
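Representation bias, for example, can be quantified by comparing each group's share in the training sample to its share in a reference population. A minimal sketch, assuming such population shares are available (the `representation_gap` helper is ours, not from the paper):

```python
from collections import Counter

def representation_gap(sample_groups, population_shares):
    """For each group, the difference between its share in the
    training sample and its share in the reference population.
    Positive = over-represented, negative = under-represented."""
    n = len(sample_groups)
    counts = Counter(sample_groups)
    return {g: counts.get(g, 0) / n - share
            for g, share in population_shares.items()}

sample = ["A"] * 70 + ["B"] * 25 + ["C"] * 5
population = {"A": 0.5, "B": 0.3, "C": 0.2}
gaps = representation_gap(sample, population)
print({g: round(v, 2) for g, v in gaps.items()})
```

Here group C makes up 5% of the sample against 20% of the population, flagging a candidate for additional data collection.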

Bias in AI

Fairness and Model Explainability CHECKLIST

  • This checklist applies across the CRISP-DM stages from a holistic point of view
  • Problem Formation
    • Is an algorithm an ethical solution to the problem?
  • Construction of Datasets / Preparation Process
    • Is the training data representative of the different groups, so that diverse data representation supports appropriate analysis?
    • Are there biases in labels or features?
    • Does the data need to be modified to mitigate bias?
  • Selection of Algorithms or Methods
    • Do fairness constraints need to be included in the objective function?
  • Training Process
  • Testing Process
    • Has the model been evaluated using relevant fairness metrics?
  • Deployment
    • Is the model deployed on a population for which it was not trained or evaluated?
    • Are there unequal effects across users?
  • Monitoring / HITL
    • Does the model encourage feedback loops that can produce increasingly unfair outcomes?
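The testing-stage question "has the model been evaluated using relevant fairness metrics?" can be made concrete. A minimal sketch of one common metric, demographic parity difference (the helper name and data are illustrative, not from this repo):

```python
import numpy as np

def demographic_parity_difference(y_pred, groups):
    """Largest gap in positive-prediction rate between any two groups.
    0 means all groups receive positive predictions at the same rate."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(demographic_parity_difference(y_pred, groups))  # 0.5 (0.75 vs 0.25)
```

Libraries such as Fairlearn provide this and other group fairness metrics out of the box; the choice of metric should follow from the problem-formation questions above.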

Checklist Responsible AI

Policy Related Guidance

Key objectives for policy / governance / regulatory frameworks could be as follows:

  • Safeguard consumer interest in an AI solution
  • Serve as a common, global, consistent reference point
  • Foster innovation and more robust solutions

Frameworks:

News / Updates
