Awesome Responsible AI
A curated list of awesome academic research, books, code of ethics, courses, data sets, databases, frameworks, institutes, maturity models, newsletters, principles, podcasts, regulations, responsible scale policies, reports, tools and standards related to Responsible, Trustworthy, and Human-Centered AI.
Main Concepts
What is AI Governance?
AI governance is a system of rules, processes, frameworks, and tools within an organization to ensure the ethical and responsible development of AI.
What is Human-Centered AI?
Human-Centered Artificial Intelligence (HCAI) is an approach to AI development that prioritizes human users' needs, experiences, and well-being.
What is Open Source AI?
When we refer to a “system,” we are speaking both broadly about a fully functional structure and its discrete structural elements. To be considered Open Source, the requirements are the same, whether applied to a system, a model, weights and parameters, or other structural elements.
An Open Source AI is an AI system made available under terms and in a way that grant the freedoms to:
- Use the system for any purpose and without having to ask for permission.
- Study how the system works and inspect its components.
- Modify the system for any purpose, including to change its output.
- Share the system for others to use with or without modifications, for any purpose.
What is Responsible AI?
Responsible AI (RAI) refers to the development, deployment, and use of artificial intelligence (AI) systems in ways that are ethical, transparent, accountable, and aligned with human values.
What is a Responsible AI framework?
Responsible AI frameworks often encompass guidelines, principles, and practices that prioritize fairness, safety, and respect for individual rights.
What is Trustworthy AI?
Trustworthy AI (TAI) refers to artificial intelligence systems designed and deployed to be transparent, robust and respectful of data privacy.
Why is Responsible, Trustworthy, and Human-Centered AI important?
AI is a transformative, dual-use technology poised to reshape industries, yet it requires careful governance to balance the benefits of automation and insight with protections against unintended social, economic, and security impacts. You can read more about the current wave here.
Content
- Academic Research
- Books
- Code of Ethics
- Courses
- Data Sets
- Databases
- Frameworks
- Institutes
- Maturity Models
- Newsletters
- Principles
- Podcasts
- Regulations
- Responsible Scale Policies
- Reports
- Standards
- Tools
- Citing this repository
Academic Research
Adversarial ML
- Oprea, A. et al. (2023). Adversarial machine learning: A taxonomy and terminology of attacks and mitigations. National Institute of Standards and Technology. Article
Artificial General Intelligence (AGI)
- Hendrycks, D. et al. (2025). A definition of AGI. Article
Artificial Intelligence Governance (AI Governance)
- Eisenberg, I. W. et al. (2025). The Unified Control Framework: Establishing a Common Foundation for Enterprise AI Governance, Risk Management and Regulatory Compliance. arXiv preprint arXiv:2503.05937. Article Visualization
Credo
Bias
- Schwartz, R. et al. (2022). Towards a standard for identifying and managing bias in artificial intelligence (Vol. 3, p. 00). US Department of Commerce, National Institute of Standards and Technology. Article
NIST
Challenges
- D'Amour, A. et al. (2022). Underspecification presents challenges for credibility in modern machine learning. Journal of Machine Learning Research, 23(226), 1-61. Article
Google
Drift
- Ackerman, S. et al. (2021, June). Machine learning model drift detection via weak data slices. In 2021 IEEE/ACM Third International Workshop on Deep Learning for Testing and Testing for Deep Learning (DeepTest) (pp. 1-8). IEEE. Article
IBM
- Ackerman, S. et al. (2020, February). FreaAI: Automated extraction of data slices to test machine learning models. In International Workshop on Engineering Dependable and Secure Machine Learning Systems (pp. 67-83). Cham: Springer International Publishing. Article
IBM
Explainability/Interpretability/Mechanistic Interpretability
- Dhurandhar, A. et al. (2018). Explanations based on the missing: Towards contrastive explanations with pertinent negatives. Advances in neural information processing systems, 31. Article
University of Michigan, IBM Research
- Dhurandhar, A. et al. (2018). Improving simple models with confidence profiles. Advances in Neural Information Processing Systems, 31. Article
IBM Research
- Gurumoorthy, K. S. et al. (2019, November). Efficient data representation by selecting prototypes with importance weights. In 2019 IEEE International Conference on Data Mining (ICDM) (pp. 260-269). IEEE. Article
Amazon Development Center, IBM Research
- Hind, M. et al. (2019, January). TED: Teaching AI to explain its decisions. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 123-129). Article
IBM Research
- Lundberg, S. M. et al. (2017). A unified approach to interpreting model predictions. Advances in neural information processing systems, 30. Article, Github
University of Washington
- Luss, R. et al. (2021, August). Leveraging latent features for local explanations. In Proceedings of the 27th ACM SIGKDD conference on knowledge discovery & data mining (pp. 1139-1149). Article
IBM Research, University of Michigan
- Ribeiro, M. T. et al. (2016, August). "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (pp. 1135-1144). Article, Github
University of Washington
- Wei, D. et al. (2019, May). Generalized linear rule models. In International conference on machine learning (pp. 6687-6696). PMLR. Article
IBM Research
- Contrastive Explanations Method with Monotonic Attribute Functions (Luss et al., 2019)
- Boolean Decision Rules via Column Generation (Light Edition) (Dash et al., 2018)
IBM Research
- Towards Robust Interpretability with Self-Explaining Neural Networks (Alvarez-Melis et al., 2018)
MIT
An interesting curated collection of articles (updated until 2021): A Living and Curated Collection of Explainable AI Methods.
A shared effort can be found at Neuronpedia.
Ethical Data Products
- Gebru, T. et al. (2021). Datasheets for datasets. Communications of the ACM, 64(12), 86-92. Article
Google
- Mitchell, M. et al. (2019, January). Model cards for model reporting. In Proceedings of the conference on fairness, accountability, and transparency (pp. 220-229). Article
Google
- Pushkarna, M. et al. (2022, June). Data cards: Purposeful and transparent dataset documentation for responsible AI. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 1776-1826). Article
Google
- Rostamzadeh, N. et al. (2022, June). Healthsheet: development of a transparency artifact for health datasets. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 1943-1961). Article
Google
- Saint-Jacques, G. et al. (2020). Fairness through Experimentation: Inequality in A/B testing as an approach to responsible design. arXiv preprint arXiv:2002.05819. Article
LinkedIn
Evaluation (of model explanations)