
AI Consent Framework for Open Source Communities

Guide your community in defining when and how consent is requested for AI-related decisions, training data, and community contributions

Sponsor and Ecosystem Impact

As AI becomes more integrated into open source, establishing clear consent practices ensures your community maintains trust, transparency, and control over how their contributions are used in AI contexts.

AI Consent Framework for OSS

Evaluate the creation and implementation of an AI Consent Framework developed collaboratively by a service provider, maintainers, and the broader community. The framework defines how consent is established, communicated, enforced, and revisited when AI systems affect contributors, users, or community assets.

Process Milestones

Note that some milestones may not currently apply to a project (for example, when there are no existing contributors to sponsor), but documentation for future consideration is encouraged.

  • Kick-off meeting: The maintainer meets with the OSS Wishlist admin and practitioner (whether a sponsor employee or a verified practitioner) to align on goals and timeline.
  • Community research (interviews and/or focus groups) to understand the challenges and pain points related to AI being added to the product or service.
  • Report and recommendations shared with the maintainer and community.
  • Iterations based on feedback.
  • Implementation support (supporting the maintainer and community as needed).
  • Wrap-up meeting: The maintainer meets with the OSS Wishlist admin and practitioner.
  • Survey (maintainer and practitioner).

Audience: Peer reviewers

Scope:
Consent related to AI use in tooling, workflows, data usage, automated decisions, and contributor interactions.
This rubric evaluates process, alignment, and implementation — not legal compliance or AI model quality.

Scoring (per criterion):
0 = Absent / Not Evident
1 = Ad-hoc / Unclear
2 = Defined but limited
3 = Strong and community-aligned
4 = Exemplary and repeatable


A. Purpose & Scope (0–12 pts)

Criterion | Indicators of Excellence | Score
A1. Purpose & Intent | Clearly states why an AI consent framework exists and what risks or concerns it addresses. | 0–4
A2. Scope of AI Use | Defines which AI uses require consent (e.g., contribution assistance, moderation, data analysis, automation). | 0–4
A3. Community Involvement in Scoping | Scope defined with maintainer and community input; trade-offs documented. | 0–4

B. Consent Mechanisms (0–12 pts)

Criterion | Indicators of Excellence | Score
B1. How Consent Is Given | Clear mechanisms for opt-in, opt-out, or conditional consent at appropriate decision points (illustrated in the sketch after this table). | 0–4
B2. Informed & Understandable Consent | Consent language is clear, accessible, and explains consequences and limitations. | 0–4
B3. Change & Withdrawal | Processes exist to revise or withdraw consent over time without penalty. | 0–4
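
To make criteria B1–B3 more concrete, below is a minimal sketch of how a project might record opt-in, opt-out, conditional, and withdrawn consent per AI use, keeping history so consent can be revised or withdrawn without penalty. The structure, field names, and Python implementation are illustrative assumptions, not part of the framework itself.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional


class ConsentStatus(Enum):
    """Possible states for a contributor's consent to a given AI use."""
    OPT_IN = "opt-in"
    OPT_OUT = "opt-out"
    CONDITIONAL = "conditional"   # consent limited to specific conditions
    WITHDRAWN = "withdrawn"


@dataclass
class ConsentRecord:
    """One contributor's consent decision for one scoped AI use (hypothetical)."""
    contributor: str                  # e.g. a handle or email
    ai_use: str                       # e.g. "moderation", "contribution assistance"
    status: ConsentStatus
    conditions: Optional[str] = None  # free-text conditions for CONDITIONAL consent
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    history: list = field(default_factory=list)   # prior (status, timestamp) pairs

    def update(self, new_status: ConsentStatus, conditions: Optional[str] = None) -> None:
        """Revise or withdraw consent; earlier decisions are kept for auditability."""
        self.history.append((self.status, self.recorded_at))
        self.status = new_status
        self.conditions = conditions
        self.recorded_at = datetime.now(timezone.utc)


# Example: a contributor opts in to AI-assisted issue triage, then later withdraws.
record = ConsentRecord("contributor-handle", "issue triage", ConsentStatus.OPT_IN)
record.update(ConsentStatus.WITHDRAWN)
print(record.status.value, "as of", record.recorded_at.isoformat())
```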

C. Governance, Roles & Accountability (0–12 pts)

Criterion | Indicators of Excellence | Score
C1. Defined Ownership | Clear roles for maintaining the consent framework and handling exceptions or disputes. | 0–4
C2. Decision-Making & Escalation | Transparent paths for resolving consent-related conflicts or uncertainty. | 0–4
C3. Alignment with Community Governance | Framework aligns with existing governance, values, and Codes of Conduct. | 0–4

D. Implementation & Tooling (0–12 pts)

Criterion | Indicators of Excellence | Score
D1. Integration into Workflows | Consent requirements integrated into real contributor or user workflows. | 0–4
D2. Tooling & Automation | Tools or bots support consent enforcement (e.g., labeling AI-assisted contributions; see the sketch after this table). | 0–4
D3. Practical Documentation | Clear guidance exists on when consent applies and how it is operationalized. | 0–4
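
As one illustration of criterion D2, the sketch below shows a hypothetical check that labels a pull request as AI-assisted when its description contains a disclosure phrase, using the GitHub REST API. The repository name, label, and disclosure convention are assumptions; a real project would adapt this to its own consent workflow and tooling.

```python
"""Hypothetical consent-labeling check: adds an 'ai-assisted' label to a PR
when the author discloses AI use in the PR description."""
import os
import requests

GITHUB_API = "https://api.github.com"
REPO = "example-org/example-project"        # assumed repository
DISCLOSURE_PHRASE = "ai-assisted: yes"      # assumed disclosure convention
LABEL = "ai-assisted"                       # assumed label name
TOKEN = os.environ["GITHUB_TOKEN"]

HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "Accept": "application/vnd.github+json",
}


def label_if_ai_assisted(pr_number: int) -> bool:
    """Label the PR when its description contains the disclosure phrase."""
    pr = requests.get(f"{GITHUB_API}/repos/{REPO}/pulls/{pr_number}", headers=HEADERS)
    pr.raise_for_status()
    body = (pr.json().get("body") or "").lower()

    if DISCLOSURE_PHRASE not in body:
        return False

    # Labels are added through the issues endpoint, which also covers PRs.
    resp = requests.post(
        f"{GITHUB_API}/repos/{REPO}/issues/{pr_number}/labels",
        headers=HEADERS,
        json={"labels": [LABEL]},
    )
    resp.raise_for_status()
    return True


if __name__ == "__main__":
    print(label_if_ai_assisted(pr_number=123))
```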

E. Transparency & Communication (0–12 pts)

Criterion | Indicators of Excellence | Score
E1. Disclosure of AI Use | Project publicly documents where and how AI is used. | 0–4
E2. Data & Model Transparency | High-level information about data sources, models, or services involved is available where appropriate. | 0–4
E3. Feedback & Reporting Channels | Community can raise concerns or questions about AI use or consent breaches. | 0–4

F. Monitoring, Review & Renewal (0–8 pts)

Criterion | Indicators of Excellence | Score
F1. Signals & Metrics | Tracks consent-related signals (opt-outs, friction points, disputes). | 0–4
F2. Iteration & Renewal | Framework includes regular review and update cycles as AI use evolves. | 0–4

Total Score: ____ / 68 pts

Rating | Descriptor
60–68 | Exemplary: Consent is well-defined, implemented, and community-owned
48–59 | Strong: Mostly complete with minor gaps
34–47 | Adequate: Functional but uneven
20–33 | Weak: Significant gaps or unclear enforcement
0–19 | Not Viable: Consent not meaningfully addressed
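
For reviewers tallying a completed rubric, here is a minimal sketch (with hypothetical scores) of how per-criterion values roll up into the 68-point total and map onto the rating bands above.

```python
# Hypothetical per-criterion scores for one review; replace with real values.
SECTIONS = {
    "A": {"A1": 3, "A2": 4, "A3": 2},
    "B": {"B1": 3, "B2": 3, "B3": 2},
    "C": {"C1": 4, "C2": 2, "C3": 3},
    "D": {"D1": 2, "D2": 1, "D3": 3},
    "E": {"E1": 4, "E2": 3, "E3": 3},
    "F": {"F1": 2, "F2": 3},
}

# (minimum total, descriptor), matching the rating table above.
BANDS = [
    (60, "Exemplary"),
    (48, "Strong"),
    (34, "Adequate"),
    (20, "Weak"),
    (0, "Not Viable"),
]

total = sum(score for criteria in SECTIONS.values() for score in criteria.values())
rating = next(descriptor for minimum, descriptor in BANDS if total >= minimum)
print(f"{total} / 68 -> {rating}")   # 47 / 68 -> Adequate
```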

Reviewer Notes

  • Areas of strong alignment:
  • Areas of concern or ambiguity:
  • Recommended next steps: