AI Conduct Policy

GSSoC embraces AI as a learning tool while protecting the integrity of open source contributions and the spirit of genuine skill development.

Why This Policy Exists

Generative AI tools are widely used in software development. GSSoC does not prohibit them — that would be both impractical and counterproductive. However, open source contribution is fundamentally about human understanding, human judgment, and human accountability.

Code that you cannot explain is code that you should not submit. Projects that accept unreviewed AI-generated contributions inherit technical debt that their maintainers will carry indefinitely. Contributors who submit AI output without engagement learn nothing.

This policy exists to protect three groups: contributors, who benefit most from genuine learning; maintainers, who deserve quality contributions; and the open source ecosystem, which depends on trust and accountability.

Permitted AI Use

Learning and Understanding

Using AI to explain a concept, debug an error, interpret documentation, or understand an existing codebase is actively encouraged. This is AI in its most valuable role — as a patient, accessible teacher.

Boilerplate Generation

Generating repetitive, well-understood boilerplate — configuration files, test scaffolding, standard patterns — is acceptable, provided you review, understand, and can explain every part of what was generated.

Writing Quality

Using AI to improve the grammar, clarity, or structure of documentation, comments, or commit messages is permitted.

Idea Generation

Using AI to brainstorm approaches to a technical problem is encouraged. The evaluation, selection, and implementation of those approaches must be yours.

Prohibited AI Use

Unreviewed Code Submission

Copying AI-generated code into a pull request without reviewing, testing, and fully understanding every part of it is prohibited. The test: can you explain this code to a reviewer in a live call without preparation?

AI-Generated Communication

Using AI to write issue comments, responses to maintainer feedback, or code review replies is prohibited. All project communication must authentically represent your own thinking.

AI-Written Applications

Using AI to write GSSoC application responses is prohibited. Applications must reflect your genuine voice, experience, and motivation.

Disguising AI Contributions

Minimally editing AI-generated output to obscure its origin while claiming the work as entirely your own is a form of dishonesty and is treated as such under this policy.

Disclosure and Transparency

If AI tools contributed substantially to a solution, include a disclosure note in your pull request description. A suitable format:

  "Used GitHub Copilot to generate the initial implementation of the data transformation function. Reviewed each section, corrected two logic errors, added test coverage, and adapted the output format to match the project's existing conventions."

Disclosure is not penalised. Disclosure combined with genuine understanding and quality work is viewed positively. Concealment of AI assistance is grounds for the escalating consequences below.

Detection and Enforcement

Mentors and project admins review contributions for indicators of unreviewed AI generation. Contributors may be asked to explain their code in a short synchronous call. Consistent patterns of AI-style output without demonstrable understanding trigger review.

Escalating consequences:

  • First violation: Pull request rejected, formal written warning, required acknowledgment of this policy
  • Second violation: Two-week suspension from program activities and the leaderboard
  • Third violation: Permanent disqualification from the current edition and all future GSSoC editions
  • Severe violations (large-scale, systematic AI submission without any engagement): immediate permanent disqualification