Artificial Intelligence (AI) Use and Authorship Policy
The journal Computer Science (CO-SCIENCE) acknowledges the increasing use of Artificial Intelligence (AI) tools in scientific research and academic writing. However, such tools must be used in accordance with principles of scientific integrity, transparency, human oversight, and ethical responsibility.
This policy is aligned with the current recommendations of the Committee on Publication Ethics (COPE) and the World Association of Medical Editors (WAME).
1. AI Tools Cannot Be Listed as Authors
- AI systems (e.g., ChatGPT, Copilot, Gemini, Claude) cannot be listed as authors under any circumstances.
- AI tools cannot meet academic authorship responsibilities such as accountability, conflict-of-interest disclosure, or copyright transfer.
- Therefore, all authors submitting manuscripts to CO-SCIENCE must be real human contributors.
2. AI Use Must Be Transparently Declared
- Authors must clearly disclose any use of AI tools during manuscript development.
- This disclosure must appear in a dedicated section titled “AI Contribution Statement” or in the Acknowledgments section.
- The declaration must include:
  - the name and version of the AI tool used
  - the purpose of its use
  - the scope of its assistance
- This level of transparency is required by COPE and WAME ethical standards.
Example statements:
- “ChatGPT (OpenAI, 2025, version 4.0) was used only for English language editing. All data interpretation and conclusions are solely the responsibility of the authors.”
- “GitHub Copilot was used to help identify and simplify Python code errors. All code verification and analyses were conducted by the authors.”
- “No artificial intelligence tools were used in the preparation of this manuscript.”
3. Authors Are Fully Responsible for AI-Assisted Content
- Any text, figures, tables, or analyses generated or assisted by AI must be verified, validated, and approved by the authors.
- Authors bear full responsibility for any of the following that may result from the use of AI tools:
  - inaccuracies
  - misleading statements
  - incorrect citations
  - fabricated references
  - unverifiable data
- Submitting unverifiable or AI-generated content as scientific findings will be treated as a violation of publication ethics.
4. Guidelines for Editors and Reviewers
- Editors and reviewers must not upload confidential manuscript content (text, data, images, supplementary files) to public AI platforms.
- Doing so may constitute a breach of peer-review confidentiality.
- AI tools may be used only for minor technical or language-related suggestions, never for:
  - editorial decision-making
  - peer-review assessments
  - acceptance or rejection recommendations
- Editorial decisions must remain entirely under human judgment and oversight.
- Editors may use AI-detection tools as supportive instruments, but their results must be interpreted cautiously.
5. Acceptable Uses of AI Tools
CO-SCIENCE permits the use of AI tools only for supportive, non-substantive purposes, including:
- Language editing: grammar, spelling, and syntax improvements
- Code assistance: scripting or debugging support (e.g., Python, R)
- Visualization: conceptual or schematic illustrations
- Text simplification: improving readability without altering scientific meaning
Prohibited AI uses:
- Fabrication of data or research results
- Generation of references or citations
- Manipulation or alteration of analytical findings
- Drafting entire manuscripts using AI
- Creation of fabricated tables, images, or scientific outputs
6. General Principle
AI tools must be viewed strictly as assistive instruments, not as substitutes for human intellect or academic contribution.
Research conception, data analysis, interpretation, and scholarly discussion must always reflect the authors’ original intellectual work.
Computer Science (CO-SCIENCE) considers the preservation of this balance essential to maintaining ethical and high-quality scientific publishing.