Artificial Intelligence (AI) Policy
Purpose
The purpose of this policy is to provide clear guidance on the responsible use of generative AI tools (e.g., ChatGPT, Copilot, Gemini, Claude, Jasper AI) in the preparation, submission, review, and editorial handling of manuscripts across all journals published by Nitee Publication.
Authors
- Accountability: Authors are fully responsible for the originality, accuracy, and integrity of their submissions.
- Permitted Uses: Generative AI tools may be used responsibly for:
  - Idea generation and exploration.
  - Language improvement (especially for non‑native speakers).
  - Literature classification and coding assistance.
  - Interactive search support.
- Prohibited Uses: AI tools must not be used to:
  - Generate text, code, or data without rigorous human revision.
  - Create synthetic data to substitute for missing data.
  - Generate abstracts, figures, tables, or supplemental materials without validation.
  - Manipulate images, figures, or original research data.
- Disclosure Requirement: Authors must clearly acknowledge any use of AI tools in their manuscript.
  - For journal articles, the disclosure must appear in the Methods or Acknowledgments section.
  - The statement must include the tool name (with version), how it was used, and why it was used.
- Authorship: AI tools cannot be listed as authors. Authorship requires accountability, copyright responsibility, and contractual assurances, all of which only humans can undertake.
Editors
- Editors must maintain confidentiality and integrity in handling manuscripts.
- Editors must not upload unpublished manuscripts, files, or data into AI tools.
- Editors may use AI tools only for language improvement or administrative support, never for content evaluation.
- Any use of AI tools by editors must be transparent and must not compromise confidentiality, intellectual property, or ethical standards.
Peer Reviewers
- Peer reviewers must not upload unpublished manuscripts or project proposals into AI tools.
- Generative AI may be used only to improve the language of reviews, not to analyse or summarise manuscripts.
- Reviewers remain fully responsible for the accuracy, integrity, and originality of their reports.
Ethical Risks and Safeguards
- Inaccuracy and Bias: AI outputs may contain errors, bias, or fabricated information. Authors and reviewers must verify all content.
- Attribution: AI tools often fail to provide proper citations. Authors must ensure accurate referencing.
- Confidentiality: Manuscripts must not be shared with third‑party AI platforms that lack secure data protection.
- Intellectual Property: Authors must ensure that AI tools used provide sufficient safeguards for copyright and data security.
Oversight and Transparency
- All use of AI must be accompanied by human oversight.
- Transparency in disclosure ensures editors and reviewers can assess whether AI tools have been used responsibly.
- Nitee Publication retains discretion over acceptance or rejection of manuscripts where AI use is deemed inappropriate or unethical.
Policy Review
This policy will be periodically updated to reflect evolving standards in AI technology, publishing ethics, and international best practices.