Artificial Intelligence (AI) Policy
The Journal of Palembang Nursing Studies (JPNS) acknowledges the growing role of artificial intelligence (AI) in academic research and scholarly communication. JPNS recognizes that AI technologies, including large language models (LLMs) and other forms of generative AI, can offer meaningful support to authors in various stages of the research and writing process. These tools may assist with generating ideas, accelerating literature exploration, synthesizing or analyzing findings, improving language clarity, and organizing manuscripts for submission.
While the potential benefits of AI are substantial and may enhance research efficiency and dissemination, JPNS affirms that such technologies cannot replace human intellectual contribution, originality, ethical judgment, or critical thinking. The responsibility for the integrity, accuracy, originality, and scientific rigor of all submitted work remains solely with the authors.
JPNS has therefore developed this AI policy to guide authors, reviewers, and editors in the responsible, transparent, and ethical use of AI technologies. This policy is intended to promote innovation while safeguarding academic integrity, authorship accountability, and the trustworthiness of published research.
For Authors
AI Assistance
JPNS acknowledges that AI-assisted writing tools are increasingly used as they become more accessible. AI applications that provide suggestions to enhance an author’s original work, such as tools for improving language quality, grammar, clarity, readability, or manuscript structure, are considered assistive AI tools. The use of such tools does not require formal disclosure by authors or reviewers.
However, authors remain fully responsible for ensuring that all submitted work is accurate, original, ethically prepared, and meets the highest standards of rigorous scholarship. The use of AI tools does not diminish an author’s accountability for the scientific integrity of their manuscript.
Generative AI
Generative AI tools capable of producing substantive content, such as text, references, images, figures, tables, or other scholarly material, must be disclosed when used by authors or reviewers.
Authors must cite original primary sources and must not list generative AI tools as references. If any part of a submission has been primarily or partially generated using generative AI, this must be clearly disclosed at the time of submission to allow the Editorial Team to appropriately evaluate the content.
Author Responsibilities When Using Generative AI
Authors using generative AI are required to strictly follow JPNS guidelines and, in particular, to:
- Clearly disclose the use of generative AI or language models in the manuscript, including the specific tool used and its purpose. This should be reported in the Methods section or Acknowledgements, as appropriate.
- Verify the accuracy, validity, and appropriateness of all content and citations generated by AI, and correct any errors, inconsistencies, or biases.
- Prevent plagiarism, recognizing that language models may reproduce substantial text from existing sources. Authors must always check original sources to ensure their manuscript does not contain plagiarized material.
- Prevent fabrication, as AI tools may generate incorrect information, false claims, or non-existent references. All facts, data, and citations must be independently verified prior to submission.
- Acknowledge authorship responsibility, noting that AI tools (including chatbots such as ChatGPT) must not be listed as authors on any JPNS submission.
The use of generative AI, when properly disclosed and used responsibly, will not in itself result in manuscript rejection. However, if the Editor becomes aware, at any stage of the editorial or publication process, that generative AI was used inappropriately or without proper disclosure, the Editor reserves the right to reject the manuscript at any time.
Inappropriate use includes, but is not limited to:
- Generation of misleading, incorrect, or fabricated content
- Plagiarism or improper attribution
- Failure to disclose the use of generative AI
For Reviewers and Editors
The use of artificial intelligence (AI) and large language models (LLMs) in editorial and peer-review activities raises important concerns regarding confidentiality, data security, and copyright. Many AI systems continuously learn from submitted inputs and may reuse this information in outputs provided to other users. As such, the use of these tools in handling unpublished manuscripts carries inherent risks to the integrity and confidentiality of the scholarly publishing process.
AI Assistance
Reviewers may choose to use AI-based tools solely for language improvement purposes, such as enhancing clarity, grammar, tone, or structure of their review reports. When doing so, reviewers retain full responsibility for the content, accuracy, fairness, and constructiveness of their reviews. AI tools must not replace the reviewer’s independent scholarly judgment or critical evaluation.
Editors retain full responsibility for all editorial decisions and for safeguarding the integrity of the scholarly record. Editors may use AI tools in a limited and supportive capacity, such as assisting in the identification of suitable peer reviewers. However, all editorial decisions must always be made through independent human judgment.
Generative AI
Reviewers must not use ChatGPT or other generative AI tools to generate peer-review reports, in whole or in substantial part. Any reviewer found to have inappropriately used generative AI to produce a review report will not be invited to review for the journal again, and the review will be excluded from the editorial decision-making process.
Editors must not use ChatGPT or other generative AI tools to:
- Generate editorial decision letters
- Summarize, evaluate, or make judgments about unpublished manuscripts
- Process confidential manuscript content
These activities require independent editorial expertise and must remain entirely human-led.
Undisclosed or Inappropriate Use of Generative AI
If reviewers suspect undisclosed or inappropriate use of generative AI in a submitted manuscript, they are required to promptly inform the Journal Editor and provide relevant concerns or evidence.
If Editors suspect that a manuscript or a peer-review report has been generated using ChatGPT or other generative AI tools, either inappropriately or without disclosure, they must assess the situation in accordance with this policy. Editors may also seek guidance from the JPNS editorial board or ethics committee when handling such cases.
JPNS and the Journal Editor will jointly conduct any formal investigation into concerns related to undisclosed or inappropriate use of generative AI in published articles.