Generative AI Policy
AI and Authorship
ARLJ follows COPE Guidelines on artificial intelligence (AI) and authorship. ARLJ policy is that AI software cannot be listed as an author on a paper.
ChatGPT and similar software are not human; they cannot independently design studies, create or critique methodologies, interpret data, or be held responsible for a study's outcomes and implications. For these reasons, ChatGPT and similar software should be treated as tools, not authors. See COPE's guidance on AI and authorship for more information.
AI and Automated Tools
ARLJ's policies on the use of AI and automated tools are as follows:
- ARLJ will not review or accept manuscripts written by nonhuman authors. Large Language Models (LLMs) and AI tools should not be listed in a byline for any reason.
- Authors are required to disclose whether AI tools were used in the creation and preparation of their manuscripts. ARLJ reserves the right to ask for and receive detailed information on how LLMs and AI were used in the creation of a manuscript.
- Reviewers shall not use LLMs or AI tools when reviewing manuscripts or preparing comments to authors.
- ARLJ will continue to monitor the ethical implications of using AI tools and automation as they evolve and change.
- AI-generated content must not exceed 5% of a manuscript.
For more information, view COPE’s guidelines and recommendations regarding AI tools and automation.