Guidelines for Generative AI

Generative AI (GAI) holds significant potential for advancing research and scholarship across various disciplines. GAI tools can help researchers be more effective, productive, and innovative in their work. However, the use of GAI tools raises ethical considerations and challenges that must be addressed to ensure responsible and beneficial outcomes. Uses of GAI in research may involve the drafting of proposals and progress reports, data/statistical analysis, graphic generation, etc. Standards, regulations, and policies are being contemplated or actively developed at federal, state, or institutional levels as the use and impact of GAI evolves. The following guidance is designed to assist members of the WSU research community in employing GAI tools ethically and responsibly and with full transparency.

This guide is focused on the research enterprise and is based on published guidelines by journals, funding agencies, professional societies, and WSU’s assessment of GAI’s benefits and risks.

General guidelines for the use of GAI at WSU can be found on the Provost’s website. All use must comply with WSU EP08 regarding WSU system data.

If you have thoughts about what to add to this guide or how to improve it, please email OR.Hotline@wsu.edu.


Ethical Considerations

Beyond the demands of compliance with laws and policies as they are developed, there are broader ethical considerations related to the use of GAI tools. Researchers should consider the implications of using AI to automate tasks traditionally performed by humans, including the potential consequences for academic integrity and the integrity of the research process. The Office of Research recommends that members of the WSU research community consider the following points very carefully when introducing GAI tools into research workflows:

  • Prioritize ethical principles such as transparency, fairness, accountability, and respect for human dignity when using GAI tools.
  • Consider the potential societal impacts of generated content, including but not limited to misinformation, bias reinforcement, and privacy concerns.
  • Uphold standards of academic integrity by clearly attributing the use of GAI tools in research outputs and publications.

The following sections are meant to provide context and dimension to these considerations as investigators decide whether and how to make use of GAI tools in their research and scholarship.


Privacy and Data Security

To adhere to state and federal privacy and recordkeeping requirements, WSU has enacted several policies related to the protection of individual privacy and data security. These policies and guidelines apply equally to GAI tools.

  • Third-party software, tools, or information storage systems must comply with WSU data security policies. Because GAI tools are unlikely to meet these requirements, researchers should generally not input sensitive data into externally sourced GAI tools.
  • This also applies to AI meeting tools; be cautious when using such tools in meetings where sensitive topics are discussed. Furthermore, like records produced by human notetakers, records generated by GAI tools are likely to be considered public records and should be treated as such.

Potential Risks of using GAI in Research

The use of GAI tools inherently includes certain risks. Some central concerns are listed below; however, it is the investigator’s responsibility to understand the risks of a given tool and to act appropriately to mitigate those risks.

  • Artificial Hallucinations: GAI tools may confidently present results that are inaccurate. Without scrutiny and, in some contexts, topical expertise, it can be difficult to distinguish these hallucinations from accurate outputs.
  • Bias: GAI tools can present outputs that are, at least superficially, novel. However, their models are powered by large datasets drawn from existing images, published works, etc., all of which may contain biases. Furthermore, there may be bias built into the structure of the GAI models themselves. Users should be conscious of the fact that these biases can be reproduced by GAI tools.
  • AI Knowledge Limitations: GAI tools may have specified limitations, such as a cutoff date for the materials included in the model. Researchers should be aware of such limitations. For example, large language models like ChatGPT can be useful for preparing literature reviews; however, pertinent literature may be excluded if it was published after the model was last updated.
  • Challenges in Translation: When using GAI for translation, especially if researchers are not proficient in both languages involved, consulting with fluent speakers for verification is essential. This helps mitigate potential inaccuracies in the translated text.

Using GAI in Research Data Review

GAI can play a valuable role in data quality assurance. By leveraging its ability to quickly process large datasets, GAI can efficiently identify errors, inconsistencies, and biases that may exist within the data. As with any type of review, investigators should:

  • Corroborate findings with manual checks and expert assessments to ensure that the anomalies flagged by the GAI are accurately interpreted and addressed appropriately.
  • Complement with human judgment and expertise to ensure the reliability and accuracy of the data review process.

Writing and Publishing

Researchers are responsible for the authorship and content of all writing. GAI may have benefits, as described below, but there are drawbacks as well. Users must not input confidential or sensitive information into GAI tools when using them to summarize, present, or translate work. The researcher should always verify that summaries, presentations, and translations created by GAI accurately represent the work.

While there are limited cases in which it is appropriate to use GAI tools to generate content for publication, GAI can be useful in the earlier stages of the writing process and during editing. As with other uses, it is always incumbent on the researcher to use GAI tools responsibly by validating their assessments and suggestions.

Researchers should be aware of existing guidelines from journals (e.g., PLOS authorship guidance and Nature Portfolio authorship guidance) and from professional organizations (e.g., the American Psychological Association (APA) and the International Committee of Medical Journal Editors). The Committee on Publication Ethics (COPE) has issued a position statement and other guidelines on AI and authorship, which have been adopted by some publishers and other organizations.

Correct attribution and disclosure of the use of GAI in research papers are essential to maintain transparency and academic integrity.

  • Disclosure in the Methods Section: In the Methods or other relevant sections of your paper, include a subsection detailing the use of GAI. Describe the specific prompts or queries provided to the AI model and the nature of its responses. Be transparent about how GAI was integrated into your research process.
  • Citation: Treat the output of GAI as you would any other source of information. If you directly rely on AI-generated content, cite it in your paper. Use a standard citation style, such as those of the American Psychological Association (APA), the Chicago Manual of Style, or the Modern Language Association (MLA).
  • Clear Descriptions: Ensure that your description of the use of GAI is clear and informative. Provide enough detail for readers to understand how the AI was utilized and its potential impact on the research outcomes.

Grant Proposal Development, Writing, Submission, and Reporting

As with writing for publication, researchers assume full responsibility for the content of proposals, reports, and other items submitted to federal and other funding agencies. The investigator must ensure the integrity, accuracy, and originality of every aspect of the proposal. This includes any content generated in full or in part by GAI tools.

  • Using GAI in proposal preparation can offer benefits such as increased efficiency. However, it also introduces risks, including the potential for plagiarism, fabrication, or falsification of information. Therefore, investigators must approach the use of AI tools with a clear understanding of these risks and take appropriate measures to mitigate them.
  • Researchers should carefully review the content of submissions for accuracy, originality, and conformity with the requirements of the funding agency, including any stated policies on the use of GAI. As policies are in flux, researchers should take extra care to attend to agency policies.
  • Ultimately, researchers are responsible for submitting accurate, original work.

Federal Funders’ Guidelines and Policies

The National Institutes of Health (NIH)

The use of GAI to help write grant applications and/or R&D contract proposals to the National Institutes of Health (NIH) is not prohibited. However, if you choose to use AI tools for this purpose, you do so at your own risk. NIH guidance states: “when we receive a grant application, it is our understanding that it is the original idea proposed by the institution and their affiliated research team.” Concerns related to research misconduct, such as plagiarism, falsified information, or fabricated citations, could arise from the use of AI tools. If such issues are identified in a grant application, appropriate actions will be taken to address the non-compliance. Therefore, while the use of AI for assistance is not forbidden, caution is advised to ensure compliance with NIH guidelines and ethical standards. Note that NIH does prohibit the use of GAI in proposal review.

The National Science Foundation (NSF)

The National Science Foundation (NSF) has established guidelines for reviewers and proposers including the below statements:

  • “NSF reviewers are prohibited from uploading any content from proposals, review information and related records to non-approved GAI tools.”
  • “Proposers are encouraged to indicate in the project description the extent to which, if any, GAI technology was used and how it was used to develop their proposal.”

Proposers bear the responsibility for the accuracy and authenticity of their proposal submissions, including content developed with the assistance of GAI tools. NSF’s Proposal and Award Policies and Procedures Guide (PAPPG) addresses research misconduct, which encompasses fabrication, falsification, or plagiarism in proposing or performing NSF-funded research, or in reporting results funded by NSF. As GAI tools may pose risks related to research misconduct, proposers and awardees are accountable for ensuring the integrity of their proposals and the reporting of research results.

This policy doesn’t prevent research on GAI as a subject of study; however, it underscores the importance of maintaining integrity and authenticity in proposal preparation and research reporting.


Resources and Policies

Content adapted from multiple sources, including the University of Michigan, the University of Texas, Arizona State University, and the University of North Carolina.