Researchers at MSU are expected to use generative AI tools in ways that uphold the highest standards of research integrity, intellectual rigor, and ethical responsibility. This includes thoughtful engagement with generative AI tools in the design, conduct, analysis, and dissemination of research, scholarship, and creative work. Use of these tools must align with applicable data security requirements, disciplinary norms, and university policies governing research conduct and compliance.
The Office of Research and Innovation provides specific procedures for using generative AI tools in research and creative activities, establishing a framework for the responsible management of research data in alignment with state and federal laws, institutional policies, and intellectual property rights. Regardless of the tool, researchers must approach AI-generated content critically: validating accuracy, ensuring appropriate attribution, and acknowledging the limitations and potential biases inherent in these systems. AI output should never be relied upon as a substitute for scholarly judgment or original analysis.
Researchers planning to conduct studies involving the use of generative AI tools must engage with the Institutional Review Board (IRB) early in the research design process to ensure ethical compliance and alignment with federal regulations and university policies. This includes studies that collect human subject data for training or testing AI models, analyze interactions with generative AI platforms, or involve the use of generative AI tools in participant-facing activities. Investigators should be prepared to clearly explain the role of AI in their study, how data (especially sensitive or identifiable data) will be collected, stored, and protected, and whether participants are interacting directly with AI systems. If third-party generative AI platforms are used, researchers must assess data privacy, security, and consent implications, and confirm that these platforms meet MSU’s data security and use standards. Early consultation with the IRB office and MSU IT Security is expected to determine risk level, necessary disclosures, and appropriate safeguards. All AI-related elements of the study must be fully documented in the IRB application and participant consent materials to ensure transparency and protect participant rights.
For research projects that do not include human participants or identifiable private information, IRB review is typically not required. Researchers are nonetheless encouraged to document how generative AI is being used and to ensure that it supports, rather than substitutes for, scholarly expertise. Researchers should assess potential risks related to data security, intellectual property, and research integrity, and avoid entering sensitive, proprietary, or export-controlled information into generative AI platforms.
From music and visual arts to creative writing, design, and performance, generative AI tools can be used to explore new forms of expression, prototype ideas, and support experimental work. MSU encourages creative exploration with these technologies while maintaining a commitment to ethical practice, authorship integrity, and acknowledgment of human and machine contributions. When using generative AI in artistic contexts, individuals should remain transparent about the role of AI in the creative process, respect intellectual property and cultural sensitivities, and consider the broader social, environmental, and labor implications of AI-assisted artmaking. As this field evolves, MSU supports continued dialogue and reflection on the opportunities and challenges posed by generative AI in the arts and humanities.
Integration of generative AI into research must be disclosed in research outputs, manuscripts, artistic endeavors, and grant applications in accordance with the guidance, policies, and expectations of publishers, funders, and collaborators. Such uses may include idea generation, data analysis, and drafting. In the absence of stated guidance or policy, researchers are expected to disclose any intentional and substantial use of AI. Finally, to promote reproducibility and accountability, researchers are encouraged to keep records of generative AI prompts, outputs, and how they were integrated into the research workflow.
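One lightweight way to keep such records is to append each prompt and its output, along with a timestamp, the tool or model used, and a note on how the output was used, to a local log file. The sketch below is illustrative only, not an MSU-prescribed format; the file name, fields, and log_interaction function are assumptions made for the example.

import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_research_log.jsonl")  # illustrative file name

def log_interaction(model: str, prompt: str, output: str, purpose: str) -> None:
    """Append one prompt/output record to a JSON Lines log for later reference."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,      # tool or model version used
        "purpose": purpose,  # how the output was integrated into the workflow
        "prompt": prompt,
        "output": output,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Example usage (hypothetical model name and prompt):
# log_interaction("example-model-v1", "Summarize these field notes...", "...", "drafting")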
Users are expected to become proficient in the use of digital tools and to exercise caution when entering confidential or sensitive information into generative AI tools. They must review the MSU institutional data policy to understand the potential risks involved and confirm whether an MSU-approved generative AI tool is authorized to handle such data (e.g., FERPA-protected records, HIPAA-regulated information, or unpublished research).
Third-party generative AI tools, particularly those operated outside the United States, pose significant risks to data security and intellectual property. These tools may only be used with non-sensitive, public information unless prior approval is obtained from MSU IT Information Security. If there is any uncertainty about the classification of data or the appropriateness of a tool, researchers must contact MSU IT Information Security for guidance. For specific questions, please email informationsecurity@msu.edu.
When collaborating with external hosts, vendors, or subcontractors, researchers are encouraged to verify how meeting content will be handled, especially when generative AI tools are involved. If the data handling practices are unclear or raise concerns, it is important to seek clarification before proceeding. In situations where a host insists on using generative AI-enabled meeting tools despite unresolved privacy or compliance issues, MSU researchers should withdraw from participation to safeguard institutional data and uphold MSU’s research integrity standards.