Guidelines

MSU Guidelines for the Use of Generative Artificial Intelligence (Generative AI) Tools 

Generative artificial intelligence (AI) continues to play an increasingly prominent role across campus and the many communities with which we engage. At Michigan State University (MSU), we are proud that our exploration of AI is guided by ethics and grounded in our core values, ensuring it enhances our mission and strengthens the Spartan impact.

In addition to launching the AI website in the fall of 2025, MSU has released updated Guidelines for the Use of Generative Artificial Intelligence, which supersede all previously issued AI guidance. These guidelines establish clear expectations for the ethical, responsible, and transparent use of AI across educational, research, scholarly, artistic, and administrative contexts. They reflect our commitment to innovation while safeguarding academic integrity, protecting sensitive data, and ensuring equitable access. These guidelines represent the first step in consolidating the AI-related guidance previously issued across campus, and they will be regularly reviewed and updated to reflect emerging technologies and evolving best practices.

View the guidelines below or download a PDF version of the document.

 

Overview

As generative artificial intelligence (generative AI) continues to evolve rapidly, its integration into teaching, research, scholarly, artistic, and administrative contexts at Michigan State University (MSU) presents exciting opportunities, while also introducing significant challenges due to the complexity and pace of these emerging technologies.

MSU encourages all members of the university community to engage with generative AI tools responsibly, ethically, and creatively, always keeping in mind that academic and administrative decisions must remain grounded in and centered on human judgment and input.

This document establishes institutional guidelines for the responsible and ethical integration of generative AI tools across Michigan State University, reflecting MSU’s commitment to upholding its core values of collaboration, equity, excellence, integrity, and respect.

These guidelines supersede all previously issued guidance related to the use of generative AI tools at MSU. They supplement existing university policies, standards, and procedures, and serve as the university’s official framework for the ethical, responsible, and equitable use of generative AI.

MSU expects all members of its community to follow these guidelines when using generative AI tools in teaching, research, administrative, and professional contexts. To safeguard institutional data, the university also expects all members of its community to use generative AI tools that are institutionally approved and supported when conducting work on behalf of MSU.

Using MSU-approved generative AI tools helps minimize the risk of inappropriate data sharing and promotes the responsible, effective integration of AI into academic and administrative work. These tools have undergone formal compliance and security reviews, ensuring a more secure and reliable environment aligned with MSU policies, including the Acceptable Use Policy for Information Technology Resources and the Institutional Data Policy.

  • These guidelines apply to all members of the MSU community who use generative AI tools in teaching, learning, research, scholarship, or administrative activities.  
  • They cover MSU-approved generative AI tools, unit-developed applications, and externally or publicly available platforms (both free and paid).
  • Instructors are expected to establish course-specific guidelines that define the appropriate and inappropriate use of generative AI tools.
  • Students may only use generative AI tools to support their coursework or research activities when explicitly permitted by the instructor/research advisor.
  • Researchers and administrative staff should never enter confidential or sensitive information into third-party, non-MSU enterprise generative AI tools unless explicitly approved by MSU IT Information Security. For more detailed information about data use and the potential risks associated with generative AI tools, please refer to the Institutional Data Policy. For specific questions, please email informationsecurity@msu.edu.
  • When conducting work on behalf of MSU, individuals should only use MSU-approved generative AI tools and consult with their supervisors to determine appropriate use based on their specific roles, clarify expectations, and establish the scope of permissible applications.
  • Third-party generative AI tools, particularly those operated outside the United States, pose significant risks to data security and intellectual property. These tools may only be used with non-sensitive, public information unless prior approval is obtained from MSU IT Information Security.

MSU is committed to preparing its community not only to use generative AI tools effectively, but also to examine their broader implications and contribute to a more equitable and responsible technological future. The following MSU AI ethical values, developed by the MSU Ethics Institute, provide a framework for the responsible use of generative AI in alignment with the university’s mission and values.

While it can be difficult to regulate how generative AI tools are used, and we do not believe the burden of enforcement should fall on any one individual, we do believe that each person has a shared responsibility to align their use of AI with the institutional values. To help guide individual decision-making, we have drawn on both MSU’s Ethical AI Values and the European Union’s AI principles to provide examples of how one might evaluate whether and how to use AI responsibly. Ultimately, if something doesn’t feel right, that hesitation may signal a conflict with your ethical compass. Listening to that instinct is an important first step in making values-aligned decisions about AI.

These values reflect MSU’s commitment to ethical engagement with generative AI tools. However, it is important to recognize that many widely available generative AI platforms have significant limitations, particularly unclear data practices, ethically concerning labor conditions, and considerable environmental impacts. As such, ethical use involves more than thoughtful adoption; it requires ongoing critical reflection, transparency, and, when appropriate, a deliberate choice to limit or avoid use altogether.

AI Ethical Values

  • Collaboration – We prioritize reciprocity and ethical partnerships, aiming to drive interdisciplinary innovation that benefits both our community and society at large.
  • Equity – We are committed to maximizing accessibility and equity, actively dismantling biases and barriers related to socioeconomic status, disabilities and institutional boundaries to empower all members of our community.
  • Excellence – We will ethically employ AI for institutional enhancement and societal betterment, supporting professional development and continuous policy refinement to elevate our standards in all facets of our work.
  • Integrity – We commit to responsible AI use, adhering to all laws, acknowledging its limitations, ensuring privacy, security and unbiased data handling, while rigorously reviewing outputs to maintain our standard of honesty and trustworthiness.
  • Respect – We pledge to utilize AI in ways that honor human dignity and foster a culture of safety and understanding, valuing human insight above all while acknowledging and mitigating AI biases to ensure a respectful, secure community.

For the full expanded version, please visit: ethics.msu.edu/gen-ai.

To support the ongoing critical reflection we advocate, the MSU Ethics Institute offers a range of resources, programs, and collaborative opportunities. These include faculty workshops, student engagement initiatives, interdisciplinary panels, and publicly accessible guidance on emerging issues such as labor ethics and environmental sustainability in AI. While we do not expect every community member to become an expert in these areas, we are committed to building institutional capacity for shared learning. The Institute’s goal is to foster environments where ethical deliberation is both expected and supported, and to provide curated, accessible materials that empower individuals to make informed, value-driven decisions regarding generative AI use. 

Permissible Uses

We collectively share the responsibility to uphold intellectual honesty and scholarly integrity. These are core principles that may be compromised by the misuse of generative AI tools, particularly when AI-generated content is presented as original, human-created work. This includes, but is not limited to, contexts where authorship implies intellectual or creative ownership, such as academic writing, artistic production, journalism, and professional communications.

Instructors are expected to establish course-specific guidance that defines the appropriate and inappropriate use of generative AI tools. Students may only use generative AI tools to support their coursework when explicitly permitted by the instructor.

While AI can enhance learning, it should be balanced with opportunities for human engagement, critical thinking, and skill development. Educators and students alike must recognize the limitations of generative AI tools: outputs may be biased, inaccurate, or lacking in transparency, and they require careful attention to citation and proper attribution.

Students – Students are expected to follow the course-specific guidance outlined in the syllabus or assignment instructions regarding the use of generative AI tools. In the absence of explicit guidance, students should always consult with their instructor before using generative AI for any assignment or assessment. While the use of generative AI to support learning—such as practicing problems, exploring concepts, or reviewing definitions—is increasingly common, students must seek clarification from their instructors or teaching assistants to ensure their use aligns with academic integrity policies specific to each course and assignment.

When participating in university-sponsored co-curricular and experiential learning activities, students should obtain clear guidance from their instructor, supervisor, or program sponsor regarding whether and how generative AI tools may be used. This includes student employment, internships, leadership roles, research, service-learning, and other applied learning experiences. When in doubt, students are expected to always seek clarification before using generative AI tools in academic or co-curricular contexts.  

AI tools are not yet reliable sources for accurately interpreting or summarizing evidence. Therefore, critical and transparent use of these technologies is essential to maintaining trust, integrity, and fairness in all aspects of learning and development at MSU.

Educators – The university expects instructors to include a clear generative AI statement in every syllabus. This statement should specify whether generative AI use is permitted, the contexts in which it may be used (e.g., assignments, exams, collaborative projects), and how students are expected to appropriately acknowledge and cite their use of generative AI applications.

To promote clarity and uphold academic integrity, educators should explicitly define both appropriate and inappropriate uses of generative AI tools in their courses and communicate these expectations at the outset of the term. Ideally, course guidance on the use of generative AI tools includes three key components: (a) how students are allowed to use generative AI in the course, with examples of tools that are permitted or not allowed; (b) the reasoning behind the guidance, including any rules about citation or attribution; and (c) how students can ask questions or request exceptions, so expectations are clear and transparent. The Center for Teaching and Learning Innovation (CTLI) provides a complete guide to incorporating generative AI in your syllabus. Keep in mind that grounding the use of generative AI tools in actionable learning outcomes and course expectations helps students understand their relationship with generative AI.

Instructors are encouraged to plan ways they can support the development of critical thinking skills around the use of generative AI tools—fostering workforce readiness and preparing students to navigate an increasingly technology-driven world. This could include class discussions, providing resources, or creating specific assignments.  

The use of generative AI detection tools is generally discouraged. However, if an instructor chooses to use such tools, they must clearly inform students about their intended use, including the rationale, how results will be interpreted, and what actions may follow. Detection tool outputs should be considered potential indicators—not conclusive evidence—of generative AI misuse and should never serve as the sole basis for academic or grading decisions.

When using generative AI to develop course materials, interpret/translate text, or support grading processes, instructors should apply the same standards of oversight, transparency, and accountability that they expect from students in their academic use of these tools.

If educators believe a student is committing academic misconduct, they should report it to the Office of Student Support and Accountability.

Graduate Teaching Assistants – Graduate Teaching Assistants (GTAs) are expected to adhere to the instructor’s decisions regarding the use of generative AI tools in each course. The use of generative AI to assist students, develop course materials, or support grading is not permitted unless explicitly authorized by the course instructor.  

When authorized, GTAs should apply the same level of oversight, transparency, and accountability expected of instructors in their use of these tools. GTAs are advised to communicate with their instructor of record to clarify expectations, especially regarding how generative AI-related issues should be handled in their course. In cases where student work is suspected of being generated by AI, it is the responsibility of the course instructor—not the GTA—to assess the situation and determine any necessary actions. 
 

Researchers at MSU are expected to use generative AI tools in ways that uphold the highest standards of research integrity, intellectual rigor, and ethical responsibility. This includes thoughtful engagement with generative AI tools in the design, conduct, analysis, and dissemination of research, scholarship, and creative work. Use of these tools must align with applicable data security requirements, disciplinary norms, and university policies governing research conduct and compliance.

The Office of Research and Innovation offers specific procedures for using generative AI tools in research and creative activities. The office establishes a framework for responsible management of research data in alignment with state and federal laws, institutional policies, and intellectual property rights. Regardless of the tool, researchers must approach AI-generated content with a critical lens—validating accuracy, ensuring appropriate attribution, and acknowledging the limitations and potential biases inherent in these systems. AI output should never be relied upon as a substitute for scholarly judgment or original analysis.

Researchers planning to conduct studies involving the use of generative AI tools must engage with the Institutional Review Board (IRB) early in the research design process to ensure ethical compliance and alignment with federal regulations and university policies. This includes studies that collect human subject data for training or testing AI models, analyze interactions with generative AI platforms, or involve the use of generative AI tools in participant-facing activities. Investigators should be prepared to clearly explain the role of AI in their study, how data (especially sensitive or identifiable data) will be collected, stored, and protected, and whether participants are interacting directly with AI systems. If third-party generative AI platforms are used, researchers must assess data privacy, security, and consent implications, and confirm that these platforms meet MSU’s data security and use standards. Early consultation with the IRB office and MSU IT Security is expected to determine risk level, necessary disclosures, and appropriate safeguards. All AI-related elements of the study must be fully documented in the IRB application and participant consent materials to ensure transparency and protect participant rights.

For research projects that do not include human participants or identifiable private information, IRB review is typically not required. Researchers are encouraged to document how generative AI is being used and ensure that it supports, rather than substitutes, scholarly expertise. Researchers should assess potential risks related to data security, intellectual property, and research integrity, and avoid entering sensitive, proprietary, or export-controlled information into generative AI platforms.

From music and visual arts to creative writing, design, and performance, generative AI tools can be used to explore new forms of expression, prototype ideas, and support experimental work. MSU encourages creative exploration with these technologies while maintaining a commitment to ethical practice, authorship integrity, and acknowledgment of human and machine contributions. When using generative AI in artistic contexts, individuals should remain transparent about the role of AI in the creative process, respect intellectual property and cultural sensitivities, and consider the broader social, environmental, and labor implications of AI-assisted artmaking. As this field evolves, MSU supports continued dialogue and reflection on the opportunities and challenges posed by generative AI in the arts and humanities.

Integration of generative AI into research must be disclosed in research outputs, manuscripts, artistic endeavors, and grant applications in accordance with the guidance/policies and expectations of publishers, funders, and collaborators. This may include idea generation, data analysis, and drafting. In the absence of stated guidance/policy, researchers are expected to disclose any intentional and substantial uses of AI. Finally, to promote reproducibility and accountability, researchers are encouraged to keep records of generative AI prompts, outputs, and their integration into the research workflow.

Users are expected to become proficient in the use of digital tools and exercise caution when entering confidential or sensitive information into generative AI tools. They must review the MSU Institutional Data Policy to understand the potential risks associated with generative AI tools and confirm whether an MSU-approved generative AI tool is authorized to handle such data (e.g., FERPA-protected records, HIPAA-regulated information, unpublished research).

Third-party generative AI tools, particularly those operated outside the United States, pose significant risks to data security and intellectual property. These tools may only be used with non-sensitive, public information unless prior approval is obtained from MSU IT Information Security. If there is any uncertainty about the classification of data or the appropriateness of a tool, researchers must contact MSU IT Information Security for guidance. For specific questions, please email informationsecurity@msu.edu.

When collaborating with external hosts, vendors, or subcontractors, researchers are encouraged to verify how meeting content will be handled, especially when generative AI tools are involved. If the data handling practices are unclear or raise concerns, it is important to seek clarification before proceeding. In situations where a host insists on using generative AI-enabled meeting tools despite unresolved privacy or compliance issues, MSU researchers should withdraw from participation to safeguard institutional data and uphold MSU’s research integrity standards. 

Before incorporating generative AI into their work processes, individuals should consult with their supervisors to determine appropriate use based on their specific roles, clarify expectations, and establish the scope of permissible applications.

Members of the MSU community should be transparent when generative AI contributes substantively to the creation of public-facing materials such as websites, press releases, official reports, and outreach content, whether through ideation support, copyediting, or drafting, as well as to internal documents that support institutional operations. Clear attribution and accountability help promote trust and uphold the integrity of our institutional work.

General Expectations for Administrative Use – When using generative AI tools for administrative tasks, such as drafting communications, summarizing documents, generating reports, improving workflow efficiency, or enhancing accessibility, it is essential to use only MSU-approved enterprise tools. Additionally, no sensitive, confidential, or regulated university data should be entered into these tools unless explicitly permitted by MSU IT Information Security. AI-generated content must be carefully reviewed for accuracy, tone, and appropriateness to ensure it aligns with MSU’s values, maintains professional standards, and avoids misrepresentation. While these tools can support drafting and ideation, they must not replace professional expertise, informed judgment, or appropriate review and approval processes. Users should remain mindful of privacy, data protection, and accessibility when incorporating generative AI into university work.

Marketing and Communications – Follow the specific standards established by University Communications and Marketing when integrating generative AI into public-facing materials.

Evaluative or Supervisory Contexts – Generative AI tools may be used in limited ways in evaluative or supervisory contexts, such as faculty or staff reviews. This includes drafting assistance, checking for mechanical errors, or refining language. However, they must not substitute for disciplinary judgment, institutional knowledge, or the nuanced understanding essential for fair and transparent evaluation. Use in these settings must be narrowly scoped, clearly disclosed, and subject to careful human oversight, in full alignment with MSU policies, academic governance procedures, and applicable collective bargaining agreements. MSU remains committed to evaluation practices that are equitable, informed, and human-centered.

Law Enforcement and Public Safety Contexts – In areas such as Police and Public Safety, the use of MSU-approved generative AI tools must be approached with particular care, given the legal, ethical, and public trust responsibilities involved. These tools may be used in limited, clearly defined ways, for example, to assist with grammar, formatting, or structuring non-evidentiary narrative sections of internal reports. The use of any generative AI tools other than those outlined above shall be restricted to outside agencies during investigative analysis (i.e., state or federal agencies that currently use such generative AI-assisted tools pursuant to law, policy, and ethical standards). In no case will generative AI or any of its products be the sole determining factor in any investigation. Rather, any information generated or obtained through the use of generative AI will be considered as part of all information available in the totality of the circumstances, and pursuant to statute, case law, ethical standards, best practices, and policy. All use must be fully transparent, subject to human oversight, and in compliance with applicable laws, university policies, and law enforcement standards.

Non-Permissible Uses

Generative AI tools should not be used to deliberately fabricate, falsify, or misrepresent information; impersonate others or oneself; or create deceptive content, except when explicitly authorized for instructional or research purposes within a controlled and approved environment.  

Non-MSU enterprise generative AI tools must not be used to record, transcribe, or capture discussions that involve sensitive, confidential, or regulated information. For questions related to different types of institutional data and associated risks when using MSU-approved generative AI tools, please review the MSU institutional data policy.  

Entering confidential or sensitive information into third-party, non-MSU enterprise generative AI tools is strictly prohibited unless explicitly approved by MSU IT Information Security. Certain data categories remain strictly prohibited from use with any generative AI tools that have not been formally approved by MSU IT Security. This includes, but is not limited to, student records protected under FERPA; personally identifiable information (PII); personal health information (PHI) covered by HIPAA; and any financial or human resources-related data. Use of such data must comply with MSU’s institutional data policy as well as applicable state and federal regulations, regardless of platform assurances regarding data protection.

No export-controlled data or Controlled Unclassified Information (CUI), including information subject to International Traffic in Arms Regulations (ITAR) or Export Administration Regulations (EAR), may be entered into any generative AI tools unless the tool has been formally approved for use within MSU’s Regulated Research Enclave (RRE). 

Implementation, Procurement and Resources

Before using any generative AI tool to conduct work on behalf of MSU, individuals must carefully evaluate the tool's capabilities and review applicable institutional policies and guidelines. A generative AI tool should never be assumed to be “safe,” “secure,” or “private”—even if it is licensed, paid for, or provided by a well-known company—unless its use has been explicitly authorized by MSU IT.

AI Literacy – MSU continues to actively foster a campus-wide culture of AI literacy by investing in the development of centralized, up-to-date training programs, educational resources, and support materials. See the resources section for more details on existing materials and programs available.

Data and Prompt Management – MSU treats generative AI tools like any other technology procured through MSU IT. As with any other approved and supported IT system or service, MSU IT reserves the right to access and review prompts, data uploads, and outputs; monitor for misuse; flag inappropriate content; and investigate actual or suspected misconduct or incidents that may pose risks to the university. Any investigation related to the misuse of generative AI tools on campus will be conducted in accordance with institutional policies.

Reporting Misuse – Individuals should report any potential compliance or ethical concerns related to the use of generative AI tools to their department leadership or directly to the Office of Audit, Risk and Compliance. In instructional settings, unauthorized use of generative AI in coursework will be treated comparably to other forms of academic misconduct, such as unauthorized assistance, data falsification, or plagiarism. Such cases may be referred to the Office of Student Support and Accountability, as outlined in the Integrity of Scholarship and Grades. Students accused of generative AI misuse will be afforded a fair and transparent process, including the opportunity to respond to concerns, present evidence, and access appropriate support, consistent with the Student Rights and Responsibilities. Alleged misuse of generative AI—whether in academic, research, or professional contexts—will be addressed in accordance with existing university policies on academic integrity, research misconduct, and employee conduct. As with all forms of potential misconduct, the burden of proof lies with the university to demonstrate—based on a preponderance of the evidence (“more likely than not”)—that a violation has occurred.

Periodic Review – This document will be reviewed and updated regularly to reflect emerging technologies and evolving best practices. 

MSU is committed to ensuring that all technology provides an accessible, secure, seamless, and user-friendly experience. When procuring or registering for IT products or services using MSU credentials, individuals and units must carefully assess associated risks and consider their potential impact on both the university community and the broader public. All members of the MSU community are expected to follow the established university processes for procuring IT products and solutions.

To mitigate risk and ensure compliance, individuals who intend to use MSU credentials to access, procure licenses, or sign up for generative AI-enabled products or services must first consult with MSU IT Services. This requirement applies even if the tool is free or open source. IT Services will coordinate with appropriate campus offices to assess the tool, confirm security and compliance with university standards, and ensure that contractual terms do not introduce unnecessary risk to MSU.  

MSU IT also maintains an approved software list, which allows employees to access or purchase select software without submitting an IT Readiness form.

Attribution

This document was edited with the assistance of generative AI tools, which helped check for mechanical errors and provided suggestions on how to modify existing content for clarity and concision. All final content was authored and reviewed by the authoring team with input from MSU academic and administrative leadership.  

Contact

For questions or further information about these guidelines, please contact the Office of the Provost (provost@msu.edu).

History

Approved: Aug. 12, 2025

Revised on: The guidelines have not been revised to date.