Generative Artificial Intelligence (AI) Guidance

Purpose

This guidance outlines principles for the ethical, responsible, and secure use of Artificial Intelligence (AI) within the University. It aims to support the University’s mission of education, research, and community service while protecting the integrity, privacy, and rights of all stakeholders. It covers AI tools including, but not limited to, machine learning models, natural language processing systems, and AI-based search and administrative tools.

Recommended Dos and Don'ts

The following lists dos and don'ts to consider when evaluating or using generative AI tools and subscriptions for university business.

DOs

  • DO redact or anonymize PII (student IDs, SSNs, health info) before pasting or uploading anything into any AI tool; see the redaction sketch after this list.
  • DO use University-approved AI tools for work (e.g., Microsoft Copilot or the University’s GPT Business tenant, never personal free accounts) to ensure institutional protections apply.
  • DO use institution-approved notetakers (e.g., Copilot, Zoom) with participant consent, and review notes before sharing.
  • DO check all AI output for accuracy — verify facts before sharing AI-generated results with colleagues or students.
  • DO use AI to streamline low-risk admin tasks (summaries of public policies, idea generation, draft emails without sensitive data).
  • DO report suspected misuse (e.g., PII accidentally exposed to AI tools) to OIT Security immediately.
  • DO contact OIT for support with questions about AI tools and approvals.
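
As a practical illustration of the first DO above, here is a minimal redaction sketch in Python. The identifier formats (a hyphenated 9-digit SSN, an "A" followed by eight digits for a student ID) are assumptions for illustration, not the University's actual formats; pattern-based redaction is best-effort and does not replace human judgment about what is safe to share.

    import re

    # Assumed, illustrative PII patterns -- adjust to your institution's
    # actual identifier formats before relying on this.
    PATTERNS = {
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "STUDENT_ID": re.compile(r"\bA\d{8}\b"),  # hypothetical format
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    }

    def redact(text: str) -> str:
        """Replace recognizable PII with labeled placeholders."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    if __name__ == "__main__":
        note = ("Advisee A12345678 (jane.doe@example.edu, "
                "SSN 123-45-6789) missed two sessions.")
        print(redact(note))
        # Advisee [STUDENT_ID REDACTED] ([EMAIL REDACTED],
        # SSN [SSN REDACTED]) missed two sessions.

Running text through a filter like this before pasting it into an approved tool catches the obvious identifiers; anything the patterns miss is still your responsibility to remove.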

DON’Ts

  • DON’T enter sensitive, protected, regulated, confidential, or proprietary data into AI tools.
  • DON’T use, purchase, or subscribe to AI tools that have NOT been reviewed and approved by OIT.
  • DON’T upload student records, grades, or advising notes — all are protected by FERPA.
  • DON’T paste confidential HR or payroll data into prompts.
  • DON’T assume outputs are private — even in Microsoft Copilot or University GPT accounts, prompts and outputs may be logged or visible to administrators.
  • DON’T rely on AI to make final legal, financial, or compliance decisions for the University (or for yourself!).
  • DON’T use personal accounts (free ChatGPT, consumer tools) for any work that involves institutional data.
  • DON’T assume that information available online or in public sources is free to use without copyright, licensing, or attribution considerations.
  • DON’T enter your University username, password, or MFA codes into AI agents, chatbots, or third-party apps that request credentials to “link” accounts.
  • DON’T use unapproved AI notetakers, and don’t rely on AI-generated notes without human review.

Key Considerations

  • Privacy: AI models can inadvertently reveal sensitive information from the personal data they were trained on, and realistic synthetic data generated by AI can sometimes be reverse-engineered to identify individuals or misused for identity theft, surveillance, or other malicious activities. Handle personal data with caution and awareness.
  • Intellectual Property and Copyright Issues: Generative AI can produce content similar to, or directly copied from, existing works, potentially violating intellectual property rights. This can lead to legal disputes and to challenges in determining who owns AI-generated content.
  • Security Risk: AI tools can be used to create or amplify cyber threats such as phishing scams, social engineering, deepfakes, and malicious code. These attacks can appear highly convincing and are often difficult to detect. Be cautious when AI systems request access permissions, links, or credentials, and never share authentication details with AI platforms.
  • Bias: AI models can inherit biases from the data they are trained on, leading to unfair or discriminatory outcomes. Review outputs critically, especially when they inform academic, administrative, or other decisions, to avoid amplifying bias.
  • Accuracy: AI users should always validate the accuracy of created content with trusted first-party sources. Users are accountable for the content, code, images, and other media AI tools produce. They should be wary of potential “hallucinations” (e.g., citations to publications or materials that do not exist) or misinformation.
  • Code Development: Generative AI can assist software developers with writing code. However, use caution: AI-generated code may be inaccurate, lack security precautions, and potentially damage software systems. All code should be reviewed, ideally by more than one person; see the example after this list.
  • Institutional Data: University institutional data is governed by the acceptable use and institutional data governance policies, which disallow uploading institutional data into non-sanctioned AI products. Even with sanctioned products, exercise great care in deciding what data to upload.
  • Business Process: AI offers many potential benefits and efficiencies for business process improvement and automation. Review tools, processes, and outputs for the reliability, accuracy, consistency, and privacy of institutional data.
  • Ethics and Responsible Use: AI use at the University should align with institutional values of fairness, integrity, and transparency. As AI capabilities evolve, continue to ask whether each use case aligns with the University’s mission and ethical expectations.
  • Transparency and Disclosure: When AI tools contribute to writing, analysis, or creative work, disclose their use. Transparency builds trust and supports academic and professional integrity.
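
To make the Code Development point concrete, here is a short, hedged Python sketch of the kind of flaw reviewers should look for in AI-generated code. The table, column names, and scenario are hypothetical; the pattern (string interpolation into SQL versus a parameterized query) is the real lesson.

    import sqlite3

    def find_student_unsafe(conn, name):
        # FLAW often seen in generated code: interpolating input into SQL
        # lets a crafted `name` rewrite the query (SQL injection).
        return conn.execute(
            f"SELECT id, name FROM students WHERE name = '{name}'"
        ).fetchall()

    def find_student_safe(conn, name):
        # Reviewed version: a parameterized query keeps `name` as data.
        return conn.execute(
            "SELECT id, name FROM students WHERE name = ?", (name,)
        ).fetchall()

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE students (id INTEGER, name TEXT)")
        conn.execute("INSERT INTO students VALUES (1, 'Ada')")
        payload = "x' OR '1'='1"  # classic injection payload
        print(find_student_unsafe(conn, payload))  # [(1, 'Ada')] -- leaks every row
        print(find_student_safe(conn, payload))    # [] -- input treated as data

A reviewer who tests generated code against hostile input, as above, will catch flaws that a quick read-through misses.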

Caution: AI Hallucination in Generative AI Usage

When employing generative AI, it is crucial to be vigilant about “AI hallucinations”: instances where a model generates false or misleading information, which can occur even in well-trained models. To mitigate this risk:

  • Verification Protocol: Always cross-verify AI-generated facts and citations with trusted sources before use; see the sketch after this list.
  • Awareness and Training: Educate users on the signs of AI hallucination, empowering them to identify and question implausible outputs.
  • Limit Use for Critical Tasks: Avoid relying solely on AI for decision-making in high-stakes scenarios where misinformation could lead to significant consequences.
  • Iterative Review: Implement a multi-stage review process that involves both AI outputs and human oversight to ensure accuracy and reliability.
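
As one hedged example of a verification protocol, the Python sketch below checks whether a cited DOI actually exists by querying the public Crossref REST API (api.crossref.org). Crossref covers only Crossref-registered DOIs, so a miss is a red flag rather than proof of fabrication, and a hit confirms only that the work exists, not that it supports the AI's claim. The second DOI in the demo is made up for illustration.

    import json
    from urllib.request import Request, urlopen
    from urllib.error import HTTPError

    def check_doi(doi: str, timeout: float = 10.0):
        """Return the registered title for a DOI, or None if Crossref
        has no record of it."""
        req = Request(
            f"https://api.crossref.org/works/{doi}",
            headers={"User-Agent": "citation-check-sketch"},
        )
        try:
            with urlopen(req, timeout=timeout) as resp:
                record = json.load(resp)
            titles = record["message"].get("title", [])
            return titles[0] if titles else "(untitled record)"
        except HTTPError as err:
            if err.code == 404:
                return None  # unknown DOI: possible hallucinated citation
            raise

    if __name__ == "__main__":
        for doi in ("10.1038/nature14539",          # real paper
                    "10.1234/made-up-by-a-model"):  # hypothetical
            title = check_doi(doi)
            print(doi, "->", title or "NOT FOUND (verify by hand)")

Automated checks like this complement, not replace, the human review steps above.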

This caution is integral to maintaining the integrity and reliability of academic work where AI tools are utilized.
