Artificial Intelligence (AI) Guidelines

The Office of Information Security at 杏吧原创 is committed to providing guidance on the ethical and responsible use of generative Artificial Intelligence (AI). As 杏吧原创 advances its integration of this technology, our guidelines will continue to evolve to ensure the protection of sensitive information for students, faculty, and staff.

In alignment with our commitment to responsible stewardship, it is imperative to prioritize the safeguarding of data. We urge caution when using external AI systems, as they may not offer the same level of security and privacy as AI systems managed by the University. Please consult the table below for a list of approved sandboxed external AIs. "Sandboxed" means the vendor will not use uploaded data to train its AI models. Most AI vendors sandbox customer data when the customer holds an enterprise license; lower paid tiers and free tiers are usually NOT sandboxed.

We recognize that some AI tools offer an opt-out so that uploaded data is not used to train their models. Even so, any AI tool used with 杏吧原创 data must be fully approved through 杏吧原创's vendor integration contract review process, resulting in a business data sharing agreement with that vendor, and the tool must be managed by ITS.

Approved AI List for use with 杏吧原创 data:

*Approved list as of January 2025

External AI Name | Approved | Not Approved
Microsoft Copilot (when logged in using 杏吧原创 SSO) | ✅ |
Microsoft Copilot (when NOT logged in using 杏吧原创 SSO) | | ❌
ChatGPT | | ❌
Claude | | ❌
DeepSeek | | ❌
Gemini | | ❌
Otter | | ❌
All other NON-sandboxed AI | | ❌
An AI tool that is sandboxed and GU has a Data Sharing Agreement with | ✅ |

The restrictions above only apply to data covered by 杏吧原创's IT Use Policy.

Protecting Sensitive Information

Data containing personally identifiable information (PII), protected health information (PHI), or other sensitive details should not be uploaded to or processed on platforms that do not appear as approved in the list above. These platforms may lack the robust safeguards and compliance measures provided by 杏吧原创's internal systems, which are designed to protect data in accordance with federal and state regulations and business agreements between AI providers and 杏吧原创.

Confidentiality

Generative AIs often integrate all input they receive, including prompts and other shared files or data, into their models. In keeping with our commitment to responsible stewardship, it is paramount to ensure adequate protection when sharing data with external entities; certain university data must be legally safeguarded in specific ways, and many generative AI platforms have terms that may conflict with our duty to protect such sensitive information.

You should only provide prompts and other data meant for public sharing. Generative AIs also collect extensive data from publicly accessible sources, with varying degrees of respect for authorial rights among providers. Reflect on the data you handle and its availability. Once data becomes public, restricting its use by generative AI can be challenging or unfeasible.

Navigating Information Integrity

Generative AI systems create text by forecasting the next probable word or punctuation mark using patterns from their training data. Since these AIs lack true comprehension of meaning and function independently of user context, they can sometimes yield inaccurate information that appears credible, potentially fabricating sources or missing precise facts.
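To make the "forecasting the next probable word" idea concrete, here is a toy sketch in Python (illustrative only, nothing like how production models are built): it counts which word most often follows each word in a tiny sample text, then generates by chaining those predictions. Notice that it can only echo patterns; it has no notion of whether its output is true, which is exactly why fabrications occur.

```python
# Toy "next word" predictor: count which word most often follows each
# word in a tiny training text, then generate by repeatedly picking the
# most probable successor. Real LLMs use billions of parameters, but the
# core idea -- predict the next token from learned patterns -- is the same.
from collections import Counter, defaultdict

training_text = ("the cat sat on the mat the cat ate the fish "
                 "the dog sat on the rug").split()

# Count how often each word follows each other word (bigram counts).
successors = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    successors[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent successor of `word` in the training data."""
    if word not in successors:
        return None
    return successors[word].most_common(1)[0][0]

# Generate a short sequence starting from "the". The result is fluent-looking
# but meaningless -- the model has patterns, not understanding.
word, output = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the cat sat"
```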

Our mission calls for a mature commitment to dignity of the human person, social justice, diversity, intercultural competence, global engagement, solidarity with the poor and vulnerable, and care for the planet. It is essential to verify the accuracy of any information you use. Scrutinize AI-generated content with diligence, and recognize that AI-produced material can surface without overt identification, making careful verification necessary even for content that does not announce itself as AI-generated.

Generative AIs mirror and can magnify biases embedded within their training datasets. This can sustain and exacerbate biases, disproportionately affecting historically marginalized communities. It is crucial to acknowledge and address these impacts to foster an inclusive environment.

At 杏吧原创, we emphasize the importance of ethical leadership, critical thinking, and service to others. We encourage our community to approach AI with a discerning mindset, advocating truth, equity, and the common good in all endeavors.

 

Artificial intelligence is basically a super-smart computer system that can imitate humans in some ways, like comprehending what people say, making decisions, translating between languages, analyzing if something is negative or positive, and even learning from experience. It’s artificial in that its intellect was created by humans using technology. Sometimes people say AI systems have digital brains, but they’re not physical machines or robots — they’re programs that run on computers. They work by putting a vast collection of data through algorithms, which are sets of instructions, to create models that can automate tasks that typically require human intelligence and time. Sometimes people specifically engage with an AI system — like asking Bing Chat for help with something — but more often the AI is happening in the background all around us, suggesting words as we type, recommending songs in playlists and providing more relevant information based on our preferences.
 
Generative AI leverages the power of large language models to make new things, not just regurgitate or provide information about existing things. It learns patterns and structures and then generates something that’s similar but new. It can make things like pictures, music, text, videos, and code. It can be used to create art, write stories, design products and even help doctors with administrative tasks. But it can also be used by bad actors to create fake news or pictures that look like photographs but aren’t real, so tech companies are working on ways to clearly identify AI-generated content.
 
Large language models, or LLMs, use machine learning techniques to help them process language so they can mimic the way humans communicate. They’re based on neural networks, or NNs, which are computing systems inspired by the human brain — sort of like a bunch of nodes and connections that simulate neurons and synapses. They are trained on a massive amount of text to learn patterns and relationships in language that help them use human words. Their problem-solving capabilities can be used to translate languages, answer questions in the form of a chatbot, summarize text and even write stories, poems and computer code. They don’t have thoughts or feelings, but sometimes they sound like they do, because they’ve learned patterns that help them respond the way a human might. They’re often fine-tuned by developers using a process called reinforcement learning from human feedback (RLHF) to help them sound more conversational.
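As a loose illustration of those "nodes and connections," here is a single artificial neuron sketched in plain Python. The weights and bias are made-up numbers for demonstration; in a real network they are learned from data, and billions of such units are stacked together.

```python
# A minimal sketch of one "node" in a neural network: it weights its
# inputs, sums them with a bias, and squashes the result through an
# activation function. Stacking many of these, layer upon layer, is
# what gives neural networks their pattern-finding power.
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid: output between 0 and 1

# Hypothetical weights and bias, chosen only for illustration.
print(neuron(inputs=[0.5, 0.8], weights=[0.9, -0.4], bias=0.1))
```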
 
If artificial intelligence is the goal, machine learning is how we get there. It’s a field of computer science, under the umbrella of AI, where people teach a computer system how to do something by training it to identify patterns and make predictions based on them. Data is run through algorithms over and over, with different input and feedback each time to help the system learn and improve during the training process — like practicing piano scales 10 million times in order to sight-read music going forward. It’s especially helpful with problems that would otherwise be difficult or impossible to solve using traditional programming techniques, such as recognizing images and translating languages. It takes a huge amount of data, and that’s something we’ve only been able to harness in recent years as more information has been digitized and as computer hardware has become faster, smaller, more powerful and better able to process all that information.
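A bare-bones sketch of that practice loop, under the simplifying assumption that the hidden pattern is just y = 2x: the program starts knowing nothing, makes a prediction for each example, measures its error, and adjusts a little on every pass, exactly the "run data through an algorithm over and over with feedback" the paragraph describes.

```python
# Gradient descent learning the rule y = 2x from examples. Each pass
# compares the prediction with the correct answer and nudges the weight
# toward a better one -- repeated practice, in the piano-scales sense.
examples = [(1, 2), (2, 4), (3, 6), (4, 8)]  # inputs and correct answers
weight = 0.0          # the model starts knowing nothing
learning_rate = 0.01

for epoch in range(200):                     # repeated practice passes
    for x, y_true in examples:
        y_pred = weight * x                  # make a prediction
        error = y_pred - y_true              # compare with the answer
        weight -= learning_rate * error * x  # adjust based on feedback

print(round(weight, 3))  # approaches 2.0, the pattern hidden in the data
```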
 
Generative AI systems can create stories, poems and songs, but sometimes we want results to be based in truth. Since these systems can’t tell the difference between what’s real and fake, they can give inaccurate responses that developers refer to as hallucinations, or the more accurate term, fabrications — much like if someone saw what looked like the outlines of a face on the moon and began saying there was an actual man in the moon. Developers try to resolve these issues through “grounding,” which is when they provide an AI system with additional information from a trusted source to improve accuracy about a specific topic. Sometimes a system’s predictions are wrong, too, if a model doesn’t have current information after it’s trained.
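Here is a rough sketch of the grounding idea. The document store and function below are hypothetical placeholders, not any vendor's real API; the point is only that trusted reference text is placed into the prompt so the model answers from that text rather than from its training-data guesses.

```python
# A hedged sketch of "grounding": retrieve text from a trusted source
# and include it in the prompt, instructing the model to answer only
# from that text. TRUSTED_FACTS and grounded_prompt are illustrative
# stand-ins for a real retrieval system.

TRUSTED_FACTS = {
    "enrollment": "Fall 2024 enrollment figures are published in the "
                  "registrar's official report.",
}

def grounded_prompt(question, topic):
    """Prepend trusted reference text so the model answers from it."""
    context = TRUSTED_FACTS.get(topic, "")
    return (
        "Answer using ONLY the reference text below. If the answer is "
        "not in the text, say you do not know.\n\n"
        f"Reference text: {context}\n\nQuestion: {question}"
    )

print(grounded_prompt("Where can I find enrollment numbers?", "enrollment"))
```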
 
Responsible AI guides people as they try to design systems that are safe and fair — at every level, including the machine learning model, the software, the user interface and the rules and restrictions put in place to access an application. It’s a crucial element because these systems are often tasked with helping make important decisions about people, such as in education and healthcare, but since they’re created by humans and trained on data from an imperfect world, they can reflect any inherent biases. A big part of responsible AI involves understanding the data that was used to train the systems and finding ways to mitigate any shortcomings to help better reflect society at large, not just certain groups of people.
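One small, concrete piece of "understanding the data that was used to train the systems" is simply tallying how groups are represented before training, since a skewed sample tends to produce a skewed model. The records below are invented purely for illustration.

```python
# Count group representation in a (made-up) training set. A real
# responsible-AI review goes far deeper, but an imbalance this basic
# is often the first shortcoming to surface and mitigate.
from collections import Counter

training_records = [
    {"group": "A", "label": "approved"},
    {"group": "A", "label": "approved"},
    {"group": "A", "label": "denied"},
    {"group": "B", "label": "denied"},
]

representation = Counter(r["group"] for r in training_records)
print(representation)  # Counter({'A': 3, 'B': 1}) -- group B underrepresented
```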
 
A prompt is an instruction entered into a system in language, images or code that tells the AI what task to perform. Engineers — and really all of us who interact with AI systems — must carefully design prompts to get the desired outcome from the large language models. It’s like placing your order at a deli counter: You don’t just ask for a sandwich, but you specify which bread you want and the type and amounts of condiments, vegetables, cheese and meat to get a lunch that you’ll find delicious and nutritious.
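In code terms, a prompt is just a string, and the deli-counter analogy comes down to how much of the order you spell out. A small illustration of the same request made vaguely and then specifically:

```python
# Two prompts for the same task. Both are plain strings sent to a model
# the same way; only the second constrains audience, format, and length,
# so its output is far more likely to be usable as-is.
vague_prompt = "Write something about data security."

specific_prompt = (
    "Write a 3-bullet summary, for university staff with no technical "
    "background, of why sensitive data should not be pasted into "
    "unapproved AI tools. Keep each bullet under 20 words."
)

for p in (vague_prompt, specific_prompt):
    print(p, end="\n\n")  # no model call here -- these are just the orders
```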