As generative artificial intelligence (Gen AI) tools like chatbots become more integrated into academic and administrative life, it’s important to pause and think about how we’re using them—and how we should be using them. Whether you’re a faculty member exploring AI to enhance teaching, a staff member streamlining workflows, or a student using chatbots for research and writing support, it’s essential to practice ethical and responsible AI use.

This article offers five tips for using AI chatbots thoughtfully and responsibly. Since Microsoft Copilot is available to all faculty, students, and staff through Penn State's Microsoft 365 license, we have included some Copilot-specific information where applicable.

1: Learn about the tool you are using.

With so many Gen AI systems currently available, it is worth researching your options and choosing the tool that best fits your needs. Depending on your intended use, consider what capabilities and features set a tool apart from other options, what dataset was used to train the model (the more current, the better!), how the algorithm works, and how the tool handles your input data. This information can help you protect your privacy and can also alert you to limitations that may affect the quality of the tool's results.

You can find important information about how Copilot works and what its recommended uses are in Microsoft’s Transparency Note for Copilot and Microsoft Copilot Frequently Asked Questions (scroll to the bottom for FAQs).

2: Do not enter personal or confidential information into AI tools.

Some Gen AI tools offer protection for your personal data and inputs, while others may use your data to train their models. This means there is a risk that publicly available tools could inadvertently share information from your prompts with other users. Keep this in mind when selecting a tool and deciding how much you are willing to include in your prompts. Visit Penn State's AI Hub Guidelines to learn which tools employees may use with each level of information. Currently, Level 3 and 4 data are prohibited for use with all general-use AI chatbots.

The version of Microsoft Copilot available at Penn State includes Enterprise Data Protection. Prompts and responses are protected under the same terms as other Microsoft 365 services, and user inputs are not used to train Microsoft's AI models. Use of Level 1 and 2 information is allowed in Copilot.

3: Always evaluate AI outputs for accuracy and potential bias.

Verifying the accuracy of AI outputs and checking for bias is essential, especially when using AI for decision-making, research, or sensitive topics. Fact-check the information you receive against generally trusted, reliable sources such as academic journals; websites with URLs ending in .org, .edu, or .gov; and established news outlets. Check multiple sources to confirm consistency. You can also ask chatbots to provide sources for their information; just be sure to evaluate those sources as well!

Signs of bias in AI responses may include representation of only one group or perspective, overgeneralizations, and reinforcement of stereotypes. If you notice bias in a response, you should report it to the tool's developer. Most chatbots, including Copilot, allow you to provide feedback on the interaction.

4: Think of AI outputs as a draft rather than a final product.

AI tools can be an excellent starting point for generating ideas, summarizing information, and exploring perspectives, but they are not a replacement for human knowledge and experience. AI generates responses based on patterns in data; it doesn’t “know” things in the same way humans do. Always review and revise AI outputs based on your own knowledge and judgment, and be sure to verify data using trusted sources. 

5: Be transparent about when you use AI.

If you intend to use AI in a professional or academic setting, find out whether there are any restrictions on its use in your department or unit and be sure to adhere to them. It is also good practice to clearly disclose when you use AI to develop content. Transparency promotes trust and accountability, and encourages ethical AI use by others. Bookmark the AI Hub Guidelines page to stay up to date on Penn State's current recommendations and policies related to Gen AI use.

If you are interested in learning more about Penn State’s enterprise version of Copilot, ITLD has created the following resources: