Guidelines for AI Tools in the Workplace
Introduction
With technology advancements such as ChatGPT, Google Bard and other artificial intelligence (AI)-driven platforms, there's growing enthusiasm within our community to leverage these tools and integrate them into the university context. The following advisory provides guidance on how to use these tools safely, without putting institutional, personal or proprietary information at risk. Additional guidance may be forthcoming as circumstances evolve.
UC ANR recognizes the potential for AI technologies to perpetuate biases and inequalities if not implemented and monitored carefully. Therefore, all AI systems and algorithms used within the university must undergo thorough scrutiny for bias and fairness throughout their development, deployment and ongoing usage.
Implementing AI in our Workplace
If you are looking to implement a new AI tool or process, please review the following guidelines and the Responsible Use of Artificial Intelligence Report, then contact Bethanie Brown at brbbrown@ucanr.edu and Jaki Hsieh Wojan at jhsiehw@ucanr.edu.
HR and IT will conduct a thorough assessment of the information security, employee and labor relations implications associated with the selected AI tools. They will provide you with support and recommendations regarding the appropriateness of these tools.
AI Training
Thorough training on the use of AI is required for all employees leveraging AI tools in the workplace. UC Berkeley has made its AI Essentials training publicly available, and it is highly recommended. Other training programs may also be available; contact UC ANR Human Resources at humanresources@ucanr.edu for additional information.
Prohibited Use
- AI tools may not be utilized in situations where they impact an employee's personal information, health and safety or conditions of employment, unless otherwise specified by policy or law.
- Any use of ChatGPT should proceed on the assumption that no personal, confidential, proprietary or otherwise sensitive information may be shared with it. Information classified as Protection Level P2, P3 or P4 must not be used.
- Similarly, ChatGPT or other public AI tools should not be used to generate output that would be considered non-public. Examples include, but are not limited to, proprietary or unpublished research; legal analysis or advice; recruitment, personnel or disciplinary decision making; completion of academic work in a manner not allowed by the instructor; creation of non-public instructional materials; and grading.
- Please also note that OpenAI explicitly forbids the use of ChatGPT and their other products for certain categories of activity, including fraud and illegal activities. This list of items can be found in their usage policy document.
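The rule above, that no sensitive information should be pasted into a public AI tool, can be supported by a simple pre-submission screen. The sketch below is illustrative only: the patterns are a hypothetical short list, not an implementation of the UC Protection Level (P2-P4) classifications, which cover far more than these identifiers.

```python
import re

# Illustrative patterns only; a real screen would follow the UC
# Protection Level definitions, not this short list.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone number": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return a warning for each identifier type found in the text."""
    return [
        f"possible {label} found"
        for label, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(text)
    ]

# A prompt that trips two of the checks:
warnings = screen_prompt("Contact jdoe@ucanr.edu, SSN 123-45-6789")
```

A screen like this catches only obvious identifiers; the underlying policy still requires human judgment about whether information is confidential or proprietary.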
Precautions
- Ethics
It is imperative to prioritize the well-being and rights of employees by ensuring human oversight and accountability. The use of AI to make high-stakes decisions or penalize employees should be avoided. In addition to considering the ethical implications, clear processes, procedures and standards, and potentially union notifications, must be put in place before leveraging AI that impacts an employee's conditions of employment.
- Scams
Be wary of fake websites attempting to masquerade as popular AI apps. This article explains how to tell ChatGPT scams apart from the real ChatGPT website.
- Errors and “Hallucinations”
When using generative AI tools like ChatGPT, Google Bard and similar technologies for business purposes, be vigilant about "hallucinations" — moments when the AI generates unverified or incorrect information. Always cross-check the tool's output for accuracy before incorporating it into university-related tasks. While generative AI is potent, it can occasionally produce false or misleading content. Ensure all facts and figures generated by these tools are independently verified through non-AI sources before use. In other words, don't simply copy and paste what is produced into your work.
- Bias
When using Large Language Models (LLMs) like ChatGPT, it's important to recognize that the datasets used to train these models may be incomplete or biased. Implicit and systemic biases can inadvertently be built into AI systems. Such biases run counter to UC ANR's institutional values of diversity, equity, and inclusion, so using outputs in a way that amplifies them is contrary to our shared institutional values.
- Illegal Content
Data sets used to train AI, and the resulting models, can also contain illegal content. It is important to be aware of what data sets contain and to avoid storing illegal content on UC systems, even inadvertently. Register data sets you are using with HR and IT, and notify both units immediately if you find or become aware of illegal content.
Potential Opportunities for Use
Publicly available information (Protection Level P1) can be used freely in ChatGPT. In all cases, use should be consistent with the UC ANR Principles of Community. Areas where AI use may be considered:
- Promotional Materials, Image and Video Production
Automate the creation of promotional material. Edit and create images, as well as voiceover tracks for videos, to elevate your media production.
- Coding and Web Development
Draft code for common programming tasks, accelerating the development process.
- Job Descriptions and Postings
Use templates to suggest customized language for position overviews, key responsibilities and qualifications. Review the language to ensure it is free from unintended biases, as biased wording may discourage certain groups from applying, potentially impacting the diversity of the applicant pool.
- Training and Onboarding
Develop training materials and FAQs for new tools and automate responses to common questions during staff training sessions.
- Website and Communications Content
Edit text for clarity and grammar, suggest optimal layouts, headlines and meta descriptions, and draft content for course listings, prerequisites or institutional information.
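The job-posting review described above can be partially automated before the required human pass. The sketch below uses a hypothetical, deliberately tiny word list for illustration; a real review would rely on a vetted lexicon and, ultimately, human judgment.

```python
# Hypothetical, deliberately tiny word list for illustration only;
# a real review would use a vetted lexicon and human judgment.
FLAGGED_TERMS = {
    "ninja": "informal jargon that may deter some applicants",
    "rockstar": "informal jargon that may deter some applicants",
    "young": "age-related wording",
    "aggressive": "wording some readers associate with gendered traits",
}

def review_posting(text: str) -> dict[str, str]:
    """Return flagged terms found in the posting, with a short reason.

    Uses a simple substring check for brevity; a real tool would
    tokenize to avoid matches inside longer words.
    """
    lowered = text.lower()
    return {term: reason for term, reason in FLAGGED_TERMS.items() if term in lowered}

flags = review_posting("Seeking a young coding ninja to join our team")
```

A check like this only surfaces candidates for review; deciding whether wording is actually exclusionary remains a human responsibility.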