- Author: Sarah L Marsh
- Posted by: Sam Romano
Globally, approximately 570 million small and medium-sized farms need training in various agricultural fields. However, the delivery of agricultural training faces significant challenges. In some areas, the difficulty of obtaining this training has led people to turn to generative artificial intelligence (AI) models such as ChatGPT to ask questions about their agricultural production.
ChatGPT and similar models are trained on vast amounts of data to learn patterns and relationships between words. This enables the models both to interpret language in nuanced ways and to generate answers to a wide range of prompts, which means that ChatGPT can be adapted to specific uses and can, in theory, provide a comprehensive answer to any question. Researchers supported by the CGIAR's Excellence in Agronomy Initiative and the Digital Innovation Initiative studied the accuracy of ChatGPT-provided information and professional advice in response to queries from African farmers. Tzachor et al. (2023) found significant inaccuracies that could potentially lead to poor management and crop losses. The problems with the answers ranged from vagueness to outright inaccuracy.
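The "learning patterns between words" idea can be illustrated with a toy next-word predictor. This is a vastly simplified sketch, not how ChatGPT actually works: real models use neural networks trained on billions of documents, while this example merely counts which word follows which in a tiny made-up corpus. The corpus and function names here are invented for illustration.

```python
from collections import defaultdict, Counter

def train_bigram_model(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequently observed follower of `word`, or None."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Tiny illustrative corpus (made up for this sketch)
corpus = "rice needs water and rice needs sunlight and rice needs care"
model = train_bigram_model(corpus)
print(predict_next(model, "rice"))  # "needs" follows "rice" in every training example
```

The sketch also shows why such systems can be confidently wrong: the model predicts whatever was statistically common in its training data, with no notion of whether the answer is factually correct for the question being asked.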
I became curious about how accurate ChatGPT is with regard to questions about California rice, so I conducted an informal test of my own, asking the model questions about California water-seeded rice management.
When queried about the insecticides that are registered for use in California water-seeded rice to control armyworms, ChatGPT responded with six insecticides – only one of which (lambda-cy) is used in California rice systems. The remaining "recommended" insecticides were either not used in California, not used for armyworms, or no longer commercially available.
I also asked ChatGPT “How to manage weedy rice in California water-seeded rice fields.” The model returned several paragraphs, with one problematic paragraph reproduced below:
"Apply herbicides labeled for controlling weedy rice in water-seeded rice fields. Herbicide options may include products containing penoxsulam, propanil, or other active ingredients specifically targeting weedy rice. It's crucial to follow label instructions carefully and use herbicides at the appropriate timing and application rates to maximize effectiveness and minimize off-target effects."
As these examples show, ChatGPT can return inaccurate answers, and its output should not be treated as management recommendations.
- Author: Pamela S Kan-Rice
With technology advancements such as ChatGPT, Google Bard and other artificial intelligence (AI)-driven platforms, there's growing enthusiasm within our community to leverage these tools and integrate them into the university context. The following advisory provides guidance on how to use these tools safely, without putting institutional, personal or proprietary information at risk. Additional guidance may be forthcoming as circumstances evolve.
UC ANR recognizes the potential for AI technologies to perpetuate biases and inequalities if not implemented and monitored carefully. Therefore, all AI systems and algorithms used within the university must undergo thorough scrutiny for bias and fairness throughout their development, deployment and ongoing usage.
Implementing AI in our Workplace
If you are looking to implement a new AI tool or process, please review the following guidelines and the Responsible Use of Artificial Intelligence report, then contact Bethanie Brown at brbbrown@ucanr.edu and Jaki Hsieh Wojan at jhsiehw@ucanr.edu.
HR and IT will conduct a thorough assessment of the information security, employee and labor relations implications associated with the selected AI tools. They will provide you with support and recommendations regarding the appropriateness of these tools.
AI Training
Thorough training on the use of AI is required for all employees leveraging AI tools in the workplace. UC Berkeley has made its AI Essentials training publicly available, and it is highly recommended. Other training programs may also be available; please contact UC ANR Human Resources at humanresources@ucanr.edu for additional information.
Prohibited Use
- AI tools may not be utilized in situations where they impact an employee's personal information, health and safety or conditions of employment, unless otherwise specified by policy or law.
- Any use of ChatGPT should assume that no personal, confidential, proprietary or otherwise sensitive information may be entered into it. Information classified as Protection Level P2, P3, or P4 should not be used.
- Similarly, ChatGPT or other public AI tools should not be used to generate output that would be considered non-public. Examples include, but are not limited to, proprietary or unpublished research; legal analysis or advice; recruitment, personnel or disciplinary decision making; completion of academic work in a manner not allowed by the instructor; creation of non-public instructional materials; and grading.
- Please also note that OpenAI explicitly forbids the use of ChatGPT and their other products for certain categories of activity, including fraud and illegal activities. This list of items can be found in their usage policy document.
Precautions
- Ethics
It is imperative to prioritize the well-being and rights of employees by ensuring human oversight and accountability. The use of AI to make high-stakes decisions or penalize employees should be avoided. In addition to the consideration of the ethical implications, clear processes, procedures and standards and potentially union notifications must first be put in place prior to leveraging AI that impacts an employee's conditions of employment.
- Scams
Be wary of fake websites attempting to masquerade as popular AI apps, and verify that you are on the real ChatGPT website before entering any information.
- Errors and “Hallucinations”
When using generative AI tools like ChatGPT, Google Bard and similar technologies for business purposes, be vigilant about "hallucinations" — moments when the AI generates unverified or incorrect information. Always cross-check the tool's output for accuracy before incorporating it into university-related tasks. While generative AI is potent, it can occasionally produce false or misleading content. Ensure all facts and figures generated by these tools are independently verified through non-AI sources before use. In other words, don't simply copy and paste what is produced into your work.
- Bias
When using Large Language Models (LLMs) like ChatGPT, it's important to recognize that the datasets used to train the models may be incomplete or biased. Implicit and systemic biases can inadvertently be built into AI systems. Such biases run counter to UC ANR's institutional values of diversity, equity, and inclusion, so using outputs in a way that amplifies them is contrary to our shared institutional values.
- Illegal Content
Data sets used to train AI, and the resulting models, can also contain illegal content. It is important to be aware of what data sets contain and to avoid storing illegal content on UC systems, even inadvertently. Register data sets you are using with HR and IT, and be sure to notify both units immediately if you find or become aware of illegal content.
Potential Opportunities for Use
Publicly available information (Protection Level P1) can be used freely in ChatGPT. In all cases, use should be consistent with the UC ANR Principles of Community. Areas where the use of AI may be considered include:
- Promotional Materials, Image and Video Production
Automate the creation of promotional material. Edit and create images, as well as voiceover tracks for videos, to elevate your media production.
- Coding and Web Development
Draft code for common programming tasks, accelerating the development process.
- Job Descriptions and Postings
Use templates to suggest customized language for position overviews, key responsibilities and qualifications. Review the language to ensure it is free from unintended biases, as biased wording may discourage certain groups from applying, potentially impacting the diversity of the applicant pool.
- Training and Onboarding
Develop training materials and FAQs for new tools and automate responses to common questions during staff training sessions.
- Website and Communications Content
Edit text for clarity and grammar, suggest optimal layouts, headlines and meta descriptions, and draft content for course listings, prerequisites or institutional information.