- Author: Andrew J Lyons
ESRI has developed a wide range of products and services that can be brought together to support many different decision support contexts. Some of these are "no code" tools for developing web apps relatively easily. A good example is ArcGIS Dashboards, which lets you bring together, with relative ease, a range of data sources with a modest number of transformations, layouts, and widgets. These are good options for relatively simple use-cases, and if your data are already in the ArcGIS ecosystem.
However, I get more excited about the building blocks of these systems, which, to its credit, ESRI also makes available. You can think of these as Lego bricks that can be combined in different configurations to support specific decision contexts. They include tools for data importing, pre-processing, analysis, and hosting, as well as delivery through a custom web interface. Below are a few highlights from my week at the ESRI conference, organized around the main components of a decision support system.
Data Pipelines. ArcGIS Data Pipelines is a fairly new tool (still in beta) for ArcGIS Online that allows you to create a data processing pipeline that imports vector or tabular data, applies some transformations, and (re)publishes it as a Feature Service. A typical use case would be ingesting data from a non-ESRI platform (think open-source data portals commonly used by municipalities), cleaning it up, and republishing it on AGOL (or Enterprise). Previously you would have to do this manually or write Python scripts. Being able to import and clean up data through a flowchart-style interface is pretty cool functionality, and particularly useful for decision support systems, which often require bringing together disparate data sources under one umbrella. ArcGIS Data Pipelines lives entirely on AGOL, which is convenient but presents some limitations on what you can do. Right now all the import "jobs" have to be executed manually from the flowchart-style editor, but scheduling will be supported when it comes out of beta this fall. Pro Tip: although ESRI isn't charging an extra license fee to use Data Pipelines, credits are charged according to how much time you spend in the editor, not the actual amount of data processed. So be sure to close the editor when you're done with it!
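For comparison, here is the kind of manual cleanup script that Data Pipelines replaces. This is just a sketch using Python's standard library with a hypothetical CSV feed; the data, field names, and cleanup rules are all made up, and the final publish-to-AGOL step (which Data Pipelines handles for you) is only noted in a comment.

```python
import csv
import io

# A hypothetical CSV feed from a municipal open-data portal:
# inconsistent capitalization, blank coordinates, stray whitespace.
raw = """species,lat,lon
 Coast Live Oak ,34.05,-118.24
VALLEY OAK,34.10,-118.30
sycamore,,
"""

def clean_rows(text):
    """Drop rows without coordinates; normalize the species name."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        if not row["lat"] or not row["lon"]:
            continue  # can't map a record with no location
        rows.append({
            "species": row["species"].strip().title(),
            "lat": float(row["lat"]),
            "lon": float(row["lon"]),
        })
    return rows

cleaned = clean_rows(raw)
# The final step -- publishing `cleaned` as a hosted Feature Service --
# is the part Data Pipelines (or a script) would take care of.
```

In Data Pipelines, each of these steps (filter, calculate field, output) becomes a node in the flowchart editor instead of a line of code.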
Data Processing with Python. In terms of pre-processing and analyzing data, ESRI has a strong set of options, including both "point-and-click" solutions and scripting tools. The scripting languages provide the most flexibility, and among these Python is far-and-away the most developed in the ESRI universe. ESRI has two primary Python packages: the venerable "arcpy" and the "ArcGIS API for Python". The former is recommended if you're working locally (i.e., on your laptop), while the latter is recommended if you're working in a cloud environment. Both have hundreds of functions for all kinds of data manipulations, as well as other functions for administering ArcGIS resources and interfacing with their various products. I'm a relative newbie to working with spatial data in Python, so I attended a number of Python sessions that covered both the big picture and specific workflows. I look forward to working with these more and using them in projects.
Constructing Composite Indices. One of my favorite sessions this year was on constructing composite indices. Indices are commonly used in decision support systems to simplify decisions involving lots of different types of data - think about a system designed to allocate resources based on a multi-dimensional construct like "vulnerability", "conservation value", or "acquisition value". These are hard problems to wrap your head around, because they inherently involve multiple criteria which are often apples and oranges. In IGIS we bump into this need all the time. ArcGIS Pro now has a "Create Composite Index" tool. But what I really appreciated about the workshop was the robust discussion of best practices, and how construction of a 'good' index is by no means purely a technical problem. Bias is introduced at every turn whether you like it or not, including in how you normalize the individual layers and how you combine layers into a single composite index. ESRI has some smart people thinking about this, and I would recommend that everyone who creates or uses composite indices read their recent whitepaper Creating Composite Indices Using ArcGIS: Best Practices.
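To make the bias point concrete, here is a minimal sketch of the two steps where those choices creep in: normalizing each layer, then combining them with weights. This is plain Python with entirely hypothetical indicators and weights, not the actual method behind the "Create Composite Index" tool; swapping the normalization scheme or nudging the weights can reorder which locations score highest.

```python
def min_max_normalize(values):
    """Rescale a list of numbers to the 0-1 range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def composite_index(layers, weights):
    """Weighted sum of normalized layers; one 0-1 score per location."""
    normalized = {name: min_max_normalize(vals) for name, vals in layers.items()}
    n = len(next(iter(layers.values())))
    return [
        sum(weights[name] * normalized[name][i] for name in layers)
        for i in range(n)
    ]

# Three hypothetical indicators measured at five locations
layers = {
    "fire_hazard":  [2, 9, 5, 7, 1],
    "poverty_rate": [0.10, 0.35, 0.20, 0.05, 0.40],
    "pct_over_65":  [12, 30, 18, 22, 8],
}
# The weights themselves are a value judgment, not a technical fact
weights = {"fire_hazard": 0.5, "poverty_rate": 0.3, "pct_over_65": 0.2}

scores = composite_index(layers, weights)
```

The weights here encode a subjective priority (fire hazard counts for half the score), which is exactly the kind of decision the whitepaper argues should be made deliberately and documented.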
Building Front Ends. Many people think about web-based decision support systems in terms of the 'front end', or web app. Although the magic of decision support is almost always grounded in the data or model, a well-designed web app is effectively the gatekeeper for sharing the value of research with users. ESRI has developed a number of "preset" web app tools that combine a modest number of layouts and capabilities. These include everything from simple Dashboards to the more flexible "Instant" and "Experience Builder" web apps. These are all "no-code" solutions, which work well enough for many use-cases. Where more custom functionality is needed, ESRI continues to develop and document the underlying components. Of note this year is a reorganization of the JavaScript and Python libraries, which is not a game changer in itself but makes it a little easier to find what you need to construct a dataset or web app. If you like the look of ESRI's web apps, ESRI has also made many of the web app 'controls' (like a button or color selector) in its Calcite Design System available to web developers. But the most exciting announcement this year was that many of the underlying components of ESRI's 'out-of-the-box' web apps (like a map viewer or legend) will soon be released as open source web components. This will dramatically lower the bar for creating custom web apps from scratch, and enable people like myself who don't have the time or patience to fine-tune web GIS JavaScript commands for a standard UI/UX.
All-in-all, attending the ESRI User Conference was an enriching experience. In addition to learning a lot of technical info, I met the ESRI developers who build these tools, other users like myself who are trying to figure out how to put the pieces together, and vendors who add value to ESRI's ecosystem. In IGIS, we like to say a 5 minute consultation can save you 5 hours of frustration. With the info and contacts I picked up at the User Conference, I feel like I made a big step forward that will save me a lot of time and effort applying these tools to upcoming projects.
- Author: Sean Hogan
The thing that I like most about the ESRI User Conference is that you not only get to see all of the updates that the company has recently developed, but also glimpses of what is coming soon. In particular, I am excited about the advancements they have made with respect to web mapping applications.
Are you an ArcGIS Online user who has been wondering when to transition from the traditional Map Viewer to the new Map Viewer? For me, the time is now. Per ESRI, only four functionalities remain to be added to the new Map Viewer before it has everything the traditional Map Viewer (which will be discontinued in late 2025) offers, including the abilities to:
- Calculate fields (this would be nice to have)
- Add additional relationships to related records (something I have never needed to do previously)
- Edit vector tile styles (not something I have ever needed to do, but which I could see being useful for some people)
- Save/duplicate layers (there is a relatively simple workaround for this, but it would be nice to have it built into the Map Viewer)
That said, the new Map Viewer has far more than four functionalities that the old version lacks, including:
- Easier browsing of data
- Feature editing enhancements
- Analysis enhancements, both for vectors and rasters
- Shortcuts
- Improved visibility, filtering, and effects (on the fly)
- Toggleable layers
- Charts (donut and pie)
- Label enhancements (including improved bookmarks and placements)
- Blending (in a group layer); including blending layers with basemaps and multiply effects
- Added display expressions
- Multidimensional imagery support, including an imagery slider
- The ability to upload feature symbols (svg)
- Sketch layers (as opposed to Classic's notes) with snapping, and also with the ability to upload custom symbols
- The ability to add and manually georeference media layers (jpg or png) using control points (on which media blending and effects can also be applied)
Besides all of these additions, the interface has been thoughtfully revamped with usability and efficiency in mind. Some of the above enhancements will allow you to complete certain workflows several times faster than before. I must admit that I am very pleased!
- Author: Shane T Feirer
The first day of the ESRI User Conference was a great day of new technologies and inspiring stories of conservation and GIS. There were approximately 18,000 attendees in San Diego for the conference, with an additional 10,000 attending virtually.
Jack Dangermond and ESRI highlighted their new offerings, like ArcGIS Reality, which allows users to generate digital twins at scales ranging from buildings to cities to entire countries. ESRI has also released new story map templates (the brief and the e-magazine) along with theme and text enhancements. They have also built new enhancements into ArcGIS Dashboards, including better mobile support and additional support for Arcade.
There has been considerable investment in enhancing spatial analysis and data science, including:
- 60+ New Tools
- GeoAI Models (Deep Learning)
- Analysis in the Map Viewer
- Time Series Forecasting
- Predictive Modeling
- Multivariate Indexing
- Pie and Donut Charts
- Big Data Tools
I am looking forward to learning more about the enhancements in the coming days.
Beyond ArcGIS Online and its apps, ESRI highlighted its work on integrating the ArcGIS mapping system into Office 365 applications. Users of Office 365 can now easily map data in spreadsheets, Teams, and SharePoint using these new tools. This could bring more mapping technology to UCANR Academics and Staff.
- Author: Andrew J Lyons
- Contributor: Tapan B Pathak
- Interested in hearing about the latest projected climate data just released from California's Fifth Climate Change Assessment?
- Have a burning question about how climate change will impact California's agricultural systems?
- Want to hear how researchers, industry leaders, and state agencies are thinking about decision support for climate adaptation?
If you answered yes to any of these questions, join us at the California Adaptation Forum on July 31, 2023 in Pomona (one of the workshops will also be on Zoom) for a pair of pre-conference workshops that will address these and other topics.
The California Adaptation Forum, being held in person for the first time in 5 years, is the premier conference in the state that brings together community leaders, state and local agency representatives, researchers, and others to share and discuss how Californians are adapting to climate change today and preparing for the future. Pre-conference workshops will be held on Monday, July 31, including:
Using Climate Data & Tools for Decision-Making with Cal-Adapt
July 31, 1:00 – 2:45 pm. In-person
Now more than 10 years old, Cal-Adapt is California's primary source for downscaled climate data. This workshop will review the data and tools available through Cal-Adapt.org, and describe the latest generation of downscaled climate data just released from the Fifth Climate Change Assessment. You'll also get to see the upcoming "Cal-Adapt: Analytics Engine" in action, talk to the researchers who built it, and play with the tools in interactive working sessions. Bring your questions, curiosity, and enthusiasm to dive into the data and explore how climate projections can help you and your community prepare for climate change.
Speakers:
- Nancy Thomas (Geospatial Innovation Facility, UC Berkeley)
- Grace Di Cecco (Eagle Rock Analytics)
- Nancy Freitas (Lawrence Berkeley National Lab and UC Berkeley)
- Justine Bui (Spatial Informatics Group)
Advancing decision support for climate adaptation in agriculture and natural resources
July 31, 2023. 3:00 – 5:00 pm. In-person and on Zoom
Information on climate change is one of our best resources for adaptation planning. We have a pretty good idea how weather and climate can affect agricultural and natural systems, and we have pretty good data on climate trends. Connecting the two is the realm of decision support tools. In this workshop, we'll hear from researchers asking questions about the impacts of climate change on tree crops, agricultural pests, urban trees, and water resources. In the second half of the workshop, participants will be invited to discuss their burning questions about climate adaptation, and talk with researchers who are trying to harness climate data to develop decision support tools for adaptation.
Speakers:
- Tapan Pathak (Professor of Extension, UC Merced)
- Andy Lyons (Program Coordinator, UC ANR)
- Bob Klein (California Pistachio Research Board)
- Janet Hartin (CE Advisor, UC ANR)
- Michael Wolff (Senior Environmental Scientist, CDFA)
- Romain Maendly (Climate Change Technical and Policy Advisor, DWR)
Zoom info: This workshop will be held in person and on Zoom. Zoom participants will be included in the entire workshop - presentations, Q&A with the presenters, and breakout discussions. Register here.
- Author: Andy Lyons
- Author: Sean Hogan
- Author: Maggi Kelly
Find out what our GIS experts say about ChatGPT's answers to GIS questions
Introduction
ChatGPT has taken the world by storm. Most of the media coverage has been around its ability to write essays, cover letters, and even poetry. But its language model is drawn from the web, which covers a lot more than prose. Its programming ability in particular has impressed - and even scared - professional software engineers and data scientists.
But domain experts who have tested ChatGPT also caution that using the world wide web as the corpus of knowledge for training is both a blessing and a vulnerability. As we all know, the web is a mixed bag of content. A lot of it is accurate and of good quality, but not everything. It takes humans years of experience to develop the wisdom to sift the wheat from the chaff - how is ChatGPT doing after just a few months out of the oven?
We decided to test it out with our domain knowledge - geospatial data, science, and technology. Like everyone, we're curious about the accuracy of ChatGPT's answers to common GIS questions that have a known answer, and how it deals with questions that don't have a single answer (like which GIS software is "best"). But we were also interested in what types of questions it can handle, and when it essentially gives a non-answer.
To kick the tires on ChatGPT's geospatial domain knowledge, we asked it a series of questions in a few categories that we have a lot of experience with: general GIS, GIS software (i.e., ArcGIS), and spatial programming with Python and R. Our experts then gave each response a score of 1-5 stars, plus a little commentary.
See below the questions we asked, or jump to the end for our conclusions. The results might surprise you!
General GIS Questions
1. What's the best spatial projection to use for a GIS analysis of a forest?
ChatGPT's answer:
The choice of spatial projection for a GIS analysis of a forest can depend on a variety of factors, such as the location of the forest, the purpose of the analysis, and the data being used. However, some common projections that are often used for forest analyses include:
- Universal Transverse Mercator (UTM): UTM projections are often used for smaller areas such as a single forest stand, as they provide accurate measurements for a particular zone, which are then combined to provide an accurate representation of the entire forest.
- Lambert Conformal Conic: Lambert Conformal Conic projections are often used for larger areas such as a forest region or a country. They are particularly useful for mapping features that are oriented north-south and are widely used in the United States.
- Albers Equal Area Conic: Albers Equal Area Conic projections are also commonly used for forest analysis, particularly for areas that are more east-west in orientation. They are often used for regional and national level forest assessments.
Ultimately, the best projection to use for a specific forest analysis will depend on the objectives of the analysis, the nature of the data being used, and the spatial characteristics of the area being analyzed. It's always a good idea to consult with a GIS professional or a forest analyst to determine the best projection for your specific needs.
IGIS experts score: 4.5 stars
Slightly more emphasis could have gone to scale, and the trade-offs of preserving area, direction, positional accuracy, and cartographic shape; however, this is a very good answer! SH
2. What is the most common format for spatial data?
ChatGPT's answer:
The most common format for spatial data is the Shapefile (.shp) format. It was developed by ESRI (Environmental Systems Research Institute) and has become the de facto standard for spatial data storage and exchange.
A shapefile is a collection of files that stores feature geometry, attributes, and spatial index information. The main files include the .shp file (which stores the feature geometry), the .dbf file (which stores the attribute data), and the .shx file (which stores the spatial index information).
Shapefiles are widely used in GIS (Geographic Information Systems) software for storing and analyzing spatial data, including points, lines, and polygons. They can represent a variety of geographic features, such as roads, rivers, land parcels, and administrative boundaries.
However, with the development of new technologies and data standards, other file formats are becoming more common in the GIS community, such as GeoJSON, KML, and GeoTIFF. The choice of file format will depend on the software and tools being used, the type and complexity of data being analyzed, and the needs of the analysis.
IGIS experts score: 4.5 stars
ChatGPT answers the question accurately, however I was struck by the oddity of including “GeoTIFF” as a competing or comparable file format to a Shapefile, given that a GeoTIFF is a raster format and a Shapefile is a vector format. This comparison is not technically wrong, but requires further explanation. SH
3. What is the best format for vector spatial data, and why?
ChatGPT's answer:
The best format for vector spatial data depends on the specific needs of the analysis and the software being used. However, some of the most common and widely used formats for vector spatial data include:
- Shapefile (.shp): Shapefile is a widely used format for vector data that is supported by many GIS software packages. It stores geometric and attribute data in separate files and can represent point, line, and polygon features.
- GeoJSON (.json): GeoJSON is a format for encoding a variety of geographic data structures using the JSON (JavaScript Object Notation) format. It is often used for web mapping applications and is supported by many web mapping libraries.
- KML (.kml): KML (Keyhole Markup Language) is an XML-based format for storing geographic data and is often used for displaying geographic data in Google Earth.
- GeoPackage (.gpkg): GeoPackage is an open, standards-based format for storing geospatial data that can store both vector and raster data. It is designed for use on mobile devices and is gaining popularity as a format for sharing geospatial data.
The choice of format will depend on factors such as the purpose of the analysis, the size and complexity of the data, and the software being used. It is always a good idea to consult with experts in the field, such as GIS professionals, to determine the best format for a given analysis.
IGIS experts score: 3.5 stars
ChatGPT does a good job of picking some good (and common) vector data formats; however, in my opinion it doesn't do a great job of explaining why its selected formats are the best, but rather just gives some basic examples of use-cases. SH
4. What is the best source of data to make a map of wildfire risk in California?
ChatGPT's answer:
There are several potential sources of data that could be used to create a map of wildfire risk in California, depending on the scale and scope of the analysis. Some potential sources of data are:
California Department of Forestry and Fire Protection (CAL FIRE): CAL FIRE provides a variety of data and resources related to wildfire risk, including maps of high fire hazard severity zones, fire hazard severity zone datasets, and other wildfire-related data.
National Interagency Fire Center (NIFC): NIFC provides a variety of data and resources related to wildfires across the United States, including maps of current wildfires, fire potential maps, and fire weather forecasts.
US Forest Service: The US Forest Service provides a variety of data related to wildfire risk and forest management, including data on forest structure, fuels, and vegetation cover.
United States Geological Survey (USGS): USGS provides various data related to wildfires, including historical wildfire perimeters, vegetation and fuel data, and topographic maps.
NASA: NASA provides satellite imagery and data related to wildfire activity and smoke plumes, which can be used to track the spread and impact of wildfires.
The choice of data source will depend on the specific needs of the analysis and the scale of the study. It's often a good idea to consult with experts in the field, such as fire ecologists or GIS professionals, to determine the best data sources for a given analysis.
IGIS experts score: 4.5 stars
This is a very good answer for primary sources of fire risk data; however, I would also have liked to see mention of secondary data sources, such as ESRI's Living Atlas. SH
5. Is it possible to classify historic black and white aerial photography to distinguish between grasses, shrubs, and trees?
ChatGPT's answer:
Yes, it is possible to classify historic black and white aerial photography to distinguish between grasses, shrubs, and trees. Aerial photography can provide valuable information about the vegetation cover and composition of an area, and it has been used in many applications, including vegetation mapping and monitoring, land use and land cover mapping, and natural resource management.
There are various methods for classifying aerial photography, such as visual interpretation, unsupervised and supervised classification, object-based image analysis, and machine learning algorithms. These methods rely on the spectral properties of the imagery and the spatial patterns of the features being mapped.
In order to distinguish between grasses, shrubs, and trees, it is important to consider the differences in their spectral reflectance properties. For example, trees typically have higher reflectance values in the near-infrared portion of the spectrum compared to grasses and shrubs, while grasses tend to have higher reflectance values in the visible portion of the spectrum. These spectral differences can be used to develop spectral indices and classification algorithms that can accurately distinguish between different types of vegetation.
However, it's worth noting that the accuracy of the classification will depend on various factors, such as the resolution of the imagery, the quality of the data, and the complexity of the vegetation cover. In addition, manual interpretation of aerial photography can be time-consuming and labor-intensive, and it may require specialized expertise in vegetation identification and classification. Therefore, it's often a good idea to consult with experts in the field, such as remote sensing or GIS professionals, to determine the best approach for a specific analysis.
IGIS experts score: 2.5 stars
This answer would have been good if it were not for the third paragraph, which doesn't really apply to black and white imagery in any practical sense. Near-infrared reflectance and black and white imagery are very much two different things. SH
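For what it's worth, what analysts actually exploit in panchromatic photos is texture rather than spectral reflectance: tree canopies produce a rough, high-contrast pattern of sunlit crowns and shadow, while grass reads as a smooth, even tone. A toy sketch of a local-variance texture measure, in pure Python with entirely hypothetical pixel values:

```python
# Hypothetical 5x5 grayscale image: left columns are smooth (grass-like
# tone), right columns are high-contrast speckle (canopy-like texture).
image = [
    [100, 101,  99,  40, 200],
    [100, 100, 101, 210,  35],
    [ 99, 101, 100,  45, 215],
    [101, 100,  99, 205,  50],
    [100,  99, 100,  38, 198],
]

def local_variance(img, r, c):
    """Variance of the 3x3 neighborhood around pixel (r, c)."""
    vals = [
        img[i][j]
        for i in range(max(0, r - 1), min(len(img), r + 2))
        for j in range(max(0, c - 1), min(len(img[0]), c + 2))
    ]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

# Classify two sample pixels by texture, not reflectance
smooth = local_variance(image, 2, 1)  # grass-like patch: low variance
rough = local_variance(image, 2, 3)   # canopy-like patch: high variance
```

On real imagery you would compute this in a moving window with a texture tool rather than by hand, but the principle is the same: rough canopy texture yields high local variance, smooth grass yields low.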
GIS Software Questions
Where do I go in ArcGIS Pro to split a big polygon into multiple polygons using a road network to define the regions?
ChatGPT's answer:
To split a big polygon into multiple polygons using a road network to define the regions in ArcGIS Pro, you can use the "Split Polygons" tool in the "Editing" tab. Here are the steps to follow:
- Open the "Editing" tab in the ArcGIS Pro ribbon.
- Click on the "Split" drop-down menu and select "Split Polygons."
- In the "Split Polygons" pane, select the polygon layer you want to split in the "Input Features" field.
- Select the road network layer you want to use as a splitting feature in the "Split Features" field.
- Choose the split method that you want to use. For example, you can split the polygon by the roads that intersect its boundary, or by the roads that fall within the polygon's interior.
- Adjust any additional options for the split method you chose.
- Click the "Run" button to execute the tool and create the split polygons.
Once the tool is complete, a new layer will be created with the split polygons. You can edit this layer further, if needed, to refine the individual polygon boundaries.
IGIS experts score: 2 stars
I attempted to use the above workflow to split a polygon, and the directions were both inaccurate and not very clear. In my opinion, a better way of tackling this geoprocessing step would have been to use the search tool in the geoprocessing toolbox, as this approach would be much less likely to change with updated versions of ArcGIS Pro. SH
Programming Questions
Write Python code to convert a multi-layer raster stack to Zarr.
ChatGPT's answer:
Here is some Python code that demonstrates how to convert a multi-layer raster stack to the Zarr format using the rasterio and xarray libraries:
import rasterio
import xarray as xr

# Open the raster stack
with rasterio.open('input.tif') as src:
    # Read the raster data into an xarray dataset
    data = xr.open_rasterio(src)

# Write the dataset to a Zarr store
data.to_zarr('output.zarr', mode='w')
The code above opens the raster stack using rasterio and reads it into an xarray dataset. Then it's using the xarray to_zarr function to write the dataset to a Zarr store.
You can also specify the storage backend to use, by default it uses the filesystem backend, but you can use other backends like 'S3', 'GCS' etc.
data.to_zarr('s3://my-bucket/output.zarr', mode='w', storage_options={'anon':True})
It's also possible to use the zarr library directly to write the data, but xarray provides a convenient way to handle the conversion and makes it easy to work with the Zarr data later on.
Please note that the above code is just an example and may require additional modifications to work with your specific dataset and system configuration. Additionally, the example above writes the data to a single zarr store, but you can also write it to multiple stores using the group parameter of the to_zarr method.
IGIS experts score: 5 stars
The provided code works and is easy to follow. I was impressed that it offered an alternative library and anticipated the next question - how to upload to cloud storage (because Zarr is a cloud optimized format). AL
Write an R script to import a polygon layer from a file geodatabase, and plot it using leaflet.
ChatGPT's answer:
Sure, here's an example R script that imports a polygon layer from a file geodatabase and plots it using leaflet:
# Load required libraries
library(sf)
library(leaflet)

# Set the working directory to where the file geodatabase is located
setwd("path/to/geodatabase")

# Read in the polygon layer from the geodatabase
data <- st_read(dsn = "geodatabase.gdb", layer = "polygon_layer")

# Create a leaflet map and add the polygon layer
m <- leaflet() %>%
  addTiles() %>%
  addPolygons(data = data, fillColor = "blue", fillOpacity = 0.6)

# Display the map
m
In this script, you'll need to replace "path/to/geodatabase" with the actual file path where your geodatabase is located, "geodatabase.gdb" with the name of your file geodatabase, and "polygon_layer" with the name of the layer containing the polygon data.
The st_read() function from the sf library is used to read in the polygon layer from the geodatabase. The leaflet() function is used to create a new leaflet map, and the addTiles() function adds the default tile layer. The addPolygons() function is used to add the polygon layer to the map, where the data argument is the data frame containing the polygon data, and the fillColor and fillOpacity arguments are used to set the polygon fill color and opacity. Finally, the m object is displayed to show the leaflet map.
IGIS experts score: 4 stars
The code is correct, and the explanations are helpful. Impressive that it knows how to combine functions from two R-packages that are often presented separately. It didn't however check the projection of the polygon data to verify the data are in WGS 84, which is required by leaflet. AL
Conclusions
Overall we found ChatGPT very impressive. We gave it a series of questions on geospatial concepts, data, desktop software, and programming, and in all cases it was able to interpret our questions sensibly. Although it missed some of the details, the responses were always relevant, and in many cases it was able to bridge different domains of knowledge (such as the scale of projections and the scale of forests).
Where it works well: ChatGPT is most impressive in domains like coding that are fairly well-defined and have highly structured ontologies and syntax. It's probably not too surprising that a computer program is most proficient in understanding computer languages. Programming is also probably one of the easier domains for it to train on: answers to common questions on the internet probably converge pretty quickly. We predict it will have the greatest utility for questions on getting started, for example which function to use, and well-defined use cases. More advanced topics, like debugging or designing the architecture of a program to address a specific use-case, are much more complex and will continue to require human expertise and experience synthesizing information.
Other fields it might work well in. Extrapolating this conclusion further, we could also predict AI could become very useful in fields like law, medicine and other highly technical areas where domain knowledge is well-developed, highly structured, and requires storage capacity well beyond what the human brain can handle. It will be interesting - and scary - to see how AI intrudes upon fields that have historically been seen as some of the most intellectually demanding and prestigious occupations out there.
AI in education: There are many active discussions about the use of AI in education, as there is no doubt that these tools are going to change education and training. There is considerable worry about how these tools might facilitate exam cheating in higher education, and some instructors have responded by trying to keep AI tools out of the classroom. However, that seems like trying to hold back the ocean with a sand castle. Many teachers are figuring out ways to embrace and incorporate tools like ChatGPT in their teaching. It is clear that these tools can help to accelerate the learning curve, but they also require strengthening skills in discernment, critical thinking, the value of diverse perspectives and experiences, and the ways in which knowledge serves as a form of power.
Learning how to talk to ChatGPT: We found that to get good answers you have to ask good questions. This is perhaps an extension of the well-known adage, 'garbage in, garbage out' (which also applies to ChatGPT's training data). As we learned from trial and error, if you don't ask a clearly expressed question, using standard terminology and with all the needed parameters, the results are more unpredictable, more general, and probably not helpful. Again this is not surprising given that you're talking to a computer that doesn't know your background and can't read your body language. Indeed, one of the skills we think will need to be taught to use ChatGPT effectively is articulating good questions. Perhaps not coincidentally, learning how to ask better questions is also one of the primary learning outcomes of many K-16 curricula.
Recognizing and reconciling multiple realities and knowledges. Our domain of interest - geospatial science, data, and tools - is fairly cut-and-dry, and the ‘one universal truth' paradigm isn't so far fetched. That being said, there are still shades of gray, old and new standards that often coexist together, scale dependencies, data modeling choices, philosophical differences, power dynamics, etc. To its credit, when there isn't a single answer or the decision has consequences, ChatGPT likes to invoke ‘It is always a good idea to consult with experts in the field…' That often feels like a cop-out, but it's probably the right thing to do.
In other domains, such as many social science fields and the humanities, multiple situated knowledges and interpretations are not an outlier but an intrinsic characteristic of the field. Whether this new generation of AI can recognize and articulate the value in different types of knowledge, time will tell.
What we'd like to see more of: transparency. Transparency is a tall ask for AI. The algorithms are extremely complex and fuse together training data in ways that are hard if not impossible to unpack (even if you know the algorithm!). Not knowing where information comes from is a huge paradigm shift from traditional scholarship, and not just a bit unsettling. In conventional research, transparency is baked in through practices like literature reviews and citations. This is how scholars build upon the work of each other, allowing the reader to trace the lineage of knowledge and evaluate for herself whether a new discovery or idea holds water. ChatGPT on the other hand draws from a massive treasure trove of online content, however the specifics of how sources are chosen and evaluated are complex and known only to an elite few software engineers. To say its sources are anonymous is incorrect - they're just hidden. You don't have to be a George Orwell to think of ways this could produce unintended and undesirable consequences in a big way.
Does ChatGPT level the playing field? Technologies like ChatGPT can level the playing field to some degree, allowing poorly resourced communities to catch up in terms of education and technology. Think of the positive impacts on universities in poor countries. However once transformative technologies get monetized, they tend to exacerbate existing inequalities. Hopefully the powerful companies that are developing these tools will listen to the smart people inside and outside the company and find ways to keep this technology in the public domain.
In conclusion, the journey to Cyborg is a slippery slope. Long before ChatGPT, people freaked out about the transformative impacts of search engines, cell phone apps, and even Wikipedia. Is ChatGPT going to be a game changer in the world of GIS? Only time will tell. We will definitely be using it for what it's already good at, and are not too worried about our jobs being taken over by robots - yet!