- Author: Andy Lyons
Calling all water resource managers and researchers! Do you have spatial data of water stocks, water infrastructure, or water usage? Do you have a story to go with it? Then you have everything you need to submit a map idea for an exciting open-source atlas project.
Guerrilla Cartography, the non-profit cartography group that put together the stunning Food: An Atlas, recently announced another call-for-maps for their second big project: Water: An Atlas. If you haven't heard of this group, Guerrilla Cartography is an Oakland-based consortium of cartographers who believe strongly in the power of maps to tell stories, the art and science of cartography, and the power of collaboration between cartographers and researchers. For each atlas project, they pair volunteer cartographers with researchers to turn basic maps into beautiful works of art, then crowd-fund to print the atlases, which are also available for free as PDFs. See for example how they portrayed California's almond production in the 2013 Food Atlas.
The water theme is certainly topical, as drought, groundwater depletion, and sea level rise are major issues throughout the western US and around the world. Most of us notice water issues when there is either too much or too little water right in front of us. Maps are uniquely suited to convey the spatial and temporal scales of water, and the atlases produced by Guerrilla Cartography are as artistic as they are informative. I can't wait to see the stories revealed through maps when the atlas comes out next spring. The deadline to submit an idea for a map is September 12, 2016, so hop on it, water researchers!
- Author: Andy Lyons
I had a great time at last week's three-day Apps-for-Ag hackathon, which was hosted by ANR, the California State Fair, and the City of Sacramento. Although I've participated in an open science CodeFest before at NCEAS, this was my first time at a competitive hackathon where teams compete for cash prizes and start-up support. On the first day, we heard about some of the key challenges facing agriculture in California, including Asian Citrus Psyllid (ACP), and how technology may be able to play a role in addressing those challenges. Some participants also came with their own challenges, and spoke about those.
The organizers of the hackathon did an amazing job assembling a great pool of resource people, including Bobby Jones, the Chief Data Officer from USDA; two representatives from Amazon Web Services, who brought a couple of Internet-of-Things starter kits; Neil McRoberts, an expert on ACP; and several veterans of previous hackathons.
In the end, four teams formed, and in less than 48 hours developed their ideas into a product and presented their work to a panel of judges at the State Fair. All four teams chose to develop an app for a mobile device, although the hackathon guidelines would have allowed other outputs, including websites or desktop tools. It was interesting to watch the projects evolve from the broad sketches teams discussed on Friday to final products on Sunday. The presentations often included marketing plans, existing product research, monetization strategies, etc. I was struck by how online spatial databases and visualization were central to all of the apps developed. The judges questioned the teams about both technical and practical issues, including the economic viability of some of the business models, policy issues including privacy, and bandwidth requirements. A description of the award-winning apps can be found here.
Judges at the Apps for Ag Hackathon: Rob Trice, Glenda Humiston, Bobby Jones, and Tom Andriola
I came away from the hackathon with a few impressions. First, although the technology stack each team adopted varied, they all integrated spatial databases from the cloud with geolocation services on mobile devices. Half of the teams also used sensors connected to the Internet, and half sought to foster online communities. The diversity of applications developed demonstrates the power of this suite of tools to meet an enormous number of opportunities in agriculture and natural resource management.
Second, although hackathons have been known to produce wildly successful apps and even start-ups, I suspect the most significant and enduring outcome of many hackathons may not be the apps themselves, but the connections made, the knowledge shared, and the resources mobilized (i.e., people, data, & tools). Not only did everyone who participated meet new people, but the nature of the challenges required groups with diverse skillsets to work together. All of the challenges needed the skills of developers, designers, domain specialists, and business development experts. No one has all these skills, but everyone came away with a little more appreciation of the diversity of roles needed for a successful project, and a little more experience communicating across fields. We also all learned some new things about agriculture and technology, as well as the available resources out there, from datasets to hardware to cloud-based computational platforms.
Finally, the spirit of entrepreneurialism was very real, acting like a high-octane fuel that powers engines of creativity and rapid development. The prize money, the competitive spirit, the rules of the contest, and the alluring possibility of perhaps establishing a successful start-up were clearly incentives that gave teams the extra energy to work late into the night. However, it also occurred to me that the hackathon spirit may act as a filter on what can be achieved in a hackathon. Not all important problems can be digested in such a short period, and not all technology solutions to important problems are easily monetizable, which is one of the criteria hackathon teams may use to select projects. To tackle complex challenges like ACP, domain specialists could perhaps scope out the technology needs in greater detail before the hackathon, organizers could incentivize teams to work on those challenges, and conference calls could be pre-arranged with targeted end-users, who may be a small set of scientists or managers.
The Apps-for-Ag hackathon series is a great collaboration between the ag industry, technology companies, government agencies at all levels, and academic institutions like ANR. I can't wait to see what challenges will be tackled next time.
- Author: Andy Lyons
Bay Area R users were lucky to have the annual useR! conference in their back yard this week. Stanford hosted this year's conference, which is the largest R meeting in the country, featuring four days of workshops, keynotes, and presentations. The popularity of this event is evident not only from the more than 700 attendees, but also from the fact that these 700 were the lucky ones! Conference registration maxed out weeks before the registration deadline, and many others had to be turned away.
I attended a half-day workshop on Effective Shiny Programming by Joe Cheng, the CTO of RStudio and creator of Shiny. If you haven't already heard of Shiny, this amazing package allows R users to turn their R scripts into an application, complete with a GUI, that can be run either locally or online. As one of the attendees aptly remarked during the Q&A session, Shiny has been a 'game changer' for the R community. I use Shiny to provide a GUI for certain functions in my R package T-LoCoH, which analyzes animal movement data.
In his presentation, Cheng described conceptually how reactive programming works in Shiny. Shiny's default behavior is to automatically re-run scripts any time the user changes an input value through something like a slider. This works fine for simple tasks, but can slow down an app visibly and unnecessarily if data crunching takes more than a millisecond. Reactive programming lets you control when things are updated in a Shiny app.
Cheng explained how and when to use the two main types of objects that control reactivity. 'Reactive' objects (e.g., created with reactive(), reactiveValues(), eventReactive()) return values, are cached, and are lazy (only called when needed). 'Observers', on the other hand, are eager (they run when triggered, even if they're not needed, for example when an output is not visible), are never cached, and don't return values. Unless we need to precisely control when something is run, Cheng cautioned, we should use reactive objects, which give Shiny the power to decide when a value needs to be updated (and Shiny is generally better at doing this than programmers, especially as an app evolves). Observers are often used to respond to 'events' such as a button click. For more info on reactive programming, see 'How to customize reactions' in RStudio's excellent Shiny tutorial.
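To make the distinction concrete, here is a minimal sketch of a Shiny app (not from the workshop, just an illustration) that uses a lazy, cached reactive expression for a computed value and an eager observer for a side effect:

```r
library(shiny)

ui <- fluidPage(
  sliderInput("n", "Sample size", min = 10, max = 1000, value = 100),
  actionButton("go", "Save snapshot"),
  plotOutput("hist")
)

server <- function(input, output, session) {
  # Reactive expression: lazy and cached. The sample is only redrawn
  # when input$n changes AND something downstream (the plot) asks for it.
  samp <- reactive({
    rnorm(input$n)
  })

  output$hist <- renderPlot({
    hist(samp())  # re-runs only when samp() is invalidated
  })

  # Observer: eager and returns no value. It exists purely for its
  # side effect (writing a file) each time the button is clicked.
  observeEvent(input$go, {
    saveRDS(samp(), "snapshot.rds")
  })
}

shinyApp(ui, server)
```

If several outputs depended on samp(), the expensive sampling would still run only once per invalidation, which is exactly the caching behavior that makes reactive expressions preferable to observers for computed values.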
Cheng also gave a preview of some new features in the next release of Shiny, which should be out later this summer. One of the most exciting new features is the ability to record the 'state' of a Shiny app within the URL (so, for example, you can restore the position of sliders and other inputs). There will also be improved security for making database connections, and a new function, req(), which will make it a lot easier to check whether all the required inputs from the user have been entered.
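As a quick illustration of the kind of input-checking req() enables (a sketch of my own, not an example from the talk), an output can simply declare the inputs it requires, and Shiny will silently wait rather than throw an error while they are still empty:

```r
library(shiny)

ui <- fluidPage(
  fileInput("csv", "Upload a CSV file"),
  tableOutput("preview")
)

server <- function(input, output, session) {
  output$preview <- renderTable({
    # req() halts execution here until the user has actually
    # uploaded a file, instead of erroring on a NULL input.
    req(input$csv)
    head(read.csv(input$csv$datapath))
  })
}

shinyApp(ui, server)
```

Without the req() call, the renderTable expression would fail on startup because input$csv is NULL until the first upload.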
In a future post, I'll discuss the use of Shiny as a web GIS tool, how techniques like reactive programming offer capabilities to create powerful and flexible web applications, and when you might want to use a Shiny application instead of a more packaged web GIS tool like ArcGIS Online or CartoDB.
- Author: Shane Feirer
Day 3 of the ESRI User Conference: new tools, new story maps, and new ways to work with data.
New Tools: ESRI is supporting new tools for the Python and R programming languages. With Python, they have made it easy to use third-party libraries within ArcGIS by integrating conda into the upcoming release of ArcGIS Pro 1.3, and they have also made it possible to use Python to manage ArcGIS Online content with the new Python API. With R, ESRI has released the ArcGIS R Bridge, which allows Esri datasets to be used in R and makes it easy to bring the results of your R analyses back into ArcGIS.
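For R users curious what the R Bridge workflow looks like, here is a rough sketch using the arcgisbinding package (the geodatabase paths and field names below are hypothetical; it also requires a local ArcGIS install):

```r
library(arcgisbinding)
arc.check_product()  # binds the R session to the local ArcGIS license

# Open a feature class (hypothetical path) and pull selected fields
# into a regular R data frame for analysis.
fc  <- arc.open("C:/data/demo.gdb/parcels")
dat <- arc.select(fc,
                  fields = c("OBJECTID", "acres"),
                  where_clause = "acres > 10")

# ...run any R analysis on dat here...

# Write the results back to the geodatabase so they can be
# symbolized and analyzed further in ArcGIS.
arc.write("C:/data/demo.gdb/parcels_large", dat)
```

The round trip (arc.open/arc.select in, arc.write out) is what makes it practical to slot an R model into the middle of an otherwise ArcGIS-based workflow.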
New Story Maps: At last year's user conference, ESRI highlighted a new story map style called the cascade story map. I found out yesterday that they have developed an app builder for this new style of story map, and they have also released another style called the crowdsource story map. I also reached out to the story map developers today and learned that they are building a new template, which they are going to share with us. I can't wait to see how these story maps will be used by UCANR in the coming months and year.
New Ways to Work with Data: ESRI has developed new ways to work with data, including big data and multi-dimensional data. For multi-dimensional data, they highlighted new tools for working with netCDF data, but they also showed how they are using existing tools within ArcGIS: importing multi-dimensional data into raster mosaics and then applying the full suite of ArcGIS tools to those data structures. For big data, they have created a new suite of tools and capabilities that will allow us to perform big data analysis directly within ArcGIS. Multi-dimensional data can be used with ArcGIS now, and the big data analytics will be available in the coming months.
I look forward to seeing what the 4th day will bring.
- Author: Shane Feirer
Day 2 of the ESRI User Conference. For those who have not been, the user conference is an informative yet tiring event. Today I had the opportunity to get quick and helpful answers to some issues I have been struggling with in my programming within IGIS. With that technical support, I look forward to releasing an app we have been developing with researchers within UCANR.
Beyond getting some technical support questions answered, I have seen new tools that are going to be released in the coming months to analyze big data. I have seen new ways to visualize and display real-time data, such as streaming feeds from our UCANR Flux Network. I have also seen new data collection apps that we will be able to use to get citizens involved in collecting data to extend our research within UCANR. I look forward to seeing what new tools we might be able to use at UCANR in the coming months.