- Posted By: Rob York
- Written by: Rob York (view the full blog at www.foreststeward.com)
Article reviewed: Impacts of fire exclusion and recent managed fire on forest structure in old growth Sierra Nevada mixed-conifer forests
By B.M. Collins, R.G. Everett, and S.L. Stephens. 2011. Published in Ecosphere, Volume 2(4):art51. doi:10.1890/ES11-00026.1
The plot line: These researchers found some rare vintage 1911 data that was collected in what is now Yosemite National Park. With what they think is reasonable confidence, they were able to relocate the 1911 areas and do a century-long re-measurement. The times of measurement in this case are especially relevant: 1911 conditions reflect what the forest looked like under an unaltered fire regime (i.e. before Euro-Americans came along and screwed things up). Because this particular forest has no history of harvesting and only recent reintroductions of fire, the re-measurement assesses long-term change as a result of fire suppression and evaluates the effectiveness of recent fires in re-establishing a forest structure that is similar to 1911. As many others have also found, they found that fire suppression dramatically altered forest structure over time, leading to much higher tree densities, and a decline in ponderosa pine traded for an increase in white fir. Areas that burned recently with moderate severity fires were much more similar in structure to 1911 conditions than areas that did not burn or that burned with low severity fires. They conclude that current restoration treatments are likely not creating forest structures that are similar to pre-Euro-American times (they are still too dense), and that managers should consider the complex interaction of climate change with fire suppression when trying to create resilient forests.
Relevant quote: “While fires of lesser intensity likely will reduce surface fuels and understory trees which is important in reducing potential tree mortality from fire and possibly maintaining desired forest conditions once achieved initially, they may not be sufficient alone to achieve historical forest structure given the substantial tree establishment that occurred during the fire exclusion period.”
Relevance to landowners and stakeholders:
There are several ways to reconstruct what the forest looked like in the past (e.g. pictures, written accounts, backwards modeling, isotopes, pollen, etc.). All of the available methods have their problems with bias, but nothing beats raw data from measurements collected by actual people. This is assuming that the people took care in collecting the data objectively (i.e. they did not suffer from “majestic tree” bias by selecting areas to measure just because they had huge trees). The degree to which old data are useful is really a matter of how much you trust the original folks who did the measurements. In this case, the original folks did not have incentive to heavily bias their measurement, and it appears that they measured areas systematically, which should reduce bias. So as far as old data go, the data used in this study is pretty good. We are still not sure that they were totally unbiased because they were not doing research at the time, and therefore did not have the same level of quality assurance or precision checks that modern researchers do. While I am suspicious that these 1911 surveyors took as much care in measurements as I would expect from modern researchers, I nonetheless consider it likely that this old data is much better than the alternatives for reconstructing forest structure as of a century ago.
While reconstruction studies in the Sierra Nevada are all inexact, the overwhelming agreement among them overcomes their individual imprecision. This study agrees with what is now an abundance of work that has documented at least these very basic changes regarding the effects of fire suppression:
- Forests where only fire suppression has occurred are much denser than they were in the past (using any measure of density: canopy cover, # of trees per area, or basal area).
- There are lots more trees in the smaller and moderate size classes than there were in the past.
- White fir has increased in tree density, primarily because of increases in small and medium-sized trees.
- Ponderosa pine has decreased in relative density, primarily because of a lack of small and medium-sized trees.
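Since basal area comes up repeatedly in these reviews as a measure of forest density, a quick illustration of how it is computed may help. This is a generic forestry calculation, not anything from the study itself, and the tree diameters and plot size below are invented:

```python
import math

def basal_area_per_hectare(dbh_cm, plot_area_ha):
    """Sum the cross-sectional stem area (m^2) at breast height for a
    list of tree diameters (cm), scaled to a per-hectare basis."""
    # d / 200 converts a diameter in cm to a radius in m
    total_m2 = sum(math.pi * (d / 200.0) ** 2 for d in dbh_cm)
    return total_m2 / plot_area_ha

# Hypothetical 0.1 ha plot with five measured trees:
print(basal_area_per_hectare([50, 35, 22, 18, 60], 0.1))  # m^2 per hectare
```

This is why basal area and trees-per-area can tell different stories: a few large stems dominate basal area, while counts are dominated by the many small trees.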
Relevance to managers:
Managers will appreciate that the authors of this paper are very direct in providing management implications, yet they are also nuanced in describing how this research may inform decisions. They provide three implications:
- Treatments that attempt to recreate the forest structure of the past should avoid using average values for hard targets, and should instead consider recreating ranges of conditions or measures of variability.
- The structural restoration targets that are currently being used are likely too conservative. If the objective of treatments is to recreate past structure, post-treatment densities are too high.
- As managers move beyond the oversimplified approach of recreating past structure and instead incorporate the objective of building resilient forests, they will consider the effects of climate change along with the effects of fire suppression.
Areas that burned recently with moderate severity fires were closer to 1911 structure than areas that either had no fire or burned with low severity fire. Despite a lack of statistical significance, you can still see that even moderate severity fires did not reduce density to 1911 levels. With higher sampling effort, they probably could have detected a difference (from the graph, it looks like areas burned with moderate severity fires had about 125 trees per hectare while 1911 density was about 80 trees per hectare). So for fire managers, this would indicate that fires following long periods of fire exclusion should be toward the moderate end of the severity classification if they are trying to recreate past structural conditions. Perhaps repeated low-severity fires will eventually lead to conditions more similar to the past, but we won’t know until we have a longer track record of conducting repeat burns.
This study further indicates to me that broadly-applied upper diameter limits (e.g. thou shalt not cut a tree greater than 24”) simply do not make sense. The authors point out that, while limiting tree removal to less than 12” diameter can make sense from the perspective of reducing fire severity, it can also mean that stand density in terms of basal area is much too high from either a structural restoration point of view or a forest resilience point of view.
Critique (I always have one, no matter how good the article is):
I was disappointed that they did not measure as many areas as they could have. They say they found 50 areas that were measured in 1911, but then only re-measured 30 of these areas because of “time and access constraints.” They make a big deal (appropriately) about how unique and important this data set is. If it is so important, why not spend the money and time to re-measure the whole thing? Further, they use the non-significant difference between the 1911 forest structure and the moderately burned modern forest as evidence that the moderate severity fire re-created the 1911 structure. If you look at the data, however, the areas burned with moderate severity fire still had higher density (over 50% higher as far as I can tell from the graph). This difference may have been detectable with a few more plots measured. Finally, not measuring some of the areas makes me wonder if the easier-to-access areas were measured first, thus introducing a possible bias.
I also wish they had reported basal area differences. Basal area is often a much more useful way of describing forest density, especially in the context of conducting restoration/resilience treatments.
All in all, however, it is a good study especially with respect to reconciling fuel reduction treatments (fire or thinning) with forest resilience treatments. And it is interesting to compare this reconstruction study with this one that I reviewed earlier.
- Posted By: Rob York
- Written by: Zev Balsen, PhD candidate in the Stephens lab at UC Berkeley
Article reviewed: Post-1935 changes in forest vegetation of Grand Canyon National Park, Arizona, USA: Part 1 – ponderosa pine forest
By Vankat, J.L. 2011. Published in Forest Ecology and Management, Vol. 262, pp. 309-325.
The plot line: In 1935 the National Park Service sampled hundreds of vegetation plots in Grand Canyon National Park (GCNP). In 2004, the author of this paper relocated and remeasured many of these plots. Ninety-nine of the plots were in ponderosa pine forest (PPF); these PPF plots were analyzed in this article. The author looked for changes in tree density and basal area between 1935 and 2004. Comparisons were made based on size class, species composition, and PPF subtype (dry, mesic, and moist).
Because dozens of comparisons were made, there was a battery of findings, with some species and forest types showing significant increases, some decreases, and others showing no change. Perhaps the most surprising finding was that when all PPF forest types were analyzed together, there was no change in tree density and basal area actually decreased. These results were accompanied by changes in species composition: The number of quaking aspen decreased by over 80% and the number of white fir increased by a shocking 631%.
When the three PPF subtypes were analyzed separately, a different set of changes was seen. Dry PPF, which is found at lower elevations and is transitional with pinyon-juniper vegetation, showed no statistically significant changes. Mesic plots, on the other hand, showed a large decrease in basal area, mostly due to loss of ponderosa pine in the medium size class. Despite this decrease in basal area, tree density did not decrease. This can be explained by a significant increase in the density of small trees, including a staggering 2351% increase in small white fir. Finally, moist PPF showed a large decrease in total tree density, mostly due to loss of quaking aspen. In these moist forests, quaking aspen went from being the most common tree in 1935 to being third most common after ponderosa pine and white fir in 2004.
In the second part of this article, the author attempted to place his findings in the context of other studies on historical forest structure in GCNP. Estimates of tree density and basal area at different dates were plotted and trend lines were fit to the graphs. The results in this section of the paper were highly variable. They suggested an increase since the 19th century in total density of saplings and trees. However, for just trees--excluding saplings--the graphs showed no clear trend.
Relevant quote: “Previous studies...in the Southwest indicated that historical dynamics...have involved increases in forest densities... However, this study -- the first to examine multi-decadal changes across a never-harvested Southwestern PPF landscape using resampled historical plots -- documents that changes...also have included decreases since 1935."
Relevance to landowners and stakeholders:
Historical forest information is crucial to our understanding of current forest conditions, management goals, and appropriate management techniques. However, previous studies in GCNP (and elsewhere) have relied on estimates from extrapolated data, reconstructions based on tree rings, and sampling of contemporary reference sites thought to represent historical conditions. These methods all have their own particular pitfalls that may introduce large uncertainties about historical conditions. For example, extrapolation of data depends on assumptions about rates of change. Thus it may be dangerously circular to use these studies to form conclusions about how forests have changed during the 20th century. Tree ring studies may have inaccuracies because of loss of evidence from older trees. Reference sites present problems because the current and recent impacts of climate, fire, and diseases can never exactly duplicate historical conditions. Because the paper discussed here relied on direct measurement of contemporary and historical plots, landowners may feel more confident about its findings.
The lack of increase in density in any of the PPF types or in overall PPF was surprising. One explanation is that forest densities had already increased by 1935. Another is that increases in small trees were offset by losses of larger trees. The larger trees may have suffered from competition with the smaller, vigorous trees. In either case, this paper highlights the insight that forest changes are not monotonic: even in the absence of fire or other disturbance, forest density may increase, decrease, or remain constant. Different forest types and subtypes may change at different rates and in different directions.
Relevance to managers:
This paper highlights some of the uncertainties about past PPF structure. Yet there are even greater uncertainties regarding the future. Climate, invasive species, fire, and societal pressure will have unpredictable, synergistic effects on forests. The author of this paper suggests that, in light of the uncertainty about both past and future, managers need to shift their focus away from emulating the "historical range of variation" (HRV) and instead think in terms of avoiding "thresholds of potential concern" (TPC). For example, future climate conditions may make it impossible to achieve the HRV in certain places. The TPC approach provides a much broader range of acceptable conditions.
Critique (I always have one, no matter how good the article is):
The first section of the paper describing the remeasured plots was well done and full of novel findings. The second section, in which the remeasurements were combined with other data on historical forest conditions, was weaker and detracted from the robust findings in the first section. The regressions in the second section were presented without any significance values. Presumably that is because each fit was based on only a few scattered points and they would not have been statistically significant. If that is the case, these graphs would have been better presented as eyeballed trends, without the R2 values and exact regression parameters.
Finally, the unique management history of GCNP suggests that this paper's findings might not translate well to areas outside of the park. Since the study site has never been harvested and has experienced some prescribed fire since the 1980s, its developmental trajectory might be quite different from forests in which timber has been cut and all fire has been suppressed.
- Posted By: Rob York
- Written by: Reproduced from the website, www.foreststeward.com
Article reviewed: Interacting disturbances: wildfire severity affected by stage of forest disease invasion
By M.R. Metz, K.M. Frangioso, R.K. Meentemeyer, and D.M. Rizzo, published in the journal Ecological Applications, vol. 21: 313-320
The plot line: These researchers evaluated the influence of Sudden Oak Death (SOD) on wildfire severity. They were able to do so because they had measured forests that were infected to various degrees (ranging from no infection to very advanced infection) by SOD prior to a wildfire (the Basin Complex fire) occurring. They found that, where SOD had recently infected forests and caused lots of standing dead trees, fire severity was greater but SOD infection was not the primary determinant of fire severity. Burn severity was very patchy and influenced by many other factors besides whether or not the area had been infested with SOD. In areas where SOD infection was advanced (i.e. several years since first infections), there was greater burn severity at the forest floor but again SOD infestation was not a major determinant of fire severity. They suggest that management efforts may be more effective if targeted in areas where SOD is still in the initial stages of infestation (i.e. where there are lots of standing dead trees with dead leaves and branches).
Relevant quote: “Our results indicate that the timing of fire relative to disease progression is an important predictor of burn severity in infested areas because differences among fuel types were more important indicators of damage than pathogen presence alone.”
Relevance to landowners and stakeholders:
Wildfire risk is often grossly over-simplified. During this time of year, for example, we begin to hear reports on the news that this year’s fire danger will be “especially high.” If it is a wet spring, they say that fire danger will be “especially high” because of all of the growth of fuels (vegetation) that is occurring. If it is a dry spring, they say that fire danger will be “especially high” because of the dry fuel conditions. And if it is an average year, they usually wait until we have a hot and dry week and then claim that it is actually a very dry year and, you guessed it, fire danger will be “especially high.” The reality is that fire behavior is a result of a huge complexity of factors that include how much fuel is available to burn, the structure of the fuel, the local topography, and the weather conditions at the time of the fire.
Historically, fires in central coast forests of California have not occurred as frequently as they have up in the Sierra Nevada. But as a landowner, I am actually more comfortable in the Sierra Nevada than in central coastal forests when it comes to altering the behavior of fire when it occurs. Fires in the coastal forests appear to be more weather-driven and less fuel-driven than those in the Sierra Nevada. And as a forester, I know that I can alter fuels on my land to manage fire behavior. Weather? Not so much. This research suggests that treating areas of recent SOD infestation might lower fire severity a little, but that other factors will accumulate to have more of an influence on fire behavior. For forestland owners in the central coastal forests of California, you should hope that insurance companies don’t read this article.
Relevance to managers:
While infestation occurs as a gradual process, there appear to be four logical stages of SOD infection:
1. Initial infection: trees lose vigor and slowly decline over about six years.
2. Crown mortality: over one or two more years, leaves and small branches die and slowly shed off of the dead trees.
3. Snag decomposition: snags either gradually crumble apart or fall over.
4. Log buildup: logs are on the ground and gradually decompose.
As discussed in a previous post, Sudden Oak Death is anything but sudden. As the authors of this article point out, SOD truly is a “chronic and progressive stress” rather than a sudden one.
It is during stage 2 above that the authors of this research seem to recommend managers focus their efforts to reduce fire risk. This could mean prioritizing fuel-reduction treatments to occur in areas that have high densities of standing dead trees with lots of dead biomass still in the crowns.
Fire severity may be just as high or higher in stage 3, but this study did not measure fine surface fuels so it is unknown. But it would make some intuitive sense to me that a buildup of litter and debris from SOD may increase fire severity. As the authors mention, this needs further study. See a related post on the interaction of bark beetles and fire severity in lodgepole pine forests.
Of course, the most effective management would be to try to stop SOD in the first place, but this is obviously difficult. I heard one of the authors of this article give a talk about management options with respect to lowering SOD infection. He mentioned two things that I recall:
1. Thinning + burning might be effective (presumably by increasing individual tree vigor and by reducing future fire severity)
2. A no-host buffer around critical areas (e.g. removing host species around a park core area, for example) would be very difficult because most hosts sprout.
Critique (I always have one, no matter how good the article is):
The primary limitation is the fact that only large logs were measured as surface fuel prior to the fires. Obviously if the researchers had known that a fire was going to happen, they would have been more comprehensive in measuring surface fuels. But just measuring “1000 hour fuels” leaves a lot of the surface fuel equation unaccounted for.
- Author: Rob York
Article reviewed: Subsurface carbon contents: Some case studies in forest soils
By D.W. Johnson, J.D. Murphy, B.M. Rau, and W.W. Miller, published in the journal Forest Science, Vol 57, 3-10
The plot line: This article was part of a special issue of Forest Science that was born from a conference on the importance of carbon in deep forest soils (it is not surprising that, yes, soil scientists think it is important!). It emphasizes the need for understanding the pattern of carbon content as one goes deeper into forest soils. The pattern matters because deep soils are rarely sampled; getting to them is physically difficult (it’s a lot of digging). If one knows the pattern well, one can sample shallow soils and, with confidence in the pattern, estimate how much carbon lies deeper. They found two basic shapes: linear (total carbon increases at a constant rate with depth) and asymptotic (total carbon increases at lower rates as you go deeper). The linear soils tended to have about 50% of their carbon below 20 cm (the common depth of sampling), while asymptotic soils tended to have about 35% below 20 cm. The conclusion seemed to be that it is reasonable to sample only part of the soil profile and then extrapolate to estimate lower depths, but the correct extrapolation equation (i.e. the mathematical representation of the pattern) has to be used.
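The extrapolation logic can be sketched with a toy calculation. The model parameters below are invented for illustration only; just the two general shapes (linear vs. asymptotic "Langmuir"-style) come from the article:

```python
def linear(depth, rate):
    """Cumulative carbon (arbitrary units) from the surface to `depth` (cm),
    assuming carbon accumulates at a constant rate with depth."""
    return rate * depth

def langmuir(depth, c_max, k):
    """Asymptotic pattern: accumulation slows with depth and approaches c_max."""
    return c_max * depth / (k + depth)

def fraction_below(model, sampled_depth, total_depth, **params):
    """Share of the total profile's carbon that lies below the sampled depth."""
    return 1 - model(sampled_depth, **params) / model(total_depth, **params)

# Linear profile 40 cm deep, sampled to the usual 20 cm: half the carbon is missed.
print(round(fraction_below(linear, 20, 40, rate=1.0), 2))            # 0.5
# Asymptotic profile to 100 cm (hypothetical k = 15 cm): roughly a third is missed.
print(round(fraction_below(langmuir, 20, 100, c_max=1.0, k=15), 2))  # 0.34
```

The point of the sketch is the paper's conclusion in miniature: the same 20 cm sample implies very different deep-carbon totals depending on which shape you assume.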
Relevant quote: “…there is a tendency to either ignore C and nutrient stores in deeper soil horizons, perhaps producing significant bias in soil C in global scale modeling efforts or to resort to modeling soil C contents of deeper soil horizons. ”
Relevance to landowners and stakeholders:
This article is largely for academics, so it is difficult to find direct relevance for landowners and stakeholders. Instead, I refer readers to the previous discussion of carbon accounting on September 4, 2009. The main point of that discussion, that we are a long way from being confident in estimating carbon in forests with great accuracy, still seems to be the case today. Studies like this, however, move us closer to the “gold standard” in carbon accounting that will someday be needed for true carbon markets to develop.
Relevance to managers:
I recently tried to bone up on the issue of carbon in forest soils because foresters are now required to address impacts of forest treatments upon carbon when conducting environmental impact reports. I liked this article because it focused on missing data, specifically data estimating the carbon in the deep soil profiles that are usually not sampled. For managers, the total amount of carbon in soils is often virtually unknown compared to aboveground carbon. It has always been part of a forester’s job to understand how much wood volume (i.e. carbon) is standing above ground in the forest, but it has only recently become part of our job to also understand how much carbon is below ground.
One item of relevance from this article seems to be that there are two major sources of variability when it comes to carbon in soils: one is the pattern at which carbon content changes as one gets deeper in soils; the other is the total amount of carbon that exists from location to location. The latter can be estimated, but only if the former is understood with relatively good certainty. When applying estimates of belowground carbon to total forest carbon budgets, it probably makes sense to check to see how deep soil carbon is estimated, if at all.
Another relevant point is that there is likely to be carbon in very deep horizons that is unaccounted for when belowground carbon is estimated. In this article, soil carbon was estimated down to between ½ and 1 meter. Where deeper soils exist, a significant amount of carbon goes unestimated if extrapolation stops at ½ or 1 meter. This is less of a problem for soils with asymptotic patterns of soil carbon (where the rate of carbon accumulation declines with depth). But even for these soils, a true asymptote was not reached within ½ to 1 meter depth, so some carbon would still remain unaccounted for.
Critique (I always have one, no matter how good the article is):
My only critique is that the selection of the two models used to represent the pattern of how carbon changes with soil depth was not justified in a statistical sense. They fit patterns to either a logarithmic or an asymptotic (a “Langmuir” equation) model, but don’t explain how one or the other model was selected. Often a correlation coefficient or a model selection criterion is used to pick the “best” model. It is only a minor critique, since both models are relatively parsimonious and have less need for a model selection approach.
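To show what a model selection criterion would look like here, a rough sketch using AIC computed from residual sums of squares. The depth/carbon numbers and candidate models are made up; the point is only the mechanics of picking between two fitted shapes:

```python
import math

def aic(observed, predicted, n_params):
    """Akaike information criterion from a least-squares fit
    (lower is better; the 2 * n_params term penalizes complexity)."""
    n = len(observed)
    rss = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    return n * math.log(rss / n) + 2 * n_params

# Invented depth (cm) / cumulative-carbon data that roughly follow an asymptote:
depths = [10, 20, 40, 60, 80, 100]
carbon = [29, 44, 62, 72, 77, 80]

langmuir_pred = [100 * d / (25 + d) for d in depths]  # candidate asymptotic model
linear_pred = [0.75 * d + 15 for d in depths]         # candidate linear model

# The asymptotic candidate wins (lower AIC) on this synthetic data set:
print(aic(carbon, langmuir_pred, 2), aic(carbon, linear_pred, 2))
```

With only two candidate models of similar complexity, as in the paper, the criterion mostly formalizes what a plot of the residuals would already show.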
- Author: Rob York
Article reviewed: Restoring forest raptors: Influence of human disturbance and forest condition on Northern Goshawks
By M. Morrison, R. Young, J. Romsos, and R. Golightly. Published in Restoration Ecology, Vol. 19 No. 2 pp. 273-279.
The plot line: Suspecting that human interactions with goshawks are causing their decline (as has been observed in Europe), the researchers measured human activity (hiking) and development (roads and houses) differences between areas that are occupied frequently versus infrequently to see if there was a difference. Human activity was higher in areas not used as much by goshawks, but the difference was not detectable with statistical tests, and two particularly popular hiking spots appeared to be the reason for the difference (although the authors don’t mention this). Road density did appear to be greater in areas that were not used as much by goshawks, but there were also differences in forest structure and elevation between frequently and infrequently occupied sites. Despite largely inconclusive results, management recommendations are made.
Relevant quote: “There is little reason… to restore structural conditions for the goshawk if human disturbance will negate any positive benefits.”
Relevance to landowners and stakeholders:
As I have observed when listening to landowners who manage their forest for wildlife, they often do so because they enjoy seeing wildlife (who doesn’t?). Seeing wildlife on land managed for wildlife is tremendously gratifying. There is a paradox, however, if the human love for wildlife is unrequited. Such may be the case with goshawks. Although this study did not find evidence to support that human interaction is correlated with goshawk frequency, it would not be surprising if there was such a relationship because of possible adverse physiological responses to human presence (especially if the humans are directly harassing them). So the primary relevance is… don’t harass goshawks when you are hiking!
Relevance to managers:
The relevance with respect to managing forests is limited because of the experimental design, low statistical power, and habitat differences between study sites. The study did suggest what seems to be an efficient design for monitoring goshawks. They found that an “occupancy index” (# of observations over time / # of surveys; frequency, in other words) was related to reproductive success. One could therefore measure occupancy index and assume that it is pretty well correlated with reproductive success. They also point out that it is important to survey multiple years because of wide year-to-year variability in breeding success.
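The occupancy index itself is simple to compute. A sketch with invented survey numbers (the multi-year averaging shown here is my extrapolation of the authors' advice to survey across years, not a method from the paper):

```python
def occupancy_index(detections, surveys):
    """Fraction of surveys on which the territory was occupied;
    the paper's proxy measure related to reproductive success."""
    if surveys == 0:
        raise ValueError("no surveys conducted")
    return detections / surveys

# Hypothetical territory surveyed 10 times, occupied on 7 visits:
print(occupancy_index(7, 10))  # 0.7

# Because breeding success varies widely year to year, a multi-year
# index is more informative than any single season:
yearly = [occupancy_index(7, 10), occupancy_index(2, 8), occupancy_index(5, 9)]
print(sum(yearly) / len(yearly))
```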
Critique and/or limitation (there's always something, no matter how good the article is):
In the abstract and many times in the results and discussion, they state that human interaction was much higher in areas that were infrequently occupied by goshawks. Technically, this was true. But they also relied on a statistical test to inform them on whether or not they would get the same result consistently if they were to do the experiment again (i.e. to tell them if the difference was “real”). The difference was far from what most people (even wildlife biologists, who tend to have higher p-value thresholds) would consider significant in a statistical sense (p=0.322). P-value thresholds (which they never defined) are subjective and in many cases useless, but if you use hypothesis testing, you can’t just dismiss non-significant results in some cases and accept them in others. The authors even make very broad management recommendations based on this non-significant result.
I wouldn’t critique this inference if the data actually did suggest that there was a real difference, but it doesn’t. Eight sites were used in the pool of frequently occupied territories and 13 in the pool of infrequently occupied territories. The reason they found a large difference in human activity (and the reason why the difference was not significant) was that two of the infrequently occupied sites were exceptionally popular hiking areas. This pushed the variance (and the p value) way up. We don’t know if these are true outliers or not because the sample size was small. But if you take these two sites out, there is virtually no difference in human activity between frequently and infrequently occupied territories. To make broad management recommendations from what appear to be inconclusive results seems far from appropriate.
Here are their data, with human interaction regressed against occupancy:
See those two points way up at the top? Those were exceptionally popular hiking areas with a lot of human interaction. When you take those out there is no relationship at all. Actually, when you leave them in there is still no relationship if your p-threshold (alpha) is 0.05 or even 0.10. And if you do accept it as a real relationship, the adjusted R2 value is a minuscule 0.07. Is that a close enough relationship to base a management recommendation on?
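The effect of those two hiking hotspots can be illustrated with made-up numbers. This is not the paper's data, just a demonstration of how two extreme values simultaneously inflate the mean difference and the variance (which is what drives the p-value up):

```python
from statistics import mean, stdev

# Invented human-activity scores: 8 frequently occupied territories and
# 13 infrequently occupied ones, two of which are hugely popular hiking areas.
frequent = [4, 5, 3, 6, 4, 5, 4, 5]
infrequent = [5, 4, 6, 3, 5, 4, 5, 6, 4, 5, 40, 45, 5]

print(mean(infrequent) - mean(frequent))  # a large raw difference in means...
print(stdev(infrequent))                  # ...driven by two values that blow up the spread

# Drop the two hiking hotspots and the difference essentially disappears:
trimmed = [x for x in infrequent if x < 20]
print(mean(trimmed) - mean(frequent))
```

With so few sites, two extreme values can manufacture a "large" average difference that no test will call significant, which is exactly the situation described above.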
The counter-argument could be that when you consider all of the results, in sum, they add up to being enough to make a management recommendation. But when considering all the results, it just makes me think that there is even less reason for making management recommendations. The infrequently and frequently occupied sites had differences in forest structure. How do we know these differences are not the reason for the differences in occupancy frequency? They were not big differences in forest structure, but neither were the differences in human activity.
I do grant that there was a solid difference in road density between frequently and infrequently occupied areas. But there is no “weight of evidence” from many different results that suggests human activity in general has caused a decline in goshawk occupancy.
In the methods, they say that occupancy is not used as a continuous variable because of a gap in the data, but then it is indeed used as a continuous variable to correlate occupancy with human activity, local road extent, and road+trail extent. How come the gap is not an issue with those variables? (you can kind of see the gap in the graph above between 0.5 and 0.8 on the x-axis).
They lead off the restoration implications with this: “Our results suggest that goshawk protection within the Basin has been insufficient and that some of the territories require actions to restore vegetation to pre-settlement conditions.” This is problematic: 1) They did not evaluate the sufficiency of protection measures. No overall measure of goshawk trend or status is made. The protection measures between occupied and unoccupied territories (their primary variable of interest) were the same. 2) They did not study how vegetation change since settlement has impacted goshawks. This is the first mention of pre-settlement conditions in the whole article.
Article reviewed: Restoring forest raptors: Influence of human disturbance and forest condition on Northern Goshawks
By M. Morrison, R. Young, J. Romsos, and R. Golightly. Published in Restoration Ecology, Vol. 19, No. 2, pp. 273-279.
The plot line: Suspecting that human interactions with goshawks are causing their decline (as has been observed in Europe), the researchers compared human activity (hiking) and development (roads and houses) between areas that goshawks occupy frequently versus infrequently. Human activity was higher in the areas goshawks used less, but the difference was not detectable with a statistical test, and two particularly popular hiking spots appeared to be the reason for the difference (although the authors don’t mention this). Road density did appear to be greater in areas used less by goshawks, but there were also differences in forest structure and elevation between frequently and infrequently occupied sites. Despite largely inconclusive results, management recommendations are made.
Relevant quote: “There is little reason… to restore structural conditions for the goshawk if human disturbance will negate any positive benefits.”
Relevance to landowners and stakeholders:
As I have observed when listening to landowners who manage their forest for wildlife, they often do so because they enjoy seeing wildlife (who doesn’t?). Seeing wildlife on land managed for wildlife is tremendously gratifying. There is a paradox, however, if the human love for wildlife is unrequited. Such may be the case with goshawks. Although this study did not find evidence that human interaction is correlated with goshawk occupancy, it would not be surprising if such a relationship existed, given possible adverse physiological responses to human presence (especially if the humans are directly harassing the birds). So the primary relevance is… don’t harass goshawks when you are hiking!
Relevance to managers:
The relevance with respect to managing forests is limited because of the experimental design, low statistical power, and habitat differences between study sites. The study did, however, suggest what seems to be an efficient design for monitoring goshawks. They found that an “occupancy index” (the number of surveys with a detection divided by the total number of surveys; frequency, in other words) was related to reproductive success. One could therefore measure the occupancy index and assume it is reasonably well correlated with reproductive success. They also point out that it is important to survey across multiple years because of wide year-to-year variability in breeding success.
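The occupancy index described above is simple arithmetic; a minimal sketch (the function name and the survey counts are hypothetical, not from the paper):

```python
# Sketch of an "occupancy index": detections divided by surveys,
# tracked per territory across years. Numbers are illustrative only.
def occupancy_index(detections, surveys):
    """Fraction of surveys on which the territory was found occupied."""
    if surveys == 0:
        raise ValueError("no surveys conducted")
    return detections / surveys

# A territory detected in 6 of 10 surveys has an index of 0.6:
idx = occupancy_index(6, 10)
```

Because breeding success varies widely from year to year, a single season's index says little; the multi-year averaging the authors recommend is what makes the index informative.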
Critique and/or limitation (there's always something, no matter how good the article is):
In the abstract, and many times in the results and discussion, they state that human interaction was much higher in areas that were infrequently occupied by goshawks. Technically, this was true. But they also relied on a statistical test to tell them whether they would get the same result consistently if they were to do the experiment again (i.e., to tell them if the difference was “real”). The difference was far from what most people (even wildlife biologists, who tend to have higher p-value thresholds) would consider significant in a statistical sense (p=0.322). P-value thresholds (which they never defined) are subjective and in many cases useless, but if you use hypothesis testing, you can’t dismiss non-significant results in some cases and accept them in others. The authors even make very broad management recommendations based on this non-significant result.
I wouldn’t critique this inference if the data actually did suggest that there was a real difference, but they don’t. Eight sites were used in the pool of frequently occupied territories and 13 in the pool of infrequently occupied territories. The reason they found a large difference in human activity (and the reason the difference was not significant) was that two of the infrequently occupied sites were exceptionally popular hiking areas. This pushed the variance (and the p-value) way up. We don’t know if these are true outliers or not because the sample size was small. But if you take these two sites out, there is virtually no difference in human activity between frequently and infrequently occupied territories. To make broad management recommendations from what appear to be inconclusive results seems far from appropriate.
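The outlier effect described above is easy to demonstrate with made-up numbers (these counts are hypothetical stand-ins, not the paper's data):

```python
import statistics

# Hypothetical hiker counts at 13 infrequently occupied territories;
# the two large values stand in for the popular hiking areas.
infrequent = [3, 5, 4, 2, 6, 4, 3, 5, 2, 4, 3, 48, 55]
# Hypothetical counts at the 8 frequently occupied territories.
frequent = [3, 4, 2, 5, 3, 4, 2, 5]

# With the two busy sites included, the group mean looks much higher,
# but the spread balloons, which is what inflates the p-value.
mean_with = statistics.mean(infrequent)
sd_with = statistics.stdev(infrequent)

# Drop the two busy sites and the groups are nearly identical.
mean_without = statistics.mean(infrequent[:-2])
sd_without = statistics.stdev(infrequent[:-2])
```

A large mean difference paired with a huge within-group variance is exactly the pattern that produces a p-value like 0.322: the apparent difference rests on two sites.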
Here are their data, with human interaction regressed against occupancy:
See those two points way up at the top? Those were exceptionally popular hiking areas with a lot of human interaction. When you take those out, there is no relationship at all. Actually, even when you leave them in, there is still no relationship if your p-threshold (alpha) is 0.05 or even 0.10. And if you do accept it as a real relationship, the adjusted R² value is a minuscule 0.07. Is that a close enough relationship to base a management recommendation on?
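To put that 0.07 in context, adjusted R² penalizes the raw R² for sample size and number of predictors. A minimal sketch (the unadjusted R² of 0.12 is a back-of-the-envelope illustration, not a value reported in the paper):

```python
# Adjusted R-squared for a regression with k predictors and n points.
def adjusted_r2(r2, n, k):
    """Penalize R-squared for model size relative to sample size."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# With 21 territories and one predictor, an illustrative raw R-squared
# of about 0.12 shrinks to roughly the 0.07 the authors report,
# meaning the regression explains about 7% of the variation in occupancy.
adj = adjusted_r2(0.12, 21, 1)
```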
The counter-argument could be that all of the results, taken in sum, add up to being enough to make a management recommendation. But considering all the results together gives me even less reason to make management recommendations. The infrequently and frequently occupied sites had differences in forest structure. How do we know these differences are not the reason for the differences in occupancy frequency? They were not big differences in forest structure, but neither were the differences in human activity.
I do grant that there was a solid difference in road density between frequently and infrequently occupied areas. But there is no “weight of evidence” from many different results that suggests human activity in general has caused a decline in goshawk occupancy.
In the methods, they say that occupancy is not used as a continuous variable because of a gap in the data, but then it is indeed used as a continuous variable to correlate occupancy with human activity, local road extent, and road-plus-trail extent. Why is the gap not an issue with those variables? (You can kind of see the gap in the graph above, between 0.5 and 0.8 on the x-axis.)
They lead off the restoration implications with this: “Our results suggest that goshawk protection within the Basin has been insufficient and that some of the territories require actions to restore vegetation to pre-settlement conditions.” This is problematic: 1) They did not evaluate the sufficiency of protection measures. No overall measure of goshawk trend or status is made, and the protection measures between occupied and unoccupied territories (their primary variable of interest) were the same. 2) They did not study how vegetation change since settlement has impacted goshawks. This is the first mention of pre-settlement conditions in the whole article.