- Author: Ben Faber
Evaluating claims of new products that could potentially improve yield and tree health is a daunting task. Every week I get calls and literature from people promoting fertilizers and techniques that "resist insects," "reduce salt levels in the soil," "increase crop quality," "release the natural fertility of your soil," and numerous other claims. There just is not enough time in the day to evaluate each and every one of these materials or techniques, even though some may, in fact, be promising.
So what does a grower do? You hear about a new product. It only costs $20 an acre to apply. Might as well fly it on all 50 acres. But then, how do you know it has done anything? What results do you have to compare it with? Last year's yield which was miserable? We know how variable avocado yields are, so last year's harvest may not be a good comparison.
When we conduct field trials, we make sure a clear comparison is available to test the effects of the treatment. Field trials usually use small plots, repeated several times (at least three), and arranged in an apparently haphazard (random) fashion. The reason is threefold: 1) to account for variability in the field, 2) to prevent a systematic bias in favor of one treatment over another and 3) to see if differences in treatments are due to chance or to the superiority of the treatment.
How are observational trials different from replicated ones? The big difference is that they are not replicated. Each treatment occurs only once, so we have no measure of the natural variability in the field or trees. As a result, we risk attributing a difference to the treatment when it is actually due to field variability. Without replication there is no way to tell.
Let's examine this replication idea a little more closely. We had a frost trial where we applied copper or a water control spray to young trees in November. Copper is a noted bactericide and the idea was to control the frost-nucleating bacteria. Forty trees, randomly selected in the orchard, were sprayed with either a dilute copper spray applied according to instructions or water alone. We evaluated frost damage to the trees in January. The first counts showed 40% frost damage with the copper spray and 60% with the water alone. Great. Let's go out and spray the whole orchard next year with copper. However, successive counts showed 50% frost damage with the copper and only 30% from the water. In the end, there was no significant difference between trees that had been sprayed with either material.
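The frost counts above can be checked with a standard two-proportion z-test. The article does not give the split of the forty trees between treatments, so this sketch assumes an even 20/20 split; with that assumption, even the seemingly large first count (40% vs. 60% damage) is well within what field variability alone could produce.

```python
import math

def two_prop_z(d1, n1, d2, n2):
    """Two-proportion z statistic: is the difference in damage rates
    larger than natural variability alone would explain?"""
    p1, p2 = d1 / n1, d2 / n2
    p_pool = (d1 + d2) / (n1 + n2)  # pooled damage rate under "no difference"
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Assumed split: 20 copper-sprayed and 20 water-sprayed trees.
# First counts: 40% damage with copper (8/20) vs 60% with water (12/20).
z = two_prop_z(8, 20, 12, 20)
print(round(z, 2))  # about 1.26, well below the 1.96 needed at the 5% level
```

A z statistic of roughly 1.26 falls short of the 1.96 cutoff for significance at the 5% level, which matches the article's conclusion that the copper and water sprays did not differ.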
These results show the natural variability in biological systems and demonstrate the disadvantages of relying on results from a non-replicated trial based on a single year. This becomes even more important when interpreting information from a trial site different from your own. If every grower sprayed a non-replicated treatment at their own ranch, the risk of coming to the wrong conclusion about that treatment at each location would still be 50%. Just like flipping a coin. Is that worth spending money on?
As each of the variables (soil type, irrigation quality, management, etc.) increases, the risk of making a poor decision about a product or practice increases, as well. You can see that there are difficulties associated with relating information from a non-replicated trial based on a single year of data at a different location to your own situation.
How does someone go about evaluating a new practice or material at home without going through all the complications of a formal research trial? Mary Bianchi, farm advisor in San Luis Obispo, and I came up with a short checklist.
- Be conservative in your approach and critical in your observations. Resist the urge to spray the whole grove. Leave something untreated, so that a comparison can be made. Preferably run a side-by-side comparison.
- Use consistent farming practices across all areas of the trial.
- Compare the new practice to one which is a standard for your operation.
- Don't bias your results by implementing the new practice where it stands to have the best effect anyway. For example, don't spray boron on the trees that always give a good yield.
- Run the test more than one year and in more than one location, especially if the new practice is costly.
- Talk to the industry and use the experience of others in different locations as a check on your own experience. A good place to swap ideas is at the California Avocado Society/University of California Cooperative Extension sponsored bimonthly meetings.
- Author: Ben Faber
Growers are faced with an ever-changing list of commercial “tools”, each with the promise of providing some advantage to the farmer. Frequently, these are new fertilizer mixes presented as proprietary cocktails promoted and dispensed with promises of a multitude of profitable (yet improbable) benefits to the buyer. With the large number of new products available, and the number of salespeople promoting them, it is often difficult for growers to distinguish between products likely to provide real benefit, and those that may actually reduce the profitability of the farm.
Whenever a company approaches the University or a commodity research board with a new product or technology for sale to California growers, these institutions act as grower advocates. They are charged with sorting through the available information; asking the right questions; getting the necessary research done if the available information warrants this pursuit; disseminating accurate information on these new technologies and products; and doing whatever else can help maximize grower profits now and in the future. When approached with a new product or technology, it is obligatory to challenge claims with the following questions:
Is there some basic established and accepted scientific foundation on which the product claims are made?
Language that invokes proprietary ingredients or mysterious formulations, particularly in fertilizer mixes registered in the State of California, raises red flags. A wide range of completely unrelated product benefit claims (such as water savings, pesticide savings, increased earlier yield) raises more red flags. Product claims that fall well outside of any accepted scientific convention generally mean either that the product is truly a miracle, or that the claims range from borderline false to entirely fraudulent.
Has the product undergone thorough scientific testing in orchards?
Frequently, products are promoted based on testimonials of other growers. While testimonials may be given in good faith, they are most often not backed up by any real scientific testing where a good control was used to compare orchard returns with and without the product.
A “test” where a whole block was treated with a product and which has no reliable untreated control does not meet accepted standards for conducting agricultural experiments. Also, a treated orchard cannot reliably be compared to a neighboring untreated orchard; and a treated orchard cannot be compared to the same orchard that was untreated the previous crop year. Even a test with half a block of treated trees and half untreated is not considered dependable by any known scientific standard of testing.
Only a well designed, statistically replicated, multi-year trial allows for direct comparison of untreated versus treated trees with statistical confidence. Verifiable data from tests that meet acceptable standards of scientific design, along with access to raw baseline (before treatment) yield data from the same trees (preferably for the two years prior) should be used to determine the validity of test results provided.
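The kind of replicated comparison described above can be illustrated with a minimal sketch. The per-plot yields here are hypothetical, and the statistic is Welch's two-sample t (one common choice, not necessarily what any given trial would use); the point is that replication supplies the measure of variability against which the average difference is judged.

```python
import statistics as st

# Hypothetical per-plot yields (lb/tree) from a replicated trial:
# each treatment appears in several randomized plots, so we can
# measure natural variability as well as the average difference.
treated   = [92, 105, 98, 110, 101]
untreated = [90, 99, 95, 104, 97]

def welch_t(a, b):
    """Welch's two-sample t statistic: the difference in means
    scaled by the variability seen across replicated plots."""
    va, vb = st.variance(a), st.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (st.mean(a) - st.mean(b)) / se

t = welch_t(treated, untreated)
print(round(t, 2))
```

With these numbers the treated plots average about 4 lb/tree more, yet the t statistic comes out near 1.1, short of the roughly 2.3 needed for significance with so few plots. That is precisely why multi-year, well-replicated data (and baseline yields from the same trees) are required before believing a product works.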
Are the test results from a reliable source?
If the testing was not done by a neutral party, such as university scientists, a government agency, or a reputable contract research company using standard scientific protocols, this raises red flags. If the persons overseeing the tests have a financial interest in seeing positive results from the product, that raises red flags too.
Does the product have beneficial effects on several unrelated farm practices?
A product that increases production of trees, makes fruit bigger, reduces pests, reduces water use, and reduces fertilizer costs is more than a little suspicious. In reality, if such a product really existed, it would not need any testing at all: its benefits would be so obvious that it would spread rapidly by word of mouth and be embraced by the entire grower community.
Are standard, proven farm products disparaged in the sales pitch for the new product?
If a new product vendor claims that their product is taken up 15 times faster than the one growers are currently using, or is 30 times more efficient, it probably costs 15 to 30 times more per unit of active ingredient than the standard market price. Growers should always examine the chemical product label to see what active ingredient they are buying. There has to be a very good reason to pay more for an ingredient that has long been supplied to trees at a lower price without any problem.
There are impartial sources of such information available to farmers to help corroborate information provided by product vendors. Perhaps the most reliable and accessible impartial research and education resources for growers are their local Cooperative Extension Farm Advisors and commodity research boards.
When promising products emerge, local university Farm Advisors can advise growers on how to evaluate these products and may help design a small trial to test a particular product on a few trees under local orchard conditions. If in these pursuits a truly promising new product or technology emerges, research board funding may follow but only on the recommendation of that board's Research Committee.