Tools and Technologies for Quantifying Spread and Impacts of Invasive Species
The need for tools and technologies for understanding and quantifying invasive species has never been greater. Rates of infestation across the United States vary with the species or organism being examined, and notable examples abound. From 2001 to 2003 alone, ash (Fraxinus spp.) mortality progressed at a rate of 12.97 km year⁻¹ (Siegert et al. 2014), and cheatgrass (Bromus tectorum) is expected to increase its dominance on 14% of Great Basin rangelands (Boyte et al. 2016). The magnitude and scope of the problems that invasive species present suggest that novel approaches to detection and management are needed, especially approaches that enable more cost-effective solutions.

The advantages of using technologically advanced approaches and tools are numerous, and their use can significantly enhance the quality and quantity of available information. These tools can also play a key role in the development of decision-support systems. They are meant to be integrated with other systems, such as inventory and monitoring, because the tools are often applied after a species of interest has been detected and a threat has been identified.

In addition, the inventory systems mentioned in Chap. 10 are regularly used to calibrate and validate models and decision-support systems. For forested areas, Forest Inventory and Analysis (FIA) data are most commonly used (e.g., Václavík et al. 2015), given the long history of the program. In non-forested systems, national inventory datasets are more recent (see Chap. 10), but use of these data to calibrate and validate spatial models is growing. These datasets include the National Resources Inventory (NRI) (e.g., Duniway et al. 2012) and the Assessment, Inventory, and Monitoring (AIM) program (e.g., McCord et al. 2017). Similarly, use of the Nonindigenous Aquatic Species (NAS) database is growing as well (e.g., Evangelista et al. 2017).
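The calibrate-and-validate workflow described above can be sketched in miniature. The example below is illustrative only: the plot records, the single climate covariate, and the logistic model are all fabricated stand-ins for real inventory data (e.g., FIA or NRI plots) and for the more sophisticated spatial models actually used in practice. It shows only the general pattern of fitting a model on one subset of inventory plots (calibration) and checking its predictions on held-out plots (validation).

```python
import math
import random

random.seed(42)

# Hypothetical plot records in the style of an inventory dataset:
# (climate_covariate, presence), where presence is 1 if the invasive
# species was detected on the plot.  Values are fabricated.
records = [(x, 1 if x + random.gauss(0, 0.5) > 2.0 else 0)
           for x in [random.uniform(0, 4) for _ in range(200)]]

# Split plots into a calibration (training) set and an independent
# validation set, mirroring how inventory data are used with models.
split = int(0.7 * len(records))
calibration, validation = records[:split], records[split:]

# Calibrate a simple logistic model P(presence) = sigmoid(w*x + b)
# by gradient descent (a stand-in for a real spatial model).
w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    gw = gb = 0.0
    for x, y in calibration:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))
        gw += (p - y) * x
        gb += (p - y)
    w -= lr * gw / len(calibration)
    b -= lr * gb / len(calibration)

# Validate on the held-out plots.
correct = sum(1 for x, y in validation
              if ((1.0 / (1.0 + math.exp(-(w * x + b)))) > 0.5) == (y == 1))
accuracy = correct / len(validation)
print(f"validation accuracy: {accuracy:.2f}")
```

The essential point is the separation of data used to fit the model from data used to judge it; with standardized national inventories, the validation plots share the same protocols as the calibration plots, which is what makes the comparison meaningful.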
The consistent protocols employed by these programs are valuable for developing better tools, but the data they provide are of limited use for some tools because the sampling intensity is too low.