Use of vision and sound to classify feller-buncher operational state
Productivity measurement in logging requires simultaneous recognition and classification of event occurrence and timing, together with the volume of stems being handled. In full-tree felling systems these measurements are difficult to automate because of the unfavorable working environment and the abundance of confounding extraneous events. This paper proposes a method that uses a low-cost camera to recognize feller-buncher operational events, including tree cutting and piling. A fine K-nearest neighbors (fKNN) algorithm served as the final classifier, taking as input both audio and video features derived from short video segments. The classifier's calibration accuracy exceeded 94%. The trained model was tested on videos recorded under various conditions, where the overall accuracy for short segments exceeded 89%. Event detection rates, event durations, and inter-event timing derived by the algorithm were compared with those obtained by human observation using continuously recorded videos of feller-buncher operation; the fKNN model and manual observation yielded similar results. Kolmogorov–Smirnov tests comparing the distributions of manual versus automated event durations and inter-event times showed no significant differences, with the lowest P-value among all tests equal to 0.12. These results indicate the feasibility and potential of the method for automated time studies of drive-to-tree feller bunchers.
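The statistical comparison described above can be illustrated with a minimal sketch of the two-sample Kolmogorov–Smirnov test. The duration values below are hypothetical, made up purely for illustration; the actual study compared event durations and inter-event times extracted from recorded feller-buncher videos.

```python
# Minimal two-sample Kolmogorov-Smirnov statistic, computed from scratch.
# All data here are hypothetical illustrations, not values from the study.
from bisect import bisect_right
from math import sqrt

def ks_2sample(a, b):
    """Return the KS statistic D = max |ECDF_a(x) - ECDF_b(x)|."""
    sa, sb = sorted(a), sorted(b)
    d = 0.0
    # The ECDF difference can only attain its maximum at sample points.
    for x in sa + sb:
        fa = bisect_right(sa, x) / len(sa)
        fb = bisect_right(sb, x) / len(sb)
        d = max(d, abs(fa - fb))
    return d

# Hypothetical cut-event durations (seconds): manual vs. automated timing.
manual = [4.2, 5.1, 3.8, 6.0, 4.7, 5.5, 4.9, 5.2]
auto = [4.0, 5.3, 3.9, 5.8, 4.6, 5.7, 5.0, 5.1]

d = ks_2sample(manual, auto)
# Asymptotic critical value at alpha = 0.05 uses c(alpha) = 1.358.
crit = 1.358 * sqrt((len(manual) + len(auto)) / (len(manual) * len(auto)))
print(f"D = {d:.3f}, critical value = {crit:.3f}")
# D below the critical value means the two duration distributions
# are not significantly different at the 5% level.
```

In practice a library routine such as `scipy.stats.ks_2samp` would also report a P-value directly; the hand-rolled version above only shows the mechanics of the statistic.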