Active search with unmanned aerial vehicle (UAV) swarms in cluttered and unpredictable environments poses a critical challenge in search and rescue missions, where the rapid localization of survivors is of paramount importance, as the majority of urban disaster victims are surface casualties. However, the altitude-dependent sensor performance of UAVs introduces a crucial trade-off between coverage and accuracy, significantly influencing the coordination and decision-making of UAV swarms. The optimal strategy must strike a balance between exploring larger areas at higher altitudes and exploiting regions of high target probability at lower altitudes. To address these challenges, collaborative altitude-adaptive reinforcement learning (CARL) is proposed, which incorporates an altitude-aware sensor model, a confidence-informed assessment module, and an altitude-adaptive planner based on the proximal policy optimization (PPO) algorithm.
CARL enables UAVs to dynamically adjust their sensing locations and make informed decisions. Furthermore, a tailored reward-shaping strategy is introduced, which maximizes search efficiency in extensive environments. Comprehensive simulations under diverse conditions demonstrate that CARL surpasses baseline methods, achieving a 12% improvement in full recovery rate, and showcase its potential for enhancing the effectiveness of UAV swarms in active search missions.
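The coverage-accuracy trade-off at the heart of the abstract can be illustrated with a minimal sketch. The cone-shaped footprint model, the exponential detection-probability decay, and all parameter values (`half_angle_deg`, `p0`, `decay`, `target_density`) are illustrative assumptions, not the paper's actual sensor model:

```python
import math

def sensor_footprint(altitude, half_angle_deg=30.0):
    """Ground coverage radius of a downward-facing sensor (hypothetical cone model)."""
    return altitude * math.tan(math.radians(half_angle_deg))

def detection_prob(altitude, p0=0.98, decay=0.02):
    """Per-target detection probability, assumed to decay exponentially with altitude."""
    return p0 * math.exp(-decay * altitude)

def expected_detections(altitude, target_density=0.001):
    """Expected detections per observation: footprint area times detection probability."""
    area = math.pi * sensor_footprint(altitude) ** 2
    return target_density * area * detection_prob(altitude)

# Coverage grows quadratically with altitude while accuracy decays exponentially,
# so expected detections peak at an intermediate altitude -- the quantity an
# altitude-adaptive planner would trade off against exploration needs.
best_altitude = max(range(1, 201), key=expected_detections)
```

Under these toy parameters the product h² · e^(−0.02h) peaks at h = 100 m, showing why a fixed-altitude policy is suboptimal and an adaptive planner can gain by switching between high-altitude exploration and low-altitude exploitation.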