Object navigation in open-world environments remains a critical challenge for robotic systems. Despite advancements
in large language models (LLMs) for task planning, open-vocabulary vision models for object detection, and versatile
legged robots capable of traversing complex terrains, existing approaches lack a unified navigation framework to execute
composite long-range missions. We propose LOVON, a novel system that integrates LLMs for hierarchical task planning
with open-vocabulary visual detection and legged robot mobility. To address real-world challenges including visual jittering,
blind zones, and temporary target loss, we design dedicated solutions such as Laplacian Variance Filtering for visual stabilization.
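Laplacian variance is a standard sharpness metric: convolving a frame with a Laplacian kernel and taking the variance of the response yields a score that drops sharply for blurred or jittered frames. A minimal sketch of such a filter is shown below; the threshold value and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Sharpness score: variance of the 3x3 Laplacian response of a grayscale frame."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    # Valid-mode 2D convolution via shifted slices (no SciPy dependency).
    for i in range(3):
        for j in range(3):
            out += k[i, j] * gray[i:i + h - 2, j:j + w - 2]
    return float(out.var())

def keep_frame(gray: np.ndarray, threshold: float = 100.0) -> bool:
    # Illustrative filter: discard frames whose sharpness falls below the threshold,
    # so downstream detection only sees stable views. Threshold is an assumption.
    return laplacian_variance(gray) >= threshold
```

A sharp, high-contrast frame (e.g. a checkerboard) scores far above a flat, featureless one, which is the signal the filter exploits to reject jittered frames.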
Extensive evaluations on Go2, B2, and H1-2 legged platforms demonstrate the successful completion of long-sequence tasks involving
real-time detection, search, and navigation toward open-vocabulary dynamic targets. To the best of our knowledge, this work presents
the first operational system achieving such capabilities in unstructured environments.