We are thrilled to announce our latest vision paper (preprint), “GIScience in the Era of Artificial Intelligence: A Research Agenda Towards Autonomous GIS,” led by Dr. Zhenlong Li and Ph.D. candidate Huan Ning from the Geoinformation and Big Data Research Lab at Penn State. This paper, a collaborative effort involving geospatial science and computer science experts across academia, national labs, and government agencies, presents a timely research agenda for the next-generation AI-powered GIS—one that is autonomous, intelligent, and more accessible.
Over the past two years, we have been developing foundational ideas and prototype systems that explore how large language models (LLMs) can serve as decision-making cores and build geoprocessing workflows in GIS applications. This vision paper builds on our earlier work, particularly the 2023 paper that formally introduced and defined the concept of Autonomous GIS (Li & Ning, 2023), by expanding the conceptual framework, demonstrating early-stage implementations, and outlining a roadmap for advancing this emerging paradigm.
Historically, GIS has evolved alongside disruptive technologies, from mainframe-based standalone systems to distributed web platforms, cloud-enabled services, and high-performance cyberinfrastructure. Now, with the advent of generative AI, particularly LLMs capable of reasoning, planning, and coding, GIS is poised for another transformation. We refer to this new phase as Autonomous GIS: a next-generation AI-driven GIS that can independently plan, execute, verify, and refine spatial analyses with minimal human intervention.

The paper presents Autonomous GIS not as a single piece of software or a tool, but as a paradigm, one in which AI-powered agents perform geospatial tasks much like human analysts. These agents, which we call GIS agents, can operate individually or collaboratively. They can retrieve data, perform spatial computations, visualize results, adjust workflows, and even generate reports, all guided by user intent expressed in natural language.
To conceptualize how Autonomous GIS works, we propose a framework that includes five core behavioral goals: self-generating, self-executing, self-verifying, self-organizing, and self-growing. These goals emphasize the system’s capacity to initiate geospatial inquiries, carry out data-driven tasks, evaluate its own performance, manage limited resources, and learn from both successes and failures. Rather than replacing human analysts, these agents serve as digital collaborators, bringing spatial intelligence to a wider range of users and disciplines.
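To make the self-generating, self-executing, and self-verifying behaviors concrete, here is a minimal, hypothetical sketch of the loop a GIS agent might run. All names (`ask_llm`, `execute`, `verify`, `run_agent`) are illustrative assumptions, not functions from the paper or its prototypes; `ask_llm` stands in for a real LLM call and simply returns a canned geoprocessing snippet.

```python
def ask_llm(prompt: str) -> str:
    """Placeholder for an LLM call (self-generating step).

    A real agent would send the task description to a model and
    receive generated analysis code; here we return a canned snippet.
    """
    return "result = sum(d['population'] for d in data)"


def execute(code: str, data):
    """Self-executing step: run generated code in a scratch namespace."""
    scope = {"data": data}
    exec(code, {}, scope)
    return scope["result"]


def verify(result) -> bool:
    """Toy self-verifying step: a population total must be a positive number."""
    return isinstance(result, (int, float)) and result > 0


def run_agent(task: str, data, max_attempts: int = 3):
    """Generate, execute, and verify; retry on failure (self-refinement)."""
    for _ in range(max_attempts):
        code = ask_llm(task)
        result = execute(code, data)
        if verify(result):
            return result
    raise RuntimeError("agent could not produce a verified result")


counties = [{"name": "A", "population": 120}, {"name": "B", "population": 80}]
total = run_agent("Sum the county populations", counties)
print(total)  # 200
```

In a real system the verification step is the hard part; the paper's result-aware level (Level 4) is precisely about making that check substantive rather than a simple sanity test like the one above.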
We also introduce a structured framework of autonomy levels, ranging from Level 0 (fully manual) to Level 5 (knowledge-aware and self-improving). This hierarchy helps delineate the progressive capabilities of GIS agents and provides a roadmap for future research and development. The levels are as follows:

Level 0: Manual GIS, where all tasks are performed by humans without automation.
Level 1: Routine-aware GIS, which automates predefined processes.
Level 2: Workflow-aware GIS, capable of generating and executing workflows based on user input.
Level 3: Data-aware GIS, which can autonomously identify, retrieve, and prepare appropriate datasets.
Level 4: Result-aware GIS, which can evaluate its outputs and iteratively refine its approach.
Level 5: Knowledge-aware GIS, a fully autonomous system that learns from experience and external knowledge to improve over time.

Most of the current research is focused on Level 2 agents, those that can autonomously generate and implement spatial workflows based on natural language input. Progressing to Level 3 and beyond will require substantial advancements in data reasoning, uncertainty management, and adaptive learning.
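Because the levels form an ordered scale, they map naturally onto an ordered enumeration. The sketch below is an illustrative encoding of the hierarchy, not part of any implementation described in the paper; the class and member names are my own.

```python
from enum import IntEnum


class AutonomyLevel(IntEnum):
    """The six autonomy levels of the Autonomous GIS framework."""
    MANUAL = 0           # Level 0: all tasks performed by humans
    ROUTINE_AWARE = 1    # Level 1: automates predefined processes
    WORKFLOW_AWARE = 2   # Level 2: generates and executes workflows from user input
    DATA_AWARE = 3       # Level 3: identifies, retrieves, and prepares datasets
    RESULT_AWARE = 4     # Level 4: evaluates outputs and iteratively refines
    KNOWLEDGE_AWARE = 5  # Level 5: learns from experience and external knowledge


# Most current research targets Level 2; IntEnum ordering lets us
# express "Level 3 and beyond" as a simple comparison.
current_focus = AutonomyLevel.WORKFLOW_AWARE
open_challenges = [lvl for lvl in AutonomyLevel if lvl > current_focus]
print(current_focus.value)             # 2
print([lvl.name for lvl in open_challenges])
```

Using `IntEnum` rather than a plain `Enum` keeps the levels comparable and sortable, which matches how the paper treats them as a progression.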
In addition to outlining theoretical principles, the paper highlights several proof-of-concept implementations developed by our lab. These include LLM-Find, a natural language-driven data retrieval agent; LLM-Geo, a workflow-generation agent for spatial analysis; LLM-Cat, a vision-enabled agent for autonomous cartography; and GIS Copilot, a QGIS plugin that assists users in conducting terrain analysis with existing tools. Each example demonstrates how AI can automate different components of the spatial analytical process, opening new possibilities for more intelligent, accessible, and scalable GIS solutions.
Beyond demonstrating technical advancements, we call for a collective effort from the GIScience community to address several pressing challenges that will shape the future of Autonomous GIS. These include embedding core geospatial knowledge and reasoning capabilities into LLMs, developing rigorous and reproducible evaluation metrics tailored to spatial analysis, ensuring transparency in automated decision-making, and establishing ethical guidelines around issues such as data privacy, bias, and responsible AI use in geographic applications. As these systems grow in capability and influence, thoughtful, collaborative guidance from the community will be essential to ensure their reliability, fairness, and societal value.
While this goal remains aspirational, the foundational work has already begun. We invite geospatial and AI researchers, developers, and educators to collaborate in shaping this future, where GIS becomes not only more powerful and scalable, but also more inclusive and responsive to the world’s most pressing spatial challenges.
You can access the preprint here and the original Autonomous GIS definition paper here.

Evolution of GIS driven by major disruptive technologies.

Conceptual framework of autonomous GIS

Levels of autonomous GIS, inspired by Mike Lemanski (Smith, 2016)