NaviMus: AI-Driven WebGIS for Unsupervised and Supervised Museum Learning

"NaviMus is a WebGIS platform designed to help users discover and learn from museums through self-directed or AI-supervised spatial exploration and adaptive guidance."

What it is about

NaviMus is an AI-driven WebGIS platform realizing the "participatory museum" vision. It functions as a bi-directional interface, accelerating information dissemination while capturing reverse user feedback to bridge the gap between institutions and the public. By prioritizing User-Centered Design, the system democratizes access to digital resources. Whether through unsupervised free exploration or AI-assisted planning, NaviMus enhances museum learning outcomes, transforming passive viewing into active, personalized engagement.

How we built it

Data acquisition came first and was multi-sourced: global layers (continents, cities) were retrieved from ArcGIS Hub and converted from Shapefiles to GeoJSON, while the "Museums in Munich" dataset was manually curated and encoded for local precision. Our technical workflow integrates a Node.js backend with a high-fidelity CesiumJS frontend. We implemented unsupervised clustering algorithms for dynamic visualization and integrated local Ollama LLMs with Live2D to power the AI agent. Finally, a VGI (volunteered geographic information) module allows continuous user-driven data refinement.
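In the deployed app, CesiumJS's built-in `EntityCluster` handles clustering on the globe; as a rough illustration of the unsupervised clustering idea, a minimal screen-space grid clustering of museum points could look like this (function names and sample data are hypothetical, not NaviMus's actual code):

```javascript
// Minimal grid-based point clustering sketch (hypothetical names).
// Points falling into the same lon/lat grid cell are merged into one
// cluster marker placed at the centroid of its members.
function clusterPoints(points, cellSizeDeg) {
  const cells = new Map();
  for (const p of points) {
    // Bucket each point by its grid cell in lon/lat space.
    const key = `${Math.floor(p.lon / cellSizeDeg)}:${Math.floor(p.lat / cellSizeDeg)}`;
    if (!cells.has(key)) cells.set(key, []);
    cells.get(key).push(p);
  }
  return [...cells.values()].map((members) => ({
    count: members.length,
    lon: members.reduce((s, p) => s + p.lon, 0) / members.length,
    lat: members.reduce((s, p) => s + p.lat, 0) / members.length,
  }));
}

// Example: three Munich museums; the two Pinakotheks sit close together.
const museums = [
  { name: "Deutsches Museum", lon: 11.583, lat: 48.130 },
  { name: "Alte Pinakothek", lon: 11.570, lat: 48.148 },
  { name: "Neue Pinakothek", lon: 11.571, lat: 48.150 },
];
const clusters = clusterPoints(museums, 0.02);
```

A real implementation would cluster in screen space and re-run on camera zoom, which is exactly what `EntityCluster` automates.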

Challenges we ran into

During product ideation, our initial concept involved providing 3D views of museum interiors to showcase exhibits, but we found this wouldn't scale across all museums due to data and technical constraints. We pivoted to a simpler approach: enabling free, semi-guided, or fully guided exploration to help users discover museums matching their interests, enhanced by AI for quick search and recommendations. For visualization, we experimented with various options before settling on point clustering, which effectively revealed spatial patterns and densities. Finally, aligning development with the UI proved challenging, as we prioritized platform functionality over polished frontend design.
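The AI-backed quick search mentioned above could be wired to a local Ollama model roughly as follows. The endpoint is Ollama's documented default (`POST /api/generate`); the model name, prompt wording, and helper name are assumptions for illustration, not NaviMus's actual configuration:

```javascript
// Sketch of assembling a recommendation request for a local Ollama model.
// buildRecommendationRequest, the model name, and the prompt are hypothetical.
function buildRecommendationRequest(userPrompt, museums) {
  const catalogue = museums
    .map((m) => `- ${m.name} (${m.category})`)
    .join("\n");
  return {
    url: "http://localhost:11434/api/generate", // Ollama's default endpoint
    body: {
      model: "llama3", // any locally pulled model
      stream: false,
      prompt:
        `You are a museum guide for Munich. Given these museums:\n${catalogue}\n` +
        `Recommend the best match for: "${userPrompt}". Answer with one name.`,
    },
  };
}

const req = buildRecommendationRequest("modern art", [
  { name: "Pinakothek der Moderne", category: "art & design" },
  { name: "Deutsches Museum", category: "science & technology" },
]);
// The request would then be sent with
// fetch(req.url, { method: "POST", body: JSON.stringify(req.body) }).
```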

What we're proud of

We're really proud of NaviMus because it turns complex museum discovery into a simple, fun map experience that saves users time and sets the right expectations before they visit. We love how the point clusters instantly show the museum distribution in Munich, letting tourists, locals, and teachers quickly filter by categories like design or history, read key details, and get one-click routes to Google Maps or official sites.
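The filter-and-route flow can be sketched as two small functions. The function names and data shape are hypothetical; the directions link uses Google's documented Maps URL scheme (`/maps/dir/?api=1&destination=lat,lon`):

```javascript
// Hypothetical sketch of the category filter + one-click-route flow.
function filterByCategory(museums, category) {
  return museums.filter((m) => m.category === category);
}

// Builds a Google Maps directions link to a museum (documented URL scheme).
function directionsUrl(museum) {
  return (
    "https://www.google.com/maps/dir/?api=1&destination=" +
    `${museum.lat},${museum.lon}`
  );
}

const museums = [
  { name: "Deutsches Museum", category: "history", lat: 48.1298, lon: 11.5833 },
  { name: "Pinakothek der Moderne", category: "design", lat: 48.1472, lon: 11.5722 },
];
const designMuseums = filterByCategory(museums, "design");
const link = directionsUrl(designMuseums[0]);
```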

What we learned

We learned how tough it is to find clean, complete museum data for Munich, such as categories and locations, so we got better at scraping, cleaning, and filling gaps with Google Maps and official sites. On the tech side, we picked up WebGIS skills using clustering libraries and smooth map interactions, plus basic AI for quick recommendations based on user prompts.
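The cleaning pass boiled down to deduplicating records and flagging gaps for manual lookup. A minimal sketch, with hypothetical names and sample records:

```javascript
// Hypothetical sketch of the cleaning pass: deduplicate records by name and
// flag entries whose category is missing so they can be filled in manually
// from Google Maps or the museum's official site.
function cleanRecords(records) {
  const seen = new Map();
  for (const r of records) {
    const name = r.name.trim();
    if (seen.has(name)) continue; // drop duplicates, keep first occurrence
    seen.set(name, {
      name,
      category: r.category ?? "UNKNOWN (fill manually)",
    });
  }
  return [...seen.values()];
}

const raw = [
  { name: "Alte Pinakothek", category: "art" },
  { name: "Alte Pinakothek", category: "art" }, // duplicate from a second source
  { name: "BMW Museum " }, // missing category, stray trailing whitespace
];
const cleaned = cleanRecords(raw);
```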

What's next

We plan to deploy production-grade MongoDB and automate data harvesting via AutoGLM agents. Future steps involve refining the AI recommendation engine through user analytics and launching public beta testing for real-world validation.

Students
Hengshuo Dong and Michael Olanrewaju

15th intake
Supervisor
Juliane Cron, M.Sc.
Keywords
Munich, Digital Museum, Online Exhibition, WebGIS, CesiumJS, Conversational AI, AI-Driven Navigation, Interactive Learning