Why enterprise AI's next breakthrough lies in spatial intelligence


The digital maps and navigation systems we use today, with their street names, signs and visual cues, were designed for people. Now, at the cusp of a new era in robotics, we need a new kind of map that allows machines to comprehend physical environments. This is spatial intelligence, and it represents a critical evolution in AI.


Bridging the spatial gap in AI

While large language models (LLMs) are transformational, human life and the majority of business activity still happens in the physical world. AI understands text, code, and images, but it doesn’t yet understand the world we live in.

Consider an AI agent in warehouse management. It might recognize a forklift in a security camera feed, but it cannot determine whether that forklift is blocking a critical route, positioned safely for maintenance, or creating a potential safety hazard. Without spatial intelligence, AI will remain confined to advisory roles rather than operational ones.
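A minimal sketch of what that operational judgement might look like, assuming a hypothetical site map in which critical routes are represented as simple 2D bounding boxes in a shared site coordinate frame (the route names, coordinates, and Box type are illustrative, not any vendor's API):

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned 2D footprint in site coordinates (meters)."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def overlaps(self, other: "Box") -> bool:
        return (self.x_min < other.x_max and self.x_max > other.x_min and
                self.y_min < other.y_max and self.y_max > other.y_min)

# Hypothetical site map: critical routes the agent must keep clear.
CRITICAL_ROUTES = {
    "loading_dock_aisle": Box(0.0, 0.0, 2.5, 40.0),
    "fire_exit_corridor": Box(10.0, 0.0, 12.0, 25.0),
}

def assess_forklift(footprint: Box) -> str:
    """Turn a detection into an operational judgement, not just a label."""
    blocked = [name for name, route in CRITICAL_ROUTES.items()
               if route.overlaps(footprint)]
    if blocked:
        return f"ALERT: forklift blocking {', '.join(blocked)}"
    return "OK: forklift parked clear of critical routes"

# Camera detections localized into the same site frame (illustrative values).
print(assess_forklift(Box(1.0, 5.0, 2.2, 8.0)))   # blocks the loading dock aisle
print(assess_forklift(Box(20.0, 5.0, 21.5, 8.0))) # clear of both routes
```

The geometry here is trivial; the point is the shift it represents. The same camera detection only becomes actionable once it is grounded in a shared spatial frame.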

Spatial intelligence in the enterprise

Spatial intelligence is already reshaping business operations in logistics, manufacturing, and field services. Deloitte's 2025 Tech Trends report names spatial computing as a key technology for the enterprise.

In logistics, centimeter-level accuracy can dramatically improve warehouse efficiency by optimizing layouts, reducing errors, and streamlining inventory and delivery workflows. In design and construction, AR overlays let distributed teams collaborate on 3D models, guiding workers as if they were on site. In training, immersive simulations allow staff to practice complex tasks and receive live feedback.

Consumers are also seeing this firsthand. Brands and venues are creating AR experiences, from self-guided tours and interactive wayfinding to educational mini-games embedded in urban spaces. These use cases present new ways to engage and retain customers.

AR glasses and robots need a smarter map

The next generation of spatially aware AI will be a foundational technology for the upcoming AR glasses from Snap, Meta, and Google. These devices promise to free us from looking down at our phones so that we can take in the world around us. They just need an AI-driven digital map that is both highly accurate and persistent in how it anchors digital content to precise locations.

Looking further ahead, spatial awareness is essential for robots. Analysts predict that humanoid robots could be part of daily life within the next decade, working in fields such as healthcare, hospitality, and maintenance. Goldman Sachs projects the humanoid robot market could reach $38 billion by 2035.

Recent breakthroughs in AI allow robots to recognize objects, understand context, and mimic human actions. But without the ability to comprehend their physical environment, their usefulness will remain limited. Spatial awareness is the key to unlocking safe, autonomous navigation and task execution in dynamic real-world environments.

Building the AI map of the world

While computer vision can describe an image (“a city street with shops”), it lacks the precision needed for machines operating in the real world. GPS alone isn't accurate enough, often missing its mark by half a city block. Visual Positioning Systems (VPS) offer the centimeter-level accuracy required for true spatial understanding.
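To make the gap concrete, here is a hedged sketch using illustrative error figures (tens of meters for GPS in an urban canyon, centimeters for VPS); the PositionFix type and can_anchor check are assumptions for illustration, not a real VPS API:

```python
from dataclasses import dataclass

@dataclass
class PositionFix:
    lat: float
    lon: float
    horizontal_error_m: float  # estimated horizontal error, in meters

# Illustrative error budgets, not measured specifications.
GPS_URBAN_FIX = PositionFix(40.7580, -73.9855, horizontal_error_m=30.0)
VPS_FIX       = PositionFix(40.7580, -73.9855, horizontal_error_m=0.05)

def can_anchor(fix: PositionFix, tolerance_m: float = 0.10) -> bool:
    """Persistent AR content or robot waypoints need errors within tolerance."""
    return fix.horizontal_error_m <= tolerance_m

print(can_anchor(GPS_URBAN_FIX))  # False: half a block of drift can't pin content in place
print(can_anchor(VPS_FIX))        # True: centimeter-level error keeps anchors stable
```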

Beyond location, AI needs to understand the difference between a geotypical and a geospecific model. The former is a generic, simulated environment that robots can use to train under a range of possible scenarios. The latter reflects the real world in exact detail. A robot trained in a geotypical world still has to operate in the real world, and so needs a geospecific model to act accurately and efficiently in real time.
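One way to picture the split, as a sketch under assumed names rather than a real training pipeline: a policy is trained across many randomized geotypical scenes, then deployed against a geospecific map of one actual site. Every class and field below is hypothetical:

```python
import random
from dataclasses import dataclass

@dataclass
class GeotypicalScene:
    """Generic, procedurally varied training environment."""
    aisle_width_m: float
    shelf_count: int
    lighting: str

@dataclass
class GeospecificMap:
    """Survey-grade model of one real site, used at run time."""
    site_id: str
    landmarks: dict[str, tuple[float, float]]  # name -> (x, y) in site frame

def sample_training_scene() -> GeotypicalScene:
    # Domain randomization: the robot never trains in the same warehouse twice.
    return GeotypicalScene(
        aisle_width_m=random.uniform(2.0, 4.0),
        shelf_count=random.randint(10, 40),
        lighting=random.choice(["bright", "dim", "mixed"]),
    )

# Training happens across thousands of geotypical variations...
training_scenes = [sample_training_scene() for _ in range(1000)]

# ...but deployment needs the geospecific model of the actual building.
site_map = GeospecificMap(
    site_id="warehouse-042",  # illustrative identifier
    landmarks={"dock_door_3": (1.2, 0.0), "charging_bay": (45.7, 12.3)},
)
```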

Enter Large Geospatial Models (LGMs), the spatial counterpart to LLMs. Where LLMs process text from across the internet, LGMs are trained on billions of real-world images, all tied to physical locations. These models give machines a contextual understanding of space and structures.

Humans can intuitively guess what a church or town square looks like from different angles. For machines, this is incredibly difficult. LGMs will allow AI to infer missing information and reason spatially, just as LLMs do with language. They will become the foundation for a true operating system for the physical world.

Seizing the opportunity

For enterprise leaders, the key question isn't if spatial intelligence will matter, but how quickly they can integrate it.

From operational precision and worker safety to customer engagement and automation, spatial intelligence promises a leap forward in how AI serves business needs. The next frontier of AI lies not in deeper digital understanding, but in helping machines comprehend and interact with the world as we do.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro


Bobby Parikh is the Senior Vice President of Engineering at Niantic Spatial.
