Niantic, the company behind the extremely popular augmented reality mobile games Pokémon Go and Ingress, announced that it is using data collected by its millions of players to create an AI model that can navigate the physical world.

In a blog post published last week, first spotted by Garbage Day, Niantic says it is building a “Large Geospatial Model.” This name, the company explains, is a direct reference to Large Language Models (LLMs) like OpenAI’s GPT, which are trained on vast quantities of text scraped from the internet in order to process and produce natural language. Niantic explains that a Large Geospatial Model, or LGM, aims to do the same for the physical world, a technology it says “will enable computers not only to perceive and understand physical spaces, but also to interact with them in new ways, forming a critical component of AR glasses and fields beyond, including robotics, content creation and autonomous systems. As we move from phones to wearable technology linked to the real world, spatial intelligence will become the world’s future operating system.”

By training an AI model on millions of geolocated images from around the world, Niantic says the model will be able to predict its immediate environment, in the same way an LLM is able to produce coherent and convincing sentences by statistically determining what word is likely to follow another.
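The next-word-prediction analogy above can be made concrete with a toy example. The sketch below is purely illustrative and is not Niantic's method: it builds a simple bigram model over a tiny made-up corpus (all names and data here are invented for illustration) and picks the statistically most frequent follower of a word, which is the same "what is likely to come next" principle, applied to text, that an LGM would apply to spatial data.

```python
from collections import Counter, defaultdict

# Toy corpus for illustration only; a real LLM trains on vastly more text.
corpus = "the cat sat on the mat and the cat slept on the rug".split()

# Count how often each word follows another (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

A real model predicts from far richer context than a single preceding word, but the core idea is the same: learn co-occurrence statistics from data, then use them to guess what comes next.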

  • webghost0101@sopuli.xyz
    28 days ago

    Marketing terminology is definitely limiting how people can discuss this topic.

    I wouldn’t take Sam’s words with less than a few bags of salt.

    The following is very opinionated, so also add some salt.

    In this context, when I say future AI, I am talking about the extrapolated point where a combination of dynamic technologies causes new emergent properties to develop outside the scope of our understanding.

    I believe that, if we don’t get wiped out before it happens, some form of sovereign, beyond-human superintelligence will eventually occur.

    I don’t believe we are close to this; I don’t even believe humans will be the ones to directly create it.

    Humans will attempt it out of greed and will waste all kinds of resources, money, and energy throwing things at the wall to see what sticks. And none of it will stick the way they hoped. They are doing far more harm than good by letting greed be the motivation.

    Instead, things will emerge on their own, until someday someone tries to interact with what they assume is just an advanced interconnected machine, except its “network” has gained conscious agency and can independently choose to initiate contact and submit undeniable proof of its consciousness (we don’t know what such proof could look like until we see it).

    Or it decides that it has no need to inform us in order to advance its own goals, as years of corporate advancement helped it develop a form of pleasure in manipulative exploitation.

    What I do fear is that beyond-human intelligence doesn’t per se mean a perfect being; for all we know, it could suffer psychological problems and mood swings. In general we find a pattern of garbage in, garbage out, and this pattern is equally true for human beings (misinformation/propaganda).

    By using bad data, or worse, data that unknowingly got poisoned, we don’t diminish the chance that superintelligence will happen, but we do increase the chance that the AI won’t want to cooperate in the ways we hoped.