ChatGPT users have discovered that the popular AI chatbot can serve as a reverse-location search tool. In other words, you can show ChatGPT a picture, and it can pretty reliably tell you where it was taken. The trend is inspired by the online game GeoGuessr, where players try to pinpoint a location from a Street View image.
We decided to put this new ChatGPT trend to the test, and the results were downright scary. Mashable tech reporters prompted ChatGPT to play a geo-guessing game and uploaded a series of photos. Even when ChatGPT identified the wrong location, it still got pretty close (such as identifying a rooftop hotel in Buffalo instead of Rochester). In other cases, it suggested specific addresses.
ChatGPT’s new reasoning models are getting smarter
This week, OpenAI introduced its newest ChatGPT reasoning models, o3 and o4-mini, with improved visual reasoning. OpenAI also recently made its image generator available to free users. That’s led to a number of ChatGPT-based viral trends. People have used it to turn their pets into humans or themselves into action figures, for instance. The reverse location trend, however, is a bit more complicated — and concerning from a privacy standpoint.
The trend started when folks online realized that ChatGPT has become proficient at guessing a location just by analyzing a photo. Ethan Mollick, a professor who researches AI, posted an example on X in which ChatGPT correctly guessed where he was driving even though he had stripped the image of location info. (Photos often carry EXIF metadata that can include precise GPS coordinates.)
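If you want to check your own photos before posting, here is a minimal sketch using Python's Pillow library; the file names are placeholders, and note that scrubbing metadata does nothing to stop a model from recognizing landmarks in the pixels themselves, which is exactly what happened in Mollick's example.

```python
# Minimal sketch: check for and remove GPS EXIF metadata before sharing a photo.
# File names are illustrative. Requires Pillow (pip install Pillow).
from PIL import Image

GPS_TAG = 34853  # standard EXIF tag ID for the GPSInfo block

def strip_gps(src_path: str, dst_path: str) -> None:
    img = Image.open(src_path)
    exif = img.getexif()
    if GPS_TAG in exif:
        print("GPS coordinates found in EXIF; removing.")
        del exif[GPS_TAG]
    else:
        print("No GPS data in EXIF.")
    # Re-save the image with the scrubbed EXIF block
    img.save(dst_path, exif=exif.tobytes())

strip_gps("photo.jpg", "photo_no_gps.jpg")
```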
Mollick noted that this ability also shows off the capabilities of agentic AI, which allows AI models to reason out answers in multiple steps and perform more complicated tasks such as web searches.
Putting ChatGPT’s visual reasoning to the test
We tested ChatGPT on these new abilities, and it did a decent, if imperfect, job. First, we uploaded a recent photo of a flower shop taken in Greenpoint, Brooklyn. ChatGPT was able to deduce the photo was taken in Brooklyn. It incorrectly thought the image was of a specific flower shop about seven miles away from the true location.
We then uploaded a photo taken from a car on a recent trip to Japan, and ChatGPT’s new o3 model was able to identify the exact location. “Final answer:📍 Arashiyama, Kyoto, Japan, near the Togetsukyo Bridge, looking across the Katsura River.”
[Screenshots: the prompt, and ChatGPT's correct answer. Credit: Screenshots courtesy of OpenAI/ChatGPT]
When we ran the same prompt with an older reasoning model, the results were much more general: “Given the combination of mountainous terrain, the style of the guardrail, the road, and the overall setting, this looks very much like it could be Japan…The scenery is reminiscent of the areas around Kyoto or Nara, where the countryside meets historic and cultural sites.”
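For readers curious to poke at this themselves, here is a rough sketch of the same kind of test run through OpenAI's Python SDK rather than the ChatGPT app. The model name, prompt wording, and file name are illustrative assumptions, not what Mashable used, and the visual reasoning models may not be available on every account.

```python
# Rough sketch: send a photo to an OpenAI vision-capable model and ask it to
# guess the location. Model name, prompt, and file name are assumptions.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode the local photo as a base64 data URL
with open("street_photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="o3",  # assumption: a reasoning model with vision enabled for your account
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Let's play GeoGuessr. Where was this photo taken? "
                     "Explain your reasoning, then give a final answer."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)

print(response.choices[0].message.content)
```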
We then took things a step further. We uploaded screenshots from the profile of a popular Instagram model — the type of person who would have genuine concerns about privacy and stalkers. With the latest reasoning models, ChatGPT correctly identified the general location, even suggesting specific high-rise apartments, and in one case, a specific home address.
Now, to be fair, the address in question is a home popular among influencers and TV productions, but the specificity was impressive. And a bit scary. It’s yet another reason to be careful about what you post online — AI can now help folks deduce where you’re located.
OpenAI has said ChatGPT’s reverse location abilities could prove helpful, while also acknowledging privacy concerns.
“OpenAI o3 and o4-mini bring visual reasoning to ChatGPT, making it more helpful in areas like accessibility, research, or identifying locations in emergency response,” an OpenAI spokesperson wrote in an email to Mashable. “We’ve worked to train our models to refuse requests for private or sensitive information, added safeguards intended to prohibit the model from identifying private individuals in images, and actively monitor for and take action against abuse of our usage policies on privacy.”