I don't think so. Generative AI apps like ChatGPT need to be trained on huge datasets, and I don't think the necessary datasets exist. As was said, old maps and books in the library are good resources, and a lot of that has not been digitized. Old-timers, and simply driving by the site to look at the grass, trees, and grading, also help, and I don't believe this info has been digitized to the extent necessary to train the app -- then you still have to tell it what to look for. I've been doing this long enough that I can drive by a site and have a good idea whether or not it will produce; I could not do that when I started out. This "domain experience," to use the fancy word, needs to be trained into the app, assuming the digital view of the terrain is detailed enough (lidar is good, but not good enough for this purpose). Throw in different coils, machines, and settings that vary from site to site, plus no datasets on mineralization at the sites, and the training problem grows even harder. In theory this is possible, but in practice no one will bother to create the data and then train it.
Then there's the question of whether you can train the app to guess whether or not you will get permission. I think so, based on the house, land, cars, and location of the owner, but that's another hard dataset and training problem. Does the number of "no trespassing" signs posted correlate with permission success? I think so. Has the bot been trained on that? I don't think so.
I think a use case screaming for AI is TID, particularly weeding out false ferrous high tones. I would expect the next generation of machines, from at least one manufacturer, to at least claim to be able to do this. This also screams for crowdsourcing -- imagine being able to download/upload training data from your machine to the cloud to build a cloud-based TID database. I'd buy such a machine if it actually worked. Near-perfect deep TID is the holy grail for VLF machines, assuming we stick with this dinosaur technology.
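To make the TID idea concrete, here's a toy sketch of the kind of classifier that could sit behind it. Everything here is invented for illustration -- the feature names (phase shift, tone stability), the numbers, and the labels are assumptions, not anything from a real machine; a manufacturer would be training on raw signal data at far larger scale.

```python
# Toy sketch: nearest-centroid classifier separating ferrous "false high"
# signals from genuine non-ferrous targets. The features (phase_shift,
# tone_stability, both scaled 0-1) and the sample data are hypothetical.

def centroid(samples):
    """Mean of a list of (phase_shift, tone_stability) feature pairs."""
    n = len(samples)
    return (sum(s[0] for s in samples) / n, sum(s[1] for s in samples) / n)

def classify(sample, centroids):
    """Label the sample with the class of the nearest centroid."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Invented "crowdsourced" training data: the assumption here is that
# ferrous false highs tend to show low phase shift and unstable tones.
training = {
    "ferrous": [(0.2, 0.3), (0.25, 0.2), (0.3, 0.35)],
    "non-ferrous": [(0.8, 0.9), (0.75, 0.85), (0.9, 0.8)],
}
centroids = {label: centroid(samples) for label, samples in training.items()}

print(classify((0.85, 0.9), centroids))  # a stable, high-phase hit
```

The crowdsourcing angle would just mean the `training` dictionary gets built from thousands of dig-and-confirm uploads instead of three made-up points per class; the hard part is labeling (you have to dig the target to know the truth) and normalizing across coils, machines, and ground mineralization.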