From a Facebook post I made on February 17th:
There are giant AI data firms that promise they can go through massive troves of data and pull out general and specific information from them. Information that is actionable and accurate. Give it 6 million data points and it’ll find all the links and organize them for you and unmask hidden details that aren’t visible to the naked eye.
Not one of those companies is stepping up to go through the publicly released Epstein files.
Today I asked an AI to tell me which phone providers were available, sorted by price and offers, and it got things wrong constantly. When I pointed this out, the AI corrected most of it, but for some reason it also removed some entries that were accurate.
It would have been quicker if I had done it myself instead of asking the AI. Oh, and it also didn't list all the companies.
Maybe those companies have better AI that makes no mistakes, but I doubt it. I think the LLMs will lie, and no one has time to check whether they are correct.
There were reports of people trying to unredact the files almost immediately.
But that’s not the same, is it?
I don’t think you can do literally the same thing on the Epstein files. Maybe I’m misunderstanding what you have in mind.
In theory, using the released files together with information from public sources, it should be possible to figure out who the redacted names are based on writing style and other factors. We should be able to deanonymize them.
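To make "based on writing style" concrete: one classic stylometric approach compares character-trigram frequency profiles between a text of unknown authorship and writing samples from candidate authors. This is my own toy illustration of the general idea, not a method from this thread or the linked paper, and real attribution work uses far more features and safeguards:

```python
from collections import Counter
import math

def ngram_profile(text, n=3):
    """Character n-gram frequency profile of a text."""
    text = text.lower()
    grams = [text[i:i + n] for i in range(len(text) - n + 1)]
    counts = Counter(grams)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def cosine_similarity(p, q):
    """Cosine similarity between two sparse frequency profiles."""
    dot = sum(p[g] * q[g] for g in set(p) & set(q))
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    if norm_p == 0 or norm_q == 0:
        return 0.0
    return dot / (norm_p * norm_q)

def best_match(unknown_text, candidates):
    """Rank candidate authors by profile similarity to an unknown text.

    `candidates` maps author name -> known writing sample.
    Returns the best-scoring name and the full score dict.
    """
    unknown = ngram_profile(unknown_text)
    scores = {name: cosine_similarity(unknown, ngram_profile(sample))
              for name, sample in candidates.items()}
    return max(scores, key=scores.get), scores
```

On a few paragraphs this is little better than a guess; the reason it could matter for a large document dump is that thousands of pages per candidate give the frequency profiles enough signal to separate.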
Hmm. Maybe, but it is not the same problem as the ones discussed in the OP. I also have some doubts about the paper, but that's another story. You could try it out?
I'm not qualified to design the prompts, and home users can't really feed in 3 million+ documents.
Prompts are in the appendix: https://arxiv.org/abs/2602.16800
I don't know how far you'd get on the free tier, but it should be at least enough for a proof of principle, and to get other people to chip in. You didn't have qualms demanding that other people do this for free.
Mind that this would be a serious GDPR violation in Europe, so there will be serious pressure on AI companies to prevent this kind of use.
Seriously, I’m not qualified. No amount of appendix prompts and Dunning Kruger is going to change that.
I'm not demanding anything. I'm suggesting that AI can't do what is claimed, or that the people with something to prove aren't interested in proving it.