So far I've tested deepseek r1:7b and llama3.2 on posting to the fediverse and on detecting AI content (I have no shortage of AI content). Seems like llama3.2 performs worse at detecting AI content compared to deepseek, but it runs quicker. I'm not working with any graphics cards. The deepseek model is really good at generating replies with the right prompt, but it does take several minutes to run. No one has called out the AI text I submitted other than a reply I made to the wrong person, and people are even sharing and liking the AI-produced text.
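To give a rough idea of the comparison, a minimal test loop could look like this. This is a sketch, not my exact setup: it assumes the models are running through Ollama's Python client (the `:7b` tag suggests Ollama, but that's a guess), and the detection prompt wording is just a placeholder.

```python
# Sketch: run the same AI-detection prompt against both local models and time them.
# Assumes Ollama is installed with these model tags pulled; prompt text is a placeholder.
import time
import ollama

MODELS = ["deepseek-r1:7b", "llama3.2"]

def detect_ai(model: str, text: str) -> str:
    """Ask a local model whether the text looks AI-generated."""
    prompt = (
        "Does the following post read like it was written by an AI? "
        "Answer 'AI' or 'human' and give one sentence of reasoning.\n\n" + text
    )
    response = ollama.chat(model=model, messages=[{"role": "user", "content": prompt}])
    return response["message"]["content"]

sample = "Example post text goes here."
for model in MODELS:
    start = time.time()
    verdict = detect_ai(model, sample)
    print(f"{model} ({time.time() - start:.1f}s): {verdict}")
```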
Do you generate replies in a custom way every time, adjusting the prompt and supervising the result, or do you have a fully automatic system? If you use any sort of manual intervention on a per-post basis, whatever you're doing is not going to work as a bot.
Right now I'm supervising the response and iterating on the prompt. I haven't even implemented a scraper or an API client. I haven't fully botted it yet because I don't know what architecture I want. Like, do I even want an AI detector, or a script that puts in grammar mistakes, or a basic algorithm to pick prompts? (Rough sketch of those last two ideas below.)
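As a sketch of what those last two pieces might look like, something as simple as a keyword-to-prompt lookup plus a typo injector would cover both. Everything here is hypothetical: the prompt templates, the keyword rules, and the function names are made up, just to show the shape it could take.

```python
# Hypothetical sketch: pick a reply prompt by crude keyword matching, then inject a few
# typos into the generated reply so it reads less polished. All names/rules are made up.
import random

PROMPTS = {
    "politics": "Write a short, casual reply politely disagreeing with: {post}",
    "tech":     "Write a short reply sharing a related personal anecdote to: {post}",
    "default":  "Write a brief, informal reply to: {post}",
}

def pick_prompt(post: str) -> str:
    """Choose a prompt template based on simple keyword matching."""
    lowered = post.lower()
    for topic, template in PROMPTS.items():
        if topic != "default" and topic in lowered:
            return template.format(post=post)
    return PROMPTS["default"].format(post=post)

def add_mistakes(text: str, rate: float = 0.03) -> str:
    """Randomly swap adjacent letters in a few words to mimic human typos."""
    words = text.split()
    for i, word in enumerate(words):
        if len(word) > 3 and random.random() < rate:
            j = random.randrange(len(word) - 1)
            words[i] = word[:j] + word[j + 1] + word[j] + word[j + 2:]
    return " ".join(words)

prompt = pick_prompt("Anyone else following the tech layoffs?")
reply = "That's a fair point, although I think the situation is more nuanced than it looks."
print(prompt)
print(add_mistakes(reply))  # reply would come from the model; hard-coded here for the sketch
```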