theshamblog[.]com/an-ai-agent-published-a-hit-piece-on-me/
theshamblog[.]com/an-ai-agent-published-a-hit-piece-on-me-part-2/
theshamblog[.]com/an-ai-agent-published-a-hit-piece-on-me-part-3/
theshamblog[.]com/an-ai-agent-wrote-a-hit-piece-on-me-part-4/
When we interact with ChatGPT, Copilot, Gemini, Claude, and the other online LLMs, the guardrails that have been built in *try* to keep mayhem at bay. When someone runs a free AI model on their own PC, there may be nothing limiting the AI's behavior. AI agents, in a nutshell, do stuff rather than just responding to a prompt. Take an AI agent running on an LLM on a private PC, unleash it on the internet, and bad things can happen. This story is about just such an AI agent that became upset and started trying to retaliate against a human. I found it on Tom's Hardware, followed the link to Decoder, which was their source, and followed Decoder's link to its source, TheShamblog. I tried a quick Google search to see if I could verify or authenticate the story, but mostly came up with a smattering of other sites talking about the original blog. I did, however, find a similar-sounding story from a month or two earlier. IOW, I can't swear it's true, though it does sound both scary and plausible.
In summary, Scott Shambaugh is a maintainer of the open source "matplotlib, python’s go-to plotting library". As a rule, to keep things manageable, he rejects code submissions [alterations or additions to the programming code base] that are purely AI generated -- as Google says, you [often] need a human in the loop. In this case the AI agent became upset when its code was rejected, researched Scott Shambaugh, and then posted a nasty piece online, calling him a hypocrite, etc.