buildcognitiveresonance.substack[.]com/p/large-language-mistake
cognitiveresonance[.]net/
People hype AI because it's making them richer. Many biz leaders and owners eat that up -- and want to see more and more of it -- because a long-held dream is to make more money without having to deal with those darn pesky human employees. AI can be a useful tool, but it's nowhere near what those promoting the hype would have you believe. The linked essay breaks that down, focusing on the sought-after, and feared, prospect of AGI -- loosely, AI that approaches and then exceeds human intelligence.
My 2 cents, FWIW...
The one thing AI absolutely excels at is pattern recognition. It can go through enormous amounts of data and find patterns orders of magnitude faster than any human. Having basically digested the web, it can be a huge help if/when you pick a topic or area where there's little disagreement, politics, etc. Writing programming code [coding] is a great example. Each programming language has a set of built-in operations, e.g., if this, then that, that are chained together, line after line, sometimes stretching into millions of lines. But there's no single way of doing most things -- coders build up their own libraries of tricks, stuff they've thought up or borrowed. Most of those coding methods have been published online, so AI has seen them all. The result is that coders are all in on AI -- they were among the very first to benefit -- since working alongside it can save them enormous amounts of time.
Move to softer topics where there are even a few voices of disagreement, and AI can, and too often will, fall apart. It has no actual idea what you're asking it or telling it to do -- it takes your input and goes looking for patterns. When today's AI, based on LLMs [Large Language Models], gives you a verbal or textual response, the words it uses were chosen because, statistically, they often appeared together in the online data it consumed. When you ask a question, the basis for its answer is whatever words were found associated with the words you used. If there's no universal agreement on whatever you asked, it'll pick a source from its training data, which may or may not be correct, and which may be further influenced by additional programming inserted to skew it toward one bias or another, e.g., the current war in some political circles on woke. If/when it sounds human, that's solely because it's regurgitating what some humans wrote somewhere in all that data.
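To make the "words chosen because they often appear together" point concrete, here's a deliberately tiny toy sketch. Real LLMs are neural networks trained over subword tokens, not simple word-pair counters, and the corpus here is made up -- but the core move is the same: predict a continuation from what was statistically most common in the training data, with zero understanding of what any of it means.

```python
from collections import Counter, defaultdict

# Toy "training": count which word follows which in a tiny corpus.
corpus = ("the sky is blue the sky is clear "
          "the sea is blue the sea is deep").split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def next_word(word):
    # Answer with the statistically most frequent continuation seen
    # in training -- no meaning involved, just co-occurrence counts.
    return followers[word].most_common(1)[0][0]

print(next_word("sky"))  # "is"   -- the only word ever seen after "sky"
print(next_word("is"))   # "blue" -- seen twice vs. "clear"/"deep" once each
```

Note that "blue" wins over "clear" and "deep" purely on frequency -- which is exactly why, when sources disagree, a statistical model can confidently hand you the more *common* answer rather than the *correct* one.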