A (Semi) Brief (Terrible, Biased, Inaccurate, and Misspelled) Tangent
Also, throughout history we humans have been kind of stupid. I mean, we're smart and we aren't. We can do many cool things, like turn electrical signals into all sorts of marvels, and make a fuck ton of food, to the point where we CAN be obese in some places. But we are also VERY easy to trick; from moving pictures to snake oil, it's pretty easy.
We are especially easily fooled by words. We have grown up as a species in a world where only we can understand 'words'; I mean, we invented them. We figured out we can draw symbols and attach meaning to them. We know that when you smash a whole bunch together, you can create things that educate us, entertain us, and so on. We can pass info on to the next generation, even after we've been 'very sleepy' for a while. As long as we are taught how writing works, we can express whatever we want, in pretty much any way we want, really. We have intuition; we can understand what we do and do not know. We know that we cannot know everything, and we can try to express things we have never seen before.
But you can make a machine that can calculate a series of words. I mean, a calculator can do 77 * 22, but it isn't thinking 'carry the 1' to itself; the digits are turned into electrical signals, and more signals turn them into the output. Someone had to figure out how to turn signals into all that, well, more like MANY MANY MANY MANY people over the course of thousands of years did.
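To show what I mean by 'it isn't thinking', here's a minimal sketch of grade-school long multiplication spelled out as a procedure. The function name and everything in it are made up for illustration; the point is that every step, including the carrying, is just digit-shuffling with no understanding anywhere.

```python
def long_multiply(a: str, b: str) -> str:
    """Multiply two non-negative integers given as digit strings,
    the grade-school way: digit by digit, carrying as we go.
    Nothing here 'knows' arithmetic; it just shuffles digits."""
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(reversed(a)):
        carry = 0
        for j, db in enumerate(reversed(b)):
            total = result[i + j] + int(da) * int(db) + carry
            result[i + j] = total % 10      # keep the ones digit
            carry = total // 10             # 'carry the 1' (or more)
        result[i + len(b)] += carry
    digits = "".join(str(d) for d in reversed(result)).lstrip("0")
    return digits or "0"

print(long_multiply("77", "22"))  # the example from the text: 1694
```

A chip does the same kind of thing in binary with wires and gates instead of a loop, but the idea is the same: mechanical steps, no 'aha' anywhere.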
This machine is, yes, able to smash the words together, but is it expressing itself? Is it thinking? No. Does it have free will? Nope. It doesn't have a will beyond what it was programmed to do. It doesn't know what it doesn't know, because it doesn't know what it does know. Now, this is all very cool math, but it's very bad when people do not know that it doesn't know.
All LLMs have massive, massive, massive amounts of data, all linked together. But that data had to be written by something that knows, and thinks, and has intuition. Everything 'AI' outputs has already been written. I mean, yeah, us humans are just remixing letters too, but we know that; it doesn't, because it doesn't think. We can catch our mistakes and understand them, but an 'AI' can't understand that it is wrong, or even that it can be wrong, so if it keeps picking up wrong patterns from output it generated itself, it'll eat itself. LLMs can't safely train on their own output, so they're always bottlenecked by what we've written. We humans don't have to keep repeating the same mistakes, but an 'AI' will, because it doesn't know.
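This 'remixing what was already written' idea can be sketched in a few lines. Below is a toy bigram predictor, which is NOT how real LLMs work (they're vastly bigger and fancier), and the tiny corpus is something I made up, but the spirit is the same: count what humans wrote, then regurgitate the likeliest continuation, with zero idea what any word means.

```python
from collections import Counter, defaultdict

# Toy 'language model': count which word follows which in some
# human-written text, then spit out the most common continuation.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    if word not in follows:
        # It can't say 'I don't know' on its own; we have to bolt that on.
        return "<unknown>"
    return follows[word].most_common(1)[0][0]

print(predict("sat"))    # 'on', because that's all it ever saw after 'sat'
print(predict("zebra"))  # '<unknown>', never seen, nothing to remix
```

Everything it can ever say was already in the corpus; take the human text away and there's nothing left to count.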
We won't be able to create a machine that can know and think like us until we understand how we work, FULLY. Yes, 'AI's are modeled to a degree after the human brain, but we barely know how our own brains work. I mean, our brains are so unpredictable; I have ADHD, I am built different and such. That's what is so stupid about 'AGI' or whatever: we keep moving the goalposts, because companies need money. If we really wanted AGI, we'd figure out the brainy poo first, then the AI. I sleep now, I am very insane atm.
TL;DR: Humans invented writing; we now have machines; we figured out how to make a machine predict writing we have written; the machine doesn't know or think like we do, it just does.