
Heard an AI researcher say GPT models are just 'fancy autocomplete' and it got me thinking

I was listening to a podcast with a researcher from Stanford who claimed these large language models are just predicting the next word and don't actually understand anything. But I've also read that GPT-4 can pass the bar exam and write code that compiles on the first try. So is it really just autocomplete, or is there something deeper going on here? What do you all think - is the 'fancy autocomplete' take spot on, or is it missing the point?
2 comments

sullivan.john
The "just predicting the next word" take is true but it's also missing the point big time. A dog is "just" a wolf that got domesticated, but you wouldn't say your golden retriever is basically the same as a wild animal. When you have billions of parameters trained on basically the whole internet, predicting the next word starts looking a lot like actual reasoning. I've seen GPT write a full Python script that worked perfectly on the first try. That's not just fancy autocomplete, that's something closer to understanding than most people give it credit for.
2
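To make "predicting the next word" concrete, here's a toy sketch of the idea using a simple bigram model over a made-up corpus. This is only an illustration of the prediction objective, not how GPT works internally (real models use neural networks over subword tokens, with billions of parameters):

```python
from collections import Counter, defaultdict

# Tiny made-up corpus for illustration only
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word is followed by each other word
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" - it follows "the" most often here
```

Scale that same objective up from bigram counts to a deep network trained on a huge corpus and you get the behavior the thread is arguing about: the training signal is still "guess the next token," but the capabilities that emerge look very different from this toy.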
gavinwood · 22d ago
So we're calling a text predictor a dog now? Bold move.
4