The Biggest “Lie” in AI? LLMs Don’t Think Step-by-Step
Remove your personal information from the web at and use
code BYCLOUD for 20% off🙌

My Newsletter

My project: find, discover & explain AI research semantically

(Just pushed out a new UI update! Everything should be smoother now.)

My Patreon

Think before you speak: Training Language Models With Pause Tokens
[Paper]

Let’s Think Dot by Dot: Hidden Computation in Transformer Language Models
[Paper]

Do LLMs Really Think Step-by-step In Implicit Reasoning?
[Paper]

On the Biology of a Large Language Model
[Paper]

Try out my new favorite place to learn how to code

This video is supported by the kind Patrons & YouTube Members:
🙏Nous Research, Chris LeDoux, Ben Shaener, DX Research Group, Poof N’ Inu, Andrew Lescelius, Deagan, Robert Zawiasa, Ryszard Warzocha, Tobe2d, Louis Muk, Akkusativ, Kevin Tai, Mark Buckler, NO U, Tony Jimenez, Ângelo Fonseca, jiye, Anushka, Asad Dhamani, Binnie Yiu, Calvin Yan, Clayton Ford, Diego Silva, Etrotta, Gonzalo Fidalgo, Handenon, Hector, Jake Disco very, Michael Brenner, Nilly K, OlegWock, Daddy Wen, Shuhong Chen, Sid_Cipher, Stefan Lorenz, Sup, tantan assawade, Thipok Tham, Thomas Di Martino, Thomas Lin, Richárd Nagyfi, Paperboy, mika, Leo, Berhane-Meskel, Kadhai Pesalam, mayssam, Bill Mangrum, nyaa

[Discord]
[Twitter]
[Patreon]
[Business Inquiries] bycloud@smoothmedia.co
[Profile & Banner Art]
[Video Editor] @Booga04
[Ko-fi]
