AI starts feeling a lot less spooky once you dig even a little into what the models are (numbers), how they are trained, what an inference engine is, and how orchestration and glue turn it all into actual productized AI. It all feels very grounded to me after months of looking into these areas and doing some very basic vibe-code lab work.
Agreed. LLMs are just glorified matrix math calculators, but it's pretty fucking amazing what they can do. I think part of the "scary" comes from realizing that we humans are not nearly as ineffable and "special" as we'd like to think: much of the time, when the chemicals aren't making us emotional, we're just logic engines too.
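To make the "matrix math" point concrete, here's a toy sketch of one self-attention step in plain NumPy. Everything specific is made up for illustration (the tiny dimensions, the single unmasked head, the random weights); real models stack many of these layers with learned weights, but the core operations really are just matrix multiplies plus a softmax.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d = 16  # toy embedding width (real models use thousands)
T = 4   # toy sequence length

# Token embeddings: the input is just a matrix of numbers.
x = rng.normal(size=(T, d))

# The "model" is also just numbers: four learned weight matrices
# (here random, since this is only a shape/operation sketch).
Wq, Wk, Wv, Wo = (rng.normal(size=(d, d)) for _ in range(4))

# One self-attention step: three matmuls to get queries/keys/values,
# a scaled softmax over attention scores, then two more matmuls.
Q, K, V = x @ Wq, x @ Wk, x @ Wv
scores = softmax(Q @ K.T / np.sqrt(d))
out = (scores @ V) @ Wo

print(out.shape)  # (4, 16): same shape out as in, ready for the next layer
```

That's basically the whole trick, repeated at enormous scale: the spooky behavior comes from the size of the matrices and the training data, not from any exotic machinery.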
It reminds me of a hypothesis I ran across that suggested thinking may have evolved to support communication (e.g., language), not the other way around.