We need to give LLMs a stream of inputs and outputs that are replaced over time (matml.bearblog.dev)
2 points by tokenomics 5 hours ago

Marshferm 5 hours ago
You have to discard 90-97% of machine vision, which has little to do with mammalian vision.

  tokenomics 4 hours ago
  The point is less about vision, and more about needing a continuous stream of inputs and outputs.

    Marshferm 4 hours ago
    That's not what human vision is.