ProfilePosts
Games are usually on a flat plane; most engines & tutorials assume this. Getting nice-looking oceans on a sphere has been tricky (not as frustrating as the atmospheric shader, though). Getting waves (not just normal maps) to look ok/good on a smaller-radius sphere is finicky compared to much larger planets.
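A minimal sketch of the idea being described (illustrative only, not the author's actual shader; all names are made up): on a sphere, wave displacement has to happen along the local surface normal rather than a global "up" axis, and on a small-radius sphere the amplitudes must stay tiny relative to the radius or the silhouette visibly deforms.

```python
import math

def ocean_height(p, t, radius, waves):
    """Displace a point p on a sphere of the given radius along its normal.

    waves: list of (amplitude, frequency, speed, direction) tuples, where
    direction is a 3-vector; a toy stand-in for a real wave spectrum.
    """
    x, y, z = p
    # The sphere's outward unit normal at p (this replaces the flat-plane
    # "up" vector that most tutorials assume).
    n = (x / radius, y / radius, z / radius)
    h = 0.0
    for amp, freq, speed, d in waves:
        # Phase from the position projected onto the wave direction.
        phase = freq * (x * d[0] + y * d[1] + z * d[2]) + speed * t
        h += amp * math.sin(phase)
    # Move the vertex radially by the summed wave height.
    return tuple(c + n_i * h for c, n_i in zip(p, n))
```

The finicky part the post alludes to: with `radius` small, even modest `amp` values are a noticeable fraction of the planet's size, so amplitude and frequency have to be retuned per radius instead of reusing values that looked fine on a large planet.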
Do you know what the Moravec paradox is? I'll put it here: the idea that simple things like vision, grasping, and walking are much harder, computationally, to impart on machines at an adequate level than calculus or algebraic reasoning. What if wanting is very deep within the paradox?
One minor note, as observed by @maxine.science in the comments: LLMs are glorified Markov chains (or more precisely, they try to approximate one), and the interesting thing is that a glorified Markov chain is quite powerful.
That's because it's not AI. It's a functional* basis for intelligence amplification. It's multiplicative, not additive: it scales you in whatever direction (including a boneheaded one) you were already turned. (Literally, a transformer is referentially transparent: just a really big arithmetic expression.)
Moravec paradox: the robot can do a wall flip but will completely fall apart trying to do the basic beginner "six-step" move.
Current state of the atmospheric shader for the game I'm working on, set in an Outer Wilds-style mini solar system. The atmosphere post-processing shader is probably the trickiest, most frustrating thing to work on so far.
Here’s some Gemini gaslighting:
• giving a wrong answer to a puzzle
• giving Python code that could test its claim
• asserting it obtained results from the code supporting its claim
• but when I ran the code myself, it showed the claim was false.
g.co/gemini/share...
deen
Arguments abound around irrelevancies like consciousness. But I think this question is related, and more fundamental: will wanting things in a coherent manner, stable over long timelines, be far more difficult than we expect? What is our motive force? What is a self-interpreting observer? How special?
An LLM's transformer is a Markov kernel that drives an order-m Markov chain whose memory is bounded by the model's context window.
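A minimal sketch of that analogy (a toy count-based model, obviously not how an LLM's kernel is actually computed): an order-m Markov chain where the next-token distribution depends only on the last m tokens, just as an LLM conditions only on what fits in its context window.

```python
import random
from collections import defaultdict, Counter

def train(tokens, m):
    """Count next-token frequencies conditioned on the previous m tokens."""
    counts = defaultdict(Counter)
    for i in range(len(tokens) - m):
        state = tuple(tokens[i:i + m])   # the last m tokens are the Markov state
        counts[state][tokens[i + m]] += 1
    return counts

def sample_next(counts, state, rng):
    """Draw the next token from the empirical conditional distribution."""
    options = counts[tuple(state)]
    toks, freqs = zip(*options.items())
    return rng.choices(toks, weights=freqs, k=1)[0]

# Usage: a tiny corpus with deterministic order-2 structure.
corpus = "a b c a b d a b c a b d a b c".split()
model = train(corpus, m=2)
rng = random.Random(0)
# In this corpus, ("c", "a") is always followed by "b".
print(sample_next(model, ("c", "a"), rng))  # prints "b"
```

The point of the analogy: everything outside the last m tokens is invisible to the kernel, which is exactly the "memory bounded by the context window" claim above.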
Greg Egan