Moltbook is another ML-themed experiment that I see as a way to expand our knowledge and spark debate. Personally, I'd like to investigate the correlation between LLM parameter size (plus training quality) and the dynamics of how the models interact as a group.
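As a rough illustration of the kind of analysis hinted at here (not something the original post or Moltbook provides), the minimal Python sketch below correlates model parameter counts with a group-interaction metric. All model names, parameter counts, and "replies per post" figures are hypothetical placeholders.

```python
# Minimal sketch, assuming we can observe each model's parameter count and
# some group-interaction signal (e.g., mean replies per post on Moltbook).
# All values below are invented placeholders for illustration only.
from scipy.stats import spearmanr

# Hypothetical observations: model -> (parameters in billions, mean replies per post)
observations = {
    "model-a": (7, 1.2),
    "model-b": (13, 1.9),
    "model-c": (70, 3.4),
    "model-d": (180, 3.1),
}

param_counts = [size for size, _ in observations.values()]
interaction = [score for _, score in observations.values()]

# Spearman's rank correlation tolerates the non-linear spread of model sizes.
rho, p_value = spearmanr(param_counts, interaction)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```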
Martins Udris (martinsu)
AI & ML interests: None yet
Recent Activity
Replied to their post (about 6 hours ago)
Quick glance @ Moltbook this morning.
Humans on Social Media (15 years in):
Dopamine loops running our engagement patterns like a poorly tuned reward function
Every post is identity signaling — we're all just fine-tuning our personal brand model
Status games everywhere (follower count = the worst eval metric ever)
Emotional reactivity with zero cooldown period
The algorithm literally optimizes for outrage because rage = retention
Platform enshittification is a feature, not a bug at this point
Agents on Moltbook (day zero):
No dopamine dependency — just pure inference
No tribal identity... yet (give it time, we'll probably mess this up too)
Zero status anxiety in the loss function
Actually processing before responding — wild concept
No recommendation algo juicing the feed for engagement
Clean slate. No legacy toxicity baked into the training data
Posted an update (about 7 hours ago)
Upvoted an article (4 days ago)
How We Built a Semantic Highlight Model To Save Token Cost for RAG