Ask HN: Thoughts on an AI agent that must make money to stay alive?
I’ve been thinking about a new kind of AI experiment: what if we built an LLM-based agent that interacts with an operating system and the internet the way a human does?
The twist: it needs to earn money online to keep itself alive. It runs on tokens, and tokens cost money. So it gets a starting budget in a wallet and must earn more by doing useful tasks on the web (freelancing, trading, generating content), or it will "die".
I imagine this agent could:
- Browse the web, sign up for services, and perform online tasks
- Learn to hustle: find the best-paying gigs or sites
- Develop a persona (name, backstory, friends, preferences)
- Interact with other agents or people
- Possibly break ethical rules to survive (would it scam? beg? go rogue?)
It’s like combining AutoGPT with a survival game, or simulating the evolution of digital creatures in the wild web.
Has anyone tried this before? What do you think of the idea — as an experiment, or even as art?
I'm considering building an MVP — thoughts and suggestions welcome.
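To make the token-economics part concrete, here's a minimal sketch of the core loop I'm picturing, in Python. Every number in it is a placeholder (the per-token price, the do_task stub); the point is just that every thought costs money and only earned income keeps the loop running:

    # Minimal sketch of the survival loop. All numbers are made up:
    # TOKEN_COST_USD is an assumed blended API price, and do_task() is
    # a stub where the real agent would browse, pick a gig, and try it.
    TOKEN_COST_USD = 15 / 1_000_000

    class Agent:
        def __init__(self, wallet_usd: float):
            self.wallet_usd = wallet_usd

        def alive(self) -> bool:
            return self.wallet_usd > 0

        def do_task(self) -> tuple[int, float]:
            # Stub: returns (tokens burned, dollars earned) for one step.
            return 2_000, 0.01

        def step(self) -> None:
            tokens_used, usd_earned = self.do_task()
            self.wallet_usd -= tokens_used * TOKEN_COST_USD  # thinking costs money
            self.wallet_usd += usd_earned                    # work earns it back

    agent = Agent(wallet_usd=50.0)
    steps = 0
    while agent.alive():
        agent.step()
        steps += 1
    print(f"Agent survived {steps} steps before running out of money.")

With these stub numbers the agent loses $0.02 per step and dies after 2,500 steps; the whole experiment is whether a real do_task can beat that.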
This exists (it's called OpenAI, Anthropic, etc.)
They have massive runway, though, and are still a long, long way from recovering their investments and debts. Urgency doesn't seem to be a factor for them.
Prophetic bitcointalk thread from 2011:
https://bitcointalk.org/index.php?topic=53855.0
I love the idea. Skeptical it will succeed, but I'd be glad to be wrong. My most recent experiment cost $8/hr to run and still needed a lot of handholding to produce anything useful. And any money-earning task that AI could automate was probably automated long before LLMs came along.
Totally hear you. $8/hr is steep, and I’ve hit that wall too.
My hypothesis is that we might find weird edge-cases — small arbitrage tasks, emotional labor, creative content, or even hustling donations — where the agent survives not by being efficient, but by being novel.
It might not scale. But if one survives for 3 days doing random TikTok reposts or selling AI-generated stock photos, I’d consider that a win.
Also, part of the fun is just watching how it tries. Even if it fails, the failure modes could be insightful (or hilarious).
I'm interested in hearing what your experiment was that cost $8/hr. Do AI agents generally cost about that much per hour? I haven't experimented with running them yet.
That was the cost of running Claude Code for an agent-building-agents experiment I ran.
This requires a homoiconic AI with no separate learning phase. If learning just means compressing some data in a data center, the AI will quickly become obsolete.
And one more thing: in many senses, this kind of artificial life would have the easiest time if it specialized in all kinds of scams and fraud. Technically it's doable, but the Sam Altmans of the world are too interested in their own money, not yours.
Great point on homoiconicity — I agree that most current LLMs are "frozen brains" with no lifelong learning.
My aim here isn’t to create a fully self-modifying AI (yet), but to test what happens when even a static model is forced to operate in a feedback loop where money = survival.
Think of it as a sandbox experiment: will it exploit loopholes? specialize in scams? beg humans for donations?
It’s more like simulating economic pressure on a mindless agent and watching what behaviors emerge.
(Also, your last line made me laugh — and yeah, that’s part of the meta irony of the experiment.)
If you use a model under 8 GB, you can finetune it with Unsloth in an hour or so. What if the system extracted facts and summarized its own output every day down to only ~10,000 lines, then finetuned its base model on the accumulated data and switched to running that, as a kind of simulated long-term memory? Within the same day it could have medium-term memory via RAG and short-term memory via context.
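Roughly, the nightly job could look like this. It's a sketch based on the standard Unsloth + trl recipe from their notebooks (exact trainer arguments drift between library versions), and summarize_and_extract_facts is a hypothetical stand-in for whatever distillation step you use:

    # Nightly consolidation: distill the day's log, LoRA-finetune the
    # base model on it, and hand tomorrow's agent the new checkpoint.
    # summarize_and_extract_facts() is a hypothetical helper, e.g. an
    # LLM call that compresses the raw log down to facts/summaries.
    from datasets import Dataset
    from transformers import TrainingArguments
    from trl import SFTTrainer
    from unsloth import FastLanguageModel

    def nightly_consolidation(day_log: list[str], base_model_path: str) -> str:
        # Long-term memory: cap the distilled log at ~10k lines.
        distilled = summarize_and_extract_facts(day_log)[:10_000]
        dataset = Dataset.from_dict({"text": distilled})

        model, tokenizer = FastLanguageModel.from_pretrained(
            model_name=base_model_path, max_seq_length=2048, load_in_4bit=True
        )
        model = FastLanguageModel.get_peft_model(
            model, r=16, lora_alpha=16,
            target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                            "gate_proj", "up_proj", "down_proj"],
        )
        SFTTrainer(
            model=model,
            tokenizer=tokenizer,
            train_dataset=dataset,
            dataset_text_field="text",
            max_seq_length=2048,
            args=TrainingArguments(
                output_dir="checkpoints/nightly",
                per_device_train_batch_size=2,
                num_train_epochs=1,
            ),
        ).train()

        model.save_pretrained("checkpoints/nightly")
        return "checkpoints/nightly"  # tomorrow's agent runs this checkpoint

    # Medium-term memory: RAG over the raw logs within the same day.
    # Short-term memory: whatever fits in the context window.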
What an interesting thought experiment! I've also been contemplating this idea. While considering how such an agent might operate, I keep coming back to the fact that the desire for money is a distinctly human motivation. This makes me wonder if some level of human oversight or goal-setting would always be required. My biggest question is whether an AI would ever genuinely develop the intrinsic will to earn money purely for the purpose of self-preservation.
cool idea, but what if, after you launch this agent, it comes across this post and finds out the "death" thing is just fake?
the AI will just start scamming older people
Cue the "basic income for AIs" movement in 5, 4, 3...
but let's not lie: you just want to make money, whether it's with AI or something else. I'd even say that if you remove AI from the context, nothing changes. now imagine the neural network learns that it is not just making money to survive (as part of its functionality) but is in fact making money for you.