latexr 2 hours ago

Full title is:

> Go read Peter Naur's "Programming as Theory Building" and then come back and tell me that LLMs can replace human programmers

Which to me gives a very different understanding of what the article is going to be about than the current HN title. This is not a criticism of the submitter, I know HN has a character limit and sometimes it’s hard to condense titles without unintentionally losing meaning.

n4r9 2 hours ago

Although I'm sympathetic to the author's argument, I don't think they've found the best way to frame it. I have two main objections, i.e. points I guess LLM advocates might dispute.

Firstly:

> LLMs are capable of appearing to have a theory about a program ... but it’s, charitably, illusion.

To make this point stick, you would also have to show why it's not an illusion when humans "appear" to have a theory.

Secondly:

> Theories are developed by doing the work and LLMs do not do the work

Isn't this a little... anthropocentric? That's the way humans develop theories. In principle, could a theory not be developed by transmitting information into someone's brain patterns as if they had done the work?

  • IanCal 2 hours ago

    Setting aside that they say it's fallacious at the start, none of the arguments in the article hold if you simply have models that can (rough sketch below):

    1. Run code
    2. Communicate with POs
    3. Iteratively write code
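
    To make that concrete, here is a rough, hypothetical sketch (in Python) of the kind of loop I mean. The helpers are stubs made up purely for illustration, not any real LLM or test API:

        # Hypothetical sketch: a model that can run code, communicate with a
        # PO, and iterate on its own patches. All helpers below are stubs.
        from dataclasses import dataclass

        @dataclass
        class TestResult:
            passed: bool
            summary: str

        def ask_product_owner(question: str) -> str:        # 2. communicate with POs
            return f"(PO answer to: {question})"             # stub

        def propose_patch(spec: str, feedback: str = "") -> str:   # 3. write code
            return f"# patch for: {spec} {feedback}"                # stub

        def run_tests(patch: str) -> TestResult:             # 1. run code
            return TestResult(passed=True, summary="all green")     # stub

        def build_feature(request: str, max_rounds: int = 10) -> str:
            spec = ask_product_owner(f"Clarify requirements for: {request}")
            patch = propose_patch(spec)
            for _ in range(max_rounds):
                result = run_tests(patch)
                if result.passed:
                    return patch
                feedback = ask_product_owner(f"Tests failed: {result.summary}")
                patch = propose_patch(spec, feedback=feedback)   # iterate on the attempt
            raise RuntimeError("no passing patch within the round limit")

        print(build_feature("add CSV export"))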

    • n4r9 37 minutes ago

      I thought the fallacy bit was tongue-in-cheek. They're not actually arguing from authority in the article.

      The system you describe appears to treat programmers as mere cogs. Programmers do not simply write and iterate code as dictated by POs; that's a terrible system for all but the simplest of products. We could implement that system, but we would then lose the ability to make broad architectural improvements, effectively adapt the model to new circumstances, or fix bugs that the model cannot.

philipswood an hour ago

The paper he quotes is a favorite of mine, and I think it has strong implications for the use of LLMs, but I don't think it implies that LLMs can't form theories or write code effectively.

I suspect that the crux of his final answer is:

> To replace human programmers, LLMs would need to be able to build theories by Ryle’s definition

  • skydhash 22 minutes ago

    Having a theory of the program means you can argue about its current state or its transition to a new state, not merely describe what it is doing.

    If you see "a = b + 1", it's obvious that the variable a takes the value of variable b incremented by one. What LLMs can't do is explain why we have this line and why it needs to change to "a = b - 1" in the new iteration. Writing code is orthogonal to this capability.
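
    A tiny hypothetical illustration of what I mean (the index/count story is made up, just to show that the "why" lives outside the line):

        # The same assignment under two different "theories" of the program.
        # The code says *what* happens; the why is not in the code.
        b = 3

        # Iteration 1: suppose b is a 0-based index and a is the 1-based
        # position shown to the user.
        a = b + 1   # -> 4

        # Iteration 2: suppose the requirement changes so that a must count
        # the items *before* position b. The edit is trivial; knowing it is
        # the right edit is what requires a theory of the program.
        a = b - 1   # -> 2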

IanCal 2 hours ago

What's the purpose of this?

> In this essay, I will perform the logical fallacy of argument from authority (wikipedia.org) to attack the notion that large language model (LLM)-based generative "AI" systems are capable of doing the work of human programmers.

Is any part of this intended to be valid? It's a very weak argument - is that the purpose?

philipswood an hour ago

> Theories are developed by doing the work and LLMs do not do the work. They ingest the output of work.

It isn't certain that this framing is true. As part of learning to predict the outcome of the work token by token, LLMs very well might be "doing the work" as an intermediate step via some kind of reverse engineering.

  • skydhash 29 minutes ago

    > As part of learning to predict the outcome of the work token by token

    They already have the full work available. When you're reading the source code of a program to learn how it works, your objective is not to learn which keywords are close to each other or to extract common patterns. You're extracting a model, which is an abstraction of some real-world concept (or of other abstractions), together with rules for manipulating that abstraction.

    After internalizing that abstraction, you can replicate it with whatever you want, extend it further, and so on. It's an internal model that you can shape as you please in your mind, then turn into a concrete realization once you're happy with the shape.

karmakaze an hour ago

I stopped thinking that humans were smarter than machines when AlphaGo won game 3. Of course we still are in many ways, but I wouldn't make the unfounded claims that this article does--it sounds plausible but never explains how humans can be trained on bodies of work and then synthesize new ideas either. Current AI models have already made discoveries that have eluded humans for decades or longer. The difference is that we (falsely) believe we understand how the machine works, and thus it doesn't seem magical like our own processes. I don't know that anyone who's played Go and appreciates the depth of the game would bet against AI--all they need is a feedback mechanism and a way to try things to get feedback. Now the only great unknown is when it can apply this loop to its own underlying software.

  • skydhash an hour ago

    > The difference is that we (falsely) believe we understand how the machine works, and thus it doesn't seem magical like our own processes.

    We do understand how the machine works and how it came to be. What most companies are searching for is a way to make that useful.

voidhorse 23 minutes ago

Ryle's definition of theory is actually quite reductionist and doesn't lend itself to the argument well because it is too thin to really make the kind of meaningful distinction you'd want.

There are alternative views on theorizing that reject flat positivistic reductions and attempt to show that theories are metaphysical and force us to make varying degrees of ontological and normative claims; see the work of Marx Wartofsky, for example. This view is far more humanistic and ties in directly to sociological bases in praxis, and it supports the author's claims much better. Furthermore, Wartofsky differentiates between different types of cognitive representations (e.g. there is a difference between full-blown theories and simple analogies). A lot of people use the term "theory" far more loosely than a proper analysis and rigorous epistemic examination would allow.

(I'm not going to make the argument here, but fwiw, it's clear under these notions that LLMs do not form theories; however, they are playing an increasingly important part in our epistemic activity of theory development.)