Common Lisp Screenshots: today's CL applications in action
by _emacsomancer_
GitHub and Codeberg links on the site don't open for me. ("To protect your security, codeberg.org will not allow Firefox to display the page if another site has embedded it. To see this page, you need to open it in a new window.") This is because of the use of frames:
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html>
<head>
<title>Common Lisp Screenshots</title>
<meta name="description" content="Today's Common Lisp applications in action">
<meta name="keywords" content="">
<meta name="generator" content="ORT - Ovh Redirect Technology">
<meta name="url" content="https://simple.photo/vindarel/c352e2c0177b24786fb40041657485dd/common-lisp-screenshots/">
<meta name="robots" content="all">
</head>
<frameset rows="100%,0" frameborder=no border=0>
<frame name="ORT" src="https://simple.photo/vindarel/c352e2c0177b24786fb40041657485dd/common-lisp-screenshots/">
<frame name="NONE" src="" scrolling="no" noresize>
<noframes>
<body><a href="https://simple.photo/vindarel/c352e2c0177b24786fb40041657485dd/common-lisp-screenshots/">Click here</a><hr></body>
</noframes>
</frameset>
</html>
You can fix this by replacing the OVH feature with a regular redirect, like an `index.html` with a `<meta>` tag: <meta http-equiv="refresh" content="0; url=https://simple.photo/vindarel/c352e2c0177b24786fb40041657485dd/common-lisp-screenshots/">
If possible, you can also fix it by making your links on the https://simple.photo/ page open in a new window (including HN! - https://media.simple.photo/12M3xnh3VhDMUgCs8DVhkTBI6OgDGGIX/...).
Indeed, Arc's been running on Common Lisp for a while now!
Too bad the screenshot doesn't show this very page; that would be a nice self-referential mise en abîme.
I like how the disclaimer ends up humble-bragging about the range of uses.
>"Please don't assume Lisp is only useful for Animation and Graphics, AI, Bioinformatics, B2B and Ecommerce, Data Mining, EDA/Semiconductor applications, Expert Systems, Finance, Intelligent Agents, Knowledge Management, Mechanical CAD, Modeling and Simulation, Natural Language, Optimization, Research, Risk Analysis, Scheduling, Telecom, and Web Authoring just because these are the only things they happened to list."
>Kent Pitman
He left out Guessing Animals!
https://en.wikipedia.org/wiki/Kent_Pitman
>While in high school, he saw output from one of the guess the animal pseudo-artificial intelligence (AI) games then popular. He considered implementing a version of the program in BASIC, but once at the Massachusetts Institute of Technology (MIT), instead he implemented it in several dialects of Lisp, including Maclisp.
Kent Pitman's Lisp Eliza from MIT-AI's ITS History Project (sites.google.com)
https://news.ycombinator.com/item?id=39373567
https://sites.google.com/view/elizagen-org
https://climatejustice.social/@kentpitman/111236824217096297
https://web.archive.org/web/20131102031307/http://open.salon...
The core route optimization algorithm of Routific is also written in Common Lisp :)
That's awesome, thank you. How do you know that? Is there a reference on the net somewhere?
Until Vindarel gets the TLS working there's also a direct URL: <https://simple.photo/vindarel/c352e2c0177b24786fb40041657485...>. It's a bit of a shame that there's no indication of which application each screenshot is from.
Thank you, TLS should be fine now. (fixed ±12 hours ago)
Many of them do say which program they're from; when multiple screenshots come from the same program, at least the first one says so.
Yeah, it turns out that feature is gated behind JavaScript, which is unfortunate. The website works pretty well otherwise.
While not Common Lisp, I've always found it pretty cool that AutoCAD shipped with a Lisp, making the language technically a hugely deployed commercial success.
Nowadays it also supports .NET, COM and ObjectARX.
Just like Gimp eventually added support for Python alongside Script-Fu.
Which ends up reducing the incentive to reach for Lisp languages.
Were it not for early exposure to Autolisp I would not have appreciated Lisp or Lisp-based systems, like Emacs, the way that I did. I might've ended up whinging that they didn't use a mOdErN language like JavaScript.
Autolisp definitely sent me down the left-paren path.
[flagged]
We have enough headlines about LLMs already. Let's just enjoy a cool Lisp site without some AI advocate telling us that non-AI things are irrelevant.
I'm not an AI "advocate". I'm telling y'all about how the world is. How it's going to be. I'm not happy about it, but we've crossed the threshold beyond which it's incomprehensibly silly not to factor the massive changes LLMs bring into how you work designing or implementing software. Lisp apps are cool, but as of 2026 they're fading into irrelevance. The paradigm of programming they represent is bound for the Computer History Museum and Usagi Electric's YouTube channel—not the reality of new software development. Even a legacy code base can be poured into an LLM, which will grok it instantly, answer your questions about it, and propose changes and improvements that will make it more performant, reliable, and comprehensible. I know this because I've done it.
> I'm not an AI "advocate". I'm telling y'all about how the world is. How it's going to be.
This, together with grand claims that obviously don't hold up in reality, does make you an AI advocate no matter how much you dislike the label.
If your comment were more measured and had a more nuanced view, then I'd understand wanting to push back on it. But then you also say stuff like "Even a legacy code base can be poured into an LLM, which will grok it instantly", so no wonder others see you as an AI advocate.
Well done you.
Catch all the security holes while you were reviewing it, or did you leave those to the machine as well?
I don't agree. That may be your experience, but it is annoying to have someone act as a prophet for all things and disregard what everyone else says. Emacs is still relevant in my daily work even with heavy use of LLMs.
> Even a legacy code base can be poured into an LLM
Which LLM can read a whole code base? Embeddings do not count.
The unfettered instrumental rationality of the techno-slob on full display. Bonus depravity-points if the multi-paragraph HN comments are also being outsourced to the Machine.
Depending on a corporation to do your programming (and burning half the planet in the process, pardon the hyperbole) is the very opposite end of the "hacker" ethos where Lisp stands. Very surprising to see this sort of comment on HN, of all places.
Really? Feels like most of HN lately is just “get on the AI hype train or get downvoted”
HN has never really walked the walk when it came to embodiment of the hacker spirit.
Before the incessant AI hype it was crypto, and before that it was JavaScript frameworks and before that it was ...
I've always understood hackers to be a subset of the users at HN. Maybe there were more in the early days, but with the growth of the startup business model, a lot of different users were attracted to the site. The core value seems to be an interest in technology and the cultures around it. Emphasis on the plurality of cultures, because I think there are multiple, competing ones. Though, as per the guidelines, any story interesting to users is acceptable for submission.
HN is only a name, in reality it's VC news.
Hackernews isn't really for that kind of hacker. Ever since Paul Graham became a startup wonk and VC, it's really more for "growth hackers". It was originally called "Startup News". For growth hackers, productivity, profitability, and scalability, in quantifiable form especially, are far more important than romanticism about the lone hacker or small team of geniuses building something with just a laptop and their wits, or even moral concerns about the environment. (And LLMs burn less energy, and deliver more value, than crypto did. The energy consumption of AI has been way overblown.) And Lisp was created specifically to bring about this world. It was an early experiment in intelligence by symbolic computation—one which ultimately failed as we found that we can get a lot closer to intelligence by matmuling probability weights with good old-fashioned numeric code written in C++, Fortran, or maybe even Rust. So the long-term AI initiative which gave rise to Lisp ultimately spelt its end as well.
But the force-multiplier effects of LLMs are not to be denied, even if you are that kind of hacker. Eric S. Raymond doesn't even write code by hand anymore—he has ChatGPT do everything. And he's produced more correct code faster with LLMs than he ever did by hand so now he's one of those saying "you're not a real software engineer if you don't use these tools". With the latest frontier models, he's probably right. You're not going to be able to keep pace with your puny human brain with other developers using LLMs, which is going to make contributing to open source projects more difficult unless you too are using LLMs. And open source projects which forbid LLM use are going to get lapped by those which allow it. This will probably be the next major Linux development after Rust. The remaining C code base may well be lifted into Rust by ChatGPT, after which contributing kernel code in C will be forbidden throughout the entire project. Won't that be a better world!
Kind of yes and kind of no. There are not many reasons to use Common Lisp, I agree, but the Lisp idea itself still has something to offer that can't be found in other systems.
I'm comfortable declaring that the most powerful thing about Lisp is not macros but the concept of an environment. Even in 2026, many languages implement the concept of evaluating code and making it immediately available, but nothing is like Lisp.
Lower-level programming languages today all still require compilation. Lisp is one of the few I've found where you can eval code and it's immediately usable, and probably the only one that really relies heavily on REPL-driven development.
Env+REPL, imo, is the true power, still far ahead of other languages: I can explore the memory of my program while it is running, change the code, and see the changes in real time.
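For anyone who hasn't worked this way, here is a toy sketch of that workflow, typed form by form into a running image (any conforming CL such as SBCL; the names are made up for illustration):
(defvar *greetings-sent* 0)    ; live state we can inspect at any time
(defun greet (name)
  (incf *greetings-sent*)
  (format t "Hello, ~a!~%" name))
(greet "world")                ; prints "Hello, world!"
*greetings-sent*               ; => 1, poking at the running program's memory
(describe '*greetings-sent*)   ; explore the image interactively
;; Redefine GREET without restarting anything; the state survives and
;; every caller picks up the new definition immediately.
(defun greet (name)
  (incf *greetings-sent*)
  (format t "Bonjour, ~a! (~d so far)~%" name *greetings-sent*))
(greet "world")                ; prints "Bonjour, world! (2 so far)"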
The issue is that CL is old, and Clojure would be so close to perfect if it weren't for Java. Clojure replaces Java, not CL, and this is both its strength and its weakness.
Comment was deleted :(
Can your LLM do that to a running system? Or will it have to restart the whole program to run the next iteration? Imagine you build something with long load-times.
Also, your Lisp will always behave exactly as you intended and won't hallucinate its way to weird destinations.
I can't speak to getting an LLM to talk to a CL listener, simply because I don't know the mechanics of hooking it up. But seeing as they can talk to most anything else, I see no reason why they couldn't.
What they can certainly do is iterate with a listener with you acting as a crude cut and paste proxy. It will happily give you forms to shove into a REPL and process the results of them. I’ve done it, in CL. I’ve seen it work. It made some very interesting requests.
I’ve seen the LLM iterate, for example, with source code by running it, adding logging, running it again, processing the new log messages, and cycling through that, unassisted, until it found its own “aha” and fixed a problem.
What difference does it make whether it’s talking to a shell or a CL listener? It’s not like it cares. Again, the mechanics of hooking up an LLM to a listener directly, I don’t know. I haven’t dabbled enough in that space to matter. But that’s a me problem, not an LLM problem.
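FWIW, the listener side of that hookup isn't exotic: anything that speaks the Swank protocol can send forms to a running image. A rough sketch, assuming Quicklisp and the Swank library are available (the port number is arbitrary, and wiring an actual LLM harness to it remains speculation):
(ql:quickload :swank)                            ; load the Swank server
(swank:create-server :port 4005 :dont-close t)   ; accept connections on localhost:4005
;; Anything speaking the Swank protocol (normally SLIME in Emacs) can now
;; connect, evaluate forms in this live image, and read back the results.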
An LLM can modify the code, rebuild and restart the next iteration, bring it up to a known state and run tests against that state before you've even finished typing in the code. It can do this over and over while you sleep. With the proper agentic loop it can even indeed inject code into a running application, test it, and unload it before injecting the next iteration. But there will be much less of a need for that kind of workflow. LLMs will probably just run in loops, standing up entire containers or Kubernetes pods with the latest changes, testing them, and tearing them down again to make room for the next iteration.
As for hallucinations, I believe those are like version 0 of the thing we call lateral thinking and creativity when humans manifest it. Hallucinations can be controlled and corrected for. And again—you really need to spend some time with the paid version of a frontier model because it is fundamentally different from what you've been conditioned to expect from generative AI. It is now analyzing and reasoning about code and coming back with good solutions to the problems you pose it.
Ah, so I need to pay 100s of $ and use the "frontier" model, which is always a moving BS excuse. Last month Opus 4.5 was the frontier, gotta use it, now it's 4.6, and none of them so far have produced anything consistently good.
It is NOT reasoning about code. It's a glorified autocomplete that wastes energy. Associating "reasoning" with it is an anthropomorphization.
And calling hallucinations "lateral thinking" is a fucking stretch.
"Let's use tool `foo` with flag `-b`" even if the man page doesn't even mention said flag.
Sure, they might be able to create numerous iterations of containers, testing them, burning resources... but that is literally a thousand monkeys smashing their heads on typewriters to crank out 4chan posts.
High-level programming languages were conceived by humans and for humans. Will AIs in the future be better off using their own languages, or maybe even outputting machine language directly?
Comment was deleted :(
You are comparing a PL to a text generator. What are you on?
Hey, please don't cross into personal attack - you can make your substantive points without that, and we'll all be better for it.
I believe (correct me if I’m wrong), their point is that with time, we’re writing less code ourselves and more through LLMs. This can make people disconnected from the “joy” of using certain programming languages over others. I’ve only used cl for toy projects and use elisp to configure my editor. As models get better (they’re already very good), the cost of trashing code spirals downwards. The nuances of one language being aesthetically better than other will matter less over time.
FWIW, I also think performant languages like Rust will gain way more prominence. Their main downside is that they're more "involved" to write. But they're fast and have good type systems. If humans aren't writing code directly anymore, would a language being simpler or cleverer to read and write ultimately matter? Why would you ask a model to write your project in Python, for instance? If only a model will ever interact with the code, the choice of language will be purely functional. I know we're not fully there yet, but the latest models like Opus 4.6 are extremely good at reasoning and often one-shot solutions.
Going back to lower level languages isn’t completely out of the picture, but models have to get way better and require way less intervention for that to happen.
No, I'm not.
I used to appreciate Lisp for the enhanced effectiveness it granted to the unaided human programmer. It used to be one of the main reasons I used the language.
But a programmer+LLM is going to be far more effective in any language than an unaided programmer is in Lisp—and a programmer+LLM is going to be more effective in a popular language with a large training set, such as Java, TypeScript, Kotlin, or Rust, than in Lisp. So in a world with LLMs, the main practical reason to choose Lisp disappears.
And no, LLMs are doing more than just generating text, spewing nonsense into the void. They are solving problems. Try spending some time with Claude Opus 4.6 or ChatGPT 5.3. Give it a real problem to chew on. Watch it explain what's going on and spit out the answer.
> But a programmer+LLM is going to be far more effective in any language than an unaided programmer is in Lisp—and a programmer+LLM is going to be more effective in a popular language with a large training set, such as Java, TypeScript, Kotlin, or Rust, than in Lisp. So in a world with LLMs, the main practical reason to choose Lisp disappears.
You are working on the assumption that humans don't need to even look at the code ever again. At this point in time, that is not true.
The trajectory over the last 3 years does not lead me to believe that it will be true in the future.
But let's assume that in some future it is true. If that is the case, then Lisp is a better representation than those other languages for LLMs to program in; after all, why have the LLMs write in JavaScript (or Java, or Rust, or whatever), which a compiler parses into an AST, which then gets lowered into machine code?
Much better to program in the AST itself.
IOW, why program in an intermediate language like JS, Java, Rust, etc., when you can program in the lowered language?
For humans, using JS, Java, or Rust lets us verbosely describe whatever the AST is in terms humans can understand; however, the more compact AST is unarguably better for the way LLMs work (token prediction).
So, in a world where all code is written by LLMs, using an intermediate verbose language is not going to happen unless the prompter specifically forces a language choice.
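To make the "program in the AST itself" point concrete, here is a tiny sketch (the variable name is made up) of treating a form as ordinary data, rewriting it, and running it in Common Lisp:
(defvar *form* (list '+ 1 (list '* 2 3)))   ; the expression (+ 1 (* 2 3)) as a plain list
(first *form*)                              ; => +   walk the tree like any other list
(setf (second *form*) 10)                   ; rewrite a node in place
*form*                                      ; => (+ 10 (* 2 3))
(eval *form*)                               ; => 16  run the rewritten tree
There is no separate parse step needed to recover the tree; the source already is the tree.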
> The trajectory over the last 3 years does not lead me to believe that it will be true in the future.
Everything changed in November of 2025 with Opus 4.5 and GPT 5.2 a short time later. StrongDM is now building out complex systems with zero human intervention. Again, stop and actually use these models first, then engage in discussion about what they can and can't do.
> But let's assume that in some future it is true. If that is the case, then Lisp is a better representation than those other languages for LLMs to program in; after all, why have the LLMs write in JavaScript (or Java, or Rust, or whatever), which a compiler parses into an AST, which then gets lowered into machine code?
That's your human brain thinking it knows better. The "bitter lesson" of AI is that more data=better performance and even if you try to build a system that encapsulates human-brain common sense, it will be trounced by a system simply trained on more data.
There is vastly, vastly more training data for JavaScript, Java, and Rust than there is for Lisp. So, in the real world, LLMs perform better with those. Unlike us, they don't give a shit about notation. All forms of token streams look alike to them, whether they involve a lot of () or a lot of {;}.
> That's your human brain thinking it knows better. The "bitter lesson" of AI is that more data=better performance and even if you try to build a system that encapsulates human-brain common sense, it will be trounced by a system simply trained on more data.
I feel you glossed over what I was saying.
Let me try to rephrase: if we ever get to a future where humans are not needed to look at or maintain code again, all the training data would be LLM generated.
In that case, the ideal language for representing logic in programming is still going to be a Lisp-like one.
Until there is a bug and, say due to DNS issues, your LLM isn't reachable because everything is down.
Good thing I've got Qwen downloaded to my MacBook in case of that eventuality!
Highly recommend https://github.com/fosskers/vend
My hammer is also solving problems. Still, hammering is not programming. LLMs are text generators.
The difference between the programming tools available before and LLM-based programming tools is the difference between your hammer and that of Fix-it Felix, which magically "fixes" anything it strikes. We are living in that future, now. Actually try it with frontier models and agentic development loops before you opine.
Assuming that everybody who disagrees with such takes simply can't have tried the latest generator is quite telling. Consider that maybe I'm not as easily impressed?