Arc Forum
2 points by shader 1742 days ago | link | parent

This comment/discussion is not really about your work, but about Ivan Illich's book "Tools for Conviviality" as presented in your introduction.

I don't think I agree with the distinction between manipulative vs. convivial tools, which gives the impression that some tools are inherently manipulative and will inevitably progress down the path of management, while others are intrinsically convivial and will help improve agency and choice over time.

Even if the distinction is applied to schools of thought about how tools should be used (short-term productivity vs. long-term freedom), I don't think it matches reality. There is nothing about the use of tools in itself that restricts autonomy; anyone can at any time choose to use a different tool if they so desire (excepting external constraints such as employer mandates). The description of flaws in one tool being papered over by others, producing an increasingly unwieldy mass of mutually supporting tools with no chance of improvement or reconstruction, sounds scary, but I haven't seen it in practice. At any point in time, if someone thinks they can make a better tool to cut out several intervening steps in a process or replace an existing tool, they are welcome to make it - and should it prove truly simplifying and cost-reducing, it will be adopted. Special interests don't capture tools to make them ends in themselves; if at any point a tool ceases to be valuable (at least in perception), it will be cut out of the workflow.

One might respond by pointing to the current stack (x86, Linux, Docker, Python, web frameworks, HTML, JavaScript, the browser) as having so much inertia that replacing any of the lower layers would be nearly impossible. The truth, however, is that anyone can replace any of those pieces in their own workflow whenever they wish; they will just have a hard time convincing everyone else to do the same. See RISC-V, rkt, Clojure, etc.

So I don't see "papering" vs "replacing" as two contrary and mutually exclusive actions. We can do both: ameliorate the current problems with temporary patches, and work on longer-term replacements. Ideally, the two steps could be taken at the same time to save effort, as in the strangler pattern [0]. In general, there's a trade-off, and falling to either extreme can mean failure. Constant patching leads to technical debt and ossification, but overzealous rewrites can run out of money and time before they produce any benefits.
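To make the strangler pattern concrete, here is a minimal sketch in Python; the names in it (facade, legacy_handler, new_handler, MIGRATED) are made up for illustration, not taken from any real system:

    # A facade routes each call to the new implementation once that
    # feature has been migrated, and falls back to the legacy code
    # otherwise. Patching and replacing proceed behind one interface.

    MIGRATED = {"search", "login"}   # features already reimplemented

    def legacy_handler(feature, request):
        return "legacy %s(%s)" % (feature, request)

    def new_handler(feature, request):
        return "new %s(%s)" % (feature, request)

    def facade(feature, request):
        if feature in MIGRATED:
            return new_handler(feature, request)
        return legacy_handler(feature, request)

    print(facade("search", "q=convivial"))   # already served by the new system
    print(facade("billing", "invoice=42"))   # still served by the legacy system

Callers never notice which implementation served them, so the legacy code can be strangled one feature at a time while still being patched wherever it remains in use.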

In some sense, I think I agree with the direction you're taking it. We can have better tools than we currently use, and we don't need to hang on to compatibility with old systems built on wrong concepts with leaky abstractions. I just think we need to recognize the freedom we already have to shake off the past, instead of treating the existing inertia as inevitable or even relevant. It is not necessary to save or change the rest of the world.

----------

[0] https://martinfowler.com/bliki/StranglerFigApplication.html



3 points by akkartik 1741 days ago | link

"We can have better tools than we currently use, and we don't need to hang on to compatibility to old systems built on wrong concepts with leaky abstractions. I just think we need to recognize the freedom we already have to shake off the past..."

I absolutely agree. The shackles are nowhere except in our minds.

"It is not necessary to save or change the rest of the world."

I absolutely disagree. This argument is akin to saying one can survive a pandemic without saving or changing the rest of the world. It's only true if you don't take costs into account. It's far cheaper to stay immune in the presence of herd immunity than without.

Over-engineering has a way of compounding. It's only obvious that something is unnecessary for a brief window of time. Then people start using it, and it starts becoming load-bearing. Compounding efforts to unwind past decisions quickly multiply until they exceed individual limits of effort.

Even within an individual's limits, I care very much about people being able to make changes to their computers without devoting their lives to the endeavor. We all should have lives outside computers. We should be able to modify our compilers without spending years understanding them. Just by poking and tinkering and getting feedback for an hour here and there.

Even if we assume infinite capacity for unwinding past decisions, it gets rapidly impossible to even see alternatives to them.

When I think about how many different things I've had to learn just to get this computer to its current state, it seems laughable to say any individual could do it all. Nobody should have to go through what I've been through.

At the very least we need some critical mass of people to care about implementation complexity. One lone voice seems really fragile, because tiny embers of light can get put out by many sources of bad luck, no matter how lofty their intentions.

"I don't see "papering" vs "replacing" as two contrary and mutually exclusive actions. We can do both."

Certainly. I do both. In my day job I have to deal with Docker and Python and Javascript. But most people do only that. They think programming is about knowing an ever-growing list of nouns rather than concepts. Replacing is dying out. Because we've given early decisions too much inertia.

-----

3 points by shader 1739 days ago | link

> I absolutely disagree.

Maybe I should have said more precisely "it's not necessary to save the rest of the world at once". I think it comes down to something like the second law of motion: F = ma. There's no reason you couldn't move the whole world, it's just a trade-off between time, distance, and force. I think a lot of people get stuck thinking they have to change everybody before they can get moving, or that the idea isn't a success unless it "wins", but neither is true.

New ideas rarely displace old ones. It seems that way in computing, because the space has grown so rapidly, but I would almost wager that most languages and platforms are used at least as much now as they were at their peaks, just because there are so many more computers and developers now than there used to be. From that perspective, new ideas don't defeat old ones, they just capture more of the new frontiers. As new companies are founded and new developers graduate from college, they adopt new technologies, while the old teams generally stick with what they started with. Sometimes they fail or reorganize, but many times they persist with their old systems.

We could be disheartened by that realization (COBOL and Tcl never died...), but I think with the right perspective it can bring more hope. Lisp hasn't actually been destroyed, it just got outpaced - there are probably more people using more variants of Lisp now than at any point in history. As such, we don't have to be disappointed when we don't convince people to switch to our model - it was foolish to expect that in the first place - instead, we can look forward to whatever positive benefit our work does bring to those who adopt it.

> This argument is akin to saying one can survive a pandemic without saving or changing the rest of the world. It's only true if you don't take costs into account. It's far cheaper to stay immune in the presence of herd immunity than without.

To address your counter-argument more specifically, I don't think the analogy is appropriate. The pandemic analogy may be very apropos to our current global crisis, but doesn't describe how ideas work. People don't get "infected" by bad ideas in the same way they are by viruses; they learn and adopt ideas intelligently. We don't have to worry about maintaining our immunity as a small community.

However, other aspects of the community model are more appropriate. If we don't reach critical mass, or have clear motivations and objectives, we'll eventually dissipate and move on to other things. See: the arc language community. Everyone doubtless benefited from the experience, but the outcome wasn't a future in which all of us use Arc as our primary language. So yes, we should invest in community and growth, but that's not the same thing as trying to save the world.

Also, there are economies of scale that come with larger communities, which I think align much better with your reference to "cost". In a small community with an early-stage technology, everyone has to build things from scratch before they can use them. Larger, better established communities benefit from the work of those that came before. No argument there. But that's not really an argument for or against "saving the world" either. It would almost be a category error to observe the advantages of being an established project and say that a new one needs to adopt the strategy of "being established".

> Over-engineering has a way of compounding. It's only obvious that something is unnecessary for a brief window of time. Then people start using it, and it starts becoming load-bearing. Compounding efforts to unwind past decisions quickly multiply until they exceed individual limits of effort.

Again, I think this is focusing on the existing mountains and edifices and forgetting we can just go around them. It would indeed be a pain to redesign x86, given that its main advantage is that it is x86. But instead some people developed RISC-V. Also, I don't think it's true that something is only obviously unnecessary at the beginning and seems more essential later; the fact that we're having these discussions (and that things like the UNIX-Haters Handbook exist) attests that people are quite capable of seeing faults and imagining alternatives. Unwinding decisions exceeds individual capacity only if you're trying to rebase the rest of the stack onto your changes. That is, only if you try to save the world, which is begging the question.

> Even within an individual's limits, I care very much about people being able to make changes to their computers without devoting their lives to the endeavor. We all should have lives outside computers. We should be able to modify our compilers without spending years understanding them. Just by poking and tinkering and getting feedback for an hour here and there.

I admire that objective, and am also addressing that issue in the system I'm currently designing. Imagine being aware of any changes someone made to their own copy of an application without requiring them to submit a pull request, and merging them in if you like the work. No need to track forks or use GitHub pages; the development tools themselves track relevant changes that other people are making to the same codebase. That way, the first time anyone finds and solves a bug, everyone else benefits. And you don't have to worry about your local installation and the upstream repository getting out of sync if you make a personal modification; they aren't treated any differently. I think that would boost open-source development productivity immensely. Such is what I am trying to design.
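To make that a bit more concrete, here is a toy sketch in Python; every name in it (Workspace, Patch, visible_from) is hypothetical, and it illustrates the idea rather than the actual design:

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Patch:
        author: str
        description: str
        diff: str                  # contents of the change, e.g. a unified diff

    @dataclass
    class Workspace:
        codebase: str              # identity of the shared codebase
        patches: list = field(default_factory=list)

        def edit(self, author, description, diff):
            # A local modification is just another patch; "my copy" and
            # "upstream" are not treated differently.
            self.patches.append(Patch(author, description, diff))

        def visible_from(self, other):
            # Changes other people made to the same codebase are visible
            # without anyone opening a pull request.
            return [p for p in other.patches if p not in self.patches]

        def merge(self, patch):
            self.patches.append(patch)

    mine = Workspace("text-editor")
    theirs = Workspace("text-editor")
    theirs.edit("alice", "fix off-by-one in word wrap", "--- a/wrap.py ...")

    for p in mine.visible_from(theirs):
        mine.merge(p)              # the first fix anyone makes benefits everyone

A real version would obviously need identity, trust, and conflict resolution, but the point is that discovery and merging live in the tool itself rather than in a hosting site.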

> Nobody should have to go through what I've been through.

Thanks for your efforts. And now that you've done them, I don't think anyone will have to. They might want to change something, but it will hopefully be much easier to build on your work than it was for you to do it in the first place.

> At the very least we need some critical mass of people to care about implementation complexity. One lone voice seems really fragile, because tiny embers of light can get put out by many sources of bad luck, no matter how lofty their intentions.

I agree with that, and that's where I think I'll close. What we need is sustainable growth and community development, so that good ideas don't die out. But we shouldn't sacrifice any other values for the sake of growth, much less dominance. If our ideas and community have a larger R0 they will eventually become dominant, but I don't see the need to exhaust ourselves forcing the issue and wasting time and energy overcoming irrelevant resistance.

-----

2 points by akkartik 1739 days ago | link

I mostly agree with this. Replacing all of existing software isn't anywhere near on my radar. My goal right now is just for Mu to not die :) You're right about going around things rather than redesigning them. Isn't that what I'm doing? I think this is perfectly in keeping with "replace vs paper over". There's no universal quantifier attached that requires the old thing to be replaced everywhere.

I was only using the analogy with pandemics to point out that there are situations where secondary consequences exist, even if it superficially seems like one can go one's own way. I didn't intend to suggest Mu provides any sort of immunity to anything.

> Unwinding decisions exceeds individual capacity only if you're trying to rebase the rest of the stack onto your changes. That is, only if you try to save the world, which is begging the question.

I think I'm losing the thread of this particular back and forth. Perhaps we're saying the same thing, and you took papering over vs replacing to be more mutually exclusive than I intended. I think it's existential for replacing to take some mindshare away from papering over, because of the overwhelming tendency for everyone around us to go the other way. Once you start talking about not having to rebase the rest of the stack, I feel like you're in my replacing camp. Functional replacement rather than sub-system replacement.

> And now that you've done them, I don't think anyone will have to. They might want to change something, but it will hopefully be much easier to build on your work than it was for you to do it in the first place.

Hah! Thank you, but don't underestimate humanity's ability to forget.

-----

2 points by shader 1739 days ago | link

Based on the fact that we're discussing this on the arc forum, and you've built languages to replace-ish assembly and C, and I'm designing a language to replace pretty much everything else, I'd say we're much more on the same page than nearly everyone else.

This particular back-and-forth started when I said we "don't need to save the whole world" and you "absolutely disagreed". It sounds like you've agreed with all of my points or softened yours, so I'm not really sure where that leaves us.

Somewhat ironically, I sometimes think of the design I keep hinting at basically as a rebase of most of computer science onto a virtual computer with 32-byte pointers to ROM.

-----

2 points by akkartik 1739 days ago | link

Ah, I see.

> > It is not necessary to save or change the rest of the world.

> I absolutely disagree.

In my mind the poles of this disagreement were zero change to the world vs. non-zero change to the world. I was saying it seems futile to try to change only myself and not some others. The thought of a universal quantifier, zero vs. infinity, hadn't occurred to me at that point.

-----

2 points by shader 1736 days ago | link

Yeah, I guess I could have phrased that better.

In my mind, "not necessary" didn't imply "necessarily not", but I can see how it might sound like I wanted to just let the world burn and walk away. I only intended to suggest not worrying about it and not putting that popular-opinion cart before your original-objective horse.

-----

2 points by akkartik 1739 days ago | link

Heh, I notice now that I never actually said "replace" in the paper. I said, "take it out and think about the problem anew." That sounds like we're on the same page?

-----