Arc Forum | shader's comments
2 points by shader 1038 days ago | link | parent | on: Bel in Clojure

That was a fun and insightful take on the experience of implementing Bel.

The 'lit concept is pretty nice; like tags, but more thoroughly integrated.

I also like the idea of integrating interpreter features more completely, with globe, scope, err, etc.

It raises a question, though: how does Bel relate to the Arc community? Is it the full successor? Can they coexist somehow?

-----

2 points by shader 1376 days ago | link | parent | on: Correctness and Complexity

I haven't read this yet, but it looks relevant to some of our past discussions.

-----

2 points by shader 1376 days ago | link

Having read it now, I'd say it doesn't focus as much on the practical aspects of complexity vs. correctness in software, but it does cover the theory behind the problem rather thoroughly.

I got it from an HN thread about the safety of Zig vs Rust (https://news.ycombinator.com/item?id=26537693).

From that context, the main idea that I'm thinking about is the tradeoff between "correctness" and "complexity" in an application or language.

That is, you could add complexity (language features, tools, architecture) to ensure some kinds of correctness, but that complexity adds risks of its own.

So, is it better to have a simpler language that doesn't have as many guarantees but is easier to understand and iterate with (Zig), or a more complex language that has more guarantees but at the cost of program complexity and slower builds (Rust, Haskell)?

This is the discussion that I thought y'all would be interested in.

-----


Thanks for sharing! I always like ideas that integrate previously distinct concepts into a single recursive hierarchy.

I am curious, though: what makes it slow? Is it just an implementation detail, or something fundamental to the algorithms / concepts?

-----

3 points by rocketnia 1378 days ago | link

There are a number of factors going into what makes it slow (and what might make it faster). I'm particularly hopeful about points 4 and 5 here, basically the potential to design data structures which pay substantially better attention to avoiding redundant memory use.

By the way, thanks for asking! I'm reluctant to talk about this in depth in the documentation because I don't want to get people's hopes up; I don't know if these optimizations will actually help, and even if they do, I don't know how soon I'm going to be able to work on building them. Still, it's something I've been thinking about.

1. Contracts on the whole

Punctaffy does a lot of redundant contract checking. I have a compile-time constant that turns off contract checking[1], and turning it off gives a time reduction in the unit tests of about 70%, reducing a time like 1h20m to something more like 20m. That's still pretty slow, but since this is a quick fix for most of the issue, it's very tempting to just publish a contractless variation of the library for people who want to live on the edge.

2. Whether the contracts trust the library

Currently, all the contracts are written as though they're for documentation, where they describe the inputs and the outputs. This checks not just that the library is being used with the correct inputs but also that it's producing the correct outputs. Unless the library is in the process of being debugged, these contracts can be turned off.

(Incidentally, I do have a compile-time constant that turns on some far more pervasive contract-checking within Punctaffy to help isolate bugs.[2] I'll probably want it to control these output-verifying contracts as well.)

3. The contract `hypernest/c`

One of the most fixable aspects here is that my `hypernest/c` contract checks a hypernest's entire structure every time, verifying that all the different branches zip together. If I verified this at the time a hypernest was constructed, the `hypernest/c` contract could just take it for granted that every hypernest was well-formed.
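
A minimal sketch of that construction-time idea (in Python with hypothetical names; the real code is Racket structs and contracts), just to show the shape of the optimization:

  # Sketch only: `check_structure` stands in for the expensive
  # "do all the branches zip together?" traversal.
  def check_structure(brackets):
      return isinstance(brackets, list)   # placeholder for the real walk

  class Hypernest:
      def __init__(self, brackets):
          if not check_structure(brackets):   # expensive check, done once
              raise ValueError("ill-formed hypernest")
          self.brackets = brackets
          self.well_formed = True             # cached and trusted afterwards

  def hypernest_c(value):
      # The contract becomes a constant-time flag test.
      return isinstance(value, Hypernest) and value.well_formed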

4. Avoidable higher-dimensional redundancy in the data structure

Of course, even without contracts, 20 minutes is still a pretty long time to wait for some simple tests to compile. I don't want to imagine the time it would take to compile a project that made extensive use of Punctaffy. So what's the remaining issue?

Well, one clue is a previous implementation of hypersnippets I wrote before I refactored it and polished it up. This old implementation represented hypertees not as trees that corresponded to the nesting structure, but as plain old lists of brackets. Every operation on these was implemented in terms of hyperstacks, and while this almost imperative style worked, it didn't give me confidence in the design. This old implementation isn't hooked up to all the tests, but it's hooked up to some tests that correspond to ones that take about 3 minutes to run on the polished-up implementation. On the old list-of-brackets implementation, they take about 7 seconds.

I think there's a reason for that. When I represent a hole in a hypertee, the shape of the hole itself is a hypertee, and the syntax beyond each of the holes of that hole is a hypertee that fits there. A hypertee fits in a hole if its low-degree holes match up. That means that in the tree representation, I have some redundancy: Certain holes are in multiple places that have to match up. Once we're dealing with, say, degree-3 hypertees, which can have degree-2 hypertees for holes, which can have degree-1 hypertees for holes, which have a degree-0 hypertee for a hole, the duplication compounds on itself. The data structure is using too much space, and traversing that space is taking too much time.

I think switching back to using lists of brackets and traversing them with hyperstacks every time will do a lot to help here.

But I have other ideas...

5. Avoidable copying of the data structure

Most snippets could be views, carrying an index into some other snippet's list of brackets rather than carrying a whole new list of brackets of their own. In particular, since Punctaffy's main use is for parsing hyperbracketed code, most snippets will probably be views over a programmer's hand-written code.
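
As a rough illustration of the view idea (a Python sketch with hypothetical names; the real thing would be a Racket struct over the bracket representation):

  # Sketch only: a snippet as a view over one shared, flat bracket
  # sequence, rather than a freshly copied structure of its own.
  from dataclasses import dataclass

  @dataclass(frozen=True)
  class SnippetView:
      brackets: tuple   # the original, hand-written bracket sequence
      start: int        # where this snippet begins in that sequence
      end: int          # one past where it ends

      def sub(self, lo, hi):
          # A sub-snippet is just another pair of indices; nothing is copied.
          return SnippetView(self.brackets, self.start + lo, self.start + hi)

      def items(self):
          return self.brackets[self.start:self.end]

  code = tuple("({[()]})")             # stand-in for hyperbracketed source
  whole = SnippetView(code, 0, len(code))
  inner = whole.sub(1, 7)              # shares `code`; stores only two ints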

6. The contract `snippet-sys-unlabeled-shape/c`

There's also another opportunity that might pay off a little. Several of Punctaffy's operations expect values of the form "snippet with holes that contain only trivial values," using the `snippet-sys-unlabeled-shape/c` contract combinator to express this. It would probably be easy for each snippet to carry some precomputed information saying what its least degree of nontrivial-value-carrying hole is (if any). That would save a traversal every time this contract was checked.

This idea gets into territory that makes some more noticeable compromises to conceptual simplicity for the sake of performance. Now a snippet system would have a new dedicated method for computing this particular information. While that would help people implement efficient snippet systems, it might intimidate people who find snippet systems to be complicated enough already.

It's not that much more complicated, so I suspect it's worth it. But if it turns out this optimization doesn't pay off very well, or if the other techniques already bring Punctaffy's performance to a good enough level, it might not turn out to be a great tradeoff.

---

[1] `debugging-with-contracts-suppressed` at https://github.com/lathe/punctaffy-for-racket/blob/399657556...

[2] `debugging-with-contracts` at https://github.com/lathe/punctaffy-for-racket/blob/399657556...

-----

2 points by shader 1360 days ago | link

I want to think more about this and continue the conversation, but I'm worried the reply window will close first.

Since I presume your input space is relatively small (the AST of a program, which usually only has a few thousand nodes), it sounds like you have some sort of state-space explosion. Your comment about recursive matching of hypertees sounds like the biggest problem. Just a shot in the dark (having not studied what you're doing yet), but is there any chance you could use partial-order reduction, memoization, backtracking, etc. to reduce the state-space explosion?
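
For concreteness, here's the kind of memoization I mean, as a generic Python sketch (not tied to Punctaffy's actual representation, which I haven't studied):

  # Sketch only: memoize a recursive structural check so identical
  # sub-shapes are verified once instead of on every encounter.
  from functools import lru_cache

  @lru_cache(maxsize=None)
  def well_formed(shape):
      # `shape` is a hashable nested tuple standing in for a hypertee;
      # a real check would do more than just recurse.
      return all(well_formed(sub) for sub in shape)

  well_formed(((), ((),)))   # repeated sub-shapes become cache hits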

I could be wrong, but most of the other optimizations sounded like they address constant factors, like contract checking. But then I don't know much about how contracts work; I guess the verification logic could be rather involved itself.

If the window closes, maybe we could continue at #arc-language:matrix.org

-----

3 points by rocketnia 1354 days ago | link

I'm sorry, I really appreciate it, but right now I have other things I need to focus on. I hope we can talk about Punctaffy in the future.

-----


I would also recommend looking into moving to a GitLab repository; they offer CI and many other features for free.

I also generally like them more than GitHub (maybe because they're more open-source friendly?), but that's just me.

-----


Aaron Hsu discusses the cumulative effects of complexity and value of simplicity; reminded me a lot of our recent discussions here.

One of his main focal points is that "Generalized pointers are the refined sugar of programming."

He points out the size and complexity of an expression tree rendered as linked list nodes using pointers - the nodes are large, and the memory layout is unknown and unpredictable. We have to rely on GC to manage it all.

He proposes an alternative structure based on APL using a "depth list" format. The nodes are in one list, and their location in the tree is in a parallel list. This makes memory usage and layout predictable, and makes the structure much more amenable to generalized memory operations.
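
A toy version of the depth-list idea (in Python rather than APL, and simplified from the talk):

  # Sketch only: the expression (+ 1 (* 2 3)) as two parallel flat
  # arrays, one of node values and one of node depths, instead of a
  # pointer-linked tree.
  nodes  = ['+', 1, '*', 2, 3]
  depths = [ 0,  1,  1,  2, 2]   # a child's depth is its parent's + 1

  def children(i):
      # Recover the direct children of node i from the depth vector.
      out, d = [], depths[i]
      for j in range(i + 1, len(nodes)):
          if depths[j] <= d:
              break
          if depths[j] == d + 1:
              out.append(j)
      return out

  children(0)   # -> [1, 2]: the literal 1 and the (* 2 3) subtree
  children(2)   # -> [3, 4]: the literals 2 and 3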

Thoughts?

-----

2 points by akkartik 1713 days ago | link

I watched this just enough to confirm my initial suspicion that I'd seen it a few months ago. It's certainly in my neighborhood, but there are a lot of words in the first few minutes that mean different things to different people. It seems possible that what he means by obesity is just "running slower than optimal". I don't think that matters much. He talks about waste and excess, but it's actually nice to be able to ball up a paper and start afresh when you're writing a novel or a paper. Waste isn't always a bad thing. Civilizations are in some ways defined by what they waste (https://www.ribbonfarm.com/2012/08/23/waste-creativity-and-g...). So I wish I had a more concrete motivation for what he's aiming towards, so I could assess if the waste he's concerned about is something I'm concerned about.

Regarding replacing pointers with depth lists, this video has a more detailed explanation, which is what I'm going by: https://dyalog.tv/Dyalog18/?v=hzPd3umu78g.

On one hand I'm glad to see radical ideas like this. As I've struggled to make heap allocations safe and thought about how Rust does it, I've often felt acutely uncomfortable that things have to be as they are. So maybe he's right and pointers are refined sugar that we can thrive without.

But I'm not yet convinced by this particular presentation. Depth lists seem to be basically manually allocated memory that is managed by array indexes rather than addresses. All the benefits derive from them having a consistent lifetime. That gives up a lot of the flexibility of heap pointers! Rather than frame this as, "here's a mechanism that is applicable everywhere," which seems patently false, I'd like to see more of an argument that yes, there are programs you can't write without pointers, but you don't actually need them. From this perspective, Rust's position in https://rust-unofficial.github.io/too-many-lists seems more honest:

"I hate linked lists. With a passion. Linked lists are terrible data structures. Now of course there's several great use cases for a linked list... But all of these cases are super rare... Linked lists are as niche and vague of a data structure as a trie."

(Even this is inadequate. Rust is not just giving up linked lists, it gives up any data structure that may have two pointers to a single allocation. Doubly linked lists. Trees with a parent pointer. And on and on. Maybe all these data structures are super rare. But it gives me the warm fuzzies to know my language can support them. And I need a stronger argument to give them up.)

The trade-off Mu makes is different: you can have any data structure you want, but refined-sugar will cost you performance to ensure safety, and you'll have to deal with a little additional low-level complexity to juggle two kinds of pointers. I prefer this trade-off to anything else I've seen, but I'm still not quite happy with it. I wish there was something better, or some argument that would persuade me to settle with one of these solutions.

-----


How hard do you think it would be to port SubX / Mu for RISC-V?

-----

2 points by akkartik 1728 days ago | link

Given how close SubX is to x86 -- and Mu is to SubX -- I'd say 'porting' is the wrong word. So writing something in the spirit of SubX/Mu for RISC-V would be a non-trivial effort. Still not that much, since there isn't much code, and since there's lots of scope for manually transliterating existing code piecemeal.

If someone started this I'd be glad to contribute to it.

-----


I'm still trying to figure out what I want to use for my static site publishing.

I've looked at Hugo [0] and Haunt [1]. The former is powerful, but a bit challenging to configure and requires tons of golang libraries to build. Prohibitively many, since I want to run my server on Guix-SD. The latter is simple and clean, but only supports a few formats.

The main problem is that I use org-mode for my writing.

I probably just need to add a step in the build pipeline that runs Emacs to export the org-mode source in a different format amenable to haunt or similar. Some people just customize the org exporter and use that directly, but that seems like even more work.

What's everyone else using?

---------

[0] https://gohugo.io/

[1] https://dthompson.us/projects/haunt.html

-----

3 points by zck 1727 days ago | link

I'm using a static site generator I wrote in arc. My workflow is as such:

1. Write my entries in an org file that contains all the entries.

2. Narrow to the subtree (C-x n s), and export to an HTML buffer (C-c C-e h H)

3. Manually copy the relevant part of the overly-large HTML file into a new file (blog-entry-name.html). This is only the content of a page; it does not include any headers, footers, navbar stuff, or the html wrapper around the body.

4. Insert by hand a serialized arc template, containing three keys: a url slug, the title of the page, and the publication date.

5. Update the frontpage of my site to link to the new page. This file is similarly formatted: an arc template, then html content.

6. Update a file that indicates which pages should go in the sidebar of my site.

7. Run an arc command to generate the entire static site.

8. Check it out locally, then rsync the content to my nearlyfreespeech.net server.

Obvious places for improvement are 3, 5, and 6.

I was thinking about this recently. There's something quite fun in writing extremely personal software. This is not a tool that is designed to be used by millions of people, and I'm ok with that. I'm actually quite happy with storing settings for the page and the html content in the same file! It seems like a neat hack to me.

-----

2 points by akkartik 1726 days ago | link

<3

-----

2 points by shader 1735 days ago | link | parent | on: Why I'm betting on Julia

Evan Miller's appreciation for Julia is based on similar concerns to our recent discussion on transparent bicycles [0].

Apparently Julia, a dynamic language built on LLVM, provides REPL access to the intermediate representation and assembly versions of a function.

For additional relevance to the remnant arc community, it also provides a very easy-to-use FFI, and is a homoiconic language like lisp with s-expression support [1].

------------

[0] http://arclanguage.org/item?id=21379

[1] https://docs.julialang.org/en/v1/manual/metaprogramming/

-----

2 points by akkartik 1735 days ago | link

Indeed, that was a fun article to reread. Interesting that 6 years on, his prediction hasn't come to pass.

I should clarify that I don't actually care much about performance. I'm bootstrapping from machine code not to keep things fast but to control the total implementation stack and so keep it comprehensible. Julia's use of native assembly is just to help people make their programs faster, not to help more people get into the compiler. It's yet another language telling you to use it as "an abstraction" and not worry about the details. Which compromises the whole point of open source: getting more eyeballs on the code.

Do you know if building the Julia compiler relies on a previous version of the Julia compiler? I couldn't tell from https://github.com/JuliaLang/julia/tree/v1.4.0

-----

2 points by shader 1735 days ago | link

Yeah, I didn't even really register his prediction because I'm used to taking such statements with piles of salt.

Without a clear niche and "killer app", most languages don't have any way to draw attention or attract new programmers. I've run across Julia before, but never had a reason to look at it for more than a few seconds. Today I looked at it long enough to read half of the metaprogramming page, and file it away for potential future use if I need an array-oriented language or have a data science project, just for fun.

But most people won't hear about it, and won't have the same tendencies I do to try doing new projects in new languages. Perhaps Julia does have a specific problem it's trying to solve, and people with that problem are more likely to discover and adopt it, but that doesn't translate to the broader community very quickly.

> I'm bootstrapping from machine code not to keep things fast but to control the total implementation stack and so keep it comprehensible.

Yes, and I think you've made that point fairly well, but this was a fairly helpful clarification and restatement. Perhaps there are several dimensions to "exposed internals": visibility, accessibility (that is, manipulatability), and comprehensibility or traceability. And I think attempting to maximize these attributes will lead to a design much like you have in Mu of minimizing the overall surface area of the internals - otherwise they may start to obscure each other. Traceability at least is facilitated by shorter traces; that is, fewer layers of abstractions to navigate.

It seems that formal systems often follow an axiomatic model, parsimoniously adding layers as necessary, while industrial systems are more of the 'large, flat' model that build on top of an existing platform but don't add more than a few layers of abstraction. Generally just classes, interfaces, and service APIs. At the same time, the formal systems don't necessarily add true "layers", because identities are preserved across abstractions. Something to ponder more I guess.

Also, regarding performance, I'm not really sure how merely seeing the assembly code output by Julia helps that much if you can't directly control it. I could see it being helpful for sanity checks, or for learning about how the machine works (that's why I thought it was relevant), but it would be really hard to use for tuning.

> Do you know if building the Julia compiler relies on a previous version of the Julia compiler?

I don't know for certain, but it looks like the src directory contains a lot of CPP and some "flisp", so I would guess not.

-----

2 points by akkartik 1735 days ago | link

> Traceability at least is facilitated by shorter traces; that is, fewer layers of abstractions to navigate.

akkartik nods particularly vigorously to this

-----

3 points by Goladus 1696 days ago | link

I think the main reason Julia hasn't taken off is they arrived rather late to the party in terms of their target audience. Using Python and R to script libraries (or binaries) compiled in C and Fortran already had the momentum in the data science space. Julia first appeared in 2012 [1], which is also the same year as the initial release of Anaconda [2], essentially a packaging and streamlining of what many in the scientific community were already doing with Python and R. With deep learning and data science really taking off in popularity in the last few years, Python was the ecosystem of choice for most.

Julia might have a bright future. It seems to have a small but thriving community, and its ambitions match what a lot of people want out of a programming language. The problem is that at this point "Python driving C+Fortran" is the 200 lb behemoth they're competing with, and they face nontrivial competition from a host of other languages (R, Matlab, Go).

---

[1] https://julialang.org/blog/2012/02/why-we-created-julia/

[2] https://web.archive.org/web/20181012114953/http://docs.anaco...

-----


SubX and Mu seem like pretty decent layers over machine code for systems programming, and I appreciate the transparency and control they offer.

How would you rate Mu as a systems programming language compared to say, Rust, for the purposes of implementing a language VM?

And what extensions do you expect to need before it is a good choice for such things?

Also, I'm somewhat curious what you think about applying SubX/Mu to cross-platform development. Clearly, that would be a challenge, because the differences between platforms would be pretty hard to ignore in a transparent system. At the same time, it might be possible to build a code-generation framework on top of Mu that makes language and compiler development easier, because it would have better access to the internals and more control over what is produced on each platform.

-----

2 points by akkartik 1741 days ago | link

Rust and Mu are similar in aiming for memory safety without garbage collection. They differ in how they go about it.

Rust provides better abstractions with higher performance and a more polished experience overall. It's also designed for concurrency. However, linear types are an adjustment, and the compiler is complex and (like other mainstream software) not really intended for users to understand the internals of.

Mu starts by encouraging people to understand it. I have no idea how far I can take it, so I err on the side of being simple to the point of simple-minded in service of an implementation that can fit in a single head. It doesn't require changing habits on memory management. It encourages less performant code (e.g. clearing a register often takes 5 bytes rather than 1 using an xor). It postpones thinking about reclaiming memory, accepting memory leaks in the meantime. It totally punts on concurrency for now.

Portability is another area of difference. Rust aims to be portable. Mu is tightly coupled with a single processor architecture (x86). I absolutely want to support other processors -- but they'll be in their own forks, so that each repo stays simple and people who don't run one processor don't have to think about it.

-----

3 points by shader 1739 days ago | link

Concurrency is an interesting topic. I hadn't thought of that when reading about SubX, which mostly covered the part of assembly/machine language I'm familiar with. I always wished, though, that my assembly class (or an advanced sequel) had covered some of the more advanced topics for modern processors instead of the 8080, like how to set up memory management (the boot process on newer CPUs is rather involved...) and how multicore computation works. I know all about the higher-level implications, and can use threads etc., but have never actually learned how those things are handled at the bottom. Or syscalls etc.; I've only ever used them, not implemented them. That's a bit out of scope for the discussion, but something I look forward to "seeing-through" your future work on accessible internals. :)

I guess part of my curiosity was targeted at how you would build Mu up to handle performance and portability. Although the way they are usually implemented does obscure the internals, it seems possible that some compromise might be found. Any thoughts?

Perhaps the answer is at the higher lisp (AST / intermediate representation) level. Mu and SubX stay perfectly aligned to the final binary, but a code generator could optimize the AST and generate code in Mu for the target platform.

On that note, have you thought about giving them a more accessible API for such purposes? Or even just an s-expression syntax? Otherwise the target format for generation is limited to text, which would lose a lot of semantics in translation. Also, how are you thinking of maintaining introspection continuity across levels as you add more layers? Maybe something simple like source map support?

-----

2 points by akkartik 1739 days ago | link

At the moment my ideas on all these areas are extremely hand-wavy. Ranging from "you don't really need it," to "somebody else's problem" :)

I feel fairly strongly that I don't want to add very many more layers. Ideally just one more. As more high-level languages get added, I want the lower levels to be integrated into Mu, so that they can in turn become the new HLL. But that's just an aspiration at this point.

One thing somebody else suggested was Plan9's assembly syntax. It may provide some inspiration for making SubX portable.

-----


That's funny, I saw the headline go by on HN and thought it sounded similar to your work and some recent thoughts I've had on abstractions, but didn't realize it was actually your paper.

I have lots of thoughts after reading the paper for discussion, which I think I'll post in separate comments to make it easier to reply in separate threads if you want to.

-----

2 points by shader 1741 days ago | link

I do like your main concepts, as captured in your title that "bicycles for the mind should be transparent."

This main point is something I've been arriving at in my own design work, particularly in two different areas:

1. The "end-to-end" principle.

Originally observed in and applied to networking, but I'm finding it incredibly useful as a general design pattern. Basically, decisions should be made where the best information is. Since an application must implement its own model of error correction anyway, the network shouldn't waste time on redundant and possibly unnecessary generic error prevention.

I think applying the end-to-end principle to general application development is similar to your concept of transparency. Instead of hiding details and trying to resolve them in ways that may eventually leak anyway, difficulties should be exposed to the user so they can make the most appropriate decision. A user can still limit the complexity they have to deal with manually by using a standard library, but they should always have the option of a custom solution.

2. Leaky "abstractions" and lies

In the thread about your paper on HN, someone mentioned Joel's article on the Law of Leaky Abstractions [0], which I had probably read before but which helps illustrate this point. Most of what we call "abstractions" in software aren't. They're merely encapsulations. Need a new word, like obscuration or something...

True abstractions are used very effectively in mathematics, and are really the reason for the success of mathematics as a discipline. Real abstractions are true generalizations and identities. They are simpler notations or terms that can be substituted by their more complex definitions. E.g., derivatives instead of the epsilon-delta formulas. Normal d/dx notation doesn't "hide" the complexity of the epsilon-delta definition, but is equivalent to it. Two mathematical objects that are proven equivalent can be used interchangeably, with fascinating implications. This relies on strict observance of the assumptions used by the relevant theorems. Using the consequent of a theorem when its antecedents and assumptions do not apply is dangerous, and I think very analogous to what we do with abstractions in programming all the time.
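
To make that example concrete: the d/dx notation and the epsilon-delta limit definition it abbreviates are interchangeable (standard definitions, paraphrased here):

  \frac{d}{dx} f(x) = L
  \quad\Longleftrightarrow\quad
  \forall \varepsilon > 0 \;\exists \delta > 0:\;
  0 < |h| < \delta \implies
  \left| \frac{f(x+h) - f(x)}{h} - L \right| < \varepsilon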

"Encapsulations" merely hide complexities and try to present an illusion to the user. E.g., Joe's example of TCP providing "reliable" communication. It can't really be reliable, but it pretends. The problem with these encapsulations is that eventually the truth comes out, and disillusionment is painful.

I think a much better approach would be to adopt a maxim like "Don't lie." If we are honest about how a system actually works, and only use true abstractions, then we will never be surprised by its behavior. We may have to take a very different approach to solving certain problems, but the solution will be much more correct for the effort.

---------

[0] https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-a...

-----

2 points by akkartik 1741 days ago | link

I've had long debates over drinks with a friend about this. In the end we'll see over time whether coming up with the right abstractions is a sufficient approach.

I think it's not at all inevitable that the right abstractions will win out. We have to very carefully set the initial conditions to ensure the paths to them aren't prematurely discarded.

The analogy with math is misleading here, because math doesn't have syscalls. It's all entirely in the platonic realm, and applying it becomes something outside of math. An externality. Computers have syscalls, and the syscalls let us do fairly arbitrary things. Given this much power, "don't lie" is about as enforceable as it is with legislators.

(Here's the comment you're referring to which cited leaky abstractions: https://news.ycombinator.com/item?id=22599953#22604086. In a sibling comment to it I mentioned my analogy with legislators as a superior alternative to computers as cars or planes.)

-----

3 points by shader 1739 days ago | link

> ...we'll see over time whether coming up with the right abstractions is a sufficient approach

What other approaches are you considering? And sufficient for what purpose?

> it's not at all inevitable that the right abstractions will win out

I addressed this more thoroughly in my other wall of text, but I don't see why "winning" is a necessary objective. If you have an abstraction that works, it won't suddenly disappear on you. Or maybe I don't understand. "Win out" over what? And by what standard of success?

> We have to very carefully set the initial conditions to ensure the paths to them aren't prematurely discarded.

That's a slightly different consideration, trying to pick initial conditions that result in better abstractions. Honestly, it sounds like premature optimization and harder than the halting problem. In some ways though I think the general principles I mentioned are precisely intended to address this challenge. Basically, whenever you come to a design decision where you don't know what the right answer is, or if there might be more than one, don't hide the complexity but pass it on as flexibility to the user. The system stays correct and transparent without lying about how it works, and the user doesn't lose any options. This approach seems to satisfy your desire to not prematurely discard paths.

> The analogy with math is misleading here, because math doesn't have syscalls

My reference to math was not an analogy, but a description of an approach to abstractions. In Mathematics, abstractions are held to rigorous standards: they must actually be proven to behave as claimed, any exceptions must be included in the definition, and any application outside of the assumptions is invalid or at least highly suspect and must be justified.

> syscalls let us do fairly arbitrary things. Given this much power, "don't lie" is about as enforceable as it is with legislators.

It's not enforceable with mathematicians either; the best we can do is read eachother's work to check for such lies and avoid using flawed work. Such care is more important in mathematics where any flaws can ruin a proof, whereas in software a bug might only be reached in rare edge cases and even then we can turn it off and on again. That explains why the cost of failure is lower, and programmers don't apply the same effort to write flawless code. But what I'm proposing is not "don't let anyone lie" which sounds impossible, but "don't lie", which is a call for personal integrity. It's a design decision, in which integrity and transparency are chosen over comfort and simplicity. That decision is probably just as costly in software as it is in life, but hopefully proves just as rewarding.

> ...applying it becomes something outside of math. An externality.

That just sounds wrong to me, and needs a stronger argument. It may be a pain to write out all the preconditions and postconditions describing the whole state of the computer and environment (I certainly wouldn't recommend it), but that doesn't mean it is conceptually impossible and thus "outside math". However, thinking that way is mostly irrelevant anyway. Instead, I propose the simpler objective of not misrepresenting what could happen. "Not lying" is not the same thing as "telling the whole truth". Math uses pretty broad lower and upper bounds all the time; precise answers aren't always available, but we can still avoid claiming things we can't prove.

-----

2 points by akkartik 1739 days ago | link

I think we're saying the same thing but misunderstanding the words used by each other. Don't get hung up on my careless use of the word "win". Like I said elsewhere, I'm not trying to genocide competing ideas :)

The rest of this thread is just a reminder to me to avoid analogies like the (ahem) plague.

-----

3 points by akkartik 1739 days ago | link

> My reference to math was not an analogy, but a description of an approach to abstractions. In Mathematics, abstractions are held to rigorous standards: they must actually be proven to behave as claimed, any exceptions must be included in the definition, and any application outside of the assumptions is invalid or at least highly suspect and must be justified.

Such abstractions are great -- if you can find them. Once in a lifetime things. Pretty much nothing we use in software comes close to this. Not even Lisp.

Not lying is not easy. You have to take responsibility for basically every misunderstanding someone may make with your ideas.

-----

3 points by shader 1739 days ago | link

> Such abstractions are great -- if you can find them. Once in a lifetime things.

Maybe we're discussing different things when it comes to abstractions.

They don't have to be paradigms and approaches, like your reference to Lisp as a whole. I'm not even sure how "Lisp" fits into the "don't lie" model; it's on a completely different scale. I'm just concerned with how things are represented. UDP doesn't lie; it says up front that its datagrams are unreliable. TCP on the other hand pretends to be a reliable ordered stream; that pretense comes at a cost, and sometimes fails anyway.

Another one that came up recently was Golang vs Rust and how they handle file paths (or pretty much anything really). Golang takes the path of least short-term resistance, and tries to make things easy by pretending paths are strings. Turns out that isn't always true. Rust in contrast works hard to ensure correctness; for file paths they use a custom OsStr type that covers the possible cases. There are lots of examples where Rust uses result types to present a more complex but accurate set of outcomes.

> Not lying is not easy. You have to take responsibility for basically every misunderstanding someone may make with your ideas.

That sounds like a good point, but after thinking for a second I don't think I agree. If I was really trying to guarantee that no one misunderstood my work, that would be a problem. However, that's not the objective of the principle, which is a guide for choosing representations. Apply the same logic to human honesty. Is it really that impossible to avoid lying in conversation? Am I suddenly lying if someone misinterprets what I said?

I think avoiding lying in abstraction design should be similar to personal honesty in conversation with other people. Don't intentionally misrepresent something. If someone misreads the specification and makes a mistake, that's their fault.

I will go a step further though and say that simply having a disclaimer doesn't match the spirit of the principle. Surely the TCP documentation (or a little critical thinking) will reveal that it can't truly make streams reliable. The problem though is that it tries, and wants you to believe it does except for "rare circumstances." I would prefer a system built using UDP with full expectation of the potential failures, than one on TCP that thought it had covered all the edge cases that mattered only to run into a new one.

I guess the Erlang "let it crash" philosophy is almost a corollary. If you don't have illusions that your code won't crash, and prepare for that worst possible case, then any unhandled errors can just be generalized to a crash.

The purpose of the principle is to make better designs by giving more accurate and flexible options to the user. If a design choice is between exposing the internals as they are, or trying to cover up the complexity to coddle the user, choose the former. Give the user the flexibility and power of handling the details themselves (possibly via library). Don't lie.

-----

3 points by rocketnia 1717 days ago | link

I like this whole discussion, akkartik and shader. You've both made a lot of excellent points, and I found myself nodding along to one comment only to nod along to the next as well.

Right now I have a lot to say about the math analogy in particular. (The rest of the discussion has been great too.)

Mathematics has a lot in common with even the messy parts of programming.

Mathematics involves a lot of computation, and not necessarily of the digital computer kind or the arithmetic kind. If a human reader has to look up a definition of a word they don't know, then they're practically having to perform a manual lambda calculus substitution step. A lot of popular concepts in math are subject to transitive closure, which gives them indirect consequences hidden away on non-obvious reasoning paths. A lot of topics in math have to do with metareasoning and higher-order reasoning. Altogether, the kind of effort it takes for a human reader to understand a mathematical claim can involve a lot of the same things that on a computer we'd consider program execution.

As much as people might not like to admit it, mathematical theorems don't always take the form of the precise "don't lie" abstractions shader is describing. When a proof has a flaw in it, people still make use of the theorem, either by conjecturing that it's true anyway for some as-yet-undiscovered reason, or by explaining why the flaws in the proof don't matter in this context. In domains where it makes sense to change the mathematical foundations (e.g. deciding to use a different logic or a different set theory), a lot of the bread-and-butter theorems and concepts of mathematics can end up having flaws in them, but instead of coining new names for all those theorems and concepts, it's easier to use the same names and merely describe all the little patches that are necessary for them to work. So I think mathematics makes use of its share of abstraction-breaking techniques, techniques a software engineer might simulate with some combination of dynamic scope, side effects, preprocessing passes, code-walking, aspect-oriented weaving, dependency injection, or something like that. (This is mostly visible to people who are trying to make the math precise enough that a computer can verify or assist with it.)

Of course, math is not quite the same as software engineering. Unlike software, math is written primarily for humans to understand, and it only incidentally has computational aspects. This influences the kind of BS that's possible with math, both for better and for worse.

- For the worse: Once a mathematical argument goes on for a bit too long, humans rarely have the diligence to require that every single part of that reasoning makes sense; they're content to give leeway to some parts that they already feel they understand clearly. Some popular points of leeway end up serving as the foundations for a lot of mathematics, and we might call those the "syscalls" of math. Of course, every paradox of barbers and liars and time travel and infinity and whatnot reveals that humans are stubbornly hospitable to inconsistent ideas, and software engineering shows that humans' leeway leaves room for bugs on an extremely frequent basis.

- For the better: Since humans are in the loop when it comes to reading and sharing mathematical results, the kind of BS that confuses and dismays people has some trouble thriving. If the effort it takes to apply a mathematical concept is too full of hacks and spaghetti, people probably won't find it to be their favorite concept and won't share it with each other. (Of course, there seem to be some concepts which make a lot of sense once people get to know them, but still seem to require a rather circuitous route to learn about. In this way, people can end up being enthusiastic about parts of math that look, from the outside, to be full of nope.)

Considering all that mess, math nevertheless has a reputation for leakless abstractions, and that reputation is well deserved. "The study and development of leakless abstractions" would be a fitting definition of mathematics. The mess comes from the fact that humans are the ones discussing, developing, identifying examples of, and using the abstractions.

Likewise, even if software engineering deals with a big mess of leaky abstractions a lot of the time, the leakless ones are an important part of the design space. Unlike hardware, software code is a mathematically precise chunk of data, and the ways we transform it and compile it are easily a mathematical topic with lots of room for leakless abstractions. The reason (and perhaps the only essential reason) for the mess is that humans are the ones discussing, developing, making hardware for, and using the software.

While it's clear that math and software are two worlds with notable differences -- distinguished at least by the presence of computer hardware that gives a user meaningful value out of using software they don't know how to maintain -- I believe software could very well develop a popular perception as a world of Platonic forms, the same kind of perception math has. It's not that farfetched for people today to say, "obviously, as soon as you put an algorithm on a device and execute it, it's not the same algorithm." What if someday people say nothing we build or do can be a "true" algorithm because an algorithm is a Platonic concept that our world can only approximate?

Is that the right perception for software? Well... is it the right perception for mathematics? I think the perception doesn't matter that much one way or the other. Everyday software can have leakless abstractions of the same kind everyday mathematics is known for, and many of math's abstractions are actually riddled with holes in ways software engineers might find familiar.

-----

2 points by shader 1739 days ago | link

I have a metaphor I like to use sometimes to describe the perils of over-extending metaphors. That's not exactly the same, but maybe it's relevant enough to share.

  A metaphor is like an old rusty wheelbarrow; if you put too
  much into it and push it too far, it will break down, you'll
  trip over it, cut yourself, get infected with tetanus, and
  end up in the hospital filled with regret.
I'm still not satisfied with the ending and tweak it slightly each time. It fits the pandemic humor rather well though.

-----

2 points by shader 1739 days ago | link

> my analogy with legislators as a superior alternative to computers as cars or planes.

I think what you're trying to say is that modifying software is more like amending a law than tinkering with a personal car. Legislators are to laws what programmers are to software, and we have gotten bad results from leaving law to experts, so we shouldn't leave software to experts either.

You haven't clearly explained how software is like laws, or made clear arguments for why leaving laws to experts is the source of the problem, why software would suffer the same fate, and what should be done differently.

> So the fallacy in the argument is that implementation details are relevant to everyone. They aren't. They're relevant to system builders and people who like tinkering because that is a separate domain to the user domain.

Quoting the guy you responded to, his main point seems to be that implementation details aren't relevant to everyone. He uses cars as an example; some people tinker, others just drive. Empirically his point is borne out: most people just use Instagram on their iPhone without any concern for implementation details.

> Given the outsized effect that software infrastructure has on our lives, I think computers are more like laws

I think I disagree regarding the "outsized effect". I don't have many personal interactions with the law, so I don't think it affects me much, but I use software every day so I feel its effects. Also, the effect that laws have is community-wide and enforced, but nobody is compelling me to use any particular piece of software.

> laws are too important to be left to legislators

Too important in what sense? What other alternatives? The problem is, laws are fundamentally a community rather than individual concern. Laws are hard to modify because they can only be enforced by a community as a whole. You could pick a different process, but however you propose changes to a law they have to survive the politics of the community to be enforced, and to the degree that a community does not unanimously agree on general interests those laws will be created for special ones. In contrast, software use is voluntary and personal.

> it would be great to separate the user experience from the developer experience if we knew how

The points you take up in this argument are 1) implementation details are relevant to everyone, and 2) the developer and user experience can't be separated. With these points, I disagree. There are many users who are not developers, and don't care about internals. Just like both cars and laws, most people don't care about the details until they have to. At that point, some will look for professional help (mechanic, lawyer), and others will do it themselves. Yes, the details are important, but that's why we hire mechanics and lawyers who know them well, and not an argument for learning them yourself.

However, your main point was that mind-bikes should be see-through. The argument that not everyone cares about internals is actually irrelevant to that point, and I don't think you need to quixotically defend against every attack. Details should be accessible simply because that makes tools ergonomic and maintenance easy, regardless of who's doing it. I really like the fact that the oil filter on my car is right on top of the engine; makes changing it really easy. Even if I didn't change the oil myself, it would still make the task easier for the mechanic. That's not just altruism either; if it's easier for him, I probably get better service at lower prices.

I don't always care about the details, but when I do, I like them to be accessible and easy to understand.

-----

2 points by akkartik 1739 days ago | link

> You haven't clearly explained how software is like laws, or made clear arguments for why leaving laws to experts is the source of the problem, why software would suffer the same fate, and what should be done differently.

Both are systems of rules that have unanticipated long-term effects on the lives of people. As a result they're impossible to safely delegate. You can't pay programmers to write code for you, because they'll drown you in technical debt that only washes over you long after they're paid and gone. You can't expect legislators to make good laws because they'll constantly take the easy way out and leave a mess for future generations to deal with. (Now maybe you can't expect people to make good laws in general. But if there's a way out, that's the direction to look for it.)

-----

3 points by shader 1736 days ago | link

I feel like much of the disagreement in this conversation comes down to a question of degrees, as you mentioned in your other comment.

Many of your comments seem to imply an absolutism that you may not actually mean; I know I sometimes overstate my arguments when I'm trying to keep a point concise anyway. Or possibly I'm being pedantic by requesting that you qualify all of your claims.

I agree that laws and software last a long time. Most other systems would wear out and need replacing eventually. The need to explicitly repeal laws gives them incredible inertia, and is a good argument for sunset clauses. At first glance software seems like it could last as long, since it's only information, but in practice I think the underlying systems and surrounding environment change enough that the software must also adapt or die.

Perhaps it's a change in perspective. If you look at the software application itself, it doesn't have to change and can last forever. But systems change, and eventually leave the application behind. That's a (possible) victory for those of us on the side of active replacement.

> As a result they're impossible to safely delegate.

That doesn't follow. Long-lasting results could just as easily be an argument for hiring an expert to do the best possible job. E.g. an architect for a stone cathedral vs. a DIY shed.

> You can't pay programmers to write code for you.

I would think that the existence of the software _industry_ rather than a network of niche artisans flatly disproves this statement. People do hire others, and it proves to be a more effective and profitable way to develop usable products than trying to do everything by oneself.

> because they'll drown you in technical debt

If that's the only argument against hiring someone else, it's not a very good one. For one thing, companies can choose to set style and documentation requirements, so that little information is lost when people leave. On the other hand, I am quite capable of drowning myself in technical debt on my own; I might even be more likely to take shortcuts if I have to do everything myself. To paraphrase my father, software written by someone else is hard to understand and maintain, and software you wrote more than 6 months ago was written by someone else.

I think my main point in all of this is to try to convince you not to be so pessimistic, because a lot of the things you're focusing on either aren't actually that big of a problem, or aren't necessary to solve. Part of it is also about strategy. I generally agree your design decisions and objectives, but based on the HN discussion it's going to be a hard sell if you present your ideas in direct opposition to everything else. Someone pushed back on the idea that everyone needs to understand implementation details. Truth is, they're right, but instead of explaining how the availability of details can still be valuable even if they aren't needed by everyone on a daily basis, you tried to counter. I'm not holding that against you; I have a tendency to do the same thing (it's easy to just respond point-by-point to all the flaws in a post, and forget that I should be working towards an objective). But I think you can do better.

Still, kudos to you for getting this far in the first place. I have too much fun talking about ideas to actually build them.

-----

3 points by akkartik 1735 days ago | link

I went back and reread my grandparent comment here, and I actually wouldn't change much if I was phrasing it more carefully. I'm much more certain about the problem than about Mu as a solution to it.

The one thing I would change is to add a "given current technology". I don't intend to suggest that the problem of delegating rule-making is destined to be forever impossible. But it does seem about as difficult as the fictional science of psychohistory (https://en.wikipedia.org/wiki/Psychohistory_(fictional)). It's not a hard science, but requires far better understanding of human nature than we currently have.

> > As a result they're impossible to safely delegate.

> That doesn't follow. Long-lasting results could just as easily be an argument for hiring an expert to do the best possible job. E.g. an architect for a stone cathedral vs. a DIY shed.

Let me requote myself to add the sentence it follows from: "Both are systems of rules that have unanticipated long-term effects on the lives of people. As a result they're impossible to safely delegate." The crux here is the 'unanticipated'. A badly built stone cathedral can cave in; that's about the worst case. If you were rich enough that may seem like a reasonable worst-case risk. You can protect against it with some diversification. But installing the wrong program on your computer can cause the whole system to become compromised. And it's not just a theoretical possibility either. In rules and software, action at a distance is the norm rather than the exception.

> I would think that the existence of the software _industry_ rather than a network of niche artisans flatly disproves this statement. People do hire others, and it proves to be a more effective and profitable way to develop usable products than trying to do everything by oneself.

You're right, when I say "impossible" I mean what should be rather than what is. I would not hire a programmer (even though I am one and I get paid by others; this is a painful self-criticism to admit to) and I would not recommend any of my loved ones do so. We all rely on hired programmers. I have grown painfully aware of this exposure. At least we should try not to increase this reliance.

The current software industry is not "effective", for any meaning of the term that is related to the end user over any reasonably long term. It certainly is profitable. I'd much rather the world had been a lot more measured in its incorporation of software into every aspect of modern life.

-----

2 points by shader 1735 days ago | link

> I don't intend to suggest that the problem of delegating rule-making is destined to be forever impossible. But it does seem about as difficult as the fictional science of psychohistory

I wasn't thinking along those terms, but the clarification is still helpful. Also, I like the reference to psychohistory. A cool idea, but completely unrealistic given chaos theory. If we can't even manage the dynamics of the logistic map [0], successfully predicting the behavior of more complex nonlinear systems is even more unlikely.

I'm still not sure it's fair to characterize all of software as unpredictable or having unpredictable effects though. The limit of the iterated logistic map is unpredictable for certain parameter ranges, but that doesn't mean that all math functions have unpredictable limits. I think for the purposes of this discussion it would be very helpful to identify which systems do and why, so that we can better understand what should or even could be done about it.

To some extent I accept the unpredictability premise, because software is written and used in a social context (which your statements about psychohistory imply is your primary concern) which has very complicated dynamics. But at the same time, the software artifact itself is basically static information that doesn't have any dynamics at all.

This emphasizes the question of which effects you are concerned about?

> In rules and software, action at a distance is the norm rather than the exception.

In contrast to your statement about needing a better understanding of human nature, this statement makes it sound like the dynamics and unpredictability you're considering are the behavior of software components in composition with other components. Which is it? Even so, I think in principle I could agree with that as well; in general, unless you know what's in the components, and what all of the components of the system are, you can't predict what the behavior of a system will be. Specific cases are another matter, however.

That said, there's lots of work on atomicity and confinement that can help with identifying boundaries. Imperative code is exposed to possible side effects when invoking functions, but functional languages are not. I think proper security models can also help with this; in an object-capability system, nothing can perform an operation unless you have explicitly provided it with that capability. Loose coupling (services, protocols, dataflow) limits the possible externalities as well. I grant that the 'action at a distance' problem exists, but there are other possible solutions besides "avoid delegation entirely".
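
As a small illustration of the object-capability point (a Python sketch with made-up names, not any particular ocap system):

  # Sketch only: a component receives a narrow capability instead of
  # ambient authority, so its possible effects are bounded by what it
  # was explicitly handed.
  def make_append_logger(path):
      handle = open(path, "a")
      def append(line):                 # the only operation granted
          handle.write(line + "\n")
          handle.flush()
      return append

  def third_party_component(log):
      # It can append to this one log, but it holds no reference that
      # would let it read other files, delete them, or open sockets.
      log("component ran")

  third_party_component(make_append_logger("demo.log"))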

> I would not hire a programmer [...] and I would not recommend any of my loved ones do so

Are you really suggesting that the only software anyone uses should be software they wrote themselves? That hardly seems scalable or effective. Like subsistence farming. One might think that subsistence farmers have more control over their diet, since they get to control most of the inputs. They can hypothetically choose which varieties of vegetable to plant, and what fertilizers and techniques to use in growing them. In practice though, a true subsistence farmer is very constrained to use only what they can produce, and vastly more exposed to the environment since they have no alternative sources. Buying food at a modern supermarket does give one less direct control over how the food is grown, but there are many, many more options to choose from around the world. Perhaps you won't find exactly what you want, but you're much more likely to be able to find it year round.

The software available for sale may not be what you want, but you have lots of choices and you don't have to buy any of it. I can agree that being able to understand the internals could help make better choices, but I can't understand an absolute injunction against delegation.

> The current software industry is not "effective", for any meaning of the term that is related to the end user over any reasonably long term

I think this is probably the key point. How would you define "effective"? Given that people do pay for software, they must consider it effective for their objectives.

I suppose another line of reasoning might be to consider not a specific configuration of software, but rather the changes in its objectives over time (which may have been what you were alluding to with the reference to psychohistory). In that case, the specific objectives don't matter but rather some level of adaptability. That fits with what you've said earlier about being able to jump into a system and make changes with only a few hours of effort to understand the surrounding behavior. I can definitely see how that would be an argument for easily understandable and accessible internals, similar to the car maintenance analogy. However, I still don't see an argument against delegation.

Indeed, another key question is: what are you proposing is different about writing software oneself versus letting someone else do it?

A third party could easily be better at building maintainable and understandable systems than myself (I could have hired you to build Mu...), and I can be cautious to hire only people I trust and take security precautions. If it has something to do with consequent understanding of the system that was built, I would argue that after a fairly short amount of time all of those differences are erased by forgetfulness. As I said before, any software you wrote 6 months ago was written by somebody else.

Again, I am not at all questioning the value of accessible and comprehensible internals, just the seemingly unrelated arguments against delegation etc.

----------

[0] https://en.wikipedia.org/wiki/Logistic_map

-----

2 points by akkartik 1735 days ago | link

> A cool idea, but completely unrealistic given chaos theory.

Agreed.

> In contrast to your statement of needing better understanding of human nature, this statement makes it sounds like the dynamics and unpredictability you're considering is the behavior of software components in composition with other components. Which is it?

When the mechanisms are simple and easy to reason about, it's easier to delegate because worst-case analysis of human nature is possible. When the mechanisms are powerful, we need greater understanding of human nature to navigate the complexity of incentives and unforeseen consequences.

> Are you really suggesting that the only software anyone uses should be software they wrote themselves?

Yeah, this bears clarifying. I'm not hung up on some sort of notion of purity that I came up with. I wouldn't have made this recommendation in the '80s; things didn't seem so far gone then. My recommendation takes into account the amount of software and complexity and tragedy-of-the-commons effects we've already introduced into the world. We have enough bespoke software. We don't need more that is driven by extrinsic motivations like pay. We need more software driven by intrinsic motivation, which is more likely to try to grapple with long-term consequences. Someone inexperienced may still pollute the well, but they may learn from their past decisions. Software professionals like me are largely unlikely to think hard about these things, because our salaries depend on us not understanding.

When the first truffula tree was cut down I wouldn't have gotten too hung up on it (assuming I had the same worldview I have now). But at some point enough trees get chopped down that it gradually gets more and more urgent to not dig ourselves deeper.

Hopefully this clarifies things even if I didn't answer every last question of yours. Do let me know if you'd still like me to answer some other bit.

-----

3 points by shader 1734 days ago | link

> When the mechanisms are simple and easy to reason about, it's easier to delegate because worst-case analysis of human nature is possible. When the mechanisms are powerful, we need greater understanding of human nature to navigate the complexity of incentives and unforeseen consequences.

That helps tie the two together, thanks.

I'm still not convinced though on what you think the downsides would be, or why; specifically, why they couldn't be bounded and partitioned in some way, or why you have to get them all correct up front instead of adapting later with better information.

If we look at the truly long term, the butterfly effect might indeed paralyze us as to what the potential consequences of any single action might be. But a butterfly-flap does not directly cause a hurricane; they are connected by a myriad of intervening causes, many of which could have stopped the storm. Planning ahead is good, but how far? Specifically, how far given how little we actually control.

> We need more software driven by intrinsic motivation, that is more likely to try to grapple with long-term consequences.

This helps a lot for explaining your vision, and sounds much more agreeable.

I read After Virtue by Alisdair MacIntyre [0] recently, and this sounds like an appreciation for the virtues necessary for the successful practice of software development. It would be interesting to explore that further.

> the ... tragedy-of-the-commons effects we've already introduced into the world, But at some point enough trees get chopped down that it gradually gets more and more urgent to not dig ourselves deeper.

Mentioning a tragedy-of-the-commons and an analogy of tree-chopping makes it sound as though industrial software was producing destructive and indelible environmental effects. The idea of "digging ourselves deeper" also implies that it's cumulative and aggregate. That is, something like there being a "bad-software gas" effect, and every emission of bad software contributes to it and won't easily dissipate.

I think this is a contrast I keep trying to come back to; maybe eventually I'll figure out how to state it clearly.

Chopping a tree is destructive and permanent; it doesn't respawn, though eventually new trees might grow. This makes tree-chopping cumulative unless managed properly. It also really does reduce the aggregate number of trees and thus whatever benefits trees provide the environment. That is, lost trees could hypothetically affect everyone. Eventually, tree-chopping could add up to something significant. Furthermore, each tree chopped is a larger percentage of the remaining trees, so later chops may be more serious than earlier ones.

I don't think any of these attributes apply to software though. Writing software is constructive. Each project is independent, and does not cause any effects unless directly invoked or linked. This means new projects create more options, but not more liability; there is no aggregate externality. There are hundreds of thousands of projects on Github, and thousands of more companies with private source, and I am aware of and affected by very few of them. If one of those projects causes harm when I link it into mine, that's as much my fault as the original author's because I chose to link it.

The existence of vague, aggregate negative externalities for software needs further justification. Specifically identifying the issue may also help identify better how to fight it.

I think a common error in macro analysis (such as in Keynesian economics) is to treat members of a type as an aggregate quantity. The capital in an economy isn't a homogeneous quantity C that can be easily shifted to different industries, but actual machines and experts coordinated in a structure of relationships that can't be so easily changed. Software doesn't exist as some abstract quantity S that increases or decreases and correlates to a quantity of harm H. Each project is unique, and used by other projects in unique ways - or may not have any effect on them, if they are not so connected.

> Hopefully this clarifies things even if I didn't answer every last question of yours. Do let me know if you'd still like me to answer some other bit.

That's fine; it's enough if my questions help you understand my perspective and confusion, so you can better address them. I'm trying to work out more precisely my thoughts on these things as well. The better I understand the problem and the value of these particular software virtues, the better I can incorporate them into my own work.

---------

[0] https://en.wikipedia.org/wiki/After_Virtue

-----

2 points by akkartik 1734 days ago | link

You said above that chopping a tree is permanent, even though new trees may grow. We're of one mind there. It's worth dwelling on this distinction, because it is the essence of what I'm trying to get across.

It's not about the single tree. Trees grow and die, and every step of the process seems natural in isolation. The problem with externalities lies in the scale, in the disruption of the natural balance between competing forces.

Yes, individual rules are not directly connected to each other, and their effects are often isolated. The trouble lies in their implementation, and in how the interactions between the implementations of disparate rules often reduce the degrees of freedom in future rule making/management.

Say you have 100 people creating new rules or software. Each of them only cares about a few projects of rule-making, and the different projects are largely independent. If 90% of the rule-makers are making badly designed rules, the whole will gradually grow unmanageable. This story doesn't depend on what the rules are about in the real world. All we are concerned about are the implementation details. Are they implemented parsimoniously, or is the overall effect a result of two clauses combining from hundreds of pages apart? Are they easy to understand, or are they deliberately phrased in a convoluted manner to make it difficult for newcomers to join in the business of rule making? Do they follow some meta-rules consistently, or do they normalize deviance (http://lmcontheline.blogspot.com/2013/01/the-normalization-o...)? There are many such tests, and we think of them as 'design'. The design of a system is a meta-commons even if each of us has a different use case within it.

Even if you think you don't care about the rules most people follow, the rules care about you. For example, it's harder for me to find other C++ programmers to collaborate with because the pool of people who care about the same domain as me is further diluted by people who care about the same subset of C++ as me. The parts of C++ I don't use still affect my life.

-----

3 points by shader 1730 days ago | link

Our point of difference here regards externalities. You seem to think they are pervasive, largely irreversible, and negative. I'm still not sure that externalities exist or what they would be, let alone why they would be irreversible or negative. What about positive externalities, as good practices are recognized and recommended by people to their peers?

> The trouble lies in their implementation and how the interactions of implementations of different disparate rules often reduces the degrees of freedom in future rule making/management.

Within a legal jurisdiction all laws are effectively "executing in the same scope." Thus it's obvious and natural that they could interact with each other. Less obvious that such interactions are negative, or why that would make it harder for new people to write rules. Clearly our legislators have no difficulty pumping out more and more laws every year...

In software, the implementations don't run in the same scope, unless deliberately made to do so. And again, software projects aren't compulsory or indelible. They can be deleted, ignored, replaced, etc. very easily.

I might understand if you focused on the social implications of the software, such as one person using a code base, thinking its patterns were normal, and replicating them elsewhere. But you specifically mention the implementations and that somehow interactions between implementations reduces degrees of freedom. How? Where? Within a project, or across projects?

> Say you have 100 people creating new rules or software.

This whole section mixes your analogy and your subject. Many of the statements are obvious or at least reasonable for a legal system, but not obvious for software; at least, not software as a whole. Perhaps an individual project can fit the analogy fairly well. A program may, per your analogy, have complex behavior caused by spaghetti interactions across hundreds of pages, which would make things difficult for a newcomer to change. Such things repeated as patterns across a project do constitute the "design" or architecture of it. Do such patterns exist across all of software though, when anyone can start a new project in an empty file?

I appreciate the mention of "normalizing deviance", it's potentially a useful concept for this discussion. I think you're referring to the abstract pattern of normalizing deviance from what could otherwise have been consistent meta-rules, as opposed to a specific deviance? I'm not sure where that leaves me though.

I can almost imagine the "meta-commons" you mention, but I have no confidence that it is anything other than "common sense". Yes, people have habits of thought and behavior informed by their past experiences; there's no reason they can't learn and do better on their next project though.

> Even if you think you don't care about the rules most people follow, the rules care about you. For example, it's harder for me to find other C++ programmers to collaborate with because the pool of people who care about the same domain as me is further diluted by people who care about the same subset of C++ as me. The parts of C++ I don't use still affect my life.

It sounds like you're describing a situation where "bad" approaches steal mind share, reducing the number of people available for doing "good" work. Or, in your example, the existence of other domains (rules?) creates negative externalities for you by diluting your supply of collaborators. This sounds like a zero-sum fallacy. You're assuming that those people would still be C++ programmers and that they would be compelled to share your interests for lack of options, or that the other domains poached collaborators that were rightfully yours. Generally it works the other way though; the overall size of the market is increased by the existence of competitors, who share marketing costs in a larger economy of scale. Thus, because of those other domains, people with different interests are drawn into C++ programming who otherwise wouldn't be, and maybe some of them might eventually discover your domain and collaborate with you. Sadly we can't run multiple experiments to find out which universe you'd have more collaborators in, though.

-----

3 points by akkartik 1728 days ago | link

This is a helpful summary of our differences. The keystone is whether zero-sum is a good framing or not.

I think we should by default assume we're in a zero-sum regime anytime people's attention is involved. For example, think back to the SOPA/PIPA boycotts a few years ago when every website turned off for a day. That's a trick you can only pull a finite number of times before people grow jaded.

Similarly, someone is only going to try to learn programming a small number of times. And every time they think they failed increases the energy barrier for the next time. Burnout is the primary concern here, to my mind.

The same principles apply also to experienced programmers learning about new software. Burnout is the prime enemy, but the concern of minimizing burnout is on no author's mind. Nobody owns the design goal of comprehensibility of internals. Lack of ownership is the essence of an externality.

When it takes too much effort to comprehend a piece of software, people can no longer keep up with it on their time. They have to be paid to do so. That biases the playing field between insiders and users. A small number of people working on a popular piece of software can have disproportionate influence on the lives of people.

To me the argument of the previous paragraphs is ironclad. Which part do you disagree with? Is software already easy to comprehend? Is comprehension to outsiders not the #1 problem in software? Is the difficulty in comprehension not because of tragedy-of-the-commons effects?

---

I'm going to stop debating the non-software side of things, since I'm not an expert there and I don't really have any solutions to offer. I strongly feel the problems of limited attention carry over there. But it's probably easier for me to convince you of that in the realm of software. Oh wait, one final note:

> Clearly our legislators have no difficulty pumping out more and more laws every year...

Have you not met programmers talented at churning out crap? The difficulty is not in writing new laws, but in having the whole make sense. At least in software the computer enforces some basic checking. If the program crashes, that's visible to all. Contradictions in laws can fester for long periods until someone works up the resources to take a case all the way to the Supreme Court. (Why in the world don't US courts deal with counterfactuals? The whole principle of "case or controversy" (https://en.wikipedia.org/wiki/Advisory_opinion#United_States) is bound up in pre-software thinking. In my ideal world courts would be able to give feedback on bills even before they are turned into Law, and actually influence how legislators voted on them. "Have you considered this corner case?")

-----

2 points by akkartik 1739 days ago | link

It's the difference between a game of chess and a game of Nomic (https://en.wikipedia.org/wiki/Nomic). Being able to change the rules makes every move much more powerful, and the game much more chaotic. If you have inequality between people who can make moves and people who can't, I think that dooms the system in the long run. (It may well be doomed in the long run anyway, but again, I think there's only one direction to go here.)

-----

2 points by shader 1736 days ago | link

Yes, if it were a game that would be true, because (within the rules) you can't just walk away from the game, and there is only one winner.

However, software like most things in real life isn't a game. There aren't any rules. You can just fork it, or ignore it, or employ any number of other subversive strategies.

You keep thinking in zero-sum terms, and in ultimate terms like "doom" that I don't think actually apply to the domain.

How do you define "doom" anyway? Sounds like there's an objective you want the meta-system to be moving toward, but you expect it to fail. I'm not convinced there is such a perfect destination, or that there's anything like predictable determinism in the system.

-----

1 point by akkartik 1739 days ago | link

Here's an example of the ways that laws affect your life in umpteen ways: http://akkartik.name/post/2010-12-19-18-19-59-soc

-----

2 points by shader 1741 days ago | link

This comment/discussion is not really about your work, but about Ivan Illich's book "Tools for Conviviality" as presented in your introduction.

I don't think I agree with the distinction between manipulative vs. convivial tools, which gives the impression that some tools are inherently manipulative and will progress down one path of management, while others are intrinsically convivial and help improve agency and choice over time.

Even if the distinction is applied to schools of thought about how tools should be used (short-term productivity vs long-term freedom), I don't think the distinction matches reality. There is nothing about the use of tools in itself that restricts autonomy; anyone can at any time choose to use a different tool if they so desire (excepting external constraints such as employer mandates). The description about flaws in one tool being papered over by others to produce an increasingly unwieldy mass of mutually supporting tools with no chance of improvement or reconstruction sounds scary, but I haven't seen it. At any point in time, if someone thinks they can make a better tool to cut out several intervening steps in a process or replace an existing tool, they are welcome to make it - and should it prove truly simplifying and cost-reducing, it will be adopted. Special interests don't capture tools to make them ends in themselves; if at any point a tool ceases to be valuable (at least in perception), it will be cut out of the workflow.

One might respond by mentioning the current stack of (x86, linux, docker, python etc., web framework, html, javascript, browser) as having so much inertia that replacing any of the lower layers would be nearly impossible. The truth, however, is that anyone can replace any of those pieces whenever they wish in their own workflow; they will just have a hard time convincing everyone else to do the same. See RISC-V, rkt, Clojure, etc.

So I don't see "papering" vs "replacing" as two contrary and mutually exclusive actions. We can do both: ameliorate the current problems with temporary patches, and work on longer-term replacements. Ideally, the two steps could be taken at the same time to save effort, as in the strangler pattern [0]. In general, there's a trade-off, and falling to either extreme can mean failure. Constant patching leads to technical debt and ossification, but overzealous rewrites can run out of money and time before they produce any benefits.

In some sense, I think I agree with the direction you're taking it. We can have better tools than we currently use, and we don't need to hang on to compatibility to old systems built on wrong concepts with leaky abstractions. I just think we need to recognize the freedom we already have to shake off the past, instead of treating the existing inertia as inevitable or even relevant. It is not necessary to save or change the rest of the world.

----------

[0] https://martinfowler.com/bliki/StranglerFigApplication.html

-----

3 points by akkartik 1741 days ago | link

"We can have better tools than we currently use, and we don't need to hang on to compatibility to old systems built on wrong concepts with leaky abstractions. I just think we need to recognize the freedom we already have to shake off the past..."

I absolutely agree. The shackles are nowhere except in our minds.

"It is not necessary to save or change the rest of the world."

I absolutely disagree. This argument is akin to saying one can survive a pandemic without saving or changing the rest of the world. It's only true if you don't take costs into account. It's far cheaper to stay immune in the presence of herd immunity than without.

Over-engineering has a way of compounding. It's only obvious that something is unnecessary for a brief window of time. Then people start using it, and it starts becoming load-bearing. Compounding efforts to unwind past decisions quickly multiply until they exceed individual limits of effort.

Even within an individual's limits, I care very much about people being able to make changes to their computers without devoting their lives to the endeavor. We all should have lives outside computers. We should be able to modify our compilers without spending years understanding them. Just by poking and tinkering and getting feedback for an hour here and there.

Even if we assume infinite capacity for unwinding past decisions, it gets rapidly impossible to even see alternatives to them.

When I think about how many different things I've had to learn just to get this computer to its current state, it seems laughable to say any individual could do it all. Nobody should have to go through what I've been through.

At the very least we need some critical mass of people to care about implementation complexity. One lone voice seems really fragile, because tiny embers of light can get put out from many sources of bad luck, no matter how lofty their intentions.

"I don't see "papering" vs "replacing" as two contrary and mutually exclusive actions. We can do both."

Certainly. I do both. In my day job I have to deal with Docker and Python and Javascript. But most people do only that. They think programming is about knowing an ever-growing list of nouns rather than concepts. Replacing is dying out. Because we've given early decisions too much inertia.

-----

3 points by shader 1739 days ago | link

> I absolutely disagree.

Maybe I should have said more precisely "it's not necessary to save the rest of the world at once". I think it comes down to something like the second law of motion: F = ma. There's no reason you couldn't move the whole world, it's just a trade-off between time, distance, and force. I think a lot of people get stuck thinking they have to change everybody before they can get moving, or that the idea isn't a success unless it "wins", but neither is true.

New ideas rarely displace old ones. It seems like it in computing, because the space has grown so rapidly, but I would almost wager that most languages and platforms are used at least as much now as they were at their peaks, just because there are so many more computers and developers now than there used to be. From that perspective, new ideas don't defeat old ones, they just capture more of the new frontiers. As new companies are founded, and new developers graduate from college, they adopt new technologies while the old teams generally stick with what they started with. Sometimes they fail or reorganize, but many times they persist with their old systems.

We could be disheartened by that realization (Cobol and TCL never died...), but I think with the right perspective it can bring more hope. Lisp hasn't actually been destroyed, it just got outpaced - there are probably more people using more variants of Lisp now than at any point in history. As such, we don't have to be disappointed when we don't convince people to switch to our model - it was foolish to expect that in the first place - instead, we can look forward to whatever positive benefit our work does bring to those who adopt it.

> This argument is akin to saying one can survive a pandemic without saving or changing the rest of the world. It's only true if you don't take costs into account. It's far cheaper to stay immune in the presence of herd immunity than without.

To address your counter-argument more specifically, I don't think the analogy is appropriate. The pandemic analogy may be very apropos to our current global crisis, but doesn't describe how ideas work. People don't get "infected" by bad ideas in the same way they are by viruses; they learn and adopt ideas intelligently. We don't have to worry about maintaining our immunity as a small community.

However, other aspects of the community model are more appropriate. If we don't reach critical mass, or have clear motivations and objectives, we'll eventually dissipate and move on to other things. See: the arc language community. Everyone doubtless benefited from the experience, but the outcome wasn't a future in which all of us use Arc as our primary language. So yes, we should invest in community and growth, but that's not the same thing as trying to save the world.

Also, there are economies of scale that come with larger communities, which I think aligns with your reference to "cost" much better. In a small community with an early-stage technology, everyone has to build things from scratch before they can use them. Larger, better established communities benefit from the work of those that came before. No argument there. But that's not really an argument for or against "saving the world" either. It would almost be a category error to observe the advantages of being an established project and say that a new one needs to adopt the strategy of "being established".

> Over-engineering has a way of compounding. It's only obvious that something is unnecessary for a brief window of time. Then people start using it, and it starts becoming load-bearing. Compounding efforts to unwind past decisions quickly multiply until they exceed individual limits of effort.

Again, I think this is focusing on the existing mountains and edifices and forgetting we can just go around them. It would indeed be a pain to redesign x86, given that its main advantage is that it is x86. But instead some people developed RISC-V. Also, I don't think it's true that something will only be obviously unnecessary at the beginning and seem more essential later; the fact that we're having these discussions (and things like the UNIX Hater's Handbook) attests that people are quite capable of seeing faults and imagining alternatives. Unwinding decisions exceeds individual capacity only if you're trying to rebase the rest of the stack onto your changes. That is, only if you try to save the world, which is begging the question.

> Even within an individual's limits, I care very much about people being able to make changes to their computers without devoting their lives to the endeavor. We all should have lives outside computers. We should be able to modify our compilers without spending years understanding them. Just by poking and tinkering and getting feedback for an hour here and there.

I admire that objective, and am also addressing that issue in the system I'm currently designing. Imagine being aware of any changes someone made to their own copy of an application without requiring them to submit a pull request, and merging it in if you like the work. No need to track forks or use github pages, the development tools themselves track relevant changes that other people are making to the same codebase. That way, the first time anyone finds and solves a bug, everyone else benefits. And you don't have to worry about your local installation and the upstream repository getting out of sync if you make a personal modification; they aren't treated any differently. I think that would boost open-source development productivity immensely. Such is what I am trying to design.

> Nobody should have to go through what I've been through.

Thanks for your efforts. And now that you've done them, I don't think anyone will have to. They might want to change something, but it will hopefully be much easier to build on your work than it was for you to do it in the first place.

> At the very least we need some critical mass of people to care about implementation complexity. One lone voice seems really fragile, because tiny embers of light can get put out from many sources of bad luck, no matter how lofty their intentions.

I agree with that, and that's where I think I'll close. What we need is sustainable growth and community development, so that good ideas don't die out. But we shouldn't sacrifice any other values for the sake of growth, much less dominance. If our ideas and community have a larger R0 they will eventually become dominant, but I don't see the need to exhaust ourselves forcing the issue and wasting time and energy overcoming irrelevant resistance.

-----

2 points by akkartik 1739 days ago | link

I mostly agree with this. Replacing all of existing software isn't anywhere near on my radar. My goal right now is just for Mu to not die :) You're right about going around things rather than redesigning them. Isn't that what I'm doing? I think this is perfectly in keeping with "replace vs paper over". There's no universal quantifier attached that requires the old thing to be replaced everywhere.

I was only using the analogy with pandemics to point out that there are situations where secondary consequences exist, even if it superficially seems like one can go one's own way. I didn't intend to suggest Mu provides any sort of immunity to anything.

> Unwinding decisions exceeds individual capacity only if you're trying to rebase the rest of the stack onto your changes. That is, only if you try to save the world, which is begging the question.

I think I'm losing the thread of this particular back and forth. Perhaps we're saying the same thing, and you took papering over vs replacing to be more mutually exclusive than I intended. I think it's existential for replacing to take some mindshare away from papering over, because of the overwhelming tendency for everyone around us to go the other way. Once you start talking about not having to rebase the rest of the stack, I feel like you're in my replacing camp. Functional replacement rather than sub-system replacement.

> And now that you've done them, I don't think anyone will have to. They might want to change something, but it will hopefully be much easier to build on your work than it was for you to do it in the first place.

Hah! Thank you, but don't underestimate humanity's ability to forget.

-----

2 points by shader 1739 days ago | link

Based on the fact that we're discussing this on the arc forum, and you've built languages to replace-ish assembly and C, and I'm designing a language to replace pretty much everything else, I'd say we're much more on the same page than nearly everyone else.

This particular back-and-forth was that I said we "don't need to save the whole world", and you "absolutely disagreed". It sounds like you've agreed with all of my points or softened yours, so I'm not really sure where that leaves us.

Somewhat ironically, I sometimes think of the design I keep hinting at basically as a rebase of most of computer science onto a virtual computer with 32-byte pointers to ROM.

-----

2 points by akkartik 1739 days ago | link

Ah, I see.

> > It is not necessary to save or change the rest of the world.

> I absolutely disagree.

In my mind the poles of this disagreement were zero change to the world vs non-zero change to the world. I was saying it seems futile to try only to change myself but not some others. The thought of a universal quantifier, zero vs infinity, that didn't occur to me at this point.

-----

2 points by shader 1736 days ago | link

Yeah, I guess I could have phrased that better.

In my mind, "not necessary" didn't imply "necessarily not", but I can see how it might sound like I wanted to just let the world burn and walk away. I only intended to suggest not worrying about it and not putting that popular-opinion cart before your original-objective horse.

-----

2 points by akkartik 1739 days ago | link

Heh, I notice now that I never actually said "replace" in the paper. I said, "take it out and think about the problem anew." That sounds like we're on the same page?

-----

2 points by shader 1741 days ago | link

In the testable interfaces section, you mention parameterizing system calls so that fakes can be passed in for testing purposes.

1. How does Mu handle dependency injection?

You mentioned that Mu is supposed to be directly translatable to SubX, so I'm curious how that works. Otherwise, it sounds like a fragile pattern for testability, since someone is likely to hard-code their preferred output.

2. The system-object parameters remind me a lot of object-capabilities.

This is something I've been thinking about a lot for the design of my own system, in which I plan to restrict all side effects (and thus I/O and system calls) to message passing. I was mostly interested in it for the security value, in that a program must be introduced to a service before it can send it messages.

Your use case is obviously much lower-level, and trying to match the machine execution more closely, but it seems you've arrived at almost the same design for very different reasons. By decoupling a program from its environment and requiring that resources be passed as parameters, you gain testability.

There's no reason you couldn't go further and implement a full object-capability system for resources at runtime, so that the program itself must be given those resources when launched, instead of implicitly getting them from the environment. Based on your application to testing, it could also make external black-box testing of applications easier: just provide fakes when running the program in a sandbox, and see how it behaves.
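As a rough sketch of that launch-time wiring (in Python, with hypothetical names; nothing like Mu's actual syntax or any existing harness): the entry point receives its resources explicitly, so a test can run the whole program against fakes and inspect the results.

    import io, sys

    def run(argv, stdin, stdout):
        # The program touches the outside world only through what it was handed.
        name = stdin.readline().strip() or "world"
        stdout.write("hello, %s\n" % name)
        return 0

    def test_run_greets_by_name():
        # Black-box test: launch against fakes, then inspect them.
        out = io.StringIO()
        assert run([], io.StringIO("mu\n"), out) == 0
        assert out.getvalue() == "hello, mu\n"

    if __name__ == "__main__":
        # Production wiring: the real streams are granted only here.
        sys.exit(run(sys.argv, sys.stdin, sys.stdout))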

Just some thoughts for when you start implementing the OS kernel you mentioned...

-----

2 points by akkartik 1741 days ago | link

Oh yes, we're very much thinking in similar ways here. I'd love to build a capability-based OS at some point. For now my ambitions are more limited, to just support testing syscalls with a pretty similar shape to their current one. But it's very much something I hope will happen.

This is the method of science as I understand it. I'm trying to change as few variables as possible (still very many), in hopes that clarity with them will open up opportunities to explore other variables.

> How does Mu handle dependency injection?

Fakes are just code like any other. You can see my code for a fake screen in an earlier prototype: https://github.com/akkartik/mu1/blob/master/081print.mu#L95. Or colorized: https://akkartik.github.io/mu1/html/081print.mu.html#L95 ('@' means 'array'). Does that answer the question? I think maybe I'm not following what you mean.

> it sounds like a fragile pattern for testability, since someone is likely to hard-code their preferred output.

The way I see it, it's a losing battle to treat code vs test as an adversarial relationship. All the tutorials of TDD that start out hard-coding a value in the first test are extremely counter-productive.

There is indeed a lot of fragility in Mu's tests. And when I write them I'm often testing the results in ways that aren't obvious. The test may validate that the code a compiler generates is such and such, but is that really what we want? To make sure I have to actually run the code. Occasionally I encounter a situation where I traced something but didn't actually do what I said I was doing. Mu's emulator requires validating against a real processor, and I've encountered a couple of situations where my program ran in emulation but crashed natively. In each case I had to laboriously read the x86 manual and revisit my assumptions. I haven't had to do this in over a year now, so it feels stable. But still, you're right. There's work here that isn't obvious.

The use case for Mu is someone who wants to make an hour's worth of changes to some aspect of their system. As they get deeper into it I'm sure they'll uncover use cases I haven't paved over for them, and will have to strike out and invent their own experimental methodologies like all of us have to do today. My goal is just to pave everything within a few hours of Hamming distance from my codebase. That seems like a big improvement in quality of life. I'll leave harder problems to those who come after me :)

-----

2 points by shader 1739 days ago | link

> I'd love to build a capability-based OS at some point.

I'm designing a capability-based lisp at the moment that will at some point need an efficient VM implementation and byte-compiler, and I was planning on turning it into a microkernel OS (where the interpreter is the kernel); maybe if I actually build enough of it in a reasonable timeframe we can meet somewhere in the middle and avoid duplicating effort.

> Fakes are just code like any other...

I wasn't really asking how fakes were handled, but rather how you inject them during a test.

> My goal is just to pave everything within a few hours of Hamming distance from my codebase. That seems like a big improvement in quality of life. I'll leave harder problems to those who come after me :)

Now you're starting to sound like my earlier comment about not needing to save the whole world. :P

-----

2 points by akkartik 1739 days ago | link

That would be excellent!

Any function that needs to print to screen must take a screen argument. All the way up the call chain. Then tests simply pass in a fake. I end up doing this for every syscall. Memory allocation requires an "allocation descriptor". Exit requires an exit descriptor that can be faked with a continuation. And so on. Does that help?
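Roughly, translated into Python rather than Mu (names invented for illustration): the screen is threaded through every call, and a test passes in a fake it can inspect afterwards.

    class FakeScreen:
        # Test double: records writes instead of touching a real display.
        def __init__(self):
            self.output = []
        def print_string(self, s):
            self.output.append(s)

    def print_banner(screen):
        # Helpers deeper in the call chain also take the screen explicitly.
        screen.print_string("== mu ==\n")

    def greet(screen, name):
        # Anything that prints takes a screen argument and passes it along.
        print_banner(screen)
        screen.print_string("hello, " + name)

    def test_greet():
        screen = FakeScreen()
        greet(screen, "world")
        assert "".join(screen.output) == "== mu ==\nhello, world"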

-----

2 points by shader 1739 days ago | link

Yep, that answers my question perfectly.

You had said "dependency injection" somewhere, so I thought there might be more to it.

-----

2 points by akkartik 1739 days ago | link

Maybe I should clarify that I mean dependency injection but not a dependency injection framework. Automating the injection decisions defeats most of the purpose of DI, I think.

-----

2 points by shader 1738 days ago | link

Yes, I suppose "dependency injection" as a concept doesn't actually require anything sophisticated like a framework or IoC containers etc. But the term "dependency injection" sounds to me like it's doing more than just passing in a parameter, and I normally wouldn't expect it to be used unless it meant something more.

I think that's because "injection" is active. It sounds like intrusively making some code use a different dependency without its explicit cooperation. Passing parameters to a function isn't really "injecting"; it's just normal operation.

> Automating the injection decisions defeats most of the purpose of DI, I think.

I don't know about "automated" decisions, but the value of something like "injection" to me seems that you could avoid explicitly mentioning the dependencies at intermediate layers of code, and only reference them at the entry points and where they are used. The way your code works, every function that depends on anything that could call 'print has to have a screen in its parameter line. For Mu, that may be a reasonable design decision; you want to keep things transparent and obvious every step of the way, and don't want to add any more complexity to the language than necessary. However, I think there is a case to be made for something like dynamic variables to improve composability in a higher-level context. That's a discussion for a different language (like the one I'm designing, which gets around the various problems of scoping by making almost everything static anyway).
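For illustration only, here's roughly what I mean, sketched with Python's contextvars (not something I'm suggesting for Mu): the intermediate layer never mentions the screen, and a test rebinds it around the call.

    from contextlib import contextmanager
    from contextvars import ContextVar

    screen = ContextVar("screen")  # dynamically-scoped dependency

    @contextmanager
    def binding(var, value):
        # Rebind the variable for the duration of a dynamic extent.
        token = var.set(value)
        try:
            yield
        finally:
            var.reset(token)

    def leaf():
        screen.get().append("hi")    # referenced only where it's used...

    def middle():
        leaf()                       # ...so this layer never mentions it...

    def test_middle():
        fake = []
        with binding(screen, fake):  # ...and the entry point (or test) binds it
            middle()
        assert fake == ["hi"]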

This is probably talking past your point, but I'm trying to argue that there might be value in some degree of DI beyond parameterizing functions. I have not necessarily justified "automation", and since I don't have a good definition for that anyway, I don't think I'll try.

-----

2 points by akkartik 1738 days ago | link

That makes sense. I think my criticism is really for Guice, which has the effect of making Java a dynamically typed language that can throw type errors at run-time. But if you start out with a dynamic language like Lisp, dynamically-scoped variables are much more acceptable.

Regarding the term "injection", I'm following https://en.wikipedia.org/wiki/Dependency_injection. It feels like a marketing term, but now we're stuck with it.

-----

2 points by shader 1736 days ago | link

Why are runtime type errors acceptable in Lisp but not in Java? Or was there some other reason for dynamically-scoped variables to be acceptable in Lisp?

-----

2 points by akkartik 1735 days ago | link

This side is definitely not absolutist. In my opinion, Java started out encouraging people to rely on the compiler for type errors. We got used to refactoring in the IDE, and if nothing was red we expected no type errors.

With a dynamic language you never start out relying on the compiler as a crutch.

This argument isn't about anything directly technical in the respective compilers, just the pragmatic thought patterns I've observed.

-----

3 points by akkartik 1741 days ago | link

Thank you very much for your comments! It's funny, given all the rhetoric around 'feedback' in our society, just how hard good feedback is to come by. When I wrote this paper over 3 months I told myself that my goal was to get 3 people to think hard about the details of what I'm doing. Based on this thread I'd say I've gotten on the scoreboard there.

-----

2 points by shader 1739 days ago | link

You're welcome! And thank you for the thought-provoking paper and continued discussion. I was a little worried when I posted the first round of comments, after I saw how long they turned out to be.

Now for round two...

-----

More