3 points by shader 1739 days ago

> Such abstractions are great -- if you can find them. Once in a lifetime things.

Maybe we're discussing different things when it comes to abstractions.

They don't have to be paradigms and approaches, like your reference to Lisp as a whole. I'm not even sure how "Lisp" fits into the "don't lie" model; it's on a completely different scale. I'm just concerned with how things are represented. UDP doesn't lie; it says up front that its datagrams are unreliable. TCP, on the other hand, pretends to be a reliable, ordered stream; that pretense comes at a cost, and sometimes it fails anyway.
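To make the contrast concrete, here's a minimal Rust sketch of the contract UDP actually offers (the peer address 127.0.0.1:9000 is made up for illustration): every send and receive returns a Result, and nothing promises delivery or ordering, so the failure modes are in your face from the start.

    use std::net::UdpSocket;
    use std::time::Duration;

    fn main() -> std::io::Result<()> {
        let socket = UdpSocket::bind("127.0.0.1:0")?;            // let the OS pick a free port
        socket.set_read_timeout(Some(Duration::from_secs(1)))?;  // a lost datagram just times out

        // send_to can fail immediately, and even on Ok the datagram may never arrive.
        socket.send_to(b"ping", "127.0.0.1:9000")?;

        let mut buf = [0u8; 1500];
        match socket.recv_from(&mut buf) {
            Ok((n, from)) => println!("got {} bytes from {}", n, from),
            Err(e) => println!("no reply (lost, delayed, or never sent): {}", e),
        }
        Ok(())
    }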

Another one that came up recently was Golang vs Rust and how they handle file paths (or pretty much anything, really). Golang takes the path of least short-term resistance and tries to make things easy by pretending paths are strings. It turns out that isn't always true. Rust, in contrast, works hard to ensure correctness; for file paths it uses a dedicated OsStr type that covers the possible cases. There are lots of other places where Rust uses Result types to present a more complex but accurate set of outcomes.
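For example, here's a rough sketch (with a made-up file name) of how Rust refuses to pretend a path is a string: Path wraps OsStr, so converting it to a UTF-8 string hands you an Option, and reading the file hands you a Result, instead of quietly assuming the happy case.

    use std::path::Path;

    fn describe(path: &Path) {
        // A path is not guaranteed to be valid UTF-8, so the conversion is explicit.
        match path.to_str() {
            Some(s) => println!("UTF-8 path: {}", s),
            None => println!("path isn't valid UTF-8: {:?}", path),
        }

        // Reading can fail too, and the Result type spells that out up front.
        match std::fs::read_to_string(path) {
            Ok(contents) => println!("read {} bytes", contents.len()),
            Err(e) => println!("couldn't read {:?}: {}", path, e),
        }
    }

    fn main() {
        describe(Path::new("/tmp/example.txt"));
    }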

> Not lying is not easy. You have to take responsibility for basically every misunderstanding someone may make with your ideas.

That sounds like a good point, but after thinking about it for a second, I don't think I agree. If I were really trying to guarantee that no one misunderstood my work, that would be a problem. However, that's not the objective of the principle, which is a guide for choosing representations. Apply the same logic to human honesty: is it really impossible to avoid lying in conversation? Am I suddenly lying if someone misinterprets what I said?

I think avoiding lying in abstraction design should be similar to personal honesty in conversation with other people. Don't intentionally misrepresent something. If someone misreads the specification and makes a mistake, that's their fault.

I will go a step further, though, and say that simply having a disclaimer doesn't match the spirit of the principle. Surely the TCP documentation (or a little critical thinking) will reveal that it can't truly make streams reliable. The problem is that it tries, and wants you to believe it succeeds except in "rare circumstances." I would rather have a system built on UDP with full expectation of the potential failures than one built on TCP that thought it had covered all the edge cases that mattered, only to run into a new one.

I guess the Erlang "let it crash" philosophy is almost a corollary. If you don't harbor the illusion that your code won't crash, and you prepare for that worst case up front, then any unhandled error can just be generalized to a crash.
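As a rough sketch of that idea (in Rust rather than Erlang, and with a deliberately fake failure), the "supervisor" below doesn't try to enumerate every error the worker might hit; it just treats a crash as an expected outcome and restarts:

    use std::thread;
    use std::time::Duration;

    // The worker makes no promise that it won't crash; an unhandled error is just a panic.
    fn worker(attempt: u32) {
        if attempt < 3 {
            panic!("simulated failure on attempt {}", attempt);
        }
        println!("attempt {} finished cleanly", attempt);
    }

    fn main() {
        // A tiny supervisor loop: a crashed worker is expected, so restart it.
        for attempt in 1..=3 {
            let handle = thread::spawn(move || worker(attempt));
            match handle.join() {
                Ok(()) => break, // worker completed
                Err(_) => {
                    println!("worker crashed on attempt {}, restarting", attempt);
                    thread::sleep(Duration::from_millis(100));
                }
            }
        }
    }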

The purpose of the principle is to produce better designs by giving the user more accurate and flexible options. If a design choice is between exposing the internals as they are, or trying to cover up the complexity to coddle the user, choose the former. Give the user the flexibility and power of handling the details themselves (possibly via a library). Don't lie.



3 points by rocketnia 1718 days ago

I like this whole discussion, akkartik and shader. You've both made a lot of excellent points, and I found myself nodding along to one comment only to nod along to the next as well.

Right now I have a lot to say about the math analogy in particular. (The rest of the discussion has been great too.)

Mathematics has a lot in common with even the messy parts of programming.

Mathematics involves a lot of computation, and not necessarily of the digital computer kind or the arithmetic kind. If a human reader has to look up a definition of a word they don't know, then they're practically having to perform a manual lambda calculus substitution step. A lot of popular concepts in math are subject to transitive closure, which gives them indirect consequences hidden away on non-obvious reasoning paths. A lot of topics in math have to do with metareasoning and higher-order reasoning. Altogether, the kind of effort it takes for a human reader to understand a mathematical claim can involve a lot of the same things that on a computer we'd consider program execution.

As much as people might not like to admit it, mathematical theorems don't always take the form of the precise "don't lie" abstractions shader is describing. When a proof has a flaw in it, people still make use of the theorem, either by conjecturing that it's true anyway for some as-yet-undiscovered reason, or by explaining why the flaws in the proof don't matter in this context. In domains where it makes sense to change the mathematical foundations (e.g. deciding to use a different logic or a different set theory), a lot of the bread-and-butter theorems and concepts of mathematics can end up having flaws in them, but instead of coining new names for all those theorems and concepts, it's easier to use the same names and merely describe all the little patches that are necessary for them to work. So I think mathematics makes use of its share of abstraction-breaking techniques, techniques a software engineer might simulate with some combination of dynamic scope, side effects, preprocessing passes, code-walking, aspect-oriented weaving, dependency injection, or something like that. (This is mostly visible to people who are trying to make the math precise enough that a computer can verify or assist with it.)

Of course, math is not quite the same as software engineering. Unlike software, math is written primarily for humans to understand, and it only incidentally has computational aspects. This influences the kind of BS that's possible with math, both for better and for worse.

- For the worse: Once a mathematical argument goes on for a bit too long, humans rarely have the diligence to require that every single part of that reasoning makes sense; they're content to give leeway to some parts that they already feel they understand clearly. Some popular points of leeway end up serving as the foundations for a lot of mathematics, and we might call those the "syscalls" of math. Of course, every paradox of barbers and liars and time travel and infinity and whatnot reveals that humans are stubbornly hospitable to inconsistent ideas, and software engineering shows that humans' leeway leaves room for bugs on an extremely frequent basis.

- For the better: Since humans are in the loop when it comes to reading and sharing mathematical results, the kind of BS that confuses and dismays people has some trouble thriving. If the effort it takes to apply a mathematical concept is too full of hacks and spaghetti, people probably won't find it to be their favorite concept and won't share it with each other. (Of course, there seem to be some concepts which make a lot of sense once people get to know them, but still seem to require a rather circuitous route to learn about. In this way, people can end up being enthusiastic about parts of math that look, from the outside, to be full of nope.)

Considering all that mess, math nevertheless has a reputation for leakless abstractions, and that reputation is well deserved. "The study and development of leakless abstractions" would be a fitting definition of mathematics. The mess comes from the fact that humans are the ones discussing, developing, identifying examples of, and using the abstractions.

Likewise, even if software engineering deals with a big mess of leaky abstractions a lot of the time, the leakless ones are an important part of the design space. Unlike hardware, software code is a mathematically precise chunk of data, and the ways we transform it and compile it are easily a mathematical topic with lots of room for leakless abstractions. The reason (and perhaps the only essential reason) for the mess is that humans are the ones discussing, developing, making hardware for, and using the software.

While it's clear that math and software are two worlds with notable differences -- distinguished at least by the presence of computer hardware that gives a user meaningful value out of using software they don't know how to maintain -- I believe software could very well develop a popular perception as a world of Platonic forms, the same kind of perception math has. It's not that farfetched for people today to say, "obviously, as soon as you put an algorithm on a device and execute it, it's not the same algorithm." What if someday people say nothing we build or do can be a "true" algorithm because an algorithm is a Platonic concept that our world can only approximate?

Is that the right perception for software? Well... is it the right perception for mathematics? I think the perception doesn't matter that much one way or the other. Everyday software can have leakless abstractions of the same kind everyday mathematics is known for, and many of math's abstractions are actually riddled with holes in ways software engineers might find familiar.

-----