[Self-interest] Toward Declarative Programming

Adam Spitz adam.spitz at gmail.com
Thu Apr 23 18:31:12 UTC 2020


(Warning: I'm going to write another wall of text. I wish I knew how to say
this more concisely.)

Yes, I think a best-of-both-worlds ought to be possible. I've found it
easier to see how the two worlds might fit together if I think in terms of
levels-of-understanding.

For example, imagine that you want to write a function that takes a natural
number as input and produces some output. You have a vague idea of what the
function should do, so maybe you tentatively write down some code. Now you
want to keep fiddling with the function until you've gained some confidence
that it does what you want it to do. There's a sequence of things you might
do, depending on how much understanding you have/want:

  - At first you might not even know what you want the answer to be, so you
might try manually calling the function a few times in an
evaluator/REPL/unit-test-suite and seeing whether the outputs make sense to
you. At this stage, it really helps to work in terms of examples, because
you don't understand this problem well enough yet to be able to articulate
generalizations. You just have to run the code on some examples to see what
happens.
  - Then, if you're willing to put in the effort, you might sit and think
for a little while about how to articulate some general properties that you
want to be true of the output. If you can do that, then you can
programmatically generate a whole bunch of input numbers, call the function
on each one of them, and verify that the outputs have the properties you
want them to have. So now you have some understanding of the general
properties that you want your solution to have, but you're still just
running the code on the examples to see what happens (and since you can't
actually try *all* of the natural numbers, you just kinda have to hope that
your automatically-generated examples do a good enough job of covering the
whole space). (I'll sketch what this stage might look like just after this
list.)
  - And *then*, if you're willing to put in even more effort, and if you
have a language with dependent types (like Idris, and hopefully Haskell too
sometime reasonably soon (they're working on it)), you might figure out how
to articulate a proof-by-induction. (In case you're unfamiliar with
dependent types, the idea is that proofs are reified first-class values
that you can construct and pass around. So you can write down the property
you want as a type-family called P, then construct a value of type P(0),
and construct a function that takes a value of type P(k) and returns a
value of type P(k+1), and use those to construct a function that can return
a proof of type P(n) for any natural number n.) So now your understanding
of this problem is deep enough that not only do you understand the general
properties that you want your solution to have, but also you don't even
have to run the code on a bunch of examples to see whether the property
holds or not, because you've articulated the reasons why the property will
hold for *every* example.
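
(To make the middle stage concrete: here's roughly what it might look like
in Haskell with QuickCheck. The function and property names are made up
for illustration - at the first stage you'd just evaluate sumUpTo 5 in
GHCi and eyeball the answer; the property turns that eyeballing into a
general claim that gets checked against a pile of generated inputs.)

    import Test.QuickCheck (NonNegative (..), quickCheck)

    -- A tentative function: the sum of the naturals from 0 to n.
    sumUpTo :: Integer -> Integer
    sumUpTo n = sum [0 .. n]

    -- A general property we expect to hold for every input, checked
    -- here against a few hundred generated examples.
    prop_matchesClosedForm :: NonNegative Integer -> Bool
    prop_matchesClosedForm (NonNegative n) =
      sumUpTo n == n * (n + 1) `div` 2

    main :: IO ()
    main = quickCheck prop_matchesClosedForm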

(At first I thought this was a bad example because Haskell doesn't actually
have dependent types yet. But now I think that actually makes it a great
example - type systems are always *almost* good enough.)
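
(And to give a flavour of the third stage - this is just my own sketch,
not anybody's official example - you can approximate the induction
principle even in today's Haskell with GADTs; a genuinely dependently
typed language like Idris makes it more direct.)

    {-# LANGUAGE DataKinds, GADTs, KindSignatures, RankNTypes #-}

    data Nat = Z | S Nat

    -- A runtime witness for a type-level natural number.
    data SNat (n :: Nat) where
      SZ :: SNat 'Z
      SS :: SNat n -> SNat ('S n)

    -- Induction: given a proof of P(0) and a way to turn a proof of
    -- P(k) into a proof of P(k+1), build a proof of P(n) for any n.
    -- Here p stands in for the type family P described above.
    induction :: p 'Z
              -> (forall k. SNat k -> p k -> p ('S k))
              -> SNat n
              -> p n
    induction base _    SZ     = base
    induction base step (SS k) = step k (induction base step k)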

Writing the proof is more work (you still have to do basically everything
that you did in the previous stages, and more), and requires more
understanding of whatever problem you're trying to solve. It's *easier* to
just run some examples than it is to explain why every example will work.
Not every task calls for that level of rigor, so sometimes it's not worth
the effort. So in that sense I agree with the "horses for courses" angle.
But "running some examples" and "writing a proof" don't feel to me like
separate paths; rather, they're steps along the same path. There's a step
from examples to generalizations, and then there's another step from
knowing *that* your generalization holds to knowing *why* your
generalization holds. At each step, you're gaining a greater understanding
of something that you were already doing.

A *lot* of the seemingly-unnecessary extra complexity in the
typed-functional-programming world is like that. Learning those languages
involves learning a bunch of new words, but that's not because they're
working on watches instead of dogs; rather, it's because they've put a lot
of effort into finding extremely-simple and mathematically-precise
definitions of concepts that are present-but-not-explicitly-articulated in
programs written in other languages. e.g. You can go through your whole
career as a programmer without ever learning what the word "monad" means,
but monads aren't some esoteric thing that you've never had to work with;
"monad" is the name for a distilled version of the essence of a pattern
that you're already vaguely familiar with but haven't articulated.
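
(A tiny made-up example of what I mean: the "keep going only as long as
each step succeeds" plumbing that most of us have written by hand many
times is exactly what Maybe's monad instance packages up.)

    import Text.Read (readMaybe)

    -- The pattern written out by hand: bail out as soon as a step fails.
    halveIfEvenExplicit :: String -> Maybe Integer
    halveIfEvenExplicit s =
      case readMaybe s of
        Nothing -> Nothing
        Just n  -> if even n then Just (n `div` 2) else Nothing

    -- The same thing with the pattern named: Maybe's monad instance
    -- supplies the bail-out-on-Nothing plumbing between the steps.
    halveIfEven :: String -> Maybe Integer
    halveIfEven s = do
      n <- readMaybe s
      if even n then Just (n `div` 2) else Nothing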

Anyway, each of those levels-of-understanding is an important part of the
programming process. Self does a better job of the early stages where it's
important to be able to play with tangible examples and run them to see
what happens. Haskell does a better job of the later stages where you have
a deep understanding of what you want your program to do and you want to
articulate it. (And that's why Haskell's strengths are more about "compile
time" than "run time" - after you've articulated your understanding of the
program to that level of depth, the compiler might as well check it for
you. But I've found it more useful to think in terms of
depth-of-understanding than in terms of what-time-to-do-checking.) I don't
see any reason why it would be impossible to make a system that's great at
both.



On Wed, Apr 22, 2020 at 9:33 PM Russell Allen <mail at russell-allen.com>
wrote:

>
> > On 22 Apr 2020, at 10:38 pm, Adam Spitz <adam.spitz at gmail.com> wrote:
> > With that said, I've spent the past few years learning about Haskell and
> functional programming and type theory and category theory and proofs and
> various other things that I used to consider icky, and I like it very much.
> I'm convinced that it's usually a very good idea to avoid imperativeness
> and mutation as much as possible, and I'm convinced that there's a huge
> amount of value in the kinds of mathematically-precise abstractions
> (functors, monads, algebraic data types, etc.) that are used in those kinds
> of languages, and I'm convinced that type systems are much much more-useful
> and less-awful than I believed back when the only typed languages I knew
> were C++ and Java. I wouldn't be happy going back to a dynamically-typed
> imperative language like Self (though I do desperately miss the Self
> environment).
>
>
> I haven’t dealt with Haskell and family all that much, but the feeling I
> get is that the difference between its approach and Self is at least
> partially that Haskell focuses on the written syntax and ‘compile time’ and
> Self on the interacting world and ‘run time’. That is, Self programming is
> always interacting with something messy and existing, like being a vet, and
> Haskell is like being a watchmaker, creating an intricate and often
> beautiful static creation which is then wound up and left to tick.
>
> So Self ends up producing morphic, and Haskell ends up producing Pandoc
> (which is fabulous btw - I use it all the time)
>
> Is there a way to get the best of both worlds?
>
> Russell
>
>
>