As I understand it, the Self language is imperative all the way down to its foundations. In particular, I believe the usual (and only?) way to set up an object is to clone an existing one (the prototype) and then mutate the clone. I don't see anything in the language, nor more generally in a classless outlook, that would light the way toward creating a good declarative language. Am I missing something?
Jack WAUGH
I agree that "copy and mutate" sounds like it might not be compatible with declarative programming, but I think there are deeper psychological principles that make Self worth understanding even if you don't want imperative programming.
I don't know whether you've heard the prototype spiel before, and Dave and various others here can do it better than I can, but:
Part of the motivation behind prototype-based languages is that when human beings come up with categorization systems that feel intuitive, they usually don't really look like "here are the properties that are shared by every member of this category"; they look much more like "here are some properties that are typically shared by the more-central members of this category, most of the time; membership in the category is fuzzy, and varies depending on how similar the thing is to the central examples of the category; we define things by describing how they're different from other things." (George Lakoff makes the case for this in Women, Fire, and Dangerous Things.)
For example, here are some observations about the way humans categorize things:
- A robin is a better example of a bird than an ostrich is. (Membership gradience: some examples are more central than others, and people more readily identify the central examples as belonging to that category.)
- Most humans have ten fingers, but a few don't, and they probably still qualify as human. (Family resemblances: not all properties are shared.)
- If someone shows you a picture of a tabby cat and asks you what it is, you say "cat", not "animal" or "tabby". (The most cognitively basic level of the abstraction hierarchy is somewhere in the middle, rather than at the top or bottom.)
- etc.
(Of course, human beings also came up with the more-rigid categorization systems that are used in most of mathematics, and those systems still sorta-kinda count as categorization systems, despite not having all of the properties that the more-typical categorization systems have. ;))
I'm not saying that you *can't* find some way to represent those kinds of qualities in a class-based language like Smalltalk or a type-theory-based language like Haskell; I'm just saying that these considerations are (as far as I've seen) not even really on the radar of people who talk about those languages. (Which isn't to say that Self does it perfectly. I don't think Self - or any other prototype-based language that I know of - fully captures the flavour of the categorization model that Lakoff describes. But Self's "you make new things by copying and modifying existing things" is meant to be a step in that direction. It's something that the Self people have spent some time thinking about.) If you buy Lakoff's theory, there's something in human psychology that isn't a good fit for the kinds of rigid categorization systems used in existing declarative languages like Haskell.
With that said, I've spent the past few years learning about Haskell and functional programming and type theory and category theory and proofs and various other things that I used to consider icky, and I like it very much. I'm convinced that it's usually a very good idea to avoid imperativeness and mutation as much as possible, and I'm convinced that there's a huge amount of value in the kinds of mathematically-precise abstractions (functors, monads, algebraic data types, etc.) that are used in those kinds of languages, and I'm convinced that type systems are much much more-useful and less-awful than I believed back when the only typed languages I knew were C++ and Java. I wouldn't be happy going back to a dynamically-typed imperative language like Self (though I do desperately miss the Self environment). But I think there are some insights about human psychology that the Self people understand better than the Haskell people.
I'm not sure about this part, but I don't think those psychological insights are fundamentally incompatible with declarative programming; I just think it's the kind of thing that the declarative-programming world hasn't spent a lot of time focusing on. So, yes, I think there's a lot in Self's classless outlook that could light the way toward creating a better declarative language.
On 22 Apr 2020, at 10:38 pm, Adam Spitz adam.spitz@gmail.com wrote: [...]
I haven't dealt with Haskell and family all that much, but the feeling I get is that the difference between its approach and Self's is at least partially that Haskell focuses on the written syntax and 'compile time', while Self focuses on the interacting world and 'run time'. That is, Self programming is always interacting with something messy and existing, like being a vet, while Haskell is like being a watchmaker, creating an intricate and often beautiful static creation which is then wound up and left to tick.
So Self ends up producing Morphic, and Haskell ends up producing Pandoc (which is fabulous, btw - I use it all the time).
Is there a way to get the best of both worlds?
Russell
Yes! Simulation vs translation. Horses for courses.
(Warning: I'm going to write another wall of text. I wish I knew how to say this more concisely.)
Yes, I think a best-of-both-worlds ought to be possible. I've found it easier to see how the two worlds might fit together if I think in terms of levels-of-understanding.
For example, imagine that you want to write a function that takes a natural number as input and produces some output. You have a vague idea of what the function should do, so maybe you tentatively write down some code. Now you want to keep fiddling with the function until you've gained some confidence that it does what you want it to do. There's a sequence of things you might do, depending on how much understanding you have/want:
- At first you might not even know what you want the answer to be, so you might try manually calling the function a few times in an evaluator/REPL/unit-test-suite and seeing whether the outputs make sense to you. At this stage, it really helps to work in terms of examples, because you don't understand the problem well enough yet to be able to articulate generalizations. You just have to run the code on some examples to see what happens.
- Then, if you're willing to put in the effort, you might sit and think for a little while about how to articulate some general properties that you want to be true of the output. If you can do that, then you can programmatically generate a whole bunch of input numbers, call the function on each one of them, and verify that the outputs have the properties you want them to have. So now you have some understanding of the general properties that you want your solution to have, but you're still just running the code on examples to see what happens (and since you can't actually try *all* of the natural numbers, you just kinda have to hope that your automatically-generated examples do a good enough job of covering the whole space). (There's a sketch of these first two stages just after this list.)
- And *then*, if you're willing to put in even more effort, and if you have a language with dependent types (like Idris, and hopefully Haskell too sometime reasonably soon (they're working on it)), you might figure out how to articulate a proof-by-induction. (In case you're unfamiliar with dependent types, the idea is that proofs are reified first-class values that you can construct and pass around. So you can write down the property you want as a type family called P, then construct a value of type P(0), and construct a function that takes a value of type P(k) and returns a value of type P(k+1), and use those to construct a function that can return a proof of type P(n) for any natural number n. There's a sketch of this below too.) So now your understanding of this problem is deep enough that not only do you understand the general properties that you want your solution to have, but also you don't even have to run the code on a bunch of examples to see whether the property holds, because you've articulated the reasons why the property will hold for *every* example.
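To make the first two stages concrete, here's a tiny sketch using the QuickCheck library. (The function `double` is just a made-up stand-in for whatever you're actually writing; the property is mine, for illustration.)

    import Test.QuickCheck

    -- A hypothetical function under test.
    double :: Integer -> Integer
    double n = n + n

    -- Stage one: poke at it by hand in GHCi.
    --   ghci> double 3
    --   6

    -- Stage two: articulate a general property, and let QuickCheck
    -- generate a pile of example inputs and check them all.
    prop_doubleIsTwice :: NonNegative Integer -> Bool
    prop_doubleIsTwice (NonNegative n) = double n == 2 * n

    main :: IO ()
    main = quickCheck prop_doubleIsTwice
    -- prints: +++ OK, passed 100 tests.

You're still just running examples, but now the machine generates them, and you've had to write your intentions down as a property.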
(At first I thought this was a bad example because Haskell doesn't actually have dependent types yet. But now I think that actually makes it a great example - type systems are always *almost* good enough.)
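In fact, even without full dependent types, you can get most of the way to stage three with GHC's GADTs and DataKinds extensions. Here's a sketch of the induction principle I described above (all the names are mine; Idris would let you say this more directly):

    {-# LANGUAGE DataKinds, GADTs, RankNTypes, KindSignatures #-}

    import Data.Kind (Type)

    data Nat = Z | S Nat

    -- A singleton: a runtime value that mirrors a type-level Nat,
    -- so we can pattern-match our way down to zero.
    data SNat (n :: Nat) where
      SZ :: SNat 'Z
      SS :: SNat n -> SNat ('S n)

    -- Proof by induction: given a proof of P(0), and a step that
    -- turns a proof of P(k) into a proof of P(k+1), we can build
    -- a proof of P(n) for *any* n.
    induction :: forall (p :: Nat -> Type) (n :: Nat).
                 p 'Z
              -> (forall (k :: Nat). SNat k -> p k -> p ('S k))
              -> SNat n
              -> p n
    induction base _    SZ     = base
    induction base step (SS m) = step m (induction base step m)

The recursion in `induction` *is* the proof; the compiler checks that the base case and the step case together cover every natural number, so there's nothing left to test.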
Writing the proof is more work (you still have to do basically everything that you did in the previous stages, and more), and requires more understanding of whatever problem you're trying to solve. It's *easier* to just run some examples than it is to explain why every example will work. Not every task calls for that level of rigor, so sometimes it's not worth the effort. So in that sense I agree with the "horses for courses" angle. But "running some examples" and "writing a proof" don't feel to me like separate paths; rather, they're steps along the same path. There's a step from examples to generalizations, and then there's another step from knowing *that* your generalization holds to knowing *why* your generalization holds. At each step, you're gaining a greater understanding of something that you were already doing.
A *lot* of the seemingly-unnecessary extra complexity in the typed-functional-programming world is like that. Learning those languages involves learning a bunch of new words, but that's not because they're working on watches instead of dogs; rather, it's because they've put a lot of effort into finding extremely-simple and mathematically-precise definitions of concepts that are present-but-not-explicitly-articulated in programs written in other languages. e.g. You can go through your whole career as a programmer without ever learning what the word "monad" means, but monads aren't some esoteric thing that you've never had to work with; "monad" is the name for a distilled version of the essence of a pattern that you're already vaguely familiar with but haven't articulated.
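For instance (a toy illustration of my own): everybody has hand-written the "do a step; if it failed, bail out; otherwise feed the result to the next step" pattern with nested checks. Maybe's Monad instance is exactly that pattern, given a name and an interface, which is what lets the do-notation below work:

    import Text.Read (readMaybe)

    -- "Parse two numbers; if either parse fails, the whole thing fails."
    -- The short-circuiting isn't written out by hand; it's supplied by
    -- Maybe's Monad instance, which the do-notation desugars to.
    addParsed :: String -> String -> Maybe Int
    addParsed s1 s2 = do
      a <- readMaybe s1   -- a Nothing here short-circuits everything
      b <- readMaybe s2
      return (a + b)

    -- addParsed "2" "3"   ==> Just 5
    -- addParsed "2" "pig" ==> Nothing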
Anyway, each of those levels-of-understanding is an important part of the programming process. Self does a better job of the early stages where it's important to be able to play with tangible examples and run them to see what happens. Haskell does a better job of the later stages where you have a deep understanding of what you want your program to do and you want to articulate it. (And that's why Haskell's strengths are more about "compile time" than "run time" - after you've articulated your understanding of the program to that level of depth, the compiler might as well check it for you. But I've found it more useful to think in terms of depth-of-understanding than in terms of what-time-to-do-checking.) I don't see any reason why it would be impossible to make a system that's great at both.
The discussion of Self vs. declarative programming has led to a discussion of dynamic typing vs. static typing. I don't mean to take away from the worth of such a discussion on its own merits. At the same time, I want to point out that declarative programming and dynamic typing can go together; Prolog, for example, is both declarative and dynamically typed.
Also, I think a hybrid may be useful, where the datatype aspect of types (e.g. integer vs. float vs. string) would be dynamic, but the dataflow-direction aspect of types would be static. Parameters to a procedure, for example, could be declared as copyable or linear.
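For comparison, GHC's LinearTypes extension (GHC 9.0 and later) already checks something like the copyable-vs-linear half of this statically. A minimal sketch (the function names are mine):

    {-# LANGUAGE LinearTypes #-}

    -- A linear arrow (%1 ->) declares that the parameter must be
    -- consumed exactly once; the compiler checks this statically.
    swap :: (a, b) %1 -> (b, a)
    swap (x, y) = (y, x)

    -- An ordinary (copyable/unrestricted) parameter can be used
    -- any number of times:
    dup :: a -> (a, a)
    dup x = (x, x)

    -- This would be rejected: a linear parameter used twice.
    -- bad :: a %1 -> (a, a)
    -- bad x = (x, x)

That's only the static half of the hybrid, of course, but it does suggest that the "how the value flows" aspect of a type can be checked independently of the datatype aspect.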