Replies: 15 comments 1 reply
-
Implementation-wise, each compile-time scope would have a list of the `implicit` variables visible in it. When an `implicit` argument needs to be filled in, the compiler searches that list for variables of the matching type. If there is exactly one match, it is passed in automatically. If there is more than one match, it is an ambiguity error (which can be resolved by passing the argument explicitly). If there are zero matches, it is an error. This allows an inner scope to shadow the `implicit` variables of an outer scope.

This enables a form of "dynamically scoped variables" à la Common Lisp. Except unlike dynamically scoped variables (which use mutation), this is safe, because it gets compiled into function argument passing.

Also, there is an additional complexity when a function which has `implicit` arguments is used as a plain value, for example passed as an argument. In this case, the Koka compiler has to create a lambda and resolve the `implicit` arguments at the point where that value is created. The same is true when storing such a function in a data structure. For performance, it would be useful to do common subexpression removal, so that the lambda is only created once rather than at every use.
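To make that concrete, here is a minimal sketch (in Haskell, used as neutral notation; `Eq'`, `EqDict`, `memberD`, and `useMember` are invented names) of what the compilation scheme would amount to: the implicit becomes an ordinary extra argument, and using such a function as a first-class value forces the compiler to insert a lambda that resolves the implicit on the spot.

```haskell
-- "Source level": a function with an implicit/constraint-style argument.
class Eq' a where
  eq :: a -> a -> Bool

member :: Eq' a => a -> [a] -> Bool
member x = any (eq x)

-- "Compiled level": the implicit has become an explicit dictionary argument.
newtype EqDict a = EqDict { eqD :: a -> a -> Bool }

memberD :: EqDict a -> a -> [a] -> Bool
memberD d x = any (eqD d x)

-- The dictionary the compiler would pick for Int in this scope.
intEq :: EqDict Int
intEq = EqDict (==)

-- Passing the function where a plain  Int -> [Int] -> Bool  is expected forces
-- the compiler to insert a lambda that resolves the implicit here and now:
useMember :: (Int -> [Int] -> Bool) -> Bool
useMember f = f 3 [1, 2, 3]

example :: Bool
example = useMember (\x xs -> memberD intEq x xs)
```

The common-subexpression step mentioned above would then ensure that the `\x xs -> memberD intEq x xs` wrapper is built once rather than at every use site.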
There are also some open questions.
-
Also, I would really like to use special syntax for implicits.
-
I thought about it some more, and I wonder if it would be better to keep implicit and non-implicit variables completely separate. In other words, it wouldn't automatically convert implicit to non-implicit. Essentially, implicit and explicit variables would have separate types, and therefore cannot be mixed. It would still be possible to manually convert from implicit to non-implicit by using a conversion function: that turns an implicit variable into an explicit value, and if there are many implicits in scope, you can select the one you want.
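A minimal sketch of that conversion, assuming a Haskell-style reified dictionary (the same idea as the `Dict` type from the `constraints` package; the names below are made up):

```haskell
{-# LANGUAGE ConstraintKinds, GADTs, RankNTypes #-}

-- An implicit (here: a constraint) reified as an ordinary, explicit value.
data Dict c where
  Dict :: c => Dict c

-- "Convert from an implicit variable to an explicit value":
ordInt :: Dict (Ord Int)
ordInt = Dict

-- And back again: use an explicit value to satisfy the implicit in a body.
withDict :: Dict c -> (c => r) -> r
withDict Dict body = body

example :: Bool
example = withDict ordInt ((3 :: Int) < 4)
```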
-
Thinking about it further, perhaps we don't need functional dependencies. I'm not an expert on them, but it seems to me that functional dependencies were added to Haskell to prevent ambiguous type inference and to move errors to the place where the function is defined, rather than the place where the function is used. But with my proposed implicit arguments, ambiguities are already possible (and are resolved by manually passing in an instance), and there is no implicit type inference: implicit arguments must always be manually marked as `implicit`.

In addition, Haskell allows for values which contain typeclass constraints; with my proposal those would have to be written as functions which take an `implicit` argument. This creates a simpler mental model for the programmer, and it is also closer to the implementation: polymorphic values in Haskell are actually functions, because they can be instantiated with different typeclass instances. With my proposal, they would actually be functions, rather than something which looks like a value but is actually a function. This preserves the mental model that using a variable is cheap, and doesn't involve arbitrary computation.

So in practice functional dependencies might not offer any advantages, and therefore we don't need to clutter up the simplicity of my proposal.
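For example, a small Haskell illustration of that point (the "compiled" names `BoundedDict` and `smallestD` are invented):

```haskell
-- A "value" with a typeclass constraint:
smallest :: Bounded a => a
smallest = minBound

-- ... is compiled into a function that takes a dictionary, roughly like this:
newtype BoundedDict a = BoundedDict { minBoundD :: a }

smallestD :: BoundedDict a -> a
smallestD = minBoundD
```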
-
It's also interesting to note that algebraic effect handlers provide a lot of the same functionality as typeclasses. For example, consider this Haskell typeclass:

```haskell
class Add a where
  add :: a -> a -> a

instance Add Int where
  add left right = left + right

genericAdd :: Add a => a -> a -> a
genericAdd left right = add (add (add left right) right) right

main :: IO ()
main = print (genericAdd (1 :: Int) 2)
```

We can define the same thing in Koka by declaring an `add` effect operation and writing a handler that implements it with integer addition.

However, I suspect this doesn't provide quite as much flexibility as typeclasses do, and it's a bit conceptually weird to use a "side effect" system for non-side-effect polymorphism. It's also rather clunky to use, because we have to explicitly apply the handler.

I also have some concerns about the efficiency of using effect handlers for this: typeclasses are quite efficient, since it simply involves the compiler automatically adding an extra argument to functions. This means that programmers can freely use typeclasses all the time, without worrying about performance. This leads to more abstract libraries, which in turn helps reduce code duplication. I am a huge fan of Rust's motto of "zero-cost abstractions", which is one of the reasons I'm a huge fan of typeclasses.

However, despite those concerns, it may be worthwhile to explore whether it's possible to avoid adding typeclasses, and instead rely solely upon effect handlers.
-
As an alternative to my proposal, we could instead have "implicit effect handlers", which are the same as regular effect handlers except that they are automatically applied based upon the type context. When an effect needs to be discharged, the compiler will search the current and parent scopes for an implicit handler and will then automatically call it, so the result is equivalent to wrapping the call in that handler by hand.

This also answers the question of "how does Koka know to automatically apply the handler?" I haven't thought this through, so I'm still hazy on the details, but it's an alternative to consider.
-
Something else I thought of: is it possible to implement the effect system using implicit parameter passing? That should offer a significant performance boost. If it's possible, I think that would make it viable to use the effect system to implement ad-hoc polymorphism. Then the only thing missing is implicit effect handlers.
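Here is a minimal sketch of that idea, written in Haskell rather than Koka and covering only the simple, tail-resumptive case (`Emit`, `greet`, and `consoleEmit` are invented names): the effect is represented as a record of operation implementations that is threaded along as an ordinary argument instead of being looked up dynamically.

```haskell
-- The evidence for an `emit` effect: just its operation, as a plain function.
newtype Emit = Emit { emit :: String -> IO () }

-- An "effectful" function receives its evidence as an implicit-style argument.
greet :: Emit -> String -> IO ()
greet ev name = emit ev ("hello, " ++ name)

-- A "handler" is nothing more than a value of the evidence type.
consoleEmit :: Emit
consoleEmit = Emit putStrLn

main :: IO ()
main = greet consoleEmit "koka"
```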
-
If we decide to use the effect system instead of typeclasses, I think it would be useful to rename it. It's very weird to use the word "effect", because we're not really using it for side effects; we're using it solely for polymorphism. A more neutral name would still work for genuine side effects, and there are several other good candidates too.
-
Another option is to use standard OOP interfaces. In C# this would be compiled down to regular C# interfaces. In JavaScript this can be implemented in a variety of ways, but here is one possibility:

```js
Number.prototype._type = "std/core/int";

var add_vtable = {};

function add(left, right) {
  return add_vtable[left._type].add(left, right);
}

function generic_add(left, right) {
  return add(add(add(left, right), right), right);
}

function add_int_add(left, right) {
  return left + right;
}

add_vtable["std/core/int"] = {
  add: add_int_add
};

function my_int(value) {
  return { _type: "path/to/my-int", value: value };
}

function add_my_int_add(a, b) {
  return my_int(add_int_add(a.value, b.value));
}

add_vtable["path/to/my-int"] = {
  add: add_my_int_add
};

generic_add(1, 2); // -> 7
generic_add(my_int(1), my_int(2)); // -> my_int(7)
```
-
@Pauan I don't know whether you are still interested, but ...
Actually, this is what I do in Scala Effekt. This works really great, in particular with Dotty's new implicit function types. Also, you might be interested in #68 and #69, where I started to implement implicits in Koka. There are still some barriers to using the implicits as implemented there for ad-hoc polymorphism, but it is a first step.
-
I get a 404 for this page, but I took a look at the code; it's very cool!

Yeah, I saw those. Nowadays I use Rust (and I'm quite happy with it), but I'm glad to see things progressing with Koka. I still like its clean, minimal design.
-
Thanks for letting me know about the breakage of my page! Sad to hear you moved to Rust; I would have been interested in how migrating the large code base to Koka went.
-
Interestingly, I separately came up with a similar system in #336, which has now been implemented in Koka by Daan. However, my approach proposed matching on names instead of types, with explicit syntax for marking implicit parameters.

When I talked with Daan, implicit effect handlers seemed like a bad idea after thinking it through more thoroughly. Effect handlers are dynamically scoped and can easily be intercepted, which is not really something desirable for ad-hoc polymorphism.

Interestingly, sometime after your comment #16 (comment), effect handlers in Koka were altered to work by passing evidence down (implicitly, in the runtime), and it is very performant, like you predicted: https://www.microsoft.com/en-us/research/publication/generalized-evidence-passing-for-effect-handlers/.

Hope you give Koka a try again sometime soon, and leave some feedback on more issues or discussion topics! I'm also a fan of Koka's clean, minimal design. Instead of closing this issue, I'm just going to convert it to a discussion topic, since I feel like there are definitely interesting (non-issue) ideas here that others visiting Koka might be interested in.
-
Is there a plan for dealing with the issues caused by implicits not being canonical? For example, I can create a set with one comparison (and hash) operator and try to use it with another. Or the "union" problem: https://github.com/koka-community/std/blob/b30173995d9fff49a6799ed68e9a1c2f83f8fc49/std/data/linearset.kk#L31-L32
This function only works as expected if the implicit comparison function is the same one that both sets were built with.
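To spell out the hazard, here is a toy Haskell sketch (with made-up names; it stands in for a set whose ordering comes from an implicit rather than from a canonical instance): building the structure with one ordering and querying it with another silently breaks the invariant.

```haskell
import Data.List (sortBy)
import Data.Ord (Down (..), comparing)

type Cmp a = a -> a -> Ordering

-- A sorted-list "set", ordered by whatever comparison it was built with.
fromListWith :: Cmp a -> [a] -> [a]
fromListWith cmp = sortBy cmp

-- Lookup assumes the list is sorted by the *same* comparison it receives.
memberWith :: Cmp a -> a -> [a] -> Bool
memberWith cmp x = go
  where
    go [] = False
    go (y : ys) = case cmp x y of
      LT -> False -- relies on the sortedness invariant
      EQ -> True
      GT -> go ys

ascending, descending :: Cmp Int
ascending = compare
descending = comparing Down

broken :: Bool
broken = memberWith ascending 1 (fromListWith descending [1, 2, 3])
-- False, even though 1 is in the set: it was built with a different ordering.
```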
-
Typeclasses/traits are convenient, and coherence is probably quite useful (I cannot tell, as I've never worked on a large program in a language without typeclasses but with implicits), but I love that with Koka's implicits it is easy to locally swap in a different implementation. Typeclasses don't allow this kind of thing, which I need quite often. For example, when printing an AST in a compiler for debugging purposes, I sometimes want to hide or abbreviate some nodes, or print details in others that are not printed by default.
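A rough analogue of that pattern, sketched with GHC's `ImplicitParams` extension rather than Koka (`Expr`, `render`, and `?lit` are invented names): the printer takes its leaf formatter as an implicit parameter, and each call site can locally rebind it.

```haskell
{-# LANGUAGE ImplicitParams #-}

-- A tiny stand-in AST.
data Expr = Lit Int | Add Expr Expr

-- The printer receives the leaf formatter as an implicit parameter.
render :: (?lit :: Int -> String) => Expr -> String
render (Lit n)   = ?lit n
render (Add l r) = "(" ++ render l ++ " + " ++ render r ++ ")"

full, abbreviated :: String
full        = let ?lit = (show :: Int -> String) in render (Add (Lit 1) (Lit 2)) -- "(1 + 2)"
abbreviated = let ?lit = \_ -> "_"               in render (Add (Lit 1) (Lit 2)) -- "(_ + _)"
```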
-
The ability to have multiple functions with the same name and different types is very nice, but there are still situations where ad-hoc polymorphism is useful.
An example is a function which calls `map`; it would be nice if it worked with any type which implements `map`.
There are many ways of implementing ad-hoc polymorphism: Haskell typeclasses, ML modules, and creating multiple copies of each function (used by Rust and others).
I propose a simplified typeclass mechanism for Koka.
Any variable can be marked as `implicit`, and function parameters can also be marked as `implicit`. When a function has `implicit` arguments and it is called, it will search for `implicit` variables which are in scope, and if there is an `implicit` variable which matches the type, it will then automatically pass it in. It's also possible to pass in `implicit` arguments explicitly, and to convert from `implicit` variables to regular variables, and from regular variables to `implicit` variables.

And that's it. That's the entire system. With this, it's possible to create Haskell typeclasses by combining `type` with `implicit`: the resulting Koka code is equivalent to a Haskell program that defines the same thing as a typeclass.
The Koka code is quite heavy-weight on the syntax, but this can potentially be solved with a bit of syntactic sugar.
The benefit of this light-weight typeclass system is that it is easy to implement and easy for the user to understand: it's the same as regular function arguments... except automatically passed in by the compiler.
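Concretely, the proposed encoding amounts to something like the following, written here in plain Haskell as neutral notation (the names `Adder`, `intAdder`, and `genericAdd` mirror the earlier example but are otherwise invented; in the proposal the compiler would fill in the instance argument automatically):

```haskell
-- A "typeclass" is just a record type, and an instance is just a value of it.
data Adder a = Adder { add :: a -> a -> a }

-- Corresponds to `genericAdd` taking its `Adder a` as an implicit argument.
genericAdd :: Adder a -> a -> a -> a
genericAdd adder left right =
  add adder (add adder (add adder left right) right) right

intAdder :: Adder Int
intAdder = Adder (+)

main :: IO ()
main = do
  print (genericAdd intAdder 1 2)    -- 7, with the "instance" passed explicitly
  print (genericAdd (Adder (*)) 2 3) -- 54, with a locally constructed instance
```

Because the instance is an ordinary value, it can also be constructed locally or stored inside a data structure, which is where the ML-module-like flexibility mentioned below comes from.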
It also gives a lot of the same power of ML modules, because it's possible to pass in `implicit` arguments explicitly, create `implicit` variables dynamically, and use locally scoped `implicit` variables.

One downside is that it's not possible to define `union` for maps or sets, because they assume that there is only a single typeclass instance per type. Instead, you need to store the `Ord` instance in the data structure itself and use `unionLeft` or `unionRight` (which might be slower than `union`). This is possible because implicits can be converted into regular values and then stored in data structures.

Another downside is that I'm not sure how to define advanced features like functional dependencies or type families. But maybe the simplicity of `implicit` is more important than advanced type features.