NHacker Next
A case against currying (emi-h.com)
paldepind2 1 days ago [-]
I completely agree with the points in this article and have come to the same conclusion after using languages that default to unary curried functions.

> I'd also love to hear if you know any (dis)advantages of curried functions other than the ones mentioned.

I think it fundamentally boils down to the curried style being _implicit_ partial application, whereas a syntax for partial application is _explicit_. And as is often the case, being explicit is clearer. If you see something like

    let f = foobinade a b
in a curried language then you don't immediately know if `f` is the result of foobinading `a` and `b` or if `f` is `foobinade` partially applied to some of its arguments. Without currying you'd either write

    let f = foobinade(a, b)
or

    let f = foobinade(a, b, $) // (using the syntax in the blog post)
and now it's immediately explicitly clear which of the two cases we're in.

This clarity not only helps humans, it also helps compilers give better error messages. In a curried language, if a function is mistakenly applied to too few arguments, the compiler can't always immediately detect the error. For instance, if `foobinade` takes 3 arguments, then `let f = foobinade a b` doesn't give rise to any errors, whereas a compiler can immediately detect the error in `let f = foobinade(a, b)`.

A syntax for partial application offers the same practical benefits of currying without the downsides (albeit losing some of the theoretical simplicity).
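To make this concrete, here is a small sketch in Python, with hand-rolled currying standing in for a curried language (`foobinade` and its body are made up):

```python
def foobinade(x):
    # A manually curried three-argument function: each call
    # consumes one argument and returns another function.
    return lambda y: lambda z: x + y * z

# An argument is "missing", but no error occurs here; f is
# silently a function of one argument rather than a number.
# The mistake only surfaces later, wherever f is used as a value.
f = foobinade(1)(2)

print(callable(f))  # True
print(f(3))         # 1 + 2 * 3 = 7
```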

riwsky 1 days ago [-]
The functional programming take is that “the result of foobinade-ing a and b” IS “foobinade applied to two of its arguments”. The application is not some syntactic pun or homonym that can refer to two different meanings—those are the same meaning.
AnimalMuppet 1 days ago [-]
Let us postulate two functions. One is named foobinade, and it takes three arguments. The other is named foobinadd, and it only takes two arguments. (Yes, I know, shoot anybody who actually names things that way.)

When someone writes

  f = foobinade a b
  g = foobinadd c d
there is no confusion to the compiler. The problem is the reader. Unless you have the signatures of foobinade and foobinadd memorized, you have no way to tell that f is a curried function and g is an actual result.

Whereas with explicit syntax, the parentheses say what the author thinks they're doing, and the compiler will yell at them if they get it wrong.

zahlman 1 days ago [-]
> Unless you have the signatures of foobinade and foobinadd memorized, you have no way to tell that f is a curried function and g is an actual result.

Yes, but the exact FP idea here is that this distinction is meaningless; that curried functions are "actual results". Or rather, you never have a result that isn't a function; `0` and `lambda: 0` (in Python syntax) are the same thing.

It does, of course, turn out that for many people this isn't a natural way of thinking about things.

raincole 1 days ago [-]
> Yes, but the exact FP idea here is that this distinction is meaningless; that curried functions are "actual results".

Everyone knows that. At least everyone who would click a post titled "A case against currying." The article's author clearly knows that too.

That's not the point. The point is that this distinction is very meaningful in practice, as many functions are only meant to be used in one way. It's extremely rare that you need to (printf "%d %d" foo). The extra freedom provided by currying is useful, but it should be opt-in.

Just because two things are fundamentally equivalent, it doesn't mean it's useless to distinguish them. Mathematics is the art of giving the same name to different things; and engineering is the art of giving different names to the same thing depending on the context.

kccqzy 1 days ago [-]
> It's extremely rare that

Not when a language embraces currying fully and then you find that it’s used all the fucking time.

It’s really as simple as that: a language makes the currying syntax easy, and programmers use it all the time; a language disallows currying or makes the currying syntax unwieldy, and programmers avoid it.

momentoftop 23 hours ago [-]
> It's extremely rare that you need to (printf "%d %d" foo)

I write stuff like `map (printf "%d %d" m) ns` all the time. I daresay I even do the map as a partial application, so double currying.
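The same pattern can be sketched in Python with `functools.partial` (values made up), which makes the partial application explicit:

```python
from functools import partial

m = 10
ns = [1, 2, 3]

# Partially apply the format string and m; each element of ns
# supplies the final argument, as in `map (printf "%d %d" m) ns`.
fmt = partial("{} {}".format, m)
print(list(map(fmt, ns)))  # ['10 1', '10 2', '10 3']
```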

gf000 22 hours ago [-]
But arguably your intent would be much more clear with something like `map (printf "%d %d" m _) ns` or a lambda.

I don't think parent is saying that partial application is bad, far from it. But to a reader it is valuable information whether it's partial or full application.

octachron 11 hours ago [-]
Not really; when reading `iter (printf "%d %d" m) ns`, I am likely to read it in three steps:

  - `iter`: this is a side-effect on a collection
  - `(printf`: ok, this is just printing, I don't care about what is printed, let's skip to the `)`
  - ns: ok, this is the collection being printed
Notice that having a lambda or a partial application `_` will only add noise here.

> But to a reader it is valuable information whether it's partial or full application.

This can be valuable information in some contexts, but in a functional language, functions are values. Thus a "partial application" (in terms of closure construction) might be better read as a full application, because the main type of concern in the current context is a functional type.

AnimalMuppet 1 days ago [-]
Fine, it's a regular type. It's still not the type I think it is. If it's an Int -> Int when I think it's an Int, that's still a problem, no matter how much Int -> Int is an "actual result".
kccqzy 1 days ago [-]
Come on, just write

    let f :: Int = foobinade a b
And the compiler immediately tells you that you are wrong: your type annotation does not unify with compiler’s inferred type.

And if you think this is verbose, well many traditional imperative languages like C have no type deduction and you will need to provide a type for every variable anyways.

AnimalMuppet 1 days ago [-]
I spent the last three years on the receiving end of mass quantities of code written by people who knew what they were writing but didn't do an adequate job of communicating it to readers who didn't already know everything.

What you say is true. And it works, if you're the author and are having trouble keeping it all straight. It doesn't work if the author didn't do it and you are the reader, though.

And that's the more common case, for two reasons. First, code is read more often than it's written. Second, when you're the author, you probably already have it in your head how many parameters foobinade takes when you call it, but when you're the reader, you have to go consult the definition to find out.

But if I was willing to do it, I could go through and annotate the variables like that, and have the compiler tell me everything I got wrong. It would be tedious, but I could do it.

NetMageSCW 6 hours ago [-]
Doesn’t that just imply that your tooling is inadequate? In LINQPad (and, I assume, VS, though I haven’t done it in a while), when you hover over a “var” declaration a tooltip tells you the actual type the compiler inferred.
jstanley 1 days ago [-]
If 0 and a function that always returns 0 are the same thing, does that make `lambda: lambda: 0` also the same? I suppose it must do, otherwise `0` and `lambda: 0` were not truly the same.
tikhonj 19 hours ago [-]
In a non-strict language without side-effects, having a function with no arguments does not make sense. Haskell doesn't even let you do that.

You can write a function that takes a single throw-away argument (eg 0 vs \ () -> 0) and, while the two have some slight differences at runtime, they're so close in practice that you almost never write functions taking a () argument in Haskell. (Which is very different from OCaml!)
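In a strict language the two really are distinct, which is what makes the Haskell situation unusual; a minimal Python sketch of the contrast:

```python
zero = 0                 # a plain value
zero_thunk = lambda: 0   # analogous to OCaml's `fun () -> 0`

# In a strict language these are different things: one is a number,
# the other must be called before it yields the number.
print(zero)                # 0
print(zero_thunk())        # 0
print(zero == zero_thunk)  # False
```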

ssivark 12 hours ago [-]
Yes, and that becomes more intuitive when you "un-curry" the nested lambdas into a single lambda with twice the number of arguments. The point is that the state of a constant does not depend whatsoever on the state of the (rest of the) world, however much of that state piles on.
fn-mote 1 days ago [-]
Another way to make the point: when you write 0, which do you mean?

In a pure language like Haskell, 0-ary functions <==> constants

skywhopper 1 days ago [-]
It’s not at all clear or the same to the new reader of the code.
riwsky 5 hours ago [-]
Sure—but that’s a property of the inferred types more so than the mere application syntax. It can be hard to revisit or understand the type of JS or unannotated Python expressions, too—but unlike those cases, the unknown-to-the-reader type of the Haskell code will always be known on the compiler/LSP side.
attila-lendvai 15 hours ago [-]
and a few weeks down the line authors also turn back into new readers...
NetMageSCW 6 hours ago [-]
I have a lot more long term memory than that.
tabwidth 22 hours ago [-]
[dead]
munchler 1 days ago [-]
Well, I totally disagree with this. One of the main benefits of currying is the ability to chain function calls together. For example, in F# this is typically done with the |> operator:

    let result =
        input
            |> foobinade a b
            |> barbalyze c d
Or, if we really want to name our partial function before applying it, we can use the >> operator instead:

    let f = foobinade a b >> barbalyze c d
    let result = f input
Requiring an explicit "hole" for this defeats the purpose:

    let f = barbalyze(c, d, foobinade(a, b, $))
    let result = f(input)
Or, just as bad, you could give up on partial function application entirely and go with:

    let result = barbalyze(c, d, foobinade(a, b, input))
Either way, I hope that gives everyone the same "ick" it gives me.
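For comparison, the same pipeline can be sketched with explicit partial application in Python (`foobinade`, `barbalyze`, and the `pipe` helper are all made up):

```python
from functools import partial, reduce

def foobinade(a, b, x):
    return a * x + b

def barbalyze(c, d, x):
    return (x - c) * d

def pipe(value, *fns):
    # A small stand-in for F#'s |> operator: thread the value
    # through each function from left to right.
    return reduce(lambda acc, fn: fn(acc), fns, value)

result = pipe(1, partial(foobinade, 2, 3), partial(barbalyze, 4, 5))
print(result)  # foobinade(2, 3, 1) = 5, then barbalyze(4, 5, 5) = 5
```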
emih 1 days ago [-]
You can still do this though:

  let result = (barbalyze(c, d, $) . foobinade(a, b, $)) input
Or if you prefer left-to-right:

  let result = input
    |> foobinade(a, b, $)
    |> barbalyze(c, d, $)
Maybe what isn't clear is that this hole operator would bind to the innermost function call, not the whole statement.
twic 1 days ago [-]
Even better, this method lets you pipeline into a parameter which isn't the last one:

  let result = input
    |> add_prefix_and_suffix("They said '", $, "'!")
raincole 1 days ago [-]
Yeah, especially since F# is a language meant to interoperate with .NET libraries (most not written with a "data argument last" mindset). Now I'm quite surprised that F# doesn't have this feature.
raincole 1 days ago [-]
Wow, this convinced me. It's so obviously the right approach when you put it this way.
Smaug123 1 days ago [-]
This is essentially how Mathematica does it: the sugar `Foo[x,#,z]&` is semantically the same as `Function[{y}, Foo[x,y,z]]`. The `&` syntax essentially controls what hole belongs where.
skybrian 1 days ago [-]
For pipelines in any language, putting one function call per line often works well. Naming the variables can help readability. It also makes using a debugger easier:

  let foos = foobinate(a, b, input)
  let bars = barbakize(c, d, foos)
Other languages have method call syntax, which allows some chaining in a way that works well with autocomplete.
RHSeeger 1 days ago [-]
> Naming the variables can help readability

It can, or it can't; depending on the situation. Sometimes it just adds weight to the mental model (because now there's another variable in scope).

skybrian 22 hours ago [-]
Sure, I like chained method calls too, for simple things. But it gets ridiculous sometimes where people write a ten-stage pipeline in a single expression and then call that "readable."
RHSeeger 20 hours ago [-]
I'm with you 100%. The main thing is that sometimes a "break point" (using a variable rather than _more_ chain) can help readability. And sometimes it makes things worse. It's really a case-by-case type of thing.
jhhh 1 days ago [-]
A benefit to using the currying style is that you can do work in the intermediate steps and use that later. It is not simply a 'cool' way to define functions. Imagine a logging framework:

  (log configuration identifier level format-string arg0 arg1 ... argN)
  
After each partial application step you can do more and more work narrowing the scope of what you return from subsequent functions.

  ;; Preprocessing the configuration is possible
  ;; Imagine all logging is turned off, now you can return a noop
  (partial log conf)
  ;; You can look up the identifier in the configuration to determine what the logger function should look like
  (partial log conf id)
  ;; You could return a noop function if the level is not enabled for the particular id
  (partial log config id level)
  ;; Pre-parsing the format string is now possible
  (partial log conf id level "%time - %id")
  
In many codebases I've seen a large amount of code is literally just to emulate this process with multiple classes, where you're performing work and then caching it somewhere. In simpler cases you can consolidate all of that in a function call and use partial application. Without some heroic work by the compiler you simply cannot do that in an imperative style.
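A rough Python sketch of that staged-logger idea, using closures in place of partial application (all names and the config shape are made up):

```python
def make_logger(min_level, ident, level):
    # Work that depends only on the earlier arguments happens once,
    # at "partial application" time, not on every log call.
    if level < min_level:
        return lambda msg: None           # logging disabled: a noop
    prefix = f"{ident}[{level}]: "        # precomputed once
    return lambda msg: print(prefix + msg)

log_info = make_logger(1, "app", 2)   # staging work happens here
log_debug = make_logger(1, "app", 0)  # resolves to the noop

log_info("started")     # prints "app[2]: started"
log_debug("not shown")  # prints nothing
```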
maleldil 9 hours ago [-]
This is a good point, but I think it's better to have to explicitly opt into currying with `partial`. Automatic currying can be confusing.
vq 1 days ago [-]
One "feature of currying" in Haskell that isn't mentioned in the fine article is that parts of the function may not be dependent on the last argument(s) and only needs to be evaluated once over many application of the last argument(s) which can be very useful when partially applied functions are passed to higher-order functions.

Functions can be explicitly written to do this, or it can be achieved through compiler optimisation.

emih 1 days ago [-]
That's a very good point, I never really thought about how this relates to the execution model & graph reduction and such. Do you have an example of a function where this can make a difference? I might add something to the article about it.

It's also a question of whether this is exclusive to a curried definition or if such an optimization may also apply to partial application with a special operator like in the article. I think it could, but the compiler might need to do some extra work?

taolson 1 days ago [-]
An example where this is useful is to help inline otherwise recursive functions, by writing the function to take some useful parameters first, then return a recursive function which takes the remaining parameters. This allows the function to be partially in-lined, resulting in better performance due to the specialization on the first parameters. For example, foldr:

  foldr f z = go
    where
      go [] = z
      go (x : xs) = f x (go xs)
when called with (+) and 0 can be inlined to

  go xs = case xs of
    [] -> 0
    (x : xs) -> x + go xs
which doesn't have to create a closure to pass around the function and zero value, and can subsequently inline (+), etc.
vq 1 days ago [-]
One slightly contrived example would be if you had a function that returned the point of a set closest to another given point.

getClosest :: Set Point -> Point -> Point

You could imagine getClosest building a quadtree internally, and that tree wouldn't depend on the second argument. I say slightly contrived because I would probably prefer to make the tree explicit if this was important.

Another example would be if you were wrapping a C-library but were exposing a pure interface. Say you had to create some object and lock a mutex for the first argument but the second was safe. If this was a function intended to be passed to higher-order functions then you might avoid a lot of unnecessary lock contention.

You may be able to achieve something like this with optimisations of your explicit syntax, but argument order is relevant for this. I don't immediately see how it would be achieved without compiling a function for every permutation of the arguments.

twic 1 days ago [-]
I think we need to see a few non-contrived examples, because I think in every case where you might take advantage of currying like this, you actually want to make it explicit, as you say.

The flip side of your example is that people see a function signature like getClosest, and think it's fine to call it many times with a set and a point, and now you're building a fresh quadtree on each call. Making the staging explicit steers them away from this.
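A sketch of that explicit staging in Python, with a plain list standing in for a real quadtree (all names made up):

```python
from math import dist

def build_index(points):
    # Stand-in for expensive preprocessing such as quadtree
    # construction; a distinct "index" value makes the staging
    # visible in the API instead of hiding it behind currying.
    return list(points)

def closest(index, p):
    return min(index, key=lambda q: dist(p, q))

index = build_index([(0, 0), (1, 1), (5, 5)])  # built exactly once
print([closest(index, p) for p in [(0.9, 0.9), (4, 4)]])
# [(1, 1), (5, 5)]
```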

12_throw_away 1 days ago [-]
> and now you're building a fresh quadtree on each call [...] Making the staging explicit steers them away from this.

Irrespective of currying, this is a really interesting point - that the structure of an API should reflect its runtime resource requirements.

addaon 1 days ago [-]
Consider a function like ‘match regex str’. While non-lazy languages may offer an alternate API for pre-compiling the regex to speed up matching, partial evaluation makes that unnecessary.
emih 1 days ago [-]
Those are nice examples, thanks.

I was imagining you might achieve this optimization by inlining the function. So if you have

  getClosest(points, p) = findInTree(buildTree(points), p)
And call it like

  myPoints = [...]
  map (getClosest(myPoints, $)) myPoints
Then the compiler might unfold the definition of getClosest and give you

  map (\p -> findInTree(buildTree(myPoints), p)) myPoints
Where it then notices the first part does not depend on p, and rewrite this to

  let tree = buildTree(myPoints) in map (\p -> findInTree(tree, p)) myPoints
Again, pretty contrived example. But maybe it could work.
vq 1 days ago [-]
I didn't consider inlining but I believe you're correct, you could regain the optimisation for this example since the function is non-recursive and the application is shallow. The GHC optimisation I had in mind is like the opposite of inlining: it factors a common part out of a lambda expression that doesn't depend on the variable.

I don't believe inlining can take you to the exact same place though. Thinking about explicit INLINE pragmas, I envision that if you were to implement your partial function application sugar you would have to decide whether the output of your sugar is marked INLINE and either way you choose would be a compromise, right? The compromise with Haskell and curried functions today is that the programmer has to consider the order of arguments, it only works in one direction but on the other hand the optimisation is very dependable.

ackfoobar 1 days ago [-]
> explicitly written to do this

In that case I want the signature of "this function pre-computes, then returns another function" and "this function takes two arguments" to be different, to show intent.

> achieved through compiler optimisation

Haskell is different in that its evaluation ordering allows this. But in strict evaluation languages, this is much harder, or even forbidden by language semantics.

Here's what Yaron Minsky (an OCaml guy) has to say:

> starting from scratch, I’d avoid partial application as the default way of building multi-argument functions.

https://discuss.ocaml.org/t/reason-general-function-syntax-d...

recursivecaveat 1 days ago [-]
Currying was recently removed from Coalton: https://coalton-lang.github.io/20260312-coalton0p2/#fixed-ar...
leoc 1 days ago [-]
> 3. Better type errors. With currying, writing (f 1 2) instead of (f 1 2 3) silently produces a partial application. The compiler happily infers a function type like :s -> :t and moves on. The real error only surfaces later, when that unexpected function value finally clashes with an incompatible type, often far from the actual mistake. With fixed arity, a missing argument is caught right where it happens.

'Putting things' (multi-argument function calls, in this case) 'in-band doesn't make them go away, but it does successfully hide them from your tooling', part 422.

emih 1 days ago [-]
Thanks for sharing, interesting to see that people writing functional languages also experience the same issues in practice. And they give some reasons I didn't think about.
brabel 1 days ago [-]
That's so cool. I already liked Coalton, and after this change I think it's definitely going to be even better. Can't wait to try it.
Blikkentrekker 24 hours ago [-]
> Simplicity: Every function takes exactly one input and produces exactly one output. No exceptions. If you didn’t care about the input or output, you used Unit, and we made special syntax for that.

Seems like a disaster to use s-expressions for a language like that. I love s-expressions but they only make sense for variadic languages. The entire point of them is to quickly delimit how many arguments are passed.

In, say, Haskell, `f x y z` is the same thing as `(((f x) y) z)`. That is definitely not the case with s-expressions; parentheses don't merely delimit, they denote function application. It would be like saying that `f(x,y,z)` is the same as `f(x)(y)(z)`, which it really isn't. The point of s-expressions is that you often find yourself calling functions with many arguments that are themselves the result of a function application, at which point `foo(a)(g(a,b), h(x,y))` just becomes easier to parse as `((foo a) (g a b) (h x y))`.

sparkie 19 hours ago [-]
S-expressions are more like the tupled argument form, but better.

    (f x y z)
Is equivalent to:

    (f . (x . (y . (z . ()))))
Every function takes one argument - a list.

Lists make partial application simpler than with tuples (at least Haskell style tuples), because we don't need to define a new form for each N-sized tuple. Eg, in Haskell you'd need:

    partial2 : (((a, b) -> z), a) -> (b -> z)
    partial3 : (((a, b, c) -> z), a) -> ((b, c) -> z)
    partial4 : (((a, b, c, d) -> z), a) -> ((b, c, d) -> z)
    ...
With S-expressions, we can just define a partial application which takes the first argument (the car of the original parameter list) and returns a new function taking a variadic number of arguments (the cdr of the original parameter list). Eg, using a Kernel operative:

    ($define! $partial
        ($vau (f first) env
            ($lambda rest
                (eval (list* f first rest) env))))

    ($define! f ($lambda (x y z) (+ x y z)))
    (f 3 2 1)
    => 6
    
    ($define! g ($partial f 3))
    (g 2 1)
    => 6
    
    ($define! h ($partial g 2))
    (h 1)
    => 6

    ($define! i ($partial h 1))
    (i)
    => 6
We could perhaps achieve the equivalent in Haskell explicitly with a multi-parameter typeclass and a functional dependency. Something like:

    class Partial full first rest | full first -> rest where
        partial :: (full -> z, first) -> (rest -> z)
        
    instance Partial ((a,b)) a b where
        partial (f, a) = \b -> f (a, b)
        
    instance Partial ((a, b, c)) a ((b, c)) where
        partial (f, a) = \(b, c) -> f (a, b, c)
        
    instance Partial ((a, b, c, d)) a ((b, c, d)) where
        partial (f, a) = \(b, c, d) -> f (a, b, c, d)

    ...
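Incidentally, Python's `functools.partial` behaves much like this list-based scheme: one generic helper peels off leading arguments regardless of the function's arity. A direct translation of the Kernel example:

```python
from functools import partial

def f(x, y, z):
    return x + y + z

g = partial(f, 3)  # like ($partial f 3)
h = partial(g, 2)  # like ($partial g 2)
i = partial(h, 1)  # like ($partial h 1)

print(f(3, 2, 1), g(2, 1), h(1), i())  # 6 6 6 6
```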
twic 1 days ago [-]
I couldn't agree more. Having spent a lot of time with a language with currying like this recently, it seems very obviously a misfeature.

1. Looking at a function call, you can't tell if it's returning data, or a function from some unknown number of arguments to data, without carefully examining both its declaration and its call site

2. Writing a function call, you can accidentally get a function rather than data if you leave off an argument; coupled with pervasive type inference, this can lead to some really tiresome compiler errors

3. Functions which return functions look just like functions which take more arguments and return data (card-carrying functional programmers might argue these are really the same thing, but semantically, they aren't at all - in what sense is make_string_comparator_for_locale "really" a function which takes a locale and a string and returns a function from string to ordering?)

3a. Because of point 3, our codebase has a trivial wrapper to put round functions when your function actually returns a function (so make_string_comparator_for_locale has type like Locale -> Function<string -> string -> order>), so now if you actually want to return a function, there's boilerplate at the return and call sites that wouldn't be there in a less 'concise' language!

I think programming languages have a tendency to pick up cute features that give you a little dopamine kick when you use them, but that aren't actually good for the health of a substantial codebase. I think academic and hobby languages, and so functional languages, are particularly prone to this. I think implicit currying is one of these features.

tikhonj 1 days ago [-]
> in what sense is make_string_comparator_for_locale "really" a function which takes a locale and a string and returns a function from string to ordering?

In the sense that "make_string_comparator" is not a useful concept. Being able to make a "string comparator" is inherently a function of being able to compare strings, and carving out a bespoke concept for some variation of this universal idea adds complexity that is neither necessary nor particularly useful. At the extreme, that's how you end up with Enterprise-style OO codebases full of useless nouns like "FooAdapter" and "BarFactory".

The alternative is to have a consistent, systematic way to turn verbs into nouns. In English we have gerunds. I don't have to say "the sport where you ski" and "the activity where you write", I can just say "skiing" and "writing". In functional programming we have lambdas. On top of that, curried functions are just a sort of convenient contraction to make the common case smoother. And hey, maybe the contraction isn't worth the learning curve or usability edge-cases, but the function it's serving is still important!

> Because of point 3, our codebase has a trivial wrapper to put round functions when your function actually returns a function

That seems either completely self-inflicted, or a limitation of whatever language you're using. I've worked on a number of codebases in Haskell, OCaml and a couple of Lisps, and I have never seen or wanted anything remotely like this.

marcosdumay 1 days ago [-]
> I think programming languages have a tendency to pick up cute features that give you a little dopamine kick when you use them, but that aren't actually good for the health of a substantial codebase.

That's not the case with Haskell.

Haskell has a tendency to pick up features that have deep theoretical reasoning and "mathematical beauty". Of course, that doesn't always correlate with codebase health very well either, and there's a segment of the community that is very vocal about dropping features because of that.

Anyway, the case here is that a superficial kind of mathematical beauty seems to conflict with a deeper case of it.

Blikkentrekker 24 hours ago [-]
I always felt Monads were an utterly disgusting hack that was otherwise quite practical, though. It didn't feel like mathematical beauty at all to me, but like a hack to keep the optimizer from sequencing events out of order.
ekipan 15 hours ago [-]
From the article:

    length = foldr (+) 0 . map (const 1)
    length2d = foldr (+) 0 . map length

    -- and the proposed syntax the author calls more readable:
    length = foldr((+), 0, $) . map(const(1), $)
    length2d = foldr((+), 0, $) . map(length, $)
But I don't understand. Why does the author think it's confusing to partially apply foldr and map but not (+), (.), const, and length? Applied consistently:

    length = (.)(foldr((+)($, $), 0, $), map(const(1, $), $), $)
    length2d = (.)(foldr((+)($, $), 0, $), map(length($), $), $)
And clearly no-one thinks this strawman is clearer. Further, it's impossible to make it 100% consistent: any function you write with polymorphic result could instantiate as a function needing more arguments. I think currying is the better default.
codethief 5 hours ago [-]
> Why does the author think it's confusing to partially apply foldr and map but not (+), (.), const, and length?

Neither of these functions is being partially applied in the code you cited. While the placeholder $ can be used to indicate all "free slots" of a function, you would only have to use it in partial applications of that function. (Analogous to how in mathematics the abstract index notation for tensors[0] is only really useful when you start contracting tensors, etc. Otherwise, the plain objects/tensors without indices are much easier to write & read.)

Specifically:

- (+) was already a function of two arguments, so usage of $ is unnecessary since (+)($, $) == (+). Similarly with the length function (a function of 1 argument): length($) == length.

- Function composition was being used as a binary operator between two functions. You just replaced the infix notation with prefix notation.

- I read const(1) as the function that maps everything to 1. I.e. `const` is a function of one argument x, which returns a function that always returns x. Once again, no need to indicate slots there.

[0]: See https://en.wikipedia.org/wiki/Abstract_index_notation and https://math.stackexchange.com/questions/455478/what-is-the-...

ekipan 50 minutes ago [-]
> - Function composition was being used as a binary operator between two functions. You just replaced the infix notation with prefix notation.

That's incorrect. Here's the definition of function compose:

  (f . g) x = f (g x)
So in `foldr (+) 0 . map (const 1)`, the author gives `f = foldr (+) 0` and `g = map (const 1)` but doesn't supply `x`. That's a partial application. Similarly for const:

  const x y = x
Even if I concede length and (+), these two are partially applied.

> you would only have to use it in partial applications of that function.

So why not const and (.)? If they're allowed to curry, why not foldr and map?

Pay08 1 days ago [-]
I'm biased here, since easy currying is by far my favourite feature in Haskell (it always bothers me that I have to explicitly create a lambda in Lisps), but the arguments in the article don't convince me, what with the syntactic overhead of the "tuple style".
lukev 1 days ago [-]
I'd go a step further and say that in business software, named parameters are preferable for all but the smallest functions.

Using curried OR tuple arg lists requires identifying an argument by its position. This saves room on the screen but adds mental overhead.

The fact is that arguments do always have names anyway and you always have to know what they are.

layer8 1 days ago [-]
I want to agree, but there is the tension that in business code, what you pass as arguments is very often already named like the parameter, so having to indicate the parameter name in the call leads to a lot of redundancy. And if you’re using domain types judiciously, the types are typically also different, hence (in a statically-typed language) there is already a reduced risk of passing the wrong parameter.

Maybe there could be a rule that parameters have to be named only if their type doesn’t already disambiguate them and if there isn’t some concordance between the naming in the argument expression and the parameter, or something along those lines. But the ergonomics of that might be annoying as well.

amluto 17 hours ago [-]
I write plenty of business code, and I do not like even the possibility of a mistake like:

    fn compute_thing(cost: whatever, num_widgets: whatever) -> Whatever;

    let cost = …;
    let num_widgets = …;
    let result = compute_thing(num_widgets, cost);
(This can be written in most any language, including Haskell or Lean, with slightly different syntax.)

One can prevent this very verbosely with the Builder pattern. Or one can use named parameters in languages that support them.

An interesting analogue is tensor math. In Einstein’s work, there were generally four dimensions and you probably wouldn’t lose track of which letter was which. In linear algebra, at least at the high school or early undergrad level, there are usually vectors and tensors and, well, that’s it. But in data crunching or modern ML, tensors have all kinds of cool axes, and for some reason we usually just identify them by which slot they are in the order that they happen to be in in the input tensor. Some people try to creatively make this “type safe” by specializing on the length of the dimension, which is an incomplete solution at best. I would love to see adoption of some solution that gives these things explicit names and does not ever guess which axis is being referenced.

(I find 95% of ML code and a respectable fraction of papers and descriptions to be locally incomprehensible because you need to look somewhere else to figure out what on Earth A • B' actually means.)

sestep 1 days ago [-]
This is an issue in Python but less so in languages like JavaScript that support "field name punning", where you pass named arguments via lightweight record construction syntax, and you don't need to duplicate a field name if it's the same as the local variable name you're using for that field's value.
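For readers unfamiliar with punning, here is a minimal TypeScript sketch; the `connect` function, its `Options` type, and the field names are made up for illustration:

```typescript
// Hypothetical options record for some connect() call.
type Options = { user: string; retries: number };

function connect({ user, retries }: Options): string {
  return `${user}:${retries}`;
}

const user = "alice";
const retries = 3;

// Punning: `{ user, retries }` is shorthand for `{ user: user, retries: retries }`,
// so field names need not be duplicated when the locals share the same names.
const result = connect({ user, retries });
```

Python keyword arguments have no such shorthand, which is where the duplication the parent comment mentions comes from.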
layer8 1 days ago [-]
That forces you to name the variable identically to the parameter. For example, you may want to call your variable `loggedInUser` when the fact that the user is logged in is important for the code’s logic, but then you can’t pass it as-is for a field that is only called `user`. Having to name the parameter leads to routinely having to write `foo: blaFoo` because just `blaFoo` wouldn’t match, or else to drop the informative `bla`. That’s part of the tension I was referring to.
twic 1 days ago [-]
OCaml has a neat little feature where it elides the parameter and variable name if they're the same:

  let warn_user ~message = ... (* the ~ makes this a named parameter *)

  let error = "fatal error!!" in
  warn_user ~message:error; (* different names, have to specify both *)

  let message = "fatal error!!" in
  warn_user ~message; (* same names, elided *)
The elision doesn't always kick in, because sometimes you want the variable to have a different name, but in practice it kicks in a lot, and makes a real difference. In a way, cases when it doesn't kick in are also telling you something, because you're crossing some sort of context boundary where some value is called different things on either side.
hutao 1 days ago [-]
One language that uses the tuple argument convention described in the article is Standard ML. In Standard ML, like OCaml and Haskell, all functions take exactly one argument. However, while OCaml and Haskell prefer to curry the arguments, Standard ML does not.

There is one situation, however, where Standard ML prefers currying: higher-order functions. To take one example, the type signature of `map` (for mapping over lists) is `val map : ('a -> 'b) -> 'a list -> 'b list`. Because the signature is given in this way, one can "stage" the higher-order function argument and represent the function "increment all elements in the list" as `map (fn n => n + 1)`.
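The same staging can be sketched in TypeScript with a hand-rolled curried `map` (a sketch, not Standard ML's actual basis library):

```typescript
// Curried map: take the function first, the list later.
const map = <A, B>(f: (a: A) => B) => (xs: A[]): B[] => xs.map(f);

// Stage the higher-order argument: "increment all elements in the list".
const incrementAll = map((n: number) => n + 1);

const out = incrementAll([1, 2, 3]); // [2, 3, 4]
```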

That being said, because of the value restriction [0], currying is less powerful: variables defined using partial application cannot be used polymorphically.

[0] http://mlton.org/ValueRestriction

emih 1 days ago [-]
I didn't know Standard ML, that's interesting.

And yeah I think this is the way to go. For higher-order functions like map it feels too elegant not to write it in a curried style.

bbkane 1 days ago [-]
The Roc devs came to a similar conclusion: https://www.roc-lang.org/faq#curried-functions

(Side note: if you're reading this Roc devs, could you add a table of contents?)

titzer 1 days ago [-]
I agree with this article. Tuples nicely unified multiple return values and multiple parameters. FWIW Scala and Virgil both support the _ syntax for the placeholder in a partial application.

    def add(x: int, y: int) -> int { return x + y; }
    def add3 = add(_, 3);
Or more simply, reusing some built-in functions:

    def add3 = int.+(_, 3);
ackfoobar 1 days ago [-]
As noted in the article:

> This feature does have some limitations, for instance when we have multiple nested function calls, but in those cases an explicit lambda expression is always still possible.

I've also complained about that a while ago https://news.ycombinator.com/item?id=35707689

---

The solution is to delimit the level of expression the underscore (or dollar sign suggested in the article) belongs to. In Kotlin they use braces and `it`.

    { add(it, 3) } // Kotlin
    add(_, 3) // Scala
Then modifying the "hole in the expression" is easy. Suppose we want to subtract the first argument by 2 before passing that to `add`:

    { add(subtract(it, 2), 3) } // Kotlin
    // add(subtract(_, 2), 3) // no, this means adding 3 to the function `subtract(_, 2)`
    x => { add(subtract(x, 2), 3) } // Scala
titzer 1 days ago [-]
I think I like the explicit lambda better; I prefer to be judicious with syntactic sugar and special variable names.

    fun x => add(subtract(x, 2), 3) // Virgil
ackfoobar 1 days ago [-]
Coming from Scala to Kotlin, this is what I thought as well. Seeing `it` felt very wrong, then I got used to it.
titzer 20 hours ago [-]
I don't mind adopting features from popular languages. (After all, Virgil using _ for partial application turned out to be a happy accident that aligned with Scala.) Adopting features that are popping up in other languages helps to reduce the explanation burden, but I'm not sure on this one. It took me about 10 years to finally settle on having `fun` as a keyword to introduce lambdas instead of the parser back-tracking madness that JS parsers do.
skybrian 1 days ago [-]
There are good ideas in functional languages that other languages have borrowed, but there are bad ideas too: currying, function call syntax without parentheses, Hindley-Milner type inference, and laziness by default (Haskell) are experiments that new languages shouldn’t copy.
raincole 1 days ago [-]
I believe one of the main reasons that F# has never really taken off is that Microsoft isn't afraid to borrow the good parts of F# into C#. (They really should've ported discriminated unions, though.)
runevault 1 days ago [-]
Currently, DUs are slated for the next version of C#, releasing at the end of this year. However, last I knew they only come boxed, which at least to me partly defeats the point of having them (being able to have multiple types inline, because of the way they share memory and have only a single size based on compiler optimizations).
jstrieb 1 days ago [-]
I like currying because it's fun and cool, but found myself nodding along throughout the whole article. I've taken for granted that declaring and using curried functions with nice associativity (i.e., avoiding lots of parentheses) is as ergonomic as partial application syntax gets, but I'm glad to have that assumption challenged.

The "hole" syntax for partial application with dollar signs is a really creative alternative that seems much nicer. Does anyone know of any languages that actually do it that way? I'd love to try it out and see if it's actually nicer in practice.

emih 1 days ago [-]
Glad to hear the article did what I meant for it to do :)

And yes, another comment mentioned that Scala supports this syntax!

runevault 1 days ago [-]
Clojure and Common Lisp also have macros that let you thread results from call to call, but you could argue that's cheating because of how flexible Lisp syntax is.
hencq 24 hours ago [-]
Clojure also has the anonymous function syntax with #(foo a b %) where you essentially get exactly this hole functionality (but with % instead of $). Additionally there’s partial that does partial application, so you could also do (partial foo a b).
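Outside Lisp, the same `partial` can be written as an ordinary higher-order function; a TypeScript sketch using variadic tuple types (assumes TypeScript 4.0+):

```typescript
// partial(f, ...fixed) fixes the leading arguments now and takes the rest later,
// analogous to Clojure's (partial f a b).
function partial<T extends unknown[], U extends unknown[], R>(
  f: (...args: [...T, ...U]) => R,
  ...fixed: T
): (...rest: U) => R {
  return (...rest: U) => f(...fixed, ...rest);
}

const add3 = (a: number, b: number, c: number) => a + b + c;
const add12 = partial(add3, 1, 2); // like (partial add3 1 2)
const r = add12(4); // 7
```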
rocqua 1 days ago [-]
Someone else in the comments mentioned that scala does this with _ as the placeholder.
drathier 1 days ago [-]
1. Such bad examples :( Tuples are data types you have to destructure, in every language. Somebody please show me a language where this doesn't require a tuple-to-function-argument translation:

  sayHi name age = "Hi I'm " ++ name ++ " and I'm " ++ show age
  people = [("Alice", 70), ("Bob", 30), ("Charlotte", 40)]
  -- ERROR: sayHi is String -> Int -> String, a person is (String, Int)
  conversation = intercalate "\n" (map sayHi people)

In Python you have `*people` to destructure the tuple into separate arguments, or pattern matching. In C-like languages you have structs you have to destructure.

2. And performance: you'd think a slow-down affecting every single function call would be high up on the optimization wish list, right? That's why it's implemented in basically every compiler, including non-FP compilers. Here's the GHC authors in 2004 declaring that obviously the optimization is in "any decent compiler". https://simonmar.github.io/bib/papers/eval-apply.pdf

3. Type errors, the only place where currying is actually bad, are not even mentioned directly. Accidentally passing a different number of arguments than expected will result in a compiler error.

Some very powerful and generic languages will happily support lots of weird code you throw at them instead of erroring out. Others will error out on things you'd expect them to handle just fine.

Here's Haskell supporting something most people would never want to use, giving it a proper type, and causing a confusing type error in any surrounding code when you leave out the parentheses around `+`:

  foldl (+) 0 [1,2,3] :: Num a => a
  foldl + 0 [1,2,3]
    :: (Foldable t, Num a1, Num ((b -> a2 -> b) -> b -> t a2 -> b),
        Num ([a1] -> (b -> a2 -> b) -> b -> t a2 -> b)) =>
       (b -> a2 -> b) -> b -> t a2 -> b
Is it bad that it has figured out that you (apparently) wanted to add things of type `(b -> a2 -> b) -> b -> t a2 -> b` as if they were numbers, and done what you told it to do? Drop it into any GPT of choice and it'll find the mistake for you right away.
Blikkentrekker 24 hours ago [-]
In SML, I believe. I never used SML, but as I understand it, in ML all functions technically take one argument, which may be a tuple. In Haskell and OCaml, all functions technically take one argument and just return a function that takes one argument again.

I never understood why the latter was so popular. It's just for automatic implicit partial application, which honestly should have explicit syntax. In Scheme one simply uses the `(cut f x y)` operator, which does a partial application and returns a function that consumes the remaining arguments; that is far more explicit. Granted, since Scheme is dynamically typed, implicit partial application would be a disaster there, but it's not like the error messages in OCaml and Haskell can't be confusing at times either.

I don't get simulating it with tuples either, to be honest. Nothing wrong with just letting functions take multiple arguments, and that's it. In Rust they oddly take multiple arguments as expected, but they can only return tuples to simulate returning multiple values, whereas in Scheme they just return multiple values. There's a difference between returning one value which is a tuple of multiple values, and actually returning multiple values.

I think automatic implicit partial application, like almost anything "implicit", is bad. But in Haskell or OCaml or even Rust such an operator would have to be a syntactic macro; it can't just be a normal function because there are no easy variadic functions, which, to be fair, are incredibly difficult without dynamic typing, and in practice just passing some kind of sequence is what you really want.

shawn_w 24 hours ago [-]
A bunch of Scheme implementations define little-known syntax for partial application[0] that lets you put limits on how many arguments have to be provided at each application step. Using the article's add example:

  (define (((add x) y) z) (+ x y z))
  (define add1 (add 1))
  (define add3 (add1 2))
  (add3 3) ; => 6
it gets tedious with lots of single-argument cases like the above, but in cases where you know you're going to be calling a function a lot with, say, the first three arguments always the same and the fourth varying, it can be cleaner than a function of three arguments that returns an anonymous lambda of one argument.

  (define ((foo a b c) d)
    (do-stuff))
  (for-each (foo 1 2 3) '(x y z))
vs

  (define (foo a b c)
    (lambda (d) (do-stuff)))
  (for-each (foo 1 2 3) '(x y z))

There's also a commonly supported placeholder syntax[1]:

    (define inc (cut + 1 <>))
    (inc 2) ; => 3
    (define (foo a b c d) (do-stuff))
    (for-each (cut foo 1 2 3 <>) '(x y z))
And assorted ways to define or adapt functions to make fully curried ones when desired. I like the "make it easy to do something complicated or esoteric when needed, but don't make it the default to avoid confusion" approach.

[0]: https://srfi.schemers.org/srfi-219/srfi-219.html

[1]: https://srfi.schemers.org/srfi-26/srfi-26.html

layer8 1 days ago [-]
I completely agree. Giving the first parameter of a function special treatment only makes sense in a limited subset of cases, while forcing an artificial asymmetry in the general case that I find unergonomic.
dragonwriter 23 hours ago [-]
A case for currying:

In languages in which every function is unary but there is convenience syntax for writing "multiargument" functions that produces curried functions (so that functions of type "a -> b -> c" can be written as if their type were "a b -> c"), but which also have tuples such that "multiargument" functions could equally conveniently be written with type "(a, b) -> c", and where the syntax for calling each kind of function is equally straightforward in situations that don't require "partial application" (where the curried form has a natural added utility), people overwhelmingly use the syntax that produces curried functions.

People predominantly use uncurried multiargument functions only in languages where writing and/or calling curried functions carries significant syntactic overhead.
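For concreteness, here are the two shapes side by side in a TypeScript sketch, where the curried form has to be spelled out as nested arrows:

```typescript
// Parameter-list style: one application, no partial application for free.
const addU = (a: number, b: number): number => a + b;

// Curried style: each arrow takes one argument and returns the next function.
const addC = (a: number) => (b: number): number => a + b;

const x = addU(1, 2); // full application: 3
const y = addC(1)(2); // full application: 3

// Partial application falls out of the curried form with no extra syntax:
const inc = addC(1);
const z = inc(41); // 42
```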

jwarden 1 days ago [-]
Here’s an article I wrote a while ago about a hypothetical language feature I call “folded application”, that makes parameter-list style and folded style equivalent.

https://jonathanwarden.com/implicit-currying-and-folded-appl...

zyxzevn 1 days ago [-]
With a language like Forth, you know that you can use a stack for data and apply functions to that data. With currying you put functions on a stack instead. This makes it weird, and it also obscures the dataflow.

With the most successful functional programming language, Excel, the dataflow is fully exposed, which makes it easy.

Certain functional programming languages prefer passing just one data item from one function to the next. One parameter in and one parameter out. And for this to work with more values, they need to use functions as output. It is an unnecessary cognitive burden. And APL programmers would love it.

Let's make an apple pie as an example. You give the apple and butter and flour to the cook. The cursed curry version would be "use knife for cutting, add cutting board, add apple, stand near table, use hand. Bowl, add table, put, flour, mix, cut, knife butter, mixer, put, press, shape, cut_apple." etc..

kubb 1 days ago [-]
The article lists two arguments against Currying:

  1) "performance is a bit of a concern"
  2) "curried function types have a weird shape"
2 is followed by a single example of how it doesn't work the way the author would expect it to in Haskell.

It's not a strong case in my opinion. Dismissed.

et1337 22 hours ago [-]
I also think there’s an interesting effect when cool functional language features like currying and closures are adopted by imperative languages. They make it way too easy to create state in a way that makes you FEEL like you’re writing beautiful pure functions. Of course, in a functional language everything IS pure and this is just how things work. But in an imperative language you can trick yourself into thinking you’ve gotten away with something. At one point I stored practically all state in local variables captured by closures. It was a dark time.
yibers 22 hours ago [-]
I'm actually fascinated by what you wrote. Why was it a dark time?
et1337 22 hours ago [-]
No encapsulation… huge functions with tons of local variables shared between closures… essentially global state in practice. I think at the time, objects with member variables felt “heavy” and local variables felt “light”. But the fact that they were so lightweight just gave me more opportunities to squirrel away state into random places with no structure around it. It really wasn’t all that horrific, and it helped me ship something quickly, but it wasn’t maintainable. These days I think the “heavy boilerplate” of grouping stuff into structs and objects forces me to slow down and think a bit harder about whether I really want to enshrine a new piece of state into the app’s data model. Most of the time I don’t.
mrkeen 23 hours ago [-]
If you're looking for this argument in a language closer to home, it's basically the opposite of your IDE's style guide:

If I write this Java:

  pair.map((a, b) -> Foo.merge(a, b));
My IDE flashes up with Lambda can be replaced with method reference and gives me

  pair.map(Foo::merge);
(TFA does not seem to be arguing against the idea of partial-function-application itself, as much as he wants languages to be explicit about using the full lambda terms and function-call-parentheses.)
gavinhoward 1 days ago [-]
Okay, but if you combine the curried and tuple styles, and add a dash of runtime function pointers, you can solve the expression problem. [1]

[1]: https://gavinhoward.com/2025/04/how-i-solved-the-expression-...

ajkjk 17 hours ago [-]
It's nice to see someone put this all into words. I've felt the same thing for years but never quite figured out how to articulate it.
Isognoviastoma 1 days ago [-]
> curried functions often don't compose nicely

Same for imperative languages with "parameter list" style. In Python, with

  def f(a, b): return c, d

  def g(k, l): return m, n

you can't do

  f(g(1, 2))

but have to use

  f(*g(1, 2))

which is analogous to uncurry, but operates on a value rather than a function.

TBH I can't name a language where such f(g(1,2)) would work.
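TypeScript comes close: a tuple-typed return value can be spread into a parameter list, playing the same role as Python's `*` (a sketch):

```typescript
// g returns a tuple; f takes two separate parameters.
const g = (k: number, l: number): [number, number] => [k + 1, l + 1];
const f = (a: number, b: number): [number, number] => [a + 1, b + 1];

// f(g(1, 2)) would not type-check: a tuple is not a parameter list.
// Spreading the tuple is the uncurry-like step:
const result = f(...g(1, 2)); // [3, 4]
```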

shawn_w 1 days ago [-]
perl, though that uses lists rather than multiple value or fixed-size tuples:

  #!/usr/bin/env perl
  use v5.36;
  
  sub f($a, $b) {
    return ($a+1, $b+1);
  }
  
  sub g($k, $l) {
    return ($k+1, $l+1);
  }
  
  say for f(g(1,2));
prints out

  3
  4
ifh-hn 22 hours ago [-]
I didn't know what currying was before I read the article, and having read some of it before I gave up, I think it's something to do with functions.
codethief 1 days ago [-]
I've long been thinking the same thing. In many fields of mathematics the placeholder $ from the OP is often written •, i.e. partial function application is written as f(a, b, •). I've always found it weird that most functional languages, particularly heavily math-inspired ones like Haskell, deviate from that. Yes, there are isomorphisms left and right but at the end of the day you have to settle on one category and one syntax. A function f: A × B -> C is simply not the same thing as a function f: A -> B -> C. Stop treating it like it is.
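The distinction, and the explicit isomorphism between the two shapes, can be sketched in TypeScript:

```typescript
// Two distinct function shapes...
type Uncurried<A, B, C> = (pair: [A, B]) => C;
type Curried<A, B, C> = (a: A) => (b: B) => C;

// ...related by an explicit isomorphism rather than treated as the same thing.
const curry = <A, B, C>(f: Uncurried<A, B, C>): Curried<A, B, C> =>
  (a) => (b) => f([a, b]);
const uncurry = <A, B, C>(f: Curried<A, B, C>): Uncurried<A, B, C> =>
  ([a, b]) => f(a)(b);

const mul: Uncurried<number, number, number> = ([a, b]) => a * b;
const r1 = curry(mul)(6)(7);            // 42
const r2 = uncurry(curry(mul))([6, 7]); // 42
```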
kajaktum 1 days ago [-]
I feel like not having currying means your language becomes semantically more complicated, because where do lambdas come from?
calf 1 days ago [-]
What's the steelman argument though? Why do languages like Haskell have currying? I feel like that is not set out clearly in the argument.
mrkeen 23 hours ago [-]
I have a web backend with a top-level server that looks something like:

  ...

  :<|> collectionsServer compactor env registry
  :<|> queryServer qp sp struc registry warcFileReader logger
  :<|> indexationServer fetcher indexer

  ...
I.e. a request coming into the top-level server will go into the collections server or the query server or the indexation server. Each server is further broken down (collections has 4 routes, query has 4 routes, indexation has 5 routes.)

So let's try making the arguments of just the collections server explicit. (It will take me too long to try to do them all.)

You can 'list' collections, 'delete' a collection, merge collectionA into collectionB, or get the directory where the collections live. So the input (the lambda term(s) we're trying to make explicit) can be () or (collectionName) or (collectionNameA, collectionNameB) or ().

In order to put these lambda terms explicitly into the source code, we need to add four places to put them, by replacing collectionsServer with the routes that it serves:

  ...

  :<|> (       listCollections registry
         :<|> (\collectionName -> deleteCollection env registry collectionName)
         :<|> (\collectionName1 collectionName2 -> mergeInto compactor collectionName1 collectionName2)
         :<|>  getCollectionDir env ) 
  :<|> queryServer qp sp struc registry warcFileReader logger
  :<|> indexationServer fetcher indexer

  ...
And now you know what explicit lambda terms collectionsServer takes!
tikhonj 18 hours ago [-]
The practical upside is that it makes using higher-order functions much smoother, with less distracting visual noise.

In Haskell this comes up all over the place. It's somewhat nice for "basic" cases (`map (encode UTF8) lines` vs `map (\ line -> encode UTF8 line) lines`) and, especially, for more involved examples with operators: `encode <$> readEncoding env <*> readStdin` vs, well, I don't even know what.

You could replace the latter uses with some alternative or special syntax that covered the most common cases, like replacing monads with an effect system that used direct syntax, but that would be a lot less flexible and extensible. Libraries would not be able to define their own higher-order operations that did not fit into the effect system without incurring a lot of syntactic overhead, which would make higher-order abstractions and embedded DSLs much harder to use. The only way I can think of for recovering a similar level of expressiveness would be to have a good macro system. That might actually be a better alternative, but it has its own costs and downsides!
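The `map (encode UTF8) lines` example translates directly; a TypeScript sketch with a made-up curried `encode`:

```typescript
// Hypothetical curried encoder: fix the encoding, get a per-line function.
const encode = (encoding: string) => (line: string): string =>
  `[${encoding}] ${line}`;

const lines = ["a", "b"];

// Curried: no throwaway lambda at the call site.
const out1 = lines.map(encode("utf8"));

// Uncurried equivalent of `\line -> encode UTF8 line`:
const out2 = lines.map((line) => encode("utf8")(line));
```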

emih 1 days ago [-]
Mathematically it's quite pretty, and it gives you elegant partial application for free (at least if you want to partially apply the first N arguments).
calf 2 hours ago [-]
Well, I disagree. You are, effectively, calling entire textbooks and CS subdisciplines merely "pretty", which is again the strawman I am referring to. This is like calling theoretical Turing-award-level advances mathematically pretty. I hope you see why that is problematic and biased framing.

More plausible is that the Haskell designers recognized that currying is a fundamental phenomenon of the lambda calculus, so it needed some kind of primitive syntax. I'm not an expert, but that is the most reasonable supposition for a rationale to start with. One can then argue whether the syntax is good or not, but doing away with currying entirely changes the premise of recognizing fundamental properties of Turing-complete functional programming language paradigms. It's not about prettiness, it's about the science.

01HNNWZ0MV43FF 1 days ago [-]
I've never ever run into this. I haven't seen currying or partial application since college. Am I the imperative Blub programmer, lol?
messe 1 days ago [-]
What benefit does drawing a distinction between parameter list and single-parameter tuple style bring?

I'm failing to see how they're not isomorphic.

Kambing 1 days ago [-]
They are isomorphic in the strong sense that their logical interpretations are identical. Applying Curry-Howard, a function type is an implication, so a curried function with type A -> B -> C is equivalent to an implication that says "If A, then if B, then C." Likewise, a tuple is a conjunction, so a non-curried function with type (A, B) -> C is equivalent to the logic statement (A /\ B) -> C, i.e., "If A and B then C." Both logical statements are equivalent, i.e., have the same truth tables.

However, as the article outlines, there are differences (both positive and negative) to using functions with these types. Curried functions allow for partial application, leading to elegant definitions; e.g., in Haskell, we can define a function that sums over lists as `sum = foldl (+) 0`, where we leave out foldl's final list argument, giving us a function expecting a list that performs the behavior we expect. However, this style of programming can lead to weird games and unwieldy code because of the positional nature of curried functions, e.g., having to use function combinators such as Haskell's `flip` function (with type (A -> B -> C) -> B -> A -> C) to juggle arguments you do not want to fill to the end of the parameter list.
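Both points can be sketched in TypeScript: `sum` via a curried fold, and `flip` to juggle argument order (a sketch mirroring the Haskell, not real library code):

```typescript
// Curried left fold, mirroring Haskell's foldl.
const foldl = <A, B>(f: (acc: B, x: A) => B) => (init: B) => (xs: A[]): B =>
  xs.reduce(f, init);

// Elegant partial application: leave off the final list argument.
const sum = foldl((acc: number, x: number) => acc + x)(0);
const s = sum([1, 2, 3]); // 6

// flip swaps the first two arguments of a curried function.
const flip = <A, B, C>(f: (a: A) => (b: B) => C) => (b: B) => (a: A): C =>
  f(a)(b);

const prepend = (a: string) => (b: string): string => a + b;
const p = flip(prepend)("world")("hello "); // "hello world"
```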

messe 1 days ago [-]
Please see my other comment below, and maybe re-read the article. I'm not asking what the difference is between curried and non-curried. The article draws a three way distinction, while I'm asking why two of them should be considered distinct, and not the pair you're referring to.
Kambing 1 days ago [-]
Apologies, I was focused on the usual pairing in this space and not the more subtle one you're talking about. As others have pointed out, there isn't really a semantic difference between the two. Both approaches to function parameters produce the same effect. The differences are purely in "implementation," either theoretically or in terms of systems-building.

From a theoretical perspective, a tuple expresses the idea of "many things" and a multi-argument parameter list expresses the idea of both "many things" and "function arguments." Thus, from a cleanliness perspective for your definitions, you may want to separate the two, i.e., require functions to have exactly one argument and then pass a tuple when multiple arguments are required. This theoretical cleanliness does result in concrete gains: writing down a formalism for single-argument functions is decidedly cleaner (in my opinion) than for multi-argument functions, and implementing a basic interpreter off of this formalism is, subsequently, easier.

From a systems perspective, there is a clear downside in this space. If tuples exist on the heap (as they do for most functional languages), you induce a heap allocation when you want to pass multiple arguments! This pitfall is evident with the semi-common beginner's mistake with OCaml algebraic datatype definitions where the programmer inadvertently wraps the constructor type with parentheses, thereby specifying a constructor of one argument that is a tuple instead of a multi-argument constructor (see https://stackoverflow.com/questions/67079629/is-a-multiple-a... for more details).

emih 1 days ago [-]
That's a fair point, they are all isomorphic.

The distinction is mostly semantic so you could say they are the same. But I thought it makes sense to emphasize that the former is a feature of function types, and the latter is still technically single-parameter.

I suppose one real difference is that you cannot feed a tuple into a parameter list function. Like:

  fn do_something(name: &str, age: u32) { ... }

  let person = ("Alice", 40);
  do_something(person); // doesn't compile

recursivecaveat 1 days ago [-]
Probably just that having parameter-lists as a specific special feature makes them distinct from tuple types. So you may end up with packing/unpacking features to convert between them, and a function being generic over its number of parameters is distinct from it being generic over its input types. On the other hand you can more easily do stuff like named args or default values.
layer8 1 days ago [-]
The parameter list forces the individual arguments to be visible at the call site. You cannot separate the packaging of the argument list from invoking the function (barring special syntactic or library support by the language). It also affects how singleton tuples behave in your language.

The article is about programmer ergonomics of a language. Two languages can have substantially different ergonomics even when there is a straightforward mapping between the two.

rocqua 1 days ago [-]
It's not that they are meaningfully different. It's just acknowledging that if you really want currying, you can say 'why not just use a single parameter of tuple type'.

Then there's an implication of 'sure, but that doesn't actually help much if it's not standard', and then it's not addressed further.

Pay08 1 days ago [-]
The tuple style can't be curried (in Haskell).
messe 1 days ago [-]
That's not what I'm talking about.

The article draws a three way distinction between curried style (à la Haskell), tuples and parameter list.

I'm talking about the distinction it claims exists between the latter two.

disconcision 1 days ago [-]
all three are isomorphic. but in some languages if you define a function via something like `function myFun(x: Int, y: Bool) = ...` and also have some value `let a: (Int, Bool) = (1, true)` it doesn't mean you can call `myFun(a)`. because a parameter list is treated by the language as a different kind of construct than a tuple.
antonvs 1 days ago [-]
A language which truly treats an argument list as a tuple can support this:

    args = (a, b, c)
    f args
…and that will have the effect of binding a, b, and c as arguments in the called function.

In fact many “scripting” languages, like Javascript and Python, support something close to this using their array type. If you squint, you can see them as languages whose functions take a single argument that is equivalent to an array. At an internal implementation level this equivalence can be messy, though.

Lower level languages like C and Rust tend not to support this.

Pay08 1 days ago [-]
Rust definitely should. C++'s std::initializer_list is a great tool, and you wouldn't need macros for variadic functions anymore.
naasking 1 days ago [-]
Presumably creating a different class for parameter lists allows you to extend it with operations that aren't natural to tuples, like named arguments.
instig007 1 days ago [-]
if you don't find currying essential you haven't done pointfree enough. If you haven't done pointfree enough you haven't picked up equational reasoning yet, and that's the thing that holds you back in your ability to read abstractions easily, which in turn guides your arguments on clarity.
talkingtab 1 days ago [-]
[flagged]
emih 1 days ago [-]
It's not that serious :)
raincole 1 days ago [-]
Could you explain how this comment is relevant?
leoc 1 days ago [-]
Right. Currying as the default means of passing arguments in functional languages is a gimmick, a hack in the derogatory sense. It's low-level and anti-declarative.
hrmtst93837 1 days ago [-]
[dead]
mkprc 1 days ago [-]
Prior to this article, I didn't think of currying as being something a person could be "for" or "against." It just is. The fact that a function of multiple inputs can equivalently be thought of as a function of a tuple, which can in turn be thought of as a composite of single-input functions that return functions, is about cognition and understanding structure, not code syntax.
kevincox 1 days ago [-]
But it is about code syntax. Languages like Haskell make it part of the language by only supporting single-argument functions. So currying is the default behaviour for programmers.

I think you are focusing on the theoretical aspect of partial application and missing the actual argument of the article, which is that having it be the default, implicit way of defining and calling functions isn't a good programming interface.

bbkane 1 days ago [-]
Similar to how lambda calculus "just is" (and it's very elegant and useful for math proofs), but nobody writes non-trivial programs in it...
tromp 1 days ago [-]
Make that almost nobody.

I wrote a non-trivial lambda program [1] which enumerates proofs in the Calculus of Constructions to demonstrate [2] that BBλ(1850) > Loader's Number.

[1] https://github.com/tromp/AIT/blob/master/fast_growing_and_co...

[2] https://codegolf.stackexchange.com/questions/176966/golf-a-n...

ajkjk 17 hours ago [-]
You can be for or against anything. This is a lot like having an opinion about, say, Oxford commas in a style guide, or the format of a tax form. Which is to say: not likely to do anything in the short term, until the day that someone is designing a new language / set of forms, in which case promoting the stance ahead of time might affect their decision-making.
AnimalMuppet 1 days ago [-]
I'm a programmer, not a computer scientist. The equivalence is a computer science thing. They are logically equivalent in theoretical computer science. Fine.

They are not equally easy for me to use when I'm writing a program. So from a software engineering perspective, they are very much not the same.
