
Logs: freenode/#haskell

2020-10-27 21:02:05 feuerbach joins (~feuerbach@unaffiliated/feuerbach)
2020-10-27 21:03:45 <Athas> It's a lot easier to write fast C than it is to write fast C.
2020-10-27 21:04:00 <Athas> And fast C looks a lot more like idiomatic C than fast Haskell looks like idiomatic Haskell.
2020-10-27 21:04:39 <Athas> Er: It's a lot easier to write fast C than it is to write fast Haskell.
2020-10-27 21:04:39 × yianni quits (18390fbe@d24-57-15-190.home.cgocable.net) (Ping timeout: 245 seconds)
2020-10-27 21:06:09 <davean> Hum. I'd say yes and no. In C it "looks" idiomatic sometimes because there's no representation at all that it's different than a horrifically slower design. Most C programmers I know would accidentally trample some massive optimizations because they didn't see they were there, because the language has literally zero representation of the optimization. That said, there's a lot of basic optimization mistakes
2020-10-27 21:06:11 <davean> people make in Haskell that don't look much different either. (Though some super important optimizations are directly not idiomatic Haskell, and that's sad and GHC should improve, because the ones I'm thinking of shouldn't have to be done by hand at all)
2020-10-27 21:06:15 conal joins (~conal@198.8.81.89)
2020-10-27 21:06:55 <davean> Also, more Haskell optimizations are actually optimizations than ways to trick the compiler into generating the code you want.
2020-10-27 21:07:00 heatsink joins (~heatsink@107-136-5-69.lightspeed.sntcca.sbcglobal.net)
2020-10-27 21:07:04 <davean> So they stay optimizations.
2020-10-27 21:07:05 GyroW_ joins (~GyroW@ptr-48ujrfd1ztq5fjywfw3.18120a2.ip6.access.telenet.be)
2020-10-27 21:07:05 × GyroW_ quits (~GyroW@ptr-48ujrfd1ztq5fjywfw3.18120a2.ip6.access.telenet.be) (Changing host)
2020-10-27 21:07:05 GyroW_ joins (~GyroW@unaffiliated/gyrow)
2020-10-27 21:07:22 <tromp> It's also a lot easier to write correct Haskell than it is to write correct C :-)
2020-10-27 21:07:30 <davean> Also C compilers are just *smarter*
2020-10-27 21:07:49 <Athas> How are C compilers smarter?
2020-10-27 21:08:03 <davean> Athas: Things like polygonal optimization for ASM instruction dependency breaking.
2020-10-27 21:08:09 <davean> Athas: C compilers try to optimize code.
2020-10-27 21:08:25 <davean> GHC translates what you write into ASM pretty directly.
2020-10-27 21:08:26 × GyroW quits (~GyroW@unaffiliated/gyrow) (Ping timeout: 272 seconds)
2020-10-27 21:08:48 <davean> GHC can't even unroll a fold against a CAF.
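A minimal sketch of the shape of code davean's CAF remark describes (the names here are illustrative, not from the log): a fold over a top-level constant applicative form whose inputs are all statically known, yet which GHC compiles as a runtime traversal rather than unrolling it to a constant.

```haskell
module Main where

-- A CAF: a top-level value with no arguments, allocated once and
-- shared for the life of the program.
smallPrimes :: [Int]
smallPrimes = [2, 3, 5, 7, 11, 13]

-- A fold over the CAF. Every input is known at compile time, but
-- (per the claim above) GHC does not unroll this fold into the
-- constant 30030; it runs the loop at runtime.
primeProduct :: Int
primeProduct = foldr (*) 1 smallPrimes

main :: IO ()
main = print primeProduct  -- prints 30030
```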
2020-10-27 21:08:53 × conal quits (~conal@198.8.81.89) (Client Quit)
2020-10-27 21:08:54 <Athas> For things like instruction scheduling and register allocation? When going through LLVM, GHC should benefit from the same optimisations.
2020-10-27 21:09:03 <davean> Athas: incorrect.
2020-10-27 21:09:11 teardown joins (~user@unaffiliated/mrush)
2020-10-27 21:09:59 <Athas> Why not?
2020-10-27 21:10:12 <davean> for a number of reasons. For one, LLVM doesn't have enough semantic representation left.
2020-10-27 21:10:27 <davean> For another, Haskell has more semantics defined.
2020-10-27 21:10:38 <davean> which means that LLVM doesn't have the analysis capability
2020-10-27 21:10:51 <davean> LLVM is fairly weak in understanding semantics; it's too late for a number of things.
2020-10-27 21:11:06 <Athas> Could you clarify what you mean by polygonal optimization? I'm not sure I've heard that term before (is it like polyhedral optimisation?), but my work is mostly in high-level optimisations.
2020-10-27 21:11:14 <tomsmeding> why does it have that information then if it receives code from e.g. clang?
2020-10-27 21:11:21 <davean> er, yes, sorry, it got autocorrected it seems.
2020-10-27 21:11:46 <davean> tomsmeding: well for one they're designed for each other.
2020-10-27 21:11:53 <tomsmeding> sure
2020-10-27 21:12:03 <Athas> I'm not sure GCC or Clang does polyhedral optimisations by default, but I could be wrong.
2020-10-27 21:12:10 <tomsmeding> but then it sounds to me like ghc is leaving some llvm attributes on the table
2020-10-27 21:12:21 <Athas> Also, LLVM for sure only does polyhedral optimisations at the LLVM IR level (with Polly), and I'm not sure the C compiler helps.
2020-10-27 21:12:31 <Athas> After all, LLVM barely even has loops - they are reconstructed on demand.
2020-10-27 21:12:55 <davean> yes but based on the concept of how the C compiler works.
2020-10-27 21:13:08 <davean> So GHC has things like boxing.
2020-10-27 21:13:21 <Athas> Sure, LLVM shows its lineage as a C compiler backend, but I thought mostly in the area of nasty undefined behaviour semantics.
2020-10-27 21:13:41 <davean> Athas: A) not only B) uh, don't you think that's the thing that's directly relevant here?
2020-10-27 21:14:13 <Athas> By "undefined behaviour semantics", I mean things like LLVM removing some infinite loops, because they happen to be undefined in C.
2020-10-27 21:14:25 <Athas> I'm not sure it matters much for the kinds of optimisations that would help GHC.
2020-10-27 21:14:34 conal joins (~conal@198.8.81.89)
2020-10-27 21:15:25 × raehik quits (~raehik@cpc95906-rdng25-2-0-cust156.15-3.cable.virginm.net) (Ping timeout: 240 seconds)
2020-10-27 21:15:29 <Athas> Actually, I'm not really sure which optimisations would help GHC! Better automatic unboxing maybe?
2020-10-27 21:15:30 hekkaidekapus_ joins (~tchouri@gateway/tor-sasl/hekkaidekapus)
2020-10-27 21:16:27 <davean> Well yes, though that's not the sort of thing LLVM can reason about. Also just inlining certain things. There are a lot. I have further studies in it to do, but have been working on a bit of a search for which main ones it's missing.
2020-10-27 21:17:03 × hekkaidekapus quits (~tchouri@gateway/tor-sasl/hekkaidekapus) (Ping timeout: 240 seconds)
2020-10-27 21:17:15 teardown_ joins (~user@unaffiliated/mrush)
2020-10-27 21:17:17 <Athas> GHC does a lot of inlining, doesn't it? It's the enabler of all the other big GHC-level optimisations, like fusion, or anything else driven by simplification rules.
2020-10-27 21:17:20 × teardown_ quits (~user@unaffiliated/mrush) (Client Quit)
2020-10-27 21:17:24 <davean> You can't "just" go from boxed to unboxed sums for example, and returning stuff as an unboxed tuple can be pretty massive.
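The unboxed-tuple point can be sketched like this (`divMod#` is a made-up name for illustration; only the `(# , #)` return type matters): returning results in an unboxed tuple avoids allocating a heap pair for the result.

```haskell
{-# LANGUAGE UnboxedTuples, MagicHash #-}
module Main where

-- Returning an unboxed tuple (# a, b #): the two results come back
-- without an intermediate heap-allocated pair. This is the kind of
-- change that is not "idiomatic" surface Haskell but can matter a lot.
divMod# :: Int -> Int -> (# Int, Int #)
divMod# n d = (# n `div` d, n `mod` d #)

main :: IO ()
main =
  case 17 `divMod#` 5 of
    (# q, r #) -> print (q, r)  -- prints (3,2)
```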
2020-10-27 21:17:41 <davean> Athas: I mean fusion and such is some pretty basic code rewriting.
2020-10-27 21:18:00 <davean> Athas: It's a moderate framework for code no one optimized at all.
2020-10-27 21:18:32 × justanotheruser quits (~justanoth@unaffiliated/justanotheruser) (Ping timeout: 260 seconds)
2020-10-27 21:18:37 <davean> These optimizations *are* semantics-changing though, I want to point out. Which means they're very hard to talk about
2020-10-27 21:18:42 <davean> and they're also data representation changing.
2020-10-27 21:18:43 <Athas> I think there is great value in optimisations that let us write modular code without overhead, which is exactly what fusion does (in ideal cases).
2020-10-27 21:18:57 <davean> Athas: with *less* overhead
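The "modular code without overhead" point about fusion can be illustrated with a small pipeline (names illustrative): each stage is written as a separate list combinator, and with `-O` GHC's foldr/build rewrite rules can fuse the stages so no intermediate lists are materialized.

```haskell
module Main where

-- Three modular stages: filter, map, sum. Written naively this
-- would build two intermediate lists; GHC's fusion rules can
-- rewrite the composition into a single loop.
pipeline :: [Int] -> Int
pipeline = sum . map (* 2) . filter even

main :: IO ()
main = print (pipeline [1..10])
-- evens: [2,4,6,8,10]; doubled: [4,8,12,16,20]; sum: 60
```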
2020-10-27 21:19:01 <Athas> An optimisation that changes semantics is simply wrong, in the nomenclature I'm familiar with.
2020-10-27 21:19:14 × teardown quits (~user@unaffiliated/mrush) (Quit: leaving)
2020-10-27 21:19:22 <davean> Athas: many change semantics locally but won't change them outside the function boundaries for example.
2020-10-27 21:19:48 zq parts (~zq@xorshift.org) ()
2020-10-27 21:20:50 <Athas> I'm still not sure I understand. Could you name an example of such an optimisation?
2020-10-27 21:21:03 <davean> Athas: well, a bang pattern.
2020-10-27 21:21:07 × thc202 quits (~thc202@unaffiliated/thc202) (Ping timeout: 260 seconds)
2020-10-27 21:21:20 <davean> That line evaluates differently but the function probably doesn't if the bang is appropriate.
2020-10-27 21:21:41 <Athas> Those are certainly semantics-changing in an observable way, at least in general.
2020-10-27 21:21:57 <davean> in general yes, but in many specific cases no
2020-10-27 21:22:04 <davean> Hence GHC's strictness analysis
2020-10-27 21:22:07 <Athas> GHC only does the equivalent of adding bang patterns when the strictness analyser determines it can be done without any observable semantic effect.
2020-10-27 21:22:21 <davean> Right, but define "observable" there
2020-10-27 21:22:31 <davean> Where is the observer?
2020-10-27 21:22:35 <Athas> With respect to Haskell's (unwritten...) operational semantics.
2020-10-27 21:22:38 teardown joins (~user@unaffiliated/mrush)
2020-10-27 21:22:46 <davean> that line *did* change, but the function it's in didn't, usually, is the answer
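The bang-pattern discussion above can be made concrete (a minimal sketch; the function names are illustrative). The bang changes how that line evaluates, accumulating strictly instead of building a thunk chain, but because `(+)` on `Int` is total, the function's observable result is unchanged; this is exactly the condition GHC's strictness analyser checks before making the same change automatically.

```haskell
{-# LANGUAGE BangPatterns #-}
module Main where

-- Lazy accumulator: builds a chain of (+) thunks, forced only at the end.
sumLazy :: [Int] -> Int
sumLazy = go 0
  where
    go acc []       = acc
    go acc (x : xs) = go (acc + x) xs

-- Bang pattern: the accumulator is forced on each step. The evaluation
-- of this line changes, but the function's result does not.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc []       = acc
    go !acc (x : xs) = go (acc + x) xs

main :: IO ()
main = do
  print (sumLazy [1 .. 100])    -- prints 5050
  print (sumStrict [1 .. 100])  -- prints 5050
```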
2020-10-27 21:23:18 <monsterchrom> It doesn't look like this conversation is getting productive.
2020-10-27 21:23:23 <davean> no it does not
2020-10-27 21:23:34 × teardown quits (~user@unaffiliated/mrush) (Client Quit)
2020-10-27 21:23:41 <davean> I've been looking for how to step away from it politely.
2020-10-27 21:23:44 PlasmaStrike joins (~mattplasm@38.73.141.198)
2020-10-27 21:24:01 <monsterchrom> I recommend "I need a drink" :)
2020-10-27 21:24:26 <monsterchrom> I'm always fond of "computer science has become a bit too technical, let's go for a drink"
2020-10-27 21:24:52 <monsterchrom> Jay Misra said that after a conference.
2020-10-27 21:24:53 × dbmikus quits (~dbmikus@cpe-76-167-86-219.natsow.res.rr.com) (Ping timeout: 258 seconds)
2020-10-27 21:25:24 <monsterchrom> And of all people, he wrote a super technical, hard-to-follow proof in a paper (though not for that conference).
2020-10-27 21:26:00 teardown joins (~user@unaffiliated/mrush)
2020-10-27 21:26:23 ahmr88 joins (~ahmr88@cpc85006-haye22-2-0-cust131.17-4.cable.virginm.net)
2020-10-27 21:26:36 × teardown quits (~user@unaffiliated/mrush) (Client Quit)
2020-10-27 21:27:08 <monsterchrom> To be fair, his proof was merely operational semantics chasing. In the conference, some of the speakers inflicted monads on us.
2020-10-27 21:27:17 × bob_twinkles quits (~quassel@ec2-52-37-66-13.us-west-2.compute.amazonaws.com) (Quit: http://quassel-irc.org - Chat comfortably. Anywhere.)
2020-10-27 21:27:27 <davean> Monads are the definition of terrible, clearly.
2020-10-27 21:27:28 <monsterchrom> (basically the monad for Hoare triples)
2020-10-27 21:27:34 × nbloomf quits (~nbloomf@2600:1700:ad14:3020:4998:5831:a85a:ec6f) (Quit: My MacBook has gone to sleep. ZZZzzz…)
2020-10-27 21:27:35 teardown joins (~user@gateway/tor-sasl/mrush)
2020-10-27 21:27:37 bob_twinkles joins (~quassel@ec2-52-37-66-13.us-west-2.compute.amazonaws.com)

All times are in UTC.