
Logs: liberachat/#haskell

2025-10-30 18:18:34 L29Ah joins (~L29Ah@wikipedia/L29Ah)
2025-10-30 18:20:49 merijn joins (~merijn@host-vr.cgnat-g.v4.dfn.nl)
2025-10-30 18:25:40 Googulator55 joins (~Googulato@2a01-036d-0106-03fa-9dbb-a0af-2124-a319.pool6.digikabel.hu)
2025-10-30 18:25:40 × merijn quits (~merijn@host-vr.cgnat-g.v4.dfn.nl) (Ping timeout: 256 seconds)
2025-10-30 18:25:46 × Googulator54 quits (~Googulato@2a01-036d-0106-03fa-9dbb-a0af-2124-a319.pool6.digikabel.hu) (Quit: Client closed)
2025-10-30 18:31:47 poscat joins (~poscat@user/poscat)
2025-10-30 18:33:33 × poscat0x04 quits (~poscat@user/poscat) (Ping timeout: 252 seconds)
2025-10-30 18:36:37 merijn joins (~merijn@host-vr.cgnat-g.v4.dfn.nl)
2025-10-30 18:40:56 × L29Ah quits (~L29Ah@wikipedia/L29Ah) (Read error: Connection timed out)
2025-10-30 18:41:25 × merijn quits (~merijn@host-vr.cgnat-g.v4.dfn.nl) (Ping timeout: 264 seconds)
2025-10-30 18:43:56 × pr1sm quits (~pr1sm@24.91.163.31) (Remote host closed the connection)
2025-10-30 18:46:37 Tuplanolla joins (~Tuplanoll@91-159-187-167.elisa-laajakaista.fi)
2025-10-30 18:47:50 haltingsolver joins (~cmo@2604:3d09:207f:8000::d1dc)
2025-10-30 18:52:01 merijn joins (~merijn@host-vr.cgnat-g.v4.dfn.nl)
2025-10-30 18:55:08 tromp joins (~textual@2001:1c00:3487:1b00:5978:a504:f2fd:26f)
2025-10-30 18:55:50 × tromp quits (~textual@2001:1c00:3487:1b00:5978:a504:f2fd:26f) (Client Quit)
2025-10-30 18:56:58 ttybitnik joins (~ttybitnik@user/wolper)
2025-10-30 18:57:04 tromp joins (~textual@2001:1c00:3487:1b00:5978:a504:f2fd:26f)
2025-10-30 18:58:50 × merijn quits (~merijn@host-vr.cgnat-g.v4.dfn.nl) (Ping timeout: 256 seconds)
2025-10-30 19:07:34 L29Ah joins (~L29Ah@wikipedia/L29Ah)
2025-10-30 19:08:13 × synchromesh quits (~john@2406:5a00:2412:2c00:a151:32b5:2959:c671) (Read error: Connection reset by peer)
2025-10-30 19:08:16 × L29Ah quits (~L29Ah@wikipedia/L29Ah) (Read error: Connection reset by peer)
2025-10-30 19:09:26 synchromesh joins (~john@2406:5a00:2412:2c00:a151:32b5:2959:c671)
2025-10-30 19:10:03 merijn joins (~merijn@host-vr.cgnat-g.v4.dfn.nl)
2025-10-30 19:10:07 L29Ah joins (~L29Ah@wikipedia/L29Ah)
2025-10-30 19:14:41 × merijn quits (~merijn@host-vr.cgnat-g.v4.dfn.nl) (Ping timeout: 244 seconds)
2025-10-30 19:17:50 rvalue- joins (~rvalue@about/hackers/rvalue)
2025-10-30 19:18:55 × rvalue quits (~rvalue@about/hackers/rvalue) (Ping timeout: 264 seconds)
2025-10-30 19:18:58 × tromp quits (~textual@2001:1c00:3487:1b00:5978:a504:f2fd:26f) (Quit: My iMac has gone to sleep. ZZZzzz…)
2025-10-30 19:20:22 × haltingsolver quits (~cmo@2604:3d09:207f:8000::d1dc) (Ping timeout: 256 seconds)
2025-10-30 19:23:38 merijn joins (~merijn@host-vr.cgnat-g.v4.dfn.nl)
2025-10-30 19:27:16 rvalue- is now known as rvalue
2025-10-30 19:28:49 × merijn quits (~merijn@host-vr.cgnat-g.v4.dfn.nl) (Ping timeout: 264 seconds)
2025-10-30 19:35:34 × opencircuit_ quits (~quassel@user/opencircuit) (Remote host closed the connection)
2025-10-30 19:36:44 opencircuit joins (~quassel@user/opencircuit)
2025-10-30 19:39:28 merijn joins (~merijn@host-vr.cgnat-g.v4.dfn.nl)
2025-10-30 19:44:44 × merijn quits (~merijn@host-vr.cgnat-g.v4.dfn.nl) (Ping timeout: 256 seconds)
2025-10-30 19:45:25 segfaultfizzbuzz joins (~segfaultf@23-93-74-222.fiber.dynamic.sonic.net)
2025-10-30 19:46:04 × Sgeo quits (~Sgeo@user/sgeo) (Read error: Connection reset by peer)
2025-10-30 19:48:52 Sgeo joins (~Sgeo@user/sgeo)
2025-10-30 19:55:15 merijn joins (~merijn@host-vr.cgnat-g.v4.dfn.nl)
2025-10-30 19:56:33 peterbecich joins (~Thunderbi@172.222.148.214)
2025-10-30 19:57:01 <segfaultfizzbuzz> what are the norms these days regarding using "ai" to code among good quality professional programmers. is it fine to use or do i need to type everything into my keyboard myself
2025-10-30 19:59:08 <EvanR> you will be ridiculed for using your keyboard at all. Voice input to an LLM is the only way to signal how up to date you are
2025-10-30 20:00:06 <segfaultfizzbuzz> hahaha... but seriously...?
2025-10-30 20:00:08 <EvanR> in the same way that handwriting is not a thing anymore
2025-10-30 20:00:10 <haskellbridge> <sm> it varies a lot
2025-10-30 20:00:14 <haskellbridge> <loonycyborg> Direct neural uplink better
2025-10-30 20:00:19 <geekosaur> ai gets haskell very wrong still
2025-10-30 20:00:22 × merijn quits (~merijn@host-vr.cgnat-g.v4.dfn.nl) (Ping timeout: 260 seconds)
2025-10-30 20:00:27 <segfaultfizzbuzz> lol yeah sorry i forgot to mention using neuralink while using tesla self driving
2025-10-30 20:00:31 <geekosaur> even js needs to be checked
2025-10-30 20:00:36 <segfaultfizzbuzz> geekosaur: oh? nice :-)
2025-10-30 20:00:43 <haskellbridge> <sm> I don't think you can generalise, it depends what you're doing
2025-10-30 20:00:44 <segfaultfizzbuzz> js is awful,... rust is like,... not bad i find
2025-10-30 20:01:02 <EvanR> I use it for C and it still needs to be checked, obviously
2025-10-30 20:01:22 <geekosaur> the question is what it was trained on. if you have a lot of blog posts by people who're still learning the language, the code the AI will produce will mostly be at their level
2025-10-30 20:01:57 <haskellbridge> <sm> yes also the model, the ai-based coding tool, the context, the prompts all matter
2025-10-30 20:01:57 <segfaultfizzbuzz> geekosaur: there also is how you prompt,... if your language is better you get better results i think
2025-10-30 20:02:04 <geekosaur> keep in mind that current AI still doesn't understand anything; it's a Markov bot with a smarter notion of how language fits together
2025-10-30 20:02:24 <haskellbridge> <sm> the coding tools and chat bots are no longer just that
2025-10-30 20:02:30 <segfaultfizzbuzz> and then there is architecting your application so that you can kind of limit the damage that can happen, but i would imagine that's the same as structuring code for writing on a team
2025-10-30 20:02:34 <geekosaur> which means it's only as good as the Markov chains it can build from its training data
2025-10-30 20:02:47 <segfaultfizzbuzz> hahaha markov chains :-) you might not be wrong there
2025-10-30 20:03:10 <EvanR> if you start pasting large amounts of code generated by the LLM into the project without understanding any of it, well, it will start to break down, and there's plenty of memes about where this leads
2025-10-30 20:03:23 <geekosaur> seriously, it explains a lot of things, including why LLMs in the first place
2025-10-30 20:03:35 <segfaultfizzbuzz> geekosaur: why LLMs in the first place? explain?
2025-10-30 20:03:43 <geekosaur> and makes a lot of sense if you think about it
2025-10-30 20:04:11 <segfaultfizzbuzz> EvanR: yeah i would say that "without understanding any of it" isn't what i do, but it can save me a lot of round trips back and forth from documentation and also it can sometimes stitch things nicely (type conversions, etc)
2025-10-30 20:04:17 <geekosaur> why large language models are what led to something that comes across as "actual AI"
2025-10-30 20:04:54 <segfaultfizzbuzz> roughly speaking if you can write the type signature of your function then ai seems like it can do decently at filling in the rest 50% to 75% of the time...
2025-10-30 20:05:00 <EvanR> lol
2025-10-30 20:05:01 <segfaultfizzbuzz> or at least that's what i find
2025-10-30 20:05:13 <EvanR> no
2025-10-30 20:05:21 <segfaultfizzbuzz> EvanR: oh?
2025-10-30 20:05:44 <EvanR> the type signature is usually not enough to judge what you want it to do
2025-10-30 20:05:52 <EvanR> maybe you mean the carefully chosen name of the function
2025-10-30 20:05:56 <segfaultfizzbuzz> type signature plus a description/comment
2025-10-30 20:06:17 <segfaultfizzbuzz> yeah sorry, not type signature in isolation, i meant type signature as a hard restriction on the validity of the LLM output, given a reasonable prompt
2025-10-30 20:07:35 <EvanR> yeah, trying its own output against the type checker before producing anything, and retrying until it works, explains a lot of the performance I've seen
2025-10-30 20:07:45 <EvanR> but I'm not sure if that's a thing
2025-10-30 20:07:53 <segfaultfizzbuzz> EvanR: can you elaborate on "a lot of the performance I've seen"
2025-10-30 20:08:14 <EvanR> on a particular product preview I was working with
2025-10-30 20:08:19 <EvanR> integrated into the IDE
2025-10-30 20:08:30 <segfaultfizzbuzz> by performance do you mean like memory and cpu usage?
2025-10-30 20:08:32 <EvanR> the haskell code generator started slowing down to unusable after a while
2025-10-30 20:09:14 <EvanR> I'm sure cpu usage and memory usage was high but no
2025-10-30 20:09:33 <EvanR> code per minute
2025-10-30 20:10:00 <fgidim> haskell code into an LLM? :O
2025-10-30 20:10:07 <segfaultfizzbuzz> oh you think people are calling some centralized resource over the web at a high rate...?
2025-10-30 20:10:12 <EvanR> into, and out of
2025-10-30 20:10:25 <segfaultfizzbuzz> what is "the haskell code generator" otherwise?
2025-10-30 20:10:27 <geekosaur> I hear of a lot of people using Claude…
2025-10-30 20:10:34 <EvanR> I'm not sure how much this product was dependent on remote resources
2025-10-30 20:10:37 <geekosaur> Or Copilot
2025-10-30 20:10:38 <segfaultfizzbuzz> geekosaur: yep, i do... more specifically claude code
2025-10-30 20:10:43 <EvanR> it was more like copilot
2025-10-30 20:10:54 <EvanR> but some proprietary thing
2025-10-30 20:10:54 <segfaultfizzbuzz> copilot used to be pretty bad, it improved more recently... claude code is better
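[Editor's note: the "type signature as a hard restriction" exchange above (20:04–20:05) can be sketched in Haskell. This is an illustrative example, not from the log; the function names are invented. It shows why both sides have a point: a fully polymorphic signature constrains the implementation heavily via parametricity, while a concrete signature admits many type-correct but unintended implementations, so type-checking LLM output catches some errors but cannot pin down intent.]

```haskell
-- A sufficiently polymorphic signature leaves little room: by
-- parametricity, any total implementation of this type must apply
-- the function to elements of the list (it cannot invent b values).
applyEach :: (a -> b) -> [a] -> [b]
applyEach _ []     = []
applyEach f (x:xs) = f x : applyEach f xs

-- A concrete signature constrains far less: both of these type-check,
-- so the type checker alone cannot distinguish the intended code from
-- a plausible-but-wrong guess by a code generator.
double, square :: Int -> Int
double n = n * 2
square n = n * n

main :: IO ()
main = do
  print (applyEach (+1) [1, 2, 3 :: Int])  -- [2,3,4]
  print (double 3)                         -- 6
  print (square 3)                         -- 9
```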
2025-10-30 20:11:03 merijn joins (~merijn@host-vr.cgnat-g.v4.dfn.nl)

All times are in UTC.