Logs: liberachat/#haskell
| 2026-01-23 10:05:10 | <danza> | are you sure that is the right level of abstraction? Wai/warp sound too close to the server for a cache that has to query a database |
| 2026-01-23 10:07:29 | <[exa]> | bwe: can you be more specific on "internal data cache"? (does wai have a cache?) |
| 2026-01-23 10:13:49 | <danza> | data-based contents would be provided at application level, but it's tricky to cache because database contents can change |
| 2026-01-23 10:34:47 | <bwe> | [exa]: I currently load data when the web server starts. It hands the data over to the hyperbole web framework in a Reader context through Effectful. What I want is for it to reload the data every 3 minutes, for example, without restarting the web server altogether. |
| 2026-01-23 10:35:58 | <[exa]> | bwe: you can have an MVar in the Reader that points to the data, and replace it every now and then from a completely independent thread? |
| 2026-01-23 10:35:59 | <bwe> | danza: So, the internal data cache actually lies in the hyperbole framework. But I pull it in only when warp starts. |
| 2026-01-23 10:36:34 | <[exa]> | (I'm not very sure how hyperbole works but if you use Reader, pushing in the MVar shouldn't be a big issue.) |
| 2026-01-23 10:37:16 | <bwe> | [exa]: Well, if I get you right, that is similar to what I thought. "How can I update some thing in a different thread from another (that just sleeps between updates)?" |
| 2026-01-23 10:38:43 | <bwe> | danza: I am quite tolerant of outdated database state, within a range of up to 3 minutes (the update interval of my internal cache). |
| 2026-01-23 10:39:00 | <[exa]> | bwe: yeah MVars are great for that, loading them doesn't cost anything and you can atomically flip to the new state |
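A minimal sketch of the pattern [exa] describes: an MVar created at startup, read by request handlers, and atomically swapped by an independent refresher thread. `loadFromDatabase` is a hypothetical stand-in for the real query; the hyperbole/Effectful wiring is omitted.

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.MVar (MVar, newMVar, readMVar, swapMVar)
import Control.Monad (forever)

-- Hypothetical loader standing in for the real database query.
loadFromDatabase :: IO [String]
loadFromDatabase = pure ["row1", "row2"]

main :: IO ()
main = do
  -- Load once at startup; this MVar is what you would put in the Reader.
  cache <- newMVar =<< loadFromDatabase
  -- Independent refresher thread: sleep, reload, atomically flip to the new state.
  _ <- forkIO $ forever $ do
    threadDelay (180 * 1000 * 1000)   -- 3 minutes, in microseconds
    fresh <- loadFromDatabase
    _ <- swapMVar cache fresh
    pure ()
  -- A request handler only ever takes a cheap read-only snapshot:
  snapshot <- readMVar cache
  print snapshot
```

The handler never holds the MVar for long: `readMVar` takes and immediately puts back, so the refresher's `swapMVar` cannot block it for more than an instant.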
| 2026-01-23 10:40:33 | <danza> | but would they need one MVar per query? Anyway yes, sounds like something better solved in hyperbole |
| 2026-01-23 10:47:20 | <bwe> | ...and I thought data stored in Reader doesn't change (once loaded). |
| 2026-01-23 10:49:11 | <bwe> | Then an MVar is nothing but (changeable) State shared across different threads? Does that mean different binaries? How do they find each other, then? |
| 2026-01-23 10:50:48 | <__monty__> | Threads don't imply different binaries. They don't even imply different processes. Rather the reverse. |
| 2026-01-23 10:54:04 | <mauke> | :t forkIO |
| 2026-01-23 10:54:05 | <lambdabot> | error: [GHC-88464] Variable not in scope: forkIO |
| 2026-01-23 10:54:18 | <[exa]> | bwe: yeah, technically the "variable" reference doesn't change, but you're allowed to rewrite what it's pointing to |
| 2026-01-23 10:57:02 | <bwe> | __monty__: So, when I start the web server, I need to fork from it the runner that updates the MVar. That would work, while a separate binary wouldn't, right? |
| 2026-01-23 11:04:01 | <__monty__> | I'm not sure. mauke seems to suggest using forkIO. |
| 2026-01-23 11:06:19 | <mauke> | if we're just updating a data structure that someone else reads from and no other interaction, wouldn't an IORef suffice? |
| 2026-01-23 11:07:39 | <mauke> | https://hackage-content.haskell.org/package/base-4.22.0.0/docs/Data-IORef.html#v:atomicModifyIORef |
| 2026-01-23 11:08:38 | <__monty__> | Does an MVar have that much more overhead that the footgun factor is worth it? |
| 2026-01-23 11:08:55 | <tomsmeding> | an MVar definitely has much more overhead than an IORef |
| 2026-01-23 11:09:04 | <mauke> | footgun how? |
| 2026-01-23 11:09:19 | <tomsmeding> | it's a lock with an explicit queue attached (a list of threads waiting to take the lock) for fairness |
| 2026-01-23 11:09:33 | <tomsmeding> | atomicModifyIORef is little more than a single CPU instruction (compare-and-swap) |
| 2026-01-23 11:09:49 | <tomsmeding> | whether this is important in the application depends on how often you do this, of course |
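The IORef alternative mauke links can be sketched like this: `atomicModifyIORef'` applies a pure function to the contents as a single atomic step (essentially a compare-and-swap loop), so concurrent updates cannot be lost.

```haskell
import Data.IORef (newIORef, readIORef, atomicModifyIORef')

main :: IO ()
main = do
  ref <- newIORef (0 :: Int)
  -- Atomically apply a pure function; the pair is (new value, result to return).
  -- Unlike an MVar, no thread ever blocks here.
  atomicModifyIORef' ref (\n -> (n + 1, ()))
  readIORef ref >>= print
```

The strict `atomicModifyIORef'` is usually preferred over the lazy `atomicModifyIORef`, since the lazy version can build up a chain of unevaluated thunks inside the IORef.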
| 2026-01-23 11:13:15 | <tomsmeding> | in return for the overhead, an MVar gives you 1. fairness (if you're blocking on the MVar and no one holds the MVar indefinitely, you're guaranteed to get it eventually), 2. the ability to do IO while holding the lock |
| 2026-01-23 11:15:59 | <tomsmeding> | also 3. an MVar can also function as a one-place channel instead of a lock |
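Point 3 above, the MVar as a one-place channel, looks like this in miniature: `putMVar` blocks while the box is full, and `takeMVar` blocks while it is empty, so a producer and a consumer synchronize through it.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

main :: IO ()
main = do
  box <- newEmptyMVar
  -- Producer: putMVar would block if the box were already full,
  -- which is what makes this a one-place channel rather than a lock.
  _ <- forkIO (putMVar box "hello")
  -- Consumer: takeMVar blocks until the producer has put something.
  msg <- takeMVar box
  putStrLn msg
```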
| 2026-01-23 11:18:55 | <danza> | well that sounds saner to me for the goal |
| 2026-01-23 11:29:21 | <__monty__> | mauke: The footgun is thinking you'll be able to just add another IORef later and not run into trouble. |
| 2026-01-23 11:30:26 | <mauke> | applies to MVar, too |
| 2026-01-23 11:31:45 | <__monty__> | So the doc suggesting MVars instead is misleading? |
| 2026-01-23 11:32:46 | <mauke> | well, it only talks about atomicity |
| 2026-01-23 11:32:56 | <mauke> | with MVars you can deadlock instead |
| 2026-01-23 11:33:06 | <int-e> | "Extending the atomicity to multiple IORefs is problematic, so it is recommended that if you need to do anything more complicated then using MVar instead is a good idea." |
| 2026-01-23 11:33:19 | <int-e> | If that's what you mean I don't know how it's misleading. |
| 2026-01-23 11:35:37 | <__monty__> | Well, it suggests you can extend atomicity across multiple, no? So if you can't do that easily without deadlocking it's not a great suggestion. |
| 2026-01-23 11:36:31 | <Axman6> | you just have to be careful about the order you access things |
| 2026-01-23 11:36:36 | <int-e> | You can't atomically update two IORefs at the same time. |
| 2026-01-23 11:36:44 | <Axman6> | IIRC MVar has a consistent Ord instance? |
| 2026-01-23 11:37:42 | <Axman6> | IORefs with atomicModifyIORef are amazing, if you can store all your state in pure data structures that can always be changed without doing any other IO. if you can't guarantee those properties, other options are much safer |
| 2026-01-23 11:37:47 | <int-e> | (But you can have a single IORef that stores a tuple or a record.) |
| 2026-01-23 11:39:04 | <Axman6> | I've been reading a lot of the Cardano code recently, and they make a lot of use of STM, as well as pure data structures. |
| 2026-01-23 11:39:05 | <mauke> | or a Map |
| 2026-01-23 11:39:57 | <Axman6> | they can store an arbitrarily complicated record too, and atomicModifyIORef can be used to update as much or as little of that structure as you like |
| 2026-01-23 11:40:19 | <mauke> | I had a server that would answer client queries from a central data structure (a Map stored in an IORef) |
| 2026-01-23 11:40:41 | <mauke> | there was a writer thread that would occasionally update the structure by just overwriting the Map |
| 2026-01-23 11:41:08 | <mauke> | worked great |
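A sketch of the server pattern mauke describes, assuming a `Map` held in an IORef: readers take lock-free snapshots with `readIORef`, and a writer thread replaces the whole Map in one atomic step with `atomicWriteIORef`. Because the Map is an immutable persistent structure, readers holding the old snapshot are unaffected by the swap.

```haskell
import Data.IORef (newIORef, readIORef, atomicWriteIORef)
import qualified Data.Map.Strict as Map

main :: IO ()
main = do
  ref <- newIORef (Map.fromList [("a", 1 :: Int)])
  -- Reader side: a cheap, lock-free snapshot of the current Map.
  m0 <- readIORef ref
  print (Map.lookup "a" m0)
  -- Writer side: overwrite the whole Map in a single atomic step.
  atomicWriteIORef ref (Map.fromList [("a", 2), ("b", 3)])
  m1 <- readIORef ref
  print (Map.lookup "b" m1)
```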
| 2026-01-23 11:41:34 | <Axman6> | I have also done that - it needed to serve images of some live-ish data, and generating the images was pretty slow, so with each new piece of data it'd just make new PNGs and update the map in the IORef. Meant all the HTTP requests were instant |
| 2026-01-23 11:43:10 | <danz20169> | seems a solution suited to server-side data vis |
| 2026-01-23 11:44:39 | <danz20169> | did you use any library to encode PNGs as types? |
| 2026-01-23 11:45:34 | <danz20169> | maybe just passed them as black boxes |
| 2026-01-23 11:51:27 | <tomsmeding> | __monty__: while yes, adding another IORef later means you can't update both in the same atomic transaction, I'm not sure what part of the API would lead one to assume that you can |
| 2026-01-23 11:52:03 | <tomsmeding> | if anything, having to order locks to avoid deadlock is a more insidious risk that you may not see coming if you haven't studied concurrent programming |
| 2026-01-23 11:52:41 | <__monty__> | You may be right. |
| 2026-01-23 11:53:01 | <tomsmeding> | (for completeness: you have two locks, A and B, and two threads, 1 and 2. 1 locks A and then B, and 2 locks B and then A. If the two executions interleave, 1 has A locked and 2 has B locked and they both wait on the other, indefinitely) |
| 2026-01-23 11:53:23 | <tomsmeding> | (if you only ever lock such locks in a particular global order, this problem cannot arise) |
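The global-order discipline tomsmeding mentions can be captured in a small helper: if every thread that needs both locks goes through this function (always passing the locks in the same agreed order, say A before B), the circular-wait scenario above cannot arise. This is an illustrative sketch, not a library API.

```haskell
import Control.Concurrent.MVar (MVar, newMVar, withMVar)

-- Acquire two MVar locks, always in the same global order.
-- Two threads both using this helper cannot deadlock on each other,
-- because neither can hold the second lock while waiting for the first.
withBoth :: MVar a -> MVar b -> (a -> b -> IO c) -> IO c
withBoth lockA lockB act =
  withMVar lockA $ \a ->
    withMVar lockB $ \b ->
      act a b

main :: IO ()
main = do
  a <- newMVar (1 :: Int)
  b <- newMVar (2 :: Int)
  r <- withBoth a b (\x y -> pure (x + y))
  print r
```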
| 2026-01-23 11:53:26 | <int-e> | You can perhaps criticize the IORef docs for not mentioning STM, but the reason for that is probably historical, and you'll find out about STM when you read the MVar docs. |
| 2026-01-23 11:56:14 | <tomsmeding> | and if you are worried about the performance implications of using an MVar over an IORef, you should also be worried about STM, as it has similar (?) overhead, and also has starvation issues if you have very long and also very short transactions that update the same TVars |
All times are in UTC.