
Logs: liberachat/#haskell

2021-08-18 20:20:55 <dsal> Is it using too much CPU, or too little?
2021-08-18 20:21:10 × merijn quits (~merijn@83-160-49-249.ip.xs4all.nl) (Ping timeout: 240 seconds)
2021-08-18 20:21:42 × burnsidesLlama quits (~burnsides@dhcp168-023.wadham.ox.ac.uk) (Ping timeout: 268 seconds)
2021-08-18 20:21:48 <chisui> dsal: Oh, it uses 100% on both threads it got.
2021-08-18 20:22:07 <dsal> `threadDelay` is often used as an approximation for a solution to a different problem.
2021-08-18 20:23:30 <davean> Yah, you usually want to wait until a specific time has happened, or something
2021-08-18 20:23:44 <chisui> dsal: Yeah, I should probably use something like a `TBQueue`. Unfortunately sdl2 requires the callback to be in `IO`
2021-08-18 20:24:03 <dsal> Sounds like too much CPU. :) I'd think this wouldn't be particularly expensive. Profiling might help you understand where all the CPU is going, but in general, I don't think any work should be done if there's nothing that needs computation.
2021-08-18 20:24:04 <dsal> :t atomically
2021-08-18 20:24:05 <lambdabot> error: Variable not in scope: atomically
2021-08-18 20:24:09 <dsal> boo
2021-08-18 20:24:17 <dsal> % :t atomically
2021-08-18 20:24:17 <yahb> dsal: STM a -> IO a
2021-08-18 20:24:57 <chisui> is that safe to use in a multithreaded environment?
2021-08-18 20:25:07 <davean> of course
2021-08-18 20:25:09 <davean> that's the point
2021-08-18 20:25:48 <dsal> Most other languages with multithreading support wish they could do this. :)
2021-08-18 20:25:59 <tomsmeding> chisui: the thing about STM is that all the STM actions within a single 'atomically' block should happen, well, atomically; how it does that in practice is try running it, and if it detects another thread has done something simultaneously, it rolls back and tries again
2021-08-18 20:26:40 × raehik quits (~raehik@cpc95906-rdng25-2-0-cust156.15-3.cable.virginm.net) (Ping timeout: 240 seconds)
2021-08-18 20:27:04 <tomsmeding> where a mutex is "pessimistic concurrency", i.e. always paying the cost of locking expecting that races are going to happen often, STM is "optimistic concurrency", i.e. just going for it and paying when there actually ended up being a race
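[Editor's note] The "just go for it, retry on conflict" behaviour tomsmeding describes can be seen in a minimal sketch (the name `runCounters` and the thread counts are mine, not from the discussion): two threads update one shared `TVar`, and `atomically` guarantees no lost updates.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.STM
import Control.Monad (forM_, replicateM_)

-- Two threads hammer one shared counter; 'atomically' guarantees no
-- lost updates, with any conflict retries handled by the runtime.
runCounters :: IO Int
runCounters = do
  counter <- newTVarIO (0 :: Int)
  done    <- newTVarIO (0 :: Int)
  forM_ [1, 2 :: Int] $ \_ -> forkIO $ do
    replicateM_ 10000 (atomically (modifyTVar' counter (+ 1)))
    atomically (modifyTVar' done (+ 1))
  -- 'check' retries the transaction until both workers have finished,
  -- blocking this thread without spinning
  atomically $ readTVar done >>= check . (== 2)
  readTVarIO counter

main :: IO ()
main = runCounters >>= print  -- 20000, every run
```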
2021-08-18 20:27:32 <davean> tomsmeding: Uh, that's an implementation detail that can vary
2021-08-18 20:27:37 <tomsmeding> I know
2021-08-18 20:27:56 <tomsmeding> but I think it's helpful in getting an intuitive understanding about what's even going on here, and how it _can_ even be implemented
2021-08-18 20:28:01 × favonia quits (~favonia@user/favonia) (Ping timeout: 252 seconds)
2021-08-18 20:28:13 <davean> I mean it can be implemented as a mutex too
2021-08-18 20:28:14 <tomsmeding> I often find that when I have no idea how something could even be implemented, I have no feeling for how to work with it
2021-08-18 20:28:21 <tomsmeding> okay fair
2021-08-18 20:28:35 <tomsmeding> then read it as a bit of evangelising about why STM is cool :)
2021-08-18 20:28:38 <stevenxl> Can someone point out to me what is wrong with this stack.yaml file:
2021-08-18 20:28:40 <davean> Ok :)
2021-08-18 20:28:45 <tomsmeding> or, why ghc's implementation of it is cool
2021-08-18 20:28:48 <stevenxl> https://www.irccloud.com/pastebin/nUoRzgeh/
2021-08-18 20:28:58 <davean> tomsmeding: It's not the coolest!
2021-08-18 20:29:10 <stevenxl> Gives me a warning "Unrecognized field in Snapshot: extra-deps".
2021-08-18 20:29:12 <davean> tomsmeding: people have played with ones that use HTM, ones that have guaranteed progress and fairness ...
2021-08-18 20:29:28 <davean> well, I don't know that the HTM ever happened
2021-08-18 20:30:41 <stevenxl> https://www.irccloud.com/pastebin/066TZS1X/
2021-08-18 20:30:51 <stevenxl> Even that simple file gives me an error, and that is supposedly the default.
2021-08-18 20:30:58 <chisui> Ok, I'll change to stm. Somehow I'm still not convinced that this will fix the issue
2021-08-18 20:31:53 <tomsmeding> stevenxl: can you give the full command you're invoking, and the full error?
2021-08-18 20:32:19 <stevenxl> Hi tomsmeding - thank you for the offer to help. Apparently, a custom snapshot doesn't use extra-deps; they go under packages.
2021-08-18 20:33:02 <tomsmeding> oh this is not a stack.yaml of a project?
2021-08-18 20:33:47 <dsal> chisui: My guess is that the issue is that you're burning all the cores and putting in time delays to try to artificially slow stuff down. Just organize information exchanges with queues and have a thread keeping it populated and let the other one read from it when it needs it. Shouldn't be using much CPU.
2021-08-18 20:34:08 <stevenxl> tomsmeding: I completely missed the fact that we have a `stack.yaml` which points to a resolver.
2021-08-18 20:34:24 ystael joins (~ystael@user/ystael)
2021-08-18 20:35:05 <chisui> dsal: It would be great if there were a streaming library that supports this.
2021-08-18 20:36:04 <dsal> Asking for data from IO is just `atomically . readTBQueue`
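[Editor's note] The pattern dsal is describing — one thread keeping a queue populated, the consumer reading via `atomically . readTBQueue` — can be sketched minimally like this (queue capacity and item counts are illustrative choices, not from the discussion):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.STM
import Control.Monad (forM_, replicateM)

-- Producer fills a bounded queue; consumer blocks (without spinning,
-- so no busy CPU) until items are available.
main :: IO ()
main = do
  q <- newTBQueueIO 64          -- the bound caps memory if the producer races ahead
  _ <- forkIO $ forM_ [1 .. 10 :: Int] $ \x ->
         atomically (writeTBQueue q x)   -- blocks when the queue is full
  xs <- replicateM 10 (atomically (readTBQueue q))
  print (sum xs)  -- 55
```

Because both ends block inside `atomically`, neither thread needs a `threadDelay` to avoid burning CPU while waiting.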
2021-08-18 20:36:16 × machinedgod quits (~machinedg@135-23-192-217.cpe.pppoe.ca) (Ping timeout: 252 seconds)
2021-08-18 20:36:23 × gehmehgeh quits (~user@user/gehmehgeh) (Quit: Leaving)
2021-08-18 20:36:35 <adamCS> chisui: Maybe streamly? (https://hackage.haskell.org/package/streamly)
2021-08-18 20:37:05 <chisui> should I use a bare TBQueue or chunk the data further?
2021-08-18 20:37:06 jolly joins (~jolly@208.180.97.158)
2021-08-18 20:37:21 <chisui> adamCS: thanks, I'll take a look
2021-08-18 20:37:39 <davean> chisui: lots of streaming libraries can do this sort of thing
2021-08-18 20:38:07 hexfive joins (~eric@50.35.83.177)
2021-08-18 20:38:22 <dsal> chisui: What you decide `a` should be there is up to you. Easy enough to change.
2021-08-18 20:40:07 × cheater quits (~Username@user/cheater) (Ping timeout: 252 seconds)
2021-08-18 20:40:16 <tomsmeding> chisui: how many things are you planning on pushing on that queue per second
2021-08-18 20:40:33 <tomsmeding> if that's 44100 things, then probably chunk that a bit :)
2021-08-18 20:40:44 cheater joins (~Username@user/cheater)
2021-08-18 20:41:05 <chisui> tomsmeding: It's currently running at a sample rate of 48k ;)
2021-08-18 20:41:23 <davean> chisui: haha, whats your latency requirement?
2021-08-18 20:41:31 <davean> chisui: use that to calculate chunk size
2021-08-18 20:41:37 <davean> but 48kps is nothing
2021-08-18 20:41:53 <davean> I do that many web requests in a thread
2021-08-18 20:43:05 <tomsmeding> TBQueue is the classic two-lists implementation of a queue, so it will do a list reversal of roughly the whole queue every once in a while
2021-08-18 20:43:27 <tomsmeding> while throughput is fine, that's probably not great for latency, depending on exactly how large the queue will be
2021-08-18 20:43:28 azeem joins (~azeem@dynamic-adsl-94-34-33-6.clienti.tiscali.it)
2021-08-18 20:43:36 <davean> chisui: I'd probably make the chunks half the size of your latency requirement
2021-08-18 20:44:10 <davean> as a first default
2021-08-18 20:44:22 <chisui> I think that sdl always requests a fixed size chunk. I'll just use that
2021-08-18 20:45:01 <davean> chisui: last I knew it was configurable
2021-08-18 20:45:17 × pompez quits (~martin@user/pompez) (Quit: WeeChat 3.2)
2021-08-18 20:45:23 <chisui> yeah, but once it's configured it doesn't change randomly right?
2021-08-18 20:45:41 wroathe joins (~wroathe@c-68-54-25-135.hsd1.mn.comcast.net)
2021-08-18 20:46:35 <davean> chisui: right, but you need to pick the size of that based on your latency requirement, so it's just moving the problem
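[Editor's note] davean's rule of thumb — chunks about half the latency budget — is simple arithmetic. A hypothetical helper (the name `chunkSize` is mine) makes the calculation concrete:

```haskell
-- Hypothetical helper, not from the discussion: samples per chunk,
-- following the rule of thumb of half the latency budget.
chunkSize :: Double  -- sample rate, Hz
          -> Double  -- latency budget, seconds
          -> Int
chunkSize rate latency = round (rate * latency / 2)

main :: IO ()
main = print (chunkSize 48000 0.010)  -- 240 samples for a 10 ms budget at 48 kHz
```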
2021-08-18 20:55:37 × wroathe quits (~wroathe@c-68-54-25-135.hsd1.mn.comcast.net) (Ping timeout: 268 seconds)
2021-08-18 20:57:00 × fvr quits (uid503686@id-503686.highgate.irccloud.com) (Quit: Connection closed for inactivity)
2021-08-18 21:01:25 burnsidesLlama joins (~burnsides@dhcp168-023.wadham.ox.ac.uk)
2021-08-18 21:01:42 × nschoe quits (~quassel@2a01:e0a:8e:a190:f185:3872:6a89:c741) (Ping timeout: 245 seconds)
2021-08-18 21:01:56 × eggplantade quits (~Eggplanta@108-201-191-115.lightspeed.sntcca.sbcglobal.net) (Remote host closed the connection)
2021-08-18 21:03:39 acidjnk_new joins (~acidjnk@p200300d0c72b952850c7a959aba8feb6.dip0.t-ipconnect.de)
2021-08-18 21:03:52 wrengr_away is now known as wrengr
2021-08-18 21:05:40 × burnsidesLlama quits (~burnsides@dhcp168-023.wadham.ox.ac.uk) (Ping timeout: 240 seconds)
2021-08-18 21:08:19 × __monty__ quits (~toonn@user/toonn) (Quit: leaving)
2021-08-18 21:08:50 × ec quits (~ec@gateway/tor-sasl/ec) (Ping timeout: 244 seconds)
2021-08-18 21:09:31 favonia joins (~favonia@user/favonia)
2021-08-18 21:10:22 <chisui> Thanks everyone. Using TBQueue together with sensibly sized chunks worked wonders.
2021-08-18 21:11:56 <tomsmeding> 🎉
2021-08-18 21:14:31 × Pickchea quits (~private@user/pickchea) (Quit: Leaving)
2021-08-18 21:15:30 <monochrom> :)
2021-08-18 21:15:58 × hexfive quits (~eric@50.35.83.177) (Quit: WeeChat 3.0)
2021-08-18 21:17:35 × chisui quits (~chisui@200116b8681e48004d4a4305e410a0e6.dip.versatel-1u1.de) (Quit: Client closed)
2021-08-18 21:17:46 <monochrom> Yeah in your case you just go "atomically (enqueue this)" and "atomically (dequeue that)" and that's your IO level.
2021-08-18 21:18:00 chisui joins (~chisui@200116b8681e48004d4a4305e410a0e6.dip.versatel-1u1.de)
2021-08-18 21:18:54 <monochrom> The STM level needs to be more fine-grained because "enqueue this" for example is multiple lines of STM code.
2021-08-18 21:19:10 <monochrom> or more precisely, multiple operations.
2021-08-18 21:20:10 <chisui> monochrom: are you talking about the current version I pushed?
2021-08-18 21:20:11 <monochrom> And other people will also have use cases requiring "atomically (enqueue this and dequeue something else)".
2021-08-18 21:20:24 <monochrom> I think yes. I haven't checked.
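[Editor's note] monochrom's "atomically (enqueue this and dequeue something else)" composes at the STM level. A minimal sketch (the helper name `moveOne` is mine): dequeue from one queue and enqueue to another as one atomic step, so no other thread can ever observe the item "in flight" between the two.

```haskell
import Control.Concurrent.STM

-- Two primitive STM operations composed into one transaction:
-- either both the read and the write happen, or neither does.
moveOne :: TBQueue a -> TBQueue a -> STM ()
moveOne from to = readTBQueue from >>= writeTBQueue to

main :: IO ()
main = do
  a <- newTBQueueIO 4
  b <- newTBQueueIO 4
  atomically (writeTBQueue a "hello")
  atomically (moveOne a b)
  atomically (readTBQueue b) >>= putStrLn  -- hello
```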

All times are in UTC.