This is pretty much how it works: in some cases, you need to port from one protocol to another, or to a different system altogether.
I’m pretty sure everyone has settled by now. Personally, I hate systemd: it’s slow, relatively resource-intensive, and poorly designed in many respects.
But as an init and service manager, it’s still the best. Though I do have to say dinit gets pretty close for me now.
I personally use Arch on my desktop and Artix on my laptop. I want systemd to die just as much as the next systemd hater, but unfortunately I don’t believe we have anything better yet.
That’s just the thing: this is, again, more fragmentation. Some compositors support always-on-top, some don’t. You choose protocol X for your app, and now your app works great on Sway but not on KDE or GNOME, or it works great on GNOME but not on KDE or Sway, and so on. As an app developer, the situation is a bloody joke. My current stance is “just use XWayland, because Wayland will never be suitable”, and thankfully, with both COSMIC and KDE supporting “don’t scale XWayland”, this seems to work well.
EDIT: they also deviate enough from the upstream protocols that this can’t really be considered an “experimental branch”.
For example: https://github.com/misyltoad/frog-protocols/blob/main/frog-protocols/frog-color-management-v1.xml vs https://gitlab.freedesktop.org/wayland/wayland-protocols/-/merge_requests/14/diffs
Yay, another set of protocols that will just lead to more and more fragmentation.
You do acknowledge one issue with Wayland, probably the biggest one, but then fail to acknowledge the second biggest issue with Wayland: fragmentation.
Solve one issue by making another issue worse.
Yeah, that could indeed happen, I suppose; I didn’t think of that. Though I wonder whether, because of EME, an alternative DRM solution could be viably implemented.
This has been a bit of a meme, but if you wanted to read XL as “extra large”, it could refer to the max resolution, which is far greater. I’ve seen people refer to it as “extra long-term”, but I think the real reason is they just wanted to fuck with us.
Like what? I can kinda understand them not cooperating, but how on earth could they lock them out of features?
Ehh… not really. The amount of data you can collect by snooping on LLM traffic is going to far outweigh the costs of running the LLMs.
I don’t even think this is the case; Google does a lot, pretty much everywhere. One example: one of the things they are pushing for is locally run AI (Gemini, Stable Diffusion, etc.) running on your GPU via WebGPU instead of needing cloud services, which is obviously privacy-friendly for a myriad of reasons. In fact, we now have multiple implementations of LLMs that run locally in the browser on WebGPU, and even a Stable Diffusion implementation (I never got that one to work, though, since my beefiest GPU is an Arc A380 with 6 GB of VRAM).
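For the curious, here’s a minimal sketch of what “runs on your GPU via WebGPU” means in practice. The `navigator.gpu` calls are the real browser WebGPU API; the helper function and its name are just mine for illustration, and the actual model runtime (shaders, weights, tokenizer) is way out of scope here:

```ts
// Minimal sketch: detect WebGPU and acquire a GPU device in the browser.
// An in-browser LLM or Stable Diffusion port builds on this entry point.
async function getWebGpuDevice(): Promise<GPUDevice | null> {
  // navigator.gpu only exists in browsers that ship WebGPU.
  if (!("gpu" in navigator)) {
    console.warn("WebGPU not supported in this browser");
    return null;
  }

  // Ask for a physical adapter (e.g. your Arc A380), then a logical device.
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) {
    console.warn("No suitable GPU adapter found");
    return null;
  }
  return adapter.requestDevice();
}

getWebGpuDevice().then((device) => {
  if (device) {
    // From here, a model runtime would allocate buffers and dispatch compute
    // shaders on `device`, all locally, with nothing sent to a cloud service.
    console.log("GPU device ready for local inference");
  }
});
```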
They do other stuff too, but with the recent AI craze, I think this is probably the most relevant example.
This is from the Google research team; they contribute a LOT to many FOSS projects. Google is not a monolith: each team is made up of often very different folks, with very different goals.
I remember testing Parrot a few years ago; it was quite nice back when I tested it. It had some real cringe marketing at the time, way worse than it has now at a quick glance. That being said, it had some really good out-of-the-box configs for security stuff and some neat tools. Wouldn’t mind trying it again sometime when I find the time.
Color management is actually super hard to do, so making sure it’s done right is very important; this is one of the few times it actually makes sense. I mean, just take a look at Windows: it still looks like shit over there.
For the semantically inquisitive folk.
It’s worth noting that if you are using this on an ARM device, this isn’t a “virtualization VM” anymore, since you are using the emulation backend, so it’s far closer to a traditional emulator than anything else.
While the term virtual machine is extremely poorly defined, it could still apply.
Also, TCG (QEMU’s Tiny Code Generator, which translates guest instructions in software instead of using hardware virtualization) is as slow as molasses; it’s a good demo, but not actually usable for much, at least unless you have a super beefy phone.
I’m not defending X11; both Wayland and X11 are trash, it’s just a question of which trash pile you find yourself most comfortable in.
On X11, fractional scaling is more or less just handled by the GUI toolkit. It does suck that you need to set an env var for it (e.g. `QT_SCALE_FACTOR` for Qt, or `GDK_DPI_SCALE` for GTK), but IMO that isn’t too bad.
The multi-monitor stuff does suck for sure, though it’s not an issue for me personally. One thing that is a massive issue for me is X11’s terrible handling of touch, since I use touch screens daily. Wayland compositors are also typically quite a bit faster than X11 + WMs on low-end systems now, too (not to be confused with total resource usage/lightness).
Wayland has a lot going for it, but it also has a lot going against it. Both are terrible. Arcan save us (oh, how a man can dream).
This is actually one thing that doesn’t involve Wayland, as pretty much everyone is using AT-SPI. It’s not great, but it does work.
For one, it’s missing a good chunk of a11y stuff. ActivityWatch requires something to monitor the active window; there is a PR for that, but it still isn’t merged, and this has been an issue for years.
It’s missing protocols that would let an application request privileged status, which is necessary for it to use certain other functionality.
It’s missing protocols to control always-on-top / layers, which are needed for on-screen keyboards (OSKs) to function, plus a couple of other a11y things off the top of my head.
It’s not just a11y either. Window positioning still isn’t merged, which means that if your app opens two “windows”, you cannot currently choose where to open them, or even bind two windows together (the Android emulator does this, for instance).
There is a LOT Wayland is missing. It IS getting better, just at a snail’s pace.
Because Wayland is STILL lacking a lot of things that people need.
It’s the way it’s written: it’s typically hours:minutes.seconds, so 1:23.45 would be 1 hour, 23 minutes, and 45 seconds.
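As a quick illustration, parsing that format is trivial. This is a hypothetical helper (the function name and regex are mine, not from the original discussion), assuming the hours:minutes.seconds layout described above:

```ts
// Hypothetical sketch: parse an "hours:minutes.seconds" string (e.g. "1:23.45")
// into a total number of seconds.
function parseDuration(text: string): number {
  const match = /^(\d+):(\d{1,2})\.(\d{1,2})$/.exec(text.trim());
  if (!match) {
    throw new Error(`Not in hours:minutes.seconds form: ${text}`);
  }
  const [, hours, minutes, seconds] = match;
  return Number(hours) * 3600 + Number(minutes) * 60 + Number(seconds);
}

console.log(parseDuration("1:23.45")); // 5025 seconds
```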
Can’t say I have experienced that. I use a myriad of modern but lower-end systems, and stuff like dinit still uses fewer resources, which in turn is better for the speed and responsiveness of my systems.