

Which areas of Linux would benefit most from further standardization?


The diversity of Linux distributions is one of its strengths, but it can also be challenging for app and game development. Where do we need more standards? For example, package management, graphics APIs, or other aspects of the ecosystem? Would such increased standards encourage broader adoption of the Linux ecosystem by developers?
in reply to Einar

I'd say games. If that really takes off, Linux would replace Windows and all other standards would follow.
in reply to Mihies

That already happened though. Tens of thousands of games on Steam can be played by hitting the install and then the play button. Only a few "competitive multiplayer" holdouts with rootkits and an irrational hatred of Linux don't work.
in reply to Overspark

Yep. Two solid years of steady gaming on various Linux distributions. No issues aside from no more pubg, no more valorant. Oh wait, that’s not an issue at all. Fuck their rootkits.
in reply to Fecundpossum

Two solid years of steady gaming on various Linux distributions.


And in some cases, an even better experience than on Windows (e.g. better compatibility with older games, or higher FPS alongside smoother gameplay).

Tbh, about a year ago I checked the price difference between various laptops with Windows preinstalled and without any OS. The difference wasn't actually a flat amount of money, it was more like +10% (a $500 laptop cost ~$50 more for Windows; a $1000 laptop, ~$100 more).

So given the current state of gaming on Linux (and the overall experience), I wouldn't pay that 10% to play games with built-in rootkits, but rather spend it on other things.

in reply to Overspark

with rootkits


These are eventually going to be blocked on Windows. Microsoft is making changes to what's allowed to run in the kernel after the CrowdStrike issue last year.

in reply to Mihies

Lenovo and HP have recently announced new non-windows gaming handhelds. It is getting better.


in reply to Mihies

This has always been the key. Amazing to me that not many seem to take it seriously.
in reply to Mihies

Have you tried recently? We've been pretty much at parity for years now. Almost every game that doesn't run is because the devs are choosing to make it that way.
in reply to Einar

At this point, package management is the main differentiating factor between distro (families). Personally, I'm vehemently opposed to erasing those differences.

The "just use flatpak!" crowd is kind of correct when we're talking solely about Linux newcomers, but if you are at all comfortable with light troubleshooting if/when something breaks, each package manager has something unique und useful to offer. Pacman and the AUR a a good example, but personally, you can wring nixpkgs Fron my cold dead hands.

And so you will never get people to agree on one "standard" way of packaging, because doing your own thing is kind of the spirit of open source software.

But even more importantly, this should not matter to developers. It's not really their job to package the software, for reasons including that it's just not reasonable to expect them to cater to all package managers. Let distro maintainers take care of that.

in reply to Einar

in reply to SwingingTheLamp

The term "dependency hell" reminds me of "DLL hell" Windows devs used to refer to. Something must have changed around 2000 because I remember an article announcing, "No more DLL hell." but I don't remember what the change was.
in reply to SwingingTheLamp

I find the Darwin approach to dynamic linking too restrictive. Sometimes there needs to be a new release which is not backwards compatible or you end up with Windows weirdness. It is also too restrictive on volunteer developers giving their time to open source.

At the same time, containerization where we throw every library - and the kitchen sink - at an executable to get it to run does not seem like progress to me. It's like the meme where the dude is standing on a huge horizontal pile of ladders to look over a small wall.

At the moment you can choose to use a distro which follows a particular approach to this problem; one which enthuses its developers, giving some guarantee of long term support. This free market of distros that we have at the moment is ideal in my opinion.

in reply to Einar

in reply to

You'll never get perfect binary compatibility because different distros use different versions of libraries. Consider Debian and Arch which are at the opposite ends of the scale.
in reply to MyNameIsRichard

And yet, ancient Windows binaries will still (mostly) run and macOS allows you to compile for older system version compatibility level to some extent (something glibc alone desperately needs!). This is definitely a solvable problem.

Linus keeps saying “you never break userspace” wrt the kernel, but userspace breaks userspace all the time and all people say is that there’s no other way.
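To make the glibc point concrete, here's a small Python sketch. It uses gnu_get_libc_version(), a real glibc extension, so it only works on glibc-based systems; on musl (e.g. Alpine) there is no libc.so.6 to load, which is itself a demonstration of the point that "Linux" is not one ABI:

```python
import ctypes

# Load the system C library and ask it which glibc version is running.
libc = ctypes.CDLL("libc.so.6")
libc.gnu_get_libc_version.restype = ctypes.c_char_p
version = libc.gnu_get_libc_version().decode()
print(f"running against glibc {version}")

# A binary built on a newer system may reference versioned symbols
# (e.g. memcpy@GLIBC_2.14) that an older glibc does not export, which is
# why the usual advice is "build on the oldest distro you support".
```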

in reply to 2xsaiko

The difference is that most of your software is built for your distribution, the only exception being some proprietary shit that says it supports Linux, but in reality only supports Ubuntu. That's my pet peeve just so that you know!
in reply to MyNameIsRichard

Distributions are not the problem. Most just package upstream libraries as-is (plus or minus some security patches). That's why programs built for another distro will often run as-is on a contemporary distro, given the necessary dependencies are installed, perhaps with some patching of the library paths (as an extreme example, plenty of packages in nixpkgs just use precompiled .deb packages as a source, even though nixpkgs has a very different file layout).

Try a binary built for an old enough Ubuntu version on a new Ubuntu version however...

in reply to 2xsaiko

Linus got it right, it's just that other userspace fundamental utilities didn't.
in reply to 2xsaiko

It works under Windows because the Windows binaries ship with all their dependency DLLs (and/or they need some ancient Visual C++ runtime installed).

This is more or less the Flatpak way: bundle all dependencies into the package.

Just use Linux the Linux way: install your program via the package manager (including Flatpak) and let that handle the dependencies.

I've run Linux for over 25 years now and had maybe a handful of cases where userland broke, and that was because I didn't follow what I was told during a package upgrade.

The amount of time I had to spend getting out of DLL hell on Windows, on the other hand...
The Linux way is better and way more stable.

in reply to Magiilaro

in reply to 2xsaiko

Unreal Tournament 2004 depends on SDL 1.3, if I recall correctly, and SDL is not a core system library on Linux or on any other OS.

Binary-only programs are foreign to Linux, so yes, you will get issues integrating them. Linux works best when everyone plays by the same rules, and for Linux that means sources available.

Linux at its core is highly modifiable. Besides the kernel (and nowadays maybe systemd), there is no core system that an API could be defined against.
Linux on a home theater PC is a different system than Linux on a server, which is different again from Linux on a gaming PC or a smartphone.

You can boot the Kernel and a tiny shell as init and have a valid, but very limited, Linux system.

Linux has its own set of rules and its own way of doing things, and trying to force it to be something else cannot and will not work.

in reply to 2xsaiko

There was the Linux Standard Base project, but there were multiple issues with it and finally it got abandoned. Some distributions still have a /etc/lsb-release file for compatibility.
in reply to

I think webassembly will come out on top as preferred runtime because of this, and the sandboxing.
in reply to

What you described as a weakness is actually a strength of an open-source system. If you compile a binary for a certain system, say Debian 10, and distribute it to someone who is also running a Debian 10 system, it is going to work flawlessly, and without overhead, because the target system can get the dependencies on its own.

The inability to run a binary built for a different system, say Alpine, is as bad as not being able to run a Windows 10 binary on Windows 98. Alpine to Debian is on the same level as 10 to 98; they are practically different systems, just sailing under the same flag.

in reply to CarrotsHaveEars

The thing is, everyone would agree that it's a strength, if the Debian-specific format was provided in addition to a format which runs on all Linux distros. When I'm not on Debian, I just don't get anything out of that...
in reply to

nix can deal with this kind of problem. Does take disk space if you're going to have radically different deps for different apps. But you can 100% install firefox from 4 years ago and new firefox on the same system and they each have the deps they need.
in reply to pr06lefs

Someone managed to install Firefox from 2008 on a modern system using Nix. Crazy cool: blinry.org/nix-time-travel/
in reply to AtariDump

I use NixOS. But the package manager it's based on, nix, can be used on other OSes.
in reply to

I don't think static linking is that difficult. But for sure it's discouraged, because I can't easily replace a statically-linked library, in case of vulnerabilities, for example.

You can always bundle the dynamic libs in your package and put the whole thing under /opt, if you don't play well with others.
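For illustration, the /opt approach usually boils down to a small launcher that points the dynamic linker at the bundled libraries before the system ones. A minimal Python sketch (the /opt/myapp layout and app name are hypothetical):

```python
import os

# Hypothetical layout: /opt/myapp/bin/myapp plus bundled /opt/myapp/lib/*.so
PREFIX = "/opt/myapp"

def launcher_env(prefix, base_env):
    """Return the environment a launcher would use: the app's bundled lib/
    dir is prepended to LD_LIBRARY_PATH so the dynamic linker finds the
    bundled .so files before the system ones."""
    env = dict(base_env)
    bundled = os.path.join(prefix, "lib")
    old = env.get("LD_LIBRARY_PATH", "")
    env["LD_LIBRARY_PATH"] = bundled + (":" + old if old else "")
    return env

env = launcher_env(PREFIX, os.environ)
print(env["LD_LIBRARY_PATH"])
# A real launcher would then replace itself with the app:
# os.execve(os.path.join(PREFIX, "bin", "myapp"), ["myapp"], env)
```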

in reply to

Static linking is absolutely a tool we should use far more often, and one we should get better at supporting.
in reply to

Disagree - making it harder to ship proprietary blob crap "for Linux" is a feature, not a bug.
in reply to

That's a fair disagreement to have, and a sign that you're fighting bigger battles than just getting software to work.

Static linking really is only an issue for proprietary software. Free software will always give users the option to fix programs that break due to updated dependencies.

in reply to

Static linking is a good thing and should be respected as such for programs we don't expect to be updated constantly.
in reply to Einar

Flatpak with more improvements to size and sandboxing could be accepted as the standard packaging format in a few years. I think sandboxing is a very important factor as Linux distros become more popular.
in reply to asudox

Flatpak is very useful for a lot of things, but I really don't think it should be the default. It still has some weird issues. For example, if you run separate home and root partitions, Flatpak by default will install things into your root partition, which quickly fills up. You have to go in and do a bunch of work to get it to use the home partition.

Or, for example, issues with theming and cursors. It's a pretty common issue for Flatpaks to not properly detect your cursor theme and just use the default until you mess around with perms and settings to fix it.

They also generally get updates slower. I guess maybe if it's adopted more that would change, but Flatpak is already pretty widely used and that's still an issue, especially for smaller programs not used by as many people.

Keeping it as something that is good to use for the ones who like a GUI experience and want something simple and easy is great. But if we were to start doing what Ubuntu does with snaps, where they'll just replace things you install with the snap version, then I'm not in favor of that at all.

in reply to IHave69XiBucks

I agree that Flatpak is not there yet. The API is limited, and it is also hard to package an app. But I really want to see it succeed.
in reply to Einar

Stability and standardisation within the kernel for kernel modules. There are plenty of commercial products that use proprietary kernel modules that basically only work on a very specific kernel version, preventing upgrades.

Or they could just open source and inline their garbage kernel modules…

in reply to enumerator4829

I’m struggling with this now. There’s an out of tree module I want upstreamed, but the author (understandably) doesn’t want to put in the work to upstream, so I did. The upstream folks are reluctant to take it because I didn’t actually write it.

I really don’t know what to do.

in reply to enumerator4829

I don't use any of these, but I'm curious. Could you please write some examples?
in reply to fxdave

It mostly affects people working with ”fun” enterprise hardware or special purpose things.

But to take one example, proprietary drivers for high performance network cards, most likely from Nvidia.

in reply to Einar

ARM support. Every SoC is a new horror.

Armbian does great work, but if you want another distro you’re gonna have to go on a lil adventure.

in reply to kibiz0r

Wouldn't it make more sense to focus on an open standard like RISC-V instead of ARM?
in reply to Einar

A configuration GUI standard.
Usually there is a config file that I am supposed to edit as root, usually in the terminal.

There should be a general GUI tool that reads those files and obeys another file with the rules. Say: if you enable this feature, then you can't have that one on at the same time. Or: the number has to be between 1 and 5, no more, no less.
Basic validation. And run the program with --validation to let it decide itself whether the config looks good or not.
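To sketch the idea, here's a minimal validator in Python. The rule format and option names are made up, but a generic GUI (or the program itself, via a --validation flag) could check a config against rules like these before applying it:

```python
# Hypothetical machine-readable rules: value ranges plus mutually
# exclusive options, as described above.
RULES = {
    "ranges": {"volume": (1, 5)},              # value must be within bounds
    "conflicts": [("feature_a", "feature_b")], # can't both be enabled
}

def validate(config, rules):
    """Return a list of human-readable errors; empty list means valid."""
    errors = []
    for key, (lo, hi) in rules["ranges"].items():
        if key in config and not lo <= config[key] <= hi:
            errors.append(f"{key} must be between {lo} and {hi}")
    for a, b in rules["conflicts"]:
        if config.get(a) and config.get(b):
            errors.append(f"{a} and {b} cannot be enabled at the same time")
    return errors

print(validate({"volume": 7, "feature_a": True, "feature_b": True}, RULES))
```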

in reply to lime!

I agree. openSUSE should set the standard in this.

Tbf, they really need a designer to polish it visually a bit. It exudes its strong "sysadmin only" vibes a bit much, in my opinion. 🙂

in reply to Mio

Fuckin hate having to go through config files to change settings...

It's always great when settings are easily accessible in a GUI, though! Mad props to the great developers that include them!

in reply to Einar

in reply to Einar

Domain authentication and group policy analogs. Honestly, I think it's the major reason it isn't used as a workstation OS, when it's inherently more suited for it than Windows in most office/gov environments. But if IT can't centrally manage it like you can with Windows, it's not going to gain traction.

Linux in server farms is a different beast to IT. They don't have to deal with users on that side, just admins.

in reply to ikidd

I'm surprised more user-friendly distros don't have this, especially the more commercial ones.
in reply to ikidd

I've never understood putting arbitrary limits on a company laptop. I was always looking for ways to get around them. Once I ended up using a VM,
without limits...
in reply to fxdave

I mean, it sucks, but the stupid shit people will do with company laptops...
in reply to fxdave

TL;DR - Because people are stupid.

One of my coworkers (older guy) tends to click on things without thinking. He's been through multiple cyber security training courses, and has even been written up for opening multiple obvious phishing emails.

People like that are why company-owned laptops are locked down with group policy and other security measures.

in reply to ikidd

An immutable distro would be ideal for this kind of thing. ChromeOS (an immutable distro example) can be centrally managed, but the caveat with ChromeOS in particular is that its management can only go through Google via their enterprise Google Workspace suite.

But as a concept, this shows that it's doable.

in reply to Lka1988

I don't think anyone was saying it's impossible, just that it needs standardization. I imagine Windows is more appealing to companies when it's easier to find admins than if they were to use some specific Linux system that only a few people are skilled enough to manage.
in reply to ikidd

i’ve never understood why there’s not a good option for using one of the plethora of server management tools with prebuilt helpers for workstations to mimic group policy

like the tools we have on linux to handle this are far, far more powerful

in reply to ikidd

Ubuntu Server supports Windows Active Directory. I haven't used it for anything but authentication (and authentication works flawlessly) and some basic directory/share permissions but theoretically it should support group policy too.

It'd be cool if there was a mainstream FOSS alternative though (there might be, I've done literally 0 research), but this works okay-ish in the meantime.

But for management of the actual production servers at work I use a combination of ManageEngine (super great and reasonably priced) and Microsoft's Entra (doesn't work well, don't do it)

in reply to Einar

Small thing about filesystem dialogs. In file open/save dialogs some apps group directories at the top and others mix them in alphabetically with files. My preference is for them to be grouped, but being consistent either way would be nice.
in reply to Mactan

interoperability == API standardization == API homogeneity

standardization != monopolization

in reply to Einar

Manuals or notifications written with lay people in mind, not experts.
in reply to corsicanguppy

Would you mind providing some reasoning so this doesn't come off as unsubstantiated badmouthing?
in reply to eneff

My experience with systemd has been the opposite. Thanks to systemd, many core tools have consistent names and CLI behaviors.

Before systemd I used sysVinit, upstart and various other tools.

I’m glad systemd alternatives exist as part of a diverse Linux ecosystem but I haven’t had a compelling reason to not use systemd.

in reply to corsicanguppy

Systemd is fine. This sounds like an old sysadmin who refuses to learn because "new thing bad" with zero logic to back it up.
in reply to Lka1988

As a former sysadmin, there is plenty of logic in saying that. I have debugged countless systems that were using systemd, yet somehow the OpenRC ones just chug along. In the server space, systemd is a travesty.

In the desktop space, however, I much prefer systemd. Dev environments as well. So yes, that is where "it's fine". More than fine: needed!

I just hate this black and white view of the world, I can't stand it. Everything has its place. On servers you want as small a software footprint as possible, on desktops you want compatibility.

in reply to chaoticnumber

You're not wrong; I was just being hyperbolic.
in reply to Lka1988

I know, I was just having a bad day and I kinda took it out on you. My bad.
in reply to corsicanguppy

Yes, I find that dude to be very disagreeable. He's like everything that haters claim Linus Torvalds is - but manifested IRL.
in reply to steeznson

If the people criticizing him could roll up their sleeves and make better software, then I'd take their criticisms seriously.

Otherwise they're "just a critic."

in reply to Einar

Where app data is stored.

~/.local

~/.config

~/.var

~/.appname

Sometimes more than one place for the same program

Pick one and stop cluttering my home directory

in reply to HiddenLayer555

It's pretty bad. Steam, for example, has both
~/.steam and
~/.local/share/Steam
for some reason. I'm just happy I moved to an impermanent setup for my PC, so I don't need to worry that something I temporarily install is going to clutter my home directory with garbage.
in reply to HiddenLayer555

I have good news and bad news:

A specification already exists. specifications.freedesktop.org…
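The lookup that spec defines is simple enough to sketch in a few lines of Python: use the environment variable if set, otherwise fall back to the spec's default under $HOME.

```python
import os

def xdg_dir(var, default_subpath, env=os.environ):
    """Resolve an XDG base directory: env-var override, else $HOME default."""
    value = env.get(var)
    if value:
        return value
    return os.path.join(env.get("HOME", ""), default_subpath)

# Per the spec: config in XDG_CONFIG_HOME (~/.config), app data in
# XDG_DATA_HOME (~/.local/share), caches in XDG_CACHE_HOME (~/.cache).
print(xdg_dir("XDG_CONFIG_HOME", ".config"))
print(xdg_dir("XDG_DATA_HOME", os.path.join(".local", "share")))
```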

in reply to Mactan

Eh, things have gotten better, and there are tools that make misbehaving programs respect them.
in reply to HiddenLayer555

This would be convenient indeed, but I've learned to be indifferent about it as long as the manual or readme provides helpful and succinct information.
in reply to HiddenLayer555

This would also be nice for atomic distros, application space and system space could be separated in more cases.
in reply to Einar

I'm not sure whether this should be a "standard", but we need a Linux Distribution where the user never has to touch the command line. Such a distro would be beneficial and useful to new users, who don't want to learn about command line commands.

And also we need a good app store where users can download and install software in a reasonably safe and easy way.

in reply to gandalf_der_12te

Why do people keep saying this? If you don't want to use the command line then don't.

But there is no good reason to say people shouldn't. It's always the best way to get across what needs to be done and have the person execute it.

The fedora laptop I have been using for the past year has never needed the command line.

On my desktop I use arch. I use the command line because I know it and it makes sense.

It's sad people see it as a negative when it is really useful. But as of today you can get by without it.


in reply to AugustWest

It’s always the best way to get across what needs to be done and have the person execute it.


Sigh. If you want to use the command line, great. Nobody is stopping you.

For those of us who don't want to use the command line (most regular users) there should be an option not to, even in Linux.

It's sad people see it as a negative when it is really useful.


It's even sadder seeing people lose sight of their humanity when praising the command line while ignoring all of its negatives.

in reply to lumony

lose sight of their humanity


Ok this is now a stupid conversation. Really? Humanity?

Look, you can either follow a flowchart of a dozen different things to click on to get information about your Thunderbolt device, or type boltctl -list

Do you want me to create screenshots of every step of the way to use a GUI, or just type 12 characters? That is why it is useful. It is easy to explain, easy to ask someone to do. Then they can copy and paste a response, instead of yet another screenshot.

Next thing you know you will be telling me it is against humanity to "right click". Or maybe we all should just get a Mac Book Wheel

Look, I am only advocating that it is a very useful tool. There is nothing "bad" about it, or even hard. What is the negative?

But I also said, I have been using a Fedora laptop for over a year and guess what? I never needed the command line. Not once.

in reply to AugustWest

Ok this is now a stupid conversation. Really? Humanity?


Yeah, humanity. The fact you think it's 'stupid' really just proves my point that you're too far gone.

or type boltctl -list


Really? You have every command memorized? You never need to look any of them up? No copy-pasting!

Come on, at least try to make a decent argument to avoid looking like a troll.

I'm glad rational people have won out and your rhetoric is falling further and further by the wayside. The command line is great for development and developers. It's awful for regular users which is why regular users never touch it.

You lost sight of your humanity, which is why you don't even think about how asinine it is to say "just type this command!" as though people are supposed to know it intuitively.

Gonna block ya now. Arguing with people like you is tiresome and a waste of time.

Have fun writing commands. Make sure you don't use a GUI to look them up, or else you'd be proving me right.

in reply to lumony

You blocked me over a difference of opinion?

Wow.

All I am trying to say is that it is a tool in the toolbox. Telling people Linux needs it is not true; telling people it's bad is not true.

Quit trying to make it a negative. I would encourage anyone to explore how to use this tool. And when trying to communicate ideas on the internet it is a very useful one.

I have never blocked anyone, I find that so strange. It's like saying because of our difference on this issue, we could never have common ground on any other.

And you ask me to remember my humanity?

in reply to gandalf_der_12te

@gandalf_der_12te @Einar

Linux Mint and some kind of Ubuntu flavour are the go-to, preferably the LTS versions. For Ubuntu that's 24.04, for Mint it's 22. So you'll only ever need the command line for one short line, and only in 2029.

So for the next few years you don't need to touch the command line.

in reply to gandalf_der_12te

I really don't understand this. I put a fairly popular Linux distro on my son's computer and never needed to touch the command line. I update it by command line only because I think it's easier.

Sure, you may run into driver scenarios or things like that from time to time, but using supported hardware would never present that issue. And Windows has just as many random "gotchas".

in reply to RawrGuthlaf

I try to avoid using the command line as much as possible, but it still crops up from time to time.

Back when I used windows, I would legitimately never touch the command line. I wouldn't even know how to interact with it.

We're not quite there with Linux, but we're getting closer!

in reply to lumony

I try to avoid using the command line as much as possible


Why would you do that?

in reply to gandalf_der_12te

I think there are some that are getting pretty close to this. Like SteamOS (although not a traditional DE) and Mint.
in reply to Jack Waterhouse

Mint is pretty good, but I found the update center GUI app would always fail to update things like Firefox with some mirror error (regardless of whether you told it to use a mirror or not). It happened on my old desktop (now my dad's main computer), my LG laptop, and a used HP EliteDesk G4. Using "sudo apt update" + "sudo apt upgrade" + Y (to confirm) on the command line was 10x easier and just worked. I do feel better/safer now that they use Linux for internet browsing instead of Windows, too.
in reply to Einar

Each monitor should have its own framebuffer device rather than only one app controlling all monitors at any time and needing each app to implement its own multi-monitor support. I know fbdev is an inefficient, un-accelerated wrapper of the DRI, but it's so easy to use!

Want to draw something on a particular monitor? Write to its framebuffer file. Want to run multiple apps on multiple screens without needing your DE to launch everything? Give each app write access to a single fbdev. Want multi-seat support without needing multiple GPUs? Same thing.

Right now, each GPU only gets 1 fbdev and it has the resolution of the smallest monitor plugged into that GPU. Its contents are then mirrored to every monitor, even though they all have their own framebuffers on a hardware level.
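That ease of use is easy to demonstrate: a framebuffer device is just a file of raw pixels. Here's a Python sketch that fills a rectangle, assuming 32 bpp BGRX and a hardcoded screen width; a real program would target /dev/fb0 and get the actual geometry from the FBIOGET_VSCREENINFO ioctl instead.

```python
def fill_rect(fb_path, screen_w, x, y, w, h, bgr=(0, 0, 255)):
    """Fill a w*h rectangle at (x, y) with a solid colour (default: red),
    assuming 4 bytes per pixel (BGRX) and screen_w pixels per scanline."""
    pixel = bytes(bgr) + b"\x00"        # one BGRX pixel
    line = pixel * w                    # one scanline of the rectangle
    with open(fb_path, "r+b") as fb:
        for dy in range(h):
            # Seek to the start of this scanline's slice and overwrite it.
            fb.seek(((y + dy) * screen_w + x) * 4)
            fb.write(line)

# e.g. fill_rect("/dev/fb0", 1920, 100, 100, 200, 200)
```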

in reply to Codilingus

Yes and no. It would solve some problems, but because it has no (non-hacky) graphics acceleration, most DEs wouldn't use it anyway. The biggest benefit would be from not having to use a DE in some circumstances where it's currently required.
in reply to muusemuuse

There is a separate kernel being written entirely in Rust from scratch that might interest you. I'm not sure if this is the main one (github.com/asterinas/asterinas) but it is the first one that came up when I searched.

By the tone of your post you might just want to watch the world burn in which case I'd raise an issue in that repo saying "Rewrite in C++ for compatibility with wider variety of CPU archs" ;)

in reply to steeznson

I'm of the opinion that a full rewrite in Rust will eventually happen, but they need to be cautious and not risk alienating developers à la Windows Mobile, so right now it's still done in pieces. I'm also aware that many of the devs who cut their teeth on the kernel's C code like it as it is, resist all change, and this causes lots of arguments.

Looking at that link, I'm not liking the MPL.

in reply to Einar

While all areas could benefit from standardization in terms of stability and ease of development, the whole system and each area would suffer in terms of creativity. There needs to be a balance.
However, if I had to choose one thing, I'd say package management. At the moment we have deb, rpm, pacman, flatpak, snap (the latter probably should not be considered, as the server side is proprietary) and more from some niche distros. This makes it very difficult for small developers to offer their work to all/most users.
Otherwise, I think it is a blessing to have so many DEs, APIs, etc.