@flo Now, I’m studying computer science, so I could just pull rank on this one, but I won’t, since there’s perfectly good logic I can explain this with.
In short: you’re mistaking usability for reliability.
In long:
On software sucking:
Knowing how to use a computer has nothing to do with the computer always working properly. Computers don’t have problems most of the time, which is why the whole of western civilization relies on them so consistently, and when they do, the problems are minor inconveniences, assuming the people using them aren’t drunk chimps. If you’re working on an important project, you should be saving once every ten minutes, and you should keep a daily backup on at least two separate devices. Why? Because just as you can fuck up a piece of paper beyond repair by spilling ink on it, you can lose a file by accident: the hard drive can get fucked up, or you could accidentally delete the file yourself. These are normal precautions, and people who take them rarely complain that a crash made them lose “all their work”. If you don’t take them, you have nobody to blame but yourself; that’s not an “ease of use” issue.
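Just to show how low a bar “backup” actually is, here’s a minimal sketch in C of the whole precaution (the file names and paths are invented for the example): copy your working file to two other places, done.

```c
#include <stdio.h>
#include <stdlib.h>

/* Copy src to dst byte by byte. Returns 0 on success, -1 on failure. */
static int copy_file(const char *src, const char *dst)
{
    FILE *in = fopen(src, "rb");
    if (!in) return -1;
    FILE *out = fopen(dst, "wb");
    if (!out) { fclose(in); return -1; }

    char buf[4096];
    size_t n;
    int err = 0;
    while ((n = fread(buf, 1, sizeof buf, in)) > 0)
        if (fwrite(buf, 1, n, out) != n) { err = -1; break; }

    if (ferror(in)) err = -1;
    fclose(in);
    if (fclose(out) != 0) err = -1;
    return err;
}

int main(void)
{
    /* Invented paths: one working copy, two backups on separate drives. */
    if (copy_file("project.txt", "D:/backup/project.txt") != 0 ||
        copy_file("project.txt", "E:/backup/project.txt") != 0) {
        fprintf(stderr, "backup failed\n");
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
```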
These problems have nothing to do with usability either; they are reliability problems.
As for usability, that’s the only thing that determines who can or cannot use a computer: how easy it is to use. For instance, I can use a command line program, no problem; people I know just freeze up in front of a command line because they have no fucking clue what to do without a mouse. Both systems are reliable, but one is easier to use than the other. The catch is that the easier one is also much more complicated to program, and that’s why the earliest examples were ginormous pieces of shit, see Windows 3.1.
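If you’ve never dealt with one, here’s the kind of thing I mean by a command line program: a made-up toy in C. Perfectly reliable and perfectly usable, but only if you already know what to type, because nothing on screen will tell you.

```c
#include <stdio.h>
#include <string.h>

/* A toy command-line tool. There's no menu and no button: you either
 * know the flags or you're stuck. Usage: greet [--shout] NAME */
int main(int argc, char **argv)
{
    int shout = 0;
    const char *name = NULL;

    for (int i = 1; i < argc; i++) {
        if (strcmp(argv[i], "--shout") == 0)
            shout = 1;
        else
            name = argv[i];
    }

    if (!name) {
        fprintf(stderr, "usage: greet [--shout] NAME\n");
        return 1;
    }

    if (shout)
        printf("HELLO, %s!\n", name);
    else
        printf("hello, %s\n", name);
    return 0;
}
```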
Hence why we started coming up with methods to program mouse-based interfaces efficiently and reliably. Mind you, we had to come up with them. Computers as we know them have less than half a century of development behind them, and already we’ve reached the point where they can render close-to-photorealistic images in real time, 60 per second. And this starting from jack shit. That’s a fucking achievement if I ever saw one. And we’re actually progressing very fast. The problem is that the programs that do fuck up are much more complex than that, in no small part because we also have to make them comprehensible and intuitive. Most people don’t know how a computer actually works, and wouldn’t use a program that required them to. Which means we need to (metaphorically) make a guy who knows nothing about pen and paper able to write books, without ever teaching him about pen and paper.
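The core trick behind all of those methods, stripped of every real-world detail, is the event loop: the program sits there waiting for the user to do something, then reacts based on what that something was. Here’s a bare-bones sketch; the event types and the fake queue are invented for illustration, since in reality a windowing system (Win32, X11, ...) delivers the events for you.

```c
#include <stdio.h>

/* Invented event types, standing in for what a real
 * windowing system would deliver. */
typedef enum { EV_MOUSE_MOVE, EV_MOUSE_CLICK, EV_KEY, EV_QUIT } EventType;

typedef struct {
    EventType type;
    int x, y;   /* mouse position, where relevant */
    char key;   /* key pressed, where relevant */
} Event;

/* Stand-in for the OS handing us the user's next action. */
static int next_event(Event *ev)
{
    static const Event fake_queue[] = {
        { EV_MOUSE_MOVE,  10, 20, 0   },
        { EV_MOUSE_CLICK, 10, 20, 0   },
        { EV_KEY,          0,  0, 'q' },
        { EV_QUIT,         0,  0, 0   },
    };
    static size_t i = 0;
    if (i >= sizeof fake_queue / sizeof fake_queue[0]) return 0;
    *ev = fake_queue[i++];
    return 1;
}

int main(void)
{
    Event ev;
    /* The whole GUI paradigm in a few lines: wait, dispatch, repeat. */
    while (next_event(&ev)) {
        switch (ev.type) {
        case EV_MOUSE_MOVE:  printf("cursor at (%d, %d)\n", ev.x, ev.y); break;
        case EV_MOUSE_CLICK: printf("click at (%d, %d)\n", ev.x, ev.y);  break;
        case EV_KEY:         printf("key '%c' pressed\n", ev.key);       break;
        case EV_QUIT:        return 0;
        }
    }
    return 0;
}
```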
So, because of all this, yes, user-oriented software does suck. User-oriented software is less reliable, but easier to use. It’s the tradeoff you get for wanting stuff to be easier: the difficulty is still there, you just don’t see it, because it’s all on our shoulders.
The programs that crash the least are the ones that do the least, because they’re simpler, easier to write and easier to debug. The bigger, more complex and easier to use a program is, the more work falls on the programming team, and thus the more errors slip through the cracks.
That’s why an unreliable yet easy-to-use system like Windows rules the market, while a much more reliable yet much harder-to-use free OS (Linux) is consistently passed over for everything that doesn’t have reliability as its absolute top priority.
As for the “necessary things”:
What is a necessary computer program? You can do pretty much anything a computer lets you do using pen and paper; computers just make it a fuck of a lot easier to do, reproduce, distribute and preserve. Which programs are necessary depends entirely on the individual user. Photographers want Photoshop to work, 3D artists need 3D Studio Max, and I only need Notepad and gcc because I program. Which of these is necessary from an objective standpoint? None.
What is necessary is the OS and, even then, no machine works indefinitely without routine maintenance, so even a perfect OS ends up fucking up as time passes, because errors accumulate and compound over time.
That’s why every company produces a different set of programs, because programs are made for niche markets, and the only program that everyone needs is the OS itself.
And since humans are fallible, the OS is going to have problems. The more complex the OS, the more problems it’s going to have, mainly because the more stuff there is, the more room for errors there is. If you write a sentence, you can easily spot inconsistencies and errors in it; if you write a book, it’s harder; if you write a book on loose pages with no page numbers, it’s hard to find mistakes and even harder to spot inconsistencies. And when the OS has problems, the other programs are going to inherit them, on top of having their own.
As for the industry:
You seem to work under the assumption that there’s such a thing as a homogeneous “industry” in computer science, the way there is in, say, food production. There is no such thing.
The three major OSs (Win, Mac and Linux) are completely incompatible with each other (except through emulation, which can’t be relied upon anyway) and, with the exception of Linux, they’re products owned by companies that are in competition with each other, at least in theory. Companies that don’t even share similar development philosophies.
So there is no such thing as “focusing on solving the necessary things”.
FIAT isn’t going to help Ford solve the fuel consumption problems its cars have out of sheer goodwill, because they’re both in the same market and an enemy’s weakness is your strength. And in a market dominated by two titans, as with computer OSs, there is no alliance.
To close everything up:
What’s transparent from the questions you raised is that you have no idea what exactly software design entails or how the computer science “industry” works.
You need to realize that everything a computer does could very well be typed in by hand in 0s and 1s; all software does is make that more accessible (“higher”, in jargon).
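To make “higher” concrete, here’s one line of C next to the kind of thing the machine actually executes (the instruction bytes in the comment are made up for illustration, not taken from a real compile):

```c
#include <stdio.h>

int main(void)
{
    /* The "high" version: one readable line. */
    int total = 3 + 4;

    /* What the machine actually runs is a pile of numeric
     * instruction encodings, something like (made-up bytes):
     *   10111000 00000011 ...   load 3 into a register
     *   00000101 00000100 ...   add 4 to it
     * You could type those in by hand. Nobody wants to. */
    printf("%d\n", total);
    return 0;
}
```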
The “higher” a program is (i.e. the less actual computer knowledge it takes to operate), the more complex its code is, the heavier it is on the OS and the machine itself and, ultimately, the more unreliable it is.
That’s simply how it is, and it’s still much more efficient than bashing out line after line of code yourself every time, because a) you do it once and then it’s always there and b) it needs no knowledge from the end user.
That’s why I mentioned Photoshop and 3D Studio Max: they’re programs targeted at people who may very well know nothing about a computer’s inner workings. Photographers aren’t required to know how to program in C++ or Java, and they shouldn’t be.
They also shouldn’t be complaining if the software they understand nothing about crashes sometimes, because if it were up to people with their skillset, they’d be out of a job: film and cameras weren’t invented by photographers but by engineers from various fields.
Therefore, just as they can’t honestly complain about top-of-the-range cameras without sounding like insatiable whiny twats, they can’t complain about the best software we can put together. Why? Because they had no input in actually making it, nobody is forcing them to use it and, this being a market based on competition, they can bet their asses that if any company could seriously jump forward and upstage its competition, it would. What we have now is what we can manage now. What we will have tomorrow is what we will be able to manage tomorrow. Simple as that.