Archive for the ‘ubuntu’ Category

Update 20090713 am – Miscellaneous

July 12, 2009

I didn’t boot back into #! until late last night. I’ll try to edit the jwmrc later. I’ll add screenshots to the previous entry momentarily.

I still get a ton of hits for DSL-related things. I see in the search engine terms for today that I already have someone looking for how to use DSL to install Debian. Don’t! I can’t believe people still think they can do this; DSL is not a jumpstart to a small Debian system. Go read my hard drive install page linked in the top right-hand corner of my blog (click on the banner to go “home” if you’ve landed here and the links in the right column aren’t visible).

In a nutshell, DSL was based on an old version of Knoppix which was based on a now-deprecated version of Debian. Debian no longer offers any support for that version (which was called Woody). You cannot do a dist-upgrade from DSL to Debian-stable without breaking every freaking thing in the system.

You can still use DSL as it is but it appears development has ended. I haven’t seen if John Andrews has posted a roadmap or even announced what he’s doing development-wise.

If you want a small-ish Debian system, the best idea I’ve seen is Kerry’s which he’s posted in various forums including at DSL, IIRC. I searched and found it at the Ubuntu forums again just now. You can use that with any net-install-able distro like Debian, Ubuntu, Fedora, etc. The BSDs require a bit more work but also can be used to create lighter systems. Etc. If your computer can run Slitaz or a POS distro like Puppy, you can run Tiny Core and build a light system, too.
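The gist of that approach, as I understand it (the package list below is my own example, not Kerry’s exact recipe), is a base-only netinstall followed by hand-picking a minimal X stack:

```shell
# After a tasksel-free Debian netinstall (base system only), add just
# a bare X stack and a light window manager. Package names here are
# examples; swap in whatever suits you.
apt-get update
apt-get install --no-install-recommends xorg jwm iceweasel
```

The --no-install-recommends flag is what keeps the dependency chain from dragging half of Gnome in behind your back.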

I’m still a bit anxious about how this Atheros card is performing under Linux. I’d hoped the more recent kernel in #! would resolve the matter. I’d suspended and resumed a few times to see if it would flake out on me, but what happened yesterday was the longest time it had been suspended while running #!.

I’m also noticing that my transfer rate is swinging wildly from 1 Mb/s to 54 Mb/s. I’m going to have to delve through more bug reports and see wtf is happening. I knew the other stuff I’d experienced with older (and apparently not as patched as I thought) kernels was a known issue. I haven’t looked to see if any other reports have been filed since the big re-write. I compared my signal using my old ThinkPad, which uses a Broadcom card, and the wireless signal strength, transfer rate, etc., seem more stable with it.
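If anyone wants to watch the same thing happen on their own card, something like this works (the interface name wlan0 is an assumption; run iwconfig by itself to find yours):

```shell
# Poll the ath5k link once a second and watch the bit rate and
# signal level bounce around. wlan0 is a guess at the interface name.
watch -n 1 "iwconfig wlan0 | egrep 'Bit Rate|Signal level'"
```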

I also need to double check my warranty. I know it’s supposed to be one year but I don’t remember the fine print to see how nitpicky it is. If it’s voided, I may bust this thing up and find another card.

(Edit: Signal strength and rate aren’t an issue in Windows, only in Linux. I likely won’t change the card unless I decide against upgrading to Windows 7.)

Update 20090708: crunchbang, etc.

July 8, 2009

Following up on a recommendation from kruce, I saw there was a new release of crunchbang (#!) and decided to download it since it’s been a while since I last looked at it. I wanted to see if it would work better on my AA1 than Fedora, particularly with the ath5k problem I’ve had. I also want to see how much lighter it is than Xubuntu.

I ran into a problem trying to get the latest ISO:


Oh well, I’ll try again later, even though it’s based on Ubuntu.

I’m seriously looking at alternatives to Fedora 10, including Fedora 11 despite some problems I’ve noticed (including a broken Synaptics driver and a requirement for an ext4 root partition when installing from the live CD). I ran the KDE image again the other day. As much as I like KDE and its integration, it’s a bit much for my AA1’s lone GB of RAM. And I think I’d wait for PCLOS 2009.3 if I were really KDE-inclined.

More soon, maybe.

UPDATE 15:28 US/Central – 8 July 2009: Still no luck with the “official” download link, which isn’t mirrored. The forum has torrent links, and others are making the ISOs available on their servers (some of them anyway, which is understandable since most people would ordinarily only download one or two depending on architecture and whether they want the regular ISO, the lite ISO, or both). The first time I actually got the download started from the main site, it failed anyway; the server timed out and I couldn’t reconnect to it. I wanted to download the lite ISO, but the only link I saw for it was via The Pirate Bay; that would require a change in DNS servers since my ISP blocks TPB. Trying to get the regular ISO now (really big thanks to those hosting). It’s very slow and tedious, though.
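For what it’s worth, working around an ISP’s resolver block is a two-line job as root (OpenDNS shown as one public example; use whatever resolver you trust):

```shell
# Back up the ISP-assigned resolver config, then point lookups at a
# public DNS service instead (OpenDNS addresses as an example).
cp /etc/resolv.conf /etc/resolv.conf.bak
printf 'nameserver 208.67.222.222\nnameserver 208.67.220.220\n' > /etc/resolv.conf
```

Caveat: if a DHCP client manages resolv.conf, it may overwrite this at the next lease renewal.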

UPDATE 2 – 17:15 US/Central – 8 July 2009: (EDITED AND SCREENSHOTS ADDED) Got it downloaded and installed to a USB thumbdrive via unetbootin. First boot was faster than I thought it would be. Default keyboard came up as GB, no problemo. Freaking NetworkManager again. Got connected to my router, no problem. Kept an eye out for messages about my Atheros card freaking out like in Fedora, but saw nothing in the short time I was running #!. (I downloaded the latest kernel sources to see if that fixes the problem with the card flaking out.)

Here are a couple screenshots. Nothing special, just screen and some pinging.



The RAM use in conky was cache; actual memory used was closer to 530 MB after running Firefox. That’s very admirable compared to other things I’ve run off USB.
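For anyone who wants to check the same number on their own box, the line that matters in free’s output (on kernels of this vintage) is the one that subtracts buffers/cache, since the cache gets handed back whenever an application asks for the memory:

```shell
# Print memory use with buffers/cache subtracted; the plain Mem: line
# counts reclaimable cache as "used" and overstates things.
free -m | awk '/buffers\/cache/ {print "really used: " $3 " MB"}'
```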

Will I install? Ummm, don’t know yet. I want to give the lite version a spin first because this has some stuff on it that I’ll replace anyway. To its credit, it recognizes my hard drive as /dev/sda and the USB stick as /dev/sdb, which means it should install (e.g., GRUB) properly if I do put it on my hard drive.

Definitely Not Installing Xubuntu 9.04 on AA1

June 11, 2009

Just booted Xubuntu 9.04 (Jizzy Jackass or something) from USB on my AA1 to see if it’s possibly something I’d consider installing even though I think Ubuntu is really Swahili for “fucktard.” Or worse.

First thing I did when I got a desktop was open a terminal and use free -mt to see how much RAM it was using: 522 MB. Fuck, that’s about 200 MB (208, to be precise) more than my installation of Fedora 10 using Gnome. Then again, I have a lot of services off in Fedora, a smaller shell (mksh), no fancy wallpaper, etc. Maybe I could pare down Xubuntu to a more usable degree. But why? I’ve done that already with PCLOS and Fedora 10.

I already knew from a comparison of Xubuntu and Debian with Xfce at Distrowatch a few weeks ago that Xubuntu is a bloated pig. Now I have my own comparison after running Debian Live (Xfce) the other evening.

Would I possibly install Xubuntu on my AA1? Umm, hell no.

But I could be getting closer to installing something soon.

UPDATE – 11 Jun 2009 @ 20:05 US/Central: I booted into my PCLOS installation a while ago to get an idea of resource use. I boot into runlevel 3, so I don’t do a splashy graphical login. Once I log in and have a shell, I’m using 69 MB of RAM (per free -mt). Out of curiosity, I started Xfce, opened a terminal, and looked at free again:


That, too, is with a few tweaks to turn off some services, but I’m still running bluetooth and cups and other stuff I chose to keep running out of convenience. I expect some difference between running off CD/USB compared to hard drive. Over twice as much? Please. (I’m using pdksh in PCLOS rather than mksh. I saw a shitload of bash instances in Xubuntu when I ran ps aux.)
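A quick way to reproduce that count, if anyone cares (the brackets keep grep from matching its own process entry):

```shell
# Count running bash instances without grep counting itself.
ps aux | grep -c '[b]ash'
```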

If I could get better hardware support under PCLOS, I would stick with it exclusively. Maybe a new kernel is in order? Last time I tried, I got an error that the kernel wouldn’t compile with the version of gcc in the base. Might screw around with it some more soon. Or maybe not. Right now I’ve narrowed down what I want to Xfce at most (I’d be just as happy with ratpoison or jwm or some other small window manager) rather than Gnome or KDE, OpenOffice 3.x (or Lotus Symphony), a handful of apps and utilities I like, and mplayer.

Gnome RAM Use, LXTerminal, Tiny Core 2.0, FLWM, and a Long Frigging Rant About It All

May 27, 2009

Rebooted into Gnome this morning after giving the latest Tiny Core release candidate I’d downloaded over the weekend a quick spin. I wanted a quick and dirty benchmark for where my AA1 is on a clean boot using Gnome so I can compare it to other environments. This is with networking started, along with a bit of stuff I could probably slim down (e.g., I could start cupsd only when I intend to print).


See, LXDE guys, this is how a terminal should behave; yours doesn’t work right. My shell runs as intended, and I don’t have to force the terminal to read my profile settings to get my prompt, my aliases, or anything else I keep in my .mkshrc. It’s kind of stupid to have to set up a shell wrapper just so LXTerminal reads ~/.profile (and, from that, .mkshrc) when it starts, so I don’t falsely presume my aliases and other settings are loaded. My complaint last night (in the screenshot) wasn’t about the prompt, which is just a marker that a particular file has been read properly; it’s about an application that ignores what should be a standard: read the usual environment files (not just a fucking bashrc, because not everyone uses bash) so the proper environment is available to the user. Does that make any sense?

Okay, now about my thoughts on the changes in Tiny Core 2.0. I’m not able to do much with it yet because I didn’t load the modules I need for the AA1 (I’m not close enough to an ethernet cable to connect to the Internet). It’s what I expected: spartan. It’s like an empty canvas just waiting for the artist to express himself. Only instead of painting a few pieces of fruit or a barn or something, users get to add only what they want or need to it. No pretenses, no clutter, just what you need. Alas, people confuse desires with needs and vice versa.

I know there will be lots of bitching about FLWM. I saw some already last week at Distrowatch and also in the TCL forums; some of it was the drama queen “you’re killing your distro” kind. I don’t know why that’s such a hard thing for users to accept, since there are other window managers available in the repository and they’re not limited to what’s in the base. The window manager is only there to manage windows, not to be admired. If you want to admire your computer screen, turn it into a picture frame and don’t bother using applications. You can dress it all up however you want. Seriously, why should aesthetics be a show stopper?

Let’s contrast it with Moblin, which has all the sizzling sexy eye candy but has things that either don’t work yet or that crash over and over again. Every reviewer writes like he or she had multiple orgasms from using it despite the fact that it’s advertised as beta-level (haha, what an overstatement; try alpha) and buggy as hell. Reviews and feedback about stable little Tiny Core (and DSL before it) are filled with complaints that it’s not flashy enough compared to everything else out there. Okay, it may not be the fanciest distro, but it doesn’t crash and then repeatedly pester you with crash notices so you can decide if you want to e-mail the developers.

Robert and the Tiny Core team are putting out a rock-solid little distro. Why can’t that shine on its own without being all dressed up in Web 2.0 shite? Distros are about more than eye candy, or at least they should be. What should count most is their efficiency and stability. Tiny Core has that. It’s not the easiest thing to set up and use, but once you get a few concepts down it’s easy to manage and won’t give you much grief because it’s stable.

I tried to help other DSL users who whined about the lack of sex appeal see how they could change JWM from “boring” to “fancy.” In one ear and out the other. As if DSL and Tiny Core are about window managers.

If FLWM is a deal breaker, you’re trying the wrong distro anyway so keep your thoughts to yourself. Go back to Ubuntu and its sloppy Netbook Remix with the ever-crashing desktop menu. Or go ahead and use Moblin’s preview even though it’s not intended for production (and lives down to that!). Or use some other bloated piece of shit that looks fantastic and awesome and will make you cum all over yourself from the sensory overload. Just remember that there are more stable options available when you get tired of the system failing, breaking, or doing odd things because more concern was given to gussying it up than making it run right.

The irony: people now demand JWM back in the base. Wonder how many of them were complaining when JWM was made DSL’s default window manager over fluxbox.

Can’t please everyone. Can’t please some people at all.

Memorial Day Weekend Finale: Ubuntu Netbook Remix on AA1

May 25, 2009

My little Memorial Day Weekend Linux Fest continues with a look at Ubuntu Netbook Remix (UNR), which I probably would never have tried had I not tried Gnome in Fedora.

Here’s my quick summary of UNR: FAIL.


Let’s start backwards so I can complain about one aspect of Ubuntu’s philosophy that makes it less than ideal as far as I’m concerned. I seriously considered installing UNR despite the issues I had with the menu/desktop thing repeatedly crashing (see below). I also hate trying to understand their goofy installer, which makes it convoluted to do a “custom” installation. I realize Ubuntu tries to make things more “approachable” for unsophisticated users, but the over-simplification and “recommended” settings make me wonder if their installer really understands what I want to do. Maybe it’s the paranoia from trusting the PCLOS installer to automatically decide what’s best for the space I set aside. But part of it’s also due to an (IIRC) 6.06-era Xubuntu install that ignored everything I tried to do.

My concerns this time with the dumbed-down (retarded even) techniques of Ubuntu began when I read on the UNR wiki that the “preferred” way to get the image onto USB was to download and use their bloated software in Windows or Linux rather than dd it via Linux or BSD. I don’t think more steps are necessarily preferable to fewer. But Team Ubuntu is so hung up on doing every damn thing via graphical interface that it’s part of the deal.


Anyway, I ignored their preferred way and just used dd if=/path/to/theirnearlyoneGBimage.img of=/dev/myusb from my Fedora installation. La ti da. I rebooted from the USB stick and watched the Ubuntu splash screen hide all the boot processes. That’s another reason I hate Ubuntu. I want to see what process(es) start and might need to be turned off, to see what hardware is or isn’t detected. Ubuntu’s motto may as well be “ignorance is bliss.” Maybe the less noobs know, the less they’ll fuck things up and flood the forums with the same old questions.
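Spelled out, with a block size so the copy doesn’t crawl (the image filename and /dev/sdb are examples; triple-check the of= device with fdisk -l first, because dd will cheerfully overwrite the wrong disk):

```shell
# Write the raw UNR image straight to the USB stick. Both the image
# name and the target device here are examples, not gospel.
dd if=ubuntu-9.04-netbook-remix-i386.img of=/dev/sdb bs=4M
sync   # flush buffers before yanking the stick
```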

When I got the UNR desktop, it wasn’t the standard Gnome desktop, but it was a shade of that shit-brown color Ubuntu has always used but really shouldn’t. How the fuck do they get away with such a boring earthtone when so many noobs insist on bright, shiny eye-candy? Come on, we’re in the 21st century already.

Let me digress and confess that I briefly ran PudX or XPud or whatever the hell it’s called — sorry, but I find it hard to keep a straight face with anything that includes “pud” as part of its name — a couple weeks ago. It has one of these tabbed menu interfaces which I think belongs on devices like phones and PDAs rather than on netbooks. It was more intuitive and cleaner and clearer than the crappy attempt by Moblin. I think I could live with the XPud/PudX (whatever, heh) interface. The one with Moblin deserves a hot corner of Hell for giving me such a headache the other night because it didn’t seem as coherent.

The UNR desktop/menu interface is kind of like the offspring you’d expect if the Moblin interface mated with PudX/XPud’s (sorry, told y’all a couple entries ago that these distro names are getting too screwy to care about anyway, and here’s one with PUD in it). It’s a bit clearer, but it also has some strange ideas that I think might work on a phone but make me long for a standard desktop instead.


Worse, the thing kept fucking crashing on me! The first time was when I looked to see if I could change the shit-brown to something a little more normal for a white Aspire One. I went with ClearLooks and changed the light blue to a richer cobalt-like shade. Then the desktop menu thing disappeared and a few seconds later I was asked if I wanted to see more of these messages and if I cared to send a bug report to the developers. No to both, and restart the desktop menu thing. Opened Firefox, tried to get some streaming (PLS) audio. Had to find the codecs, install them, then it started. Went back to get a screenshot, found it in the menu, then the menu thing crashed again. And again. And again. And again.


I realize some people think they need to reinvent wheels to differentiate themselves from an upstream distro, but you really shouldn’t fix what’s not broken. I don’t find a tabbed desktop menu any more useful than a traditional one, and one that crashes every 90 seconds is not an improvement — it’s a fucking annoyance. Look at the above screenshot and note that there’s only an icon tray for open windows: two Firefox icons (one for the Firefox browser and the other for the download manager), another for Totem (which I manually selected to play the stream instead of the default Rhythmbox), and the Ubuntu logo which doesn’t pop down a traditional menu but takes you back to the crummy desktop/menu thing.

I decided I didn’t want this on my hard drive even though stuff that hasn’t worked in other distros, like the internal mic, worked to some extent (though I didn’t try the card readers, earphones, etc.). That doesn’t mean things worked flawlessly. I mentioned the microphone worked, but I could only record clear sounds at the lowest possible settings (for spx, IIRC). Other settings resulted in popping sounds. I didn’t capture any video successfully via Cheese, either, but it did take some clear pictures.

Even though I didn’t want to commit to installing it, I did go through the installer to see if I had reason to be concerned it would override how I’d want to install it. When I got to the partitioner, the whole thing ran off the screen so I had to move the window back and forth to see what was happening. Why? Too many partitions? That’s messed up regardless of why because this is geared towards machines which tend to have 1024×600 resolution. Why can’t you get that set up so it scales to the width of the screen instead of to infinity regardless of how many partitions are set up?!

To Ubuntu’s credit, their installer recognized the other partitions and the distros installed on the / of each. I have two Windows partitions (one recovery and one for the XP installation), one Linux swap, five Linux partitions (one entirely unused ~20 GB partition I could use to test another distro), and a big chunk of free space that will most likely become an encrypted Windows partition if I merge my Linux partitions whenever Fedora 11 (coming the first week of June, maybe) or a better option comes along.

Anyway, I stopped everything when the installer looked like it switched from my chosen partition to the “use whole disk” setting — not sure if that happened when I was alt-mouse moving the window so I could see WTF was happening or if it did that itself. I didn’t care because I think I’d just as soon use standard Ubuntu as this remix and its buggy desktop menu. Which means screw Ubuntu, I’ll stick with Fedora.

Just as I was getting ready to shut it down, the desktop/menu thing crashed for the final time. I tried every fucking keystroke combination I could to get a menu to no avail. This was the straw that broke UNR’s back as far as I’m concerned. I couldn’t see what the Ubuntu splash screen was hiding during boot but I decided to see if it disabled the AA1’s on-off button. As soon as I clicked it, I got Ubuntu’s shutdown menu. Yea! I rebooted and will likely wipe the USB stick very soon.

Like Moblin, UNR seems a great idea on paper. Only problem is, it sucks on the computer. I’ll give Moblin and UNR each an A for effort but have to give both an F for flawed, failed execution. I really think a standard distro will suit my needs better than a machine- or netbook-specific one at this point. I don’t want my netbook to run or look like a cellphone or PDA. It’s a computer and I use it like one (which is why the XP models have far outsold the Linux-based cloud versions of these things: people use them as computers rather than as net appliances). I don’t want some quirky interface (no, ratpoison isn’t quirky, and it doesn’t have cascading walls of “m-zones” and other bizarre novelties getting in the way if the mouse moves too far).

This turned out to be a bad way to spend the weekend, though it wasn’t a complete waste of time. Nothing I tried this weekend (Linux-wise) was much of an improvement over anything else I’ve already installed. XP remains flawless on the Aspire One and will remain my primary OS on this thing. I converted PCLOS from KDE to Xfce/JWM/ratpoison (launched from the SLiM login manager) and it’s actually okay despite how pissed off I am about its automatic installation, though some hardware issues remain unresolved. Fedora has also been surprisingly good on this even though I don’t consider myself a Gnome fan, and Fedora is the direction I’m leaning if I ever settle on a binary-based distro. Maybe PCLOS will get stronger now that Tex has resumed control, in spite of all the defections.

I may yet give Tiny Core another run because I’m finding myself doing so much tweaking regardless of which distro I use that I may as well go back to the modular concept I wanted. I keep saying that, but I never have time to mess with anything anymore. But who knows. I saw someone whining about the aesthetics of FLWM, the new default TC 2.0 window manager, at Distrowatch last week. A plain, functional window manager sounds very promising after the glitzy do-nothing shite of Moblin.

Substance trumps style. Sometimes less is more, especially when it’s not crashing incessantly and getting in the way of the user. I’ll take stability over fancy every damn time.

Open Source Conspiracy Nuts: _OSI, Your BIOS, and You

July 28, 2008

I’m not a big fan of conspiracy theories. They exist to give weak-minded people the extravagant explanations for random events they seem to need; belief in widespread conspiracy is a coping mechanism for the mentally unstable.

Bogeymen, secret societies, remote control aircraft, grassy knolls, UFO secrets, and all the rest.

Now add Foxconn and Microsoft. At least for certain Ubuntu fanboys.

Turns out someone ran into some serious ACPI issues with a new Foxconn mobo. A bit of BIOS hacking revealed something a bit odd: Linux support appears to be broken. Rather than learn more or even wait for answers, the user decided to run to the Ubuntu forums and present this as the latest MS attempt to kill Linux. It gets picked up by semi-coherent twits at Slashdot, snowballs, and before you know it there are all kinds of allegations and insinuations being made.

Uh, what’s the definition of FUD again? Nothing like a conspiracy theory to demonstrate the power of fear, uncertainty, and doubt. Especially among the uncritical thinkers who use Linux as some anti-Microsoft fashion(less) statement.

Matthew Garrett delved deeper into the issues, the BIOS, and Linux ACPI.

mjg59: Further Foxconn fun:

Take home messages? There’s no evidence whatsoever that the BIOS is deliberately targeting Linux. There’s also no obvious spec violations, but some further investigation would be required to determine for sure whether the runtime errors are due to a Linux bug or a firmware bug. Ryan’s modifications should result in precisely no reasonable functional change to the firmware (if it’s ever hitting the mutex timeout, something has already gone horribly wrong), and if they do then it’s because Linux isn’t working as it’s intended to. I can’t find any way in which the code Foxconn are shipping is worse than any other typical vendor. This entire controversy is entirely unjustified.

That’s what happens when you shoot first and ask questions later. Anyone who’s ever compiled a kernel and taken the time to read the documentation knows of all the hardware-specific kludges (or “bugfixes”) contained therein. It wouldn’t be the first time a problem related directly to a bug in the kernel source or in the way it was compiled. It’s not the manufacturer’s fault that Linux kernel development is often over-ambitious and frequently imperfect. Ditto for the problem of using a default one-size-fits-all (when it doesn’t) kernel. Default kernels are usually adequate for most hardware, but not for all. Is this something related to Ubuntu’s config?

I have an old board that will not even boot with SMP kernels and, being a fan of older hardware, I also have boards with other SMP issues. That’s no cause for me to attack the board makers; I just compile a non-SMP kernel for them. BFD. That’s why you have the source in the first place: so you can build it as you need it to run and as you see fit, not so you can whine about MS and hardware vendors.
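The fix takes a few minutes in menuconfig, not a forum crusade. Roughly (paths and targets assume a typical 2.6 tree; adjust for your setup):

```shell
# Build a uniprocessor kernel for the cranky board: deselect
# "Symmetric multi-processing support" (CONFIG_SMP) under
# "Processor type and features", then rebuild and install.
cd /usr/src/linux
make menuconfig
make bzImage modules
make modules_install install
```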

Now how the hell do these anti-MS zealots and conspiracy-peddling crackpots put the toothpaste back in the tube?

“Free Software Community” = Freeloaders

July 15, 2008

I saw a headline and snippet in my news feeds this morning that made me wonder if the article was worth reading or just more inane BS confusing what “free” means with respect to the GPL. I should’ve known that it would be belly-aching about price.

Why all the fuss over whether you can sell something that is free? How fair is it if a company like Best Buy starts distributing open source software and is actually making a profit from it? According to the licensing, it is perfectly fair! Maybe not 100% ethical, but fair! Personally, I’d like to see them donate something of their proceeds back to the open source projects they affect, but they aren’t obligated.

The GPL is not about free (gratis) software. It’s about freedom.

Contrary to the author’s claim earlier in his article that associating a price with “free software” is like nailing jelly to a tree, there’s quite a bit more involved here. Best Buy isn’t merely “selling” copies of Ubuntu for $20 a pop and pocketing all but the cost of the media and packaging. Included in the package are documentation and a sixty-day service plan with Canonical.

That’s worthless? That’s hard to quantify? That’s like nailing jelly? I don’t think so — not when you run a company with a payroll. Canonical isn’t staffed by volunteers. Neither is Redhat, whom the author also mentioned in the article.

I think the “gratis” nature of open source software has led to a subculture of entitlement. How else do you explain the comment that charging for distribution and service is “not 100% ethical”? That remark followed allusions to the GPL and LGPL, both of which are neutral on the point of charging for either software or service.

The Free Software Foundation was founded by Richard Stallman, who wrote the GPL. The FSF site is very clear about the “price” of “free” software. They have at least one page specifically focusing on the issue of selling software. Are they opposed? Nope. They want people to charge as much as they can for “free” software.

But that’s beside the point in this case. Entirely. Because it’s not the software that causes there to be a $20 charge. The service — paying someone to answer questions and help with setting up a new operating system — has a value. Is it unethical in any degree to pay people for their time to get out of bed and come to work? I think it’s just the opposite.

Such is the state of “free” software today. The “free software community” has been infiltrated by freeloaders. They don’t care about freedom, just how much they have to pay. As soon as you talk about exchanging money for software and/or service, you see their true colors.

By the way and for what it’s worth, last time I looked it seemed like Canonical does “donate something of their proceeds back to the open source projects.” Just like many other companies — Redhat, IBM, Cisco, Oracle, etc. — do.

How much do the freeloaders give back to the “community”?

November 6, 2007

Interview with gOS Founder: “Linux For Human Beings (Who Shop At WAL*MART)”:

At first look, the systems specifications seem pretty meager, until you have a gander at the list of applications. Instead of utilizing applications on the computer locally, the gPC leverages online applications that are delivered via web browser, such as Google Docs and Spreadsheets. This is an absolutely brilliant idea. All you need is a fast internet connection (and a monitor) to use the computer.

It’s been a while since I’ve blogged, but this is too smart to pass up. You get a Mini-ITX computer running a derivative of Ubuntu that uses the Enlightenment window manager and is web-based (mostly Google). It also comes with OpenOffice, and the whole thing can use Ubuntu’s repositories. I’m going to be interested in finding out how well it sells at WalMart. I have a hunch gOS will outlast this Everex computer, even though it should be ideal for people looking for an entry-level box for their web lives, one they don’t intend to expand.

Bloatware Update

September 13, 2007

Here’s another sign that there’s really no difference between the mindsets of Microsoft and those churning out Linux distros. The latest abortion is Ubuntu’s decision to enable Compiz by default. Why do I have a problem with this? Because it means users will have to weigh their options between OS upgrades and hardware upgrades.

The hypocrites at FSF joined forces with a few leftwing organizations recently to attack Microsoft for doing this very thing. With so many Linux distros now using Beryl and Compiz by default, maybe it’s time they focus their attention on what’s happening under their own noses.

Ubuntu Technical Board votes on Compiz for Ubuntu 7.10:

The Ubuntu Technical Board voted yesterday to ship Ubuntu 7.10 (“Gutsy”) with Compiz enabled by default. Although Compiz has been featured in Ubuntu 7.10 Tribe prereleases, the board has had difficulty determining whether or not it is reliable and functionally complete enough to warrant inclusion in the final release.

Here are some plugs for users of older hardware who want continual operating system updates without having to accommodate them with new hardware or hardware upgrades:

  • Damn Small Linux is targeted at users of older hardware and minimal systems, as well as users who want a variety of options in how they run their systems. DSL will run on a 486 with 16 MB of RAM. It can also be run from USB, directly from the CD, or installed in a couple different ways on hard drive.
  • Slackware and Debian both allow minimal installs. This remains a good option for users with vintage hardware who want up-to-date options. Note that Slackware has moved to Linux 2.6 by default; this may or may not be in the best interest of those running older, leaner systems (2.6 also deprecated support for certain hardware which is still supported in 2.4). It also requires a bit of knowledge about the kind of system you want to build. One of the problems encountered with such systems (and this also applies to DSL) is when users have unrealistic goals of adding the latest versions of resource-demanding software like Gnome and KDE. Match your apps to your hardware and you’ll do fine.
  • FreeBSD, OpenBSD, and NetBSD all have very low hardware requirements. Each has its own method for installing binaries (packages) or source (ports), but NetBSD’s pkgsrc is portable across all three. I use FreeBSD and can report, anecdotally, that it seems to schedule processes much more effectively than Linux (2.4). Like the two previous suggestions, the idea of using one of these Unix-like operating systems (much more Unix-like than Linux) is to add applications suitable for the hardware you have.

One more note about PCBSD: it’s in the same boat as Ubuntu and the bloated Linux distros. PCBSD includes Beryl by default. It’s not suitable for older hardware.

Walt Mossberg Reviews Dell Ubuntu Offering

September 13, 2007

Walt Mossberg notes Ubuntu isn’t for people who want a computer that doesn’t require tech-oriented tweaking and writes, “Even in the relatively slick Ubuntu variation, Linux is still too rough around the edges for the vast majority of computer users.”

Linux’s Free System Is Now Easier to Use, But Not for Everyone:

Dell and Canonical tell me there are complex workarounds for some of the problems I encountered, and that built-in improvements are planned for others. But for now, I still advise mainstream, nontechnical users to avoid Linux.