Archive for the ‘open source’ Category

Linux Audio versus Everything Else

October 4, 2008

I had a chance yesterday to read Linux Hater’s post about problems with Linux audio drivers and APIs. The post is about PulseAudio’s inclusion in Fedora, which led to broken audio for many Fedora users. Like lemmings, other distros decided that if it’s good enough for Fedora, it’s good enough for them. Tumbling dominoes…

The issue reminded me of problems I’ve had at various times with applications in Linux. PulseAudio is hardly alone in its Linux audio issues. One of the things that’s caused me more trouble than it’s worth has been getting mplayer to play nice with ALSA. The mplayer front page says ALSA is supported, so I didn’t know what the problem was.

A little background. I hadn’t bothered to use mplayer in Linux at all; I stuck with default apps — typically XMMS, xine, etc. — that shipped with the distros I used. I first started using mplayer in FreeBSD a couple years ago and grew to appreciate it. So much so that I decided to use it when I switched back to using DSL last year. I also installed it in a few other distros I tried on both laptop (before ditching multimedia altogether on it) and desktop.

It had always worked fine in FreeBSD — and also in OpenBSD (which has become my operating system of choice) — but it totally sucked in Linux, especially synching audio and video. I thought it was maybe due to the binary packaging of the distros on which I’d tried it. I decided to compile it myself and got the same wretched results. Then I wondered if my hardware was the problem, but I reinstalled my BSD hard drive and quickly knew that everything was fine in BSD. So I looked for help on the mplayer site to see what the problem was with using it in Linux.

One of the first things I looked at was the documentation’s section on Linux sound cards.

Ahh, well, that first sentence certainly clears it all up: “Linux sound card drivers have compatibility problems.” No shit? I’d run into problems before with the OSS driver in DSL playing single-channel audio at double speed. I’d also noticed other anomalies — at least I thought they were anomalous — using ALSA in other distros. Not only that, the problems weren’t always isolated to mplayer. One of the reasons I wanted to use mplayer in DSL was because XMMS was butchering things.

So I had plenty of reason to take the mplayer developers at their word. Things that “just worked” in Solaris, Windows, and the BSDs could be a total abortion in Linux. Different drivers. Solaris and BSD have Sun audio, Linux doesn’t.

The next section notes that ALSA 0.5 has buggy OSS emulation and causes mplayer to crash. That’s not fun.

Yes, both of those documentation sections provide workarounds. They also explain why workarounds are needed: immature, buggy, and/or shoddily-written drivers and emulation layers. In short, the problem is on the kernel side, not the application side. I know the application “just works” in BSD; I also know it doesn’t in Linux, at least not as it’s supposed to. (Why should I have to maintain two sets of scripts, one of which exists only to work around buggy drivers, to use the same app?)
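The two sets of scripts amounted to something like the hypothetical wrapper below. The driver names are real mplayer `-ao` targets; the `-autosync` and `-mc` values are illustrative guesses at the kind of A/V-sync kludge the docs suggest, not recommended settings.

```shell
#!/bin/sh
# Sketch of a per-OS mplayer wrapper: pick the audio output driver by OS.

choose_ao() {
    case "$1" in
        Linux)                  echo alsa ;;  # ALSA on Linux
        FreeBSD|OpenBSD|NetBSD) echo oss  ;;  # OSS-style audio on the BSDs
        SunOS)                  echo sun  ;;  # Sun audio on Solaris
        *)                      echo oss  ;;  # safe-ish fallback
    esac
}

build_cmd() {
    ao=$(choose_ao "$1")
    if [ "$1" = Linux ]; then
        # Only the Linux side ever needed the sync workarounds.
        echo "mplayer -ao $ao -autosync 30 -mc 2.0"
    else
        echo "mplayer -ao $ao"
    fi
}

# Print the command this box would use (swap echo for exec to run it).
build_cmd "$(uname -s)"
```

The point isn’t the particular flag values; it’s that the BSD branch needs nothing extra while the Linux branch carries baggage.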

I’d presumed — quite wrongly — that Linux would have better audio support than the BSDs on the dual grounds that Linux development tends to be more cutting-edge than the BSDs and Linux has been promoted harder as a desktop system. Lesson learned.

the smarter solution

This isn’t isolated to mplayer and PulseAudio, offerings in which users may expect a little imperfection due to the ever-changing nature (or is it chaos?) of open source development. It also affects how things like Flash, which isn’t open source, operate within Linux. Many users fault Adobe for the unpredictable performance (i.e., crashing) of Linux Flash. The problem, which Adobe has pointed out, lies in the diversity of audio APIs (ALSA versus OSS, with ALSA the winner as far as Adobe is concerned) and UI toolkits in the Linux universe — Qt versus GTK+, etc. Things are much easier in a proprietary environment like Windows because the libraries and drivers are unified and homogeneous. Microsoft doesn’t have myriad distributions with unique configurations. What works right for one Linux distro often doesn’t work across the board — something which has been true for a lot of things beyond audio because there’s no standardization.

There are many things Linux does exceptionally well. Audio processing really isn’t one of them. This is one area in which the bazaar isn’t going to supplant the cathedral anytime soon. Where Windows users take audio performance for granted, Linux users take audio bugs for granted. It’s yet another reason I doubt Linux will make a significant dent in Microsoft’s desktop market share.

UMPCs Will Not Lead to More Linux Desktop Share

August 1, 2008

Here are a few links of interest in relation to a discussion at LXer about Linux desktop adoption, UMPCs, and the mathematics of market share. (Edit: It appears I’ve been banned from posting at LXer. Fine. Whatever.)

Our first article of interest asks whether UMPCs like the Asus Eee are Windows killers. The conclusion: no. Far from being 1:1 replacements for Windows units, these machines raise a couple of interesting points. First, they tend to be “second computers,” not primary units, so they’re being used alongside Windows computers. Second, they don’t function equivalently to Windows computers — the companies even warn against adding repositories or changing distros.

NOT ALL FUN AND GAMES
“The little Acers can lead you into hell on earth. I’m still struggling with sound, having had to switch distributions to get wireless to work. To try to cure a trackpad sensitivity issue, I installed Synaptics [trackpad] drivers under OpenSuSE. The machine would start but, because the driver changed an X configuration file, it would not load the graphical desktop. I managed to restore this without re-installing, but it was difficult and very painful….”

Contemplate explaining Synaptic repositories to your parents, or your young children, and the Achilles’ heel of the new devices becomes evident: they work fine as advertised, but any changes are at your own risk. If you recommend one of these units to family or friend, count on spending lots of long nights helping them get the devices set up the right way — and cleaning up the mistakes they’ve made….

With the momentum it has already gathered, could the Eee beat off its rivals to become the Holy Grail of Linux computing — that killer product that brings Linux into the mainstream?

Don’t bet on it, says Hugo Ortega, principal of Tegatech, a distributor that handles the Eee alongside competing devices such as HP’s 2133 Mini-Note PC and ultra-mobile PCs (UMPCs) that run Windows XP and Vista and range well past the $3000 mark.

“The HP 2133s are outselling the Eee PC 20 to 1,” Ortega says, “and Linux only accounts for probably 20% of Eee PC sales and less than 5% of overall UMPC sales. The fact that there’s a $500 notebook out there is a big plus, but we find most [buyers] are more than happy to use a license in their office to upgrade them to [Windows] XP.”

That the Eee is even selling Linux versions at all is a big coup: previous Linux-based UMPCs, from Chinese manufacturer Beijing Peace East Technology, were offered by Tegatech but ended up being withdrawn after “we had not one phone call on them,” he adds.

Herein lies the vast difference between perception and reality, which seems to be rapidly diluting the value proposition of Linux-based mini notebooks. ASUS and Acer may have overcome some users’ perceptions that Linux is too complicated or esoteric for mainstream use, but mainstream demand has caught up with the units as customers shy away from Linux once again.

Indeed, many manufacturers entering this class of notebook are doing so with Windows-only machines that seem poised to undo the Linux mindshare gains that the Eee made over the past year….

Asustek recently revised its distribution strategy, steering Linux-based Eee PCs towards resellers capable of providing more personalised support, while pushing Windows-based Eees into mass-market retailers.

Acer, which continues its commitment to Linux, is likely to take a similar path. “It’s a give and take between simplicity of usage for the masses versus full customisation,” says Lee. “The Linux version is really only to use exactly what is provided, and someone in the know can easily remove what’s been installed. But consumers are accustomed to the Windows environment, and the Windows version will be a stronger player eventually.”

(http://apcmag.com/linux_not_essential_to_eee_pc_success_asus.htm)

Don’t underestimate the role of familiarity, comfort levels, and learning curves. The kind of people already attracted to Linux or BSD aren’t typical of “average” computer users. Average users just want stuff to work; they don’t want to edit config files, and most of them don’t want to see a shell. They didn’t like it in DOS, so they bought Macs instead or waited for Windows. They want to download a zip or exe file, click, and have it install itself — “Dependencies? WTF are dependencies, I just want to use the freaking application.” Windows has a simpler, unified set of libraries; Linux isn’t standardized like that. That really does matter in adoption, or in why the masses won’t adopt Linux on their desktops.

Our next stop is an article noting that Eee shipments came in about 15% below forecast. Acer is shipping more units with XP. Acer’s president thinks UMPCs may reach 10-15% of laptop sales. (More on this math with the next article.)

SALES WANING ALREADY?
Asus has revealed that it shipped 1.7 million of the devices in the first six months of the year — 300,000 fewer than it had forecast, according to a report in the Digitimes….

Acer says it will ship 15,000 of the devices every day following the launch of a Windows XP version in July….

Acer president Scott Lin claims that netbooks will eventually comprise 10-15% of overall laptop sales, echoing earlier reports of a PC shipment boom because of the devices.
(http://www.pcpro.co.uk/news/211419/eee-pc-sales-fall-short.html)

Not everyone is jumping on the UMPC bandwagon. Fujitsu sees the margins as untenable. This is a niche product with a small margin; it isn’t something they can make much money on even in volume.

More importantly, note that the overall laptop market is 271 million units and these devices currently make up a tiny fraction of that number. The number of Linux units is going to tumble as XP units become available. If this market does trend the way Lin suggested, 10-15% of today’s laptop market works out to roughly 30 million units. Of those, Linux will account for an ever-smaller ratio. And since the overall laptop market will keep growing rather than stay flat, those Linux units will end up about where Linux is in other desktop sales now. A drop in the bucket.

HOT MARKET? NOT FOR THE OEMS
Some of the big computer companies put a positive spin on the low-cost machines, saying they welcome new categories. But they would just as soon this niche did not take off, given the relatively low profit margins.

“When I talk to PC vendors, the No. 1 question I get is, ‘How do I compete with these netbooks when what we really want to do is sell PCs that cost a lot more money?'” said J. P. Gownder, an analyst with Forrester Research.

Even as some PC vendors are jumping into the fray, others say they are resisting. Fujitsu, one of the world’s top 10 personal computer makers, said it believes the low-cost netbook trend is a dangerous one for the bottom line.

“We’re sitting on the sidelines not because we’re lazy. We’re sitting on the sidelines because even if this category takes off, and we get our piece of the pie, it doesn’t add up,” said Paul Moore, senior director of mobile product management for Fujitsu. “It’s a product that essentially has no margin.” Stan Glasgow, chief executive of Sony Electronics, said, “We are not looking at competing with Asus.” But he said the company is investigating what consumers want in a second PC….

With an emphasis not on onboard applications (like word processing) but on Internet-based ones like Google Docs, the Linux-based Eee PC sold out its 350,000 global inventory. It has been in short supply ever since, said Jackie Hsu, president of the American division of Asus. Everex has sold around 20,000 of its CloudBook, which sells for about $350.

The sales are a veritable drop in the bucket compared with the 271 million desktop and laptop PCs shipped globally last year. But there is an intensifying debate about how big the category can become, and what segment of the market finds these computers appealing.

IDC, a market research firm, is predicting that the category could grow from fewer than 500,000 in 2007 to 9 million in 2012 as the market for second computers expands in developed economies….

William Calder, an Intel spokesman, said that the cost of the Atom for PC makers is around $44, compared with $100 for a state-of-the-art chip. He said that Intel executives think the market for low-cost PCs is too big to pass up, though it does raise a potential threat to more powerful and more profitable computing lines.

Microsoft has been a reluctant participant, too. Even though it is no longer selling its Windows XP operating system software, it made an exception for makers of these low-cost laptops and desktops. Microsoft said it was responding to a groundswell of consumer interest in the low-cost machines, but some makers of those machines say Microsoft did so reluctantly because it did not want to lose market share to Linux.

Tim Bajarin, an industry analyst with Creative Strategies, a technology consulting firm, said that while the big computer companies have been caught off guard by the market’s potential, they are finding little choice but to dive in.
(http://www.nwanews.com/adg/Business/232688/)

Finally, here’s the baby that may have started it all. It’s no longer Linux-only. The XO does XP. And XP appears to be the better solution once you adjust for the storage requirements and the price. Well, it never really was a $100 laptop, was it?

MEANWHILE, THE OLPC-XP IS GAINING MOMENTUM…
Utzschneider blogged in May that the Windows port to the XO “is a snappy release that doesn’t cut features or functionality in order to work in the constrained memory and storage environment of the XO.” The build is said to support all the laptop’s features, including networking, speakers, microphone, and webcam. It also allows the display to pivot into its “e-book” configuration, and change into a power-saving, sunlight-readable monochrome mode (shown above), according to Microsoft….

Unlimited Potential’s Bohdan Raciborski said the XO can boot Windows XP in about 50 seconds, four times faster than its previously standard Linux environment. By tapping into the device’s power-saving capabilities, it can also offer up to 20 hours of battery life, he added.
(http://www.windowsfordevices.com/news/NS3549485633.html)

THE ABOVE ARTICLE LINKS TO A COUNTERPOINT WITH INTERESTING ADMISSIONS
Microsoft starts with its “good news” that XP boots faster (but not four times faster) than Sugar; (1:05 into the video). Good going, folks. First off, it turns out that XP doesn’t boot that much faster, as the scene only shows a boot to user login, not to the full user interface….

Sugar and other Linux versions on the XO do take longer to boot; but once the suspend and hibernation features are completely working (and the current Update.1 Release Candidate has most of it working) — you’ll never need to turn it off, rarely reboot, and it recovers almost instantaneously from sleep, so this to me is a non-issue.
(http://www.olpcnews.com/sales_talk/microsoft/windows_xo_video_dissection_.html)

Well, it won’t be a non-issue until it actually works. XP is new to the XO game and already has such functions working, on top of its faster boot time.

All of this shows a few things.

One, Linux is not seeing accelerated growth or adoption from UMPC sales. The number of units sold with Linux — when Linux was the only option! — has already peaked. Windows is new on the Eee scene and is already outselling the cheaper Linux versions. If you can get excited about 0.4% fluctuations in Linux desktop adoption and see them as turning the tables on Microsoft, you really need help. Especially considering the number of sites and videos showing one of the first Eee hacks: installing XP. More people were installing XP on the Eee than alternative distros. Smell the coffee yet?

Two, these sales have been of secondary computers and so they do nothing to reduce the aggregate number of Windows desktops or Windows users. They have done nothing to reduce the number of Windows installs anywhere. The typical Eee owner has another computer that runs XP or Vista (or both). And if anything, these UMPCs have opened a new market for XP (which MS was ready to deprecate!). That means more Windows computers rather than fewer. Net. Gross. However you cut it. And that further dilutes the share of Linux on desktops.

Three, Linux versions haven’t been warmly received. This is a foreign OS to most people, and they’re on their own when they venture too far out of their abilities to manage it. They know how to install whatever application they want in Windows. They don’t have to fiddle around with config files to make hardware function properly. Etc. This matters. Especially when users get frustrated and choose to go back to the OS they know, no matter how they feel about it.

Finally, XP is working better on the XO than Sugar does. XP is working better than Linux on the XO, period. The anti-MS people working on OLPC can bitch as loud as they want and promise the moon, but they’re still trying to deliver the kind of performance XP has achieved in a shorter time. Yes, XP requires other concessions, like more storage. With storage prices ever tumbling, that’s trivial. The point is, XP works and has met the performance goals the Sugar team has failed to meet thus far.

There were two devices that were intended to showcase Linux on the desktop — to change lives, to change the world. The OLPC/XO hasn’t yet lived up to that promise. Now it will have XP available, and it runs better than Linux does on XO. And while cheap UMPCs have sold well with Linux, they’re selling even better with Windows.

That doesn’t bode so well for Linux. It bodes well for Microsoft.

If you don’t know where to pick and choose your fights, you’re going to lose a lot more often than you need to. There are many places where Linux excels. On the desktop, it hasn’t excelled and probably won’t. Why keep fighting a losing battle there rather than in areas where Linux already has been successful — like servers, phones, PDAs, and media devices like DVRs, where users don’t need technical savvy to make it work right? Those working on putting Linux on devices like the XO are still fiddling around with getting it to work properly; don’t expect people who can’t figure out an “intuitive” system like OS X or Windows to do much better.

I’m not against Linux devices or desktops. But Linux isn’t a panacea, and it’s not for everyone. Moreover, much of the open source software desktop users run in Linux can also be run in Windows. Why is it not enough to encourage the use of those programs instead of throwing out the baby with the bathwater? Why tilt at windmills (or at Windows) when there are plenty of other inroads to be made?

Open Source Conspiracy Nuts: _OSI, Your BIOS, and You

July 28, 2008

I’m not a big fan of conspiracy theories. They exist to give weak-minded people the extravagant, irrational explanations they seem to need — belief in widespread conspiracy is a coping mechanism for the mentally unstable.

Bogeymen, secret societies, remote control aircraft, grassy knolls, UFO secrets, and all the rest.

Now add Foxconn and Microsoft. At least for certain Ubuntu fanboys.

Turns out someone ran into some serious ACPI issues with a new Foxconn mobo. A bit of BIOS hacking revealed something a bit odd — Linux support appeared to be broken. Rather than learn more or even wait for answers, the user decided to run to the Ubuntu forums and present this as the latest MS attempt to kill Linux. It gets picked up by semi-coherent twits at Slashdot, snowballs, and before you know it there are all kinds of allegations and insinuations being made.

Uh, what’s the definition of FUD again? Nothing like a conspiracy theory to demonstrate the power of fear, uncertainty, and doubt. Especially among the uncritical thinkers who use Linux as some anti-Microsoft fashion(less) statement.

Matthew Garrett delved deeper into the issues, the BIOS, and Linux ACPI.

mjg59: Further Foxconn fun:

Take home messages? There’s no evidence whatsoever that the BIOS is deliberately targeting Linux. There’s also no obvious spec violations, but some further investigation would be required to determine for sure whether the runtime errors are due to a Linux bug or a firmware bug. Ryan’s modifications should result in precisely no reasonable functional change to the firmware (if it’s ever hitting the mutex timeout, something has already gone horribly wrong), and if they do then it’s because Linux isn’t working as it’s intended to. I can’t find any way in which the code Foxconn are shipping is worse than any other typical vendor. This entire controversy is entirely unjustified.

That’s what happens when you shoot first and ask questions later. Anyone who’s ever compiled a kernel and taken the time to read the documentation knows of all the hardware-specific kludges (or “bugfixes”) contained therein. It wouldn’t be the first time a problem traced directly to a bug in the kernel source or in the way it was compiled. It’s not the manufacturer’s fault when Linux kernel development is often over-ambitious and frequently imperfect. Ditto for the problem of using a default one-size-fits-all (when they don’t) kernel. Usually default kernels are adequate for most hardware. But not for all. Is this something related to Ubuntu’s config?

I have an old board that will not even boot with SMP kernels and, being a fan of older hardware, I also have boards that have other SMP issues. That’s no cause for me to attack the board makers, just compile a non-SMP kernel for them. BFD. That’s why you have the source in the first place — so you can use it as you need it to run and as you see fit. Not so you can whine about MS and hardware vendors.
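For what it’s worth, building that non-SMP kernel is only a few commands against the source tree. This is a sketch, not a recipe: it assumes an x86 tree, and `scripts/config` ships only with newer kernel sources (editing .config by hand works just as well).

```shell
# Sketch: build a uniprocessor kernel for a board that chokes on SMP.
# Run from the top of a kernel source tree; x86 targets assumed.
make defconfig                  # start from the arch default config
scripts/config --disable SMP    # clear CONFIG_SMP (or edit .config by hand)
yes "" | make oldconfig         # let kconfig resolve dependent options
make -j2 bzImage modules        # build kernel and modules as usual
sudo make modules_install install
```

Ten minutes of compiling beats weeks of blaming the board maker.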

Now how the hell do these anti-MS zealots and conspiracy-peddling crackpots put the toothpaste back in the tube?

“Free Software Community” = Freeloaders

July 15, 2008

I saw a headline and snippet in my news feeds this morning that made me wonder if the article was worth reading or just more inane BS confusing what “free” means with respect to the GPL. I should’ve known that it would be belly-aching about price.

Why all the fuss over whether you can sell something that is free? How fair is it if a company like Best Buy starts distributing open source software and is actually making a profit from it? According to the licensing, it is perfectly fair! Maybe not 100% ethical, but fair! Personally, I’d like to see them donate something of their proceeds back to the open source projects they affect, but they aren’t obligated.

The GPL is not about free (gratis) software. It’s about freedom.

Contrary to the author’s claim earlier in his article that associating a price with “free software” is like nailing jelly to a tree, there’s quite a bit more involved here. Best Buy isn’t merely “selling” copies of Ubuntu for $20 a pop and pocketing all but the cost of the media and packaging. Included in the package are documentation and a sixty-day service plan with Canonical.

That’s worthless? That’s hard to quantify? That’s like nailing jelly? I don’t think so — not when you run a company with a payroll. Canonical isn’t staffed by volunteers. Neither is Redhat, whom the author also mentioned in the article.

I think the “gratis” nature of open source software has led to a subculture of entitlement. How else do you explain the comment that charging for distribution and service is “not 100% ethical”? That remark followed allusions to the GPL and LGPL, both of which are neutral on the point of charging for either software or service.

The Free Software Foundation was founded by Richard Stallman, who wrote the GPL. The FSF site is very clear about the “price” of “free” software. They have at least one page specifically focusing on the issue of selling software. Are they opposed? Nope. They want people to charge as much as they can for “free” software.

But that’s beside the point in this case. Entirely. Because it’s not the software that causes there to be a $20 charge. The service — paying someone to answer questions and help with setting up a new operating system — has a value. Is it unethical in any degree to pay people for their time to get out of bed and come to work? I think it’s just the opposite.

Such is the state of “free” software today. The “free software community” has been infiltrated by freeloaders. They don’t care about freedom, just how much they have to pay. As soon as you talk about exchanging money for software and/or service, you see their true colors.

By the way and for what it’s worth, last time I looked it seemed like Canonical does “donate something of their proceeds back to the open source projects.” Just like many other companies — Redhat, IBM, Cisco, Oracle, etc. — do.

How much do the freeloaders give back to the “community”?

Thoughts on Freedom and Free Software

June 30, 2008

As I’ve written in various places, many users of open source are clueless when it comes to what various licenses are all about. Today, one hapless and muddleheaded chap decided to try and stir some shit and gave us prima facie evidence that users are confused over what “free software” — as defined by the Free Software Foundation — is really about.

This issue arose when the aforementioned person complained that I hadn’t yet submitted an extension even though I’d previously written that I was withholding it pending release of what’s now called dslcore. Because of his snotty, demanding attitude I decided that from now on I won’t submit anything unless users who want particular extensions are willing to support one of two projects used by DSL: either OpenSSH directly or vim (which is “charityware” with contributions directed to help children in Uganda). I chose these as my “bounty” targets because they’re worthwhile causes and supporting both of them further supports DSL and its community. I thought this was fair since the submissions cost me time away from things I value and are probably of some value to others.

Nope. Too many users see “free” software and demand it with respect to cost. (And rarely to freedom.)

The snotty, demanding person took exception to this and, as you can see above, suggested it was at odds with GPL. There are a couple problems with his analysis in the context of the particular libraries the thread was about: not one of them is under GPL. OpenSSH is BSD licensed, zlib has its own “permissive” (in the view of FSF) license, and OpenSSL has a relaxed license as well. All three allow their code to be used in proprietary systems without accompanying source code. Sell it, change it, do what you will, just give credit where it’s due.

The other problem is an error that is far too common among Linux users: the GPL is NOT against the sale of software. In fact, the FSF openly encourages people to sell free software so long as it’s in compliance with the freedoms enumerated by the GPL. You can charge whatever you want for it, but you must not put an excessive or prohibitive cost on the source code (which must accompany GPL binaries or at least be offered alongside them).

That’s because “free” in the GPL has nothing whatsoever to do with cost. It has to do with freedom — whether the user has unfettered access to the source code, can use it as he or she sees fit, can change it as he or she needs, can redistribute it.

Unfortunately, this error persists and users don’t think in terms of freedom. It’s ironic the person quoted above raised the name and circumstance he did: the developer in question publicly offered his code under GPL, then attached strings the license doesn’t allow and complained of some violation (nope) when users actually exercised their rights under the GPL. The offenses the developer initially alleged were that the bindings had been separated against his wishes and then redistributed, but those are freedoms central to the GPL. As it turned out, the only changes to the code came after the false accusation of GPL violation: DSL added copyright information where he’d never bothered to put it himself, because he assumed he could control how users compiled the various pieces of the runtime he assembled.

When it came to that developer’s demands, many DSL users were open to compromise and even insisted that I be just like they are in that regard. No debate about what it means to compromise away your freedom, no discussion desired at all. I was called obstinate, told to go start my own distro, and to leave the forums alone and post my thoughts here on my blog instead. They didn’t care about the GPL. They didn’t care about their freedoms. They only cared about the cost.

What’s the cost in the long run, though, when you lose your rights to use code because you don’t stand up to a petty tyrant of a developer who offers something under the GPL and then pulls the rug out as soon as you use your freedoms that license allows?

I’m hardly one to defend the GPL. I have a list of entries categorized as “FSF sucks” reflecting some of my grievances against GPL. But the prevailing confusion over it — what it actually means — doesn’t serve the wider community who use and rely on software licensed under it.

Such confusion causes whiners like the person quoted above to whine even louder because they don’t understand the GPL isn’t about price or money at all. Not only do they object, as he did, to even a token “bounty”; they’re also willing to overlook the conditions beyond the GPL that a developer tried to slap on DSL and all its users. They’re more concerned that something is offered “without charge” than “with strings.” They’re offended when someone offers to do something for a few dollars that will benefit either a project they already benefit from or a program that helps children in a nation ravaged by HIV/AIDS; and they’ll roll over and give away their rights — not to mention their dignity, since false accusations were leveled against DSL without any apology — as long as a developer will give them a freebie.

I think the free software movement has its work cut out when it comes to educating the masses. The masses aren’t software ideologues, they just want free (as in beer, as in price) software. And they’ll trade away their freedom to get it.

GPL versus GPL-with-Strings

June 20, 2008

A resolution appears to have been made between DSL and John Murga in a matter I addressed in my previous entry. Sometimes, though, the best resolution is to simply walk away from a bad situation.

At issue was an allegation that DSL had stripped Murga’s lua/FLTK bindings of copyright information. This was shown to be false.

Murga then claimed his bindings were a command line invocation. This, again, was demonstrated to be false.

Throughout the episode over the last few days, Murga was repeatedly asked (including by me) to state his grievance as it relates to how DSL used his bindings before the refactoring of the bindings to the time afterward and present. He did not answer but chose instead to lash out at others and accuse them of “butchering” his project, “molesting” his project, as well as various and sundry ad hominem attacks.

The only thing that changed was that murgalua was recompiled so its full runtime wouldn’t load at the invocation of any of its parts. The runtime had become so bloated that it was impractical to use as-is for the purposes of DSL.

This is what led Murga to claim it had been butchered. In the post in which he accused DSL of GPL violations, his sole link to support his sentiments on untying the FLTK-lua bindings was a post on his own forums saying he would not condone or approve of anyone doing that, even though he chose the GPL for his bindings (most of the parts of what constitutes “murgalua” are under much less restrictive licenses).

He admitted throughout his accusation that his feelings were hurt, that he would need time to be more reasonable, etc. So it was at least as much about his feelings as it was about the licensing.

In the course of resolving the matter, Murga asked for things DSL isn’t in a position to give him — such as a copyright notice when things he didn’t write, like FLTK and lua, are invoked independently of his bindings. To the credit of the DSL developers, this was not agreed to.

But something else caught my eye among his replies. He stated that he had given permission for DSL to use his GPL code.

Permission? Permission beyond the scope of the terms of the GPL? Or just a personal approval?

Between his initial complaint (and hurt feelings) over the bindings being separated, his odd (and unethical!) demand for ego strokes every time pieces of his runtime (pieces he didn’t write) were invoked, and his statement that DSL either had or required his assent to use code he released under the GPL, I was leery of including his code in the base.

The first issue, separating the bindings, is allowed under the GPL. The GPL gives users the right to see and change the code and to include it in whole or in part in other things, so long as the rest of the GPL is obeyed (and it was in this case). The GPL is a solution to restricted use of code — which is exactly the restriction Murga wanted (and wants; he’s suggested that he wants to amend the license) to impose.

The second issue, the demands for credit, also runs counter to the GPL. DSL didn’t remove any attributions to Murga. In the process of resolving the issue, DSL even offered to go above and beyond what Murga had previously stated was required (his terms and copyright information are all very muddled — another reason to consider avoiding his code in the first place). DSL couldn’t and wouldn’t comply with giving him acknowledgments when lua and FLTK are invoked independently of his bindings. Those things belong to other people, not to John Murga. Credit should be given only to whom it’s due, not to whoever demands it in such reckless fashion.

The third issue, permission, is also antithetical to the GPL. It’s a PUBLIC license, not a PRIVATE one. It allows user A to give the code to user B without developer Y meddling in the matter. As long as users comply with all the GPL’s terms — and DSL did — the developer is supposed to yield to the user, not demand the code be run in a certain way, or be configured or compiled in a certain way, etc.

As things stand now, Murga appears to be offering “GPL but with conditions” instead of the GPL. That isn’t the GPL, though, because it’s not free and it restricts what users can do with the whole or any part of code under the GPL.

Until Murga further clarifies (or gives up) his position on the above points, or changes his license to be more congruent with his dictatorial demands and novel conditions upon users, I think it’s probably best for DSL and other projects to steer clear of his code or to fork the GPL’ed bindings between lua and FLTK. Anything this tainted, offered by someone so petty and emotive, is more hassle than it’s worth — as proven by the spectacle he chose to make of handling it.

And that’s especially true when he chooses to renege on, or demand more than, the very terms under which he offered it in the first place. The GPL has specific requirements, not strings. Murgalua, unfortunately, has strings.

DSL, GPL, etc.

June 18, 2008

Recent threads at the DSL Forums have covered issues pertaining to licensing, the GPL in particular. Many people casually praise the GPL without considering what it actually says and what it means to casual users and developers alike.

The first issue arose when someone posted links to his remasters of DSL. I was annoyed that he posted the same information twice in the forums, and in places where it wasn’t really on-topic. I asked how I could get sources for the GPL software he used. I reminded him of the judgment of the FSF/SFLC that downstream and/or derivative distros (like Knoppix, Mepis, DSL, Slax, Vector, etc.) had to maintain and provide sources regardless of the availability of sources for unmodified binaries taken from upstream repositories. This led to some heated (and also some productive) discussion about the whole issue and whether it was appropriate for distros to sell media with their sources.

This gets at the heart of many misunderstandings about the GPL. It is NOT about free/no-pay transmission of software. It’s about the freedom to see and change source code. As the FSF very clearly says throughout the gnu.org site and elsewhere, you can charge a billion dollars for GPL’ed software. The only restriction is that you cannot charge an excessive amount for the sources as a way of restricting access to them.

Second, DSL has another GPL controversy today. Several months back, DSL switched from flua (lua with a set of FLTK bindings) to murgalua (which has FLTK bindings and a lot of other stuff thrown in). Unfortunately, murgalua requires the full runtime — lua and fltk and libz and sqlite and luafs and who-knows-what-else — to be loaded all at once, even for a simple non-GUI lua task.

So DSL refactored the bindings so lua can be run on its own and FLTK and all the other bindings can be used independently as-needed — something much more suitable for the needs of DSL and its users.

John Murga is the author of murgalua. He licensed his bindings under the GPL even though the bulk of the parts of his runtime — lua, etc. — are under much more permissive licenses like the LGPL, MIT-X, and BSD. Today he posted a notice on his forum that DSL has transgressed the GPL, linking to another post he made on his forum in which he said (or suggested) he won’t condone or support the re-use of his bindings apart from the runtime. He reiterated:

Either way I am unhappy with MY CODE being used in this way (if that counts for anything).

The GPL gives users the freedom to change the code to suit their own needs so long as redistribution follows the rest of the GPL’s terms. If Mr Murga has ANY objection to others using his bindings under the license he chose, he should re-license them in a manner that gives him as much control over how others use them as he wants. The more permissive licenses used by lua, sqlite, etc., certainly allow that.

Both issues relate to similar problems. First, most users and developers wrongly associate the GPL with things it doesn’t mean. It doesn’t mean zero-cost; it means sources must be made available (directly or via normal computer-readable media) when distribution occurs. Second, it doesn’t give anyone the right to determine how the code is used on anyone else’s computer. THAT IS WHAT THE FOUR FREEDOMS ARE ALL ABOUT — the right to see and change the code as well as the right to redistribute it as it was received or as it has been changed. So, to Mr Murga I say: no, your feelings REALLY DON’T matter.

I’m not a fan of the GPL. I’ve written in plenty of places, here and elsewhere, about why I object to it. Some of its demands are onerous, such as requiring downstream derivatives to maintain their own source trees for unmodified binaries, or requiring a hypothetical user who compiles an app for a friend or relative to make the sources available. I’ve found that it appeals to two groups of people: the zealot who sees software as a political (or even religious) issue, and the uninformed who make the false link between GPL and “free as in beer” with nary a thought about the actual meaning of the license. Sometimes the line is crossed and you have a hybrid — you can find many instances of that in Linux/FOSS advocacy: lists of reasons that say very little about “you can see the sources” (even if you don’t know wtf it all means) and a whole lot about how your only cost for Linux is the CDs onto which you burn a zillion distros trying to find one that works for you.

These recent spats have only served to reinforce my objections to the GPL.

Productivity Tip 2: Calendar Apps

June 4, 2008

I don’t use calendar apps because I’m punctual and attentive about things like scheduling. I use them because no matter how punctual and attentive I try to be about schedules, I’m really not. Without them, I’d do a worse job prioritizing events and let things conflict more often than not. At least that’s what I think.

I used to be a fan of Sunbird and Lightning from Mozilla. Sunbird is their standalone version and Lightning integrates into Thunderbird. These are fine if you have a fast processor and lots of RAM. They’re dreadfully slow if you don’t.

Instead, I’ve become a very big fan of calcurse. This is a three-pane console app that handles just about everything you need from a scheduling application. The default main panel shows the daily calendar. On the right side are two more panes: on top, a navigable monthly calendar used to select the day displayed in the main panel; beneath it, a to do list.

By default, the navigable monthly pane is active. You can cycle between panes with the tab key, or use a keybinding to add tasks directly in the other two panes — ctrl-a to add an event to the daily calendar, or ctrl-t to add an item to the to do list. Users of screen will see an immediate problem with the default binding: ctrl-a is the command prefix (escape key) used by screen. So I use tab.

Setting events is very straightforward. The entry area is hinted, so users can enter start and end times, events, priority (on to do entries), etc. Commands are also hinted, pine/pico-style, across the bottom of the terminal.

Its power doesn’t end with keeping events straight. You can use multiple calendars with the -c filename flag (it will use its default if you don’t pass -c). You can also export your calendar to an ICS calendar file, which just about any other mail application or web-based calendar can import; I have a cron job that does this every week, and I use an alias to write it to a file as needed. Use the -x option and redirect the output to filename.ics — e.g.,
% calcurse -x > lucky13.ics
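
To illustrate the weekly cron job, here’s roughly what the crontab entry could look like (the schedule and output path here are just illustrative, not necessarily my actual setup):

% crontab -e
(then add a line like the following to export every Sunday morning)
0 6 * * 0 calcurse -x > /home/youruser/lucky13.ics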

It can also be used to print out notes, the to do list, or any particular day’s events. See the documentation page on the calcurse link above for examples.
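
A couple of quick illustrations (both flags are covered in the calcurse documentation; the output naturally depends on your own entries):

% calcurse -t
(prints the to do list)
% calcurse -d 3
(prints the appointments and events for the next three days)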

Okay, you say, but what about other programs like remind and wyrd?

Those are certainly very nice too. I prefer calcurse because it doesn’t have any unusual requirements (like ocaml), because it’s very easy to use (the hints are right there in front of you if you forget what you’re doing), and because I think it’s every bit as flexible as anything else out there. If not more so.

And why is it better than Sunbird? It loads immediately. I can export a calendar in a fraction of a second even on an older computer. I can generate my to do list and either print it in a terminal, redirect it to a file, or run “:r! calcurse -t” (or -d) inside vim and include it in a note or report without having to do or open anything else.

Anything that helps you schedule your life shouldn’t take up an extraordinary amount of time. Not to compile, not to start, not to use, and not to quickly get information out of it. Of all the calendaring applications I’ve tried, calcurse lets me get things done most quickly.

Tilting at Windows: Don’t Fight for Desktop Linux Adoption

April 26, 2008

I picked up this article by Caitlyn Martin from Steven Rosenberg’s Click blog. She takes a different tack on some of the issues I’ve addressed when commenting on some of the more exuberant (and less honest) Linux activism. Her article references one such piece, a list of ten points about how Linux has outgrown its geeky past and is appropriate for desktop use.

She writes, “All 10 points in the article are valid. None of them, nor any other efforts at Linux evangelism over the last decade, have worked when it comes to moving the masses towards Linux in the home and office on the desktop. Look, I’m not critical of the article. It may even convince a handful of people to give Linux a look. It, and articles like it, won’t have a major impact.”

This is true, and so is her suggestion that it’s preaching to the choir. Many activists have attempted to make inroads and get Linux adopted on the desktop. She’s correct that it’s not about cost, and it’s no longer about ease of use now that Linux desktop environments and driver support have improved. Resistance is hard to overcome no matter what price tag you put on or take off.

I think she too easily dismisses a couple of things, such as the ease with which devices still work with Windows because their vendors are Windows-centric. If I buy any webcam, I know it will probably work very easily with Windows: plug it in, voila. If I do the same for a Linux desktop, I first need to check whether it’s natively supported in Linux. Failing that, I have to see if anyone else has a driver for it. Then, if there is one, I have to check whether that driver has enough functionality to be worthwhile. And if I already have the camera and it already works in Windows, why would I want to switch to a “free” operating system that requires me to compile a separate module for my device, run depmod, etc., just to use it? We can argue all we want about closed hardware and software, but these things are reality. We don’t have a magic wand to make them go away.

Substitute scanner, printer, or any other device for webcam. The more stuff a user already has, the more resistance he or she will probably have in switching.

Martin suggests two ways users may be lured to desktop Linux. One is via well-conceived and well-configured devices, like the Asus Eee UMPC, that ship with Linux. These have been met with more enthusiasm by existing Linux users, though. With these UMPCs increasingly shipping with XP, I don’t see how this bodes well for Linux. (I’m neutral on the superiority of Windows or Linux because it often boils down to the same thing: how well things are pre-configured for the less savvy user. The less savvy the user, the worse the perception if the device is inadequately set up, even when the “problems” are very benign.)

The second thing Martin says may help Linux adoption is a concerted effort between Linux developers and hardware vendors — more of a Microsoft approach. There’s a big problem with this: not many hardware manufacturers are ready to embrace open source, at least not with the kinds of strings GPLv3 would attach with respect to firmware. While some companies are becoming more lenient about distributing their firmware (having just compiled two 2.6.25 kernels, I noticed there’s a lot more of it than in the 2.4 line), they don’t really benefit by pushing Linux on desktops — even a 100% increase in Linux desktop adoption wouldn’t reach a 5% share, so even a spectacular growth rate leaves you in a marginal market. Hooray, BeOS!

I think all of this is moot because we’re moving away from traditional desktop computing. Whether you look at mobile computing vis-a-vis laptops, notebooks, and UMPCs, or at the direction things appear to be headed with cell/smart phones and PDAs, the real growth is away from desktops. Ditto for other devices popping up in homes all over the world: TiVos and other DVRs, game consoles, etc. Many of these devices are people’s real initial contact points with Linux.

That’s where hardware vendors are already onboard with Linux. And vice versa. Except for FSF and fringe types who object to the way the real world operates.

My beloved told me she would never use Linux on her laptop. She’s dead serious. She hates my computers. She stopped telling me she’d never use Linux when I pointed out where she was already using it: her cell phone, the router, the server, the DVR, the TV. Things she takes for granted because she turns them on and they work: no “eye candy,” no code she can audit herself (which would be quite interesting to see!), no command lines. Just like Windows. She’s not in the open source choir and not interested in how many different window managers she can try. She’s just pragmatic.

What Martin argues for is already reality, just not on desktops.

Linux is widely used on mobile hardware like cell phones, and it stands a much better chance of widespread adoption there, barring more goofy GPL turf wars by zealots who make up words like “tivoization” for problems that don’t even exist (TiVo plays by the rules, so the FSF changes the rules and moves the goalposts). Licensing matters every bit as much as whether the source is open. I’ll even argue that adoption of Linux, and its funding from vendors, would already be more widespread if its license were less restrictive — much the way TCP/IP became an adopted standard in both the open source and proprietary worlds because there were no petty restrictions preventing others from integrating it as they saw fit. (It’s important to understand that Microsoft and Apple had every bit as much freedom to use TCP/IP as BSD did; otherwise, we’d have closed networking stacks that don’t communicate very effectively with each other.)

The real battle in this decade is away from the desktop. Those who want to win market share on the desktop are tilting at Windows.

Linux Won’t Win the Desktop

December 3, 2007

I like Steven Rosenberg’s CLICK blog and have replied to him in the past. He likes to make use of “low-end” hardware (including his famous $15 laptop). That makes him a good guy in my book.

I was just catching up on what he’s been writing about lately and saw him address the Linux desktop issue and why people haven’t migrated to the Linux desktop the way companies have embraced LAMP.

He writes:

With free, open-source applications like Firefox, Thunderbird, OpenOffice, the GIMP and others being ported to Windows and Mac architectures, users who have never worked on anything but a closed, proprietary operating system will be using FOSS for the first time, and that’s a small step over to making the rest of their system FOSS as well.

I think the fact that so many open source applications are available for Windows and Mac only ensures people will continue using those OSes instead of trying to learn Linux (or BSD). Why should they go through the hassles of ditching other software they’ve already bought, storing and/or converting data, and installing something new with a very different directory structure and system of permissions, when they can have what’s already familiar to them? No matter how much people grumble about Microsoft Windows and no matter how many Mac-PC ads Apple runs, it’s still the first choice for most computer users when they buy or assemble a system for their own use.

As free as Linux is, it lacks the same appeal Firefox, WinAmp, and other software have. Windows users aren’t averse to free software (they never have been: my old modem used to run all night downloading freeware and shareware off BBSes). Most of them don’t give a flip whether they can access the source. They’re as happy with Opera as they are with Firefox because it doesn’t cost anything to try. And even after trying, most users are content or so familiar with IE and Outlook that they go back. Why? Because of comfort zones, because of familiarity, because they have investments of time and resources.

The importance of familiarity can’t be overstated. People take a look at KDE- and Gnome-based systems and are familiar enough with the common aspects of the interfaces. They really couldn’t care less whether it’s Linux, BSD, or Cygwin under the hood. They can see the familiarity in the interfaces, so they feel comfortable. Improvements in those two projects (KDE and Gnome) have made Linux more accessible to desktop users than earlier attempts that weren’t as familiar or integrated.

Using free software like Firefox doesn’t require repartitioning or learning a new OS, or wondering whether some device — or special software they insist on using or are required to use — will work in Linux. If they can use Firefox and Abiword and the GIMP in Windows, they don’t need to mess with Linux. They will continue to use Windows. Best of both worlds.

So, yes, free software is a small step toward OS migration. It’s not clearing the hurdles, though — not even close. I think it will take a lot more to win over the masses, and the platform most likely won’t be the desktop. It’s much more likely to be a phone, PDA, or similar mobile device. And that’s the future.