Archive for the ‘ubuntu’ Category

Totally F’ing Retarded

August 1, 2011

I noticed when I was installing CentOS 6 that I was getting packages for git, cvs, and svn. I didn’t see bzr flash up but I wasn’t too distraught since I doubted I’d even use the other versioning systems much on this computer. I was wrong about that — I’ve been using some development versions of various software and most of them use either git or cvs.

This morning, I needed something in a hurry and found it uses bizarre bazaar. I checked to see if bzr was already installed; it wasn’t, so I installed it. I started to fetch a branch and decided to check some websites while it downloaded. Mind you, all this was occurring in GNU screen. I decided to maximize my terminal. A few minutes later I realized my wireless LED was no longer blinking, so I presumed my download had finished.

Wrong.

I had a message that said: [Errno 4] Interrupted system call. WTF?

So I looked it up. The first thing I found was this bug at launchpad. I’ve run into curses-based applications crashing due to resized-console issues, but never something that’s strictly run from the command line. The mechanics, as I understand them: resizing the terminal delivers SIGWINCH, and if a blocking system call is in flight when a handler catches the signal (and the handler isn’t set to restart the call), the call fails with EINTR, which Python reports as [Errno 4].
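You can see the same failure mode in miniature from a shell prompt. This is a toy sketch, not bzr’s actual code (bzr apparently installs a SIGWINCH handler to track terminal width): trap the signal, block in read, then resize the window.

% trap 'echo got SIGWINCH' WINCH    # install a handler, roughly what bzr does
% read -r line                      # blocks waiting for input; now resize the terminal
got SIGWINCH

Depending on your shell and version, read gives up with a nonzero status when the trap fires, which is the shell-level analogue of a system call returning EINTR instead of being restarted.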

Seriously?

Then I remembered who sponsors its development. The same people who bring us Ubuntu.

At least the bug has reportedly been fixed (per the launchpad link above). Hopefully that will quickly make its way into the versions used by RHEL/clones. In the meantime, if you’re using the bzr version found in CentOS and SL base, don’t resize your terminal during bzr operations.

Cost of Freedom: New Ubuntu Font

July 26, 2010

I think a picture is worth a thousand words. The company Canonical contracted to develop its own “Ubuntu font” apparently eats another brand of dog food, as it were. I realize the font designer, not Canonical, is the source of the PDF of the slides, but it’s still amusing that Canonical makes a competing operating system on which the PDF-creation software doesn’t run (except under WINE).

I’m not against mixing open source and closed source systems to get the best results. After all, it worked pretty well for the Linux kernel for a while. But Canonical has made a big deal about taking on Microsoft — so much so that I’d expect their contracts to require use of their own Linux distribution as much as possible.

I’m curious how much development and/or advocacy of the Liberation fonts occurred on non-free software. Red Hat contracted with a company called Ascender for those.

Rant 2010-03-02: Evil of Open Source

March 2, 2010

I know many Linux and open source advocates want to claim they have replacements for every conceivable software package. Yesterday, I dealt with two separate issues that prove that no matter how much improvement there’s been in open source software, the benchmark against which it’s compared is the proprietary application it seeks to mimic, and open source rarely achieves parity with its proprietary counterparts.

One of the problems with open source is that too many people want to reinvent wheels rather than join ongoing development of existing projects. So instead of all the time and resources going into making one thing work better, we end up with several things that don’t work, and more often than not, lists of things “to do” remain longer than lists of stable features. Also more often than not, comparisons that in any way cast an application or utility as a “clone” are laughable, both because the proprietary versions aren’t sitting targets and because they have far more working, stable features.

Among the poor comparisons are OOo for Microsoft Office, GIMP for Photoshop, Gnash for Flash, and xsane for any proprietary scanning utility (which typically comes “free” with the scanner/printer). While these projects have vastly improved, they’re not really in the same league as the proprietary applications they seek to replace.

I take news of forks and “new” projects with a bit of hesitation, except when they’re backed by big money — meaning companies with a vested financial interest in open source. It’s the people from Novell, IBM, Red Hat, and so on — and the money those companies have put into development — who’ve made the greatest improvements in everything from Gnome to NetworkManager to the Linux kernel to a lot of the little things that make Linux approachable to far more people than it otherwise would be.

I saw this morning courtesy of Lifehacker that the Ubuntards (Canonical) are reinventing another wheel. It’s scanning this time.

Although his software isn’t officially at a finished, 1.0 stage, it’s already decent enough to be an attractive install…

Most open source software isn’t at “a finished, 1.0 stage.” And I’m not sure what the writer at Lifehacker considers 1.0. I looked at the bugs listed at the launchpad site for simple-scan and can say right off the bat that it’s not even close to being as functional as the much more mature xsane. And with the scanner I use most often, I can say that xsane isn’t nearly as functional as the Windows utility that came with it.

The most recent blueprint notes one of the issues facing development of such a program:

A general problem is to ensure that white is really white (your interface mockup displays a scanned page with gray background). A simple scan interface does not serve so much if the scans display gray backgrounds, and necessitate to open gimp to pick the white point to correct balance. User could check “automatic white balance” box in case of scanned pages with white background.

In fact, among the “bugs” listed as wishlist items are things I take for granted using Windows-based utilities — including real greyscale (as opposed to black-and-white) and color separation.

I’m not knocking this project or even Canonical; I know that Canonical’s philosophy is to make everything work simply, even for the least savvy computer user. It just seems that a lot of the money and (wo)manpower these projects throw at reinventing wheels could go a long way toward bringing existing projects up to speed so they’re legitimate contenders against their proprietary counterparts.

The wheel doesn’t need constant reinventing. Right now, most open source projects compete only against other open source projects rather than against proprietary software. For some things, that’s probably just fine. In the bigger picture, though, it leads to stark choices between half-implemented, amateurish efforts from the open source world and well-polished proprietary applications that work as intended. It would be nice if open source advocates would improve what they already have so that it’s stable and works well. Instead, they’re constantly shifting goalposts with API changes and spending more time on paradigm shifts in user interfaces. Broken support, broken applications. When that happens with Microsoft or Apple, they’re held accountable by consumers: screw up enough times and people stop buying your software. When it happens in open source, the user is left holding the bag until somebody — the user or the developers, anyone — comes up with a fix.

I’ve dealt with this in several ways. I bought a device that was ogg-friendly but found out MTP devices lack open source support, so using it under Linux and BSD has been very dicey; the damn thing works great under Windows and syncs through WMP like a charm. The same has proven true with onboard hardware, including the card readers in my AA1 (flawless in Windows; only one works in Linux, and a card has to be inserted before booting or it won’t ever read a card), wireless (I quit using Linux altogether on the AA1 due to rampant, incorrigible issues with ath5k), etc.

Some day maybe all this stuff will work as well under open source as it currently does under proprietary software. Until then, proprietary software isn’t evil: it fills a valid niche for people whose computers and hardware have to work in a certain and consistent manner. But as long as open source continues forking without good cause and duplicating its own labor over the same recurring sets of issues, it’s only going to improve slowly. And the slower that ever-reinvented wheel turns, the faster it’s going to get left behind. And for what? Turf battles over who controls any particular project, not anything substantive. That’s petty, and it’s also evil.

Alive (Barely) and Kicking (Barely)

November 2, 2009

I mentioned a couple months ago that I’d have a little time around Labor Day to mess with my new-old laptop. Boy, was that optimistic. I’ve had my hands full with family health issues again and have had very little time to work, sleep, or take care of myself the past couple months. I returned home yesterday and will have more catching up to do, as I did last time, before my life feels — and actually is — back to normal.

Linux-wise, long story short: I’ve had only enough time with it to install a couple different distros and OpenSolaris to determine what I’m going to end up running on it. I thought it was coming sans hard drive, or that the drive would at least be wiped, but I have a genuine Microsoft license and key for XP Professional for it. It currently has no Windows partitions and is running Xubuntu Jizzy Jackshit, which I really can’t wait to replace.

I end up regretting every *buntu installation I do. This has been no different, and in some ways may be the worst yet. Details may follow. Or not.

I left OpenSolaris on the laptop for less than an hour after installation completed. I know ZFS has its fans, and it certainly has some interesting features. It uses too many resources, though, for my tastes — at least with the specs of my hardware. Sun recommends at least 1 GB of RAM. Maybe it’s less of an issue with 2 GB or more?

In any event, I haven’t had much time to mess with the new-old laptop except to see which distro or OS could manage the hardware and do rudimentary tasks with the least drama out of the box. And guess what that’s been? TinyCore booted from USB. I still want to give more “enterprise-grade” distros a shot. I just need some time. 

I’ve stopped using Linux on my Aspire One. I may try to sell it in the near future. It remains my primary computer, but I’m really hamstrung by the very features that appealed to me enough to get a netbook: its portability won me over, and now the lack of an optical drive, the small screen, and the small keyboard are its downfall. I won’t bring up the Atheros wireless card in this context, but it merits some consideration even though it works well under Windows (sucks ass under Linux).

Anyway, my life’s again been on hold and it will take a while to get back in the swing of it. Hopefully I’ll be able to make more frequent posts again soon.

Video: Linux ath5k Reboot (Part 2)

August 18, 2009

Yeah, it so deserves a friggin’ sequel.  This was post-reboot. As you can see, the Atheros device wasn’t even detected in dmesg, lspci, etc.

That meant no scanning, no connecting, nothing. What’s the purpose of a netbook if it can’t fucking network?

As I note in the comments, it took me several reboots this time — and yet again in both Linux and Windows — to re-detect the device and be able to network.

Let me make a disclaimer. I’ve followed the bug reports on this from the first time it happened to me. What is it now, nearly six months? I appreciate the serious effort these people are making to get this device class functioning under Linux. As of my most recent kernel, though, it’s still at a development stage and still quite unstable. I don’t recommend using Linux on Aspire Ones or other devices using the ath5k driver unless you have a high threshold for pain and can live with networking stopping suddenly and for no apparent reason like this. Maybe they’ll get it functioning better soon. I hope they do.

In the meantime, I’m faced with either changing cards or using Windows rather than Linux. My AA1 is off warranty in a couple months. I may switch cards before then. Or I may spring for Windows 7. The irony of this is, all of this happened while I was making another screencast demonstrating why I was going to ditch CrunchBang/Ubuntu for TinyCore. That’s when the audio from the stream stopped and I realized I was dealing with this mess again.

Caveat emptor. One man’s “free as in freedom” is another man’s quintuple reboot to get the damn thing to work correctly again.

Update 20090817

August 17, 2009

I’ve been busier than expected the past few days. Taking a bit of a break this afternoon to clear my head. Here’s a little update of what’s going on with my computers.

My AA1 continues to be my primary computer, for better or worse. I’ve decided I really need a bigger, faster laptop for full time service. I’m looking at a used high-end business model and also at new mid-level business models now. I’ll probably continue using the AA1 quite a bit since it’s by far the easiest to tote around.

I’m really hard pressed to say I’m still running CrunchBang since I’ve removed a lot of its defaults and replaced them with other things. If anything, it’s more like un-buntu. Nothing against CrunchBang but, even though I think it’s certainly a decent implementation for users wanting less point-click bullshit and less overhead, I think it could benefit from changing window managers and some of the default stuff like the tint2 panel and conky.

I’m still using ion3, which I’ve “fixed” so it doesn’t get full primary use of my function keys. I basically run two “desktops” in it. The first is full screen for well-behaved applications; the other is split so about 30% of the screen is used by the smaller windows of multi-windowed applications like GIMP and Skype. Simple and works for my needs.

I’m also using TinyCore and MicroCore more often but I haven’t had time to finish compiling some of the apps I want. Once I do that, I may enlarge my second Windows partition and reduce the Linux partitions, and get rid of CrunchBang or un-buntu or whatever the hell it is now. I hope to have more to add about all this shortly. {Micro,Tiny}Core is really growing on me.

I’m using NetBSD on my last remaining home server, which I was about to set up as my VPN server. Unfortunately, the server is just about FUBAR. I think the mobo is shot. Regardless, I’m going to scrap it if I can’t figure out if it’s just a bad ribbon cable. It was a rescued MMX box so no big loss. I got a little over a year’s service out of it. I’m thinking of using my old ThinkPad as a VPN server. It’s probably a fire hazard so that could be interesting.

How have I tried to clear my head today? By getting some stuff set up (config files and such) so I can transfer it over to MicroCore, writing a script, etc. I’ll do a separate posting on all that this week. Maybe a video, too, to show the speed and efficiency of console applications and how they can be integrated to work together.

Running monkey httpd on AA1 under crunchbang

August 10, 2009

One of the things I liked about DSL was its inclusion of just about everything you’d ever need whether you wanted to use it as a desktop or a server. I figure from the posts in the forums that desktop use far outpaced server use but it’s quite capable as a server. DSL included everything from SSH and SSL to FTP server to a small HTTP daemon called monkey, which along with any of the other services could be started at boot from a cheatcode.

I think monkey was one of those things that kind of grew on me even though I’d often use thttpd (a personal favorite). I’d use it at home for a variety of things, including running a local blosxom blog and hosting family calendars. I’d sometimes use it at work to test things and to set up a temporary server for our group. Despite its tiny size, monkey does CGI and can handle just about whatever you’d want a small HTTP daemon to do, without any bloat and with easy configuration. It’s also been rock-solid in my experience, even with moderate traffic. You don’t need a full LAMP stack if your needs are fairly simple and you’re not setting up a production server with loads of traffic (and thttpd, which I think is more robust, should suffice if that’s the case).

I needed to look at something and needed to host it on my own network, so I looked to see if monkey is available in the Ubuntu repositories. It is, so I installed it.

The first thing I discovered is that its default conf file (/etc/monkey/monkey.conf) uses port 2001, which is kind of stupid (IMO). I edited it to a more suitable and easily remembered port (8080).

Once it’s set up the way you want, you can start the daemon:
sudo /etc/init.d/monkey start

Actually, you should first check to see if it’s started by default (see below) when you install it. Whether it is or isn’t, it’s safe to issue a stop before starting and/or reconfiguring it if you need to use a different configuration than its default. I think it’s fucked up to set things up to start automatically upon installation or even upon reboot unless/until the user decides to run it. Guess that’s why I still hate Ubuntu and the mindset of the user it attracts (I was going to add a post about this utter shitheadedness affecting the wider Ubuntu community the other day but I’m trying to be more diplomatic — really).
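Put together, a cautious first run looks something like this. (The Port directive name is an assumption from memory of monkey.conf’s format; check your copy.)

sudo /etc/init.d/monkey stop        # harmless if it wasn't already running
sudoedit /etc/monkey/monkey.conf    # change Port 2001 to Port 8080 (directive name assumed)
sudo /etc/init.d/monkey start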

Once it’s started, you can point your browser to localhost:portnumber (e.g., http://localhost:8080) or even to your IP (if not proxied) to reach it from the Internet. Here’s the default monkey page:

[screenshot-20090810102113: the default monkey index page]

You can set up your own index.html and configure it as you see fit, including starting with a conf file in your home directory. Just copy the default in /etc/monkey to wherever you want to set it up (such as ~/.monkey). My own preference is to set things up in my home directory, so I have ~/www set up with a directory tree suited to my needs.

[screenshot-20090810102056: my own index page served out of ~/www]
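A per-user setup might look like the sketch below. Note that the -c flag for pointing monkey at an alternate conf directory is an assumption on my part; check monkey’s help output for the real option.

mkdir -p ~/.monkey ~/www
cp -r /etc/monkey/* ~/.monkey/
monkey -c ~/.monkey    # -c (alternate conf dir) is assumed; verify against monkey's usage output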

Another thing (not surprising) I discovered about the Ubuntu package is that it’s set up to start at boot. I renamed monkey’s S symlink in my default runlevel (e.g., in /etc/rc3.d/) to a K symlink. I don’t intend to run a full-time httpd, so I’d just as soon start it manually as needed. It’s not that big, but it’s the thought that counts. Keep that in mind when using packages from the bloated distros. (Didn’t I write above that I’m trying to be more diplomatic? See? I didn’t repeat how totally fucked up I think it is to start these kinds of services without users taking full control of them first.)
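If renaming symlinks by hand feels error-prone, the Debian tools will do it for you; a sketch, assuming your sysv-rc is new enough to have the disable subcommand:

sudo update-rc.d monkey disable      # flip the S links to K in the default runlevels
sudo update-rc.d -f monkey remove    # or drop the links entirely (upgrades may re-add them)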

Also, I noticed {Tiny,Micro}Core doesn’t have a package for monkey yet. I’ll probably add that to my compile list shortly.

Productivity: Setting Up at in Crunchbang

July 27, 2009

I needed to set up something to start at a time certain last night in what used to be crunchbang (considering I replaced over half the stuff in the default base, it’s not so much crunchbang anymore). That means using the at command; cron is a great tool for things that need to repeat on a schedule, but at is the tool to use for “one shot” events.

I entered “at 22:00” and wasn’t too surprised that it threw an error when I hit return. I looked in /etc and saw there was only the at.deny file but no at.allow, so I quickly added my username to at.allow. Then running at again showed that a certain file didn’t exist. Again, no big surprise — I openly admit a bias against Ubuntu because it uses shitty graphical utilities rather than setting up the standard tools. So my next step was to set up (touch) the file .SEQ in /var/spool/cron/atjobs.

% sudo su
root@pluto:/etc# cd /var/spool/cron/atjobs/
root@pluto:/var/spool/cron/atjobs/# touch .SEQ
root@pluto:/var/spool/cron/atjobs# ls -al
total 8
drwxrwx--T 2 daemon daemon 4096 2009-07-26 21:27 .
drwxr-xr-x 5 root   root   4096 2009-06-30 05:53 ..
-rw-r--r-- 1 root   root      0 2009-07-26 21:27 .SEQ

Uh oh, that won’t friggin’ work — it’ll result in a permission error (unless you run at as root, which shouldn’t be necessary):

% at 21:30
warning: commands will be executed using /bin/sh
Cannot open lockfile /var/spool/cron/atjobs/.SEQ: Permission denied

The file needs daemon-daemon ownership. This is very easy to fix. See how easy it is, boys and girls?

root@pluto:/var/spool/cron/atjobs# chown daemon.daemon .SEQ
root@pluto:/var/spool/cron/atjobs# ls -al
total 8
drwxrwx--T 2 daemon daemon 4096 2009-07-26 21:27 .
drwxr-xr-x 5 root   root   4096 2009-06-30 05:53 ..
-rw-r--r-- 1 daemon daemon    0 2009-07-26 21:27 .SEQ
root@pluto:/var/spool/cron/atjobs# exit

Once I had that set up, I could test it to play a file (I wrote this Sunday night).

% at 21:30
warning: commands will be executed using /bin/sh
at> ogg123 ~/audio/fuckinaye.ogg<EOT>
job 2 at Sun Jul 26 21:30:00 2009

The <EOT> at the end of the ogg123 line is just ctrl-d (remember that from when mail was a true command-line program?). At 9:30pm, I heard the test ogg file.
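While a job is pending, the companion tools will list or cancel it. The atq output below is illustrative; the exact format varies a bit by version:

% atq                 # list pending jobs
2       Sun Jul 26 21:30:00 2009 a user
% atrm 2              # cancel job 2 if you change your mind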

Now with it properly set up, I can use it to launch individual tasks when I need them and I don’t have to run X to do it from some stupid box with dials and buttons.

Another Way (Maybe) to Skin The MTP Cat

July 19, 2009

I knew there was a nexus between MTP and PTP but I hadn’t checked to see if I could use libgphoto2 to access my Samsung S3 before today. I decided to check because I saw the S3 listed among the devices supported by libgphoto2. Imagine that.

I’d already installed gtkam, which uses libgphoto2, to manage my old Kodak digital camera. I looked to see if the S3 was among the “cameras” listed in the camera selection dialog. It wasn’t listed there but several similar Samsung models were. I didn’t have anything to lose so I plugged it in. I then ran the “detect” option and, voila, I had a listing for my MTP device. I expanded the entry and I had access to everything on the device.

[screenshot-20090719160151: gtkam listing the directories on my MTP device]

What gets me is that this (in #!/Jaunty) is the current version of gtkam with libgphoto2 2.4.2 (current upstream is 2.4.6, and the S3 is named among supported devices in that version). Even with the current version of libmtp, I don’t have the ability to see things by directory (not shown, but take my word for it: “Datacasts” and all the other directories are listed above this “Music” directory) when using apps like rhythmbox. My only option is to use mtp-tools (aka “mtp-examples” to those of you still hitting my blog searching for Fedora help). The only options I have in rhythmbox are to view by artist, song, album, etc. Useful but limited. At least mtp-tools is adequate to manage the device.

I haven’t looked to see if there are any other apps using libgphoto2 to manage MTP devices or to allow mounting them via fuse. Speaking of fuse, the version of mtpfs in Jaunty’s repositories is of no use to me. I can mount the device, but a command like ls returns question marks rather than file sizes and permissions. It shows the filenames but doesn’t allow any other operation on them.

Anyway, it’s nice to see there might be another way to use MTP devices under non-Windows operating systems and that it may actually yield better results. Of course, I’ve only tried to read files and directories and delete files. It may be back to square one if I try to add files.

UPDATE: I installed gphotofs, a fuse filesystem for libgphoto2 which allows PTP/MTP cameras to be mounted like any other filesystem. Yes! I can mount the device and have full access to it. Just deleted a bunch of podcasts from the Datacasts directory.

[screenshot-20090719165825: removing podcasts from the gphotofs mount in mksh]

My shell, mksh, scrolls long lines off the edge of the screen (hence the <), so you can’t see the rm command, but you can see the result. Finally something freaking works right.

UPDATE 2: Another 16MB (27MB once the various {u} dependencies are removed, too) of cruft gone. Out go rhythmbox, libmtp, libusb-dev (needed to recompile libmtp), mtp-tools, mtpfs, etc. They’re redundant next to gphoto2/gphotofs, and I have much better access to my device now.

UPDATE 3: Using gphotofs is very easy, especially if you’ve used fuse before. You need to be in the plugdev group. I chose to create a mount point in my home directory (~/mtp) rather than use a point like /mnt. To mount, first make sure fuse is loaded (lsmod will show it, if it was built as a module) and then use the gphotofs command:

gphotofs ~/mtp

Or whatever your mount point is. Once mounted, you can navigate and issue commands as you would any other directory (in a terminal, file manager, whatever you want). When finished, unmount the point:

fusermount -u ~/mtp

Or whatever your mount point is. Give it a moment to unmount and then you can remove your device. It’ll work for your camera (if the camera is MTP or PTP) as well.
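For reference, a whole session, start to finish; this assumes fuse was built as a module and that you’re already in plugdev:

lsmod | grep -q fuse || sudo modprobe fuse    # skip if fuse is built into the kernel
mkdir -p ~/mtp
gphotofs ~/mtp         # mount the detected PTP/MTP device
ls ~/mtp               # browse it like any other directory
fusermount -u ~/mtp    # unmount before unplugging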

UPDATE 4: Fuck. If it’s too good to be true, it probably is. I can read from the device and copy and delete from it. Copying to it:

cp: cannot create regular file... Function not implemented

So tomorrow I reinstall libmtp and mtp-tools. Yippee.

Separation of Church and Software

July 19, 2009

Warning: If you’re easily offended, don’t bother reading below this paragraph.

I saw in my distrowatch feed that Ubuntu Christian Edition has a new release. Last I’d heard — and I openly admit I haven’t paid close attention — the project was dead. So it’s kind of like Lazarus rising from the dead.

After seeing the screenshots, I can see that my blog is likely unreachable by people using UCE because of my profanities. Fuck. That’s reason enough for me to recommend others stay the hell away from UCE!

[ucewtf01: screenshot of UCE’s web filter at work]

My bad words no doubt trigger such filters, but I bet embedded video of, or even links to, Fox News Channel hotties (un)crossing their legs and showing panties won’t. Don’t whine to me about the link. Would you want O’Reilly over that? I’m only making a point about the ineffectiveness of filtering software and about how relative these things are: some of the content on “safe” sites (the above link is pretty tame compared to what else I could’ve linked) can be more offensive to some people than a few bad words.

I’m not against parents taking steps to protect their kids from things they shouldn’t hear or see (we do that, too). I think the effectiveness of filtering software is very debatable and not a replacement for supervision. To make it a central part of a “remix” or operating system under any guise is a bit flimsy. Ultimately it restricts the user(s) from desired data. The famous example of blocked searches for breast cancer because of the word breast is the tip of the iceberg. And, ultimately, it can be defeated by using more clever search terms.

Filtering software is no match for a fourteen year-old boy’s impulses no matter how devout he and his family are. He will find the content he wants whether it’s sexually arousing or instructive in the manufacture of small explosives. A filter is but a speed bump, a minor obstacle.

How does use of such software make anything Christian or Muslim or Jewish or even Satanic? It just makes life a bit restrictive and cumbersome. That’s all.

I’m also not against people believing whatever they want. I think there’s a disservice to humanity when people join together over axioms, things they can’t prove or measure, particularly when those axioms have been used throughout history as reasons to divide humanity (which seems antithetical to Ubuntu’s raison d’être, a wider and more humanist view of the world) through war and oppression. Jihad is crusade is jihad. It doesn’t really matter which religion is being pushed if it’s at the end of a blade or the barrel of a gun. You can’t separate the good from the bad in history; it’s patently dishonest to brush aside inconvenient parts of the story to paint only a rosy picture. With every religion comes fundamentalism, and with fundamentalism come crimes against humanity — it’s historically inseparable whether the religion is Christianity, Islam, Judaism, Hinduism, or anything else.

One more thing about history. The ugly brown wallpaper shown in the UCE screenshots I saw had a common (mis)representation of Jesus. Such art, whether painting or statue, makes Jesus out to be WASPy, much like the goyim. For its Son of God series, the BBC commissioned an artist to come up with a more historically and culturally accurate portrait. Somehow, even if there were no copyright issues, I don’t think the UCE people would use such a representation. Even if they’ve never seen him to know the difference.

I don’t know how many people use religious-oriented versions of any operating system. I don’t know if the people drawn to things like UCE or Islamic remixes are up to no good. I presume most of them are devout and sincere, fine and upstanding members of the wider community of man. Hopefully these things aren’t being used to further divide people, beguiling the weak and impressionable with promises of another, better world if only they make this one hell for any who oppose them.

I’d like to think that with technology and the Internet the world is growing closer together rather than further apart. Just as science and technology have dispelled many myths and legends, science and technology can do what most religions have promised but none has delivered: a better world for all people, in the here and now.