20100527

Amarok is Usable Again!

Okay, I feel kinda bad after my last Amarok rant. I really do like the thing on a good day, it's just got some weird quirks.

I managed to fix some of my problems today, and they weren't all Amarok's fault.

I finally tracked the intermittent crashing down to a gstreamer bug. I managed to patch it myself before realizing it was already fixed in git. At any rate, not Amarok's fault, and not a problem any more.

My playlist filtering has actually been working; I just have this block of 100+ tracks that seems to be immune. I'm going to assume that it can't index them for some reason and displays them by default. Weird default behavior maybe, but whatever. Removing them from my playlist solves the problem, so I'm happy.

I used to have issues with other applications stealing my sound and cutting off Amarok, or vice versa (yay, Linux sound!). This seems to be a xine-lib problem, and switching everything to gstreamer solves it. Ironically, switching to gstreamer forced me to upgrade all my gstreamer plugins to get .m4a files working, which exposed the aforementioned crashing bug. At any rate, Amarok's off the hook on this one.

And for the record, the new versions have dramatically improved performance when updating metadata, though I'll still get a one-minute, CPU-bound hang every once in a while for some operations. To be honest, this really puzzles me: while I was looking at backtraces from the crashing problem, I noticed it runs 28 separate threads at any given time, most of which are running MySQL code. I guess they're not getting a whole lot of parallelism out of them for whatever weird cases I trigger.

But yeah, Amarok's back in my good graces for the moment.

20100526

New Laptop and Linux

I recently bought myself a new Toshiba laptop, model T135D-S1325 for anyone who's interested.  I'm always impressed when laptop hardware actually works in Linux, and since I couldn't find much on Google, I thought I'd share the highlights.

Whatever Windows 7/Toshiba partitioning setup they put on here caused ntfsresize to break Windows when I tried it.  Not that I really mind, but I tried (and failed) to back up the existing setup in case I ever needed it.

The screen brightness function keys are done in hardware, so they work under Linux.  This is really nice, since on my last Toshiba laptop I had to boot into Windows to change the brightness.

However, most of the other function keys don't work. No real surprise here, but there doesn't seem to be a hardware wifi switch either, which is mildly annoying.

I haven't tried the webcam, and maybe never will. I don't trust them...
[Edit: I have since gotten the webcam working. It was also fairly painless.]

My X server came up on HAL with hardly any configuration on my part, but the synaptics touchpad was a little bit of a pain.  I'm not sure if I'm doing something wrong, but I can't seem to initialize the thing right so that tapping the pad sends a mouse click.  I can get it to work manually by running synclient TapButton1=1, though, so I just set that to run as part of my XFCE startup and I'm off and running.
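If you want to do the same, here's a minimal sketch of the autostart entry I mean (this assumes xfce4-session honors XDG autostart files in ~/.config/autostart, and the file name is arbitrary):

# ~/.config/autostart/synclient-tap.desktop
[Desktop Entry]
Type=Application
Name=Touchpad tapping
Exec=synclient TapButton1=1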

There also doesn't seem to be a way in the BIOS to require a password to boot off of a flashdrive. I only set an "administrator password" (to change BIOS settings), because otherwise the "user password" needs to be entered every boot to access the hard disk; it's possible that the user password would do it for me. At any rate, after digging around on the internet, it sounds like there's a master Toshiba password that'll clear it all out anyway, so I suspect I am vulnerable to flashdrive booting no matter what. Such is life.

The only real challenge was the wireless card, which took me several days to finally nail down. The card does not currently have a driver in the mainline kernel.  It shows up like this in lspci:

09:00.0 Network controller: Realtek Semiconductor Co., Ltd. Device 8172 (rev 10)

After splicing together several howto guides and the terribly written README in the driver itself, I finally managed to get it working. Getting wpa_supplicant working with it was even harder: apparently, over several versions of the chipset/driver, they switched the wpa interface from ipw2200 to wext. The README tries to explain as much with a step-by-step guide that's on par with the IRS tax code in clarity. Here's what finally worked for me:

Get the driver from the Realtek Drivers page. The driver is "RTL8192SE" (despite the fact that it doesn't match the device ID in lspci...). Extract it somewhere and run make install. Every time I update my kernel, the make install fails, and I have to manually create the /lib/modules/2.6.32-gentoo-r7/kernel/drivers/net/wireless directory before make install works. I also had to manually copy the contents of firmware/RTL8192SE into /lib/firmware/RTL8192SE. You can do it yourself without make install, but then you have to copy the module into /lib/modules and run depmod yourself. At that point it kind of worked, but I would get weird error messages. Apparently the driver needs a bunch of library routines from the kernel's wifi code, specifically the 802.11 TKIP routine and one of the MAC algorithms. I just went through and turned on as much as I could, and it eventually worked.
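For reference, the manual version looks roughly like this. Fair warning: the module name r8192se_pci.ko is an assumption on my part (check what the build actually produces), and the kernel version is whatever you're running:

# Assumes the build produced r8192se_pci.ko; adjust to taste.
mkdir -p /lib/modules/$(uname -r)/kernel/drivers/net/wireless
cp r8192se_pci.ko /lib/modules/$(uname -r)/kernel/drivers/net/wireless/
depmod -a
cp -r firmware/RTL8192SE /lib/firmware/RTL8192SE
modprobe r8192se_pci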

Use the following settings for wpa_supplicant (this depends heavily on the specific chipset and driver revision!):

wpa_supplicant_args="-c /etc/wpa_supplicant/wpa_supplicant.conf -D wext -i wlan0"
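
And for completeness, a minimal wpa_supplicant.conf to go with it, for a plain WPA-PSK network (the ssid and psk are placeholders, obviously):

ctrl_interface=/var/run/wpa_supplicant
network={
    ssid="my-network"
    psk="my-passphrase"
}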

And that's it. Other than the wifi card, the whole setup was much less painful than my last laptop. Linux has come a long way...

20100515

Fun with gst-plugins

To anyone trying to figure out how to play .m4a files with gstreamer, try installing gst-plugins-faad.  At least, 50 plugins later, I think that was the one that finally fixed it...
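
If you want a quick sanity check from the command line, something like this should make noise (assuming gstreamer 0.10, which is what I'm running; substitute a real file path):

gst-launch-0.10 playbin uri=file:///path/to/some/file.m4a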

20100506

Statistics

Since we have entered the "Propaganda Age," there seems to be an increasing distrust of published statements and experts in general, and of statistics specifically.  I think that this is deplorable, so I would like to go on record objecting to this trend.  It is true that there are many dishonest ways to gather information and misleading ways to present it; however, that is only half the story.  For every deceptive presentation of data, there is an honest interpretation that can refute it once all the facts are known.  That is, if you accept the existence of objective reality and the consistency of rational thought (so my arguments won't reach the true laissez-faire existentialists, but what can?).  If you accept objective reality, there is one set of data, and if you have enough knowledge you can interpret it, or at least be swayed by someone who presents a superior interpretation.  So what you should do is verify statistics, not simply doubt all of them.  Check the sources, find credible interpretations, and make yourself smarter if need be (it can't hurt).

Because in all honesty, as much as the producers of propaganda like to slant statistics, they would love it if everyone stopped believing in them altogether.  Once you give up your right to rational thought and argument, all you have left are your innate emotional prejudices and opinions, which are way easier to manipulate (research autobiographical advertising and cognitive dissonance for starters).

So remember to Think!

20100505

Adventures in buffer underflow...

I wanted to play around with encrypting my swap partition recently, and was impressed by how easy it was in Gentoo.

I added two lines to /etc/conf.d/dmcrypt:
swap=crypt-swap
source='/dev/sda2'

and changed my fstab entry from /dev/sda2 to /dev/mapper/crypt-swap, and I was good to go.  The Gentoo dmcrypt init scripts automatically generate a random key and format the partition during boot.
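
For reference, the resulting fstab line looks something like this:

/dev/mapper/crypt-swap    none    swap    sw    0 0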

Then the fun started... I just wanted to make sure it was working, but the only way to do that is to use up enough memory that my system starts swapping.  Not such an easy task when you have 8 GB of RAM.  After a few failed attempts (even Firefox doesn't use that much memory), I figured that decompressed image files use lots of memory, and decided to open an entire pictures folder with GIMP.  A few hundred windows later, I had managed to crash my window server and still not use enough memory.  Restart X.  Now I was annoyed and determined.  I'm a programmer; I figure I know how to use up memory.  So I wrote a perl script that looks like this:

#!/usr/bin/perl
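# Slurp all of STDIN into an array, one line per element.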
@temp = <STDIN>;

Let's just load entire files into RAM.  Seems simple enough.  But I needed some big files.  The biggest thing I could find on hand was a 2 GB avi video.  So I cat that into my script, and it doesn't use up memory fast enough.  So I decided to run three of them simultaneously.  That's gotta chew up some serious RAM, right?  Well, in your memory display, you've got all these fields, like total, used, free, and cached, and then you've got this little neglected field called "buffers."  I'd never really paid attention to it before, but apparently, when you run out of it, you have a kernel panic.  (At least I think you do.  As much flak as I give Windows for BSODing, at least it reliably displays an error message when it crashes.  If you kernel panic in X, everything just stops...)  The last thing I saw before the screen stopped updating was top displaying my buffer memory as 128k.

So here's what I think happened, in retrospect.  Perl is a text-processing language, so when I read all of STDIN into an array, it's reading things one line at a time.  Makes sense, right?  But when I decided to send an avi file down the pipe... I'm going to assume that avi files don't have line-ends very often.  So somewhere in my I/O chain, something was buffering gigantic amounts of data looking for a line-end.  Either that, or most of the file got memory-mapped (multiple times?), and that uses buffer space.  I don't really know which layer actually uses that memory, but apparently it's important.
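
If I ever try this again, reading fixed-size chunks instead of lines would at least take line-ends out of the equation.  A minimal sketch (untested, and the 1 MB chunk size is arbitrary):

#!/usr/bin/perl
# Read STDIN in 1 MB chunks and keep every chunk, so memory usage
# grows steadily no matter where the line-ends are.
my @chunks;
my $buf;
while (read(STDIN, $buf, 1 << 20)) {
    push @chunks, $buf;
}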

So, I think the crypto stuff worked fine, but I couldn't verify it, so I just gave up.  If someone actually has access to my swap partition I'm pretty much pwned anyway. It's not worth the instability.

So the moral of the story?  Don't do stupid things on a massive scale.

Unix-isms

I recently read "The UNIX-HATERS Handbook" (which I highly recommend) and learned about a bunch of the deficiencies of Unix.  To be fair, I figured I'd check out the other side, so I found a copy of the "Unix (R) System V Release 4.2 Documentation" in my local library.  The following is taken specifically from the "Programming with Unix System Calls" volume.

A few gems:
  • The difference between /sbin and /usr/sbin
    • "The /sbin directory contains executables used in the booting process and in manual recovery from a system failure."
    • "/usr/sbin: This directory contains executables used for system administration."
    • Reading between the lines here, I think /sbin used to be mounted on a separate partition, for when your file system or disk failed.  There are actually a lot of weird directory-structure Unix-isms that hark back either to the lack of stable file systems (journaling was a revolution in disk stability), or to the fact that Unix used to handle running out of disk space particularly poorly (and still does, as far as I can tell).  But it's still better than what it does when it runs out of memory.  One of the great testaments to the infrastructure that is Unix is that killing arbitrary processes was the most stable option they had when running out of memory.  But I digress...
  • /var is the directory for "files and directories that vary from machine to machine."
  • I used to think Unix was a fairly secure system until I read the UNIX-HATERS Handbook (granted, the adoption of ACLs and the invention of SELinux have improved things slightly).  I guess back in the day (you know, when Unix was being invented, back in the '70s and '80s) the big security paradigm was "Trusted Computing."  The idea was that you only gave privilege to a set of "trusted software" which was thoroughly audited.  This sounds great, assuming you can actually perform these kinds of audits reliably.  And it certainly can't hurt to try to enumerate what you trust and to what extent.  But even assuming a lack of malice in your trusted set, one bug in something privileged is enough to do you in.  Especially in Unix, where there is "one root to rule them all," and anything installed setuid root has the potential to open a root shell and do whatever it wants.  So it is telling that the entire appendix devoted to security is called "Guidelines for Writing Trusted Software," which I can summarize as "Don't give things root access."  A neat trick from the UNIX-HATERS Handbook is to mount a floppy with a setuid binary...
  • These guys seemed to worship I/O streams.  Granted, they were new and kinda neat back then, but come on.  Once you move out of the realm of tape drives and modems, things get kinda hairy even for things like terminals and, heaven forbid, files.  I quote from the section on file access: "A file is an ordered set of bytes of data on a I/O-device.  The size of the file on input is determined by an end-of-file condition dependent on device-specific characteristics.  The size of a regular-file is determined by the position and number of bytes written on it, no predetermination of the size of a file is necessary or possible."  Okay, I'll grant that it's nice not to have to worry about file sizes, but making it impossible?  Some input streams have finite length, and sometimes you'd like output streams to have a finite length.  Being proud of the fact that you support neither seems mildly crazy.  They're also quite proud of the fact that in Unix, files are "nothing more than a stream of bytes" (paraphrased, I couldn't find the exact quote), and that Unix imposes no record structures.  (Old-school operating systems used to make you read or write your file in X-byte chunks (records) only, because they were stored on disk that way.)  Again, it's great that they have the capability for arbitrary file structures, but (and correct me if I'm ignorant) I thought that record structures were imposed as an optimization, for when you were doing record-oriented I/O.  Otherwise, the OS can split and fragment your file wherever it feels like, instead of at a record boundary.  But maybe advances in file systems and disk drives have rendered this point moot.  I'm too ignorant to say.

Those were the highlights.  It's amazing to think how far computers have come in 30 years.