
20170111

Pidgin Errors with Gmail

I discovered yesterday that my Pidgin client is refusing to connect to GMail/GoogleTalk with the error:

"SSL peer presented an invalid certificate"

After half an hour of digging, I finally found this Pidgin bug with a work-around:

https://developer.pidgin.im/ticket/17118

I have no idea what's going wrong here, and I can only assume Gmail has some new cert that's not playing nice with something on my system.  But seeing as it took me so long to figure out, I thought I'd help make this easier to find for anyone else who runs into the same problem.

UPDATED:
After a few days, this stopped working.  I can only guess that Gmail is rotating SSL certs faster than I can keep up.

But I found this Gentoo forums post:

https://forums.gentoo.org/viewtopic-t-1057862.html

Which pointed the problem at gnutls, and sure enough, emerging pidgin without the gnutls use-flag solved the problem.
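
For the record, the Gentoo side of that fix is just a USE-flag tweak and a rebuild; a minimal sketch (the exact package.use layout is up to your system):

#/etc/portage/package.use
net-im/pidgin -gnutls

emerge --oneshot net-im/pidgin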

20151129

Piwigo Thumbnails

I've been trying to find a nice way to host images on my Piwigo server... The problem I've had is that I don't have enough disk space on my server to host all of my full-size photographs, so until recently I'd been sshfs-mounting them off of an AWS instance that was connected to my Dropbox.  Then my year of free AWS ran out, and I was back to searching for a solution.

I eventually ran across this post at Odd One Out, whose author had figured out how to generate the thumbnails Piwigo needs offline.  So I hacked his script up, and now I'm sshfs-mounting my photos off a much slower (and cheaper!) network connection, and pushing the thumbnail images there separately.  (I figure it's very rare for someone to actually download my full images, so I can live with that being a little slower.)
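
For the curious, the general shape of what I'm doing looks something like this sketch (the paths, sizes, and destination are placeholders; the real thumbnail naming comes from the Odd One Out script, not from anything shown here):

#!/bin/bash
#Generate small derivatives locally, then push only those to the web host;
#the full-size originals stay on the slow sshfs-mounted storage.
SRC=~/photos
OUT=~/piwigo-thumbs

find "$SRC" -name '*.jpg' | while read -r img; do
    rel="${img#$SRC/}"
    mkdir -p "$OUT/$(dirname "$rel")"
    convert "$img" -resize 256x256 "$OUT/$rel"    #ImageMagick
done

rsync -av "$OUT"/ webhost:/path/to/piwigo/data/    #hypothetical destination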

20130720

Gentoo, ReadyNAS, and iSCSI

I recently bought a Netgear ReadyNAS 104 while trying to recover from a failure of my old RAID-5 enclosure.  Now that it's all put together, it looks like I can pull a maximum of 50 MB/s, with typical speeds of 30-40 MB/s.  I'm not sure what the limiting factor is at this point, since it doesn't seem to be network bandwidth, or CPU on either end.  I think something in the system is latency bound, since it seems to pull a little faster when my desktop CPU is completely idle.

At first I was going to set it up using sshfs, but the little ARM CPU in that thing can only push about 3-4 MB/s when it has to do the encryption itself.  So what I finally settled on was exporting it as iSCSI, and having my desktop do the encryption with dm-crypt (LUKS).

I get the feeling that iSCSI doesn't get a lot of love on Gentoo, so I figured I'd post my troubles and what I finally got to work.  This post from the Gentoo Wiki Archives was the most helpful, though I skipped all his interface setup, which I'm assuming was designed for a dedicated storage network.

iSCSI Basics:
A target is a server that offers up a drive for clients to use.
An initiator is the client that connects to a target to read or write the drive.

Init Script:
The init script for open-iscsi was kind of primitive... I had to install the unstable version (sys-block/open-iscsi-2.0.872-r2), because the stable one hadn't been updated since modprobe stopped supporting "-l" (which still makes me sad...).  At that point, I had to go redo all my kernel setup, since I had compiled the iSCSI support into my kernel and the init script assumes it can load and unload the modules itself.  I eventually commented out all the do_modules calls to work around that.
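
(For reference, unmasking the unstable version was just the usual Gentoo dance; the file name may differ on your setup:)

#/etc/portage/package.accept_keywords (or package.keywords on older setups)
sys-block/open-iscsi ~amd64

emerge -av sys-block/open-iscsi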

I also installed net-libs/libiscsi although I never figured out if it was required or not.

I never did figure out how to get it to automatically connect to my drive, but I got the init script to stop complaining at me by setting AUTOSTARTTARGETS="no" in /etc/conf.d/iscsid.  At this point (plus commenting out the modprobes) I could start and stop the daemon cleanly.

To manually mount a drive:
#Start a session to a target
iscsiadm -m discovery -t st -p 192.168.1.100

#Open a drive named "group1"
# on the target "iqn.1994-11.com.netgear:host:a1b2c3d4f"
# (as generated by my NAS) at IP 192.168.1.100
iscsiadm -m node --targetname iqn.1994-11.com.netgear:host:a1b2c3d4f:group1 --portal 192.168.1.100 --login

#At this point /dev/sde should appear and you can mount/format it.

#Disconnect from the target to make the drive disappear 
iscsiadm -m node --targetname iqn.1994-11.com.netgear:host:a1b2c3d4f:group1 --portal 192.168.1.100 --logout  


You can add "-d 8" (with a number 1-8) for increased debug messages if things are going wrong, but I can't say I found it very helpful myself.

The drive letter it shows up at will increment as you keep connecting and disconnecting, which was enough to convince me to mount the thing manually, so I've given up on getting the init script to autostart it anyway.

CHAP:
CHAP is the iSCSI authentication protocol, and was the biggest pain point of the whole experience.  I couldn't get my NAS or open-iscsi to give me any kind of useful error message other than "it didn't work" (or more precisely "iscsiadm: discovery login to 192.168.1.100 rejected: initiator error (02/01), non-retryable, giving up" ).

There are two kinds of CHAP, one used to authenticate initiators (clients), and one used to authenticate the targets (servers).  Since I was already encrypting my data on my desktop, I didn't bother setting up the server authentication, but I think it works basically the same way.  Here's what finally worked:

First I had to generate an InitiatorName and tell my NAS to restrict access to only that initiator.  My NAS only had a field for one "Initiator (IQN)" entry, while the Linux settings had 3 different values.  I don't really know what the difference is supposed to be, but I had problems until I set them all to the same thing.
## -*- mode: Conf-space;-*-
##/etc/iscsi/initiatorname.iscsi
## For examples, see /etc/iscsi/initiatorname.iscsi.example

InitiatorName=iqn.2008-10.com.example.mybox:openiscsi-a1b2c3f
InitiatorAlias=iqn.2008-10.com.example.mybox:openiscsi-a1b2c3f

Uncomment the following lines in /etc/iscsi/iscsid.conf, and set them accordingly.
##/etc/iscsi/iscsid.conf 
node.session.auth.authmethod = CHAP
node.session.auth.username = iqn.2008-10.com.example.mybox:openiscsi-a1b2c3f
node.session.auth.password = ThisIsAPassword

For whatever reason, CHAP passwords have to be between 12 and 16 characters.  I spent a while failing to get it to work with an 8-character password before I figured this out.

It probably goes without saying, but make sure you type the same password into your NAS.  (And set it for your initiator, instead of what my NAS called the "bidirectional authentication" for letting clients know they're talking to the right server.)

There are also settings in iscsid.conf for using CHAP during discovery.  You might have to set those too, but my NAS didn't seem to support it, so I left them alone.

After some trial and error, I finally figured out that my NAS was picking up password changes immediately, but my client was saving some kind of a session that was making things difficult.  Eventually I determined that to actually get my new settings to take effect I had to stop my daemon, rm -rf the session folder it had created in /etc/iscsi/nodes/, start the daemon again, and then start over from the discovery phase.
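
In other words, the reset sequence looks roughly like this (assuming the OpenRC script is named iscsid like its conf.d file, and reusing the discovery command from above):

/etc/init.d/iscsid stop
rm -rf /etc/iscsi/nodes/*        #throw away the cached node/session records
/etc/init.d/iscsid start
iscsiadm -m discovery -t st -p 192.168.1.100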

And there you go!  Good luck.

20121223

Alarm clock

In a sudden stroke of genius (the kind that can only occur at 1:30 in the morning), I have just invented the most annoying alarm clock ever.

while true; do beep -f `random 100 2000` -l `random 5 300`; done;

(Where random is a script I wrote that does the obvious.)
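
(If you want to recreate it, a minimal sketch of such a script would be:)

#!/bin/bash
#random MIN MAX - print a random integer between MIN and MAX
echo $(( $1 + RANDOM % ($2 - $1 + 1) ))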

Not only is it incredibly annoying, but you can't control-c it, since the beeps are running so fast.  I also had the luck (misfortune?) of running it while sudoed as root, so it ended up with some kind of weird reparenting and didn't even stop when I closed my terminal... I finally had to rmmod pcspkr to get it to quit while I hunted down the offending process.

I don't yet know how I will fully use this newfound power, but if nothing else I can guarantee I will be waking up tomorrow morning.

20121214

New Computer

I recently upgraded my main system from a Phenom II to an Ivy Bridge (Xeon E3-1270), and this may be stating the obvious, but dang is it faster...
I have a little benchmark program I wrote to test a simple C library of data structures; it normally took 10 seconds on my old machine and now completes in just under 4. If I launch eight of them at once (two per real core), the hyper-threading lets them complete in just under 6 seconds apiece. Wow!
With my spare cycles, I normally run GIMPS work units, which is a distributed prime number search that consists of heavy floating point operations. The new machine processes work units about 6 times faster than the old one. I'm assuming the new AVX instructions are mostly responsible.
Which brings me to an awkward problem... I normally run 4 threads of GIMPS at a niceness of 19, which usually translates into very little CPU usage, and I just let it run, typically without noticing any performance hit. But with hyper-threaded cores, the Linux scheduler notices that I've got spare CPUs, and happily schedules them with GIMPS tasks. So what ends up happening is that I get my interactive workload on a core and a GIMPS thread on its hyper-peer, and they split CPU roughly 50% (modulo the hyper-threading boost). That kind of hurts, but I also hate to turn off hyper-threading since it was a 30% throughput boost when I do have real multi-threaded workloads.
At the moment I'm just living with it, and turning off GIMPS if I run into a situation where I care, but that kind of sucks. I tried playing with cgroups, but from all I can tell, they're not really designed for this. You can use them to limit things to a percentage of the CPU when you're under load, but when there are spare cores they optimize for throughput, and schedule things anyway.
After digging through the Internet, I finally found a utility called cpulimit that is almost what I want. It'll let me limit the total CPU of a process tree to X% of a core, so I can limit GIMPS to 3 cores and leave one completely spare, but I still have to manually adjust it if I want to give it my whole CPU while I'm gone.
What I really want is for it to check my load average, and automatically scale the limit it's applying to GIMPS up and down based on how busy my system is. After pulling the source for it, I think I know how to make it do that, so sometime when I get really motivated I'm going to give it a try.
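
The rough idea, sketched here as a wrapper loop around cpulimit rather than a patch to it (the process name and thresholds are placeholders, not what I'll actually end up using):

#!/bin/bash
#Scale the cap on GIMPS up and down with the 1-minute load average.
GIMPS_PID=$(pidof mprime)    #assuming mprime is the GIMPS client process

while true; do
    load=$(cut -d ' ' -f 1 /proc/loadavg)
    if [ "${load%%.*}" -ge 1 ]; then
        limit=300    #machine is busy: leave a core completely spare
    else
        limit=400    #machine is idle: let GIMPS have the whole CPU
    fi
    cpulimit -p "$GIMPS_PID" -l "$limit" &
    cl=$!
    sleep 60
    kill "$cl" 2>/dev/null
done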
Among the other fun things I've learned while setting up this machine are what happens when you don't have your static /dev set up right (the kernel can't launch init...), what happens when you don't have a /run folder (your hostname never gets set, and you can't successfully halt or reboot, among other things), and that Gentoo live CDs haven't set up the net.eth0 symlink by default since at least January. Oh, and don't dd your disk while you have it mounted... It doesn't end well.
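
(The net.eth0 fix, for anyone else chasing it, is just the usual symlink to the net.lo script, assuming I have the Gentoo convention right:)

cd /etc/init.d && ln -s net.lo net.eth0 && rc-update add net.eth0 default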
I have also now become well-acquainted with NewEgg's return policies, having sent them back a bad motherboard, and a set of four 8-GB RAM chips (which I mistakenly ordered despite the fact that registered memory wasn't compatible with my motherboard). Props to NewEgg for painlessly taking it all back.  (Update: NewEgg has since refused to accept my motherboard back... :-(  )

I'd like to think this experience has made me wiser, but if nothing else, at least I now have a faster computer. :-D

20110612

Ubuntu 11.04 "Window Grippers"

In case anyone else was as annoyed by the Ubuntu 11.04 "Window Grippers" (aka the obnoxiously large resize handles), instructions to disable them can be found here.

Saving the world, one obnoxious UI change at a time...

20110524

grep 2.8

So I was sitting around today, lamenting the fact that grep isn't multithreaded, and decided to actually do something about it.  Since all of the appropriate Google searches just seem to list other people lamenting the same fact, I started messing around with things.  Turns out, I can get a 20-50% speedup just by doing this:

find $files -type f -print0 | xargs -0 -n10 -P8 grep -H $ARGS "$pattern"

(Word to the Wise: There is also a similar, but very wrong way to do it...)

find $files -print0 | xargs -0 -n10 -P8 grep -H -r "$pattern"

(Without -type f, find also hands every directory to grep, and -r then re-scans those whole subtrees, so files get searched more than once and matches show up multiple times.)

 So after tuning the threading numbers there, and feeling pretty good about myself, I decided to look up the mainline GNU Grep source, and see if there wasn't something I could do to help.  So I pull up the current source, and upgrade my box to use grep 2.8 instead of Gentoo's stable 2.5.4, and Holy Snapdragon!  (Yes, I actually said that...)  I got a 10x-60x speedup on the workloads I was testing.  Yes, you read that right.  My tests that were taking 48-80 seconds, were now finishing in 1-2 seconds.  I dunno what kinda magic they threw into grep 2.8 but good work guys.  Still no multithreading, but with the new version, I have real trouble just finding a workload that needs it.

So, my Gentoo countrymen, if ever there were a time to use unstable, this is it!

20110507

Skype and Gentoo amd64

I have a weird problem where my skype video intermittently (read mostly) doesn't work.

I found a solution here:
http://papers.ch/?p=94

Apparently it doesn't figure out it needs to load the 32-bit versions of v4l ...
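
If it's the usual 32-bit v4l problem, the common work-around was to preload the 32-bit compatibility library by hand when launching Skype; the exact path below is an assumption for a multilib setup:

LD_PRELOAD=/usr/lib32/libv4l/v4l1compat.so skype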

20110505

Bash-ism

Fun fact of the day: "echo *" and "echo */" do not do the same thing... (with the obvious distinction).
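
(A quick illustration, in a directory containing two subdirectories and a file:)

$ echo *
dir1 dir2 file.txt
$ echo */
dir1/ dir2/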

20110419

Standards of Proof

Standards of Proof in the Internet Age

The Dark Ages
1960?-1970?: Published Work

Birth of the Internet
1970-1979: Electronic Article by a University Professor
ca 1980: a posting on Usenet

The Web
late 1980s - early 1990s: some grad student's personal website
ca 1992: AOL encyclopedia article
ca 1996: a page on GeoCities
late 1990s: corporate sponsored web pages

Social Networking
early 2000s: some guy's blog
ca 2006: a video on YouTube
2007: a Wikipedia article
2009: some celebrity's Twitter feed
2010: a poll of your Facebook followers

Source: My Personal Thoughts, April 2011.
(Some of the above mentioned sources were consulted for historical accuracy... the rest was fabricated.)

20110323

Firefox 4 animations....

I just installed Firefox 4, and the jury's still out, but I needed to disable the tab animations just to make it usable.
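
(For anyone searching: I believe the about:config switch in question is the one below, though double-check the name on your version.)

browser.tabs.animate = false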

What is with people and fancy UI animations these days?  I mean, they're fun to write, and they're cool for about 10 seconds, and then it's just annoying.  Like, after I've clicked a button, or closed a tab, I'm done with it.  I've already moved on, and am paying attention to something else, so if it suddenly starts flashing, or blinking, or sliding, it's completely distracting.

But thank you Firefox 4 developers for adding an option to disable it!  The nice thing about open source is there's usually someone who complains loud enough to get an option added.  :-D

20110102

How to disable middle-click clipboard URL loading in Firefox

It has always annoyed me that if I accidentally middle-click somewhere in Firefox, it tries to load whatever is in my clipboard as a URL.  Granted, I'm not one of the old-school Unix folks who actually uses middle-click to paste things normally, but that's always seemed crazy to me.  I mean, if I want to enter a URL, I click on the address bar, where URLs are normally entered, and not in the middle of the window.  Weird.

At any rate, go to "about:config" and set "middlemouse.contentLoadURL" to false, and this obnoxious behavior goes away.

20101124

Fun with Fedora

I don't want to be the guy who rants all the time, because life's too short to spend it complaining.  But I had such a negative experience that I'm going to share it anyway.  Thumper's Mother can scold me later...

I had to test some things for work on the new shiny Fedora 14 release, and decided that it was the most brain-dead and hard-to-set-up Linux distribution I've ever worked with.  (After some reflection, the only other serious contender was Topologilinux, whose built-in upgrade system used to leave the installation unbootable.  But that was back in 2002, and since they're essentially some fancy installation scripts around Slackware, I'll cut them some slack (no pun intended), as I don't expect them to maintain the entire Slackware repository.)  All I wanted to do was get VMware tools installed properly, and to do that I needed to install gcc and some kernel headers, which sounded simple enough.  I'm willing to overlook (what I perceive as) the grave crimes of any distro that doesn't ship with gcc and the kernel headers included, because I realize that not everyone uses systems the same way I do.

Now, granted, I've never worked with Fedora or RedHat before, and Gentoo's package management system has pretty much spoiled me for life, so I was prepared for a bit of a learning curve to get everything setup properly.  The first task was networking.  I'm behind the firewall-proxy setup at work, so I didn't expect networking to work out of the box.  I'm even willing to overlook the fact that there is no happy graphical utility to set a system-wide proxy configuration.  Nobody else seems to do this either, despite the fact that it would just have to dump two environment variables to a config file.  But, on a modern Linux distribution, and especially on standard hardware environments like a VMware virtual machine, I expect my network devices to come up under DHCP without any user intervention.  So I wasted several minutes trying to play with proxy settings, before I realized that I had the more fundamental problem: eth0 does not start by default.
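
For the record, the system-wide proxy setup really does boil down to a couple of environment variables plus a yum setting; something like this (the proxy host is a placeholder, and the last bit is the "start eth0 at boot" flag I was missing):

#/etc/profile.d/proxy.sh
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128

#/etc/yum.conf
proxy=http://proxy.example.com:3128

#/etc/sysconfig/network-scripts/ifcfg-eth0
ONBOOT=yes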

The next thing I expect from a software system, is reasonably descriptive error messages. If there's a network problem, I expect to see something to the effect of "Unable to connect to fedora.com," or even "Unable to download package database" or even "Plug in your network cable you moron."  But, the message yum gave me was something like "Error: Cannot retrieve repository metadata (repoman.xml)," which just doesn't help me at all.  Now, maybe they tried, since the word "retrieve" is in there, which sounds vaguely networky, but it's not terribly descriptive.  It sounds like the kind of generic error message you get when the programmers were either too lazy to implement proper error handling, or the errors are obscure enough that the programmers could not have reasonably predicted them.  So I went digging around with yum, trying to rebuild package databases, and running obscure maintenance-looking commands, trying to figure out what could possibly be weirdly screwed up on my stock install.  After fixing my proxy settings (aka, copying the environment variables so many places, yum has to get it right), I was confronted with a different error message.  Now yum was listing websites and telling me it was failing to connect with them.  Somehow, after giving yum access to the Internet, it was finally giving me an error message that looked definitively network related.... go figure.  Clearly something else was wrong.  I managed to convince myself that the sites in question really did exist (at this point I was praying they weren't dumb enough to ship with dead mirrors), and that I really could connect to them through the proxy and the firewall at work.

After longer than I'd like to admit, and double and triple checking the network connectivity and proxy settings, I gave up and started digging around on Google.  I finally found some other poor saps looking at the same error message who had managed to find a fix.  Apparently, and for reasons I cannot adequately explain, yum is unable to access HTTPS sites out of the box.  This is, as you might imagine, a severe limitation, when all of the default mirrors include https:// in the URL.  So, I went and hand-edited my repo list, and converted all of the https:// to http://, and prayed that nobody was running a https only mirror.  That finally worked for me, and I was able to start the onerous process of matching up real-world package names like "gcc" and "kernel headers" to the cryptic, numbered formulas that seem to be the best non-Gentoo package managers can offer.  I'm really hoping that this was an issue with the Squid proxy at my work, and that the Fedora folks didn't ship a release that was unable to validate SSL certificates.  For that matter, I'm not completely sure what added security running over HTTPS really gives you.  I'm hoping they checksum the binaries, so the only advantage I see to HTTPS is to make it so that people upstream from me can't tell exactly which packages I'm downloading...
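
(The repo edit itself is a one-liner, assuming the stock repo files live in /etc/yum.repos.d/:)

sed -i 's|https://|http://|g' /etc/yum.repos.d/*.repo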

Meanwhile, I had another problem: yum was deadlocking itself.  Apparently, yum will happily spin for over 10 minutes, trying to acquire some internal yummy lock.  Furthermore, Unix file-locking being what it is, killing the competing processes doesn't release the lock.  So, I had to go Google around for the lock files yum uses, so I can kill all the yums on my system, free the locks, and try again.  (I had by this point accumulated quite a number of hung yums.)  Somehow, this was unsuccessful.  So I logged out, and restarted my X session.  No luck.  I rebooted.  No luck (after reworking the solutions for eth0, and the proxy above).  I finally figured out that whatever system-tray icon their default session launches to inform me of all the wonderful updates I'm missing was also running yum, and hanging in some new and interesting way.  I never solved that one.  But, if I killed it, and all my hung yums, and cleaned all the lock files, finally I could install packages.  However, mistyping a shell command would cause my shell to hang indefinitely... I can only assume it was asking yum what I meant to type, so it could offer to install it for me, but that yum was still horribly broken in some unusual way.

This, is the Linux distribution that gets all the money?  I fear for our novices...

20101003

Fun fact of the day: Ping

Pressing "Ctrl+\" while running ping will display the current statistics without quitting.

You may now return to your regularly scheduled programming...

20101002

Of uids and shell scripts ...

So, I've got this extensive collection of shell scripts that I've written sitting in ~/bin, that I use all the time.  In fact, I like them so much that I wanted to be able to call them while I was root.  But that's a huge security hole, because it means that anyone who has become me and can edit my scripts now has a pretty good chance of getting me to run them as root.  Granted, if someone actually owns my user account, I already have some serious problems, because in all reality I care about my data much more than I care about "root access" by itself.  But, seeing as I wrote all of these scripts myself, there's a reasonable chance that there's some egregious bug in them that I don't want to run as root anyway.

I used to just put my user scripts dir in my path as root, figuring that I'm the only one who's ever me, and I usually have physical security of my box.  But as I started copying my scripts to more and more computers (including my linode, which I don't have physical control of), this started making me nervous.  So eventually I solved this problem by making two copies of my scripts, one to sit in /root and one to sit in ~/bin.  This worked great for security, but now I had to maintain and sync two copies of this stuff, and bug fixes weren't propagating quickly.  But I didn't have a better way to do things.


I was talking to one of my friends yesterday, who told me that he uses "su -c" for that problem.  That lets you run a command as another user, so you can drop root privileges.  If the syntax were a little different, it would be almost exactly what I wanted, but unfortunately I play enough games with arguments that I wanted to be able to pass ./script 1 2 3 'a b c' through correctly, and I couldn't figure out how to do that at the shell level.  I also didn't want to worry about sanitizing arguments with spaces or escape sequences or anything.

So I finally convinced myself that the right way to do this was in C (after a failed attempt to find perl commands that would let me do it).  It actually turned out to be way easier than I expected.  Essentially I just call setuid, pull off the command line arguments, and pass them to exec, and I have a program that runs a command as the specified user.  I ended up pulling some library routines out of the "su" utility to do some environment sanitization, but other than that the code is a few system calls and some error checking.  If I ever clean it up, and make sure I've preserved all the copyright notices, I'll stick it up here.

But here's the gist of it (error checking removed):
Called as "suwrap user command args"

/* Gist only -- error checking removed.  The real version pulls addenv() and
 * its newenvp array out of the "su" source for environment sanitization;
 * setenv()/environ stand in here so the snippet compiles on its own. */
#define _GNU_SOURCE        /* for execvpe() */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pwd.h>
#include <sys/types.h>

#define COMMAND_INDEX 2    /* argv[1] is the user; the command starts at argv[2] */

extern char **environ;

int main(int argc, char **argv)
{
    int x;
    uid_t uid;
    char **args;
    struct passwd *p;
    int argsize;

    /* Look up the target user and point the environment at them */
    p = getpwnam(argv[1]);

    setenv("HOME", p->pw_dir, 1);
    setenv("USER", p->pw_name, 1);
    setenv("LOGNAME", p->pw_name, 1);

    /* Drop privileges to the target user */
    uid = p->pw_uid;
    setuid(uid);

    /* Copy the remaining arguments into a NULL-terminated vector */
    argsize = argc - COMMAND_INDEX + 1;
    args = malloc(argsize * sizeof(char *));

    for (x = COMMAND_INDEX; x < argc; x++) {
        args[x - COMMAND_INDEX] = argv[x];
    }

    args[argsize - 1] = NULL;

    /* Replace this process with the requested command */
    execvpe(argv[COMMAND_INDEX], args, environ);

    /* No one will ever see this... */
    printf("\nDone!\n");
    return 0;
}
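
Building and running it looks something like this (the file name is just whatever you saved it as, and it has to run as root for setuid() to be allowed to switch users):

gcc -o suwrap suwrap.c
./suwrap someuser ./script 1 2 3 'a b c'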


I don't know if I should feel proud or disturbed that I've progressed from writing shell scripts to C code, but I definitely feel that I have crossed a Unix threshold here...

20100607

Another victory over Amarok!

Playlist filtering works!!!!


I was bemoaning my inability to search, when I noticed that editing tags on the previously unhideable tracks made them start working normally.  But only some tags.  I can only assume, for some reason, they were never entered in the amarok database?  I dunno.  It was  a mix of flac and mp3 files that were in there.  Possibly things I've never edited by hand since the reincarnation of my amarok database.  But at any rate, giving them all a star rating makes them work now.

Now maybe I can finally finish tagging this mess...

20100527

Amarok is Usable Again!

Okay, I feel kinda bad after my last Amarok rant. I really do like the thing on a good day, it's just got some weird quirks.

I managed to fix some of my problems today, and they weren't all Amarok's fault.

The intermittent crashing I finally tracked down to a gstreamer bug. I managed to patch it myself, before realizing that it was already fixed in git. At any rate, not Amarok's fault, and not a problem any more.

My playlist filtering has actually been working, I just have this block of 100+ tracks that seem to be immune. I'm going to assume that it can't index them for some reason, and displays them by default. Weird default behavior maybe, but whatever. Removing them from my play list solves the problem, so I'm happy.

I used to have issues with other applications stealing my sound and cutting off Amarok, or vice versa, yay linux sound! This seems to be a xine-lib problem, and switching everything to gstreamer solves it. Ironically, switching to gstreamer forced me to upgrade all my gstreamer plugins to get .m4a files working which exposed the aforementioned crashing bug. At any rate, Amarok's off the hook on this one.

And for the record, the new versions have dramatically improved performance when updating meta-data, though I'll still get the 1-minute CPU-bound hangs every once in a while for some operations. To be honest this really puzzles me, since while I was looking at backtraces from the crashing problem, I noticed it was running 28 separate threads at any given time, most of which were running mysql code. I guess they're not getting a whole lot of parallelism out of them for whatever weird cases I trigger.

But yeah, Amarok's back in my good graces for the moment.

20100526

New Laptop and Linux

I recently bought myself a new Toshiba Laptop, model T135D-S1325 for anyone who's interested.  I'm always impressed when laptop hardware actually works in Linux, and since I couldn't find much on Google, I thought I'd share the highlights.

Whatever Windows 7/Toshiba partitioning setup they put on here caused ntfsresize to break Windows when I tried it.  Not that I really mind, but I tried (and failed) to back up the existing setup in case I ever needed it.

The screen brightness function keys are done in hardware, so they work under Linux.  This is really nice, since on my last Toshiba laptop I had to boot into Windows to change the brightness.

However, most of the other function-keys don't work. No real surprise here, but there doesn't seem to be any other hardware wifi switch, which is mildly annoying.

I haven't tried the webcam, and maybe never will. I don't trust them...
[Edit: I have since gotten the webcam working. It was also fairly painless.]

My X server came up on HAL with hardly any configuration on my part, but the synaptics touchpad was a little bit of a pain.  I'm not sure if I'm doing something wrong, but I can't seem to initialize the thing right so that tapping the pad sends a mouse click.  But I can get it to work manually by running synclient TapButton1=1, so I just set that to run as part of my XFCE startup and I'm off and running.
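
(One way to do the XFCE startup bit is a standard freedesktop autostart entry, e.g. a hypothetical ~/.config/autostart/synclient.desktop like this:)

[Desktop Entry]
Type=Application
Name=Touchpad tap-to-click
Exec=synclient TapButton1=1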

There also doesn't seem to be a way in the BIOS to require a password to boot off of a flashdrive. I set an "administrator password" (to change BIOS settings) only, because otherwise the "user password" needs to be entered every boot to access the hard disk, so it's possible that would do it for me. At any rate, after digging around on the internet, it sounds like there's a master Toshiba password that'll clear it all out anyway, so I suspect I am vulnerable to flashdrive booting no matter what. Such is life.

The only real challenge was the wireless card, which took me several days to finally nail. The wireless card does not currently have a driver in the mainline kernel.  It shows up as this in lspci.

09:00.0 Network controller: Realtek Semiconductor Co., Ltd. Device 8172 (rev 10)

After splicing together several howto guides, and the terribly written readme in the driver itself, I finally managed to get it working. Getting wpa_supplicant working with it was even harder. Apparently over several versions of the chipset/driver they switched the wpa interface from ipw2200 to wext. The readme tries to explain as much with a step-by-step guide that's on par with the IRS tax code in clarity. Here's what finally worked for me:

Get the driver from the Realtek Drivers page.  The driver is "RTL8192SE" (despite the fact that it doesn't match the device ID in lspci...).  Extract it somewhere and run make install.  Every time I update my kernel the make install fails, and I have to manually create the /lib/modules/2.6.32-gentoo-r7/kernel/drivers/net/wireless directory before make install works.  I also had to manually copy the contents of firmware/RTL8192SE into /lib/firmware/RTL8192SE.  You can do it yourself without make install, but you have to copy the module into /lib/modules and run depmod yourself.  At this point, it kind of worked, but I would get weird error messages.  Apparently the driver needs a bunch of library routines in the kernel's wifi code, specifically the 802.11 TKIP routine and one of the MAC algorithms.  I just went through and turned on as much as I could and it eventually worked.
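
Roughly, the manual dance looks like this (the kernel version is from my setup; run it from the extracted driver source, and adjust paths to taste):

mkdir -p /lib/modules/2.6.32-gentoo-r7/kernel/drivers/net/wireless
make install
cp -r firmware/RTL8192SE /lib/firmware/    #firmware the driver loads at runtime
depmod -a    #then modprobe whatever module the build produced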

Use the following settings for wpa_supplicant: (This highly depends on specific chipset and revisions!)

wpa_supplicant_args="-c /etc/wpa_supplicant/wpa_supplicant.conf -D wext -i wlan0"

And that's it. Other than the wifi card, the whole setup was much less painful than my last laptop. Linux has come a long way...

20100505

Adventures in buffer underflow...

I wanted to play around with encrypting my swap partition recently, and was impressed by how easy it was in Gentoo.

I add 2 lines to /etc/conf.d/dmcrypt
swap=crypt-swap
source='/dev/sda2'

and change my fstab from /dev/sda2 to /dev/mapper/crypt-swap and I was good to go.  The Gentoo dmcrypt init scripts automatically generate a random key, and format the partition during boot.
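
(The fstab change is just the device name, e.g.:)

#/dev/sda2              none  swap  sw  0 0
/dev/mapper/crypt-swap  none  swap  sw  0 0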

Then the fun started... I just wanted to make sure it was working, but the only way to do that is to use up enough memory that my system starts swapping.  Not such an easy task when you have 8 GB of ram.  After a few failed attempts, (even Firefox doesn't use that much memory), I figured that decompressed image files use lots of memory, and decided to open an entire pictures folder with GIMP.  A few hundred windows later, I managed to crash my window server, and still not use enough memory.  Restart X.  Now I was annoyed and determined.  I'm a programmer, I figure I know how to use up memory.  So I wrote a perl script that looks like this:

#!/usr/bin/perl
@temp = <STDIN>;

Let's just load entire files into RAM.  Seems simple enough.  But I needed some big files.  The biggest thing I could find on hand was a 2 GB avi video.  So I cat that into my script, and it doesn't use up memory fast enough.  So I decided to run 3 of them simultaneously.  That's gotta chew up some serious ram, right?  Well, in your memory display, you've got all these fields, like total, used, free, cached, and then  you've got this little neglected field called "buffers."  I've never really paid attention to it before, but apparently, when you run out of it, you have a kernel panic.  (At least I think you do.  As much flack as I give Windows for BSODing, at least they reliably display an error message when they crash.  If you kernel panic in X, everything just stops...)  The last thing I saw before the screen stopped updating was top displaying my buffer memory as 128k.

So here's what I think happened in retrospect.  Perl is a text processing language.  So when I read all of STDIN into an array, it's reading things one line at a time.  Makes sense, right?  But when I decided to send an avi file down the pipe, I'm going to assume that avi files don't have line-ends very often.  So somewhere in my I/O chain, something was buffering gigantic amounts of data looking for a line-end.  Either that, or most of the file got memory mapped (multiple times?), and that uses buffer space.  I don't really know what layer actually uses that memory, but apparently it's important.

So, I think the crypto stuff worked fine, but I couldn't verify it, so I just gave up.  If someone actually has access to my swap partition I'm pretty much pwned anyway. It's not worth the instability.

So the moral of the story?  Don't do stupid things on a massive scale.

Unix-isms

I recently read "The UNIX-HATERS Handbook" (which I highly recommend), and learned about a bunch of the deficiencies of Unix.  To be fair, I figured I'd check out the other side, so I found a copy of the "Unix (R) System V Release 4.2 Documentation" in my local library.  The following is taken specifically from the "Programming with Unix System Calls" volume.


A few gems:
  • The difference between /sbin and /usr/sbin
    • "The /sbin directory contains executables used in the booting process and in manual recovery from a system failure."
    • "/usr/sbin: This directory contains executables used for system administration."
    • Reading between the lines here, I think /sbin used to be mounted on a separate partition, for when your file system or disk failed.  There's actually a lot of weird directory-structure Unix-isms that hark back to either the lack of stable file systems (journaling was a revolution in disk stability), or the fact that Unix used to handle running out of disk space particularly poorly (and still does as far as I can tell).  But it's still better than what it does when it runs out of memory.  One of the great testaments to the infrastructure that is Unix is that killing arbitrary processes was the most stable option they had when running out of memory.  But I digress...
  • /var is the directory for "files and directories that vary from machine to machine."
  • I used to think Unix was a fairly secure system until I read the UNIX-HATERS Handbook (granted, the adoption of ACLs and the invention of SELinux have improved things slightly).  I guess back in the day (you know, when Unix was being invented, back in the '70s and '80s) the big security paradigm was "Trusted Computing."  The idea being, you only gave privilege to a set of "trusted software" which was thoroughly audited.  This sounds great, assuming you can actually perform these kinds of audits reliably.  And it certainly can't hurt to try to enumerate what you trust and to what extent.  But, even assuming a lack of malice in your trusted set, one bug in something privileged is enough to do you in.  Especially in Unix, where there is "one root to rule them all," and anything installed with setuid root has the potential to open a root shell and do whatever it wants.  So it is telling that the entire Appendix devoted to security is called "Guidelines for Writing Trusted Software," which I can summarize as "Don't give things root access."  A neat trick from the UNIX-HATERS Handbook is to mount a floppy with a setuid binary...
  • These guys seemed to worship I/O Streams.  Granted, they were new, and kinda neat back then, but come on.  Once you move out of the realm of tape-drives and modems, things get kinda hairy even for things like terminals and, heaven forbid, files.  I quote from the section on file access, "A file is an ordered set of bytes of data on a I/O-device.  The size of the file on input is determined by an end-of-file condition dependent on device-specific characteristics.  The size of a regular-file is determined by the position and number of bytes written on it, no predetermination of the size of a file is necessary or possible."  Okay, I'll grant that it's nice not to have to worry about file sizes, but making it impossible?  Some input streams have finite length, and sometimes you'd like output streams to have a finite length.  Being proud of the fact that you support neither seems mildly crazy.  They're also quite proud of the fact that in Unix, files are "nothing more than a stream of bytes" (paraphrased, I couldn't find the exact quote), and that Unix imposes no record structures.  (Old school operating systems used to make you read or write to your file in X-byte chunks (records) only, because they were stored on disk this way.)  Again, it's great that they have the capability for arbitrary file structures, but (and correct me if I'm ignorant), I thought that record-structures were imposed as an optimization, for when you were doing record-oriented I/O.  Otherwise, the OS can split and fragment your file wherever it feels like, instead of at a record boundary.  But maybe advances in file systems and disk-drives have rendered this point moot.  I'm too ignorant to say.

Those were the highlights.  It's amazing to think how far computers have come in 30 years.