Sunday, September 29, 2013

KDE screensaver in Ubuntu 13.04, part 2: disable locking

Some security updates upgraded kde-workspace-bin, which upgraded /usr/lib/kde4/libexec/kscreenlocker_greet, restoring the infuriating behavior I've previously described: namely, that the screen locker runs after ~4 seconds of idle.

It's still not entirely clear to me what's happening, but it looks like the scenario goes something like this: ksmserver runs kscreenlocker_greet all the time. kscreenlocker_greet looks like it examines the configuration file(s) $HOME/.kde/share/config/kscreensaverrc and/or /usr/share/kde4/config/kscreensaverrc for settings and triggers off of those.

This can't be how it's supposed to work, because kscreenlocker_greet is only run once at the beginning of each idle period. I've verified this by replacing kscreenlocker_greet with this script:

#!/bin/sh
echo "$(date) - start: $0 $*" >> "$HOME/kscreenlocker_greet.log"
exit 0
I can then "tail -f" the log file and watch as KDE tries to lock my screen.

Fortunately, kscreenlocker_greet appears to read the kscreensaverrc configuration file, which I've modified to read as follows:
Enabled=false
LegacySaverEnabled=false
Lock=false
PlasmaEnabled=false
Saver=krandom.desktop
Timeout=1800
What all of those mean I cannot say, but "Enabled=false" and "Lock=false" seem to be helpful.
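Since upgrades keep rewriting the binary but leave this file alone, the settings can be reasserted non-interactively. Here's a sketch that works on a scratch copy so it's harmless to run; to apply it for real, point rc at $HOME/.kde/share/config/kscreensaverrc instead (the demo filename is my own invention):

```shell
# Work on a scratch copy; point rc at the real kscreensaverrc to apply.
rc=./kscreensaverrc.demo
printf 'Enabled=true\nLock=true\nTimeout=300\n' > "$rc"

# Force the two keys that seem to matter
sed -i -e 's/^Enabled=.*/Enabled=false/' -e 's/^Lock=.*/Lock=false/' "$rc"
cat "$rc"
```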

The part that's still troublesome is that while the screensaver is being activated, my keystrokes are being consumed, so it appears that I'm always dropping letters as I try to type.

Whatever's going wrong is doing it in ksmserver, and that program, naturally, has no man page. For now, this config file change should prevent the annoyance from recurring after an upgrade. I'll leave digging into ksmserver for another day...




Wednesday, September 18, 2013

Fix to prevent line breaks at hyphens in HTML

The problem

When writing HTML you may want to include something like a phone number in the form 1-800-555-1212. The problem is that this might be displayed as:
... 1-800-555-
1212...
This is probably not what you want - you'd prefer the entire thing to appear on one line.

Solution(s)

Non-breaking Hyphen

This is the most obvious logical solution - we have non-breaking spaces (&nbsp;) so all we need is a non-breaking hyphen, right? Well, not quite.
First, there is no standard named entity for a non-breaking hyphen. Instead, this is a character defined in the Unicode specification as character #8209 (U+2011). If you want to use this in your HTML you can write either &#8209; or &#x2011;. The problem here is the font - most fonts probably don't include this character and will render it as some box with either digits or punctuation inside. What other options are there?
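For completeness, here's what the phone number from the introduction would look like with the entity; whether it renders as a hyphen or a box depends entirely on the font:

```html
<!-- Each hyphen replaced with the non-breaking hyphen, U+2011 -->
<p>Call 1&#8209;800&#8209;555&#8209;1212 today.</p>
```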


CSS white-space

This is the most correct answer. It requires two steps. First, define a CSS class:
.nobreak { white-space: nowrap; }
Then use it in your HTML as:
<p>Lorem ipsum <span class="nobreak">1-800-555-1212</span>.</p>
The especially nice thing about this is that it's generic and will work equally well with:
<p>Dolor amit <span class="nobreak">1 (800) 555-1212</span>.</p>
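As a side note, if defining a class feels like overkill for a single occurrence, the same declaration can be applied inline - equivalent in behavior, just not reusable:

```html
<p>Lorem ipsum <span style="white-space: nowrap;">1-800-555-1212</span>.</p>
```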


My hack: super-under

And just in case you're curious what kind of craziness I can come up with, here's my third possible solution:
<p>Sibiliy si emgo: 1<sup>_</sup>800<sup>_</sup>555<sup>_</sup>1212.</p>
Yes, that's right: Take an underscore character ('_') and superscript it. A work of genius I say ...

Tuesday, September 03, 2013

Moving virtual hard disks in VirtualBox

Under Linux, VirtualBox likes to put its virtual disks in a relatively inconvenient directory. Newer versions are more obnoxious and have removed the ability to easily manage your virtual disks. Most of the information I found online is old and does not apply to VirtualBox 4 or 4.2 (the specific version I have, shipped with Ubuntu 13.04), or may work but just seems too hard, so here's a nice little recipe:

To move a virtual disk from point A to point B:

  1. Ensure the virtual machine(s) owning the disk are powered off.
  2. Open the settings for the virtual machine(s) owning the disk.
  3. Go to "Storage", select the disk, and click the Remove icon. Then click OK.
    • Repeat for each associated VM as necessary
  4. From the main VirtualBox panel, select File->Virtual Media Manager
  5. Click on the Virtual Disk image, verify the Location, and that it is not attached to any VMs.
  6. Click the Remove icon along the top *
  7. Click Close.
  8. Now, from the commandline or file manager, move the newly-freed VirtualImage.* files from point A to point B.
  9. Finally, back in VirtualBox, select the virtual machine configuration and select "Settings".
  10. Go to "Storage" and select the Controller where the virtual disk used to be.
  11. Click on the Add Attachment icon and select Add Hard Disk.
  12. Select Choose existing disk, navigate to point B, and select your virtual disk.
If you try to skip steps 4-7, VirtualBox will complain about adding a disk image with an already existing UUID, so that image has to be removed from the Virtual Media Manager first. If you have multiple VMs attached to the same disk this can get tedious, but it's better than the other suggestions I've read, including cloning the disk or VM, or even removing the virtual machine itself (including all your settings!).

This also has the nice benefit that *you're never going to be prompted to delete the files from disk. (And if I'm misremembering that and you are prompted, be sure to select NO.) You may be able to select "Detach" from the Virtual Media Manager instead of going through each configuration, but I'd recommend doing it the hard way so you can take notes and verify what you're doing as you do it.
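For the command-line inclined, the same remove/unregister/move/reattach dance can be sketched with VBoxManage. The VM name ("MyVM"), controller name ("SATA"), ports, and paths below are placeholders you'd substitute for your own setup, so treat this as an untested outline rather than a recipe:

```shell
# 1. Detach the disk from the VM (power it off first)
VBoxManage storageattach "MyVM" --storagectl "SATA" \
    --port 0 --device 0 --medium none

# 2. Unregister the image so VirtualBox forgets its UUID
VBoxManage closemedium disk /old/path/disk.vdi

# 3. Move the file, then reattach it from the new location
mv /old/path/disk.vdi /new/path/disk.vdi
VBoxManage storageattach "MyVM" --storagectl "SATA" \
    --port 0 --device 0 --type hdd --medium /new/path/disk.vdi
```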

Tuesday, August 20, 2013

Crash course for CVS users switching to git

I've used CVS for a long time and have been generally happy with it. Some of its quirks have caused me to consider subversion, but none were really pressing enough for me to do so. I've tried to use git in the past - honestly, I have - and have always ended up extremely frustrated. But I have crossed the mental Rubicon and wanted to share what was so difficult for me to grasp, in case there are any dinosaurs like me still out there.

How I use(d) CVS

It is probably best to describe my usage of CVS first. CVS has a model of a single central repository, and multiple simultaneous editors. Each checkout by an editor is referred to as a 'sandbox'. Editors work in their sandbox and commit to the repository.

One additional thing to note is that CVS has a concept of 'tags', which is a way to label a snapshot of all the files in the repository (it could also apply to a subset, but we're keeping this simple). A quirk here is that a "tag -b", or "branch tag" is a label that can be used to denote a separate line of development.
This model has allowed me to use the following workflow:

  • Create some product and release version 1.0
  • cvs tag -b VERSION_1_0
    • The name format is another CVS quirk, but this branch tag lets me not only label the files that make up version 1.0, but create a branch point where I can develop new features.
  • Begin working on version 2.0. As features get completed, commit to the (main branch of the) repository. NEVER commit a broken configuration as this will upset other developers when they check out the code.
  • A bug is reported in version 1.0
  • Go to a different directory, and cvs checkout -r VERSION_1_0
    • This new sandbox will be devoted to identifying/fixing this bug.
  • Fix the bug, and cvs commit it. Then cvs tag BUG_1001.
    • This effectively creates VERSION_1_0 / BUG_1001.
  • Continue until you release and branch tag VERSION_1_1.
    • Always do the tag in a clean sandbox (with a fresh checkout) to ensure it functions.
  • Meanwhile, the other sandbox can continue developing VERSION_2_0
  • Prior to release of version 2.0, review bug fixes to see if they are applicable.
    • cvs merge BUG_1001, etc. as necessary.
  • Delete any sandboxes used for failed experiments or resolved bugs
This is an iterative process that gets more involved with more people and features, etc, but the key is that simultaneous development happens, each in its own sandbox.

This is not how git wants to work.

Development under git

There are two key things to recognize when developing with git:

  • Discard the idea of a sandbox
  • Discard the idea of a commit

I'll also add:

  • Discard the idea of a repository

In git, most frustratingly to me, the 'sandbox' and the 'repository' are gone. Instead, there is only your work area. When you commit files, they go (by default) into the .git directory in your work area. If you remove your work area, you remove your repository (the .git directory). You must be completely disabused of these CVS ideas in order to use git well.

Also in git, branches are not 'clean'. Under my CVS workflow, a sandbox was a branch. I would commit what I wanted and delete the entire sandbox after I'd verified my commit. Any stray files created would be removed. In git, any files created in your work area will be carried around from branch to branch as you check out.

The key idea of git that you have to get used to is what is called the 'index'. You can think of the index as your commit target. When you are happy that your code doesn't break things, you stage it in the index. Think of this as a lightweight 'commit':

  • Create files onefish.txt and twofish.txt
  • Modify files redfish.txt and bluefish.txt
  • See these files and see that they are good.
  • git add '*fish.txt'
The result of 'git add' isn't just "Hey, add these files", like it is in CVS -- instead, it's "Hey, I want these changes to be staged in the index". When this command completes, the 'index' has been updated, just as though you had done a 'commit' to it. Then, a 'git commit' pushes the changes in the index into the repository. The files in your work area are not a part of this equation AT ALL.

If you edit README, 'git add' README to the index, edit README again, and then 'git commit', you are committing the first set of changes because the second set was not staged before the commit. I suspect many people will use the 'git commit -a' form of the command by default, which does an implicit 'add' to the index of all modified and removed files, but not new files not yet 'git add'ed. This actually makes good sense because, as I noted above, git branches are not clean.

I don't like the 'commit -a' form as it seems too risky at this time -- I prefer to manually 'git add' each change before the commit. But sometimes I forget to add a new file that I created. Fortunately, git makes this really easy to fix: 'git commit --amend'. No matter how many things you got wrong with the last commit, you can just fix it like this. Add new files, new changes, fix the comment, etc. That's really handy.
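The staged-snapshot behavior described above is easy to verify in a throwaway repository (the paths and commit identity below are made up for the demo):

```shell
# Demonstrate that 'git add' stages a snapshot: edits made after
# the add are NOT included in the commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name  "demo"

echo "first edit" > README
git add README                   # snapshot of README goes into the index
echo "second edit" >> README     # modified after the add -- not staged

git commit -q -m "commit README"
git show HEAD:README             # prints only "first edit"
```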

So this is a workflow that I'm happy with for now. Creating a branch is a simple 'git checkout -b branchname', and switching branches is the same without the '-b' parameter. Crap in your work area remains in your work area, and the index remains as well. Some may like this because you can commit to a different branch than you started out on, but CVS has a simpler "cvs commit -r branchname" (and git probably does too).

The problem I have is the crud that persists in your work area as you switch branches. Technically I don't know where else it would go, but it seems bad form to me to carry this around. What do people do here? Do you check this in-progress and probably-won't-compile code into the branch while you switch to work on a bug? Is "never commit something that doesn't work" no longer a valid rule? Or do you leave it and 'git clone' the repository into another 'sandbox'?
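One standard answer to the questions above - not part of a CVS user's toolbox, so worth spelling out - is 'git stash', which shelves uncommitted changes so the work area is clean while you switch branches. A self-contained sketch (repository and file names invented for the demo):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name  "demo"
echo "v1" > app.txt
git add app.txt && git commit -q -m "base"

echo "half-finished work" >> app.txt   # in-progress, won't-compile code
git stash                              # work area is now clean ...
git checkout -q -b bugfix              # ... so switching branches is safe
git checkout -q -                      # back to the original branch
git stash pop                          # the in-progress edit comes back
tail -1 app.txt                        # prints "half-finished work"
```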

Using 'git clone' seems like an acceptable solution, except it makes me ill at ease to 'rm -rf' the directory because it carries around the entire repository in its .git folder. And without a central server, there is no backup if you make a mistake and accidentally remove the last copy of the project's .git folder. Under CVS you may lose your changes, but in git you lose everything.

So I've set up git as a server which I'm using as my master repository and developing in multiple "sandboxes" cloned from the central repository, using minor feature branches in each work area as I go. When I believe I am feature complete, I push to the server as my final 'commit' and, if appropriate, 'tag'.

Once CVS users can get into the flow of adding to the index as a lightweight commit, the rest of git is fairly intuitive. Though I clearly still have much more to learn, I think that this fundamental understanding will make me a very happy git user.

Next project: do something about Blogger's horrid editor and/or my blog styling.

Thursday, August 15, 2013

Part 2: Create a SOAP::Lite Server that uses Basic Authentication for password verification

Today is a better day than yesterday (see yesterday's part one of this post for details). Despite Google appearing to be ever more useless in the chaff that is today's WWW, and Bing no better, I've made some good progress with my SOAP experiment. It turns out the key to my next problem was solved in 2004. But let's recap:

I want to write a SOAP server. I want to authenticate clients somehow - username and password are a good start. I want every request to be authenticated, and a Basic Authentication Realm will work just fine.

Why make every request authenticate? Because HTTP is a stateless protocol. When you "log in" to a website, you really are just requesting a cookie. This cookie is a magic number that is stored on the server and means "your session". When you load another page on that same site, the browser sends that cookie along as well. But the server still has to authenticate the validity of that cookie because it will be sent on the next page load, whether that's made in 2 seconds or 2 years. Basic Authentication is similar, except it's built into the browser. This has downsides, though, the biggest being that your browser asks you to log in, so the website developer can't make a fancy branded prompt. But I digress...

The second issue I will run into is that I want to be able to retrieve those credentials from the class methods being called. There are a number of reasons to do this: perhaps different clients will get different responses, or perhaps I want to log some AAA data. This is where I need to search back to 2004.

So let's revisit with some source code.

The SOAP server

  • soapDaemon.pl

    use strict;
    use SOAP::Transport::HTTP +'trace';

    # don't want to die on 'Broken pipe' or Ctrl-C
    #$SIG{PIPE} = $SIG{INT} = 'IGNORE';
    $SIG{PIPE} = 'IGNORE';

    #my $daemon = SOAP::Transport::HTTP::Daemon
    my $daemon = BasicAuthDaemon
          -> new (LocalPort => 8000, Reuse => 1)
          -> dispatch_to('myClass')
          ;

    print "Connect to SOAP server at ", $daemon->url, "\n";
    $daemon->handle;

This is the same code that can be found in the documentation all over the internet. I've added tracing and a 'Broken pipe' handler (leaving Ctrl-C alone so the server can still be killed), but this should be straightforward to understand. Again, note that I am replacing the default HTTP::Daemon with my custom BasicAuthDaemon. We'll run on port 8000 and send requests to the 'myClass' object.

The modified HTTP::Daemon class

  • soapDaemon.pl

    package BasicAuthDaemon;
    use strict;
    use warnings;
    use MIME::Base64;
    use SOAP::Lite +'trace';
    use SOAP::Transport::HTTP ();
    our @ISA= 'SOAP::Transport::HTTP::Daemon';

    sub handle {
        my $self = shift->new;
        while ( my $c = $self->accept ) {
            while ( my $r = $c->get_request ) {
                $self->request($r);

                # guard: request may lack an Authorization header, and
                # "my (...) = ... if COND" is undefined behavior in perl
                my ($type, $creds) = split /\s+/, ($r->headers->authorization || ''), 2;
                $type ||= '';
                my ($user, $pass) = ('', '');
                ($user, $pass) = split /:/, decode_base64( $creds )
                    if $type eq 'Basic';
    print "type: [$type], user:[$user], pass:[$pass]\n";

                if( $user eq 'user' and $pass eq 'password' ) {
                    $self->{'auth'} = "123123123";
                    SOAP::Transport::HTTP::Server::handle $self;
                    #$self->SUPER::handle;
                } else {
                    $self->response( $self->make_fault(
                          $SOAP::Constants::FAULT_CLIENT, 'Authentication required',
                          'Give authentication credentials for Basic Realm security'
                      ));
                  }
                  $c->send_response( $self->response );
              }

    # replaced ->close, thanks to Sean Meisner <Sean.Meisner@VerizonWireless.com>
    # shutdown() doesn't work on AIX. close() is used in this case. Thanks to Jos Clijmans <jos.clijmans@recyfin.be>
              $c->can('shutdown')
               ? $c->shutdown(2)
               : $c->close();
              $c->close;
        }
    }
    1;
This code is in the same file as the server code above. I actually have this as the top of the file but it takes a bit more explaining. Firstly, this is still just test code because you're unlikely to use HTTP::Daemon in production. I think this actually gets easier in production because the handle() method doesn't have a service loop where requests have to be processed. See here for some clarification. 

However, what I've done is gone into the file perl5/SOAP/Transport/HTTP.pm and found the handle() method within the SOAP::Transport::HTTP::Daemon package, and copied it into my code. I've left the original comments and expanded formatting to show that this is a copy-paste and not my own code. My additions are bold and removals are italic.

My added logic takes the HTTP request and references into the authorization headers that are sent for the Basic Authentication mechanism. You can use a similar routine to extract and verify cookie data, etc.

If the username and password match, I jam a value into $self, which is a SOAP context. I'm simply creating a hash key called 'auth', which could be enhanced. Is there a 'stash' in this architecture? I don't think so... Should I call it something more unique like 'myAppAuth'? Probably. But this is where we begin.

Aside from adding my custom logic, one more modification was necessary: remove the call to $self->SUPER::handle. This is necessary because our superclass has the same event loop and will just hang waiting for another client. Instead we need to call the superclass of our superclass, which is SOAP::Transport::HTTP::Server. I pass $self as the first argument because this is how perl seems to do OO things, and it works nicely.

Finally, if the credentials don't match, I return a SOAP error.

The client (in perl)

  • soapClient.pl

    use strict;
    use SOAP::Lite +'trace';

    my $soapClient = new SOAP::Lite
        uri => 'http://example.com/myClass',
        proxy => 'http://user:password@localhost:8000/',
        ;

    my $result = $soapClient->hi( 'this is how the world ends', 'kabaam' );
    unless ($result->fault) {
        print "\nresult: [" . $result->result . "]\n\n";
    } else {
        print join ' --=-- ',
        $result->faultcode,
        $result->faultstring,
        $result->faultdetail, "\n";
    }

This is the same client as before. Again, the uri is the method being called and the host name appears to be ignored. The proxy is the actual web server, and is where we place the username and password.

The client (in PHP)

  • soapClient.php

    $client = new SoapClient(null,
        array('location' => "http://user:somepass@localhost:8000/",
              'uri'      => "http://test-uri/myClass",
              'login'    => "some_name",
              'password' => "some_password",
        ));

    $res = $client->hi( 'from php!' );
    print "got: $res\n";

Here's the same SOAP client written in PHP for comparison. Placing the username and password in the URL doesn't work in PHP, so these have to be explicitly added. Note again that the server doesn't seem to care about the server portion of the URI. On error, PHP throws an exception.

The SOAP class

  • myClass.pm

    package myClass;

    use strict;
    use vars qw(@ISA);
    @ISA = qw(Exporter SOAP::Server::Parameters);
    use SOAP::Lite;# +'trace';

    sub hi {
        my $evp = pop;
        my $context = $evp->context;

        my ($name, @args) = @_;
        print "[$name] got arguments: [@args]\n";

        return "hello, world, auth user=$context->{'auth'}";
    }

    sub bye {
        return "goodbye, cruel world";
    }
    1;

Finally, here's the class handling the SOAP calls. Not being a perl expert, there is magic happening in here that I don't fully understand.

First, this code must exist in an external file. I surmised that this could be placed in the same file as the daemon code, and it does work for how I was using it yesterday. However, this version is a subclass of SOAP::Server::Parameters (doesn't that seem like how it should work?)  For reasons I don't understand, the subclassing doesn't work if this code is in the same file, even if I place it in the BEGIN{} block. If anyone knows why, I'd love to hear an explanation.

So now that we're a subclass of SOAP::Server::Parameters, we automatically get a new parameter added as the last argument to every function call we make. We pop this value off the end of the parameter stack, saving it as our environment pointer. This is really a SOAP::SOM object but what I want is the original SOAP context. Fortunately, this is easy to access through the ->context method.

Once I get the context, the 'auth' hash I created is readily available, and I return it in the call to hi() for verification. Fortunately, it works like - and appears to be slightly - magic.


Wednesday, August 14, 2013

Create a SOAP::Lite Server that uses Basic Authentication for password verification

Today was a rough day. I'm writing a SOAP service in perl. The basics required to make this work are fairly straightforward, but, as usual, the documentation and I can't seem to understand one another very well. As far as I can tell, this recipe exists nowhere on the internet.

Let me start with the basic example. This is what a standalone server looks like:
    use strict;
    use SOAP::Transport::HTTP;
    # don't want to die on 'Broken pipe'
    $SIG{PIPE} = 'IGNORE';
    my $daemon = SOAP::Transport::HTTP::Daemon
        -> new (LocalPort => 8000, Reuse => 1)
        -> dispatch_to('class::method')
        ;
    print "Contact to SOAP server at ", $daemon->url, "\n";
    $daemon->handle;
How much simpler can this get? We use the required SOAP module, set up a signal handler, and create an HTTP daemon. We set it to run on an unprivileged port (8000) and enable socket reuse. It exposes a single method that can be called from the 'class' package. We print out the URL and start handling requests.

If you're really just starting out, this should work to define the 'class::method' handler, for reference:
    package class;
    use strict;
    sub method { return 'Hello, world!' };
Go ahead and put it at the bottom of the file with the daemon code.



Now, to test this service you also need to write a SOAP client. Here's that code:
    use strict;
    use SOAP::Lite;
    my $soapClient = new SOAP::Lite
        uri => 'http://example.com/class',
        proxy => 'http://user:password@localhost:8000/',
        ;
    my $result = $soapClient->method( 'some_args' );
    unless ($result->fault) {
        print "\nresult: [" . $result->result . "]\n\n";
    } else {
        print join ' ** ',
            $result->faultcode,
            $result->faultstring,
            $result->faultdetail, "\n";
    }
It's a bit longer because of the error checking, but still fairly straightforward. We use the SOAP module, create a client, and call the method. Then we either print the result or the fault information. There are a couple things to note in here:
  1. The 'uri' parameter can contain any hostname. The only thing the server seems to care about is the name 'class'. This identifies what code is going to be dispatched.
  2. The name 'class' in the client 'uri' must match the dispatch_to parameter in the server.
  3. The 'proxy' is the actual URL the request will be sent to. It can be http, https, mailto:, etc.
  4. We have added username and password credentials to the URL. This is important later.
Now, running this should Just Work. $soapClient becomes a remote representation of 'class', and we can call method() as though it were a local function. Of course it's not a local function and the web is stateless, so each request also passes that username and password we want to use to authenticate.

Here's what needs to happen on the server to receive and process the username and password. At the top of the file with the server code, add this:
    # subclass the default soap server daemon to handle authenticated requests
    package BasicAuthDaemon;
    use strict;
    use MIME::Base64;
    use SOAP::Transport::HTTP ();
    our @ISA= 'SOAP::Transport::HTTP::Daemon';
    sub handle {
        my $self = shift->new;
        while ( my $c = $self->accept ) {
            while ( my $r = $c->get_request ) {
                $self->request($r);
                # guard: request may lack an Authorization header, and
                # "my (...) = ... if COND" is undefined behavior in perl
                my ($type, $crypt) = split /\s+/, ($r->headers->authorization || ''), 2;
                $type ||= '';
                my ($user, $pass) = ('', '');
                ($user, $pass) = split /:/, decode_base64( $crypt )
                    if $type eq 'Basic';
                print "user:[$user], pass:[$pass]\n";
                if( $user eq 'user' and $pass eq 'password' ) {
                    #$self->SUPER::handle;
                    SOAP::Transport::HTTP::Server::handle $self;
                } else {
                    $self->response( $self->make_fault(
                        $SOAP::Constants::FAULT_CLIENT, 'Authentication required',
                        'Give authentication credentials for Basic Realm security'
                    ));
                }
                $c->send_response( $self->response );
            }
            $c->can('shutdown')
                ? $c->shutdown(2)
                : $c->close();
            $c->close;
        }
    }
What we're doing here is taking the code that can be found around line 688 of SOAP/Transport/HTTP.pm (in the SOAP::Transport::HTTP::Daemon package) and copying it into the 'BasicAuthDaemon' package that we are defining here. We use SOAP::Transport::HTTP and declare ourselves (via @ISA) to be a subclass of the package we are copying from.

Assuming the code is still legible despite the formatting, the code in blue that we added serves to extract the authentication data from the request headers, which looks like "Basic BASE64DATA==". We call the BASE64DATA "crypt" in the code but there's nothing secure about this - be sure to send credentials over SSL in production. We then decode the Base64 data to get the "user:password" string that was passed before the '@' in the client's URL. It is up to the reader to expand the code to do proper database lookups for production release.
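As a sanity check, you can reproduce both directions of that encoding from the shell; 'user:password' is exactly the string from the client's URL:

```shell
# Encode the credentials the way the client does:
printf 'user:password' | base64          # prints dXNlcjpwYXNzd29yZA==
# Decode what the server sees after the "Basic " prefix:
printf 'dXNlcjpwYXNzd29yZA==' | base64 -d && echo   # prints user:password
```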

However, do note the commented code in red: the call to $self->SUPER::handle will hang if left inline. Instead, I've replaced it with an explicit call to the handle method of SOAP::Transport::HTTP::Server, which appears to be the superclass of SOAP::Transport::HTTP::Daemon. I'm also not an expert on SOAP, so the make_fault may not be technically correct, but it's working for now.

To complete the transition, we then simply redefine our daemon to use our derived class:

    #my $daemon = SOAP::Transport::HTTP::Daemon
    my $daemon = BasicAuthDaemon


Wednesday, May 01, 2013

Upgrading to Kubuntu 13.04 from 12.10 breaks Ubuntu 13.04 in VirtualBox

Another post in my short saga of upgrading to Kubuntu 13.04.

The situation


So here's my scenario: I'm running Kubuntu 12.10 and all is well (enough). In a VirtualBox instance I have a copy of Ubuntu 13.04 that I'm keeping up to date as it gets closer to release. In all, everything is working fine as I work inside the VM and prod at it from the outside.

Suddenly my system starts to act a bit flaky, and ultimately freezes up completely. I suspect this is actually an audio driver issue, and I figure Kubuntu 13.04's new kernel may just fix it, so I decide to upgrade to Kubuntu 13.04 after doing a hard-reset to recover from the lockup.

Debugging a hosed virtual machine


The upgrade goes smoothly enough, but my virtual machine is now completely hosed. The screen comes up but shows just the default purple wallpaper. Clicking does nothing. Eventually I get an error message saying there was a system failure and perhaps rebooting will make things better. There is a button on the dialog that says "Continue" (or "OK") but it doesn't show up until you hover over it. Rebooting does not help.

From the VirtualBox 'Machine' menu I can send the Ctrl-Alt-Delete signal and this is captured correctly. I can log out and log back in, but this changes nothing. I can get to a command prompt by hitting <Host>-<F1> and log in there. I create a new user with 'adduser', hit <Alt>-<F7> to return to the GUI, and log in as the new user ... but the new user has the same problem. This means that the default user account is likely fine and the problem is elsewhere.

I go to the Devices menu in VirtualBox and mount the new Guest Additions. I go back to the command prompt and install them as root. While this step is probably necessary (this is a new version of VirtualBox, after all), it is insufficient to fix the problem. As the root user, I can run 'startx -- :1', but this presents the same type of problems.

So I reset the machine and think a bit. The VM is setup to auto-login the default user, so I leave X running and go back to the command prompt via <Host>-<F1>. I log in and try running things from the command line:

DISPLAY=unix:0 compiz

Interesting - compiz is crashing with I/O errors and complaining about not being able to access sessions, waiting for dbus to respond, not able to access pixmaps, etc. In short, complete madness. But I can run
DISPLAY=unix:0 gnome-terminal
and, after installing metacity,
DISPLAY=unix:0 metacity --replace

Some searching suggests these compiz errors have been around for a long time and therefore aren't likely to help. However, one post recommends reinstalling ubuntu-desktop. This package is already up to date, but they go on to suggest installing it from the Ubuntu Software Center. This doesn't make much sense but what could it hurt? Suddenly, I find myself strace'ing software-center trying to figure out why it's crashing and, furthermore, why it's complaining about not being able to find icons.

The ugly fix


I try reinstalling the icons it appears to be complaining about, but that doesn't work. Eventually I give up and decide to just reinstall the entire system. For me, this doesn't mean getting out a CD, however. It means: (a) print a list of every available package, (b) print the names of the packages that are installed, (c) filter out the packages that contain circular dependencies, and (d) reinstall them live.

apt-get install --reinstall `dpkg -l '*' | awk '/^ii/ { print $2 }' | egrep -v '(mountall|perl-base|libpam-modules|kbd|ntpdate|resolvconf|ifupdown|dpkg|bash|debconf)'`

Somewhere in there, despite some unnecessary exclusions, magic happens. The best part is that all my settings are retained, so I don't need to reconfigure anything.

Kubuntu 13.04 and immediate screen blanking after upgrade

Upgrading from Kubuntu 12.10 to Kubuntu 13.04 is a less-than completely smooth experience. I'll skip any niceties and dig right into the problems.

Screen lock problems


I don't want to spend a lot of time debugging this, but it doesn't even appear possible to shut off the screen blanker so I have to do something. My google-fu is completely overwhelmed by another screenlock bug in KDE 4.10 that appears to lock the screen if you tell it to blank but not lock the screen.

This is not my problem. My problem is that the screen locker appears not to be able to tell time. If I'm idle for more than about 15 seconds, the screen locks. Suffice it to say, this gets annoying really quickly.

The dirty hack


Here's what I've done to fix the issue, and to completely confuse myself the next time the package gets upgraded:

mv /usr/lib/kde4/libexec/kscreenlocker_greet /usr/lib/kde4/libexec/kscreenlocker_greet.old
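A variant that should survive package upgrades is to divert the file with dpkg-divert, which tells dpkg to install future versions of the file under a different name. Sketched here as a dry run that only prints the command; drop the echo and run as root to apply it:

```shell
#!/bin/sh
# Dry run: print the dpkg-divert invocation instead of executing it.
# --local diverts the file regardless of owning package; --rename moves
# the existing file out of the way immediately.
divert_greeter() {
    echo dpkg-divert --local --rename --add /usr/lib/kde4/libexec/kscreenlocker_greet
}
divert_greeter
```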

I welcome real solutions in the comments.

Linux, Ubuntu, INotify - flash and pulseaudio sound stutter

This post was initially drafted on March 6.

Having issues with Linux slowing to a crawl over time? I certainly seemed to have this problem when running Ubuntu 12.10 and KDE (Kubuntu 12.10).

Initial symptom: Audio stutter using flash (e.g: YouTube)


Recently I had problems with Shockwave Flash misbehaving (frozen, stuttering video) in Chrome, so I used the task manager (shift-escape, or right-click in the tabs area) to kill the plugin and GPU processes. Unfortunately, reloading flash didn't fix the stutter, so I went a step further and killed pulseaudio. Pulseaudio manages access to the audio hardware, and because it has caused weird effects on my system's performance in the past, it was next on the list.

pulseaudio refuses to start


However, trying to restart pulseaudio by typing 'pulseaudio' on the command line kept giving me this error:

E: [pulseaudio] module-udev-detect.c: inotify_init1() failed: Too many open files
E: [pulseaudio] module.c: Failed to load module "module-udev-detect" (argument: ""): initialization failed.
E: [pulseaudio] main.c: Module load failed.
E: [pulseaudio] main.c: Failed to initialize daemon.

Debugging the error


"Too many open files" is one of those generic, woefully overloaded errors that tends to provide little help in figuring out what's really happening. I first checked my file descriptor limits using "ulimit -a":

shell$ ulimit -a
[...]
open files                (-n) 1024

The number of files open in chrome was under this limit. In retrospect, that's not useful information anyway: 'ulimit' sets a limit on the number of file descriptors that subsequent commands may open; it does not tell you how many files are currently open, either globally or in that shell. Running "pulseaudio -vvv" didn't provide much more useful detail either.

The next tool in the toolbox is 'strace' - running "strace -fff pulseaudio" shows the error at the point the failing system call is made:

inotify_init1(O_NONBLOCK|O_CLOEXEC)     = -1 EMFILE (Too many open files)

I checked the manpage for inotify_init1 which describes the error EMFILE as: "The user limit on the total number of inotify instances has been reached". By combining that with some additional searching, it appears that the system limit may be set too low for the number of inotify user instances. I found that my kernel defaults to 128:

shell$ cat /proc/sys/fs/inotify/max_user_instances 
128

Finding and fixing the root cause


The fix is to raise fs.inotify.max_user_instances. Running 'sysctl -w' changes it immediately; adding the same line to /etc/sysctl.conf makes it permanent, and the grep shows it isn't there yet:
root# sysctl -w  fs.inotify.max_user_instances=512 
fs.inotify.max_user_instances = 512 
root# grep inotify /etc/sysctl.conf 
root# vi /etc/sysctl.conf
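The line added in that vi session would look like this, matching the value set with sysctl -w above:

```
# /etc/sysctl.conf -- raise the per-user limit on inotify instances
fs.inotify.max_user_instances = 512
```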

Once I knew what I was looking for, this was easier to find:
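One way to see who is holding all those inotify instances (a sketch, not necessarily the exact method used here): on Linux, each instance appears in /proc as a file descriptor symlinked to anon_inode:inotify, so counting those symlinks counts the instances.

```shell
#!/bin/sh
# Count inotify instances by scanning /proc/<pid>/fd symlinks.
# Requires GNU find (-lname); errors from vanished processes are suppressed.
list_inotify_fds() {
    find /proc/[0-9]*/fd -lname 'anon_inode:inotify' 2>/dev/null
}

count=$(list_inotify_fds | wc -l)
echo "inotify instances in use: $count"
```

Grouping the output by PID (the first path component after /proc) points at the process hogging the instances.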




Sunday, February 17, 2013

Newtown redux: Raising Adam Lanza

By pure chance I went to see what science documentaries were available on Hulu and ended up at PBS, thinking I might find something interesting on Nova. Yes, I do have other things I should be doing, but finding Frontline's story, "Raising Adam Lanza" gave me an opportunity to publicly admit I was wrong in a previous post.

Frontline's story on the shooting appears to be a two-part series starting on February 19, but some portion of it has already been printed in the Hartford Courant. Despite some of my assumptions in my previous post - and the fact that Adam was sent to a private Catholic school for a time - the statement I heard from Adam's aunt, that his mother bought the guns "for protection", was likely errant.

To the contrary, the story describes this as "... a portrait of a mother, apparently devoted but perhaps misguided, struggling to find her son a place in society, ..." and the article certainly paints it that way. I can't say the article explains anything, though, because the descriptions remain frustratingly superficial. I perceive a "blame the mother" vibe that I don't really care for, but we aren't given much to empathize with either. In the end, I suspect that reality will be just as frustrating.