
My SMWCon Fall 2011 Talk now on YouTube

I blogged about it earlier, but it's better to have everything in one post, so here is the summary again:

After doing my GSoC project for the Wikimedia Foundation / Semantic MediaWiki in 2010, resulting in the RDFIO extension, I finally made it to the Semantic MediaWiki Conference, held in Berlin in September.

Now the video of my talk, "Hooking up Semantic MediaWiki with external tools via SPARQL" (such as Bioclipse and R), is out on YouTube, so please find it below. For your convenience, the slides are below the video, along with links to the various things shown (click "read more" to see it all on the same page).

One week off to work on RDFIO

Starting this evening, I'm taking one week off from work for a sprint to try to finalize the RDFIO extension, which provides RDF import and export for Semantic MediaWiki.

This is a required step toward realizing the vision described in my SMWCon Fall 2011 talk the other month.

I developed RDFIO as part of Google Summer of Code 2010, and it reached a working proof-of-concept state. Some issues, such as performance, were never resolved, though. It also depended on two other modules which, after Semantic MediaWiki changed a lot of its internals in version 1.6, have still not been updated accordingly, leaving RDFIO in a state where it does not support SMW 1.6.

So a little sprint is definitely needed to get RDFIO into working condition again. At the same time, I hope to look at it with fresh eyes: I have a lot more coding experience now than when I wrote RDFIO, after a year of quite a bit of Java and Python development in my work at UPPMAX.

Things I plan to have a look at (or at least ponder):

  • Look at the possibility of using the Wiki Object Model instead of the Page Object Model / SMWWriter combo (which unfortunately is not SMW 1.6 compliant anymore)
  • See if/how I can make more use of the new infrastructure in SMW, summarized by Markus in this post on the SMW-devel mailing list.
  • Take a fresh look at the overall architecture of the code ... try to follow Domain-Driven Design principles much better, to get clean, maintainable code.
  • Use existing MediaWiki features much more, such as the HTMLForm form builder class (for special pages and the like).
    (More suggestions like this highly welcome!)
  • Import the ARC2 library via the ARCLibrary extension rather than as a separate import.
  • Use the existing "Equivalent URI" special property instead of the custom "Original URI" (I don't remember why I created a custom one...)
  • Run big imports as jobs?
  • OWL class import (as categories)?
  • Allow updating Wiki articles from any connected store, by using the new SMW Internals?
  • Other things?

I would love to get some feedback and input on the project during this intensive week, so don't hesitate to drop in at #semantic-mediawiki on irc.freenode.net (IRC chat) or on the SMW-devel mailing list! My contact options, summarized:

Looking forward to your input during this week!

Switched to Xubuntu

I upgraded to Ubuntu 11.10 the other day, after sticking with the pre-Unity Ubuntu 10.10. I thought the Unity stuff would have gotten better by now, and sure it has, but I still couldn't stand it ... so I switched to Xubuntu (sudo apt-get install xubuntu-desktop), which uses XFCE as its desktop environment instead of GNOME.

And after some tweaking ... Wow! ... so snappy, so beautiful, so fast, so consistent! Just wow ... It's definitely my distro of choice from now on ...

I have to throw a screenshot at you (sorry, I haven't got Lightbox working yet):

For your reference:

New majority-voted Twitter hashtag for NextGen Sequencing: #deepseq

As I concluded in a question on Biostar, there has been no real consensus on a short, non-hijacked hashtag to use for "High-Throughput Sequencing" / "Next-Generation Sequencing" on social media sites such as Twitter and identi.ca.

After some community voting, a winner emerged: #deepseq (click for the Twitter feed)

So, do spread the word, and start using it!

Grepping SQL dumps with endless lines? Use the fold command!

Grepping for stuff in MySQL dumps is not that nice, given the miles-wide lines. You could pipe the grep output to a command such as "cut -c 1-200", but that still isn't guaranteed to show you the actual matched content.

Enter the "fold" command, which formats output into lines with a max count of chars:

grep "stuff" sqldump.sql | fold -w 200 | grep -C 1 "stuff"

... will give you a much better view of the context of the match!

(The first grep picks out the (mile-wide) line containing the match, fold then splits that line into 200-character lines, and "grep -C 1" shows only the 200-character line where the match occurs, plus one line of context before and after.)
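As an aside: if your grep is GNU grep, an alternative sketch is to let grep itself print just a window of characters around each match, using -o together with bounded wildcards (the pattern and the window width of 100 are only examples):

grep -oE ".{0,100}stuff.{0,100}" sqldump.sql

Here -o prints only the matching part of the line, and the .{0,100} on each side pulls in up to 100 characters of surrounding context.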


My SMWCon Fall 2011 Talk

After doing my GSoC project for the Wikimedia Foundation / Semantic MediaWiki in 2010, resulting in the RDFIO extension, I finally made it to the Semantic MediaWiki Conference, held in Berlin this week.

While I write up a longer review of the many interesting talks, you can in the meantime find the slides from my talk, "Hooking up Semantic MediaWiki with external tools" (such as Bioclipse and R), below:

Links

  • For the SMW/Bioclipse hookup, there is a status update on my blog.
  • ... with a demo screencast.
  • More info on the RDFIO extension is available on the Extension page
  • Code for the Bioclipse SMW module is available on GitHub
  • The Bioclipse website is at bioclipse.net
  • ... and the (SMW) Bioclipse wiki
  • The SMW/R hookup is not yet published in any journal, but this is what is available:
    • Egon Willighagen, who did it, has blogged about it.
    • Also, the rrdf package he wrote is available on CRAN, and there's a PDF available describing it.

Essential screen flags and shortcuts

GNU Screen is a nice little program that gives you "terminals" you can detach into the background, so you can, for example, start long batch jobs that write to stdout without being afraid of accidentally closing your terminal.

Unfortunately screen has, IMO, quite an awkward syntax, but I have managed to learn three flag combinations and two keyboard shortcuts (from inside screen) that seem to be all I need for basic usage of screen:

Flags

Start a new named screen session:

screen -dmS ASessionName

List all detached screen sessions:

screen -ls

Re-attach a named screen session:

screen -r ASessionName

Shortcuts

Detach the current session into the background:

Ctrl + a, Ctrl + d

Close the current screen session:

Ctrl + d
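
Putting it all together, a typical workflow for a long batch job might look like this sketch (the session name "bigjob" and the command are just placeholders):

screen -dmS bigjob bash -c 'mybatchjob > job.log 2>&1'
screen -ls
screen -r bigjob

... and then Ctrl + a, Ctrl + d again to put it back into the background.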


HPC Client Screencast: Experimental Job Config Wizard

My work at UPPMAX on the Bioclipse-based HPC client is progressing, slowly but steadily. I just screencasted an experimental version of the job configuration wizard, which loads command-line tool definitions from the Galaxy workbench and uses them to generate a GUI for configuring the parameters of the command-line tool in question, as well as the parameters for the Slurm resource manager (used at UPPMAX). Have a look if you want :) :

The wizard obviously still has quite a few rough edges. My current TODO is as follows:

  • Set sensible default values in widgets (e.g. when there is just one alternative)
  • Use checkboxes and radio buttons for select fields with few options
  • Show a progress bar between wizard pages that take time to load
  • Decide how to handle the Cheetah #if/#else/#end if syntax available in some Galaxy tool config files.
  • Add validation
  • Use a time widget for the job time
  • Add a custom view with just a "connect" button, showing only remote files for the configured host.
  • More modular loading of modules (hierarchical etc.)
  • More advanced parsing of options (e.g. allowing params to be omitted, rather than just saying "no" to them).

Etc ... More suggestions? :)

(E)BNF parser for (parts of) the Galaxy ToolConfigs with ANTLR

As blogged earlier, I'm currently parsing the syntax of definitions of the parameters and such for command-line tools. As mentioned in the linked blog post, I was pondering whether to use the Galaxy ToolConfig format or the DocBook cmdsynopsis format. It turned out, though, that cmdsynopsis lacks the option to specify a list of valid choices for a parameter, which is possible in the Galaxy ToolConfig format (see here) and can thus be used to generate drop-down lists in wizards etc., which is basically what I want to do ... so now I'm going with the Galaxy format after all.

Enter the Galaxy format then. Look at an example code snippet:

<tool id="sam_to_bam" name="SAM-to-BAM" version="1.1.1">
  <description>converts SAM format to BAM format</description>
  <requirements>
    <requirement type="package">samtools</requirement>
  </requirements>
  <command interpreter="python">
    sam_to_bam.py
      --input1=$source.input1
      --dbkey=${input1.metadata.dbkey} 
      #if $source.index_source == "history":
        --ref_file=$source.ref_file
      #else
        --ref_file="None"
      #end if
      --output1=$output1
      --index_dir=${GALAXY_DATA_INDEX_DIR}
  </command>
  <inputs>
    <conditional name="source">
      <param name="index_source" type="select" label="Choose the source for the reference list">
        <option value="cached">Locally cached</option>
        <option value="history">History</option>
      </param>
      <when value="cached">
      ... cont ...

Here I've got some challenges. XML parsing is easy, even in Java (I use the Java XPath libs for that). But look inside the <command> tag ... that's some really non-XML stuff, no? (It is instructions for Cheetah, a Python-based template library used in Galaxy.) I have to parse it though, in order to replicate its logic ... so what to do? ... Well, I turned to the ANTLR parser generator.
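For the easy half, here is a minimal sketch of what I mean (the class name, and the assumption that the full tool config is saved locally as sam_to_bam.xml, are mine): XPath pulls out attributes and the raw <command> body, which then still needs the parser treatment described below.

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class ToolConfigXPathSketch {
    public static void main(String[] args) throws Exception {
        // The easy half: the tool config is plain XML, so standard XPath works
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new File("sam_to_bam.xml")); // hypothetical local copy
        XPath xpath = XPathFactory.newInstance().newXPath();
        String toolName = (String) xpath.evaluate(
                "/tool/@name", doc, XPathConstants.STRING);
        // The hard half: the <command> body comes out as one opaque string
        // of Cheetah template text, which is what the ANTLR grammar below
        // gets to chew on
        String commandBody = (String) xpath.evaluate(
                "/tool/command", doc, XPathConstants.STRING);
        System.out.println("Tool: " + toolName);
        System.out.println("Command body to parse:" + commandBody);
    }
}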

ANTLRWorks works nicely out of the box

I had heard a lot of good things about ANTLR, like it being more easily debugged than typical BNF parsers, so the choice wasn't that hard. I first tried ANTLR for Eclipse, but though it looks nice, it was quite buggy, and I couldn't get it to work properly in either Eclipse 3.5 or 3.6. So I finally went with the easy option and developed my EBNF grammar in ANTLRWorks, a standalone Java app that comes with the correct ANTLR lib already installed. It turned out to work really well!

The grammar I have come up with so far (covering only the syntax inside the <command> tag for now, though!) is available on GitHub ... and below (in condensed syntax to save some space), for your convenience :)

grammar GalaxyToolConfig;
options {output=AST;}
 
command    : binary (ifstatement param+ (ELSE param+)? ENDIF | param)*;
binary     : WORD;
ifstatement 
        : IF (STRING|VARIABLE) EQTEST (STRING|VARIABLE) COLON;
param   : DBLDASH WORD* EQ (VARIABLE|STRING);
WORD    : ('a'..'z'|'A'..'Z')('a'..'z'|'A'..'Z'|'.'|'_'|'0'..'9')*;
VARIABLE 
        : '$'('{')?WORD('}')?;
STRING  : '"'('a'..'z'|'A'..'Z')+'"';
IF      : '#if';
ELSE    : '#else';
ENDIF   : '#end if';
EQ      : '=';
EQTEST  : '==';
DBLDASH : '--';
COLON   : ':';
WS      : (' '|'\t'|'\r'|'\n') {$channel=HIDDEN;};

Suggestions for improvements? :) ... Then go ahead and mail me ... samuel dot lampa at gmail dot com

Also, see a little screenshot from ANTLRWorks below:

ANTLRWorks Screenshot

As you can see in the screenshot, the different parts have correctly been identified as "param", "if statement" and so forth. You can also see how I can click in the test syntax to see where in the parse tree that particular part appears.

When done, I just exported the resulting parser code from ANTLRWorks with "Generate > Generate Code", copied the code from the "output" folder into my Eclipse project, added the antlr-3.3 jar to its build path, and then ran the __Test__.java file that comes with the output.

I wanted to do a little more parsing in my test though, so I ended up with this little test code:

package net.bioclipse.uppmax.galaxytoolconfigparser;
import org.antlr.runtime.ANTLRStringStream;
import org.antlr.runtime.CharStream;
import org.antlr.runtime.CommonTokenStream;
import org.antlr.runtime.RecognitionException;
import org.antlr.runtime.TokenStream;
import org.antlr.runtime.tree.CommonTree;
import org.antlr.runtime.tree.DOTTreeGenerator;
import org.antlr.runtime.tree.Tree;
import org.antlr.runtime.tree.TreeAdaptor;
import org.antlr.stringtemplate.StringTemplate;
 
public class ParseTest {
    // Generated stuff from ANTLR, which I can use to recognize token types   
    public static final int EOF=-1;
    public static final int ELSE=4;
    public static final int ENDIF=5;
    public static final int WORD=6;
    public static final int IF=7;
    public static final int STRING=8;
    public static final int VARIABLE=9;
    public static final int EQTEST=10;
    public static final int COLON=11;
    public static final int DBLDASH=12;
    public static final int EQ=13;
    public static final int WS=14;
 
    public static void main(String[] args) throws RecognitionException {
        String testString = "    sam_to_bam.py" 
                + "      --input1=$source.input1\n"
                + "      --dbkey=${input1.metadata.dbkey}\n"
                + "      #if $source.index_source == \"history\":\n"
                + "        --ref_file=$source.ref_file\n" 
                + "      #else\n"
                + "        --ref_file=\"None\"\n" 
                + "      #end if\n"
                + "      --output1=$output1\n"
                + "      --index_dir=${GALAXY_DATA_INDEX_DIR}\n"; 
        CharStream charStream = new ANTLRStringStream(testString);
        GalaxyToolConfigLexer lexer = new GalaxyToolConfigLexer(charStream);
        TokenStream tokenStream = new CommonTokenStream(lexer);
        GalaxyToolConfigParser parser = new GalaxyToolConfigParser(tokenStream, null);
 
        System.out.println("Starting to parse ...");
        // GalaxyToolConfigParser.command_return command = parser.command();
        CommonTree tree = (CommonTree)parser.command().getTree();
        System.out.println("Done parsing ...");
 
        int i = 0;
        while (i<tree.getChildCount()) {
            Tree subTree = tree.getChild(i);
            System.out.println("Tree child: " + subTree.getText() + ", (Token type: " + subTree.getType() + ")");
            i++;
        }
 
        // Generate DOT Syntax tree
        //DOTTreeGenerator gen = new DOTTreeGenerator();
        //StringTemplate st = gen.toDOT(tree);
        //System.out.println("Tree: \n" + st);
 
        System.out.println("Done!");
    }
}

... generating this output:

Starting ...
Done executing command ...
Subtree text: sam_to_bam.py, (Token type: 6)
Subtree text: --, (Token type: 12)
Subtree text: input1, (Token type: 6)
Subtree text: =, (Token type: 13)
Subtree text: $source.input1, (Token type: 9)
Subtree text: --, (Token type: 12)
Subtree text: dbkey, (Token type: 6)
Subtree text: =, (Token type: 13)
Subtree text: ${input1.metadata.dbkey}, (Token type: 9)
Subtree text: #if, (Token type: 7)
Subtree text: $source.index_source, (Token type: 9)
Subtree text: ==, (Token type: 10)
Subtree text: "history", (Token type: 8)
Subtree text: :, (Token type: 11)
Subtree text: --, (Token type: 12)
Subtree text: ref_file, (Token type: 6)
Subtree text: =, (Token type: 13)
Subtree text: $source.ref_file, (Token type: 9)
Subtree text: #else, (Token type: 4)
Subtree text: --, (Token type: 12)
Subtree text: ref_file, (Token type: 6)
Subtree text: =, (Token type: 13)
Subtree text: "None", (Token type: 8)
Subtree text: #end if, (Token type: 5)
Subtree text: --, (Token type: 12)
Subtree text: output1, (Token type: 6)
Subtree text: =, (Token type: 13)
Subtree text: $output1, (Token type: 9)
Subtree text: --, (Token type: 12)
Subtree text: index_dir, (Token type: 6)
Subtree text: =, (Token type: 13)
Subtree text: ${GALAXY_DATA_INDEX_DIR}, (Token type: 9)
Done!

... seemingly I have the stuff I need for doing some logic parsing now! :)

Some words about BNF

ANTLR is an (E)BNF parser generator. I had heard a little about BNF before and was more or less scared off by the topic, thinking it looked too advanced, but really, I found it isn't that hard at all!

It strikes me that BNF is pretty much RegEx with functions added, which allows for recursive pattern matching; you'll need that for anything more advanced, such as nested braces or XML tags ... but as you can also see in the example above, much of the pattern-matching syntax has big similarities to RegEx.
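To illustrate the recursion part, here is a toy rule in the same ANTLR syntax as the grammar above (not part of my actual grammar): the rule refers to itself, so it matches arbitrarily deeply nested braces like {a{b{c}}}, something plain RegEx cannot express:

// Toy example: a rule may call itself (recursion), unlike in RegEx
nested  : '{' (WORD | nested)* '}';
WORD    : ('a'..'z'|'A'..'Z')+;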

In terms of tutorials, for the (E)BNF/ANTLR combo at least, I'd highly recommend this set of screencasts on using ANTLR in Eclipse. Though I didn't use the Eclipse version, these screencasts quickly give you an idea of how it all works ... I watched at least a bunch of them, and I'm happy I did.

Exercise in XSLT RegEx: (Partial) Galaxy ToolConfig to DocBook CmdSynopsis conversion

As blogged about before, I was interested in the differences between the Galaxy ToolConfig and the DocBook cmdsynopsis formats, for the purpose of automatically generating wizards (see an example that I screencasted here) for filling in the required parameters of command-line tools. To quickly get some hands-on experience with the formats, I started creating an XSLT transformation from the Galaxy ToolConfig format to the DocBook cmdsynopsis format.

I quite quickly realized some important differences, such as that cmdsynopsis lacks the ability to specify a list of possible/valid options for a parameter, which could be used for creating drop-downs in the wizards. But apart from that, the little work on the transformation I had already done by the time I realized this was actually a nice little exercise in using regex with XSLT. Look at the command tag content in this excerpt of a Galaxy ToolConfig XML file:

<tool id="sam_to_bam" name="SAM-to-BAM" version="1.1.1">
  <description>converts SAM format to BAM format</description>
  <requirements>
    <requirement type="package">samtools</requirement>
  </requirements>
  <command interpreter="python">
    sam_to_bam.py
      --input1=$source.input1
      --dbkey=${input1.metadata.dbkey} 
      #if $source.index_source == "history":
        --ref_file=$source.ref_file
      #else
        --ref_file="None"
      #end if
      --output1=$output1
      --index_dir=${GALAXY_DATA_INDEX_DIR}
  </command>
  <inputs>
    <conditional name="source">
      <param name="index_source" type="select" label="Choose the source for the reference list">
        <option value="cached">Locally cached</option>
        <option value="history">History</option>
      </param>
      <when value="cached">
        <param name="input1" type="data" format="sam" label="SAM File to Convert">
           <validator type="unspecified_build" />
           <validator type="dataset_metadata_in_file" filename="sam_fa_indices.loc" metadata_name="dbkey" metadata_column="1" message="Sequences are not currently available for the specified build." line_startswith="index" />
        </param>
      </when>
      <when value="history">
        <param name="input1" type="data" format="sam" label="Convert SAM file" />
        <param name="ref_file" type="data" format="fasta" label="Using reference file" />
      </when>
    </conditional>
  </inputs>
  <outputs>
    <data format="bam" name="output1" label="${tool.name} on ${on_string}: converted BAM" />
  </outputs>
</tool>

... you see that in the command tag, the actual syntax of the command is specified in a kind of "free text" format ... This might not be exactly what one would think of using XSLT transformations for, but together with the regex functionality in XSLT 2.0, you definitely have this option too. Helped by this article on xml.com, I put together this little XSLT stylesheet for parsing up the free-text content of that command tag (I haven't gotten to the more detailed config inside the inputs tag in the Galaxy format, but I might not need to either, if I stay with the Galaxy format anyway):

<?xml version="1.0"?>
 
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="2.0">
 
    <xsl:output method="xml" indent="yes" encoding="UTF-8" />
 
    <xsl:template match="/">
        <cmdsynopsis>
            <xsl:apply-templates select="tool/command" />
        </cmdsynopsis>
    </xsl:template>
 
    <xsl:template match="tool/command">
        <command>
            <xsl:value-of select="@interpreter" />
        </command>
        <xsl:for-each select='tokenize(
                                    replace(
                                        replace(
                                            replace(
                                                replace(
                                                    .,
                                                    "[ ]+",
                                                    ""),
                                                "\n#[^\s]+",
                                                ""),
                                            "\n+",
                                            " "),
                                        "(^\s+|\s+$)",
                                        ""),
                                    "\s")'>
        <xsl:if test='matches(.,"\{")!=true()'>
            <arg>
                <xsl:value-of select='replace(.,"=.*","")'></xsl:value-of>
                <xsl:if test='matches(.,".*=.*")'>
                    <xsl:text> </xsl:text>
                    <replaceable>
                        <xsl:value-of select='replace(.,".*=\s*\$?","")'></xsl:value-of>
                    </replaceable>
                </xsl:if>
            </arg>
        </xsl:if>
        </xsl:for-each>
    </xsl:template>
</xsl:stylesheet>

... a bit crazy with all these nested regex replace function calls, no? :) ... but I can tell you, it actually works very well! I found it easier to work with than many other regex implementations (e.g. matching newlines can be done with "\n", which I don't think you can do by default in some other ones).

I can also mention that the tokenize function splits a string into a sequence of the parts between the matches of the expression given to it (similar to "split" in some other languages, like Python). For example, tokenize("a b  c", "\s+") gives the sequence ("a", "b", "c").

The result of the transformation? Here it goes:

<?xml version="1.0" encoding="UTF-8"?>
<cmdsynopsis>
   <command>python</command>
   <arg>sam_to_bam.py</arg>
   <arg>--input1 <replaceable>source.input1</replaceable>
   </arg>
   <arg>--ref_file <replaceable>source.ref_file</replaceable>
   </arg>
   <arg>--ref_file <replaceable>"None"</replaceable>
   </arg>
   <arg>--output1 <replaceable>output1</replaceable>
   </arg>
</cmdsynopsis>

Not perfect (there are still two "--ref_file" arguments), but at least it has parsed out the different arguments and removed some Galaxy-specific stuff (the parts enclosed in "{}") as well as the conditional statements. At least I think it shows that XSLT + regex is actually an option, don't you think? :)

A caveat here though: I found that most of the XSLT processors available for Ubuntu (xsltproc, xalan, the one built into php5) don't support XSLT 2.0 features such as regex, so I ended up using the Java-based Saxon processor.

To run a transformation, you simply go (when using the open source "Home Edition"):

java -jar saxon9he.jar [xml-file] [xslt-file] > [output-file]

Works well! (It does a good job of formatting the XML too.)