This is a great SPARQL FAQ, and seemingly a good starting point for finding out almost anything about SPARQL.


Think I finally got what Horn clauses are good for

The Wikipedia article on Horn clauses states the following: "Horn clauses are relevant to theorem proving by first-order resolution, in that the resolution of two Horn clauses is itself a Horn clause"

That seems to explain why Horn clauses are so foundational for Prolog, since in Prolog one can compose goals as compounds of other goals.
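As a minimal sketch of what this looks like in practice (the predicates here are made-up examples, not from any particular program): each Prolog fact and rule is a Horn clause, and defining one goal in terms of others is just writing a new Horn clause whose body is a conjunction of goals.

```prolog
% Facts: Horn clauses with an empty body (unit clauses).
parent(anna, bob).
parent(bob, carl).

% A rule: one positive literal as head, a conjunction of goals as body.
grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

% Goals composed of other goals (including recursively):
% resolving two Horn clauses yields another Horn clause,
% which is what makes this kind of composition work.
ancestor(X, Y) :- parent(X, Y).
ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
```

Querying `?- grandparent(anna, Who).` would then be answered by resolution against these clauses.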

(I realize my lack of basic knowledge of Prolog, and heavily regret not having had any formal training in logic programming / Prolog at my uni; the Prolog part of the course where I hoped to learn it was removed just before I entered that course. :( )


Tricky reasoning problem for SPARQL and OWL: Lists of property chains containing numerical value constraints

EDIT (19/2): The problem with doing this with a SPARQL query, is now solved! See this blog post.

Thought I should write up my NMRSpectrum similarity search problem, which I've gotten quite stuck on ... even after trying to get some advice on Semantic Overflow. So ... here we go.

I have a problem that I can't seem to express successfully either in pure SPARQL or using OWL class descriptions, seemingly because the problem combines lists, property chains, and numerical value constraints in a troublesome mix.
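For concreteness, here is a sketch in SPARQL of the easy single-value case (the vocabulary — ex:NMRSpectrum, ex:hasPeak, ex:shiftValue — is hypothetical, just to illustrate the shape of the problem). The troublesome part is generalizing this to a whole *list* of query values, each matched pairwise against a corresponding peak in an RDF list:

```sparql
PREFIX ex: <http://example.org/nmr#>

SELECT ?spectrum WHERE {
  ?spectrum a ex:NMRSpectrum ;
            ex:hasPeak ?peak .
  ?peak ex:shiftValue ?shift .
  # Easy: constrain one numeric value within a tolerance.
  # Hard: do this for every element of a list of peaks,
  # paired with a corresponding list of query values.
  FILTER ( ?shift > 1.94 - 0.3 && ?shift < 1.94 + 0.3 )
}
```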

Representing lists in RDF

I was wondering how to express lists in RDF. Here is the answer, in RDF/N3 (there's a nice shorthand version available in N3).
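As a minimal illustrative example (the resource names are made up): the N3/Turtle shorthand for a list, and the explicit rdf:first/rdf:rest structure it is shorthand for.

```turtle
@prefix :    <http://example.org/> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .

# The N3/Turtle shorthand:
:myMolecule :hasPeakList ( :peak1 :peak2 :peak3 ) .

# ...which expands to a chain of rdf:first/rdf:rest pairs,
# terminated by rdf:nil:
:myMolecule :hasPeakList [
    rdf:first :peak1 ;
    rdf:rest  [ rdf:first :peak2 ;
                rdf:rest  [ rdf:first :peak3 ;
                            rdf:rest  rdf:nil ] ] ] .
```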


W3C Starter primer for semantic web

There are many W3C Semantic Web primers, but this one seems to be the one to start with.

Learning what OWL profiles are (OWL 2 RL in particular)

Just got to know there is something called "OWL (2) profiles", which basically seems to mean certain sets of restrictions one can impose on what you can express, with the aim of making certain usage patterns possible.

OWL 2 RL seems particularly interesting for my concern, since it is (according to the above link) meant to be "a syntactic subset of OWL 2 which is amenable to implementation using rule-based technologies", and rule-based technologies are exactly what I'm looking at with SWI-Prolog and BLIPKIT.
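A rough sketch of why OWL 2 RL maps naturally onto rule engines: many of its axioms translate directly into Horn-style rules. For example (the predicate names below are my own, not any standard mapping), rdfs:subClassOf and owl:TransitiveProperty could be expressed as:

```prolog
% rdfs:subClassOf as a rule: if C1 is a subclass of C2,
% then every instance of C1 is also an instance of C2.
type(X, C2) :- subClassOf(C1, C2), type(X, C1).

% owl:TransitiveProperty: if P is transitive and both P(X,Y)
% and P(Y,Z) hold, then P(X,Z) holds.
holds(P, X, Z) :- transitive(P), holds(P, X, Y), holds(P, Y, Z).
```

Rules of this shape are exactly what a forward- or backward-chaining engine like SWI-Prolog can evaluate directly.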

Semantic Science Portal contains interesting stuff

Just got to know the Semantic Science Portal today (though I've read with keen interest the papers of some of the people behind it, and know SADI and Bio2RDF from before, since Egon W told me about them).

On the portal I found some interesting new things though, including:

Powerful new modules for Drupal layouting and ease of use

France24, a French news website, recently upgraded from Drupal 5 to Drupal 6, benefiting from improved performance and functionality. In the process, they developed a seemingly quite impressive set of modules that provide the functionality that was missing in Drupal for creating a flexibly laid-out, easy-to-work-with news website - features which are highly general to many categories of websites. The good news is that they have now released most, if not all, of their custom modules as open source, as has been reported. Great news!


"Orthogonal expressivity" of Pellet and Prolog?

Found a very interesting quote:

"Both OWL-DL and function-free Horn rules are decidable fragments of first-order logic with interesting, yet orthogonal expressive power."

Motik B, Sattler U, Studer R. Query Answering for OWL-DL with rules. Web Semantics: Science, Services and Agents on the World Wide Web. 2005;3(1):41-60.

"Horn rules" are what Prolog builds upon (Prolog statements are Horn rules, as far as I can see), so maybe Prolog fits into the category of "function-free Horn rules"? (Gotta try to figure that out.) OWL-DL, on the other hand, is the W3C standard for expressive semantics, which reasoners like Pellet (which is available in Bioclipse) build upon.

Automating answering of questions with no answers - by wrapping simulations in semantics

What do you think of that title? :) To me it sounds like one of the (many) natural next steps forward for Bioclipse sometime in the future1.

Explicit knowledge is too expensive

There are lots of things that can't be answered by a computer from data alone. Maybe the majority of what we humans perceive as knowledge is inferred from a combination of data (simple fact statements about reality) and rules that tell how facts can be combined, allowing implicit knowledge (knowledge that is not persisted as facts anywhere, but has to be inferred from other facts and rules) to be made explicit.

One can easily imagine though, that storing every single piece of knowledge that could be stated, as an explicit fact, would require more storage than can probably ever be made available in this universe.

Simulations can make knowledge explicit, from first principles

It is not too hard to come up with processes which are just too complex and involve too much variability2 for it to be realistic to try to capture every imaginable state of that system or process in explicit facts. Instead we must seek the "first principles" that define the process, and through simulations make explicit any knowledge we are looking for, at the time we need it (one can of course cache often-accessed knowledge).