-rw-r--r--  docs/reference/libtracker-sparql/base-ontology.md         12
-rw-r--r--  docs/reference/libtracker-sparql/defining-ontologies.md  399
-rw-r--r--  docs/reference/libtracker-sparql/examples.md              53
-rw-r--r--  docs/reference/libtracker-sparql/implementation.md        10
-rw-r--r--  docs/reference/libtracker-sparql/limits.md                18
-rw-r--r--  docs/reference/libtracker-sparql/meson.build               1
-rw-r--r--  docs/reference/libtracker-sparql/mfo-introduction.md       8
-rw-r--r--  docs/reference/libtracker-sparql/migrating-2to3.md        26
-rw-r--r--  docs/reference/libtracker-sparql/nepomuk.md               10
-rw-r--r--  docs/reference/libtracker-sparql/nie-introduction.md      38
-rw-r--r--  docs/reference/libtracker-sparql/nmm-introduction.md      10
-rw-r--r--  docs/reference/libtracker-sparql/ontologies.md           420
-rw-r--r--  docs/reference/libtracker-sparql/overview.md              21
-rw-r--r--  docs/reference/libtracker-sparql/performance.md           27
-rw-r--r--  docs/reference/libtracker-sparql/security.md              65
-rw-r--r--  docs/reference/libtracker-sparql/sparql-and-tracker.md    23
-rw-r--r--  docs/reference/libtracker-sparql/sparql-functions.md      87
-rw-r--r--  docs/reference/libtracker-sparql/tutorial.md              42
18 files changed, 609 insertions, 661 deletions
diff --git a/docs/reference/libtracker-sparql/base-ontology.md b/docs/reference/libtracker-sparql/base-ontology.md
deleted file mode 100644
index 8eec6a707..000000000
--- a/docs/reference/libtracker-sparql/base-ontology.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Base ontology
-short-description: Base for defining ontologies
-...
-
-# Base ontology
-
-The base ontology is the seed for defining application-specific
-ontologies. It defines the basic types and property/class
-definitions themselves, so that classes and properties may be
-created.
-
diff --git a/docs/reference/libtracker-sparql/defining-ontologies.md b/docs/reference/libtracker-sparql/defining-ontologies.md
deleted file mode 100644
index 24746453d..000000000
--- a/docs/reference/libtracker-sparql/defining-ontologies.md
+++ /dev/null
@@ -1,399 +0,0 @@
----
-title: Defining ontologies
-short-description: Defining Ontologies
-...
-
-# Defining ontologies
-
-An ontology defines the entities that a Tracker endpoint can store, as
-well as their properties and the relationships between different entities.
-
-Tracker internally uses the following ontologies as its base, all ontologies
-defined by the user of the endpoint are recommended to be build around this
-base:
-
-- XML Schema (XSD), defining basic types
-- Resource Description Framework (RDF), defining classes, properties and
- inheritance
-- Nepomuk Resource Language (NRL), defining resource uniqueness, inheritance
- and indexes.
-- Dublin Core (DC), defining common superproperties for documents
-
-Ontologies are Turtle files with the .ontology extension, Tracker parses all
-ontology files from the given directory. The individual ontology files may
-not be self-consistent (i.e. use missing definitions), but
-all the ontology files as a whole must be.
-
-Tracker loads the ontology files in alphanumeric order, it is advisable
-that those have a numbered prefix in order to load those at a consistent
-order despite future additions.
-
-## Creating an ontology
-
-### Defining a namespace
-
-A namespace is the topmost layer of an individual ontology, it will
-contain all classes and properties defined by it. In order to define
-a namespace you can do:
-
-```turtle
-# These prefixes will be used in the definition of the ontology,
-# thus must be explicitly defined
-@prefix nrl: <http://tracker.api.gnome.org/ontology/v3/nrl#> .
-@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
-@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
-@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
-
-# This is our example namespace
-@prefix ex: <http://example.org/#>
-
-ex: a nrl:Namespace, nrl:Ontology
- nrl:prefix "ex"
- rdfs:comment "example ontology"
- nrl:lastModified "2017-01-01T15:00:00Z"
-```
-
-### Defining classes
-
-Classes are the base of an ontology, all stored resources must define
-themselves as "being" at least one of these classes. They all derive
-from the base rdfs:Resource type. To eg. define classes representing
-animals and plants, you can do:
-
-```turtle
-ex:Eukaryote a rdfs:Class;
- rdfs:subClassOf rdfs:Resource;
- rdfs:comment "An eukaryote".
-```
-
-By convention all classes use CamelCase names, although class names
-are not restricted. The allowed charset is UTF-8.
-
-Declaring subclasses is possible:
-
-```turtle
-ex:Animal a rdfs:Class;
- rdfs:subClassOf ex:Eukaryote;
- rdfs:comment "An animal".
-
-ex:Plant a rdfs:Class;
- rdfs:subClassOf ex:Eukaryote;
- rdfs:comment "A plant".
-
-ex:Mammal a rdfs:Class;
- rdfs:subClassOf ex:Animal;
- rdfs:comment "A mammal".
-```
-
-With such classes defined, resources may be inserted to the endpoint,
-eg. with the SPARQL:
-
-```SPARQL
-INSERT DATA { <merry> a ex:Mammal }
-INSERT DATA { <treebeard> a ex:Animal, ex:Plant }
-```
-
-Note that multiple inheritance is possible, resources will just inherit
-all properties from all classes and superclasses.
-
-### Defining properties
-
-Properties relate to a class, so all resources pertaining to that class
-can define values for these.
-
-```turtle
-ex:cromosomes a rdf:Property;
- rdfs:domain ex:Eukaryote;
- rdfs:range xsd:integer.
-
-ex:unicellular a rdf:Property;
- rdfs:domain ex:Eukaryote;
- rdfs:range xsd:bool;
-
-ex:dateOfBirth a rdf:Property;
- rdfs:domain ex:Mammal;
- rdfs:range xsd:dateTime;
-```
-
-The class the property belongs to is defined by `rdfs:domain`, while the
-data type contained is defined by `rdfs:range`. By convention all
-properties use dromedaryCase names, although property names are not
-restricted. The allowed charset is UTF-8.
-
-The following basic types are supported:
-
-- `xsd:boolean`
-- `xsd:string` and `rdf:langString`
-- `xsd:integer`, ranging from -2^63 to 2^63-1.
-- `xsd:double`, able to store a 8 byte IEEE floating point number.
-- `xsd:date` and `xsd:dateTime`, able to store dates and times since
- January 1st 1 AD, with microsecond resolution.
-
-Of course, properties can also point to resources of the same or
-other classes, so stored resources can conform a graph:
-
-```turtle
-ex:parent a rdf:Property;
- rdfs:domain ex:Mammal;
- rdfs:range ex:Mammal;
-
-ex:pet a rdf:Property;
- rdfs:domain ex:Mammal;
- rdfs:range ex:Eukaryote;
-```
-
-There is also inheritance of properties, an example would be a property
-in a subclass concretizing a more generic property from a superclass.
-
-```turtle
-ex:geneticInformation a rdf:Property;
- rdfs:domain ex:Eukaryote;
- rdfs:range xsd:string;
-
-ex:dna a rdf:Property;
- rdfs:domain ex:Mammal;
- rdfs:range xsd:string;
- rdfs:subPropertyOf ex:geneticInformation.
-```
-
-SPARQL queries are expected to provide the same result when queried
-for a property or one of its superproperties.
-
-```SPARQL
-# These two queries should provide the exact same result(s)
-SELECT { ?animal a ex:Animal;
- ex:geneticInformation "AGCT" }
-SELECT { ?animal a ex:Animal;
- ex:dna "AGCT" }
-```
-
-### Defining cardinality of properties
-
-By default, properties are multivalued, there are no restrictions in
-the number of values a property can store.
-
-```SPARQL
-INSERT DATA {
- <cat> a ex:Mammal .
- <dog> a ex:Mammal .
-
- <peter> a ex:Mammal ;
- ex:pets <cat>, <dog>
-}
-```
-
-Wherever this is not desirable, cardinality can be limited on properties
-through nrl:maxCardinality.
-
-```turtle
-ex:cromosomes a rdf:Property;
- rdfs:domain ex:Eukaryote;
- rdfs:range xsd:integer;
- nrl:maxCardinality 1.
-```
-
-This will raise an error if the SPARQL updates in the endpoint end up
-in the property inserted multiple times.
-
-```SPARQL
-# This will fail
-INSERT DATA { <cat> a ex:Mammal;
- ex:cromosomes 38;
- ex:cromosomes 42 }
-
-# This will succeed
-INSERT DATA { <donald> a ex:Mammal;
- ex:cromosomes 47 }
-```
-
-Tracker does not implement support for other maximum cardinalities
-than 1.
-
-<!---
- XXX: explain how cardinality affects subproperties, superproperties
---->
-
-### Defining uniqueness
-
-It is desirable for certain properties to keep their values unique
-across all resources, this can be expressed by defining the properties
-as being a nrl:InverseFunctionalProperty.
-
-```turtle
-ex:geneticInformation a rdf:Property, nrl:InverseFunctionalProperty;
- rdfs:domain ex:Eukaryote;
- rdfs:range xsd:string;
-```
-
-With that in place, no two resources can have the same value on the
-property.
-
-```SPARQL
-# First insertion, this will succeed
-INSERT DATA { <drosophila> a ex:Eukariote;
- ex:geneticInformation "AGCT" }
-
-# This will fail
-INSERT DATA { <melanogaster> a ex:Eukariote;
- ex:geneticInformation "AGCT" }
-```
-
-<!---
- XXX: explain how inverse functional proeprties affect sub/superproperties
---->
-
-### Defining indexes
-
-It may be the case that SPARQL queries performed on the endpoint are
-known to match, sort, or filter on certain properties more often than others.
-In this case, the ontology may use nrl:domainIndex in the class definition:
-
-```turtle
-# Make queries on ex:dateOfBirth faster
-ex:Mammal a rdfs:Class;
- rdfs:subClassOf ex:Animal;
- rdfs:comment "A mammal";
- nrl:domainIndex ex:dateOfBirth.
-```
-
-Classes may define multiple domain indexes.
-
-**Note**: Be frugal with indexes, do not add these proactively. An index in the wrong
-place might not affect query performance positively, but all indexes come at
-a cost in disk size.
-
-### Defining full-text search properties
-
-Tracker provides nonstandard full-text search capabilities, in order to use
-these, the string properties can use nrl:fulltextIndexed:
-
-```turtle
-ex:name a rdf:Property;
- rdfs:domain ex:Mammal;
- rdfs:range xsd:string;
- nrl:fulltextIndexed true;
- nrl:weight 10.
-```
-
-Weighting can also be applied, so certain properties rank higher than others
-in full-text search queries. With nrl:fulltextIndexed in place, sparql
-queries may use full-text search capabilities:
-
-```SPARQL
-SELECT { ?mammal a ex:Mammal;
- fts:match "timmy" }
-```
-
-### Predefined elements
-
-It may be desirable for the ontology to offer predefined elements of a
-certain class, which can then be used by the endpoint.
-
-```turtle
-ex:self a ex:Mammal.
-```
-
-Usage does not differ in use from the elements of that same class that
-could be inserted in the endpoint.
-
-```SPARQL
-INSERT DATA { ex:self ex:pets <cat> .
- <cat> ex:pets ex:self }
-```
-
-### Accompanying metadata
-
-Ontology files are optionally accompanied by description files, those have
-the same basename, but the ".description" extension.
-
-```turtle
-@prefix dsc: <http://tracker.api.gnome.org/ontology/v3/dsc#> .
-
-<virtual-ontology-uri:30-nie.ontology> a dsc:Ontology ;
- dsc:title "Example ontology" ;
- dsc:description "A little bit of this and that." ;
- dsc:upstream "http://www.example.org/ontologies";
- dsc:author "John doe, &lt;john@example.org&gt;";
- dsc:editor "Jane doe, &lt;jane@example.org&gt;";
- dsc:gitlog "http://git.example.org/cgit/tracker/log/example.ontology";
- dsc:contributor "someone else, &lt;some1@example.org&gt;";
-
- dsc:localPrefix "ex" ;
- dsc:baseUrl "http://www.example.org/ontologies/ex#";
- dsc:relativePath "./10-ex.ontology" ;
-
- dsc:copyright "All rights given away".
-```
-
-## Updating an ontology
-
-As software evolves, sometimes changes in the ontology are unavoidable.
-Tracker can transparently handle certain ontology changes on existing
-databases.
-
-1. Adding a class.
-2. Removing a class.
- All resources will be removed from this class, and all related
- properties will disappear.
-3. Adding a property.
-4. Removing a property.
- The property will disappear from all elements pertaining to the
- class in domain of the property.
-5. Changing rdfs:range of a property.
- The following conversions are allowed:
-
- - `xsd:integer` to `xsd:bool`, `xsd:double` and `xsd:string`</listitem></varlistentry>
- - `xsd:double` to `xsd:bool`, `xsd:integer` and `xsd:string`</listitem></varlistentry>
- - `xsd:string` to `xsd:bool`, `xsd:integer` and `xsd:double`</listitem></varlistentry>
-
-6. Adding and removing `nrl:domainIndex` from a class.
-7. Adding and removing `nrl:fulltextIndexed` from a property.
-8. Changing the `nrl:weight` on a property.
-9. Removing `nrl:maxCardinality` from a property.
-
-<!---
- XXX: these need documenting too
- add intermediate superproperties
- add intermediate superclasses
- remove intermediate superproperties
- remove intermediate superclasses
---->
-
-However, there are certain ontology changes that Tracker will find
-incompatible. Either because they are incoherent or resulting into
-situations where it can not deterministically satisfy the change
-in the stored data. Tracker will error out and refuse to do any data
-changes in these situations:
-
-- Properties with rdfs:range being `xsd:bool`, `xsd:date`, `xsd:dateTime`,
- or any other custom class are not convertible. Only conversions
- covered in the list above are accepted.
-- You can not add `rdfs:subClassOf` in classes that are not being
- newly added. You can not remove `rdfs:subClassOf` from classes.
- The only allowed change to `rdfs:subClassOf` is to correct
- subclasses when deleting a class, so they point a common
- superclass.
-- You can not add `rdfs:subPropertyOf` to properties that are not
- being newly added. You can not change an existing
- `rdfs:subPropertyOf` unless it is made to point to a common
- superproperty. You can however remove `rdfs:subPropertyOf` from
- non-new properties.
-- Properties can not move across classes, thus any change in
- `rdfs:domain` is forbidden.
-- You can not add `nrl:maxCardinality` restrictions on properties that
- are not being newly added.
-- You can not add nor remove `nrl:InverseFunctionalProperty` from a
- property that is not being newly added.
-
-The recommendation to bypass these situations is the same for all,
-use different property and class names and use SPARQL to manually
-migrate the old data to the new format if necessary.
-
-High level code is in a better position to solve the
-possible incoherences (e.g. picking a single value if a property
-changes from multiple values to single value). After the manual
-data migration has been completed, the old classes and properties
-can be dropped.
-
-Once changes are made, the nrl:lastModified value should be updated
-so Tracker knows to reprocess the ontology.
diff --git a/docs/reference/libtracker-sparql/examples.md b/docs/reference/libtracker-sparql/examples.md
index 474ac46d8..524637c06 100644
--- a/docs/reference/libtracker-sparql/examples.md
+++ b/docs/reference/libtracker-sparql/examples.md
@@ -1,38 +1,33 @@
----
-title: Examples
-short-description: Examples
-...
+Title: Examples
-# Examples
-
-This chapters shows some real examples of usage of the Tracker
+This document shows some practical examples of using the Tracker
SPARQL Library.
## Querying a remote endpoint
-All SPARQL queries happen on a [](TrackerSparqlConnection), often these
-connections represent a remote endpoints maintained by another process or
+All SPARQL queries happen through a [class@Tracker.SparqlConnection]; often
+these connections represent remote endpoints maintained by another process or
server.
This example demonstrates the use of these connections on a remote
-endpoint. Concretely creating a D-Bus [](TrackerSparqlConnection),
+endpoint: concretely, creating a D-Bus [class@Tracker.SparqlConnection],
creating a prepared statement from a SPARQL query string, executing
the query, and obtaining the query results from the cursor.
-The [](tracker_sparql_connection_query_statement) function can be used
-to obtain a [](TrackerSparqlStatement) object holding a prepared SPARQL
-query that can then be executed with [](tracker_sparql_statement_execute).
+The [method@Tracker.SparqlConnection.query_statement] method can be used
+to obtain a [class@Tracker.SparqlStatement] object holding a prepared SPARQL
+query that can then be executed with [method@Tracker.SparqlStatement.execute].
The query string can contain `~name` placeholders which can be replaced with
arbitrary values before query execution with
-[](tracker_sparql_statement_bind_string) and similar functions.
+[method@Tracker.SparqlStatement.bind_string] and similar functions.
This allows the query string to be parsed only once and executed multiple
times with different parameters, with potentially significant performance gains.
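
For illustration, a minimal sketch of such a query string (assuming the endpoint
uses the Nepomuk ontology); the `~title` placeholder is later bound by name,
e.g. with [method@Tracker.SparqlStatement.bind_string]:

```SPARQL
# ~title is a placeholder; it is bound by name ("title") before execution
SELECT ?song {
  ?song a nmm:MusicPiece ;
        nie:title ~title .
}
```
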
Multiple functions offer asynchronous variants, so the application
main loop is not blocked while these operations are executed.
-Once you end up with the query, remember to call [](tracker_sparql_cursor_close).
-The same applies to [](tracker_sparql_connection_close) when no longer needed.
+Once you are done with the query results, remember to call [method@Tracker.SparqlCursor.close].
+The same applies to [method@Tracker.SparqlConnection.close] when the connection is no longer needed.
<div class="gi-lang-c">
@@ -52,23 +47,23 @@ The same applies to [](tracker_sparql_connection_close) when no longer needed.
## Creating a private database
-Applications may create private stores via the [](tracker_sparql_connection_new)
+Applications may create private RDF triple stores via the [ctor@Tracker.SparqlConnection.new]
constructor.
This example demonstrates the creation of a private store. For simplicity the
example uses the builtin Nepomuk ontology, but the data structures may be defined
by the application; see the documentation on
-[defining ontologies](defining-ontologies.md) for more information about this.
+[creating custom ontologies](ontologies.html#creating-custom-ontologies) for more information about this.
-The example also demonstrates the use of [](TrackerResource) and [](TrackerBatch)
-for insertion of RDF data. It is also possible the direct use of SPARQL update
-strings via [](tracker_sparql_connection_update).
+The example also demonstrates the use of [class@Tracker.Resource] and [class@Tracker.Batch]
+for insertion of RDF data. It is also possible to use [class@Tracker.SparqlStatement] for
+updates through the [method@Tracker.Batch.add_statement] methods, or plain SPARQL strings
+through [method@Tracker.Batch.add_sparql].
Multiple functions offer asynchronous variants, so the application
main loop is not blocked while these operations are executed.
-Once you no longer need the connection, remember to call
-[](tracker_sparql_connection_close) on the [](TrackerSparqlConnection).
+Once you no longer need the connection, remember to call [method@Tracker.SparqlConnection.close].
<div class="gi-lang-c">
@@ -89,13 +84,13 @@ Once you no longer need the connection, remember to call
## Creating a SPARQL endpoint
For some applications and services, it might be desirable to export a
-SPARQL store as an endpoint. Making it possible for other applications to
+RDF triple store as an endpoint, making it possible for other applications to
query the data they hold.
-This example demonstrates the use of [](TrackerEndpoint) subclasses,
+This example demonstrates the use of [class@Tracker.Endpoint] subclasses,
concretely the creation of a D-Bus endpoint, that other applications
may query e.g. through a connection created with
-[](tracker_sparql_connection_bus_new).
+[ctor@Tracker.SparqlConnection.bus_new].
<div class="gi-lang-c">
@@ -118,10 +113,10 @@ may query e.g. through a connection created with
As an additional feature over SPARQL endpoints, Tracker allows
users of private and D-Bus SPARQL connections to receive notifications
on changes to certain RDF classes (those with the
-[nrl:notify](nrl-ontology.md#nrl:notify) property, like
-[nmm:MusicPiece](nmm-ontology.md#nmm:MusicPiece)).
+[nrl:notify](nrl-ontology.html#nrl:notify) property, like
+[nmm:MusicPiece](nmm-ontology.html#nmm:MusicPiece)).
-This example demonstrates the use of [](TrackerNotifier) to receive
+This example demonstrates the use of [class@Tracker.Notifier] to receive
notifications on database updates.
<div class="gi-lang-c">
diff --git a/docs/reference/libtracker-sparql/implementation.md b/docs/reference/libtracker-sparql/implementation.md
deleted file mode 100644
index 998f66dc8..000000000
--- a/docs/reference/libtracker-sparql/implementation.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Implementation details
-short-description: Tracker implementation specifics
-...
-
-# Implementation details
-
-This section highlights the chosen interpretations of the
-SPARQL specification, and specifies how to get the best of
-Tracker's implementation of the SPARQL standard.
diff --git a/docs/reference/libtracker-sparql/limits.md b/docs/reference/libtracker-sparql/limits.md
index 9d514b834..0aac7a6f8 100644
--- a/docs/reference/libtracker-sparql/limits.md
+++ b/docs/reference/libtracker-sparql/limits.md
@@ -1,12 +1,8 @@
----
-title: Limits
-short-description: Implementation limits
-...
+Title: Implementation limits
+Slug: implementation-limits
-# Limits
-
-Tracker is implemented on top of SQLite, and all of its benefits and
-[limits](https://www.sqlite.org/limits.html) apply. This
+Tracker is implemented on top of [SQLite](https://sqlite.org), and all of its
+benefits and [limits](https://sqlite.org/limits.html) apply. This
document will break down how those limits apply to Tracker. Depending on
your distributor, the limits might be changed via SQLite build-time
options.
@@ -65,7 +61,7 @@ SQLite has a limit on the number of databases that can be attached,
defined by `SQLITE_MAX_ATTACHED` (defaults to 10, maximum 125).
Tracker uses attached databases to implement its support for multiple
-graphs, so the maximum amount of graphs for a given [](TrackerSparqlConnection)
+graphs, so the maximum number of graphs for a given [class@Tracker.SparqlConnection]
is equally restricted.
## Limits on glob search
@@ -78,7 +74,7 @@ SPARQL syntax.
SQLite defines a maximum of 999 parameters to be passed as arguments
to a statement, controlled by `SQLITE_MAX_VARIABLE_NUMBER`.
-[](TrackerSparqlStatement) has the same limit.
+[class@Tracker.SparqlStatement] has the same limit.
## Maximum number of pages in a database
@@ -90,4 +86,4 @@ applies per graph.
Integers are 64 bits wide. Floating point numbers have IEEE 754
double precision. Dates/times have microsecond precision, and may
-range between 0001-01-01 00:00:00 and 9999-12-31 23:59:59.
+range between `0001-01-01 00:00:00` and `9999-12-31 23:59:59`.
diff --git a/docs/reference/libtracker-sparql/meson.build b/docs/reference/libtracker-sparql/meson.build
index d55a15b06..019a54bf5 100644
--- a/docs/reference/libtracker-sparql/meson.build
+++ b/docs/reference/libtracker-sparql/meson.build
@@ -45,6 +45,7 @@ nepomuk_ontology_docs = custom_target('nepomuk-docgen',
generated_content = [
'xsd-ontology.md',
'rdf-ontology.md',
+ 'rdfs-ontology.md',
'nrl-ontology.md',
'dc-ontology.md',
diff --git a/docs/reference/libtracker-sparql/mfo-introduction.md b/docs/reference/libtracker-sparql/mfo-introduction.md
index 73b0b09b2..65ee78b83 100644
--- a/docs/reference/libtracker-sparql/mfo-introduction.md
+++ b/docs/reference/libtracker-sparql/mfo-introduction.md
@@ -6,12 +6,12 @@ This ontology is an abstract representation of entries coming from feeds. These
The basic assumption in the ontology is that all these feeds are unidirectional conversations with (from) the author of the content and every post on those channels is a message.
-The source of the posts, the feed itself, is an instance of [mfo:FeedChannel](mfo-ontology.md#mfo:FeedChannel). Each post in that feed will be an instance of [mfo:FeedMessage](mfo-ontology.md#mfo:FeedMessage). The relation between the messages and the channel comes from their superclasses, [nmo:communicationChannel](nmo-ontology.md#nmo:communicationChannel) (taking into account that [mfo:FeedChannel](mfo-ontology.md#mfo:FeedChannel) is a subclass of [nmo:CommunicationChannel](nmo-ontology.md#nmo:CommunicationChannel) and [mfo:FeedMessage](mfo-ontology.md#mfo:FeedMessage) a subclass of [nmo:Message](nmo-ontology.md#nmo:Message).
+The source of the posts, the feed itself, is an instance of [mfo:FeedChannel](mfo-ontology.html#mfo:FeedChannel). Each post in that feed will be an instance of [mfo:FeedMessage](mfo-ontology.html#mfo:FeedMessage). The relation between the messages and the channel comes from their superclasses, via [nmo:communicationChannel](nmo-ontology.html#nmo:communicationChannel) (taking into account that [mfo:FeedChannel](mfo-ontology.html#mfo:FeedChannel) is a subclass of [nmo:CommunicationChannel](nmo-ontology.html#nmo:CommunicationChannel) and [mfo:FeedMessage](mfo-ontology.html#mfo:FeedMessage) a subclass of [nmo:Message](nmo-ontology.html#nmo:Message)).
-A post can be plain text but can contain also more things like links, videos or Mp3. We represent those internal pieces in instances of [mfo:Enclosure](mfo-ontology.md#mfo:Enclosure). This class has properties to link with the remote and local representation of the resource (in case the content has been downloaded).
+A post can be plain text, but it can also contain other things like links, videos or MP3 files. We represent those internal pieces as instances of [mfo:Enclosure](mfo-ontology.html#mfo:Enclosure). This class has properties to link with the remote and local representations of the resource (in case the content has been downloaded).
-Finally, the three important classes (mfo:FeedChannel, mfo:FeedMessage, mfo:Enclosure) are subclasses of [mfo:FeedElement](mfo-ontology.md#mfo:FeedElement), just an abstract class to share the link with mfo:FeedSettings. [mfo:FeedSettings](mfo-ontology.md#mfo:FeedSettings) contains some common configuration options. Not all of them applies to all, but it is a quite cleaner solution. For instance the [mfo:maxSize](mfo-ontology.md#mfo:maxSize) property only makes sense per-enclosure, while the [mfo:updateInterval](mfo-ontology.md#mfo:updateInterval) is useful for the channel.
+Finally, the three important classes (mfo:FeedChannel, mfo:FeedMessage, mfo:Enclosure) are subclasses of [mfo:FeedElement](mfo-ontology.html#mfo:FeedElement), just an abstract class to share the link with mfo:FeedSettings. [mfo:FeedSettings](mfo-ontology.html#mfo:FeedSettings) contains some common configuration options. Not all of them apply to every class, but it is a cleaner solution. For instance, the [mfo:maxSize](mfo-ontology.html#mfo:maxSize) property only makes sense per-enclosure, while [mfo:updateInterval](mfo-ontology.html#mfo:updateInterval) is useful for the channel.
## Special remarks
-In some feeds there can be multiple enclosures together in a group, representing the same resource in different formats, qualities, resolutions, etc. Until further notify, the group will be represented using [nie:identifier](nie-ontology.md#nie:identifier) property. To mark the default enclosure of the group, there is a [mfo:groupDefault](mfo-ontology.md#mfo:groupDefault) property.
+In some feeds there can be multiple enclosures together in a group, representing the same resource in different formats, qualities, resolutions, etc. Until further notice, the group will be represented using the [nie:identifier](nie-ontology.html#nie:identifier) property. To mark the default enclosure of the group, there is a [mfo:groupDefault](mfo-ontology.html#mfo:groupDefault) property.
diff --git a/docs/reference/libtracker-sparql/migrating-2to3.md b/docs/reference/libtracker-sparql/migrating-2to3.md
index 2330e88e6..172b65b70 100644
--- a/docs/reference/libtracker-sparql/migrating-2to3.md
+++ b/docs/reference/libtracker-sparql/migrating-2to3.md
@@ -1,9 +1,5 @@
----
-title: Migrating from 2.x to 3.0
-short-description: Migrating from libtracker-sparql 2.x to 3.0
-...
-
-# Migrating from libtracker-sparql 2.x to 3.0
+Title: Migrating from 2.x to 3.0
+Slug: migrating-2-to-3
Tracker 3.0 is a new major version, containing some large
syntax and conceptual changes.
@@ -21,8 +17,8 @@ in one graph at a time. In other words, this yields the wrong
result:
```SPARQL
-INSERT { GRAPH <A> { <foo> nie:title 'Hello' } }
-INSERT { GRAPH <B> { <foo> nie:title 'Hola' } }
+INSERT { GRAPH <http://example.com/A> { <foo> nie:title 'Hello' } }
+INSERT { GRAPH <http://example.com/B> { <foo> nie:title 'Hola' } }
# We expect 2 rows, 2.x returns 1.
SELECT ?g ?t { GRAPH ?g { <foo> nie:title ?t } }
@@ -37,10 +33,10 @@ skipped if a GRAPH is requested or defined, e.g.:
```SPARQL
# Inserts element into the unnamed graph
-INSERT { <foo> a nfo:FileDataObject }
+INSERT { <http://example.com/foo> a nfo:FileDataObject }
# Inserts element into named graph A
-INSERT { GRAPH <A> { <bar> a nfo:FileDataObject } }
+INSERT { GRAPH <A> { <http://example.com/bar> a nfo:FileDataObject } }
# Queries from all named graphs, A in this case
SELECT ?g ?s { GRAPH ?g { ?s a nfo:FileDataObject } }
@@ -110,18 +106,18 @@ those elements in place. Other ontologies might have similar concepts.
Notifiers are now created through tracker_sparql_connection_create_notifier().
-## Different signature of [](TrackerNotifier::events) signal
+## Different signature of [signal@Tracker.Notifier::events] signal
A TrackerNotifier may hint at changes across multiple endpoints (local or remote);
as a consequence the signal additionally contains 2 string arguments, notifying
about the SPARQL endpoint the changes came from, and the SPARQL graph the changes
apply to.
-## Return value change in `tracker_sparql_connection_update_array()`
+## Return value change in [method@Tracker.SparqlConnection.update_array_async]
This function changed to handle all changes within a single transaction. Returning
an array of errors for each individual update is no longer necessary, so it now
-simply returns a boolean return value.
+simply returns a boolean return value and a single error for the whole transaction.
## No `tracker_sparql_connection_get()/get_async()`
@@ -129,13 +125,13 @@ There is no longer a singleton SPARQL connection. If you are only interested in
tracker-miner-fs data, you can create a dedicated DBus connection to it through:
```c
-conn = tracker_sparql_connection_bus_new ("org.freedesktop.Tracker3.Miner.Files", ...);
+conn = tracker_sparql_connection_bus_new ("org.freedesktop.Tracker3.Miner.Files", …);
```
If you are interested in storing your own data, you can do it through:
```c
-conn = tracker_sparql_connection_new (...);
+conn = tracker_sparql_connection_new (…);
```
Note that you still may access other endpoints in SELECT queries, eg. for
diff --git a/docs/reference/libtracker-sparql/nepomuk.md b/docs/reference/libtracker-sparql/nepomuk.md
deleted file mode 100644
index 4745414b6..000000000
--- a/docs/reference/libtracker-sparql/nepomuk.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Nepomuk
-short-description: The Nepomuk ontology
-...
-
-# Nepomuk
-
-Nepomuk is the swiss army knife of the semantic desktop. It defines
-data structures for almost any kind of data you might want to store
-in a workstation machine.
diff --git a/docs/reference/libtracker-sparql/nie-introduction.md b/docs/reference/libtracker-sparql/nie-introduction.md
index cc386c45b..f79535281 100644
--- a/docs/reference/libtracker-sparql/nie-introduction.md
+++ b/docs/reference/libtracker-sparql/nie-introduction.md
@@ -3,26 +3,26 @@
## Introduction
The core of the NEPOMUK Information Element Ontology and the entire
-Ontology Framework revolves around the concepts of [nie:DataObject](nie-ontology.md#nie:DataObject) and
-[nie:InformationElement](nie-ontology.md#nie:InformationElement). They express the representation
+Ontology Framework revolves around the concepts of [nie:DataObject](nie-ontology.html#nie:DataObject) and
+[nie:InformationElement](nie-ontology.html#nie:InformationElement). They express the representation
and content of a piece of data. Their specialized subclasses (defined
in the other ontologies) can be used to classify
a wide array of desktop resources and express them in RDF.
-[nie:DataObject](nie-ontology.md#nie:DataObject) class represents a bunch of
+The [nie:DataObject](nie-ontology.html#nie:DataObject) class represents a bunch of
bytes somewhere (local or remote), the physical entity that contains
data. The *meaning* (interpretation) of that entity, the
information for the user contained in those bytes (e.g. a music file,
a picture) is represented on the
-[nie:InformationElement](nie-ontology.md#nie:InformationElement) side of the
+[nie:InformationElement](nie-ontology.html#nie:InformationElement) side of the
ontology.
Both sides are linked using the
-property [nie:interpretedAs](nie-ontology.md#nie:interpretedAs) (and its reverse
-[nie:isStoredAs](nie-ontology.md#nie:isStoredAs)), indicating the correspondence
+property [nie:interpretedAs](nie-ontology.html#nie:interpretedAs) (and its reverse
+[nie:isStoredAs](nie-ontology.html#nie:isStoredAs)), indicating the correspondence
between the physical element and its interpretation. There is also a
property to
-link [nie:InformationElement](nie-ontology.md#nie:InformationElement)s,
+link [nie:InformationElement](nie-ontology.html#nie:InformationElement)s,
representing the logical containment between them (like a picture and
its album).
@@ -33,22 +33,22 @@ everything in the Nepomuk set of ontologies, the
properties defined here will be inherited by a lot of classes. A few of them
are worth commenting on, given their special relevance:
- - [nie:title](nie-ontology.md#nie:title): Title or name or short text describing the item
- - [nie:description](nie-ontology.md#nie:description): More verbose comment about the element
- - [nie:language](nie-ontology.md#nie:language): To specify the language of the item.
- - [nie:plainTextContent](nie-ontology.md#nie:plainTextContent): Just the raw content of the file, if it makes sense as text.
- - [nie:generator](nie-ontology.md#nie:generator): Software/Agent that set/produced the information.
- - [nie:usageCounter](nie-ontology.md#nie:usageCounter): Count number of accesses to the information. It can be an indicator of relevance for advanced searches
+ - [nie:title](nie-ontology.html#nie:title): Title or name or short text describing the item
+ - [nie:description](nie-ontology.html#nie:description): More verbose comment about the element
+ - [nie:language](nie-ontology.html#nie:language): To specify the language of the item.
+ - [nie:plainTextContent](nie-ontology.html#nie:plainTextContent): Just the raw content of the file, if it makes sense as text.
+ - [nie:generator](nie-ontology.html#nie:generator): Software/Agent that set/produced the information.
+ - [nie:usageCounter](nie-ontology.html#nie:usageCounter): Counts the number of accesses to the information. It can be an indicator of relevance for advanced searches.
## Date and timestamp representations
There are a few important dates in the life-cycle of a resource. These dates are properties of the nie:InformationElement class, and are inherited by its subclasses:
- - [nie:informationElementDate](nie-ontology.md#nie:informationElementDate): This is an ''abstract'' property that act as superproperty of the other dates. Don't use it directly.
- - [nie:contentLastModified](nie-ontology.md#nie:contentLastModified): Modification time of a resource. Usually the mtime of a local file, or information from the server for online resources.
- - [nie:contentCreated](nie-ontology.md#nie:contentCreated): Creation time of the content. If the contents is created by an application, the same application should set the value of this property. Note that this property can be undefined for resources in the filesystem because the creation time is not available in the most common filesystem formats.
- - [nie:contentAccessed](nie-ontology.md#nie:contentAccessed): For resources coming from the filesystem, this is the usual access time to the file. For other kind of resources (online or virtual), the application accessing it should update its value.
- - [nie:lastRefreshed](nie-ontology.md#nie:lastRefreshed): The time that the content was last refreshed. Usually for remote resources.
+ - [nie:informationElementDate](nie-ontology.html#nie:informationElementDate): This is an *abstract* property that acts as a superproperty of the other dates. Don't use it directly.
+ - [nie:contentLastModified](nie-ontology.html#nie:contentLastModified): Modification time of a resource. Usually the mtime of a local file, or information from the server for online resources.
+ - [nie:contentCreated](nie-ontology.html#nie:contentCreated): Creation time of the content. If the content is created by an application, the same application should set the value of this property. Note that this property can be undefined for resources in the filesystem because the creation time is not available in the most common filesystem formats.
+ - [nie:contentAccessed](nie-ontology.html#nie:contentAccessed): For resources coming from the filesystem, this is the usual access time to the file. For other kinds of resources (online or virtual), the application accessing them should update this value.
+ - [nie:lastRefreshed](nie-ontology.html#nie:lastRefreshed): The time that the content was last refreshed. Usually for remote resources.
## URIs and full representation of a file
@@ -58,7 +58,7 @@ One of the most common resources in a desktop is a file. Given the split between
2. Even when Data Objects and Information Elements are different entities.
3. The URI of the DataObject is the real location of the item (e.g. `file://path/to/file.mp3`)
3. The URI of the InformationElement(s) will be autogenerated IDs.
- 4. Every DataObject must have the property [nie:url](nie-ontology.md#nie:url), that points to the location of the resource, and should be used by any program that wants to access it.
+ 4. Every DataObject must have the property [nie:url](nie-ontology.html#nie:url), that points to the location of the resource, and should be used by any program that wants to access it.
5. The InformationElement and DataObject are related via the nie:isStoredAs / nie:interpretedAs properties.
Here is an example for the image file /home/user/a.jpeg:
diff --git a/docs/reference/libtracker-sparql/nmm-introduction.md b/docs/reference/libtracker-sparql/nmm-introduction.md
index 8b89fc709..82df241fd 100644
--- a/docs/reference/libtracker-sparql/nmm-introduction.md
+++ b/docs/reference/libtracker-sparql/nmm-introduction.md
@@ -10,14 +10,14 @@ Our approach in NMM is to keep the minimum properties that make sense for the us
## Images domain
-The core of images in NMM ontology is the class [nmm:Photo](nmm-ontology.md#nmm:Photo). It is (through a long hierarchy) a [nie:InformationElement](nie-ontology.md#nie:InformationElement), an interpretation of some bytes. It has properties to store the basic information (camera, metering mode, white balance, flash), and inherits from [nfo:Image](nfo-ontology.md#nfo:Image) orientation ([nfo:orientation](nfo-ontology.md#nfo:orientation)) and resolution ([nfo:verticalResolution](nfo-ontology.md#nfo:verticalResolution) and [nfo:horizontalResolution](nfo-ontology.md#nfo:horizontalResolution)).
+The core of images in NMM ontology is the class [nmm:Photo](nmm-ontology.html#nmm:Photo). It is (through a long hierarchy) a [nie:InformationElement](nie-ontology.html#nie:InformationElement), an interpretation of some bytes. It has properties to store the basic information (camera, metering mode, white balance, flash), and inherits from [nfo:Image](nfo-ontology.html#nfo:Image) orientation ([nfo:orientation](nfo-ontology.html#nfo:orientation)) and resolution ([nfo:verticalResolution](nfo-ontology.html#nfo:verticalResolution) and [nfo:horizontalResolution](nfo-ontology.html#nfo:horizontalResolution)).
-Note that for tags, nie:keywords (from nie:InformationElement) can be used, or the [NAO](nao-ontology.md) ontology.
+Note that for tags, nie:keywords (from nie:InformationElement) can be used, or the [NAO](nao-ontology.html) ontology.
## Radio domain
-NMM includes classes and properties to represent analog and digital radio stations. There is a class [nmm:RadioStation](nmm-ontology.md#nmm:RadioStation) on the [nie:InformationElement](nie-ontology.md#nie:InformationElement) side of the ontology, representing what the user sees about that station (genre via PTY codes, icon, plus title inherited from nie:InformationElement)
+NMM includes classes and properties to represent analog and digital radio stations. There is a class [nmm:RadioStation](nmm-ontology.html#nmm:RadioStation) on the [nie:InformationElement](nie-ontology.html#nie:InformationElement) side of the ontology, representing what the user sees about that station (genre via PTY codes, icon, plus title inherited from nie:InformationElement).
-A [nmm:RadioStation](nmm-ontology.md#nmm:RadioStation) can have one or more [nmm:carrier](nmm-ontology.md#nmm:carrier) properties representing the different frequencies (or links when it is digitial) it can be tuned. This property links the station with [nfo:MediaStream](nfo-ontology.md#nfo:MediaStream), but usually it will point to one of the subclasses: [nmm:DigitalRadio](nmm-ontology.md#nmm:DigitalRadio) (if digital) or [nmm:AnalogRadio](nmm-ontology.md#nmm:AnalogRadio) (if analog). An analog station has properties as modulation and frequency, while the digial station has streaming bitrate, encoding or protocol.
+A [nmm:RadioStation](nmm-ontology.html#nmm:RadioStation) can have one or more [nmm:carrier](nmm-ontology.html#nmm:carrier) properties representing the different frequencies (or links when it is digital) it can be tuned to. This property links the station with [nfo:MediaStream](nfo-ontology.html#nfo:MediaStream), but usually it will point to one of the subclasses: [nmm:DigitalRadio](nmm-ontology.html#nmm:DigitalRadio) (if digital) or [nmm:AnalogRadio](nmm-ontology.html#nmm:AnalogRadio) (if analog). An analog station has properties such as modulation and frequency, while the digital station has streaming bitrate, encoding or protocol.
-Note that nfo:MediaStream refers to a flux of bytes/data, and it is on the [nie:DataObject](nie-ontology.md#nie:DataObject)<link linkend="nie-DataObject">nie:DataObject</link> side of the ontology.
+Note that nfo:MediaStream refers to a flow of bytes/data, and it is on the [nie:DataObject](nie-ontology.html#nie:DataObject) side of the ontology.
diff --git a/docs/reference/libtracker-sparql/ontologies.md b/docs/reference/libtracker-sparql/ontologies.md
index 884e7c481..bb931cc36 100644
--- a/docs/reference/libtracker-sparql/ontologies.md
+++ b/docs/reference/libtracker-sparql/ontologies.md
@@ -1,15 +1,417 @@
----
-title: Ontologies
-short-description: Structure of the stored data
-...
+Title: Ontologies
-# Ontologies
-
-Ontologies define the structure of the data that the triplestore
+Ontologies define the structure of the data that the RDF triple store
can hold. An ontology defines the possible resource classes,
the properties these classes may have, and the relation between
the different classes as expressed by these properties.
-Tracker defines stock ontologies that are ready for use, but
-also allows developers to define ontologies that are tailored
+# Base ontology
+
+The base ontology is the seed for defining application-specific
+ontologies. It provides the building blocks to build ontologies
+upon, like the definitions of classes, properties, and literal
+types themselves. The base ontology is based on
+[RDF Schema](https://www.w3.org/TR/rdf-schema/).
+
+It is made up of several components:
+
+- [XML Schema (XSD)](xsd-ontology.html) defines the basic literal types.
+- [Resource description framework](rdf-ontology.html) defines properties,
+ lists and language-tagged strings.
+- [RDF Schema](rdfs-ontology.html) defines classes and inheritance.
+- [Nepomuk Resource Language (NRL)](nrl-ontology.html) defines resource
+ cardinality and database-level indexes.
+- [Dublin core metadata (DC)](dc-ontology.html) defines a common set of
+ document-oriented superproperties for RDF resources.
+
+# Nepomuk
+
+Nepomuk is the Swiss Army knife of the semantic desktop, similar
+in scope to [Schema.org](https://schema.org). It defines
+data structures for almost any kind of data you might want to store
+in a personal computer.
+
+It is split into several domains:
+
+- [Nepomuk Information Element (NIE)](nie-ontology.html) is the
+ core of Nepomuk. It settles the basic principles like the split
+ between "container" and "content", and defines the base
+ [nie:DataObject](nie-ontology.html#nie:DataObject) and
+ [nie:InformationElement](nie-ontology.html#nie:InformationElement) objects
+ that represent this split.
+- [Nepomuk File Ontology (NFO)](nfo-ontology.html) describes the basic
+ filesystem-oriented objects.
+- [Nepomuk Multimedia (NMM)](nmm-ontology.html) describes multi-media data.
+- [Nepomuk Contacts Ontology (NCO)](nco-ontology.html) describes contacts and
+ addresses.
+- [Libosinfo ontology](osinfo-ontology.html) describes OS images.
+- [Maemo Feeds Ontology (MFO)](mfo-ontology.html) describes feeds.
+- [Simplified Location Ontology (SLO)](slo-ontology.html) extends metadata
+ with geolocation tagging.
+- [Nepomuk Annotation Ontology (NAO)](nao-ontology.html) extends metadata
+ with annotations.
+- Other [Tracker extensions](tracker-ontology.html) to further annotate
+ data and link to external services.
+
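+How these pieces fit together is easiest to see in a query. As a rough
+sketch (assuming a store that uses the stock Nepomuk ontology, such as the
+one maintained by Tracker Miner FS), NMM and NIE terms combine naturally:
+
+```SPARQL
+# Music pieces with their title and the URL of the file storing them
+SELECT ?title ?url {
+  ?song a nmm:MusicPiece ;
+        nie:title ?title ;
+        nie:isStoredAs ?file .
+  ?file nie:url ?url .
+}
+```
+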
+# Creating custom ontologies
+
+Tracker also allows developers to define ontologies that are tailored
for their use.
+
+Ontologies are themselves made of RDF data in the [Turtle](https://www.w3.org/TR/turtle/)
+format, in files with the `.ontology` extension. Custom-made ontologies build upon the
+[base ontology](#base-ontology) provided for this purpose.
+
+Ontologies may be split into multiple documents in the same directory. The individual
+ontology files do not need to be self-consistent (e.g. they may use definitions from
+other files), but all the ontology files as a whole must be self-consistent.
+Tracker will not open or create an RDF triple store if the ontology is not
+consistent, and will roll back any changes if necessary.
+
+Tracker loads the ontology files in alphanumeric order. It is advisable
+to give them a numbered prefix, so that they load in a consistent
+order despite future additions.
+
+## Defining a namespace
+
+A namespace is the topmost layer of an individual ontology; it will
+contain all classes and properties defined by it. A namespace can be
+defined as follows:
+
+```turtle
+# These prefixes will be used in the definition of the ontology,
+# thus must be explicitly defined
+@prefix nrl: <http://tracker.api.gnome.org/ontology/v3/nrl#> .
+@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
+@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+
+# This is our example namespace
+@prefix ex: <http://example.org/#> .
+
+ex: a nrl:Namespace, nrl:Ontology ;
+	nrl:prefix "ex" ;
+	rdfs:comment "example ontology" ;
+	nrl:lastModified "2017-01-01T15:00:00Z" .
+```
+
+## Defining classes
+
+Classes are the base of an ontology; all stored resources must define
+themselves as "being" at least one of these classes. They all derive
+from the base rdfs:Resource type. For example, to define classes representing
+animals and plants, you can do:
+
+```turtle
+ex:Eukaryote a rdfs:Class;
+	rdfs:subClassOf rdfs:Resource;
+	rdfs:comment "A eukaryote".
+```
+
+By convention all classes use CamelCase names, although class names
+are not restricted. The allowed charset is UTF-8.
+
+Declaring subclasses is possible:
+
+```turtle
+ex:Animal a rdfs:Class;
+ rdfs:subClassOf ex:Eukaryote;
+ rdfs:comment "An animal".
+
+ex:Plant a rdfs:Class;
+ rdfs:subClassOf ex:Eukaryote;
+ rdfs:comment "A plant".
+
+ex:Mammal a rdfs:Class;
+ rdfs:subClassOf ex:Animal;
+ rdfs:comment "A mammal".
+```
+
+With such classes defined, resources may be inserted into the endpoint,
+e.g. with the following SPARQL:
+
+```SPARQL
+INSERT DATA { <merry> a ex:Mammal }
+INSERT DATA { <treebeard> a ex:Animal, ex:Plant }
+```
+
+Note that multiple inheritance is possible; resources will just inherit
+all properties from all of their classes and superclasses.
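+
+As a small illustration of how class inheritance plays out in queries, both
+resources inserted above are also returned when querying for their common
+superclass:
+
+```SPARQL
+# Returns both <merry> and <treebeard>, since ex:Mammal, ex:Animal
+# and ex:Plant are all subclasses of ex:Eukaryote
+SELECT ?s { ?s a ex:Eukaryote }
+```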
+
+## Defining properties
+
+Properties relate to a class, so all resources pertaining to that class
+can define values for them.
+
+```turtle
+ex:cromosomes a rdf:Property;
+ rdfs:domain ex:Eukaryote;
+ rdfs:range xsd:integer.
+
+ex:unicellular a rdf:Property;
+	rdfs:domain ex:Eukaryote;
+	rdfs:range xsd:boolean.
+
+ex:dateOfBirth a rdf:Property;
+	rdfs:domain ex:Mammal;
+	rdfs:range xsd:dateTime.
+```
+
+The class the property belongs to is defined by `rdfs:domain`, while the
+data type contained is defined by `rdfs:range`. By convention all
+properties use dromedaryCase names, although property names are not
+restricted. The allowed charset is UTF-8.
+
+The following basic types are supported:
+
+- `xsd:boolean`
+- `xsd:string` and `rdf:langString`
+- `xsd:integer`, ranging from -2^63 to 2^63-1.
+- `xsd:double`, able to store an 8 byte IEEE 754 floating point number.
+- `xsd:date` and `xsd:dateTime`, able to store dates and times since
+ January 1st 1 AD, with microsecond resolution.
+
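+For illustration, literals of these types can be written with explicit
+datatypes. This is a sketch with made-up values on a hypothetical `<dolly>`
+resource; the `xsd:` prefix comes with the base ontology:
+
+```SPARQL
+INSERT DATA { <dolly> a ex:Mammal ;
+              ex:unicellular false ;
+              ex:cromosomes 54 ;
+              ex:dateOfBirth "1996-07-05T00:00:00Z"^^xsd:dateTime }
+```
+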
+Of course, properties can also point to resources of the same or
+other classes, so stored resources can form a graph:
+
+```turtle
+ex:parent a rdf:Property;
+	rdfs:domain ex:Mammal;
+	rdfs:range ex:Mammal.
+
+ex:pet a rdf:Property;
+	rdfs:domain ex:Mammal;
+	rdfs:range ex:Eukaryote.
+```
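+
+As a small sketch reusing the properties just defined, resource-valued
+properties are inserted and traversed like any other:
+
+```SPARQL
+INSERT DATA { <rex> a ex:Mammal .
+              <peter> a ex:Mammal ;
+                      ex:pet <rex> }
+
+# Find all of <peter>'s pets
+SELECT ?pet { <peter> ex:pet ?pet }
+```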
+
+There is also inheritance of properties; an example would be a property
+in a subclass concretizing a more generic property from a superclass.
+
+```turtle
+ex:geneticInformation a rdf:Property;
+	rdfs:domain ex:Eukaryote;
+	rdfs:range xsd:string.
+
+ex:dna a rdf:Property;
+ rdfs:domain ex:Mammal;
+ rdfs:range xsd:string;
+ rdfs:subPropertyOf ex:geneticInformation.
+```
+
+SPARQL queries are expected to provide the same result when queried
+for a property or one of its superproperties.
+
+```SPARQL
+# These two queries should provide the exact same result(s)
+SELECT ?animal { ?animal a ex:Animal;
+                 ex:geneticInformation "AGCT" }
+SELECT ?animal { ?animal a ex:Animal;
+                 ex:dna "AGCT" }
+```
+
+## Defining cardinality of properties
+
+By default, properties are multivalued; there are no restrictions on
+the number of values a property can store.
+
+```SPARQL
+INSERT DATA {
+  <cat> a ex:Mammal .
+  <dog> a ex:Mammal .
+
+  <peter> a ex:Mammal ;
+          ex:pet <cat>, <dog>
+}
+```
+
+Where this is not desirable, cardinality can be limited on properties
+through `nrl:maxCardinality`.
+
+```turtle
+ex:cromosomes a rdf:Property;
+ rdfs:domain ex:Eukaryote;
+ rdfs:range xsd:integer;
+ nrl:maxCardinality 1.
+```
+
+An error will be raised if a SPARQL update on the endpoint ends up
+inserting multiple values for the property.
+
+```SPARQL
+# This will fail
+INSERT DATA { <cat> a ex:Mammal;
+ ex:cromosomes 38;
+ ex:cromosomes 42 }
+
+# This will succeed
+INSERT DATA { <donald> a ex:Mammal;
+ ex:cromosomes 47 }
+```
+
+Tracker does not implement support for maximum cardinalities
+other than 1.
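+
+Because of this restriction, a single-valued property is typically updated
+with a DELETE/INSERT pattern rather than a plain INSERT, so that any previous
+value is replaced. A sketch:
+
+```SPARQL
+# Replace the single allowed value, whether or not one was already set
+DELETE { <cat> ex:cromosomes ?old }
+INSERT { <cat> ex:cromosomes 38 }
+WHERE  { OPTIONAL { <cat> ex:cromosomes ?old } }
+```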
+
+<!---
+ XXX: explain how cardinality affects subproperties, superproperties
+--->
+
+## Defining uniqueness
+
+It is desirable for certain properties to keep their values unique
+across all resources; this can be expressed by defining the properties
+as `nrl:InverseFunctionalProperty`.
+
+```turtle
+ex:geneticInformation a rdf:Property, nrl:InverseFunctionalProperty;
+	rdfs:domain ex:Eukaryote;
+	rdfs:range xsd:string.
+```
+
+With that in place, no two resources can have the same value on the
+property.
+
+```SPARQL
+# First insertion, this will succeed
+INSERT DATA { <drosophila> a ex:Eukaryote;
+ ex:geneticInformation "AGCT" }
+
+# This will fail
+INSERT DATA { <melanogaster> a ex:Eukaryote;
+ ex:geneticInformation "AGCT" }
+```
+
+<!---
+ XXX: explain how inverse functional properties affect sub/superproperties
+--->
+
+## Defining indexes
+
+It may be the case that SPARQL queries performed on the endpoint are
+known to match, sort, or filter on certain properties more often than others.
+In this case, the ontology may use nrl:domainIndex in the class definition:
+
+```turtle
+# Make queries on ex:dateOfBirth faster
+ex:Mammal a rdfs:Class;
+ rdfs:subClassOf ex:Animal;
+ rdfs:comment "A mammal";
+ nrl:domainIndex ex:dateOfBirth.
+```
+
+Classes may define multiple domain indexes.
+
+**Note**: Be frugal with indexes; do not add them proactively. An index in the wrong
+place might not improve query performance at all, but every index comes at
+a cost in disk size.
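+
+For instance, with the index defined above, queries that filter or sort on
+`ex:dateOfBirth`, like the following sketch, are the ones expected to benefit:
+
+```SPARQL
+SELECT ?mammal ?dob {
+  ?mammal a ex:Mammal ;
+          ex:dateOfBirth ?dob .
+  FILTER (?dob > "2000-01-01T00:00:00Z"^^xsd:dateTime)
+}
+ORDER BY ?dob
+```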
+
+## Defining full-text search properties
+
+Tracker provides nonstandard full-text search capabilities. In order to use
+them, string properties can use `nrl:fulltextIndexed`:
+
+```turtle
+ex:name a rdf:Property;
+ rdfs:domain ex:Mammal;
+ rdfs:range xsd:string;
+ nrl:fulltextIndexed true;
+ nrl:weight 10.
+```
+
+Weighting can also be applied, so certain properties rank higher than others
+in full-text search queries. With `nrl:fulltextIndexed` in place, SPARQL
+queries may use full-text search capabilities:
+
+```SPARQL
+SELECT ?mammal { ?mammal a ex:Mammal;
+                 fts:match "timmy" }
+```
+
+## Predefined elements
+
+It may be desirable for the ontology to offer predefined elements of a
+certain class, which can then be used by the endpoint.
+
+```turtle
+ex:self a ex:Mammal.
+```
+
+These predefined elements do not differ in use from the elements of the
+same class that are inserted in the endpoint.
+
+```SPARQL
+INSERT DATA { ex:self ex:pet <cat> .
+              <cat> ex:pet ex:self }
+```
+
+## Updating an ontology
+
+As software evolves, sometimes changes in the ontology are unavoidable.
+Tracker can transparently handle certain ontology changes on existing
+databases.
+
+1. Adding a class.
+2. Removing a class.
+ All resources will be removed from this class, and all related
+ properties will disappear.
+3. Adding a property.
+4. Removing a property.
+   The property will disappear from all elements pertaining to the
+   class in the domain of the property.
+5. Changing rdfs:range of a property.
+   The following conversions are allowed:
+
+   - `xsd:integer` to `xsd:boolean`, `xsd:double` and `xsd:string`
+   - `xsd:double` to `xsd:boolean`, `xsd:integer` and `xsd:string`
+   - `xsd:string` to `xsd:boolean`, `xsd:integer` and `xsd:double`
+
+6. Adding and removing `nrl:domainIndex` from a class.
+7. Adding and removing `nrl:fulltextIndexed` from a property.
+8. Changing the `nrl:weight` on a property.
+9. Removing `nrl:maxCardinality` from a property.
+
+<!---
+ XXX: these need documenting too
+ add intermediate superproperties
+ add intermediate superclasses
+ remove intermediate superproperties
+ remove intermediate superclasses
+--->
+
+However, there are certain ontology changes that Tracker will find
+incompatible, either because they are incoherent or because they result in
+situations where it cannot deterministically satisfy the change
+in the stored data. Tracker will error out and refuse to make any data
+changes in these situations:
+
+- Properties with rdfs:range being `xsd:boolean`, `xsd:date`, `xsd:dateTime`,
+ or any other custom class are not convertible. Only conversions
+ covered in the list above are accepted.
+- You cannot add `rdfs:subClassOf` to classes that are not being
+  newly added. You cannot remove `rdfs:subClassOf` from classes.
+  The only allowed change to `rdfs:subClassOf` is to correct
+  subclasses when deleting a class, so that they point to a common
+  superclass.
+- You cannot add `rdfs:subPropertyOf` to properties that are not
+  being newly added. You cannot change an existing
+  `rdfs:subPropertyOf` unless it is made to point to a common
+  superproperty. You can, however, remove `rdfs:subPropertyOf` from
+  non-new properties.
+- Properties cannot move across classes, thus any change in
+  `rdfs:domain` is forbidden.
+- You cannot add `nrl:maxCardinality` restrictions on properties that
+  are not being newly added.
+- You cannot add or remove `nrl:InverseFunctionalProperty` on a
+  property that is not being newly added.
+
+The recommendation to work around these situations is the same in all cases:
+use different property and class names, and use SPARQL to manually
+migrate the old data to the new format if necessary.
+
+High-level code is in a better position to solve the
+possible incoherences (e.g. picking a single value if a property
+changes from multiple values to a single value). After the manual
+data migration has been completed, the old classes and properties
+can be dropped.
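+
+As a hypothetical sketch of such a migration, assume a made-up `ex:oldName`
+property is being replaced by the `ex:name` property defined earlier:
+
+```SPARQL
+# Copy values over to the new property, then clear the old one so the
+# ex:oldName property can eventually be dropped from the ontology
+INSERT { ?m ex:name ?v }
+WHERE  { ?m ex:oldName ?v };
+
+DELETE WHERE { ?m ex:oldName ?v }
+```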
+
+Once changes are made, the `nrl:lastModified` value should be updated
+so Tracker knows to reprocess the ontology.
diff --git a/docs/reference/libtracker-sparql/overview.md b/docs/reference/libtracker-sparql/overview.md
index c14769342..949d81c3f 100644
--- a/docs/reference/libtracker-sparql/overview.md
+++ b/docs/reference/libtracker-sparql/overview.md
@@ -1,9 +1,4 @@
----
-title: Overview
-short-description: Library Overview
-...
-
-# Overview
+Title: Overview
Tracker SPARQL allows creating and connecting to one or more
triplestore databases. It is used by the
@@ -12,13 +7,13 @@ and can also store and publish any kind of app data.
Querying data is done using the SPARQL graph query language. See the
[examples](examples.html) to find out how this works.
-Storing data can also be done using SPARQL, or using the [](TrackerResource)
+Storing data can also be done using SPARQL, or using the [class@Tracker.Resource]
API.
-You can share a database over D-Bus using the [](TrackerEndpoint) API,
+You can share a database over D-Bus using the [class@Tracker.Endpoint] API,
allowing other libtracker-sparql users to query from it, either
by referencing it in a `SELECT { SERVICE ... }` query, or by connecting
-directly with [](tracker_sparql_connection_bus_new).
+directly with [ctor@Tracker.SparqlConnection.bus_new].
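+
+For example, a query to another local endpoint might look like this
+sketch, where `org.example.Endpoint` is an illustrative D-Bus name:
+
+```SPARQL
+# Fetch every resource published by the remote D-Bus endpoint.
+SELECT ?s {
+  SERVICE <dbus:org.example.Endpoint> {
+    ?s a rdfs:Resource
+  }
+}
+```
+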
Tracker SPARQL partitions the database into multiple graphs.
You can implement access control restrictions based on
@@ -28,14 +23,14 @@ The number of graphs is [limited](limits.html).
## Connection methods
You can create and access a private store using
-[](tracker_sparql_connection_new). This is useful to store
+[ctor@Tracker.SparqlConnection.new]. This is useful to store
app-specific data.
To connect to another database on the same local machine, such as the
-one exposed by Tracker Miner FS, use [](tracker_sparql_connection_bus_new).
+one exposed by Tracker Miner FS, use [ctor@Tracker.SparqlConnection.bus_new].
To connect to a database on a remote machine, use
-[](tracker_sparql_connection_remote_new). This can be used to query online
+[ctor@Tracker.SparqlConnection.remote_new]. This can be used to query online
databases that provide a SPARQL endpoint, such as [DBpedia](https://wiki.dbpedia.org/about).
## Connecting from Flatpak
@@ -48,7 +43,7 @@ The app's Flatpak manifest needs to specify which graph(s) the app will
access. See the [example app](https://gitlab.gnome.org/GNOME/tracker/-/blob/master/examples/flatpak/org.example.TrackerSandbox.json)
and the [portal documentation](https://gnome.pages.gitlab.gnome.org/tracker/docs/commandline/#tracker-xdg-portal-3) to see how.
-No code changes are needed in the app, as [](tracker_sparql_connection_bus_new)
+No code changes are needed in the app, as [ctor@Tracker.SparqlConnection.bus_new]
will automatically try to connect via the portal if it can't talk to the
given D-Bus name directly.
diff --git a/docs/reference/libtracker-sparql/performance.md b/docs/reference/libtracker-sparql/performance.md
index 3994d850f..942b2de57 100644
--- a/docs/reference/libtracker-sparql/performance.md
+++ b/docs/reference/libtracker-sparql/performance.md
@@ -1,13 +1,9 @@
----
-title: Performance dos and donts
-short-description: Performance dos and donts
-...
-
-# Performance advise
+Title: Performance dos and don'ts
+Slug: performance-advise
SPARQL is a very powerful query language. As might be
expected, this means there are areas where performance is
-sacrificed for versatility.
+inherently sacrificed for versatility.
These are some tips to get the best of SPARQL as implemented
by Tracker.
@@ -20,7 +16,7 @@ Queries with unrestricted predicates are those like:
SELECT ?p { <a> ?p 42 }
```
-They involve lookups across all possible triples of
+These involve lookups across all possible triples of
an object, which roughly translates to a traversal
through all tables and columns.
@@ -30,7 +26,8 @@ The most pathological case is:
SELECT ?s ?p ?o { ?s ?p ?o }
```
-Which does retrieve every triple existing in the store.
+This query retrieves every triple existing in the RDF
+triple store.
Queries with unrestricted predicates are most useful to
introspect resources, or the triple store in its entirety.
@@ -76,11 +73,11 @@ The graph(s) may be specified through
SPARQL syntax for graphs. For example:
```SPARQL
-WITH <G> SELECT ?u { ?u a rdfs:Resource }
-WITH <G> SELECT ?g ?u { GRAPH ?g { ?u a rdfs:Resource }}
+WITH <http://example.com/Graph> SELECT ?u { ?u a rdfs:Resource }
+SELECT ?g ?u FROM NAMED <http://example.com/Graph> { GRAPH ?g { ?u a rdfs:Resource }}
```
-## Avoid substring matching
+## Avoid globs and substring matching
Matching for regexp/glob/substrings defeats any index that text fields
could have. For example:
@@ -88,7 +85,7 @@ could have. For example:
```SPARQL
SELECT ?u {
?u nie:title ?title .
- FILTER (CONTAINS (?title, "sideshow"))
+ FILTER (CONTAINS (?title, "banana"))
}
```
@@ -97,11 +94,11 @@ encouraged to use fulltext search for finding matches within strings
where possible, for example:
```SPARQL
-SELECT ?u { ?u fts:match "sideshow" }
+SELECT ?u { ?u fts:match "banana" }
```
## Use TrackerSparqlStatement
-Using [](TrackerSparqlStatement) allows to parse and compile
+Using [class@Tracker.SparqlStatement] allows you to parse and compile
a query once, and reuse it many times. Its usage
is recommended wherever possible.
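+
+A minimal sketch of a query intended for [class@Tracker.SparqlStatement],
+assuming the Nepomuk `nie:title` property; the `~title` placeholder is a
+statement parameter that can be bound to a different value before each
+execution, so the query text only needs to be compiled once:
+
+```SPARQL
+# ~title is bound through the TrackerSparqlStatement API at run time.
+SELECT ?u {
+  ?u nie:title ~title
+}
+```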
diff --git a/docs/reference/libtracker-sparql/security.md b/docs/reference/libtracker-sparql/security.md
index 6ea2fa3a2..034bda365 100644
--- a/docs/reference/libtracker-sparql/security.md
+++ b/docs/reference/libtracker-sparql/security.md
@@ -1,19 +1,17 @@
----
-title: Security
-short-description: Security considerations
-...
-
-# Security considerations
+Title: Security considerations
+Slug: security-considerations
The SPARQL 1.1 specifications have a number of informative `Security
-considerations` sections. This section describes how those possibly
-apply to the implementation of Tracker.
+considerations` sections. This is an informative document describing how
+those may or may not apply to the implementation of Tracker.
Note that most of these considerations derive from situations where
-a SPARQL store is exposed through a public endpoint, while Tracker
+an RDF triple store is exposed through a public endpoint, while Tracker
does not do that by default. Users should be careful about creating
endpoints. For D-Bus endpoints, access through the portal is encouraged.
+# SPARQL specifications
+
## Queries
(From [https://www.w3.org/TR/2013/REC-sparql11-query-20130321/#security](https://www.w3.org/TR/2013/REC-sparql11-query-20130321/#security))
@@ -81,7 +79,7 @@ sandboxed process have all SERVICE access restricted.
Tracker developers encourage that all access to endpoints created on D-Bus
happen through the portal, and that all HTTP endpoints validate the provenance
-of the requests through the [](TrackerEndpointHttp::block-remote-address)
+of the requests through the [signal@Tracker.EndpointHttp::block_remote_address]
signal to limit access to resources.
(From [https://www.w3.org/TR/sparql11-protocol/#policy-security](https://www.w3.org/TR/sparql11-protocol/#policy-security))
@@ -103,7 +101,7 @@ processing services.
Tracker does not apply any time or frequency rate limits to queries. HTTP
endpoints may perform the latter through the
-[](TrackerEndpointHttp::block-remote-address) signal.
+[signal@Tracker.EndpointHttp::block_remote_address] signal.
## Updates
@@ -155,6 +153,27 @@ all the way to the public API. As an additional layer of security, readonly
queries happen on readonly database connections. It is essentially not possible
to perform any data change from the query APIs.
+## IRIs
+
+IRIs are a cornerstone of RDF data, since individual RDF resources are typically
+named through them. Quoting [https://www.w3.org/TR/sparql11-protocol/#policy-security](https://www.w3.org/TR/sparql11-protocol/#policy-security):
+
+```
+Different IRIs may have the same appearance. Characters in different scripts
+may look similar (a Cyrillic "о" may appear similar to a Latin "o"). A
+character followed by combining characters may have the same visual
+representation as another character (LATIN SMALL LETTER E followed by
+COMBINING ACUTE ACCENT has the same visual representation as LATIN SMALL
+LETTER E WITH ACUTE). Users of SPARQL must take care to construct queries
+with IRIs that match the IRIs in the data. Further information about matching
+of similar characters can be found in Unicode Security Considerations
+[UNISEC] and Internationalized Resource Identifiers (IRIs) [RFC3987]
+Section 8.
+```
+
+Whether these situations might be a source of confusion or mischief, or are
+even possible, depends on how those IRIs are created, used, displayed or
+inserted.
# API user considerations
@@ -165,27 +184,11 @@ considerations and take some precautions:
* For local D-Bus endpoints, consider using a graph partitioning scheme that
   makes it easy to police access to the data when accessed through the
portal.
- * Avoid the possibility of injection attacks. Use [](TrackerSparqlStatement)
+ * Avoid the possibility of injection attacks. Use [class@Tracker.SparqlStatement]
and avoid string-based approaches to build SPARQL queries from user input.
- * Consider that IRIs are susceptible to homograph attacks. Quoting
- https://www.w3.org/TR/sparql11-protocol/#policy-security:
-
- ```
- Different IRIs may have the same appearance. Characters in different scripts
- may look similar (a Cyrillic "о" may appear similar to a Latin "o"). A
- character followed by combining characters may have the same visual
- representation as another character (LATIN SMALL LETTER E followed by
- COMBINING ACUTE ACCENT has the same visual representation as LATIN SMALL
- LETTER E WITH ACUTE). Users of SPARQL must take care to construct queries
- with IRIs that match the IRIs in the data. Further information about matching
- of similar characters can be found in Unicode Security Considerations
- [UNISEC] and Internationalized Resource Identifiers (IRIs) [RFC3987]
- Section 8.
- ```
-
- The situations where this might be a source of confusion or mischief, or even
- be possible depends on how those IRIs are created, used, displayed or
- inserted.
+ * Consider that IRIs describing RDF resources are susceptible to homograph
+   attacks as described above. Developers are not exempt from validating
+ external input that might end up stored as-is.
# Feature grid
diff --git a/docs/reference/libtracker-sparql/sparql-and-tracker.md b/docs/reference/libtracker-sparql/sparql-and-tracker.md
index 436a1c74d..b1de7eb27 100644
--- a/docs/reference/libtracker-sparql/sparql-and-tracker.md
+++ b/docs/reference/libtracker-sparql/sparql-and-tracker.md
@@ -1,12 +1,9 @@
----
-title: SPARQL as understood by Tracker
-short-description: SPARQL as understood by Tracker
-...
+Title: SPARQL as understood by Tracker
+Slug: sparql-and-tracker
-# SPARQL as understood by Tracker
-
-This section describes the choices made by Tracker in its interpretation
-of the SPARQL documents, as well as its extensions and divergences.
+This document describes the choices made by Tracker in its interpretation
+of the SPARQL documents, as well as the ways it diverges from or extends
+the specifications.
## The default graph
@@ -37,13 +34,13 @@ the RDF abstract syntax, a blank node is just a unique node that can
be used in one or more RDF statements, but has no intrinsic name.
```
-By default Tracker treats blank nodes as an URI generator instead. The
+By default, Tracker instead treats blank nodes as a URI generator. The
string referencing a blank node (e.g. as returned by cursors) permanently
identifies that blank node and can be used as a URI reference in
future queries.
The blank node behavior defined in the RDF/SPARQL specifications can
-be enabled with the [](TRACKER_SPARQL_CONNECTION_FLAGS_ANONYMOUS_BNODES)
+be enabled with the #TRACKER_SPARQL_CONNECTION_FLAGS_ANONYMOUS_BNODES
flag.
## Property functions
@@ -277,7 +274,7 @@ are treated as parameters at query time, so it is possible
to prepare a query statement once and reuse it many times
assigning different values to those parameters at query time.
-See [](TrackerSparqlStatement) documentation for more information.
+See [class@Tracker.SparqlStatement] documentation for more information.
## Full-text search
@@ -305,5 +302,5 @@ The DESCRIBE form returns a single result RDF graph containing RDF data about re
```
In order to allow serialization to RDF formats that allow expressing graph information
-(e.g. Trig), DESCRIBE resultsets have 4 columns for subject / predicate / object / graph
-information.
+(e.g. [Trig](https://www.w3.org/TR/trig/)), DESCRIBE resultsets have 4 columns for
+subject / predicate / object / graph information.
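+
+A minimal sketch, using an illustrative resource URI:
+
+```SPARQL
+# The result contains subject / predicate / object / graph columns for
+# every triple describing the given resource.
+DESCRIBE <http://example.org/resource>
+```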
diff --git a/docs/reference/libtracker-sparql/sparql-functions.md b/docs/reference/libtracker-sparql/sparql-functions.md
index 939cac37b..be8a48a8c 100644
--- a/docs/reference/libtracker-sparql/sparql-functions.md
+++ b/docs/reference/libtracker-sparql/sparql-functions.md
@@ -1,17 +1,14 @@
----
-title: Builtin SPARQL functions
-short-description: Builtin SPARQL functions
-...
-
-# Builtin SPARQL functions
+Title: Builtin SPARQL functions
+Slug: sparql-functions
Besides the functions built into the SPARQL 1.1 syntax, type casts
and functional properties, Tracker supports a number of SPARQL
-functions. Some of these functions have correspondences in XPath.
+functions. Some of these functions have correspondences in
+[XPath](https://www.w3.org/TR/xpath-31/).
-## String functions
+# String functions
-### `fn:lower-case`
+## `fn:lower-case`
```SPARQL
fn:lower-case (?string)
@@ -19,7 +16,7 @@ fn:lower-case (?string)
Converts a string to lowercase, equivalent to `LCASE`.
-### `fn:upper-case`
+## `fn:upper-case`
```SPARQL
fn:upper-case (?string)
@@ -27,7 +24,7 @@ fn:upper-case (?string)
Converts a string to uppercase, equivalent to `UCASE`.
-### `fn:contains`
+## `fn:contains`
```SPARQL
fn:contains (?haystack, ?needle)
@@ -36,7 +33,7 @@ fn:contains (?haystack, ?needle)
Returns a boolean indicating whether `?needle` is
found in `?haystack`. Equivalent to `CONTAINS`.
-### `fn:starts-with`
+## `fn:starts-with`
```SPARQL
fn:starts-with (?string, ?prefix)
@@ -45,7 +42,7 @@ fn:starts-with (?string, ?prefix)
Returns a boolean indicating whether `?string`
starts with `?prefix`. Equivalent to `STRSTARTS`.
-### `fn:ends-with`
+## `fn:ends-with`
```SPARQL
fn:ends-with (?string, ?suffix)
@@ -54,7 +51,7 @@ fn:ends-with (?string, ?suffix)
Returns a boolean indicating whether `?string`
ends with `?suffix`. Equivalent to `STRENDS`.
-### `fn:substring`
+## `fn:substring`
```SPARQL
fn:substring (?string, ?startLoc)
@@ -65,7 +62,7 @@ Returns a substring delimited by the integer
`?startLoc` and `?endLoc` arguments. If `?endLoc`
is omitted, the end of the string is used.
-### `fn:concat`
+## `fn:concat`
```SPARQL
fn:concat (?string1, ?string2, ..., ?stringN)
@@ -74,7 +71,7 @@ fn:concat (?string1, ?string2, ..., ?stringN)
Takes a variable number of arguments and returns a string concatenation
of all the argument values. Equivalent to `CONCAT`.
-### `fn:string-join`
+## `fn:string-join`
```SPARQL
fn:string-join ((?string1, ?string2, ...), ?separator)
@@ -83,7 +80,7 @@ fn:string-join ((?string1, ?string2, ...), ?separator)
Takes a variable number of arguments and returns a string concatenation
using `?separator` to join all elements.
-### `fn:replace`
+## `fn:replace`
```SPARQL
fn:replace (?string, ?regex, ?replacement)
@@ -111,7 +108,7 @@ If `?flags` contains the character `“i”`, search is caseless.
If `?flags` contains the character `“x”`, `?regex` is
interpreted to be an extended regular expression.
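+
+A minimal sketch, assuming the Nepomuk `nie:title` property:
+
+```SPARQL
+# Replaces every "a" in each title with "o".
+SELECT ?u (fn:replace(?title, "a", "o") AS ?replaced) {
+  ?u nie:title ?title
+}
+```
+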
-### `tracker:case-fold`
+## `tracker:case-fold`
```SPARQL
tracker:case-fold (?string)
@@ -119,7 +116,7 @@ tracker:case-fold (?string)
Converts a string into a form that is independent of case.
-### `tracker:title-order`
+## `tracker:title-order`
```SPARQL
tracker:title-order (?string)
@@ -128,7 +125,7 @@ tracker:title-order (?string)
Manipulates a string to remove leading articles for sorting
purposes, e.g. “Wall, The”. Best used in `ORDER BY` clauses.
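+
+A minimal sketch, assuming the Nepomuk `nmm:MusicPiece` class and
+`nie:title` property:
+
+```SPARQL
+# Sorts titles while ignoring leading articles, e.g. "The Wall"
+# sorts under "W".
+SELECT ?title {
+  ?song a nmm:MusicPiece ;
+        nie:title ?title
+}
+ORDER BY tracker:title-order(?title)
+```
+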
-### `tracker:ascii-lower-case`
+## `tracker:ascii-lower-case`
```SPARQL
tracker:ascii-lower-case (?string)
@@ -136,7 +133,7 @@ tracker:ascii-lower-case (?string)
Converts an ASCII string to lowercase, equivalent to `LCASE`.
-### `tracker:normalize`
+## `tracker:normalize`
```SPARQL
tracker:normalize (?string, ?option)
@@ -145,7 +142,7 @@ tracker:normalize (?string, ?option)
Normalizes `?string`. The `?option` string must be one of
`nfc`, `nfd`, `nfkc` or `nfkd`.
-### `tracker:unaccent`
+## `tracker:unaccent`
```SPARQL
tracker:unaccent (?string)
@@ -153,7 +150,7 @@ tracker:unaccent (?string)
Removes accents from a string.
-### `tracker:coalesce`
+## `tracker:coalesce`
```SPARQL
tracker:coalesce (?value1, ?value2, ..., ?valueN)
@@ -161,7 +158,7 @@ tracker:coalesce (?value1, ?value2, ..., ?valueN)
Picks the first non-null value. Equivalent to `COALESCE`.
-### `tracker:strip-punctuation`
+## `tracker:strip-punctuation`
```SPARQL
tracker:strip-punctuation (?string)
@@ -170,9 +167,9 @@ tracker:strip-punctuation (?string)
Removes any Unicode character which has the General
Category value of P (Punctuation) from the string.
-## DateTime functions
+# DateTime functions
-### `fn:year-from-dateTime`
+## `fn:year-from-dateTime`
```SPARQL
fn:year-from-dateTime (?date)
@@ -182,7 +179,7 @@ Returns the year from a `xsd:date` type, a `xsd:dateTime`
type, or an ISO8601 date string. This function is equivalent
to `YEAR`.
-### `fn:month-from-dateTime`
+## `fn:month-from-dateTime`
```SPARQL
fn:month-from-dateTime (?date)
@@ -191,7 +188,7 @@ fn:month-from-dateTime (?date)
Returns the month from a `xsd:date` type, a `xsd:dateTime`
type, or an ISO8601 date string. This function is equivalent to `MONTH`.
-### `fn:day-from-dateTime`
+## `fn:day-from-dateTime`
```SPARQL
fn:day-from-dateTime (?date)
@@ -200,7 +197,7 @@ fn:day-from-dateTime (?date)
Returns the day from a `xsd:date` type, a `xsd:dateTime`
type, or an ISO8601 date string. This function is equivalent to `DAY`.
-### `fn:hours-from-dateTime`
+## `fn:hours-from-dateTime`
```SPARQL
fn:hours-from-dateTime (?date)
@@ -209,7 +206,7 @@ fn:hours-from-dateTime (?date)
Returns the hours from a `xsd:dateTime` type or an ISO8601
datetime string. This function is equivalent to `HOURS`.
-### `fn:minutes-from-dateTime`
+## `fn:minutes-from-dateTime`
```SPARQL
fn:minutes-from-dateTime (?date)
@@ -219,7 +216,7 @@ Returns the minutes from a `xsd:dateTime` type
or an ISO8601 datetime string. This function is equivalent to
`MINUTES`.
-### `fn:seconds-from-dateTime`
+## `fn:seconds-from-dateTime`
```SPARQL
fn:seconds-from-dateTime (?date)
@@ -229,7 +226,7 @@ Returns the seconds from a `xsd:dateTime` type
or an ISO8601 datetime string. This function is equivalent to
`SECONDS`.
-### `fn:timezone-from-dateTime`
+## `fn:timezone-from-dateTime`
```SPARQL
fn:timezone-from-dateTime (?date)
@@ -240,7 +237,7 @@ not equivalent to `TIMEZONE` or `TZ`.
-## Full-text search functions
+# Full-text search functions
-### `fts:rank`
+## `fts:rank`
```SPARQL
fts:rank (?match)
@@ -249,7 +246,7 @@ fts:rank (?match)
Returns the rank of a full-text search match. Must be
used in conjunction with `fts:match`.
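+
+A minimal sketch combining it with `fts:match`:
+
+```SPARQL
+# Full-text matches, sorted by their match rank.
+SELECT ?u {
+  ?u fts:match "banana"
+}
+ORDER BY fts:rank(?u)
+```
+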
-### `fts:offsets`
+## `fts:offsets`
```SPARQL
fts:offsets (?match)
@@ -263,7 +260,7 @@ string has the format:
prefix:property:offset prefix:property:offset prefix:property:offset
```
-### `fts:snippet`
+## `fts:snippet`
```SPARQL
fts:snippet (?match)
@@ -284,9 +281,9 @@ the string used to separate distant matches in the snippet string.
The `?numTokens` parameter specifies the number
of tokens the returned string should contain at most.
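+
+A minimal sketch using only the first argument:
+
+```SPARQL
+# Returns a snippet of the matched text for every full-text match.
+SELECT ?u (fts:snippet(?u) AS ?snippet) {
+  ?u fts:match "banana"
+}
+```
+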
-## URI functions
+# URI functions
-### `tracker:uri-is-parent`
+## `tracker:uri-is-parent`
```SPARQL
tracker:uri-is-parent (?parent, ?uri)
@@ -295,7 +292,7 @@ tracker:uri-is-parent (?parent, ?uri)
Returns a boolean value expressing whether
`?parent` is a parent of `?uri`.
-### `tracker:uri-is-descendant`
+## `tracker:uri-is-descendant`
```SPARQL
tracker:uri-is-descendant (?uri1, ?uri2, ..., ?uriN, ?child)
@@ -305,7 +302,7 @@ Returns a boolean value expressing whether one of the
given URIs is a parent (direct or indirect) of
`?child`.
-### `tracker:string-from-filename`
+## `tracker:string-from-filename`
```SPARQL
tracker:string-from-filename (?filename)
@@ -313,9 +310,9 @@ tracker:string-from-filename (?filename)
Returns a UTF-8 string from a filename.
-## Geolocation functions
+# Geolocation functions
-### `tracker:cartesian-distance`
+## `tracker:cartesian-distance`
```SPARQL
tracker:cartesian-distance (?lat1, ?lat2, ?lon1, ?lon2)
@@ -324,7 +321,7 @@ tracker:cartesian-distance (?lat1, ?lat2, ?lon1, ?lon2)
Calculates the Cartesian distance between 2 points expressed
by `?lat1 / ?lon1` and `?lat2 / ?lon2`.
-### `tracker:haversine-distance`
+## `tracker:haversine-distance`
```SPARQL
tracker:haversine-distance (?lat1, ?lat2, ?lon1, ?lon2)
@@ -333,9 +330,9 @@ tracker:haversine-distance (?lat1, ?lat2, ?lon1, ?lon2)
Calculates the haversine distance between 2 points expressed
by `?lat1 / ?lon1` and `?lat2 / ?lon2`.
-## Identification functions
+# Identification functions
-### `tracker:id`
+## `tracker:id`
```SPARQL
tracker:id (?urn)
@@ -344,7 +341,7 @@ tracker:id (?urn)
Returns the internal ID corresponding to a URN.
Its inverse operation is `tracker:uri`.
-### `tracker:uri`
+## `tracker:uri`
```SPARQL
tracker:uri (?id)
diff --git a/docs/reference/libtracker-sparql/tutorial.md b/docs/reference/libtracker-sparql/tutorial.md
index b725c6dce..570046648 100644
--- a/docs/reference/libtracker-sparql/tutorial.md
+++ b/docs/reference/libtracker-sparql/tutorial.md
@@ -1,32 +1,32 @@
----
-title: SPARQL Tutorial
-short-description: SPARQL Tutorial
+Title: SPARQL Tutorial
+Slug: sparql-tutorial
-...
-# SPARQL Tutorial
+This document aims to introduce you to RDF and SPARQL from the ground
+up, to the point where SPARQL queries will become familiar and approachable
+to reason about.
-This tutorial aims to introduce you to RDF and SPARQL from the ground
-up. All examples come from the Nepomuk ontology, and even though
+Different RDF triple stores may have different data layouts. All examples
+in this tutorial come from the Nepomuk ontology, and even though
the tutorial aims to be generic enough, it mentions things
-specific to Tracker, those are clearly spelled out.
+specific to Tracker. Those are clearly spelled out.
If you are reading this tutorial, you might also have Tracker installed
on your system. If that is the case, you can, for example, start a fresh
empty SPARQL service for local testing:
```bash
-$ tracker3 endpoint --dbus-service a.b.c --ontology nepomuk
+$ tracker3 endpoint --dbus-service org.example.Endpoint --ontology nepomuk
```
The queries can be run in this specific service with:
```bash
-$ tracker3 sparql --dbus-service a.b.c --query $SPARQL_QUERY
+$ tracker3 sparql --dbus-service org.example.Endpoint --query $SPARQL_QUERY
```
## RDF Triples
-RDF data define a graph, composed by vertices and edges. This graph is
+RDF data defines a graph, composed of vertices and edges. This graph is
directed, because edges point from one vertex to another, and it is
labeled, as those edges have a name. The unit of data in RDF is a
triple of the form:
@@ -316,7 +317,7 @@ ASK {
Sadly, not everything in the world can be trivially mapped to
a URI. As an aid, Tracker offers helpers to generate URIs based
-on UUIDv4 identifiers like [](tracker_sparql_get_uuid_urn),
+on UUIDv4 identifiers like [func@Tracker.sparql_get_uuid_urn].
These generated strings are typically called URNs.
The `BASE` keyword allows setting a common prefix for all URIs
@@ -701,8 +702,8 @@ SELECT ?song ?value {
```
To learn more about how ontologies are done, read the documentation about
-[defining ontologies](ontologies.md). Tracker also provides a stock
-[Nepomuk](nepomuk.md) ontology, ready for use.
+[defining ontologies](ontologies.html#creating-custom-ontologies). Tracker also provides a stock
+[Nepomuk](ontologies.html#nepomuk) ontology, ready for use.
## Inserting data
@@ -856,8 +857,8 @@ Where any second insert would be redundantly attempting to add the same
triple to the store.
By default, Tracker deviates from the SPARQL standard in the handling
-of blank nodes, these are considered a generator of URIs. The
-[](TRACKER_SPARQL_CONNECTION_FLAGS_ANONYMOUS_BNODES) flag may be used to
+of blank nodes; these are considered a generator of URIs.
+The #TRACKER_SPARQL_CONNECTION_FLAGS_ANONYMOUS_BNODES flag may be used to
make Tracker honor the SPARQL 1.1 standard with those. The standard
defines blank nodes as truly anonymous; you can only use them to determine
that there is something that matches the graph pattern you defined. The
@@ -871,7 +872,7 @@ SELECT ?u {
```
Tracker by default will provide you with URNs that can be fed into other
-SPARQL queries as URIs. With [](TRACKER_SPARQL_CONNECTION_FLAGS_ANONYMOUS_BNODES)
+SPARQL queries as URIs. With #TRACKER_SPARQL_CONNECTION_FLAGS_ANONYMOUS_BNODES
enabled, the returned elements will be temporary names that can only be used to
determine the existence of a distinct match. There, blank nodes can match named
nodes, but named nodes do not match with blank nodes.
@@ -1072,8 +1073,7 @@ to query information in RDF data graphs) very thoroughly, it also has some
unusual features that make it able to scale from small private databases
to large distributed ones.
-This is not all that there is, and perhaps the "everything is a graph" mindset
-takes a while to think intuitively about to anyone with a background in
-relational databases. The purpose that this tutorial hopefully achieved is
-that SPARQL queries will now look familiar, and became approachable to reason
-about.
+Perhaps the "everything is a graph" mindset takes a while to become intuitive
+for anyone with a background in relational databases. As with any complex
+language, mastery requires dedication. This is probably just the beginning of a
+journey.