Commit message | Author | Age | Files | Lines
Dropping that possibly fixes coalescing of URNs and datetimes with
other types transparently handled by SQLite. E.g. this broke the
"tracker search" CLI subcommand, which uses:
SELECT COALESCE(nie:url(?u), ?u) ...
This resulted in "tracker search foo" returning "(null)" as the
file URI/URN.
We must translate each Expression to a string so that the URL string
is correctly coalesced with the URN string. This is the expected
result, and the previous behavior was a regression compared to the
older parser.
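As an illustrative sketch (not Tracker code; the table and column names are invented here), this is the SQLite-level issue the commit describes: COALESCE() across branches of different affinities produces mixed-type results, while casting every branch to TEXT keeps the output uniformly string-typed.

```python
import sqlite3

# Hypothetical schema standing in for a URL column and an integer ROWID.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE resource (id INTEGER, url TEXT)")
conn.execute("INSERT INTO resource VALUES (1, NULL), (2, 'file:///tmp/foo')")

# Without a cast, the NULL url falls back to the integer id:
mixed = conn.execute(
    "SELECT COALESCE(url, id) FROM resource ORDER BY id"
).fetchall()
# mixed -> [(1,), ('file:///tmp/foo',)] — an int mixed with a string

# Casting each branch to TEXT coalesces string with string:
uniform = conn.execute(
    "SELECT COALESCE(url, CAST(id AS TEXT)) FROM resource ORDER BY id"
).fetchall()
# uniform -> [('1',), ('file:///tmp/foo',)]
```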
This results in e.g. URN strings or ISO8601 dates being passed to
those functions if used in one, instead of ROWIDs or timestamps.
It's unclear how that should behave as per the spec; this is,
however, what the old parser did, so it's better to stick to the
same implementation-defined behavior.
The Expression element may be used in several places (e.g. ArgList or
ExpressionList in functions) that may require it to be converted
to a string; add a builtin toggle in the state so this can be done
easily.
We must propagate the original types, despite converting the
resultset to string in the topmost select. Fixes warnings in the
bus backend (seen with several flatpak apps), since it's not as
lenient as the direct one wrt fetching values with the "wrong"
vfunc.
The data != null check in get_string() really triggers when
fetching values on a finished or not-yet-started cursor, so make
it clearer to reason about.
This is of course a programming error.
The documentation says "NULL is returned if column is not between
[0,n_columns]", but says nothing about it being a programming error.
We were missing it in the "SELECT ?a AS ?b ..." case, breaking
those types that require a conversion to string when exposed
through a cursor (resource, and presumably date/datetime).
Unbound variables are unexpected/meaningless here; the spec says
nothing about raising errors, though, and other SPARQL engines seem
to agree on it being a no-op. So just go with that and avoid
the crash.
The blank node map must be set up there, both for the URN storage
aspect and the GVariant generation one.
Closes: https://gitlab.gnome.org/GNOME/tracker/issues/56
Commit c58f7aa419 late in wip/carlosg/sparql-parser-ng wrongly made this
dependent on the query being an update_blank() one (i.e. we need to
generate a GVariant with blank node results to give back). This actually
defeated the path where we generate unique URNs for blank nodes on inserts,
resulting in simple urns like <1> being generated.
The SPARQL protocol is supposedly case insensitive, and
TrackerResource uses "TRUE"/"FALSE" for boolean strings. We must
use caseless comparison or we get false negatives.
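A minimal sketch of the caseless parse described above (a hypothetical helper, not Tracker's actual code): fold case before comparing, so "TRUE", "true", and "True" all match.

```python
def parse_boolean(literal: str) -> bool:
    """Caseless boolean parse: SPARQL booleans may arrive lowercase,
    while TrackerResource serializes 'TRUE'/'FALSE'."""
    folded = literal.casefold()
    if folded == "true":
        return True
    if folded == "false":
        return False
    raise ValueError(f"not a boolean literal: {literal!r}")
```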
The SPARQL was referring to a non-existent variable, which the
older parser used to ignore.
If the right conditions apply, tracker-store will shut down
after 30s of inactivity (no clients doing updates/selects).
Bringing it back again is relatively cheap, so let's see how
this flies.
For the cases where it won't, tracker-store has a --disable-shutdown
switch (also useful for testing from a terminal); running on buses
other than the session one will also disable it, since both
shutting down and a later restart pose questions and risks.
In theory, this will make tracker-store disappear 99% of the
time, since database updates are sparse. There's also the
possibility of clients running with TRACKER_SPARQL_BACKEND=bus
or resorting to the bus connection (e.g. flatpak apps), which will
make selects go through tracker-store.
This no longer makes sense with tracker-store's automatic shutdown;
assume it is activatable through D-Bus and keep the miner running in
that situation.
Avoid possibly raising tracker-store when initializing a bus
connection. This can be done in a delayed manner (e.g. when some
other D-Bus message is sent its way).
There is one situation where this is necessary, though: the database
may not exist yet. In that case we must poke tracker-store before
setting up the direct connection. Detect those cases by handling
direct connection initialization errors.
Since we no longer pass the string directly, we must also avoid
escaping the strftime format modifiers. They are no longer at
risk of being mistaken for printf ones.
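To illustrate the distinction (a sqlite3 sketch, not Tracker code): a format string handed straight to SQLite uses single '%' modifiers; only a format string routed through a printf-style formatter first would need each '%' doubled to '%%'.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Passed directly to SQLite, strftime modifiers use a single '%':
(year,) = conn.execute("SELECT strftime('%Y', '2019-02-01')").fetchone()
# year == '2019'
# Had this string gone through printf-style formatting beforehand,
# it would have had to be written as '%%Y' to survive that pass.
```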
Recent GLib changed the hash table hashing, thus changing the order
in which contents are iterated by a GHashTableIter. Avoid relying on
that order in the one place we used to (this also used to happen in
the previous Vala code).
Inverse and sequence paths are tested thus far.
There's now partial support of property paths, so better be specific.
"?a :foo/:bar ?b" is equivalent to "?a :foo ?gen . ?gen :bar ?b";
have the parser create those generated variables before processing
the current predicate property.
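The expansion above can be sketched as a small helper (an illustrative Python sketch; the function and generated-variable names are invented here, not the parser's): each intermediate step of the sequence path gets a fresh generated variable chaining one triple pattern to the next.

```python
from itertools import count

def expand_sequence_path(subject, predicates, obj, _gen=count()):
    """Expand a sequence path like ':foo/:bar' into basic triple
    patterns, e.g. '?a :foo/:bar ?b' -> '?a :foo ?g0 . ?g0 :bar ?b'."""
    triples, current = [], subject
    for pred in predicates[:-1]:
        nxt = f"?_path{next(_gen)}"   # generated intermediate variable
        triples.append((current, pred, nxt))
        current = nxt
    triples.append((current, predicates[-1], obj))
    return triples
```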
"?a ^:foo ?b" is equivalent to "?b :foo ?a"; invert the
subject/object in order to handle this.
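The inversion is a plain swap; a one-function sketch (illustrative, not the parser's actual code):

```python
def apply_inverse_path(subject, predicate, obj):
    """Handle an inverse path: '?a ^:foo ?b' becomes '?b :foo ?a',
    i.e. subject and object swap around the stripped predicate."""
    assert predicate.startswith("^"), "not an inverse path"
    return (obj, predicate[1:], subject)
```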
Property paths may introduce intermediate anonymous resources, or
shuffle subject/object. We still do the bulk of the job while parsing
the predicate, so prepare for the predicate being pre-filled, and
the Path grammar element to alter the next parsed token.
This query form was broken in the previous parser, add a test for it
now that it is supported.
Some of those combinations were broken in the previous parser, now
that they are handled properly, add tests for them.
This should eventually be implemented in the bus backend as well, but not
yet.
This uses an internal TrackerSparql to hold the query.
This object can hold a long lived query, in which parameters may be changed
prior to execution.
Those relate to PARAMETERIZED_VAR, and allow binding values through it at
query preparation time.
These variables (with "~var" syntax) will be bound through API,
providing decent protection against injections. They can be used
in every place a boolean/numeric/string literal is allowed.
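The injection-safety idea is the same one prepared-statement parameters provide at the SQL level; as an analogy only (sqlite3 named parameters, not the Tracker "~var" API), binding keeps the value out of the query text entirely:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE song (title TEXT)")
conn.execute("INSERT INTO song VALUES ('foo'), ('bar')")

# The value is bound, never spliced into the query string, so it
# cannot alter the query's structure — the role '~var' plays in SPARQL.
rows = conn.execute(
    "SELECT title FROM song WHERE title = :t", {"t": "foo"}
).fetchall()
# rows == [('foo',)]
```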
This function takes a generic GValue and uses the right sqlite3_bind*
function underneath, or transforms the value to a string.
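A rough Python analogue of that dispatch (names invented here; the real code works on GValue types and the sqlite3_bind* family): pick a native binding by type, and fall back to a string conversion for everything else.

```python
def bind_value(params: list, value) -> None:
    """Append 'value' to the parameter list in bind-ready form,
    choosing the representation by type (str() as the fallback)."""
    if isinstance(value, bool):
        params.append(int(value))   # booleans bound as integers
    elif isinstance(value, (int, float)):
        params.append(value)        # native numeric bind
    else:
        params.append(str(value))   # everything else as a string
```

Note the bool check must precede the int check, since in Python (as with GValue fundamental types) a boolean would otherwise match the integer branch.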
From now on, TrackerSparql will be used for handling SPARQL queries.
Once we are past the variable limit (currently hardcoded to 999,
matching SQLite's limit), resort to appending literals in the SQL
directly. This used to happen in the older parser, and unbreaks
02-sparql-bugs, which tests this.
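The fallback can be sketched as follows (an illustrative sketch with invented names, and deliberately minimal escaping; the real code handles full SQL literal quoting):

```python
SQLITE_MAX_VARS = 999  # the hardcoded limit referenced above

def add_literal(sql_parts: list, params: list, value) -> None:
    """Bind as a '?' parameter while under the limit, then fall back
    to inlining an escaped SQL string literal directly."""
    if len(params) < SQLITE_MAX_VARS:
        sql_parts.append("?")
        params.append(value)
    else:
        escaped = str(value).replace("'", "''")  # minimal escaping only
        sql_parts.append(f"'{escaped}'")
```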
This is a tracker extension that allows INSERT OR REPLACE to also delete
values.
In a quite nonstandard way, this function takes the list of strings
to join surrounded by parentheses, e.g. fn:string-join(('a', 'b'), '|').
Handle this by allowing nesting of ArgList for this case; it will
error out in every other case.
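For reference, the XPath fn:string-join semantics the syntax above expresses — the inner parentheses construct the sequence, the final argument is the separator — correspond to a plain join:

```python
# fn:string-join(('a', 'b'), '|') joins the sequence with the separator:
joined = "|".join(("a", "b"))
# joined == 'a|b'
```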
This is a Tracker extension present in the previous SPARQL parser.
This is a Tracker extension to SPARQL.