Because multiple clients can write to the journal concurrently, it is
possible to see partially written journal entries. Emitting a
g_warning() here breaks test cases.
The real fix would be safe concurrent access to the journal, but
that is harder.
https://bugzilla.gnome.org/700785

This patch ensures that we safely write all data from the journals to
the metatrees on exit, e.g. if anything happens to the session bus or
we get replaced by another instance.
https://bugzilla.gnome.org/show_bug.cgi?id=637095

Since the journal has moved to non-volatile storage, flush more often
to minimize the chance of data loss.
https://bugzilla.gnome.org/show_bug.cgi?id=637095

This essentially moves is_on_nfs() from metatree.c into metabuilder.c,
the more appropriate place for shared functions. It is used in
meta_builder_get_journal_filename() to determine whether to use the original
metadata directory or a temporary $XDG_RUNTIME_DIR location to work around
certain NFS issues.
The idea behind this change is to have a separate journal for every client
that accesses a shared homedir. The only remaining point of conflict is
then rotation, which is backed by an atomic file rename. Without this,
multiple metadata daemons wrote to the same journal file, overwriting
each other's changes and racing on flush and rotation. There will always
be some conflict between clients overwriting tree file data when flushing
their journals.
https://bugzilla.gnome.org/show_bug.cgi?id=637095
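
A minimal sketch of that placement decision, in plain C. is_on_nfs() is stubbed out here and the naming scheme is an assumption for illustration, not the actual gvfs layout:

```c
#include <stdio.h>
#include <stdlib.h>

/* Stub: the real check inspects the filesystem type of the path. */
static int
is_on_nfs (const char *path)
{
  (void) path;
  return 0;
}

/* Choose where the journal for a given tree file lives; caller frees. */
static char *
get_journal_filename (const char *metadata_dir, const char *treefile)
{
  const char *runtime_dir = getenv ("XDG_RUNTIME_DIR");
  char *out = malloc (4096);

  if (is_on_nfs (metadata_dir) && runtime_dir != NULL)
    /* Per-client journal on local storage: no shared mmapped writes. */
    snprintf (out, 4096, "%s/gvfs-metadata/%s.log", runtime_dir, treefile);
  else
    snprintf (out, 4096, "%s/%s.log", metadata_dir, treefile);
  return out;
}
```

With the NFS check stubbed to false, the journal lands next to the tree file in the metadata directory.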

With concurrent access by multiple daemons, there may be a moment when
the tree file exists but the journal file does not. The daemon cannot
write anything without a journal and is stuck until the next rotation.
A missing journal file can also result from a system crash, etc.
This patch creates a new journal file only when it does not exist;
other errors still make it impossible to store anything.
This will also allow us to keep the journal file somewhere else, e.g. on
non-persistent storage.
https://bugzilla.gnome.org/show_bug.cgi?id=637095
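
A sketch of that policy in plain POSIX terms (the function name is hypothetical; the real code lives in the metadata daemon): only a *missing* journal is recreated, while any other open failure still means nothing can be stored.

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

static int
open_journal (const char *path)
{
  int fd = open (path, O_RDWR);
  if (fd >= 0)
    return fd;                 /* journal exists: use it */
  if (errno != ENOENT)
    return -1;                 /* other errors still prevent storing */

  /* Missing journal (crash, concurrent rotation): create a fresh one.
   * O_EXCL avoids clobbering one created by a racing daemon. */
  fd = open (path, O_RDWR | O_CREAT | O_EXCL, 0600);
  if (fd < 0 && errno == EEXIST)
    fd = open (path, O_RDWR);  /* another daemon won the race; reuse */
  return fd;
}
```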

In the shared NFS homedir case, with multiple clients writing to the same
mmapped journal file, data can easily become corrupted. On flush, the
daemon iterates over the journal file, taking the variable entry size
into account and advancing according to the data read.
However, in certain cases invalid data is read, making us jump out of
bounds. With a zero entry size we would stay at the same position,
leading to an infinite loop.
This patch checks that the indicated entry size is at least the size of
the structure the size is read from (it is its first member) and breaks
the iteration if it is not. This may lead to partial data loss on flush,
as we do not process the rest of the journal file. Old data from the
existing tree file is preserved, of course; only a few recent changes
would be lost.
https://bugzilla.gnome.org/show_bug.cgi?id=637095
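
An illustrative model of that flush-time walk. The entry layout here (a leading 32-bit size field) is an assumption for the sketch, not the real gvfs journal format:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef struct {
  uint32_t entry_size;   /* total size of the entry, header included */
  /* payload follows */
} JournalEntry;

/* Count entries in [data, data + len). An entry_size smaller than the
 * header itself (e.g. 0 from a torn concurrent write) would keep the
 * cursor in place or jump out of bounds, so we stop iterating instead
 * and sacrifice the tail of the journal. */
static size_t
count_valid_entries (const uint8_t *data, size_t len)
{
  size_t pos = 0, count = 0;

  while (pos + sizeof (JournalEntry) <= len)
    {
      uint32_t entry_size;

      memcpy (&entry_size, data + pos, sizeof entry_size);
      if (entry_size < sizeof (JournalEntry) || entry_size > len - pos)
        break;   /* corrupt entry: bail out instead of looping forever */
      pos += entry_size;
      count++;
    }
  return count;
}
```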

Once we flush the journal and write a new tree file, we need to re-read
it to refresh the internal data structures (and mmap data from the right
file). We originally left this work to meta_tree_refresh_locked() and
meta_tree_needs_rereading(), respectively, where we checked the rotated
bit.
In detail, the metabuilder wrote a new temporary tree file, then explicitly
opened the current (old) one, set its rotated bit, and atomically replaced
it with the temporary file. The metadata daemon, which had mmapped the old
file, detected the rotated bit and scheduled a journal and tree file
reopen and re-read.
However, in a concurrent environment like an NFS homedir, where multiple
metadata daemons handle the same database, we may hit a race and fail to
detect the rotated bit.
This led to an infinite loop between meta_journal_add_entry(),
meta_tree_flush_locked(), meta_tree_refresh_locked() and back to
meta_journal_add_entry(): the journal was full, we did not detect the
rotation, and since the files were already unlinked, nothing could break
the loop. This patch forces a tree file re-read after a successful flush
to prevent this issue.
https://bugzilla.gnome.org/show_bug.cgi?id=637095
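
A sketch of the replace-by-rename step in that rotation (the function name is hypothetical). rename() is atomic, so readers see either the complete old file or the complete new one; a daemon still mmapping the old inode keeps seeing stale data, which is why the flush path must re-read unconditionally:

```c
#include <stdio.h>

static int
rotate_tree_file (const char *path, const char *new_contents)
{
  char tmp[4096];
  FILE *f;

  snprintf (tmp, sizeof tmp, "%s.tmp", path);
  f = fopen (tmp, "w");
  if (f == NULL)
    return -1;
  fputs (new_contents, f);
  fclose (f);

  return rename (tmp, path);   /* atomically replace the tree file */
}
```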

https://bugzilla.gnome.org/show_bug.cgi?id=511802

https://bugzilla.gnome.org/show_bug.cgi?id=511802

The add_cache_entry helper's signature said it returned the CacheEntry,
but it does not.
https://bugzilla.gnome.org/show_bug.cgi?id=699424

The gvfs-move tool doesn't have a --preserve option.

Add support for the new icon serialisation interface to GVfsIcon, as well
as implementing the new interface on GVfsClass for deserialisation.
https://bugzilla.gnome.org/show_bug.cgi?id=688820

The last name I chose was deprecated. What fun.
https://bugzilla.gnome.org/show_bug.cgi?id=698174

I was previously using an Ubuntu-specific alias.
https://bugzilla.gnome.org/show_bug.cgi?id=698174

https://bugzilla.gnome.org/show_bug.cgi?id=511802

As with do_push, we need to explicitly detect the OVERWRITE case and
handle it appropriately.

The trivial fix didn't work because there is now a circular dependency
between the gvfsdaemon.h and gvfsbackend.h headers. Break this by
creating the standard "types.h".
https://bugzilla.gnome.org/show_bug.cgi?id=511802

Only close channels associated with that backend, as a single daemon may
handle multiple mounts/backends.
https://bugzilla.gnome.org/show_bug.cgi?id=511802

Make sure we never read more than one page unless more was requested on
the first read. This helps e.g. sniffing and GStreamer (which reads in
4k blocks with seeks in between).
Also, further limit the maximum request size, because 512k seems
ridiculous.

We were constantly adding extra readahead operations that were not really
needed. A single readahead is enough to keep the read operations
pipelined (see comment). Also, avoid readahead for the first read
to handle random-access I/O better.
https://bugzilla.gnome.org/show_bug.cgi?id=697289
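
A toy model of that pipelining policy; the state machine is illustrative, not the actual gvfs channel code. At most one readahead is ever outstanding, and none is issued for the first read, so random-access readers don't pay for data they will skip:

```c
#include <stdbool.h>

typedef struct {
  bool first_read_done;
  bool readahead_outstanding;
  int  readaheads_issued;   /* counter, just for inspection */
} ReadState;

static void
on_read_completed (ReadState *s)
{
  if (!s->first_read_done)
    {
      s->first_read_done = true;   /* no readahead for the first read */
      return;
    }
  if (!s->readahead_outstanding)
    {
      s->readahead_outstanding = true;   /* keep exactly one in flight */
      s->readaheads_issued++;
    }
}

static void
on_readahead_completed (ReadState *s)
{
  s->readahead_outstanding = false;
}
```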

This fixes a warning and a potentially useless call of the do() method in
a thread.

We might get replies for old, cancelled operations, which we need to
ignore.
https://bugzilla.gnome.org/show_bug.cgi?id=675181
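
A minimal sketch of that rule: a reply is only acted upon if its serial matches the currently outstanding request, so late replies from cancelled operations are silently dropped. The single-outstanding-request model is a simplification for illustration:

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
  uint32_t outstanding_serial;
  bool     have_outstanding;
} Conn;

static bool
handle_reply (Conn *c, uint32_t serial)
{
  if (!c->have_outstanding || serial != c->outstanding_serial)
    return false;   /* stale reply from a cancelled op: ignore it */
  c->have_outstanding = false;
  return true;
}
```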

We put a channel request on the output buffer and start writing, but
if the write is cancelled on the first call (i.e. with no partial writes),
we abort immediately without ever writing the request.
However, in that case we also need to unqueue the request from the output
buffer, as otherwise it would be sent with the next operation. This
can be problematic for seeks, since the seek generation then falls out
of sync.
https://bugzilla.gnome.org/show_bug.cgi?id=675181
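
A toy output buffer illustrating that fix; the real channel code is more involved. queue_request() returns the position of the queued bytes so that a request cancelled before any partial write can be removed again:

```c
#include <stddef.h>
#include <string.h>

typedef struct {
  unsigned char buf[256];
  size_t        len;
} OutputBuffer;

static size_t
queue_request (OutputBuffer *ob, const unsigned char *req, size_t n)
{
  size_t start = ob->len;

  memcpy (ob->buf + ob->len, req, n);
  ob->len += n;
  return start;
}

/* On cancellation with no partial write, drop the request again;
 * otherwise it would leak into the next operation's byte stream and
 * desynchronise e.g. the seek generation. */
static void
unqueue_request (OutputBuffer *ob, size_t start)
{
  ob->len = start;
}
```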

The error handling in gvfschannel.c:start_queued_request(), when
there was an error creating the job or the request was cancelled,
caused problems. It did not set current_job, yet it called
g_vfs_channel_send_error(), which eventually resulted in a
call to send_reply_cb, which crashed because it assumed current_job
was set.
Also, not returning TRUE for started_job when we sent an error
is problematic, as we could then start the next job, leaving two
outstanding jobs on the same channel and mixing things up badly.
https://bugzilla.gnome.org/show_bug.cgi?id=675181

One of the 1.1.6-specific calls was missing its #ifdef guard.

When the process has too many open files, the g_vfs_channel_init()
call fails on socketpair() and the subsequent
g_vfs_channel_steal_remote_fd() call returns -1 for the fd.
Then g_unix_fd_list_append() hits an assertion and does not set the
error that we dereference afterwards.
This patch does not solve the shortage of free fds, and since glib
depends heavily on them, something else would fail anyway. We are just
fixing the segfault and returning a nicer error.
Based on a fix suggested by Stephen M. Webb.
https://bugzilla.gnome.org/show_bug.cgi?id=696713
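
A plain-POSIX sketch of that failure mode (the real code goes through g_vfs_channel_init() and g_unix_fd_list_append(); these function names are hypothetical): under fd exhaustion socketpair() fails, the "remote" fd is -1, and it must be rejected with a proper error instead of being handed to an API that merely asserts:

```c
#include <errno.h>
#include <sys/socket.h>
#include <unistd.h>

static int
steal_remote_fd (int *out_fd)
{
  int fds[2];

  if (socketpair (AF_UNIX, SOCK_STREAM, 0, fds) < 0)
    {
      *out_fd = -1;     /* what the caller previously passed on blindly */
      return -errno;
    }
  close (fds[0]);       /* keep only the remote end for this example */
  *out_fd = fds[1];
  return 0;
}

static const char *
send_fd_checked (int fd)
{
  if (fd < 0)
    return "error: no file descriptor available";  /* nicer than an assert */
  return "ok";
}
```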

https://bugzilla.gnome.org/show_bug.cgi?id=695834

Spotted by Serge Gavrilov <serge@pdmi.ras.ru>.
The GDBus port removed the need for us to initialise
libdbus' thread support, and commit
7d4bd61385cd56db5507ee31b14724244b637da4
removed the last call to it.
However, the obexftp backend still has not been fully ported
to GDBus and needs this threading support. Re-add it directly
in the backend.
https://bugzilla.gnome.org/show_bug.cgi?id=693574

Fixed one typo from the first fix and re-replaced one string. My
first choice had not really been translated (though it was present
in some of the existing translation files).

The last set of changes unnecessarily introduced new strings for
translation. This change replaces them with existing strings.

If a monitor is being cleaned up due to the backend disappearing,
we could see the monitor being finalized as a result of removing
a subscriber, leading to a segfault as it continues to access
its internal state.
https://bugzilla.gnome.org/show_bug.cgi?id=696479
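
A toy refcounted object illustrating the usual fix pattern for this kind of crash (the real monitor is a GObject; this plain-C model is an assumption, not the gvfs code). Removing a subscriber can drop the last reference, so a caller that still needs the object afterwards must hold its own temporary reference:

```c
#include <stdlib.h>

typedef struct {
  int ref_count;
  int n_subscribers;
} Monitor;

static Monitor *
monitor_ref (Monitor *m)
{
  m->ref_count++;
  return m;
}

static int
monitor_unref (Monitor *m)   /* returns 1 if the object was finalized */
{
  if (--m->ref_count == 0)
    {
      free (m);
      return 1;
    }
  return 0;
}

static void
monitor_remove_subscriber (Monitor *m)
{
  m->n_subscribers--;
  monitor_unref (m);   /* the subscriber held a reference */
}
```

The cleanup path then wraps the removal as monitor_ref(m); monitor_remove_subscriber(m); ...finish touching m...; monitor_unref(m); so finalization is deferred past the last access.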

If a file copy is requested without OVERWRITE but the destination
exists, return G_IO_ERROR_EXISTS. If OVERWRITE is requested, then
delete the destination before the push.
https://bugzilla.gnome.org/show_bug.cgi?id=696163
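
A plain-POSIX sketch of that destination handling (the real backend reports G_IO_ERROR_EXISTS through GIO; the function name here is hypothetical): without overwrite an existing destination is an error, and with overwrite it is deleted so the push can proceed:

```c
#include <errno.h>
#include <sys/stat.h>
#include <unistd.h>

static int
prepare_push_destination (const char *dest, int overwrite)
{
  struct stat st;

  if (stat (dest, &st) != 0)
    return errno == ENOENT ? 0 : -1;   /* a missing destination is fine */
  if (!overwrite)
    return -EEXIST;                    /* maps to G_IO_ERROR_EXISTS */
  return unlink (dest);                /* make room for the pushed file */
}
```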

If an object is reported as removed by the device, remove it from
the cache and emit a delete event for it.
https://bugzilla.gnome.org/show_bug.cgi?id=696016