Tristan/fix cloned plugin ids 1.2
See merge request BuildStream/buildstream!1315

When cloning a Source, we should inherit the same unique ID, so that
any messages sent back to the frontend from a source cloned in a
child task carry a valid ID.

In the case of cloned Sources, they should not allocate a new ID in
track() and fetch(); in case they communicate their ID back to the
main process, they should inherit the same ID as the Source they
were cloned from.

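A minimal sketch of the idea, with hypothetical names (the real
Source/Plugin internals in BuildStream differ):

    # Hypothetical illustration only, not the actual BuildStream code.
    import itertools

    _ids = itertools.count(1)   # allocator for fresh plugin IDs

    class Source:
        def __init__(self, unique_id=None):
            # A clone created for a child task passes in the original's
            # unique_id and inherits it, instead of allocating a new one.
            self._unique_id = unique_id if unique_id is not None else next(_ids)

        def _clone(self):
            return Source(unique_id=self._unique_id)
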
This was always intended, but was not well commented. The reason we
start plugin ID counters at 1 is that we prefer relying on a falsy
value to determine whether an ID-holding variable has been set or not.

This patch also adds a more informative assertion in Plugin._lookup().

This by itself essentially fixes #1012

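A minimal sketch of the counter convention and the lookup assertion,
using a plain dictionary and hypothetical names rather than the
actual implementation:

    # Hypothetical sketch of the convention described above.
    _plugin_table = {}
    _next_id = 1   # start at 1 so an unset ID (0 or None) stays falsy

    def _register(plugin):
        global _next_id
        plugin._unique_id = _next_id
        _plugin_table[_next_id] = plugin
        _next_id += 1

    def _lookup(unique_id):
        assert unique_id in _plugin_table, \
            "Could not find plugin with ID {}".format(unique_id)
        return _plugin_table[unique_id]
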
Fix building the docs
See merge request BuildStream/buildstream!1302

Strings starting with a "%" character need to be quoted.

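A small illustration of the YAML rule in question, using PyYAML; the
key and value below are made up, and the docs fix itself only quotes
the offending string:

    import yaml

    # "%" is the YAML directive indicator, so a plain scalar may not
    # start with it; quoting the string makes it valid.
    yaml.safe_load('prefix: "%{install-root}"')     # fine
    try:
        yaml.safe_load('prefix: %{install-root}')   # scanner error
    except yaml.YAMLError as exc:
        print(exc)
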
Sphinx 1.7, released in February 2018, moved the sphinx.apidoc module
to sphinx.ext.apidoc, with an alias and a deprecation warning in
place so users know to port their code.

The compatibility alias was removed in Sphinx 2.0, so we need to move
to the new module name.

Fortunately, since the new module name is more than a year old, this
shouldn't break anything for anybody.

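For reference, a guarded import along these lines would support both
module locations during a transition (the commit itself simply
switches to the new name):

    try:
        from sphinx.ext import apidoc   # Sphinx >= 1.7
    except ImportError:
        from sphinx import apidoc       # older Sphinx, alias removed in 2.0
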
plugins/sources/git.py: Cope with rename returning error EEXIST
See merge request BuildStream/buildstream!1294

Thanks to Matthew Yates for pointing this out.

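A minimal sketch of the pattern the merge request title describes,
with assumed path names; the exact handling in plugins/sources/git.py
may differ:

    import errno
    import os

    def _move_mirror(tmpdir, mirror_dir):
        # tmpdir and mirror_dir are assumed names for illustration.
        try:
            os.rename(tmpdir, mirror_dir)
        except OSError as e:
            # Another process may have created the mirror first; an
            # already existing target is not an error for our purposes.
            if e.errno != errno.EEXIST:
                raise
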
Fix non strict push 1.2
See merge request BuildStream/buildstream!1289

This is a regression test for issue #990

Marking all elements as pulled in Stream.push() ensures that cache
keys are resolved before pushing elements; otherwise state is left
unresolved in non-strict mode while elements are waiting to download
an artifact by its strict cache key.

Fixes #990

Cache quota related backports
See merge request BuildStream/buildstream!1288

This needs to be added along with the status messages added to the
artifact cache, and this detail diverges from master.

This is because we have not bothered to backport !1071, which
refactors the CASCache to be a delegate object of the ArtifactCache
instead of a derived class - backporting !1071 would allow us to
remove these message handlers, because the CAS server and test
fixture only use the CASCache object directly, not the business
logic in the ArtifactCache.

This needs to be added along with the status messages added to the
artifact cache, and this detail diverges from master.

This is because we have not bothered to backport !1071, which
refactors the CASCache to be a delegate object of the ArtifactCache
instead of a derived class - backporting !1071 would allow us to
remove these message handlers, because the CAS server and test
fixture only use the CASCache object directly, not the business
logic in the ArtifactCache.

Updates the known cache size in the main process while the cleanup
process is ongoing, so that the status indicators update live while
the cleanup happens.

Added some useful status messages when:

  * Calculating a new artifact cache usage size
  * Starting a cleanup
  * Finishing a cleanup

Also enhanced messaging about what was cleaned up so far when
aborting a cleanup.

This also adds some comments around the main status bar heading
rendering function.

A frontend-facing API for obtaining usage statistics.

I would have put this on Stream instead, but the Context seems to be
the de facto place for looking up the artifact cache in general, so
let's put it here.

A simple object which creates a snapshot of current usage statistics
for easy reporting in the frontend.

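A rough sketch of what such a snapshot object and its Context
accessor could look like; the class, attribute and helper names here
are assumptions for illustration, not the actual BuildStream API:

    # Hypothetical names for illustration only.
    class ArtifactCacheUsage:
        def __init__(self, artifacts):
            # Snapshot the numbers once, so the frontend renders a
            # consistent view even while the cache keeps changing.
            self.quota_size = artifacts.get_quota()       # assumed helper
            self.used_size = artifacts.get_cache_size()   # assumed helper

    class Context:
        def get_artifact_cache_usage(self):
            return ArtifactCacheUsage(self.artifactcache)
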
This is not an error related to loading data, like a parse error in
the quota specification is, but a problem raised by the artifact
cache - this allows us to assert more specific machine-readable
errors in test cases (instead of checking the string in stderr, which
this patch also fixes).

This also removes a typo from the error message in said error.

  * tests/artifactcache/cache_size.py: Updated test case to expect
    the artifact error, which consequently changes the test case to
    properly assert a machine-readable error instead of asserting
    text in the stderr (which is the real, secret motivation behind
    this patch).

  * tests/artifactcache/expiry.py: Reworked test_invalid_cache_quota()
    to now expect the artifact error for the tests which check
    configurations that create caches too large to fit on the disk.

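A hedged sketch of the kind of assertion this enables in a test,
assuming the cli fixture from the test utilities; the error reason
string is an assumption, not taken from this branch:

    from buildstream._exceptions import ErrorDomain

    def test_invalid_cache_quota(cli, datafiles):
        project = str(datafiles)
        result = cli.run(project=project, args=['workspace', 'list'])
        # Assert a machine-readable error instead of grepping stderr;
        # the reason string below is illustrative only.
        result.assert_main_error(ErrorDomain.ARTIFACT, 'insufficient-storage-for-quota')
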
This will benefit from a better UtilError being raised, and turns the
artifact cache's local function into a one-liner.

The loop which finds the first existing directory in the given path
has been removed, being meaningless due to the call to os.makedirs()
in ArtifactCache.__init__().

The local function was renamed to _get_cache_volume_size() and no
longer takes any arguments, which is more suitable for the function
as it serves as a testing override surface for unittest.mock().

The following test cases which use the function to override the
ArtifactCache behavior have been updated to use the new overridable
function name:

  * tests/artifactcache/cache_size.py
  * tests/artifactcache/expiry.py

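A rough sketch of the override surface described above and of how a
test might patch it; the default cache location, return value and
module path are assumptions made for illustration:

    import os
    from unittest import mock

    # Assumed default location of the artifact cache.
    ARTIFACT_CACHE_DIR = os.path.expanduser('~/.cache/buildstream/artifacts')

    def _get_cache_volume_size():
        # Report total and available bytes for the volume holding the
        # artifact cache directory (assumed return shape).
        stat = os.statvfs(ARTIFACT_CACHE_DIR)
        return stat.f_blocks * stat.f_frsize, stat.f_bavail * stat.f_frsize

    # A free function with no arguments is trivial to replace in tests
    # (the patch target below is an assumed module path):
    with mock.patch('buildstream._artifactcache.artifactcache._get_cache_volume_size',
                    return_value=(10e9, 1e9)):
        pass
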
We can streamline this call to os.statvfs() in a few places.

The artifact cache emits messages, and we want to allow that in preflight.

Now that the platform is independent of the context, explicit
instantiation is no longer required. This avoids issues with platform
instances used across test cases with mismatching contexts.

The artifact cache is no longer platform-specific.

On systems without user namespace support, elements with unsupported
sandbox config (UID/GID) are now individually tainted, which disables
artifact push.

Unsupported sandbox config (UID/GID) is now reported by the element.

Backport bzr source plugin race condition to 1.2
See merge request BuildStream/buildstream!1287

This causes multiple source instances to interact with the same
backing data store at the same time, increasing the likelihood of
triggering issues around concurrent access.

This more reliably triggers issue #868

With get_element_state(), you need to invoke BuildStream once for
every element state you want to observe in a pipeline.

The new get_element_states() reports a dictionary with the element
states keyed by element name, and is better to use if you have more
than one element to observe the state of.

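A hedged sketch of how the new helper might be used in a test,
assuming the existing cli fixture; the element names and state
strings below are made up:

    def test_example(cli, datafiles):
        project = str(datafiles)
        states = cli.get_element_states(project, 'target.bst')
        assert states['target.bst'] == 'waiting'
        assert states['dependency.bst'] == 'cached'
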
Follow-up to the last commit, which uses exclusive locking to protect
bzr operations instead.

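A minimal sketch of the exclusive-locking approach, using a POSIX
advisory lock; the lock file name and context-manager shape are
assumptions, not necessarily what the plugin ends up doing:

    import fcntl
    import os
    from contextlib import contextmanager

    @contextmanager
    def _exclusive_lock(lockfile):
        # Hold an exclusive advisory lock so that only one source
        # instance operates on the shared bzr repository at a time.
        fd = os.open(lockfile, os.O_RDWR | os.O_CREAT, 0o644)
        try:
            fcntl.flock(fd, fcntl.LOCK_EX)
            yield
        finally:
            fcntl.flock(fd, fcntl.LOCK_UN)
            os.close(fd)

    # Usage: wrap every bzr invocation that touches the shared repository.
    # with _exclusive_lock(os.path.join(mirror_dir, 'lock')):
    #     ...
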
This patch by itself fixes #868