This will need more attention when we bring in another virtual
directory backend; however, we've said it is acceptable for the
sandbox itself to access the underlying directory, and this is
the best fix in the meantime.
|
sandbox/_mount.py, sandbox/_sandboxbwrap.py:
Remove instances of get_directory
|
This removes _add_directory_to_tarfile since it is now implemented in
_filebaseddirectory.py.
|
buildstream/storage/Directory.py: New file.
buildstream/storage/_filebaseddirectory.py: New file.
buildstream/_exceptions.py: New VIRTUAL_FS exception source.
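
The storage layer itself is not reproduced in this log; as a rough illustration of the kind of
interface such a virtual directory abstraction provides, here is a minimal sketch. The class and
method names below are invented for illustration and do not necessarily match the real
buildstream/storage/Directory.py API.

```python
import os
import shutil
from abc import ABC, abstractmethod


# Hypothetical sketch only; names are illustrative, not BuildStream's API.
class Directory(ABC):

    @abstractmethod
    def descend(self, name, create=False):
        """Return a Directory object for the named subdirectory."""

    @abstractmethod
    def import_files(self, external_path):
        """Import files from a real filesystem path into this directory."""


class FileBasedDirectory(Directory):
    """Backend that wraps a plain directory on the local filesystem."""

    def __init__(self, external_directory):
        self.external_directory = external_directory

    def descend(self, name, create=False):
        path = os.path.join(self.external_directory, name)
        if create:
            os.makedirs(path, exist_ok=True)
        return FileBasedDirectory(path)

    def import_files(self, external_path):
        # Copy each entry from the external path into this directory.
        for entry in os.listdir(external_path):
            src = os.path.join(external_path, entry)
            dst = os.path.join(self.external_directory, entry)
            if os.path.isdir(src):
                shutil.copytree(src, dst)
            else:
                shutil.copy2(src, dst)
```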
|
This is to allow its use by subclasses.
Since access to get_directories is now blocked for some plugins,
and the subclasses of Sandbox do not have configuration defined
by YAML files, they need another way to get at the root directory.
|
magic_timestamp is moved into file scope so other classes
can use it.
|
Element paths should always be completed from the root element folder
defined by the element-path key in project.conf. Fix complete_path() to
always search into its given base_directory argument.
See issue BuildStream/buildstream#448
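
As a rough stand-in for the behaviour described (not the actual BuildStream frontend code, whose
signature differs), completion that always resolves against the given base directory could look
like this:

```python
import glob
import os


def complete_path(base_directory, incomplete):
    # Stand-in sketch: always resolve completions against base_directory
    # (e.g. the element-path folder), never against the current directory.
    pattern = os.path.join(base_directory, incomplete + "*")
    return sorted(os.path.relpath(match, base_directory)
                  for match in glob.glob(pattern))
```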
|
This allows the scheduler to move jobs from the current queue to the next.
As a result of this change, queues later than the build queue
must not skip a cached failure, so the logic is specialised to build queues only.
|
This flags up a failure and, if run in an interactive prompt,
permits the user to attempt a rebuild.
|
This creates an artifact when element assembly fails too;
if it's the right kind of exception, the now-included install directory
is used much as if assembly had returned successfully.
If there's a failure during install, the artifact contains any installed files,
but may contain nothing at all.
|
When we later add cached failures, it must not treat them as successes.
|
This just puts the metadata in place;
the code paths that add failed builds will come later.
|
Normally we'd only need it when scheduling a weakly cached build,
but to allow caching of failed builds we need to be able to distinguish
between cached successes and cached failures
for both strong and weak cache keys.
Allowing other cache lookup codepaths to look up via the weak key
requires changes throughout the call stack to consult which key to use,
and invalidation of the saved state when that key changes.
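
A stand-in sketch of the lookup order this implies (not the actual artifact cache code; the
metadata layout and function name are invented for illustration):

```python
def lookup_build_result(artifacts, strong_key, weak_key):
    """Return (cached, success), consulting the strong key first, then the weak key.

    'artifacts' maps cache keys to metadata dicts; a cached failure carries
    success=False, so it remains distinguishable from a cached success.
    """
    for key in (strong_key, weak_key):
        if key is not None and key in artifacts:
            return True, artifacts[key].get("success", True)
    return False, False
```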
|
Change widget.py print_summary() to only print the failure
messages of elements in the Failure Summary that failed on the
current try.
|
Use os.rename() to rename the cloned temporary repository into
place in the source cache, and issue a STATUS message when discarding
a duplicate clone, in the case where the same repository is cloned
twice in parallel.
The problem with using shutil.move() is that it will create the source
directory in a subdirectory of the destination when the destination
exists, so its behavior depends on whether the destination exists.
This shutil.move() behavior has so far hidden the race condition
where a duplicate repo is created in a subdirectory, as you need
to have three concurrent downloads of the same repo in order to
trigger the error.
This fixes issue #503
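
A minimal sketch of the difference, assuming a freshly cloned temporary directory and a
destination inside the source cache (the function and variable names are illustrative, not the
git plugin's actual code):

```python
import os
import shutil


def move_clone_into_place(tmpdir, mirror_dir):
    try:
        # os.rename() either renames the clone into place or fails; it never
        # nests tmpdir inside an already-existing mirror_dir the way
        # shutil.move() would.
        os.rename(tmpdir, mirror_dir)
    except OSError:
        if os.path.isdir(mirror_dir):
            # Another fetch of the same repository won the race: discard
            # our duplicate clone and keep the existing mirror.
            shutil.rmtree(tmpdir)
        else:
            raise
```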
|
Since we have now backported this to `bst-1.2`, the APIs were
introduced in 1.2 rather than 1.4.
|
This addresses issue #491.
When attempting to run buildstream with a configuration specifying
a cache quota larger than your available disk space, buildstream
will alert the user and exit.
Note:
This takes into consideration your current cache usage and
therefore restricts the overall size of your artifact cache folder.
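
A hedged sketch of the kind of check described, using only the standard library (the function
name and error type are illustrative and are not the actual BuildStream code):

```python
import os
import shutil


def assert_quota_fits(cache_dir, quota_bytes):
    # Free space on the volume plus what the cache already occupies is the
    # most the cache could ever grow to under the configured quota.
    free = shutil.disk_usage(cache_dir).free
    used_by_cache = sum(
        os.path.getsize(os.path.join(root, name))
        for root, _, names in os.walk(cache_dir)
        for name in names
    )
    if quota_bytes > free + used_by_cache:
        raise RuntimeError(
            "The configured cache quota ({} bytes) exceeds the space "
            "available for the artifact cache ({} bytes)"
            .format(quota_bytes, free + used_by_cache))
```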
|
When implementing fetching from mirrors, I encountered some problems
with the git source:
1. The mirror URL was using translate_url()'s output, so if a different
alias was used, then fetching from the mirror would go to a different
directory, and be inaccessible.
2. After fixing that, fetching was unable to pull from a URL other than
the one used at repository creation, meaning it wouldn't actually
pull from the mirror.
|
This fixes:
* Bzr repositories pulling from the branch they were created with.
* Bzr's _ensure_mirror() not actually checking that it successfully
mirrored the ref.
|
In user config (buildstream.conf), it is set with the "default-mirror"
field.
On the command-line, it is set with "--default-mirror".
|
**KLUDGE WARNING**: This involves making the source store its "meta"
object so that it's possible to create a copy of the source inside the
fetch queue, instead of back when the pipeline was being loaded.
This adds the SourceFetcher class, which is intended for sources that
fetch from multiple URLs (e.g. the git source and its submodules).
Fix when fetching
|
This is part of a later plan to implement mirroring without forcing
everyone to update their sources. We use the expected calls to
Source.translate_url() when running Source.configure() to extract the
aliases from the URL. Multiple aliases must be extracted because
sources exist that may fetch from multiple aliases (for example, git
submodules).
Later, we want to substitute another URI where the alias normally reads
from the project. We accomplish this by re-instantiating the Source
with the alias overrides passed as an argument to the constructor.
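
A self-contained stand-in for the idea (not BuildStream's actual implementation): every URL
translated during configure() has its alias recorded, and a later re-instantiation can supply
overrides for those aliases. The class, separator and URLs below are invented for illustration.

```python
URL_SEPARATOR = ":"


class MiniSource:
    """Toy model of a source that records which aliases it uses."""

    def __init__(self, project_aliases, alias_overrides=None):
        self._aliases = dict(project_aliases)
        self._overrides = alias_overrides or {}
        self.marked_aliases = set()

    def translate_url(self, url):
        alias, rest = url.split(URL_SEPARATOR, 1)
        self.marked_aliases.add(alias)  # remember every alias this source uses
        base = self._overrides.get(alias, self._aliases[alias])
        return base + rest


source = MiniSource({"upstream": "https://example.com/"})
print(source.translate_url("upstream:repo.git"))   # https://example.com/repo.git
print(source.marked_aliases)                       # {'upstream'}

# Re-instantiate with an override, e.g. to point the alias at a mirror.
mirrored = MiniSource({"upstream": "https://example.com/"},
                      alias_overrides={"upstream": "https://mirror.example.org/"})
print(mirrored.translate_url("upstream:repo.git")) # https://mirror.example.org/repo.git
```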
|
The separator is useful in source files other than _project.py.
|
This aims to fix issue #502.
|
Since we backported the temporary/permanent failures patch for #397
into the `bst-1.2` branch, we need to adjust the since versions in master
down to 1.2.
|
The git plugin will now make use of the fail_temporarily parameter
to Plugin.call(), allowing failures to trigger a retry.
|
Plugin.call() now takes fail_temporarily as an optional parameter;
when supplied, it causes subsequent failures to trigger temporary
errors as opposed to permanent errors.
This also extends Plugin.check_output(), which makes use of Plugin.call().
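
A self-contained stand-in for the described behaviour (not the real Plugin.call()
implementation; the error class here only mimics the temporary flag):

```python
import subprocess


class PluginError(Exception):
    """Stand-in error carrying the temporary flag described above."""

    def __init__(self, message, *, temporary=False):
        super().__init__(message)
        self.temporary = temporary


def call(args, *, fail=None, fail_temporarily=False):
    """Run a command; on failure raise a temporary or permanent error."""
    exit_code = subprocess.call(args)
    if fail is not None and exit_code != 0:
        # fail_temporarily turns the failure into a temporary (retryable)
        # error rather than a permanent one.
        raise PluginError(fail, temporary=fail_temporarily)
    return exit_code
```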
|
Further work needs to be done for the current grpc exceptions, which are reraised.
|
This follows the change in 67ecd97a05279a3b7570ad59f05bf0a5973ef04c.
|
job.py: Changes to the logic surrounding retry attempts and child process return codes.
element.py, source.py: ElementError and SourceError also implement this change.
These exceptions now have an optional temporary parameter, which defaults to False. This may break
backwards compatibility in places where exceptions were previously raised and a retry was intended.
To trigger a retry, one must now raise SourceError or ElementError with temporary=True.
This aims to fix #397.
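
For plugin authors, the pattern this change asks for looks roughly like the sketch below; the
plugin class, the _download helper and the url attribute are invented for illustration, and only
the temporary=True keyword is the point being shown.

```python
from buildstream import Source, SourceError


class ExampleSource(Source):
    """Illustrative fragment of a source plugin; the other mandatory
    methods of Source are omitted here."""

    def fetch(self):
        try:
            self._download(self.url)  # hypothetical helper
        except OSError as e:
            # Raise with temporary=True so the scheduler may retry the
            # fetch instead of treating it as a permanent failure.
            raise SourceError("Failed to fetch {}: {}".format(self.url, e),
                              temporary=True) from e
```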
|
When the user provides a path for the filename parameter, provide a reason
|
Add a plugin that supports downloading files verbatim from a source with
an optional overridable filename and destination directory. Bumps bst
format version to 10.
Fixes #163
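
The plugin itself is not reproduced in this log; as a rough stand-alone illustration of
"download a file verbatim with an optional overridable filename and destination directory",
using only the standard library (function and parameter names are invented):

```python
import os
import urllib.request


def download_verbatim(url, directory, filename=None):
    # The filename defaults to the last component of the URL but can be
    # overridden, as can the destination directory.
    filename = filename or os.path.basename(url)
    os.makedirs(directory, exist_ok=True)
    dest = os.path.join(directory, filename)
    urllib.request.urlretrieve(url, dest)
    return dest
```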
|
tests/frontend/workspace.py: Added tests
|
Because the RecursionError exception was introduced in Python 3.5, until we
drop support for Python 3.4, we must use RuntimeError.
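
A small self-contained illustration of why this works: RecursionError (3.5+) subclasses
RuntimeError, and on 3.4 hitting the recursion limit raises RuntimeError directly, so catching
(or raising) RuntimeError behaves the same on both versions.

```python
import sys


def recurse(n):
    # Unbounded recursion, guaranteed to hit the interpreter's limit.
    return recurse(n + 1)


try:
    recurse(0)
except RuntimeError as e:
    # On Python 3.5+ this is really a RecursionError, but RecursionError
    # subclasses RuntimeError, so this handler works on 3.4 as well.
    print("recursion limit reached: {}".format(e), file=sys.stderr)
```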
|
This addresses issue #501.