| Commit message | Author | Age | Files | Lines |
|
This is needed to permit access to the device nodes added to /dev
on Linux when FUSE is used as root.
The chroot sandbox only works with all privileges,
so there's no explicit check for being root
or having the appropriate capabilities.
A check for whether it's running as root isn't needed on Linux with bubblewrap
because /dev or its devices are mounted on top of the FUSE layer,
so device nodes are accessed directly rather than through the FUSE layer.
|
This fixes all devices being mapped to the non-existent device 0,
which prevented even safe devices like /dev/null from being usable
through the hardlinks FUSE layer.
|
Update contributing guide
See merge request BuildStream/buildstream!801
|
Address post-merge review of Ensure PWD is set in process environment
See merge request BuildStream/buildstream!788
|
The current directory isn't always in the Python module search path,
so we have to ensure it is present for the script to work.
Strictly speaking, the user may already have a modified PYTHONPATH
at which point PYTHONPATH=".${PYTHONPATH+:$PYTHONPATH}" is necessary,
but it's probably premature to overcomplicate the documentation like that
before we discover it's a problem.
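The shell form above, PYTHONPATH=".${PYTHONPATH+:$PYTHONPATH}", prepends "." and only adds the ":" separator when PYTHONPATH is already set. A minimal Python sketch of that same composition (the helper name here is ours, purely for illustration):

```python
def pythonpath_with_cwd(env):
    # Mirror PYTHONPATH=".${PYTHONPATH+:$PYTHONPATH}":
    # "." alone when PYTHONPATH is unset, ".:<old value>" otherwise,
    # so an existing user-modified PYTHONPATH is never clobbered.
    existing = env.get("PYTHONPATH")
    if existing is None:
        return "."
    return "." + ":" + existing
```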
|
Since we now set PWD in the environment of builds,
existing builds may behave differently, so they must cache differently now.
|
Somehow I missed this when originally forking the file from the click
library; we should have followed what is written
in: https://github.com/pallets/click/blob/master/LICENSE
|
Bunch of cleanups
See merge request BuildStream/buildstream!798
|
Remove unneeded cruft.
|
* Rename tree to dir_digest to make it clear this is a Digest object,
and not a Tree object.
* Add documentation
|
* Rename it to _commit_directory() because… it is what it does; and
also for symmetry with _fetch_directory().
* Rename digest to dir_digest to make it clear this is a digest for a
directory. A following commit will also reuse the same variable name.
* Document method.
|
Tristan Maat created the original file, so he is added as the author.
|
We want to check if some file is already cached here, not the parent
directory.
|
Don't delete required artifacts when tracking is enabled
See merge request BuildStream/buildstream!793
|
Same test as test_never_delete_required(), except that this test ensures
that we never delete required artifacts when their cache keys are
discovered dynamically during the build.
|
* create_element_size()
Now uses a git Repo object instead of a local source, and
returns the repo.
* update_element_size()
Added this function which can now resize the expected output
of an element generated with create_element_size(), useful
to allow testing sized elements with the tracking feature.
|
This allows one to modify a file in an existing git repo,
as opposed to adding a new one.
|
These tests were not checking that we fail for the expected reasons.
Added `res.assert_task_error(ErrorDomain.ARTIFACT, 'cache-too-full')`
where we expect to fail because the cache is too full.
|
This commit renames test_never_delete_dependencies() to
test_never_delete_required(), renders the test more readable by renaming
some elements and reordering some statements, and makes the comments
more straightforward and accurate.
|
This was previously append_required_artifacts(), which presumed that
we knew at startup time what the cache keys of all elements in the
loaded pipeline would be.
This fixes unexpected deletions of required artifacts when
dynamic tracking is enabled with `bst build --track-all target.bst`.
|
Fix tests that attempt to access the home directory
See merge request BuildStream/buildstream!780
|
They were moving the whole tmpdir to move the project repository.
This moves the cache directories etc. as well, meaning cli.run can't find them.
This was worked around by setting configure=False,
but this has the side-effect of making use of the user's caches,
which it should not be doing for reproducibility reasons.
By changing the tempdir layout to have the project in a subdirectory
we can move the project around for the relative workspace tests
without losing track of the configured state directories,
so we can leave configure=True and avoid touching the user's caches.
|
Overriding the config with a custom config file on the command line
prevents it from merging in the test-specific configuration
and can cause it to attempt to initialise the user's cache.
|
Fix: While caching build artifact: "Cannot extract [path to socket file] into staging-area. Unsupported type."
See merge request BuildStream/buildstream!783
|
We can't include a socket in a CAS tree, but doing so would be mostly
meaningless anyway, since there can't possibly be a process serving it.
|
Simplify element state by removing `__cached`
See merge request BuildStream/buildstream!784
|
This can get out of sync with the other two cache states,
and we can do without it.
|
README.rst: Add status badges for PyPI release and Python versions
See merge request BuildStream/buildstream!719
|
The first badge will work fine right away while the second badge will
show "not found" until a release is made after merging this branch:
https://gitlab.com/BuildStream/buildstream/merge_requests/718.
|
source/install_source.rst: pip plugin depends on host pip
See merge request BuildStream/buildstream!791
|
_artifactcache/casserver.py: Implement BatchReadBlobs
Closes #632
See merge request BuildStream/buildstream!785
|
Fixes #632.
|
Add validation of configuration variables
See merge request BuildStream/buildstream!678
|
Ensure that protected variables are not being redefined by the user.
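A minimal sketch of the kind of check described here; the protected-variable set and the error type are assumptions for illustration, not BuildStream's actual code:

```python
# Variables BuildStream sets dynamically; users must not redefine them.
# "max-jobs" is the one named elsewhere in this log; others are hypothetical.
PROTECTED_VARIABLES = {"max-jobs"}

def validate_user_variables(user_variables):
    # Reject any user-supplied variable that shadows a protected one.
    for name in user_variables:
        if name in PROTECTED_VARIABLES:
            raise ValueError(
                "'{}' is a protected variable and cannot be redefined".format(name))
```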
|
And remove them from the defaults, as they are dynamically set by
BuildStream.
|
Setting "max-jobs" won't be allowed anymore in a following commit.
|
Ensure PWD is set in process environment
See merge request BuildStream/buildstream!782
|
Naive getcwd implementations (such as the one in bash 4.4) can break
when bind mounts to different paths on the same filesystem are present,
since the algorithm needs to know whether a directory is a mount point
in order to decide whether it can trust the inode value from the readdir
result or must stat the directory instead.
Less naive implementations (such as glibc's) iterate again using stat
when the directory isn't found because the inode from readdir was wrong,
though a Linux-specific implementation could use name_to_handle_at.
Letting the command know what directory it is in makes it unnecessary
for it to call the faulty getcwd in the first place.
|
_scheduler/queues: Mark build and pull queue as requiring shared access to the CACHE
See merge request BuildStream/buildstream!775
|