| Commit message | Author | Age | Files | Lines |
| |
This removes the stateful Pipeline object and leaves behind only a toolbox
of functions for constructing element lists, such as _pipeline.get_selection()
and _pipeline.except_elements(), and some helpers for asserting element states
on lists of elements.
This makes it easier for Stream to manage its own internal state, so that
Stream can more easily decide to operate without requiring that a Project
instance be available.
This also adds type annotations to the new version of _pipeline.py.
|
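A minimal sketch of what one of these stateless toolbox functions might look like. The element representation is simplified to strings for illustration; real BuildStream elements are objects, and the actual function signatures in _pipeline.py may differ:

```python
# Hypothetical sketch of a stateless "toolbox" helper replacing a
# stateful Pipeline object: element lists in, element lists out,
# with no state held by the module itself.
from typing import Iterable, List


def except_elements(elements: List[str], except_targets: Iterable[str]) -> List[str]:
    """Return `elements` with every member of `except_targets` removed."""
    excluded = set(except_targets)
    return [element for element in elements if element not in excluded]
```

Because the helpers are pure functions over element lists, Stream can call them without constructing or owning any Pipeline object.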
| |
This function is used only once and is quite unnecessary.
|
| |
Replaces the Pipeline method `track_cross_junction_filter()`.
This changes the error domain for invalid cross junction tracking, so
the following two test cases are updated:
* testing/_sourcetests/track.py
* tests/frontend/track.py
|
| |
Replaces Pipeline `resolve_elements()`.
|
| |
This replaces the pipeline `load()` method.
This also does some rewording in `Stream._load_elements_from_targets()`.
|
| |
This replaces the corresponding Pipeline method `check_remotes()`.
|
| |
This also adjusts the very strange tests in tests/internals/cascache.py
which use unittest's MagicMock interface to inspect what happened on
specific Python methods instead of doing proper end-to-end testing.
|
| |
Use Messenger convenience functions instead.
|
| |
Use Messenger convenience functions instead.
|
| |
Use Messenger convenience functions instead.
|
| |
Several parts of the core have replicated codepaths for issuing info and
warning messages; remove the need for these functions by providing a
convenience layer in the Messenger object.
|
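A rough sketch of the shape of such a convenience layer (not the real BuildStream Messenger API; the method names and message representation here are illustrative):

```python
# Illustrative sketch: thin convenience wrappers on a Messenger so
# call sites no longer replicate message-construction boilerplate.
from typing import List, Tuple


class Messenger:
    def __init__(self) -> None:
        # Recorded messages stand in for real dispatch to handlers
        self.messages: List[Tuple[str, str]] = []

    def message(self, kind: str, text: str) -> None:
        # Central entry point; real code would route to a handler
        self.messages.append((kind, text))

    # Convenience layer replacing replicated info/warn codepaths
    def info(self, text: str) -> None:
        self.message("info", text)

    def warn(self, text: str) -> None:
        self.message("warning", text)
```

Callers then write `messenger.info("...")` instead of building a message object at every site.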
| |
This omits the type annotation for the message handler callback, as
this callback contains a keyword argument and can only be annotated
using `Protocol` type, which will only be available in python >= 3.8.
Added a FIXME comment so that we can rectify this when dropping
support for Python 3.7.
|
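To illustrate the limitation mentioned above: `typing.Callable` cannot express keyword arguments, but a `Protocol` with a `__call__` method can. The names below are hypothetical, not the real callback signature; this requires Python >= 3.8 (or the `typing_extensions` backport on 3.7):

```python
# Sketch: annotating a callback that takes a keyword argument requires
# a callback Protocol, since Callable[...] cannot express keywords.
from typing import List, Optional, Protocol, Tuple


class MessageHandler(Protocol):
    def __call__(self, message: str, *, detail: Optional[str] = None) -> None: ...


received: List[Tuple[str, Optional[str]]] = []


def handler(message: str, *, detail: Optional[str] = None) -> None:
    # A concrete handler matching the protocol's signature
    received.append((message, detail))


h: MessageHandler = handler  # type-checks because the signatures match
```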
| |
This allows third party developers to use the Cli() object
in their own test fixtures.
|
| |
Instead of requiring every fixture to do it separately.
|
| |
In errors pertaining to failing to launch a shell with a buildtree.
Other related updates:
- _frontend/cli.py: Propagate machine readable error codes in `bst shell`.
This command prefixes a reported error, so it rewraps the error into
an AppError; this needs to propagate the originating machine readable
error.
- tests/integration/shell.py, tests/integration/shellbuildtrees.py:
Updated to use new machine readable errors
|
| |
When Stream is asked for a list of artifacts to show for
the purpose of `bst artifact show`, it was squashing the element
name with the artifact name before displaying it in the
frontend.
Instead, handle the special casing in the frontend.
|
| |
Stack elements cannot be build-only dependencies, as this would defeat
the purpose of using stack elements in order to directly build-depend on
them.
Stack element dependencies must all be built in order to build depend
on them, and as such we gain no build parallelism by allowing runtime-only
dependencies on stack elements. Declaring a runtime-only dependency on
a stack element as a whole might still be useful, but it nevertheless requires
the entire stack to be built at the time we need that stack.
Instead, it is more useful to ensure that a stack element is a logical
group of all dependencies, including runtime dependencies, such that we
can guarantee cache key alignment with all stack dependencies.
This allows for stronger reliability in commands such as
`bst artifact checkout`, which can now reliably download and checkout
a fully built stack as a result, without any uncertainty about possible
runtime-only dependencies which might exist in the project where that
artifact was created.
This consequently closes #1075
This also fixes the following tests so that they no longer
require build-depends or runtime-depends to work in stack elements:
* tests/frontend/default_target.py: Was not necessary to check results of show,
these stacks were set to runtime-depends so that they would have the same
buildable state as their dependencies when shown.
* tests/format/dependencies.py, tests/frontend/pull.py, test/frontend/show.py,
tests/integration/compose.py:
These tests were using specific build/runtime dependencies in stacks, but
for no particular reason.
|
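The rule described above can be sketched as a simple validation pass. The function name, dependency-kind strings, and error wording here are all hypothetical, chosen only to illustrate the constraint that stack dependencies must be both build and runtime dependencies:

```python
# Hypothetical sketch: a stack element rejects any dependency that is
# not both a build and a runtime dependency ("all"), which is what
# guarantees cache key alignment with the whole stack.
from typing import List, Tuple


def validate_stack_dependencies(deps: List[Tuple[str, str]]) -> List[str]:
    """Return one error string per invalid dependency.

    `deps` is a list of (element name, dependency kind) pairs, where
    kind is one of "build", "runtime" or "all".
    """
    return [
        "{}: stack dependencies must be both build and runtime dependencies".format(name)
        for name, kind in deps
        if kind != "all"
    ]
```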
| |
This improves overall documentation comments on the State object,
adds full PEP 484 type hinting, and renames Task.set_render_cb()
to Task.set_task_changed_callback() for more consistent naming.
This also adds missing frontend-facing API for the group changed
status notifications; even though the frontend does not currently
use these, it makes better sense to have them than to remove the
entire codepaths and callback lists.
This also reorders the classes in this file so that Task and TaskGroup
are both defined before State, which helps a bit with undefined references
in the type hinting information.
|
| |
We created the State object in the core for the purpose of advertising
state to the frontend: the frontend can register callbacks and get
updates on state changes (implicit invocation in the frontend), but state
always belongs to the core and the frontend can only read it.
When the frontend asks the core to do something, this should always
be done with an explicit function call, and preferably not via the
State object, as this confuses the role of State, which is only a read-only
state advertising desk.
This was broken (implemented backwards) for job retries: instead we had
the frontend telling State "It has been requested that this job be retried!",
and then we had the core registering callbacks for that frontend request. This
direction of implicit invocation should not happen (in fact, the core should
never have to register callbacks on the State object at all).
Summary of changes:
* _stream.py: Change _failure_retry(), which was for some reason
private albeit called from the frontend, to an explicit function
call named "retry_job()".
Instead of calling into the State object and causing core-side
callbacks to be triggered, later to be handled by the Scheduler,
implement the retry directly from the Stream. Since this implementation
deals only with Queues and State, which already directly belong to
the Stream object, there is no reason to trouble the Scheduler
with this.
* _scheduler.py: Remove the callback handling the State "task retry"
event.
* _state.py: Remove the task retry callback chain completely.
* _frontend/app.py: Call stream.retry_job() instead of
stream.failure_retry(), now passing along the task's action name
rather than the task's ID.
This API now assumes that Stream.retry_job() can only be called on
a task which originates from a scheduler Queue, and expects to be
given the action name of the queue in which the given element has
failed and should be retried.
|
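The explicit-call direction can be sketched as follows. The class shapes and method bodies are simplified illustrations (real Queues schedule jobs rather than holding name lists), showing only that the frontend calls the Stream directly and no State callbacks are involved:

```python
# Sketch of the corrected direction: frontend -> Stream.retry_job()
# is an explicit call, and the Stream resubmits the element on the
# matching queue itself, with no event routed through State.
from typing import Dict, List


class Queue:
    def __init__(self, action_name: str) -> None:
        self.action_name = action_name
        self.pending: List[str] = []

    def enqueue(self, element: str) -> None:
        self.pending.append(element)


class Stream:
    def __init__(self, queues: List[Queue]) -> None:
        # Queues already belong to the Stream, so no Scheduler is needed
        self._queues: Dict[str, Queue] = {q.action_name: q for q in queues}

    def retry_job(self, action_name: str, element: str) -> None:
        # Look up the queue by the action name of the failed task
        self._queues[action_name].enqueue(element)
```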
| |
The Task object is not internal to the State object, it is clearly
given to the frontend and passed around.
|
| |
The implementation can be reused to replace `local_missing_blobs()` and
simplify `contains_files()`.
|
| |
This allows adding multiple objects in a single batch, avoiding extra
gRPC round trips to buildbox-casd.
|
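The trade-off is the usual batching pattern: queue objects locally and send them in one request instead of one round trip each. This is an illustrative sketch, not the buildbox-casd or gRPC API; the class and parameter names are invented:

```python
# Illustrative batching sketch: N adds become one batched send,
# trading N round trips for one.
from typing import Callable, List


class BatchAdder:
    def __init__(self, send_batch: Callable[[List[bytes]], None], limit: int = 100) -> None:
        self._send_batch = send_batch  # stands in for one gRPC batch request
        self._limit = limit
        self._pending: List[bytes] = []

    def add(self, blob: bytes) -> None:
        self._pending.append(blob)
        if len(self._pending) >= self._limit:
            self.flush()

    def flush(self) -> None:
        # Send everything queued so far as a single batch
        if self._pending:
            self._send_batch(self._pending)
            self._pending = []
```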
| |
It's only used by `_fetch_tree()` and can be replaced by a single
additional line.
|
| |
This eliminates code duplication in `ArtifactCache`, `SourceCache` and
`ElementSourcesCache`.
|
| |
This simplifies the code, delegating the logic to buildbox-casd.
|
| |
It's not used outside testutils.
|
| |
* .pylintrc: Disable new `raise-missing-from` check. We might want to
enable that later, but it fails in many places. Let's not merge both
changes here.
* pluginoriginpip.py: Catch the newer exception thrown by
pkg_resources. The previous one still exists, so we should be fine
keeping the same compatibility as before.
|
| |
When instantiating an ArtifactElement, use an ArtifactProject to ensure
that the Element does not accidentally have access to any incidentally
existing project loaded from the current working directory.
Also pass along the Artifact to the Element's initializer directly, and
conditionally instantiate the element based on its artifact instead of
based on loading YAML configuration.
Fixes #1410
Summary of changes:
* _artifactelement.py:
- Now load the Artifact and pass it along to the Element constructor
- Now use an ArtifactProject for the element's project
- Remove overrides of Element methods, instead of behaving differently,
now we just fill in all the blanks for an Element to behave more
naturally when loaded from an artifact.
- Avoid redundantly loading the same artifact twice, if the artifact
was cached then we will load only one artifact.
* element.py:
- Conditionally instantiate from the passed Artifact instead of
considering any YAML loading.
- Error out early in _prepare_sandbox() in case we are trying
to instantiate a sandbox for an uncached artifact, in which case
we don't have any SandboxConfig at hand to do so.
* _stream.py:
- Clear the ArtifactProject cache after loading artifacts
- Ensure we load a list of unique artifacts without any duplicates
* tests/frontend/buildcheckout.py: Expect a different error when trying
to checkout an uncached artifact
* tests/frontend/push.py, tests/frontend/artifact_show.py: No longer expect
duplicates to show up with wildcard statements which would capture multiple
versions of the same artifact (this changes because of #1410 being fixed)
|
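The conditional instantiation described above can be sketched roughly as below. These are not the real BuildStream class signatures; names and attributes are simplified stand-ins to show the control flow (skip YAML loading when an Artifact is given, and fail early on uncached artifacts):

```python
# Simplified sketch of conditional construction from an Artifact
# versus from YAML configuration. All names here are illustrative.
from typing import Optional


class Artifact:
    def __init__(self, name: str, cached: bool) -> None:
        self.name = name
        self.cached = cached


class Element:
    def __init__(self, name: str, artifact: Optional["Artifact"] = None) -> None:
        self.name = name
        self._artifact = artifact
        if artifact is None:
            # Normal path: configuration comes from the .bst file
            self._load_yaml_configuration()

    def _load_yaml_configuration(self) -> None:
        self.loaded_from = "yaml"

    def _prepare_sandbox(self) -> None:
        # Error out early: an uncached artifact carries no SandboxConfig
        if self._artifact is not None and not self._artifact.cached:
            raise RuntimeError("Cannot prepare a sandbox for an uncached artifact")
```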
| |
These properties allow easy addressing of the different cache keys,
and report a cached value if the artifact was cached, otherwise report
the value assigned to the Artifact at instantiation time.
|
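A hedged sketch of that fallback behaviour, with invented attribute names (only the cached-value-else-assigned-value logic is taken from the message above):

```python
# Illustrative property: report the key read from the cached artifact
# when available, otherwise the key assigned at instantiation time.
from typing import Optional


class Artifact:
    def __init__(self, strong_key: Optional[str] = None) -> None:
        self._assigned_strong_key = strong_key
        self._cached_strong_key: Optional[str] = None  # set when loaded from cache

    @property
    def strong_key(self) -> Optional[str]:
        if self._cached_strong_key is not None:
            return self._cached_strong_key
        return self._assigned_strong_key
```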
| |
The Project class initializer is now refactored such that loading
a project.conf is made optional. The initializer is now well sorted,
with public members appearing before private members, followed by
the initialization body. PEP 484 type hints are now employed aggressively
for all Project instance members.
An ArtifactProject is added to serve as the data model counterpart
of the ArtifactElement, ensuring that we never mistakenly use locally
loaded project data in ArtifactElement instances.
Consequently, the Project.sandbox and Project.splits variables are
properly made public by this commit, as these are simply loaded from
the project config and accessed elsewhere by Element; Element is updated
to access these public members by their new public names.
|
| |
Instead of delegating this job to a Project instance, implement
the application state cleanup directly on the Stream object.
This commit also:
* Removes Project.cleanup()
* Rewords incorrect API documentation for Element._reset_load_state()
|
| |
Instead of having _pipeline.py implement load_artifacts() by calling
_project.py's other implementation of load_artifacts(), just
implement _load_artifacts() directly in _stream.py.
This of course removes the load_artifacts() implementations from
_pipeline.py and _project.py.
|