When a user sets zuul_console_disabled, we don't need to try to
connect to the streaming daemon; in fact, they may have set it
because they know it won't be running. Check for this and skip
the connection step in that case, avoiding the extraneous
"Waiting on logger" messages and the extra 30-second delay at the
end of each task.
Change-Id: I86af231f1ca1c5b54b21daae29387a8798190a58
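The check described above might be sketched as follows. This is a minimal illustration, not Zuul's actual callback code; the function name `should_stream` is hypothetical, while the `zuul_console_disabled` variable name comes from the commit text.

```python
# Hypothetical sketch: decide whether to open a streaming connection.
def should_stream(host_vars):
    """Return False when the user has disabled the console daemon.

    If zuul_console_disabled is set we never attempt a connection,
    avoiding the "Waiting on logger" retries and the ~30 s delay.
    """
    return not host_vars.get('zuul_console_disabled', False)
```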
Upon further discussion we recently found another case of leaking
console streaming files; if the zuul_console is not running on port
19885, or can not be reached, the streaming spool files will still be
leaked.
The prior work in I823156dc2bcae91bd6d9770bd1520aa55ad875b4 has the
receiving side indicate to the zuul_console daemon that it should
remove the spool file.
If this doesn't happen, either because the daemon was never there, or
it is firewalled off, the streaming spool files are left behind.
This modifies the command action plugin to look for a variable
"zuul_console_disabled" which will indicate to the library running the
shell/command task not to write out the spool file at all, as it will
not be consumed.
It is expected this would be set at a host level in inventory for
nodes that you know can not or will not have access to the
zuul_console daemon.
We already have a mechanism to disable this for commands running in a
loop; we expand this with a new string type. The advantage of this is
it leaves the library/command.py side basically untouched.
Documentation is updated, and we cover this with a new test.
Change-Id: I0273993c3ece4363098e4bf30bfc4308bb69a8b4
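The "new string type" mechanism described above could look like the sketch below. The sentinel values (`'skip'`, `'in-loop-ignore'`) and the function name are assumptions for illustration; the point is that the action plugin encodes the decision in the log id, so library/command.py stays basically untouched.

```python
import uuid

# Hedged sketch: the action plugin builds the zuul_log_id that the
# command module will later see. A sentinel string tells it not to
# write a spool file at all. Sentinel names are assumptions.
def make_log_id(task_vars, in_loop=False):
    if task_vars.get('zuul_console_disabled'):
        return 'skip'             # never write the spool file
    if in_loop:
        return 'in-loop-ignore'   # existing loop mechanism
    return str(uuid.uuid4())      # normal case: unique per task
```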
This is a follow-on to Ia78ad9e3ec51bc47bf68c9ff38c0fcd16ba2e728 to
use a different loopback address for the local connection to the
Python 2.7 container. This way, we don't have to override the
existing localhost/127.0.0.1 matches that avoid the executor trying to
talk to a zuul_console daemon. These bits are removed.
The comment around the port settings is updated while we're here.
Change-Id: I33b2198baba13ea348052e998b1a5a362c165479
Change Ief366c092e05fb88351782f6d9cd280bfae96237 introduced a bug in
the streaming daemons because it was using Python 3.6 features. The
streaming console needs to work on all Ansible managed nodes, which
includes back to Python 2.7 nodes (while Ansible supports that).
This introduces a regression test by building about the smallest
Python 2.7 container that can be managed by Ansible. We start this
container and modify the test inventory to include it, then run the
stream tests against it.
The existing testing runs against the "new" console but also tests
against the console OpenDev's Zuul starts to ensure
backwards-compatibility. Since this container wasn't started by Zuul
it doesn't have this, so that testing is skipped for this node.
It might be good to abstract all testing of the console daemons into
separate containers for each Ansible supported managed-node Python
version -- it's a bit more work than I want to take on right now.
This should ensure the lower-bound though and prevent regressions for
older platforms.
Change-Id: Ia78ad9e3ec51bc47bf68c9ff38c0fcd16ba2e728
I noticed in some of our testing a construct like
debug:
  msg: '{{ ansible_version }}'
was actually erroring out; you'll see in the console output if you're
looking:
Ansible output: b'TASK [Print ansible version msg={{ ansible_version }}] *************************'
Ansible output: b'[WARNING]: Failure using method (v2_runner_on_ok) in callback plugin'
Ansible output: b'(<ansible.plugins.callback.zuul_stream.CallbackModule object at'
Ansible output: b"0x7f502760b490>): 'dict' object has no attribute 'startswith'"
and the job-output.txt will be empty for this task (this is detected
by I9f569a411729f8a067de17d99ef6b9d74fc21543).
This is because the msg value here comes in as a dict, and in several
places we assume it is a string. This changes places we inspect the
msg variable to use the standard Ansible way to make a text string
(to_text function) and ensures in the logging function it converts the
input to a string.
We test for this with updated tasks in the remote_zuul_stream tests.
It is slightly refactored to do partial matches so we can use the
version strings, which is where we saw the issue.
Change-Id: I6e6ed8dba2ba1fc74e7fc8361e8439ea6139279e
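The normalization described above can be sketched like this. Ansible's real helper is `to_text()` from `ansible.module_utils`; a simplified stand-in is defined here so the example is self-contained, and `format_msg` is a hypothetical name for the logging-side conversion.

```python
# Simplified stand-in for Ansible's to_text(): always yield str,
# whether the input is bytes, a dict, or already a string.
def to_text(value):
    if isinstance(value, bytes):
        return value.decode('utf-8', 'replace')
    return str(value)

# The logging function converts its input before any string
# operations (e.g. startswith), so a dict msg no longer raises.
def format_msg(msg):
    return to_text(msg)
```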
Currently the task in the test playbook
- hosts: compute1
  tasks:
    - name: Command Not Found
      command: command-not-found
      failed_when: false
is failing in the zuul_stream callback with an exception trying to
fill out the "delta" value in the message here. The result dict
(taken from the new output) shows us why:
2022-08-24 07:19:27.079961 | TASK [Command Not Found]
2022-08-24 07:19:28.578380 | compute1 | ok: ERROR (ignored)
2022-08-24 07:19:28.578622 | compute1 | {
2022-08-24 07:19:28.578672 | compute1 | "failed_when_result": false,
2022-08-24 07:19:28.578700 | compute1 | "msg": "[Errno 2] No such file or directory: b'command-not-found'",
2022-08-24 07:19:28.578726 | compute1 | "rc": 2
2022-08-24 07:19:28.578750 | compute1 | }
i.e. it has no start/stop/delta in the result (it did run and fail, so
you'd think it might ... but this is what Ansible gives us).
This checks for this path; as mentioned the output will now look like
above in this case.
This was found by the prior change
I9f569a411729f8a067de17d99ef6b9d74fc21543. This fixes the current
warning, so we invert the test to prevent further regressions.
Change-Id: I106b2bbe626ed5af8ca739d354ba41eca2f08f77
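The guard described above amounts to checking for the timing keys before using them. This is an illustrative sketch (function names are hypothetical); the field names match Ansible's result dict as quoted in the commit.

```python
# True only when Ansible actually supplied timing in the result.
def has_timing(result):
    return all(k in result for k in ('start', 'end', 'delta'))

# Render a runtime line only when timing exists; a command that
# could not even be exec'd has no start/end/delta at all.
def runtime_line(result):
    if has_timing(result):
        return 'Runtime: {}'.format(result['delta'])
    return None
```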
With the default "linear" strategy (and likely others), Ansible will
send the on_task_start callback, and then fork a worker process to
execute that task. Since we spawn a thread in the on_task_start
callback, we can end up emitting a log message in this method while
Ansible is forking. If a forked process inherits a Python file object
(i.e., stdout) that is locked by a thread that doesn't exist in the
fork (i.e., this one), it can deadlock when trying to flush the file
object. To minimize the chances of that happening, we should avoid
using _display outside the main thread.
The Python logging module is supposed to use internal locks which are
automatically acquired and released across a fork. Assuming this is
(still) true and functioning correctly, we should be okay to issue
our Python logging module calls at any time. If there is a fault
in this system, however, it could have a potential to cause a similar
problem.
If we can convince the Ansible maintainers to lock _display across
forks, we may be able to revert this change in the future.
Change-Id: Ifc6b835c151539e6209284728ccad467bef8be6f
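The fork-safe pattern the commit relies on is simply routing thread-side output through the logging module rather than `_display`. A minimal sketch, assuming a `zuul.ansible` logger name for illustration:

```python
import logging

logger = logging.getLogger('zuul.ansible')

# Safe to call off the main thread: the logging module manages its
# handler locks itself (including around os.fork()), unlike
# Ansible's _display file object.
def emit_from_thread(message):
    logger.debug(message)
```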
In zuul_stream.py:v2_playbook_on_task_start() it checks for
"task.loop" and exits if the task is part of a loop.
However the library/command.py override still writes out the console
log despite it never being read. To avoid leaving this file around,
mark a sentinel uuid in the action plugin if the command is part of a
loop. In that case, for simplicity we just write to /dev/null -- that
way no other assumptions in the library/command.py have to change; it
just doesn't leave a file on disk.
This is currently difficult to test, as the infrastructure
zuul_console leaves /tmp/console-* files behind and we can not tell
which come from that and which from testing. After this and the
related change I823156dc2bcae91bd6d9770bd1520aa55ad875b4 are deployed
to the infrastructure executors, we can add a simple and complete
test by ensuring no /tmp/console-* files are left behind after
testing. I have tested this locally and do not see files from loops,
which I was seeing before this change.
Change-Id: I4f4660c3c0b0f170561c14940cc159dc43eadc79
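The /dev/null trick can be sketched as below. The sentinel value and the path template are assumptions for illustration; the idea is that the command override's console-writing code runs unchanged but leaves no file on disk for loop tasks.

```python
import os

# Pick where the command module writes its console spool file.
# 'in-loop-ignore' is an assumed sentinel set by the action plugin
# when the task is part of a loop.
def console_path(zuul_log_id):
    if zuul_log_id == 'in-loop-ignore':
        return os.devnull        # write goes nowhere, no file left
    return '/tmp/console-{}.log'.format(zuul_log_id)
```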
In Ief366c092e05fb88351782f6d9cd280bfae96237 I missed that this runs
in the context of the remote node; meaning that it must support all
the Python versions that might run there. f-strings are not 3.5
compatible.
I'm thinking about how to lint this better (a syntax check run?)
Change-Id: Ia4133b061800791196cd631f2e6836cb77347664
When using protocol version 1, send a finalise message when streaming
is complete so that the zuul_console daemon can delete the temporary
file.
We test this by inspecting the Ansible console output, which logs a
message with the UUID of the streaming job. We dump the temporary
files on the remote side and make sure a console file for that job
isn't present.
Change-Id: I823156dc2bcae91bd6d9770bd1520aa55ad875b4
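A sketch of the finalise step, under stated assumptions: the `f:` wire prefix is invented here for illustration (the real format is defined by zuul_stream/zuul_console), and `FakeSock` stands in for a real socket.

```python
# Stand-in for a connected socket, recording what was sent.
class FakeSock:
    def __init__(self):
        self.sent = []
    def sendall(self, data):
        self.sent.append(data)

# Under protocol v1, tell the console daemon streaming is complete
# so it can delete the temporary file for this log uuid.
def finalise(sock, log_uuid):
    sock.sendall(b'f:' + log_uuid.encode('ascii') + b'\n')
```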
A refresher on how this works, to the best of my knowledge:
1. First, Zuul's Ansible has a library task "zuul_console:" which is
   run against the remote node; this forks a console daemon, listening
   on a default port.
2. We have an action plugin that runs for each task and, if that task
   is a command/shell task, assigns it a unique id.
3. We then override library/command.py (which backs command/shell
   tasks) with a version that forks and runs the process on the target
   node as usual, but also saves the stdout/stderr output to a
   temporary file named per the unique uuid from the step above.
4. At the same time we have the callback plugin zuul_stream.py, which
   Ansible calls as it moves through starting, running and finishing
   the tasks. This looks at the task and, if it has a UUID [2], sends
   a request to the zuul_console [1], which opens the temporary file
   [3] and starts streaming it back.
5. We loop reading this until the connection is closed by [1],
   eventually outputting each line.
In this way, the console log is effectively streamed and saved into
our job output.
We have established that we expect the console [1] is updated
asynchronously to the command/streaming [3,4] in situations such as
static nodes. This poses a problem if we ever want to update either
part -- for example we can not change the file-name that the
command.py file logs to, because an old zuul_console: will not know to
open the new file. You could imagine other fantasy things you might
like to do, e.g. negotiating compression, that would have similar
issues.
To provide the flexibility for these types of changes, implement a
simple protocol where the zuul_stream and zuul_console sides exchange
their respective version numbers before sending the log files. This
way they can both decide what operations are compatible both ways.
Luckily the extant protocol, which is really just sending a plain
uuid, can be adapted to this. When an old zuul_console server gets
the protocol request it will just look like an invalid log file, which
zuul_stream can handle and thus assume the remote end doesn't know
about protocols.
This bumps the testing timeout; it seems the extra calls make for
random failures. The failures are random and not in the same place;
I've run this separately in 850719 several times and not seen any
problems with the higher timeout. This test already has a slightly
higher settle timeout, so I think it must have just been on the edge.
Change-Id: Ief366c092e05fb88351782f6d9cd280bfae96237
The Zuul Ansible callback stream plugin assumed that the ansible loop
var was always called 'item' in the result_dict. You can override this
value (and it is often necessary to do so to avoid collisions) to
something less generic. In those cases we would get errors like:
b'[WARNING]: Failure using method (v2_runner_item_on_ok) in callback plugin'
b'(<ansible.plugins.callback.zuul_stream.CallbackModule object at'
b"0x7fbecc97c910>): 'item'"
And stream output would not include the info typically logged.
Address this by checking if ansible_loop_var is in the results_dict and
using that value for the loop var name instead. We still fall back to
'item' as I'm not sure that ansible_loop_var is always present.
Change-Id: I408e6d4af632f8097d63c04cbcb611d843086f6c
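The fix described above boils down to a two-line lookup; this sketch uses a hypothetical helper name, with the `ansible_loop_var` key and the `'item'` fallback taken from the commit text.

```python
# Find the loop item in a result dict, honouring a renamed loop
# variable; fall back to 'item' since ansible_loop_var may not
# always be present.
def get_loop_item(result_dict):
    loop_var = result_dict.get('ansible_loop_var', 'item')
    return result_dict.get(loop_var)
```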
If the command didn't exist:
- Popen would throw an exception
- 't' would not be updated (t is None)
- the return code would not be written to the console
- zuul_stream would wait unnecessarily for 10 seconds
As rc is defined in the normal case and in both exception paths, it
can be written to the console in every case.
Change-Id: I77a4e1bdc6cd163143eacda06555b62c9195ee38
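The shape of the fix is that `rc` gets a value on every path, so it can always be reported. A self-contained sketch (the function and the catch-all value 257 are illustrative, not the real command module):

```python
import subprocess
import sys

# rc is assigned on every path: normal completion, a failed exec
# (OSError), or any other exception. E.g.:
#   run([sys.executable, '-c', 'pass'])
def run(args):
    try:
        proc = subprocess.Popen(args, stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
        proc.communicate()
        rc = proc.returncode
    except OSError as e:
        rc = e.errno          # e.g. 2 for "No such file or directory"
    except Exception:
        rc = 257              # illustrative catch-all code
    return rc
```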
If a role is applied to a host more than once (via either play
roles or include_roles, but not via an include_role loop), it will
have the same task UUID from ansible which means Zuul's command
plugin will write the streaming output to the same filename, and
the log streaming will request the same file. That means the file
might look like this after the second invocation:
2022-05-19 17:06:23.673625 | one
2022-05-19 17:06:23.673781 | [Zuul] Task exit code: 0
2022-05-19 17:06:29.226463 | two
2022-05-19 17:06:29.226605 | [Zuul] Task exit code: 0
But since we stop reading the log after "Task exit code", the user
would see "one" twice, and never see "two".
Here are some potential fixes for this that don't work:
* Accessing the task vars from zuul_stream to store any additional
information: the callback plugins are not given the task vars.
* Setting the log id on the task args in zuul_stream instead of
command: the same Task object is used for each host and therefore
the command module might see the task object after it has been
further modified (in other words, nothing host-specific can be
set on the task object).
* Setting an even more unique uuid than Task._uuid on the Task
object in zuul_stream and using that in the command module instead
of Task._uuid: in some rare cases, the actual task Python object
may be different between the callback and command plugin, yet still
have the same _uuid; therefore the new attribute would be missing.
Instead, a global variable is used in order to transfer data between
zuul_stream and command. This variable holds a counter for each
task+host combination. Most of the time it will be 1, but if we run
the same task on the same host again, it will increment. Since Ansible
will not run more than one task on a host simultaneously, there is
no race between the counter being incremented in zuul_stream and used
in command.
Because Ansible is re-invoked for each playbook, the memory usage is
not a concern.
There may be a fork between zuul_stream and command, but that's fine
as long as we treat it as read-only in the command plugin. It will
have the data for our current task+host from the most recent zuul_stream
callback invocation.
This change also includes a somewhat unrelated change to the test
infrastructure. Because we were not setting the log stream port on
the executor in tests, we were actually relying on the "real" OpenDev
Zuul starting zuul_console on the test nodes rather than the
zuul_console we set up for each specific Ansible version from the tests.
This corrects that and uses the correct zuul_console port, so that if we
make any changes to zuul_console in the future, we will test the
changed version, not the one from the Zuul which actually runs the
tox-remote job.
Change-Id: Ia656db5f3dade52c8dbd0505b24049fe0fff67a5
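The global counter described above can be sketched as follows; the variable and function names are assumptions, but the keying by task+host, the increment in the callback, and the read-only use in the command plugin come from the commit.

```python
from collections import defaultdict

# Module-level state shared between zuul_stream and command.
# Incremented only by the callback; the command plugin treats it
# as read-only, so a fork between the two is harmless.
ZUUL_LOG_ID_MAP = defaultdict(int)

def on_task_start(task_uuid, host):
    ZUUL_LOG_ID_MAP[(task_uuid, host)] += 1

# Per-invocation log id: unique even when the same task (same
# Ansible uuid) runs on the same host more than once.
def log_id(task_uuid, host):
    return '%s-%s' % (task_uuid, ZUUL_LOG_ID_MAP[(task_uuid, host)])
```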
The current no_log handling here was added in the following steps:
Ic1becaf2f3ab345da22fa62314f1296d76777fec was the original zuul_json
and added the check of the result object
if result._result.get('_ansible_no_log', False):
This happened in July, 2017. However, soon after in October 2017,
Ansible merged [1] which changed things so that the callbacks
(v2_runner_on_ok) get "clean copy" of the TaskResult object; e.g.
if isinstance(arg, TaskResult):
    new_args.append(arg.clean_copy())
It can be seen at [2] that this is where results are censored. This
change made it into Ansible 2.5. Ergo this callback will never see
uncensored results here.
The second part of the no_log processing here was added with
I9e8d08f75207b362ca23457c44cc2f38ff43ac23 in March 2018. This was to
work around an issue with loops, where the uncensored values would
"leak" into the results under certain circumstances when using loops,
e.g.
- name: A templated no_log loop
  a_task:
    a_arg: 'foo'
  no_log: '{{ item }}'
  loop:
    - True
    - False
Ansible merged a few changes related to fixing this. [3] was merged
in June 2018 for this exact issue of unreachable hosts. [4] was
merged in August 2018, which makes it so that if any loop item is
"no_log", the whole task is. Both of these changes made it into
Ansible 2.7.
Ansible has merged test-cases for these, but we have also merged
test-cases, so we have double coverage against regression in the
future.
Ergo, I believe we can revert both of these checks. This also means
we don't have to worry about special-casing string results as done in
I02bcd307bcfad8d99dd0db13d979ce7ba3d5e0e4.
[1] https://github.com/ansible/ansible/commit/01b6c7c9c6b7459a3cb53ffc2fe02a8dcc1a3acc
[2] https://github.com/ansible/ansible/blob/f7c2b1986c5b6afce1d8fe83ce6bf26b535aa617/lib/ansible/executor/task_result.py#L13
[3] https://github.com/ansible/ansible/commit/336b3762b23a64e355cfa3efba11ddf5bdd7f0d8
[4] https://github.com/ansible/ansible/commit/bda074d34e46ee9862a48ed067ad42260d3f92ab
Change-Id: I00ef08869f3a8f08a1affa5e15e3386a1891f11e
We have noted a problem with tasks such as "yum:" or "package:" whose
output is not being logged in the job json file.
After investigation, I have found that this is failing with an
exception in the results parsing at
    for index, item_result in enumerate(
            clean_result.get('results', [])):
        if not item_result.get('_ansible_no_log', False):
            continue
When we are *not* processing a task loop, "results" is a simple list
of strings about the package results; thus when it walks each result
the "item_result.get('_ansible_no_log', False)" step fails because a
string has no ".get()". This causes the plugin to exit and the
information in the resulting .json to be incomplete, leading to the
tasks being missing from the UI. I believe this was a regression
introduced with I9e8d08f75207b362ca23457c44cc2f38ff43ac23.
When we *are* processing a task loop, "results" is a list where each
entry is a loop-item result. In this case, we are always walking a
list of dictionaries, so the existing ".get()" call works.
Change-Id: I02bcd307bcfad8d99dd0db13d979ce7ba3d5e0e4
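The essence of the fix is an `isinstance` guard before calling `.get()`. This sketch is illustrative (the function name and the replacement value for censored items are assumptions), but the two result shapes are exactly as described above.

```python
# Walk 'results', censoring only no_log loop items. Non-loop
# package tasks put plain strings in 'results', which must not be
# treated as dicts (the original bug: str has no .get()).
def censor_results(clean_result):
    results = clean_result.get('results', [])
    for index, item_result in enumerate(results):
        if not isinstance(item_result, dict):
            continue                     # plain string: leave as-is
        if item_result.get('_ansible_no_log', False):
            results[index] = {'censored': 'no_log'}  # assumed value
    return results
```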
This adds support for Ansible 5. As mentioned in the reno, only
the major version is specified; that corresponds to major.minor in
Ansible core, so is approximately equivalent to our current regime.
The command module is updated to be based on the current code in
ansible core 2.12.4 (corresponding to community 5.6.0). The previous
version is un-symlinked and copied to the 2.8 and 2.9 directories
for easy deletion as they age out.
The new command module has corrected a code path we used to test
that the zuul_stream module handles python exceptions in modules,
so instead we now take advantage of the ability to load
playbook-adjacent modules to add a test fixture module that always
raises an exception. The zuul stream functional test validation is
adjusted to match the new values.
Similarly, in test_command in the remote tests, we relied on that
behavior, but there is already a test for module exceptions in
test_module_exception, so that check is simply removed.
Among our Ansible version tests, we occasionally had tests which
exercised 2.8 but not 2.9 because it is the default and is otherwise
tested. This change adds explicit tests for 2.9 even if they are
redundant in order to make future Ansible version updates easier and
more mechanical (we don't need to remember to add 2.9 later when
we change the default).
This is our first version of Ansible where the value of
job.ansible-version could be interpreted as an integer, so the
configloader is updated to handle that possibility transparently,
as it already does for floating point values.
Change-Id: I694b979077d7944b4b365dbd8c72aba3f9807329
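The configloader point above is simply that YAML parses `ansible-version: 5` as an int (and `2.9` as a float), so the value is normalized to a string before lookup. A minimal sketch with a hypothetical helper name:

```python
# Accept int, float, or str for job.ansible-version and always
# return the string form used to look up the Ansible installation.
def normalize_ansible_version(value):
    if isinstance(value, (int, float)):
        return str(value)
    return value
```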
We no longer need these directories since there is no difference
between which Ansible plugins we load for the trusted vs untrusted
contexts.
Change-Id: Ibd460d89ebd75a0b58ce715284916e1e1628b518
This restriction is no longer necessary since it was used to check
that we weren't writing something outside of the work/ dir, but that
is now permitted.
Change-Id: Ic447762c7654cf9ac8e9f8bcecc51e8b76b8e67f
We now allow executing code on localhost, so remove this restriction.
Change-Id: I8677674302be666529733158b72af8b6e5f45979
These are the restricted ansible modules and symlinks which are
no longer required.
Change-Id: I8c7b5b00a2f3c84ae780a471bd19f0a2c971a19e
Async tasks cause zuul_stream.py to hang for 30 seconds before
timing out all log streaming threads. This change stops
zuul_stream.py from trying to stream logs for async tasks.
Change-Id: I86098b0788fdd87593e3c68d04953ede7607432b
The recent security fix that caused loading ansible.builtin.command
to load the normal command module was a little too aggressive since
the same mechanism is used by Ansible to load Windows powershell
support code, which is under the Ansible.ModuleUtils.* hierarchy.
This would result in an error from Ansible when attempting to run
the setup playbook:
Could not find imported module support code for 'Ansible.ModuleUtils.Legacy'
This is corrected by reducing the scope of the mutation to just
ansible.builtin and ansible.legacy.
Change-Id: I70e9481478a3326692cb848ce0782f5331dc4758
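The reduced scope amounts to a prefix check; this sketch is illustrative (constant and function names are assumptions), showing that only `ansible.builtin`/`ansible.legacy` names are redirected while Windows support code like `Ansible.ModuleUtils.Legacy` passes through.

```python
# Only these namespaces are aliases for Zuul's overridden modules;
# everything else (notably Ansible.ModuleUtils.*) is left alone.
REDIRECTED_PREFIXES = ('ansible.builtin.', 'ansible.legacy.')

def should_redirect(module_name):
    # str.startswith accepts a tuple of prefixes
    return module_name.startswith(REDIRECTED_PREFIXES)
```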
This change prevents `ansible.builtin.command` usage from failing
with `zuul_log_id missing`.
Change-Id: Ie6ef0817a2d276af2e9949e9755f29343067df26
This corrects a security vulnerability related to loading Ansible
plugins under the `ansible.builtin.*` aliases.
Change-Id: I3a394904765e22080aa038c44bfe26e07a1e86c7
Story: 2009941
Change-Id: Ie209b6e9a4b9192f4e53e73022d4549611cd230c
So that a job may provide sensitive data to a child job without
those data ending up in the inventory file (and therefore in the
log archive), add a secret_data attribute to zuul_return.
Change-Id: I7cb8bed585eb6e94009647f490b9341927266e8f
Story: 2008389
ansible_user_dir is used by ensure_output_dir roles in zuul-job
Change-Id: I8328dca41c055e24aa76d4b9d89a1699a6a3a713
|
| |
| |
| |
| | |
Change-Id: If76e730673ac80e254c878c46b17181e14b4a5de
We already have the infrastructure in place for adding warnings to the
reporting. Plumb that through to zuul_return so jobs can do that on
purpose as well. An example could be a post playbook that analyzes
performance statistics and emits a warning about inefficient usage of
the build node resources.
Change-Id: I4c3b85dc8f4c69c55cbc6168b8a66afce8b50a97
Allow the "find" module to run on the executor under allowed paths.
We already allow the fileglob filter, so this seems like a natural
related function.
Change-Id: Iab4fe4f9ef4efed38c38981f4f13e90ff0c1a76f