This is a combination of 2 commits.
---
py2/3: Stop using stdlib's putrequest(); it only does ASCII
Note that this only affects the functest client.
See also: https://bugs.python.org/issue36274 and
https://bugs.python.org/issue38216
This was previously done just for py3 compatibility, but following
https://github.com/python/cpython/commit/bb8071a our stable gates are
all broken -- apparently, they're running a 2.7 pre-release?
(cherry picked from commit c0ae48ba9aafb0b91869ea3bae8da07a32088777)
(cherry picked from commit 2b4d58952cae8b174fb60529d5284c1d328e9287)
---
bufferedhttp: ensure query params are properly quoted
Recent versions of py27 [1] have begun raising InvalidURL if you try to
include non-ASCII characters in the request path. This was observed
recently in the periodic checks of stable/ocata and stable/pike. In
particular, we would spin up some in-process servers in
test.unit.proxy.test_server.TestSocketObjectVersions and do a container
listing with a prefix param that included raw (unquoted) UTF-8. This
query string would pass unmolested through the proxy, tripping the
InvalidURL error when bufferedhttp called putrequest.
More recent versions of Swift would not exhibit this particular failure,
as the listing_formats middleware would force a decoding/re-encoding of
the query string for account and container requests. However, object
requests with errant query strings would likely be able to trip the same
error.
Swift on py3 should not exhibit this behavior, as we so
thoroughly re-write the request line to avoid hitting
https://bugs.python.org/issue33973.
Now, always parse and re-encode the query string in bufferedhttp. This
prevents any errors on object requests and cleans up any callers that
might use bufferedhttp directly.
[1] Anything after https://github.com/python/cpython/commit/bb8071a;
see https://bugs.python.org/issue30458
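The re-encoding described above can be sketched as follows (a simplified stand-alone version, not Swift's actual bufferedhttp code; the helper name is illustrative):

```python
from urllib.parse import quote, unquote

def requote_query(query):
    """Parse and re-encode a raw query string so that any non-ASCII
    bytes are percent-encoded before the request line is built.
    Already-quoted input comes out unchanged (unquote then quote is
    idempotent here), so well-behaved callers are unaffected."""
    parts = []
    for kv in query.split('&'):
        if '=' in kv:
            k, v = kv.split('=', 1)
            parts.append('%s=%s' % (quote(unquote(k)), quote(unquote(v))))
        else:
            parts.append(quote(unquote(kv)))
    return '&'.join(parts)
```

With this in place a prefix param containing raw UTF-8 (e.g. prefix=café) is rewritten to prefix=caf%C3%A9 before it can trip InvalidURL.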
Closes-Bug: 1843816
Related-Change: Id3ce37aa0402e2d8dd5784ce329d7cb4fbaf700d
Related-Change: Ie648f5c04d4415f3b620fb196fa567ce7575d522
(cherry picked from commit 49f62f6ab7fd1b833e9b5bfbcaafa4b45b592d34)
(cherry picked from commit 9cc6d4138946034516fdf579ac084cb954ea6b06)
---
Change-Id: I4eafc5f057df8a3c15560ace255d05602db56ef6
This commit was bulk generated and pushed by the OpenDev sysadmins
as a part of the Git hosting and code review systems migration
detailed in these mailing list posts:
http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003603.html
http://lists.openstack.org/pipermail/openstack-discuss/2019-April/004920.html
Attempts have been made to correct repository namespaces and
hostnames based on simple pattern matching, but it's possible some
were updated incorrectly or missed entirely. Please reach out to us
via the contact information listed at https://opendev.org/ with any
questions you may have.
This is a mechanically generated change to replace openstack.org
git:// URLs with https:// equivalents.
This is in aid of a planned future move of the git hosting
infrastructure to a self-hosted instance of gitea (https://gitea.io),
which does not support the git wire protocol at this stage.
This update should result in no functional change.
For more information see the thread at
http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003825.html
Change-Id: I89d1d57edd615b14b09d0c6d03d5d74ef9c2bba4
Change-Id: I08cbbbf7e9f2c479d788f97fbf89b590a1072c55
Change-Id: Ib7b208669e900b84a7759819ef76b7b5b7ce8c9a
Closes-Bug: 1774719
(cherry picked from commit 9ef2a828166aece6b374a97b0777b90c359fdebd)
Import legacy jobs since devstack does not have Zuul v3 native jobs
for ocata defined.
Co-Authored-By: Andreas Jaeger <jaegerandi@gmail.com>
Change-Id: I49d963b98f3df21fea0db24c83553ef873ad73c8
This is a mechanically generated patch to complete step 1 of moving
the zuul job settings out of project-config and into each project
repository.
Because there will be a separate patch on each branch, the branch
specifiers for branch-specific jobs have been removed.
Because this patch is generated by a script, there may be some
cosmetic changes to the layout of the YAML file(s) as the contents are
normalized.
See the python3-first goal document for details:
https://governance.openstack.org/tc/goals/stein/python3-first.html
Change-Id: I7633d4a5f82075c427327b566828d8d789046538
Story: #2002586
Task: #24337
Fix source repository from stable/newton to
stable/ocata
Change-Id: I4be6d421c243d9db8917e78c5f4241ef75e2dc8b
Closes-Bug: #1743514
Closes-Bug: #1731943
You can't modify the X-Static-Large-Object metadata with a POST; an
object being an SLO is a property of the .data file. Revert the change
from 4500ff, which attempted to correctly handle X-Static-Large-Object
metadata on a POST but was subject to a race if the most recent SLO
.data wasn't available during the POST. Instead, this change adjusts
the reading of metadata such that the X-Static-Large-Object metadata is
always preserved from the metadata on the datafile and bleeds through
any .meta.
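The "bleeds through" merge can be sketched like this (a hypothetical helper for illustration; the real logic lives in diskfile metadata reading):

```python
def merged_metadata(data_meta, meta_meta):
    # .meta values override .data values, except that
    # X-Static-Large-Object is always taken from the .data file:
    # being an SLO is a property of the data, not of a later POST.
    merged = dict(data_meta)
    merged.update(meta_meta)
    if 'X-Static-Large-Object' in data_meta:
        merged['X-Static-Large-Object'] = data_meta['X-Static-Large-Object']
    else:
        merged.pop('X-Static-Large-Object', None)
    return merged
```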
Closes-bug: #1453807
Closes-bug: #1634723
Co-Authored-By: Kota Tsuyuzaki <tsuyuzaki.kota@lab.ntt.co.jp>
Change-Id: Ie48a38442559229a2993443ab0a04dc84717ca59
(cherry picked from commit 36a843be73e2d58c3fe49a049d514b421124bd06)
Previously, Swift's WSGI servers, the object replicator, and the
object reconstructor were setting Eventlet's hub to either "poll" or
"selects", depending on availability. Other daemons were letting
Eventlet use its default hub, which is "epoll".
In any daemons that fork, we really don't want to use epoll. Epoll
instances end up shared between the parent and all children, and you
get some awful messes when file descriptors are shared.
Here's an example where two processes are trying to wait on the same
file descriptor using the same epoll instance, and everything goes
wrong:
[proc A] epoll_ctl(6, EPOLL_CTL_ADD, 3, ...) = 0
[proc B] epoll_ctl(6, EPOLL_CTL_ADD, 3, ...) = -1 EEXIST (File exists)
[proc B] epoll_wait(6, ...) = 1
[proc B] epoll_ctl(6, EPOLL_CTL_DEL, 3, ...) = 0
[proc A] epoll_wait(6, ...)
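The EEXIST in the trace is easy to reproduce with the stdlib on Linux (a toy demonstration, not Swift code; because epoll state lives in the kernel, a fork()ed child inheriting the epoll fd hits the same error against its parent's registrations):

```python
import errno
import os
import select

def double_register():
    # Register one fd on an epoll instance twice; the second
    # EPOLL_CTL_ADD fails with EEXIST, exactly like "proc B" above.
    r, w = os.pipe()
    ep = select.epoll()
    try:
        ep.register(r, select.EPOLLIN)
        try:
            ep.register(r, select.EPOLLIN)
        except OSError as e:
            return e.errno
        return None
    finally:
        ep.close()
        os.close(r)
        os.close(w)
```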
This primarily affects the container updater and object updater since
they fork. I've decided to change the hub for all Swift daemons so
that we don't add multiprocessing support to some other daemon someday
and suffer through this same bug again.
This problem was made more apparent by commit 6d16079, which made our
logging mutex use file descriptors. However, it could have struck on
any shared file descriptor on which a read or write returned EAGAIN.
Change-Id: Ic2c1178ac918c88b0b901e581eb4fab3b2666cfe
Closes-Bug: 1722951
When the proxy times out talking to a backend server (say, because it
was under heavy load and having trouble servicing the request), we catch
the ChunkReadTimeout and try to get the rest from another server. The
client by and large doesn't care; there may be a brief pause in the
download while the proxy gets the new connection, but all the bytes
arrive and in the right order:
GET from node1, serve bytes 0 through N, timeout
GET from node2, serve bytes N through end
When we calculate the range for the new request, we check to see if we
already have one from the previous request -- if one exists, we adjust
it based on the bytes sent to the client thus far. This works fine for
single failures, but if we need to go back *again* we double up the
offset and send the client incomplete, bad data:
GET from node1, serve bytes 0 through N, timeout
GET from node2, serve bytes N through M, timeout
GET from node3, serve bytes N + M through end
Leaving the client missing bytes M through N + M.
We should adjust the range based on the number of bytes pulled from the
*backend* rather than delivered to the *frontend*. This just requires
that we reset our book-keeping after adjusting the Range header.
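The book-keeping fix can be modelled with a toy resumer (illustrative names, not the proxy's actual code):

```python
class RangeResumer:
    def __init__(self, start, end):
        self.start, self.end = start, end
        self.bytes_from_backend = 0

    def record(self, nbytes):
        # Count bytes pulled from the *backend*, not bytes that
        # happened to reach the client.
        self.bytes_from_backend += nbytes

    def next_range(self):
        # Advance by what the backend already served, then reset the
        # counter; without the reset a second timeout double-counts
        # and the client silently loses bytes M through N + M.
        self.start += self.bytes_from_backend
        self.bytes_from_backend = 0
        return (self.start, self.end)
```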
Change-Id: Ie153d01479c4242c01f48bf0ada78c2f9b6c8ff0
Closes-Bug: 1717401
(cherry picked from commit 6b19ca7a7d5833f5648976d8d30c776975e361db)
The object server runs certain IO-intensive methods outside the main
pthread for performance. If one of those methods tries to log, this can
cause a crash that eventually leads to an object server with hundreds
or thousands of greenthreads, all deadlocked.
The short version of the story is that logging.SysLogHandler has a
mutex which Eventlet monkey-patches. However, the monkey-patched mutex
sometimes breaks if used across different pthreads, and it breaks in
such a way that it is still considered held. After that happens, any
attempt to emit a log message blocks the calling greenthread forever.
The fix is to use a mutex that works across different greenlets and
across different pthreads. This patch introduces such a lock based on
an anonymous pipe.
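The idea can be sketched as below (a bare-bones version for illustration only: the real patch additionally integrates with eventlet's hub so a blocked greenthread yields instead of blocking its whole pthread):

```python
import os

class PipeMutex:
    """A mutex built on an anonymous pipe: the pipe holds exactly one
    token byte when unlocked.  Acquiring reads the token, blocking in
    the kernel until it is available; releasing writes it back.  The
    kernel's pipe semantics are correct across pthreads, unlike the
    monkey-patched threading lock."""

    def __init__(self):
        self.rfd, self.wfd = os.pipe()
        os.write(self.wfd, b'-')  # start unlocked

    def acquire(self):
        os.read(self.rfd, 1)

    def release(self):
        os.write(self.wfd, b'-')
```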
Change-Id: I57decefaf5bbed57b97a62d0df8518b112917480
Closes-Bug: 1710328
(cherry picked from commit 6d160797fc3257942618a7914d526911ebbda328)
Change-Id: Iae4a173d5663422e561f34db121e6d1112c22a2a
This patch uses a more specific string to match
the ring name in the recon cli.
Almost all the places in the project that need to match a
suffix (like .ring.gz, .ts, .data, etc.) include the '.'
in front; without it, a file named 'object.sring.gz' in the
swift_dir would be added to ring_names, which is not what we want.
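The distinction matters because 'object.sring.gz' ends with 'ring.gz' but not '.ring.gz'. A minimal sketch (the helper name is illustrative, not recon's actual code):

```python
RING_SUFFIX = '.ring.gz'

def ring_names(filenames):
    # Require the full suffix including the leading '.', so a stray
    # 'object.sring.gz' (which ends in 'ring.gz' but not '.ring.gz')
    # is not treated as a ring.
    return [f[:-len(RING_SUFFIX)] for f in filenames
            if f.endswith(RING_SUFFIX)]
```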
Co-Authored-By: Kota Tsuyuzaki <tsuyuzaki.kota@lab.ntt.co.jp>
Closes-Bug: #1680704
Change-Id: Ida659fa71585f9b0cf36da75b58b28e6a25533df
Change-Id: I4f84b725e220e28919570fd7f296b63b34d0375d
SSYNC is designed to limit concurrent incoming connections in order to
prevent IO contention. The reconstructor should expect remote
replication servers to fail ssync_sender when the remote is too busy.
When the remote rejects SSYNC - it should avoid forcing additional IO
against the remote with a REPLICATE request which causes suffix
rehashing.
Suffix rehashing via REPLICATE verbs takes two forms:
1) an initial pre-flight call to REPLICATE /dev/part will cause a remote
primary to rehash any invalid suffixes and return a map for the local
sender to compare so that a sync can be performed on any mis-matched
suffixes.
2) a final call to REPLICATE /dev/part/suf1-suf2-suf3[-sufX[...]] will
cause the remote primary to rehash the *given* suffixes even if they are
*not* invalid. This is a requirement for rsync replication because
after a suffix is synced via rsync the contents of a suffix dir will
likely have changed and the remote server needs to update its hashes.pkl
to reflect the new data.
SSYNC does not *need* to send a post-sync REPLICATE request. Any
suffixes that are modified by the SSYNC protocol will call _finalize_put
under the hood as it is syncing. It is however not harmful and
potentially useful to go ahead and refresh hashes after an SSYNC while the
inodes of those suffixes are warm in the cache.
However, that only makes sense if the SSYNC conversation actually synced
any suffixes - if SSYNC is rejected for concurrency before it ever got
started there is no value in the remote performing a rehash. It may be
that *another* reconstructor is pushing data into that same partition
and the suffixes will become immediately invalidated.
If an ssync_sender does not successfully finish a sync, the reconstructor
should skip the REPLICATE call entirely and move on to the next
partition without causing any useless remote IO.
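The resulting control flow amounts to the following (illustrative function names, not the reconstructor's actual code):

```python
def revert_partition(ssync_sender, send_replicate):
    # ssync_sender returns (success, synced_suffixes); only when a
    # sync actually happened is a remote suffix rehash worthwhile.
    success, synced_suffixes = ssync_sender()
    if success and synced_suffixes:
        send_replicate(synced_suffixes)
        return True
    # Rejected for concurrency (or nothing synced): skip REPLICATE
    # entirely and move on without extra remote IO.
    return False
```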
Closes-Bug: #1665141
Change-Id: Ia72c407247e4525ef071a1728750850807ae8231
Recently our gate started blowing up intermittently with a strange
case of mixed-up ports. Sometimes a functional test tries to
authorize on a port that's clearly an object server port, and
the like. As it turns out, eventlet developers added an unavoidable
SO_REUSEPORT into listen(), which makes listen(("localhost", 0))
reuse ports.
There's an issue about it:
https://github.com/eventlet/eventlet/issues/411
This patch is working around the problem while eventlet people
consider the issue.
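One way to dodge SO_REUSEPORT in tests is to pick the port with a plain stdlib socket before handing it to eventlet (a sketch of the general technique, not necessarily what this patch does; note the small close-then-rebind race it accepts):

```python
import socket

def get_unused_port():
    # A plain socket.socket() has no SO_REUSEPORT set, so the kernel
    # hands back a genuinely free port rather than one an eventlet
    # listen() is already using.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(('127.0.0.1', 0))
        return s.getsockname()[1]
    finally:
        s.close()
```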
Change-Id: I67522909f96495a6a30e1acdb79835dce2189549
(cherry picked from commit 5dfc3a75fb506a82ffdedad64ce59ecc3db15e6c)
For more information about this automatic import see:
http://docs.openstack.org/developer/i18n/reviewing-translation-import.html
Change-Id: Ie39c9f2a102a4223269918c6913e640790453036
When the drive holding a container or account database is unmounted,
the replicator pushes the database to a handoff location. But this
handoff location finds a replica on an unmounted drive and pushes
the database to the *next* handoff, until every handoff has a
replica - that is, all container/account servers hold replicas of
all unmounted drives.
This patch solves:
- Consumption of the iterator on the handoff location that resulted
in replication to the next and next handoff.
- A StopIteration exception that ended the not-yet-finished loop over
available handoffs when no more nodes existed as db replication
candidates.
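The iterator half of the fix guards against a classic pitfall: a helper that consumes a shared node iterator advances it for every later caller. For illustration:

```python
from itertools import islice

def take(node_iter, n):
    # Pulling from a shared iterator is destructive: whatever a helper
    # consumes is gone for the caller's subsequent loop.
    return list(islice(node_iter, n))

handoffs = iter(['h1', 'h2', 'h3', 'h4'])
first = take(handoffs, 2)
rest = list(handoffs)  # only the unconsumed nodes remain
```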
Regression was introduced in 2.4.0 with rsync compression.
Co-Author: Kota Tsuyuzaki <tsuyuzaki.kota@lab.ntt.co.jp>
Change-Id: I344f9daaa038c6946be11e1cf8c4ef104a09e68b
Closes-Bug: 1675500
Before this change, subrequests made while servicing a copy would
result in logging the request type from the copy PUT/GET request
instead of the type from the subrequest being logged.
In order to have the correct request type logged for subrequests:
- Changed subrequest environments to not inherit the orig_req_method
of the enclosing request.
- Changed copy to be more picky about when it sets orig_req_method
In addition, subrequest environments will no longer inherit the
swift.log_info from the enclosing request. That inheritance had
been added at Ic96a92e938589a2f6add35a40741fd062f1c29eb
along with swift.orig_req_method.
Change-Id: I1ccb2665b6cd2887659e548e55a26aa00de879e3
Closes-Bug: #1657246
Closes-Bug: #1671896
Related-Change: Ie4fee05b5f7e0c0879a7b42973bca459f7c85408
Change-Id: I18db1937a0991497027a4d096fb95cdda81f7d68
(cherry picked from commit b958466a72cd6038cc4ce6479f0e8d922518e656)
Not so long ago, we changed our default port ranges from 60xx to 62xx
but we left the install guide using the old ranges.
Closes-Bug: #1669389
Related-Change: Ie1c778b159792c8e259e2a54cb86051686ac9d18
Change-Id: Ie4fee05b5f7e0c0879a7b42973bca459f7c85408
(cherry picked from commit 740d683e29e03e588492642a6dccca07268b6954)
Previously, Swift3 used client-facing HTTP headers to pass the S3 access
key, signature, and normalized request through the WSGI pipeline.
However, tempauth did not validate that Swift3 actually set the headers;
as a result, an attacker who has captured either a single valid S3-style
temporary URL or a single valid request through the S3 API may impersonate
the user that signed the URL or issued the request indefinitely through
the Swift API.
Now, the S3 authentication information will be taken from a separate
namespace in the WSGI environment, completely inaccessible to the
client. Specifically,
environ['swift3.auth_details'] = {
'access_key': <access key>,
'signature': <signature>,
'string_to_sign': <normalized request>,
}
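An auth middleware consuming this would read only the server-set namespace, never client-supplied headers; a minimal sketch (hypothetical helper name):

```python
def get_s3_auth(environ):
    # environ['swift3.auth_details'] is set by Swift3 itself and is
    # unreachable from the client, unlike HTTP_* header keys.
    details = environ.get('swift3.auth_details')
    if not details:
        return None
    return (details['access_key'],
            details['signature'],
            details['string_to_sign'])
```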
Note that tempauth is not expected to be in production use, but may have
been used as a template by other authentication middlewares to add their
own Swift3 support.
Change-Id: Ib90adcc2f059adaf203fba1c95b2154561ea7487
Related-Change: Ia3fbb4938f0daa8845cba4137a01cc43bc1a713c
(cherry picked from commit f3ef616dc6a2c4987c952b31232fa3bbb5bc6801)
Change-Id: I024f63d6c514771bf93d5b487a5c007187a5b84b
Change-Id: I64d383aa3f4f1886bde799bc5f86d78a33ee54b4
Change-Id: I4d90075bd1eb6775b9d736668aa9c7af5eb41f4e
The handoffs_first mode in the replicator has the useful behavior of
processing all handoff parts across all disks until there aren't any
handoffs anymore on the node [1] and then it seemingly tries to drop
back into normal operation. In practice I've only ever heard of
handoffs_first used while rebalancing and turned off as soon as the
rebalance finishes - it's not recommended to run with handoffs_first
mode turned on and it emits a warning on startup if option is enabled.
The handoffs_first mode on the reconstructor doesn't work - it was
prioritizing handoffs *per-part* [2] - which is really unfortunate
because in the reconstructor during a rebalance it's often *much* more
attractive from an efficiency disk/network perspective to revert a
partition from a handoff than it is to rebuild an entire partition from
another primary using the other EC fragments in the cluster.
This change deprecates handoffs_first in favor of handoffs_only in the
reconstructor which is far more useful - and just like handoffs_first
mode in the replicator - it gives the operator the option of forcing the
consistency engine to focus on rebalance. The handoffs_only behavior is
somewhat consistent with the replicator's handoffs_first option (any
error on any handoff in the replicator will make it essentially handoff
only forever) but the option does what you want and is named correctly
in the reconstructor.
For consistency with the replicator the reconstructor will mostly honor
the handoffs_first option, but if you set handoffs_only in the config it
always takes precedence. Having handoffs_first in your config always
results in a warning, but if handoffs_only is not set and handoffs_first
is true, the reconstructor will assume you want handoffs_only and behave
as such.
When running in handoffs_only mode the reconstructor will start to log a
warning every cycle if you leave it running in handoffs_only after it
finishes reverting handoffs. However you should be monitoring on-disk
partitions and disable the option as soon as the cluster finishes the
full rebalance cycle.
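The precedence rules described above boil down to something like this (a simplified sketch; real Swift parses these options from config strings via config_true_value):

```python
def resolve_handoffs_only(conf):
    # handoffs_only, when present, always wins; otherwise fall back
    # to the deprecated handoffs_first value.
    if 'handoffs_only' in conf:
        return bool(conf['handoffs_only'])
    return bool(conf.get('handoffs_first', False))
```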
1. Ia324728d42c606e2f9e7d29b4ab5fcbff6e47aea fixed replicator
handoffs_first "mode"
2. Unlike replication, each partition in an EC policy can have a
different kind of job per frag_index, but the cardinality of jobs is
typically only one (either sync or revert) unless there's been a bunch
of errors during write, in which case handoff partitions may hold a
number of different fragments.
Known-Issues:
handoffs_only is not documented outside of the example config, see lp
bug #1626290
Closes-Bug: #1653018
Change-Id: Idde4b6cf92fab6c45f2c0c2733277701eb436898
- Verify the .ring.gz path exists if a ring file is the first argument.
- Code Refactoring:
- swift/cli/info.parse_get_node_args()
- Respective test cases for info.parse_get_node_args()
Closes-Bug: #1539275
Change-Id: I0a41936d6b75c60336be76f8702fd616d74f1545
Signed-off-by: Sachin Patil <psachin@redhat.com>
Change-Id: Iea629d5a08aa3d94e097fcdab28f94511b262fcf
Add the necessary Apache License 2.0 description in ../conf.py.
Change-Id: Ief3767fdc22f582beb8683a9075dc25dbcb541cd
It has been a real honour working with you, guys. Thanks!
Change-Id: I2668ddf546791ca36fe22d6fdd2d5e745ed14200
Fixed the comment in the test to match exactly what's
being removed and what the expected result is.
Also, removed that extra '/' parameter which was causing
the assert to test at the wrong directory level.
Change-Id: I2f27f0d12c08375c61047a3f861c94a3dd3915c6
Signed-off-by: Thiago da Silva <thiago@redhat.com>
Fixes the failure in TestReconstructorRevert.test_delete_propagate
introduced by Related-Change.
Related-Change-Id: Ie351d8342fc8e589b143f981e95ce74e70e52784
Change-Id: I1657c1eecc9b62320e2cf184050e0db122821139
Prior to the Related-Change no suffixes were written to hashes.invalid
until after initial suffix hashing created hashes.pkl - and in our probe
test the only updates to the partition occurred before replication.
Before the related change with sync_method = rsync it was possible when
starting from a clean slate to write data, and replicate from a handoff
partition without generating a hashes.invalid file in any primary.
After the Related-Change it was no longer possible to write data without
generating a hashes.invalid file; however with sync_method = rsync the
replicator could still replicate data into a partition that never
received an update directly and therefore no hashes.invalid.
When using sync_method = ssync replication updates the hashes.invalid
like any normal update to the partition and therefore all partitions
always have a hashes.invalid.
This change opts to ignore these implementation details in the probe
tests when comparing the files between synced partitions, black-listing
these known cache files and only validating that the diskfile's on-disk
files are in sync.
Related-Change-Id: I2b48238d9d684e831d9777a7b18f91a3cef57cd1
Change-Id: Ia9c50d7bc1a74a17c608a3c3cfb8f93196fb709d
Closes-Bug: #1663021
...and refactor two extremely similar tests to use a single
helper method - the only parameterization being the existence
or not of hashes.pkl at the start of the test.
Change-Id: I601218a9a031e7fc77bc53ea735e89700ec1647d
Related-Change: Ia43ec2cf7ab715ec37f0044625a10aeb6420f6e3
This patch fixes Swift's IO priority control on the AArch64 architecture
by getting the correct __NR_ioprio_set value.
Change-Id: Ic93ce80fde223074e7d1a5338c8cf88863c6ddeb
Closes-Bug: #1658405
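For context, __NR_ioprio_set is architecture-specific; a sketch of a per-machine lookup (the two syscall numbers shown are from the Linux x86_64 and asm-generic tables; the function is illustrative, not Swift's actual code):

```python
import platform

# ioprio_set syscall numbers for a couple of architectures; aarch64
# uses the asm-generic table, so its number differs from x86_64's.
IOPRIO_SET_NR = {
    'x86_64': 251,
    'aarch64': 30,
}

def nr_ioprio_set(machine=None):
    machine = machine or platform.machine()
    try:
        return IOPRIO_SET_NR[machine]
    except KeyError:
        raise OSError('ioprio_set number unknown for %s' % machine)
```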
The reported timing can be 00:59:59 sometimes, but is still valid. This
will fail in the tests, as seen in [1].
This patch fixes this by mocking the current time, ensuring that the
first two rebalances happen at the same time.
[1] http://logs.openstack.org/97/337297/32/check/gate-swift-python27-ubuntu-xenial/46203f1/console.html#_2017-02-08_07_28_42_589176
Change-Id: I0fd43d5bb13d0e88126f4f6ba14fb87faab6df9c
Trivialfix
Change-Id: I4862d073adecf1cc5312a64795ad890eeddf774d