v0.28 rc1
Docs
The mbedtls library uses a callback mechanism to allow downstream users
to plug in their own receive and send functions. We implement `bio_read`
and `bio_write` functions, which simply wrap the `git_stream_read` and
`git_stream_write` functions, respectively.

The problem arises due to the return value of the callback functions:
mbedtls expects us to return an `int` containing the actual number of
bytes that were read or written. But this is in fact completely
misdesigned, as callers are allowed to pass in a buffer with length
`SIZE_MAX`. We thus may be unable to represent the number of bytes
written via the return value.

Fix this by only ever reading or writing at most `INT_MAX` bytes.
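
A minimal sketch of the clamped send callback, assuming mbedtls's
`int (*)(void *, const unsigned char *, size_t)` send-callback signature
and libgit2's `git_stream_write(stream, data, len, flags)` helper; the
names and surrounding wiring are illustrative:

    #include <limits.h>

    /* mbedtls requires the number of bytes written to be reported as an
     * `int`, so never submit more than INT_MAX bytes in one call; the
     * TLS layer will invoke the callback again for the remainder. */
    static int bio_write(void *ctx, const unsigned char *buf, size_t len)
    {
        git_stream *io = (git_stream *) ctx;

        if (len > INT_MAX)
            len = INT_MAX;

        return (int) git_stream_write(io, (const char *) buf, len, 0);
    }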
The mbedtls stream implementation makes use of some global variables
which are not marked as `static`, even though they're only used in this
compilation unit. Fix this and remove a duplicate declaration.
Our `openssl_write` function calls `SSL_write` by passing in both the
`data` and `len` arguments directly. The problem is that our `len`
parameter is of type `size_t`, while theirs is of type `int`. We thus
need to clamp our length to be at most `INT_MAX`.
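
A sketch of the clamp, assuming the stream keeps its `SSL *` handle in a
field named `ssl` (the surrounding struct is hypothetical); `SSL_write`
itself takes an `int` length:

    #include <limits.h>
    #include <openssl/ssl.h>

    static ssize_t openssl_write(git_stream *stream, const char *data, size_t data_len, int flags)
    {
        openssl_stream *st = (openssl_stream *) stream;  /* hypothetical stream type */
        int len, written;

        (void) flags;

        /* SSL_write takes an int, so cap the request; callers handle short writes */
        len = (int) (data_len > INT_MAX ? INT_MAX : data_len);

        if ((written = SSL_write(st->ssl, data, len)) <= 0)
            return -1;  /* a real implementation maps the SSL error here */

        return written;
    }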
Now that the function `git_stream__write_full` exists and callers of
`git_stream_write` have been adjusted, we can lift logic for short
writes out of the stream implementations. Instead, this is now handled
either by `git_stream__write_full` or by callers of `git_stream_write`
directly.
Similar to the write(3) function, implementations of `git_stream_write`
do not guarantee that all bytes are written. Instead, they return the
number of bytes that actually have been written, which may be smaller
than the total number of bytes. Furthermore, due to an interface design
issue, we cannot ever write more than `SSIZE_MAX` bytes at once, as
otherwise we cannot represent the number of bytes written to the caller.

Unfortunately, no caller of `git_stream_write` ever checks the return
value, except to verify that no error occurred. Due to this, they are
susceptible to the case where only partial data has been written.

Fix this by introducing a new function `git_stream__write_full`. In
contrast to `git_stream_write`, it will always return either success or
failure, without returning the number of bytes written. Thus, it is able
to write all `SIZE_MAX` bytes by looping around `git_stream_write` until
all data has been written. Adjust all callers except the BIO callbacks
in our mbedtls and OpenSSL streams, which already do the right thing and
need the number of bytes written.
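
A minimal sketch of such a full-write loop, assuming `git_stream_write`
returns the number of bytes written or a negative error code (the actual
implementation may differ in detail):

    int git_stream__write_full(git_stream *stream, const char *data, size_t len, int flags)
    {
        size_t total = 0;

        while (total < len) {
            ssize_t written = git_stream_write(stream, data + total, len - total, flags);

            if (written <= 0)
                return -1;  /* propagate errors; also avoids spinning on a zero-byte write */

            total += (size_t) written;
        }

        return 0;
    }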
The callback functions that implement the `git_stream` structure are
only used inside of their respective implementation files, but they are
not marked as `static`. Fix this.
Documentation fixes
ci: add an individual coverity pipeline
Coverity is back but it's only read-only! Agh. Just allow it to fail
and not impact the overall job run.
ci: run docurium to create documentation
Run docurium as part of the build. The goal of this is to be able to
evaluate the documentation in a given pull request; as such, this does
not implement any sort of deployment pipeline.

This will allow us to download a snapshot of the documentation from the
CI build and evaluate the docs for a particular pull request before it
has been merged.
ci: return coverity to the nightlies
Clean up some warnings
Validate that the return value of the read does not exceed INT_MAX,
then cast.
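
The pattern, sketched with libgit2's `p_read` wrapper (the surrounding
code is hypothetical):

    ssize_t nread = p_read(fd, buffer, buffer_len);

    if (nread < 0 || nread > INT_MAX)
        return -1;          /* read error, or too large to report as an int */

    return (int) nread;     /* safe: range-checked above */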
Index entries are 32 bit unsigned ints, not `size_t`s.
The git_describe_format_options.abbreviated_size type is an unsigned
int. There's no need for it to be anything else; keep it as it is.
Quiet down a warning from MSVC about how we're potentially losing data.
Validate that our data will fit into the type provided, then cast.
The transport code returns an `int` with the number of bytes written;
thus only attempt to write at most `INT_MAX`.
Windows doesn't include ssize_t or its _MAX value by default. We are
already declaring ssize_t as SSIZE_T, which is __int64 on Win64 and
long otherwise. Define a corresponding _MAX value for that type.
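
One possible definition matching that typedef (the exact macros chosen
here are an assumption, not necessarily what the project uses):

    #include <limits.h>

    #ifndef SSIZE_MAX
    # ifdef _WIN64
    #  define SSIZE_MAX _I64_MAX   /* ssize_t is __int64 on Win64 */
    # else
    #  define SSIZE_MAX LONG_MAX   /* ssize_t is long on 32-bit Windows */
    # endif
    #endif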
Our streams implementation takes a `size_t` that indicates the length of
the data buffer to be written, and returns an `ssize_t` that indicates
the length that _was_ written. Clearly no such implementation can write
more than `SSIZE_MAX` bytes. Ensure that each TLS stream implementation
does not try to write more than `SSIZE_MAX` bytes (or less, if the given
implementation takes a smaller size type).
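
The guard itself is a short clamp at the top of each write
implementation, sketched here with an illustrative parameter name:

    if (data_len > SSIZE_MAX)
        data_len = SSIZE_MAX;   /* the ssize_t return type cannot express more */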
Quiet down a warning from MSVC about how we're potentially losing data.
This is safe since we've explicitly tested it.
A number of source files have their implementation #ifdef'd out (because
they target another platform). MSVC warns on empty compilation units
(with warning LNK4221). Ignore warning 4221 when creating the object
library.
Cast actual filesystem data to the int32_t that index entries store.
The filesystem iterator takes `stat` data from disk and puts it into
index entries, which use 32 bit ints for time (the seconds portion) and
file size. However, on most systems these values are not 32 bit, so the
assignment will typically provoke a warning.

Most users ignore these fields entirely. The diff and checkout code do
use the values, but only as a cache to decide whether they need to check
a file for modification. Thus, this is not a critical error (and will
cause a hash recomputation at worst).
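
A sketch of the narrowing in question, using the public `git_index_entry`
fields (`mtime.seconds` is an `int32_t`, `file_size` a `uint32_t`); the
real iterator code is more involved:

    /* time_t and off_t are usually 64-bit, so these casts truncate; the
     * fields only serve as a change-detection cache, so the worst case
     * is recomputing a hash. */
    entry->mtime.seconds = (int32_t) st->st_mtime;
    entry->file_size     = (uint32_t) st->st_size;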
Our blob size is a `git_off_t`, which is a signed 64 bit int. This may
be erroneously negative or larger than `SIZE_MAX`. Ensure that the blob
size fits into a `size_t` before casting.
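
A sketch of the check, assuming the size comes from `git_blob_rawsize`
(which returns a signed `git_off_t` in this version of the library):

    git_off_t rawsize = git_blob_rawsize(blob);
    size_t size;

    /* reject sizes that are negative or too large to address in memory */
    if (rawsize < 0 || (uint64_t) rawsize > (uint64_t) SIZE_MAX)
        return -1;

    size = (size_t) rawsize;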
Quiet down a warning from MSVC about how we're potentially losing data.
Ensure that the value fits in a uint16_t before we cast.
Quiet down a warning from MSVC about how we're potentially losing data.
This is safe since we've explicitly tested that it's positive and less
than SIZE_MAX.
Quiet down a warning from MSVC about how we're potentially losing data.
This is safe since we've explicitly tested that it's within the range of
0-100.
Quiet down a warning from MSVC about how we're potentially losing data.
This cast is safe since we've explicitly tested that `strip_len` <=
`last_len`.
Quiet down a warning from MSVC about how we're potentially losing data.
Nightlies: use `latest` docker images
index: preserve extension parsing errors
Previously, we would clobber any extension-specific error message with
an "extension is truncated" message. This makes `read_extension`
correctly preserve those errors, takes responsibility for truncation
errors, and adds a new message with the actual extension signature for
unsupported mandatory extensions.