| Commit message (Collapse) | Author | Age | Files | Lines |
This was already changed in the past; bringing it to this branch
to be able to run CI here too.
The spec says a new connection must be established, and some servers
reply with a 400 Bad Request when an existing connection is reused.
We should not schedule a new read after reading the close message, since
we don't expect more input. This fixes a crash, triggered when the
connection is destroyed from the closed callback, due to an assert that
checks that the input source is NULL when the connection is destroyed.
Fixes #181
The GByteArray allocated at the beginning is not freed in case
of error.
g_byte_array_append() can reallocate its data, so make sure that we
don't rely on any pointer pointing to it after calling it.
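The hazard can be illustrated with a plain-C analogy (the ByteArray type and helpers below are hypothetical stand-ins, not the GLib API): keep offsets rather than raw pointers across an append that may reallocate.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for GByteArray: an append that may realloc,
 * invalidating any pointer previously taken into the buffer. */
typedef struct {
    unsigned char *data;
    size_t len;
    size_t cap;
} ByteArray;

static void byte_array_append(ByteArray *a, const unsigned char *bytes, size_t n)
{
    if (a->len + n > a->cap) {
        a->cap = (a->len + n) * 2;
        a->data = realloc(a->data, a->cap);   /* may move the buffer */
    }
    memcpy(a->data + a->len, bytes, n);
    a->len += n;
}

/* Safe pattern: remember positions as offsets, which survive a
 * reallocation, and only convert back to pointers after the last
 * append. */
static size_t append_and_track(ByteArray *a)
{
    size_t header_off = a->len;               /* offset, not a pointer */
    byte_array_append(a, (const unsigned char *)"hdr", 3);
    byte_array_append(a, (const unsigned char *)"payload", 7);
    return header_off;
}
```

The same discipline applies to GByteArray: re-derive any pointer from the array's current data after each call that can grow it.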
We are currently ignoring the Set-Cookie header in the handshake
response, because the SoupCookieJar feature handles the header in the
got-headers signal, which is never emitted for informational messages.
SoupCookieJar should also handle Set-Cookie for switching-protocols
informational messages.
Instead of having two pollable sources constantly running, always try to
read/write without blocking and start polling if the operation returns
G_IO_ERROR_WOULD_BLOCK. This patch also fixes the test
/websocket/direct/close-after-close, which was passing but not actually
testing what we wanted, because the client close was never sent: when
the mutex is released, the frame has only been queued, not sent.
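The read half of this scheme can be sketched with a plain nonblocking descriptor, where EAGAIN/EWOULDBLOCK plays the role of G_IO_ERROR_WOULD_BLOCK (the try_read() helper is hypothetical, not the libsoup code):

```c
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Returns bytes read, 0 on EOF, or -1 when the operation would
 * block -- the point at which a pollable source would be attached,
 * instead of keeping one running constantly. */
static ssize_t try_read(int fd, void *buf, size_t len)
{
    ssize_t n = read(fd, buf, len);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return -1;   /* nothing available: start polling here */
    return n;
}
```

The write side is symmetric: attempt the write, and only fall back to a poll source when the descriptor would block.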
We use GByteArray, which can be reallocated, so be careful when
keeping track of the current position in a message not to use
potentially dangling pointers.
Fixes #160
Using the code SOUP_WEBSOCKET_CLOSE_NO_STATUS. The spec says that code
should not be included in a frame, but that doesn't mean we can't use it
at the API level to mean no status. We were using 0 internally, which is
not a valid code either. When an empty close frame is received, we still
reply with a SOUP_WEBSOCKET_CLOSE_NORMAL close frame, but we return
SOUP_WEBSOCKET_CLOSE_NO_STATUS from
soup_websocket_connection_get_close_code(), because that's actually what
we received.
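A sketch of the mapping described above. The constants are stand-ins for the SoupWebsocketCloseCode values (1000 and 1005 come from RFC 6455), and close_code_from_payload() is a hypothetical helper, not a libsoup function:

```c
#include <assert.h>
#include <stddef.h>

#define WS_CLOSE_NORMAL    1000   /* RFC 6455 normal closure */
#define WS_CLOSE_NO_STATUS 1005   /* reserved: never sent on the wire */

/* Close code as reported to the application: an empty close payload
 * maps to "no status", even though the reply frame carries NORMAL. */
static int close_code_from_payload(const unsigned char *payload, size_t len)
{
    if (len < 2)
        return WS_CLOSE_NO_STATUS;
    return (payload[0] << 8) | payload[1];   /* network byte order */
}
```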
We currently ignore data frames when close has been received, but we
should also ignore any frame after close has been sent and received.
Currently, if we receive two close frames, we end up with the code and
reason of the second frame, while the RFC says: "The WebSocket
Connection Close Code is defined as the status code contained in the
first Close control frame received by the application implementing
this protocol."
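The "first close frame wins" rule might be sketched like this (WsState and handle_close_frame() are hypothetical, not the SoupWebsocketConnection internals):

```c
#include <assert.h>

typedef struct {
    int close_received;
    int close_code;
} WsState;

/* Record the code of the first close frame only; any later close
 * frame is ignored, per RFC 6455's definition of the Connection
 * Close Code. */
static void handle_close_frame(WsState *s, int code)
{
    if (s->close_received)
        return;              /* already closed: drop the frame */
    s->close_received = 1;
    s->close_code = code;
}
```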
When the close message was already received from the server.
RFC 6455:
The server MUST close the connection upon receiving a
frame that is not masked.
RFC 6455:
A server MUST NOT mask any frames that it sends to the client. A client
MUST close a connection if it detects a masked frame.
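For reference, the masking transform itself (RFC 6455 §5.3) is a symmetric XOR with the 4-byte masking key, so the same function masks and unmasks; a minimal sketch (ws_mask() is a hypothetical helper):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* XOR each payload byte with one of the four masking-key bytes.
 * Applying it twice restores the original payload. */
static void ws_mask(unsigned char *payload, size_t len,
                    const unsigned char key[4])
{
    for (size_t i = 0; i < len; i++)
        payload[i] ^= key[i % 4];
}
```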
RFC 6455 says that text messages should contain valid UTF-8, and null
characters are valid according to RFC 3629. However, we are using
g_utf8_validate(), which treats null characters as errors, to
validate WebSocket text messages. This patch adds an internal
utf8_validate() function based on g_utf8_validate() that allows null
characters and just returns a gboolean, since we always ignore
the end parameter in case of errors.
soup_websocket_connection_send_text() assumes the given text is
null-terminated, so we need a new public function to allow sending text
messages containing null characters. This patch adds
soup_websocket_connection_send_message(), which receives a
SoupWebsocketDataType and GBytes, consistent with the
SoupWebsocketConnection::message signal.
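A sketch of such a length-based validator (hypothetical code, not the actual libsoup utf8_validate()): unlike g_utf8_validate(), it treats embedded NUL bytes as valid, while still rejecting overlong sequences, surrogates and out-of-range code points:

```c
#include <assert.h>
#include <stddef.h>

static int utf8_validate(const unsigned char *s, size_t len)
{
    size_t i = 0;
    while (i < len) {
        unsigned char c = s[i];
        size_t n;                 /* number of continuation bytes */
        unsigned int cp, min;
        if (c < 0x80) { i++; continue; }   /* ASCII, including NUL */
        if ((c & 0xE0) == 0xC0)      { n = 1; cp = c & 0x1F; min = 0x80; }
        else if ((c & 0xF0) == 0xE0) { n = 2; cp = c & 0x0F; min = 0x800; }
        else if ((c & 0xF8) == 0xF0) { n = 3; cp = c & 0x07; min = 0x10000; }
        else return 0;                     /* invalid lead byte */
        if (len - i - 1 < n)
            return 0;                      /* truncated sequence */
        for (size_t j = 1; j <= n; j++) {
            if ((s[i + j] & 0xC0) != 0x80)
                return 0;                  /* bad continuation byte */
            cp = (cp << 6) | (s[i + j] & 0x3F);
        }
        if (cp < min || cp > 0x10FFFF || (cp >= 0xD800 && cp <= 0xDFFF))
            return 0;          /* overlong, out of range, or surrogate */
        i += n + 1;
    }
    return 1;
}
```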
That's the case for connections created by SoupSession. In that case, if
the server hasn't closed its end of the connection, we fail to shut down
the client end, because shutdown_wr_io_stream() does nothing when the IO
stream is not a GSocketConnection. So, for SoupIOStream we need to get
the base IO stream, which is a GSocketConnection.
RFC 6455:
The length of the "Payload data", in bytes: if 0-125, that is the
payload length. If 126, the following 2 bytes interpreted as a
16-bit unsigned integer are the payload length. If 127, the
following 8 bytes interpreted as a 64-bit unsigned integer (the
most significant bit MUST be 0) are the payload length. Multibyte
length quantities are expressed in network byte order. Note that
in all cases, the minimal number of bytes MUST be used to encode
the length, for example, the length of a 124-byte-long string
can't be encoded as the sequence 126, 0, 124.
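The encoding rule quoted above can be sketched as a small serializer (ws_encode_length() is a hypothetical helper, not the libsoup code; a real frame writer also packs the FIN/opcode byte and the mask bit):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Write the payload length into buf (at least 9 bytes) using the
 * minimal encoding RFC 6455 requires; returns bytes written. */
static size_t ws_encode_length(uint64_t len, unsigned char *buf)
{
    if (len <= 125) {
        buf[0] = (unsigned char)len;          /* fits the 7-bit field */
        return 1;
    }
    if (len <= 0xFFFF) {
        buf[0] = 126;                         /* 16-bit extended length */
        buf[1] = (unsigned char)(len >> 8);   /* network byte order */
        buf[2] = (unsigned char)len;
        return 3;
    }
    buf[0] = 127;                             /* 64-bit extended length */
    for (int i = 0; i < 8; i++)
        buf[1 + i] = (unsigned char)(len >> (56 - 8 * i));
    return 9;
}
```

Note how a length of 124 uses the single-byte form, exactly the minimality the quoted paragraph demands.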
Passing data=NULL and length=0, which is consistent with g_bytes_new().
(process:20018): GLib-GIO-CRITICAL **: 12:26:09.686: g_task_return_error: assertion 'G_IS_TASK (task)' failed
(process:20018): GLib-GObject-CRITICAL **: 12:26:09.686: g_object_unref: assertion 'G_IS_OBJECT (object)' failed
We are trying to complete the GTask twice: first in
websocket_connect_async_stop() and then in
websocket_connect_async_complete(). The latter should only be called if
the item finishes before the got-informational signal is emitted.
When soup_websocket_client_verify_handshake() returns TRUE, the
message connection is stolen and soup_message_io_steal() is called for
the message, moving the message to the FINISHING state. However,
when it returns FALSE, the message stays in the RUNNING state forever.
We should call soup_message_io_finished() in that case to ensure the
message transitions to the FINISHING state.
*.bn was removed from the list, so the tests using it started failing.
Use *.bd instead as a 1-wildcard rule test domain.
It's needed for the gmtime_r check and also for the availability of
some system types.
Fixes compilation in gnome-continuous (Ubuntu 16.04 was used for testing).
Include the custom.vala file while generating the vapi file.
Closes: #13
There were some files missing, while many others were included even
though they should have been left out. Now the generated introspection
files are on par with the Autotools port.
The old name, git_args, could suggest that the same variable would be
applied to the GNOME support as well.
By using the compiler's visibility mechanism.
Closes: #10
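The usual shape of this approach: compile with -fvisibility=hidden so symbols are hidden by default, and mark only the public entry points. The macro and function names below are hypothetical, not necessarily the project's actual export macro:

```c
#include <assert.h>

/* Hypothetical export macro: with -fvisibility=hidden as the
 * default, only symbols marked "default" stay visible in the
 * shared library. */
#if defined(__GNUC__)
#  define MY_EXPORT __attribute__((visibility("default")))
#else
#  define MY_EXPORT
#endif

static int internal_helper(void)
{
    return 0;    /* hidden: never visible outside the library */
}

MY_EXPORT int my_public_api(void)
{
    return 42 + internal_helper();   /* exported entry point */
}
```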
Technically it's not needed, as they are included during compilation
through include macros, but it's better to have them specified as
well.
Closes: #11
It's supposed to be 1 and not 0.
Closes: #9
The names of the missing ones can be found in the meson log. Also
print a warning if anything needed for running all the tests is
missing.
For running some of the tests we need Apache's httpd binary. As we want to
know more about its configuration, we have to run it and parse the output.
But here is the first problem: on Debian we can't run the binary unless the
/etc/apache2/envvars file is sourced, otherwise it fails. The
recommended way to communicate with Apache is apachectl, which passes
the arguments to httpd and also sources the envvars file. In an ideal world
we could use apachectl to run the tests as well, but on Fedora any
non-trivial call to it ends with the following error:
Passing arguments to httpd using apachectl is no longer supported.
In summary, we will use apachectl for the configuration parsing,
but the httpd binary for running the tests.
Closes: #7
We are checking for multiple modules in multiple directories.