| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
| |
I noticed that gdb's DAP code did not provide a way to see register
values. DAP defines a "register" scope, which this patch implements.
This patch also adds the missing (and optional) "presentationHint" to
scopes.
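For reference, one entry of the resulting "scopes" response body might look
roughly like this (the handle value is invented for illustration):
registers_scope = {
    "name": "Registers",
    "presentationHint": "registers",
    "variablesReference": 3,   # invented variable handle
    "expensive": False,
}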
|
| |
The DAP scopes request examines the symbols in a block tree, but
neglects to filter out types. This patch fixes the problem.
|
| |
This adds the DAP readMemory and writeMemory requests. A small change
to the evaluation code is needed in order to test this -- this is one
of the few ways for a client to actually acquire a memory reference.
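For illustration, a readMemory exchange has roughly this shape (addresses
and values invented); DAP transports memory contents as base64:
import base64

read_memory_request = {
    "seq": 12,
    "type": "request",
    "command": "readMemory",
    "arguments": {"memoryReference": "0x601038", "count": 4},
}
read_memory_response_body = {
    "address": "0x601038",
    # "data" holds the base64-encoded bytes that were read.
    "data": base64.b64encode(b"\x2a\x00\x00\x00").decode("ascii"),
}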
|
| |
When writing an unwinder it is necessary to create a new class to act
as a frame-id. This new class is almost certainly just going to set a
'sp' and 'pc' attribute within the instance.
This commit adds a little helper class gdb.unwinder.FrameId that does
this job. Users can make use of this to avoid having to write out
standard boilerplate code any time they write an unwinder.
Of course, if the user wants their FrameId class to be more
complicated in some way, then they can still write their own class,
just like they could before.
I've simplified the example code in the documentation to now use the
new helper class, and I've also made use of this helper within the
testsuite.
Any existing user code will continue to work after this change, just as it
did before.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Tom Tromey <tom@tromey.com>
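As an illustration, an unwinder using the new helper might look like the
sketch below (the matching logic is a placeholder, not code from this
commit):
import gdb
from gdb.unwinder import Unwinder, FrameId

class MyUnwinder(Unwinder):
    def __init__(self):
        super().__init__("my-unwinder")

    def __call__(self, pending_frame):
        sp = pending_frame.read_register("sp")
        pc = pending_frame.read_register("pc")
        if not self._recognizes(pc):
            return None      # not our frame; let other unwinders try
        # FrameId replaces the hand-written class holding 'sp' and 'pc'.
        unwind_info = pending_frame.create_unwind_info(FrameId(sp, pc))
        # ... add_saved_register calls describing the caller's state ...
        return unwind_info

    def _recognizes(self, pc):
        return False         # placeholder for real matching logic

gdb.unwinder.register_unwinder(None, MyUnwinder(), replace=True)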
|
| |
This commit makes a few related changes to the gdb.unwinder.Unwinder
class attributes:
1. The 'name' attribute is now a read-only attribute. This prevents
user code from changing the name after registering the unwinder. It
seems very unlikely that any user is actually trying to do this in
the wild, so I'm not very worried that this will upset anyone,
2. We now validate that the name is a string in the
Unwinder.__init__ method, and throw an error if this is not the
case. Hopefully nobody was doing this in the wild. This should
make it easier to ensure the 'info unwinder' command shows sane
output (how to display a non-string name for an unwinder?),
3. The 'enabled' attribute is now implemented with a getter and
setter. In the setter we ensure that the new value is a boolean,
but the really important change is that we call
'gdb.invalidate_cached_frames()'. This means that the backtrace
will be updated if a user manually disables an unwinder (rather than
calling the 'disable unwinder' command). It is not unreasonable to
think that a user might register multiple unwinders (relating to
some project) and have one command that disables/enables all the
related unwinders. This command might operate by poking the enabled
attribute of each unwinder object directly; after this commit, this
now works correctly (see the sketch below).
There are tests for all the changes, and lots of documentation updates
that both cover the new changes and further improve (I think)
the general documentation for GDB's Unwinder API.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Tom Tromey <tom@tromey.com>
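A minimal sketch of the "toggle a group of unwinders" use case mentioned
above, assuming globally registered unwinders (reachable via
gdb.frame_unwinders) and a made-up 'myproject-' name prefix:
import gdb

def set_myproject_unwinders_enabled(state):
    for unwinder in gdb.frame_unwinders:
        if unwinder.name.startswith("myproject-"):
            # Assigning to 'enabled' now validates the value and calls
            # gdb.invalidate_cached_frames(), so any existing backtrace
            # is recomputed immediately.
            unwinder.enabled = state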
|
| |
The evaluate command supports a "context" parameter which tells the
adapter the context in which an evaluation occurs. One of the
supported values is "repl", which we took to mean evaluation of a gdb
command. That is what this patch implements.
Note that some gdb commands probably will not work correctly with the
rest of the protocol. For example if the user types "continue",
confusion may result.
This patch requires the earlier patch to fix up scopes in DAP.
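For illustration, a client running a CLI command through "evaluate" might
send something like this (sequence number invented):
evaluate_request = {
    "seq": 7,
    "type": "request",
    "command": "evaluate",
    # context "repl" means: treat the expression as a gdb CLI command.
    "arguments": {"expression": "info breakpoints", "context": "repl"},
}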
|
| |
Internal AdaCore DAP testing on Windows has had occasional failures
that show:
assert threading.current_thread() is _dap_thread
I think this is a race in DAP startup: the _dap_thread global is only
set on return from start_thread, but it seems possible that the thread
itself could already run and encounter a @in_dap_thread decorator.
This patch fixes the problem by setting the global before running any
of the code in the new thread. This also lets us remove a FIXME.
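A minimal sketch of the idea (names here are illustrative, not the actual
gdb.dap internals):
import threading

_dap_thread = None

def start_dap_thread(serve_requests):
    def run():
        # Record the DAP thread from inside the new thread, before any
        # request is processed, so an @in_dap_thread assertion can never
        # observe _dap_thread still being unset.
        global _dap_thread
        _dap_thread = threading.current_thread()
        serve_requests()
    thread = threading.Thread(target=run, daemon=True)
    thread.start()
    return thread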
|
| |
This input sequence is accepted by DAP:
...
{"seq": 4, "type": "request", "command": "configurationDone"}Content-Length: 84
...
This input sequence has the same effect:
...
{"seq": 4, "type": "request", "command": "configurationDone"}ignorethis
Content-Length: 84
...
but the 'ignorethis' part is silently ignored.
Log the ignored bit, such that we have:
...
READ: <<<{"seq": 4, "type": "request", "command": "configurationDone"}>>>
WROTE: <<<{"request_seq": 4, "type": "response", "command": "configurationDone"
, "success": true}>>>
+++ run
IGNORED: <<<b'ignorethis'>>>
...
|
| |
According to black 23, gdb/printing.py was mis-formatted. This patch
fixes it.
|
| |
The DAP code already claimed to implement "scopes" and "evaluate", but
this wasn't done completely correctly. This patch implements these
and also implements the "variables" request.
After this patch, variables and scopes correctly report their
sub-structure. This also interfaces with the gdb pretty-printer API,
so the output of pretty-printers is available.
|
| |
Tom de Vries pointed out that one DAP test failed on Python 3.6
because gdb.Frame is not hashable.
This patch fixes the problem by using a list to hold the frames. This
is less efficient but there normally won't be that many frames.
Tested-by: Tom de Vries <tdevries@suse.de>
|
| |
The DAP stackTrace implementation did not fully account for frames
without debuginfo. Attempting this would yield a result like:
{"request_seq": 5, "type": "response", "command": "stackTrace", "success": false, "message": "'NoneType' object has no attribute 'filename'", "seq": 11}
This patch fixes the problem by adding another check for None.
|
| |
The copyright years in the ROCm files (e.g. solib-rocm.c) are wrong,
they end in 2022 instead of 2023. I suppose because I posted (or at
least prepared) the patches in 2022 but merged them in 2023, and forgot
to update the year. I found a bunch of other files that are in the same
situation. Fix them all up.
Change-Id: Ia55f5b563606c2ba6a89046f22bc0bf1c0ff2e10
Reviewed-By: Tom Tromey <tom@tromey.com>
|
| |
Change-Id: Ie8ec8870a16d71c5858f5d08958309d23c318302
Reviewed-By: Tom Tromey <tom@tromey.com>
Reviewed-By: Andrew Burgess <aburgess@redhat.com>
|
| |
The "sys" import is unused in several Python files. This removes this
line from all the places where it is unnecessary.
|
| |
Python functions implementing DAP requests should not use positional
parameters -- it only makes sense to call them with keyword arguments.
This patch changes the few remaining cases to start with the special
"*" parameter, following this rule.
|
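In other words, a request implementation now looks something like this
(a sketch; the body is elided):
def evaluate(*, expression, context="variables", **extra):
    # The bare "*" makes every parameter keyword-only, matching how the
    # JSON "arguments" object is unpacked into the call.
    ...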
| |
On openSUSE Leap 15.4 with python 3.6, the gdb.dap/basic-dap.exp test-case
fails as follows:
...
ERROR: eof reading json header
while executing
"error "eof reading json header""
invoked from within
"expect {
-i exp19 -timeout 10
-re "^Content-Length: (\[0-9\]+)\r\n" {
set length $expect_out(1,string)
exp_continue
}
-re "^(\[^\r\n\]+)..."
("uplevel" body line 1)
invoked from within
"uplevel $body" NONE eof reading json header
UNRESOLVED: gdb.dap/basic-dap.exp: startup - initialize
...
Investigation using a "catch throw" shows that:
...
(gdb)
at gdb/python/py-utils.c:396
396 error (_("Error occurred in Python: %s"), msg.get ());
(gdb) p msg.get ()
$1 = 0x2b91d10 "module 'queue' has no attribute 'SimpleQueue'"
...
The python class queue.SimpleQueue was introduced in python 3.7.
Fix this by falling back to queue.Queue for python <= 3.6.
Tested on x86_64-linux, by successfully running the test-case:
...
# of expected passes 47
...
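The fix amounts to something like this (a sketch of the fallback, not the
exact hunk):
try:
    from queue import SimpleQueue   # Python 3.7 and later
except ImportError:
    # Python <= 3.6: fall back to Queue, which provides the same
    # put()/get() interface.
    from queue import Queue as SimpleQueue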
|
| |
The Debug Adapter Protocol (DAP) is a JSON-based protocol that IDEs can use
to communicate with debuggers. You can find more information here:
https://microsoft.github.io/debug-adapter-protocol/
Frequently this is implemented as a shim, but it seemed to me that GDB
could implement it directly, via the Python API. This patch is the
initial implementation.
DAP is implemented as a new "interp". This is slightly weird, because
it doesn't act like an ordinary interpreter -- for example it doesn't
implement a command syntax, and doesn't use GDB's ordinary event loop.
However, this seemed like the best approach overall.
To run GDB in this mode, use:
gdb -i=dap
The DAP code will accept JSON-RPC messages on stdin and print
responses to stdout. GDB redirects the inferior's stdout to a new
pipe so that output can be encapsulated by the protocol.
The Python code uses multiple threads to do its work. Separate
threads are used for reading JSON from the client and for writing JSON
to the client. All GDB work is done in the main thread. (The first
implementation used asyncio, but this had some limitations, and so I
rewrote it to use threads instead.)
This is not a complete implementation of the protocol, but it does
implement enough to demonstrate that the overall approach works.
There is a rudimentary test suite. It uses a JSON parser written in
pure Tcl. This parser is under the same license as Tcl itself, so I
felt it was acceptable to simply import it into the tree.
There is also a bit of documentation -- just documenting the new
interpreter name.
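As a quick illustration, a client can drive "gdb -i=dap" over a pipe; the
Content-Length framing and the initialize request below follow the DAP
specification (the sequence number is arbitrary):
import json
import subprocess

gdb_proc = subprocess.Popen(
    ["gdb", "-i=dap"], stdin=subprocess.PIPE, stdout=subprocess.PIPE
)
body = json.dumps(
    {"seq": 1, "type": "request", "command": "initialize", "arguments": {}}
).encode("utf-8")
# Each DAP message is a Content-Length header, a blank line, then JSON.
gdb_proc.stdin.write(b"Content-Length: " + str(len(body)).encode() + b"\r\n\r\n" + body)
gdb_proc.stdin.flush()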
|
| |
This commit is the result of running the gdb/copyright.py script,
which automated the update of the copyright year range for all
source files managed by the GDB project to be updated to include
year 2023.
|
| |
Similarly to booleans, and following the fix for PR python/29217, make
`gdb.parameter' accept `None' for `unlimited' with parameters of the
PARAM_UINTEGER, PARAM_INTEGER, and PARAM_ZUINTEGER_UNLIMITED types, as
`None' is already returned by parameters of the two former types, so
one might expect to be able to feed it back. It also makes it possible
to avoid the need to know what the internal integer representation is
for the special setting of `unlimited'.
Expand the testsuite accordingly.
Approved-By: Simon Marchi <simon.marchi@polymtl.ca>
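A small sketch of the resulting symmetry, using 'height' as an example of
an affected (PARAM_UINTEGER-style) parameter:
import gdb

gdb.set_parameter("height", None)   # None now means "unlimited"
print(gdb.parameter("height"))      # reads back as None, not an integer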
|
| |
This commit was inspired by this stackoverflow post:
https://stackoverflow.com/questions/73491793/why-is-there-a-%C2%B1-in-lea-rax-rip-%C2%B1-0xeb3
One of the comments helpfully links to this Python test case:
from pygments import formatters, lexers, highlight

def colorize_disasm(content, gdbarch):
    try:
        lexer = lexers.get_lexer_by_name("asm")
        formatter = formatters.TerminalFormatter()
        return highlight(content, lexer, formatter).rstrip().encode()
    except:
        return None

print(colorize_disasm("lea [rip+0x211] # COMMENT", None).decode())
Run the test case and you should see that the '+' character is
underlined, and could be confused with a combined +/- symbol.
What's happening is that Pygments is failing to parse the input text,
and the '+' is actually being marked in the error style. The error
style is red and underlined.
It is worth noting that the assembly instruction being disassembled
here is an x86-64 instruction in the 'intel' disassembly style, rather
than the default att style. Clearly the Pygments module expects the
att syntax by default.
If we change the test case to this:
from pygments import formatters, lexers, highlight

def colorize_disasm(content, gdbarch):
    try:
        lexer = lexers.get_lexer_by_name("asm")
        lexer.add_filter('raiseonerror')
        formatter = formatters.TerminalFormatter()
        return highlight(content, lexer, formatter).rstrip().encode()
    except:
        return None

res = colorize_disasm("lea rax,[rip+0xeb3] # COMMENT", None)
if res:
    print(res.decode())
else:
    print("No result!")
Here I've added the call: lexer.add_filter('raiseonerror'), and I am
now checking to see if the result is None or not. Running this, the
test now prints 'No result!' -- instead of styling the '+' in the
error style, we now give up on the styling attempt.
There are two things we need to fix relating to this disassembly
text. First, Pygments is expecting att style disassembly, not the
intel style that this example uses. Fortunately, Pygments also
supports the intel style, all we need to do is use the 'nasm' lexer
instead of the 'asm' lexer.
However, this leads to the second problem; in our disassembler line we
have '# COMMENT'. The "official" Intel disassembler style uses ';'
for its comment character, however, gas and libopcodes use '#' as the
comment character, as gas uses ';' for an instruction separator.
Unfortunately, Pygments expects ';' as the comment character, and
treats '#' as an error, which means, with the addition of the
'raiseonerror' filter, that any line containing a '#' comment, will
not get styled correctly.
However, as the i386 disassembler never produces a '#' character other
than for comments, we can easily "fix" Pygments parsing of the
disassembly line. This is done by creating a filter. This filter
looks for an Error token with the value '#'; we then change this into
a comment token. Every token after this (until the end of the line)
is also converted into a comment (a sketch of such a filter appears at
the end of this message).
In this commit I do the following:
1. Check the 'disassembly-flavor' setting and select between the
'asm' and 'nasm' lexers based on the setting. If the setting is not
available then the 'asm' lexer is used by default,
2. Use "add_filter('raiseonerror')" to ensure that the formatted
output will not include any error text, which would be underlined,
and might be confusing,
3. If the 'nasm' lexer is selected, then add an additional filter
that will format '#', and all the text that follows it on the line, as
a comment, and
4. If Pygments throws an exception, instead of returning None,
return the original, unmodified content. This will mean that this
one instruction is printed without styling, but GDB will continue to
call into the Python code to style later instructions.
I haven't included a test specifically for the above error case,
though I have manually checked that the above case now styles
correctly (with no underline). The existing style tests check that
the disassembler styling still works though, so I know I've not
generally broken things.
One final thought I have after looking at this issue is that I wonder
now if using Pygments for styling disassembly from every architecture
is actually a good idea?
Clearly, the 'asm' lexer is OK with att style x86-64, but not OK with
intel style x86-64, so who knows how well it will handle other random
architectures?
When I first added this feature I tested it against some random
RISC-V, ARM, and X86-64 (att style) code, and it seemed fine, but I
never tried to make an exhaustive check of all instructions, so it's
quite possible that there are corner cases where things are styled
incorrectly.
With the above changes I think that things should be a bit better
now. If a particular instruction doesn't parse correctly then our
Pygments based styling code will just not style that one instruction.
This is combined with the fact that many architectures are now moving
to libopcodes based styling, which is much more reliable.
So, I think it is fine to keep using Pygments as a fallback mechanism
for styling all architectures, even if we know it might not be perfect
in all cases.
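A sketch of such a '#'-comment filter, written against the standard
Pygments filter API (this is not necessarily the exact code added to GDB):
from pygments.filter import Filter
from pygments.token import Comment, Error

class HashCommentFilter(Filter):
    # Re-classify an Error '#' token, and what follows it, as a comment.
    def filter(self, lexer, stream):
        in_comment = False
        for ttype, value in stream:
            if ttype is Error and value == "#":
                in_comment = True
            if in_comment:
                ttype = Comment.Single
            if value.endswith("\n"):
                # Only the remainder of the current line is a comment.
                in_comment = False
            yield ttype, value
The filter would then be attached with something like
lexer.add_filter(HashCommentFilter()).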
|
| |
GDB overwrites Python's sys.stdout and sys.stderr, but does not
properly implement the 'flush' method -- it will only ever flush
stdout. This patch fixes the bug. I couldn't find a straightforward
way to write a test for this.
|
| |
Running 'black' on gdb fixed a couple of small issues. This patch is
the result.
|
| |
PR python/29217 points out that gdb.parameter will return bool values,
but gdb.set_parameter will not properly accept them. This patch fixes
the problem by adding a special case to set_parameter.
I looked at a fix involving rewriting set_parameter in C++. However,
this one is simpler.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29217
|
| |
Pierre-Marie noticed that, while gdb.events is a Python module, it
can't be imported. This patch changes how this module is created, so
that it can be imported, while also ensuring that the module is always
visible, just as it was in the past.
This new approach required one non-obvious change -- when running
gdb.base/warning.exp, where --data-directory is intentionally not
found, the event registries can now be nullptr. Consequently, this
patch probably also requires
https://sourceware.org/pipermail/gdb-patches/2022-June/189796.html
Note that this patch obsoletes
https://sourceware.org/pipermail/gdb-patches/2022-June/189797.html
|
| |
This commit extends the Python API to include disassembler support.
The motivation for this commit was to provide an API by which the user
could write Python scripts that would augment the output of the
disassembler.
To achieve this I have followed the model of the existing libopcodes
disassembler, that is, instructions are disassembled one by one. This
does restrict the type of things that it is possible to do from a
Python script, i.e. all additional output has to fit on a single line,
but this was all I needed, and creating something more complex would,
I think, require greater changes to how GDB's internal disassembler
operates.
The disassembler API is contained in the new gdb.disassembler module,
which defines the following classes:
DisassembleInfo
Similar to libopcodes disassemble_info structure, has read-only
properties: address, architecture, and progspace. And has methods:
__init__, read_memory, and is_valid.
Each time GDB wants an instruction disassembled, an instance of
this class is passed to a user written disassembler function, by
reading the properties, and calling the methods (and other support
methods in the gdb.disassembler module) the user can perform and
return the disassembly.
Disassembler
This is a base-class which user written disassemblers should
inherit from. This base class provides base implementations of
__init__ and __call__ which the user written disassembler should
override.
DisassemblerResult
This class can be used to hold the result of a call to the
disassembler; it's really just a wrapper around a string (the text
of the disassembled instruction) and a length (in bytes). The user
can return an instance of this class from Disassembler.__call__ to
represent the newly disassembled instruction.
The gdb.disassembler module also provides the following functions:
register_disassembler
This function registers an instance of a Disassembler sub-class
as a disassembler, either for one specific architecture, or, as a
global disassembler for all architectures.
builtin_disassemble
This provides access to GDB's builtin disassembler. A common
use case that I see is augmenting the existing disassembler output.
The user code can call this function to have GDB disassemble the
instruction in the normal way. The user gets back a
DisassemblerResult object, which they can then read in order to
augment the disassembler output in any way they wish.
This function also provides a mechanism to intercept the
disassembler's reads of memory; thus the user can adjust what GDB
sees when it is disassembling.
The included documentation provides a more detailed description of the
API.
There is also a new CLI command added:
maint info python-disassemblers
This command is defined in the Python gdb.disassemblers module, and
can be used to list the currently registered Python disassemblers.
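A brief sketch of a disassembler built on this API (the appended comment
text is purely illustrative):
import gdb
import gdb.disassembler
from gdb.disassembler import Disassembler, DisassemblerResult, builtin_disassemble

class AddressCommentDisassembler(Disassembler):
    def __init__(self):
        super().__init__("address-comment")

    def __call__(self, info):
        # Let GDB's builtin disassembler do the real work...
        result = builtin_disassemble(info)
        # ...then augment the single line of output.
        return DisassemblerResult(
            length=result.length,
            string=result.string + "\t# insn at 0x%x" % info.address,
        )

gdb.disassembler.register_disassembler(AddressCommentDisassembler(), None)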
|
| |
I found one more place that still had a workaround for Python 2.
This patch removes it.
|
| |
New in this version:
- Add a PY_MAJOR_VERSION check in configure.ac / AC_TRY_LIBPYTHON. If
the user passes --with-python=python2, this will cause a configure
failure saying that GDB only supports Python 3.
Support for Python 2 is a maintenance burden for any patches touching
Python support. Among others, the differences between Python 2 and 3
string and integer types are subtle. It requires a lot of effort and
thinking to get something that behaves correctly on both. And that's if
the author and reviewer of the patch even remember to test with Python
2.
See this thread for an example:
https://sourceware.org/pipermail/gdb-patches/2021-December/184260.html
So, remove Python 2 support. Update the documentation to state that GDB
can be built against Python 3 (as opposed to Python 2 or 3).
Update all the spots that use:
- sys.version_info
- IS_PY3K
- PY_MAJOR_VERSION
- gdb_py_is_py3k
... to only keep the Python 3 portions and drop the use of some
now-removed compatibility macros.
I did not update the configure script more than just removing the
explicit references to Python 2. We could maybe do more there, like
check the Python version and reject it if that version is not
supported. Otherwise (with this patch), things will only fail at
compile time, so it won't really be clear to the user that they are
trying to use an unsupported Python version. But I'm a bit lost in the
configure code that checks for Python, so I kept that for later.
Change-Id: I75b0f79c148afbe3c07ac664cfa9cade052c0c62
|
| |
The motivation for this patch is the fact that py-micmd.c doesn't build
with Python 2, due to PyDict_GetItemWithError being a Python 3-only
function:
CXX python/py-micmd.o
/home/smarchi/src/binutils-gdb/gdb/python/py-micmd.c: In function ‘int micmdpy_uninstall_command(micmdpy_object*)’:
/home/smarchi/src/binutils-gdb/gdb/python/py-micmd.c:430:20: error: ‘PyDict_GetItemWithError’ was not declared in this scope; did you mean ‘PyDict_GetItemString’?
430 | PyObject *curr = PyDict_GetItemWithError (mi_cmd_dict.get (),
| ^~~~~~~~~~~~~~~~~~~~~~~
| PyDict_GetItemString
A first solution to fix this would be to try to replace
PyDict_GetItemWithError with equivalent Python 2 code. But I looked at why
we are doing this in the first place: it is to maintain the
`gdb._mi_commands` Python dictionary that we use as a `name ->
gdb.MICommand object` map. Since the `gdb._mi_commands` dictionary is
never actually used in Python, it seems like a lot of trouble to use a
Python object for this.
My first idea was to replace it with a C++ map
(std::unordered_map<std::string, gdbpy_ref<micmdpy_object>>). While
implementing this, I realized we don't really need this map at all. The
mi_command_py objects registered in the main MI command table can own
their backing micmdpy_object (that's a gdb.MICommand, but seen from the
C++ code). To know whether an mi_command is an mi_command_py, we can
use a dynamic cast. Since there's one less data structure to maintain,
there are fewer chances of messing things up.
- Change mi_command_py::m_pyobj to a gdbpy_ref, the mi_command_py is
now what keeps the MICommand alive.
- Set micmdpy_object::mi_command in the constructor of mi_command_py.
If mi_command_py manages setting/clearing that field in
swap_python_object, I think it makes sense that it also takes care of
setting it initially.
- Move a bunch of checks from micmdpy_install_command to
swap_python_object, and make them gdb_asserts.
- In micmdpy_install_command, start by doing an mi_cmd_lookup. This is
needed to know whether there's a Python MI command already registered
with that name. But we can already tell if there's a non-Python
command registered with that name. Return an error if that happens,
rather than waiting for insert_mi_cmd_entry to fail. Change the
error message to "name is already in use" rather than "may already be
in use", since it's more precise.
I asked Andrew about the original intent of using a Python dictionary
object to hold the command objects. The reason was to make sure the
objects get destroyed when the Python runtime gets finalized, not later.
Holding the objects in global C++ data structures and not doing anything
more means that the held Python objects will be decref'd after the
Python interpreter has been finalized. That's not desirable. I tried
it and it indeed segfaults.
Handle this by adding a gdbpy_finalize_micommands function called in
finalize_python. This is the mirror of gdbpy_initialize_micommands
called in do_start_initialization. In there, delete all Python MI
commands. I think it makes sense to do it this way: if it was somehow
possible to unload Python support from GDB in the middle of a session
we'd want to unregister any Python MI command. Otherwise, these MI
commands would be backed with a stale PyObject or simply nothing.
Delete tests that were related to `gdb._mi_commands`.
Co-Authored-By: Andrew Burgess <aburgess@redhat.com>
Change-Id: I060d5ebc7a096c67487998a8a4ca1e8e56f12cd3
|
| |
This commit allows a user to create custom MI commands using Python
similarly to what is possible for Python CLI commands.
A new subclass of mi_command is defined for Python MI commands,
mi_command_py. A new file, gdb/python/py-micmd.c contains the logic
for Python MI commands.
This commit is based on work linked too from this mailing list thread:
https://sourceware.org/pipermail/gdb/2021-November/049774.html
Which has also been previously posted to the mailing list here:
https://sourceware.org/pipermail/gdb-patches/2019-May/158010.html
And was recently reposted here:
https://sourceware.org/pipermail/gdb-patches/2022-January/185190.html
The version in this patch takes some core code from the previously
posted patches, but also has some significant differences, especially
after the feedback given here:
https://sourceware.org/pipermail/gdb-patches/2022-February/185767.html
A new MI command can be implemented in Python like this:
class echo_args(gdb.MICommand):
    def invoke(self, args):
        return { 'args': args }

echo_args("-echo-args")
The 'args' parameter (to the invoke method) is a list
containing (almost) all command line arguments passed to the MI
command (--thread and --frame are handled before the Python code is
called, and removed from the args list). This list can be empty if
the MI command was passed no arguments.
When used within gdb the above command produced output like this:
(gdb)
-echo-args a b c
^done,args=["a","b","c"]
(gdb)
The 'invoke' method of the new command must return a dictionary. The
keys of this dictionary are then used as the field names in the mi
command output (e.g. 'args' in the above).
The values of the result returned by invoke can be dictionaries,
lists, iterators, or an object that can be converted to a string.
These are processed recursively to create the mi output. And so, this
is valid:
class new_command(gdb.MICommand):
    def invoke(self,args):
        return { 'result_one': { 'abc': 123, 'def': 'Hello' },
                 'result_two': [ { 'a': 1, 'b': 2 },
                                 { 'c': 3, 'd': 4 } ] }
Which produces output like:
(gdb)
-new-command
^done,result_one={abc="123",def="Hello"},result_two=[{a="1",b="2"},{c="3",d="4"}]
(gdb)
I have required that the field names used in mi result output must
match the regexp: "^[a-zA-Z][-_a-zA-Z0-9]*$" (without the quotes).
This restriction was never written down anywhere before, but seems
sensible to me, and we can always loosen this rule later if it proves
to be a problem. Much harder to try and add a restriction later, once
people are already using the API.
What follows are some details about how this implementation differs
from the original patch that was posted to the mailing list.
In this patch, I have changed how the lifetime of the Python
gdb.MICommand objects is managed. In the original patch, these objects
were kept alive by an owned reference within the mi_command_py object.
As such, the Python object would not be deleted until the
mi_command_py object itself was deleted.
This caused a problem, the mi_command_py were held in the global mi
command table (in mi/mi-cmds.c), which, as a global, was not cleared
until program shutdown. By this point the Python interpreter has
already been shutdown. Attempting to delete the mi_command_py object
at this point was causing GDB to try and invoke Python code after
finalising the Python interpreter, and we would crash.
To work around this problem, the original patch added code in
python/python.c that would search the mi command table, and delete the
mi_command_py objects before the Python environment was finalised.
In contrast, in this patch, I have added a new global dictionary to
the gdb module, gdb._mi_commands. We already have several such global
data stores related to pretty printers, and frame unwinders.
The MICommand objects are placed into the new gdb._mi_commands
dictionary, and it is this reference that keeps the objects alive.
When GDB's Python interpreter is shut down gdb._mi_commands is deleted,
and any MICommand objects within it are deleted at this point.
This change avoids having to make the mi_cmd_table global, and walk
over it from within GDB's python related code.
This patch handles command redefinition entirely within GDB's python
code, though this does impose one small restriction which is not
present in the original code (detailed below); I don't think this is a
big issue. However, the original patch relied on being able to
finish executing the mi_command::do_invoke member function after the
mi_command object had been deleted. Though continuing to execute a
member function after an object is deleted is well defined, it is
also (IMHO) risky; it's too easy for someone to later add a use of the
object without realising that the object might sometimes have been
deleted. The new patch avoids this issue.
The one restriction that is added to avoid this, is that an MICommand
object can't be reinitialised with a different command name, so:
(gdb) python cmd = MyMICommand("-abc")
(gdb) python cmd.__init__("-def")
can't reinitialize object with a different command name
This feels like a pretty weird edge case, and I'm happy to live with
this restriction.
I have also changed how the memory is managed for the command name.
In the most recently posted patch series, the command name is moved
into a subclass of mi_command, the python mi_command_py, which
inherits from mi_command is then free to use a smart pointer to manage
the memory for the name.
In this patch, I leave the mi_command class unchanged, and instead
hold the memory for the name within the Python object, as the lifetime
of the Python object always exceeds the c++ object stored in the
mi_cmd_table. This adds a little more complexity in py-micmd.c, but
leaves the mi_command class nice and simple.
Next, this patch adds some extra functionality, there's a
MICommand.name read-only attribute containing the name of the command,
and a read-write MICommand.installed attribute that can be used to
install (make the command available for use) and uninstall (remove the
command from the mi_cmd_table so it can't be used) the command. This
attribute will be automatically updated if a second command replaces
an earlier command.
This patch adds additional error handling, and makes more use of the
gdbpy_handle_exception function.
Co-Authored-By: Jan Vrany <jan.vrany@labware.com>
|
| |
This commit moves the two Python functions that are used for styling
into a new module, gdb.styling, there's then a small update in
python.c so GDB can find the functions in their new location.
The motivation for this change is purely to try and reduce the clutter
in the top-level gdb module, and encapsulate related functions into
modules. I did ponder documenting these functions as part of the
Python API, however, doing so would effectively "fix" the API, and I'm
still wondering if there's improvements that could be made, also, the
colorize function is only called in some cases now that GDB prefers
libsource-highlight, so it's not entirely clear how this would work as
part of a user-facing API.
Still, despite these functions never having been part of a documented
API, it is possible that a user out there has overridden these to, in
some way, customize how GDB performs styling. Moving the function as
I propose in this patch could break things for that user, however,
fixing this breakage is trivial, and, as these functions were never
documented, I don't think we should be obliged to not break user code
that relies on them.
|
| |
This commit adds styling support to the disassembler output, as such
two new commands are added to GDB:
set style disassembler enabled on|off
show style disassembler enabled
In this commit I make use of the Python Pygments package to provide
the styling. I did investigate making use of libsource-highlight,
however, I found the highlighting results to be inferior to those of
Pygments; only some mnemonics were highlighted, and highlighting of
register names such as r9d and r8d (on x86-64) was incorrect.
To enable disassembler highlighting via Pygments, I've added a new
extension language hook, which is then implemented for Python. This
hook is very similar to the existing hook for source code
colorization.
One possibly odd choice I made with the new hook is to pass a
gdb.Architecture through, even though this is currently unused. The
reason this argument is not used is that, currently, styling is
performed identically for all architectures.
However, even though the Python function used to perform styling of
disassembly output is not part of any documented API, I don't want
to close the door on a user overriding this function to provide
architecture specific styling. To do this, the user would inevitably
require access to the gdb.Architecture, and so I decided to add this
field now.
The styling is applied within gdb_disassembler::print_insn, to achieve
this, gdb_disassembler now writes its output into a temporary buffer,
styling is then applied to the contents of this buffer. Finally the
gdb_disassembler buffer is copied out to its final destination stream.
There's a new test to check that the disassembler output includes some
escape sequences, though I don't check for specific colours; the
precise colors will depend on which instructions are in the
disassembler output, and, I guess, how pygments is configured.
The only negative change with this commit is how we currently style
addresses in GDB.
Currently, when the disassembler wants to print an address, we call
back into GDB, and GDB prints the address value using the `address`
styling, and the symbol name using `function` styling. After this
commit, if pygments is used, then all disassembler styling is done
through pygments, and this include the address and symbol name parts
of the disassembler output.
I don't know how much of an issue this will be for people. There's
already some precedent for this in GDB when we look at source styling.
For example, function names in styled source listings are not styled
using the `function` style, but instead, either GNU Source Highlight,
or pygments gets to decide how the function name should be styled.
If the Python pygments library is not present then GDB will continue
to behave as it always has, the disassembler output is mostly
unstyled, but the address and symbols are styled using the `address`
and `function` styles, as they are today.
However, if the user does `set style disassembler enabled off`, then
all disassembler styling is switched off. This obviously covers the
use of pygments, but also includes the minimal styling done by GDB
when pygments is not available.
|
| |
The formatting of the help text for 'help set extended-prompt' and
'help show extended-prompt' is a little off.
Here's the offending snippet:
Substitutions are applied to VALUE to compute the real prompt.
The currently defined substitutions are:
    \[  Begins a sequence of non-printing characters.
\\  A backslash.
\]  Ends a sequence of non-printing characters.
\e  The ESC character.
Notice that the line for '\[' is indented more than the others.
Turns out this is due to how we build this help text, something which
is done in Python. We extend a class's __doc__ string with some
dynamically generated text.
The class's doc string looks like this:
"""Set the extended prompt.
Usage: set extended-prompt VALUE
Substitutions are applied to VALUE to compute the real prompt.
The currently defined substitutions are:
    """
Notice the closing """ is on a line of its own, and includes some
white space just before. It's this extra white space that's causing
the problem.
Fix the formatting issue by moving the """ to the end of the previous
line. I then add the extra newline in at the point where the doc
string is merged with the dynamically generated text.
Now everything lines up correctly.
|
| |
This commit adds support for source files that contain non utf-8
characters when performing source styling using the Python pygments
package. This does not change the behaviour of GDB when the GNU
Source Highlight library is used.
For the following problem description, assume that either GDB is built
without GNU Source Highlight support, or that this has been disabled
using 'maintenance set gnu-source-highlight enabled off'.
The initial problem reported was that a source file containing non
utf-8 characters would cause GDB to print a Python exception, and then
display the source without styling, e.g.:
Python Exception <class 'UnicodeDecodeError'>: 'utf-8' codec can't decode byte 0xc0 in position 142: invalid start byte
/* Source code here, without styling... */
Further, as the user steps through different source files, each time
the problematic source file was evicted from the source cache, and
then later reloaded, the exception would be printed again.
Finally, this problem is only present when using Python 3; this issue
is not present for Python 2.
What makes this especially frustrating is that GDB can clearly print
the source file contents, they're right there... If we disable
styling completely, or make use of the GNU Source Highlight library,
then everything is fine. So why is there an error when we try to
apply styling using Python?
The problem is the use of PyString_FromString (which is an alias for
PyUnicode_FromString in Python 3); this function converts a C string
into either a Unicode object (Py3) or a str object (Py2). For
Python 2 there is no unicode encoding performed during this function
call, but for Python 3 the input is assumed to be a utf-8 encoded
string for the purpose of the conversion. And here, of course, is the
problem: if the source file contains non utf-8 characters, then it
should not be treated as utf-8, but that's what we do, and that's why
we get an error.
My first thought when looking at this was to spot when the
PyString_FromString call failed with a UnicodeDecodeError and silently
ignore the error. This would mean that GDB would print the source
without styling, but would also avoid the annoying exception message.
However, I also make use of `pygmentize`, a command line wrapper
around the Python pygments module, which I use to apply syntax
highlighting in the output of `less`. And this command line wrapper
is quite happy to syntax highlight my source file that contains non
utf-8 characters, so it feels like the problem should be solvable.
It turns out that inside the pygments module there is already support
for guessing the encoding of the incoming file content, if the
incoming content is not already a Unicode string. This is what
happens for Python 2 where the incoming content is of `str` type.
We could try and make GDB smarter when it comes to converting C
strings into Python Unicode objects; this would probably require us to
just try a couple of different encoding schemes rather than just
giving up after utf-8.
However, I figure, why bother? The pygments module already does this
for us, and the colorize API is not part of the documented external
API of GDB. So, why not just change the colorize API, instead of the
content being a Unicode string (for Python 3), lets just make the
content be a bytes object. The pygments module can then take
responsibility for guessing the encoding.
So, currently, the colorize API receives a unicode object, and returns
a unicode object. I propose that the colorize API receive a bytes
object, and return a bytes object.
|
| |
It's sometimes useful to temporarily set some gdb parameter from
Python. Now that the 'endian' crash is fixed, and now that the
current language is no longer captured by the Python layer, it seems
reasonable to add a helper function for this situation.
This adds a new gdb.with_parameter function. This creates a context
manager which temporarily sets some parameter to a specified value.
The old value is restored when the context is exited. This is most
useful with the Python "with" statement:
with gdb.with_parameter('language', 'ada'):
    ... do Ada stuff
This also adds a simple function to set a parameter,
gdb.set_parameter, as suggested by Andrew.
This is PR python/10790.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=10790
|
| |
This commit brings all the changes made by running gdb/copyright.py
as per GDB's Start of New Year Procedure.
For the avoidance of doubt, all changes in this commits were
performed by the script.
|
| |
Run black 21.12b0 on gdb/, there is a single whitespace change. I will
update the wiki [1] in parallel to bump the version of black to 21.12b0.
[1] https://sourceware.org/gdb/wiki/Internals%20GDB-Python-Coding-Standards
Change-Id: Ib3b859e3506c74a4f15d16f1e44ef402de3b98e2
|
| |
Fix these rather obvious warnings reported by flake8:
./lib/gdb/FrameIterator.py:16:1: F401 'gdb' imported but unused
./lib/gdb/FrameIterator.py:17:1: F401 'itertools' imported but unused
./lib/gdb/command/prompt.py:55:26: E712 comparison to False should be 'if cond is False:' or 'if not cond:'
./lib/gdb/command/explore.py:526:9: F841 local variable 'has_explorable_fields' is assigned to but never used
./lib/gdb/command/explore.py:697:56: E712 comparison to False should be 'if cond is False:' or 'if not cond:'
./lib/gdb/command/explore.py:736:62: E712 comparison to False should be 'if cond is False:' or 'if not cond:'
./lib/gdb/command/explore.py:767:61: E712 comparison to False should be 'if cond is False:' or 'if not cond:'
./lib/gdb/command/frame_filters.py:21:1: F401 'copy' imported but unused
./lib/gdb/command/frame_filters.py:22:1: F401 'gdb.FrameIterator.FrameIterator' imported but unused
./lib/gdb/command/frame_filters.py:23:1: F401 'gdb.FrameDecorator.FrameDecorator' imported but unused
./lib/gdb/command/frame_filters.py:25:1: F401 'itertools' imported but unused
./lib/gdb/command/frame_filters.py:179:17: E712 comparison to True should be 'if cond is True:' or 'if cond:'
Change-Id: I4f49c0cb430359ee872222600c61d9c5283b09ab
|
| |
Run black to fix this formatting.
gdb/ChangeLog:
* python/lib/gdb/__init__.py: Format.
Change-Id: I68ea306d1991bf7243b2c8aeeb11719d668851e5
|
| |
If we have multiple registered unwinders, this helps identify which
unwinder was chosen and makes it easier to track down potential problems.
Unwinders have a mandatory name argument, which we can use in the
message.
First, make gdb._execute_unwinders return a tuple containing the name,
in addition to the UnwindInfo. Then, make pyuw_sniffer include the name
in the debug message.
I moved the debug message earlier. I think it's good to print it as
early as possible, so that we see it in case an assert is hit in the
loop below, for example.
gdb/ChangeLog:
* python/lib/gdb/__init__.py (_execute_unwinders): Return tuple
with name of chosen unwinder.
* python/py-unwind.c (pyuw_sniffer): Print name of chosen
unwinder in debug message.
Change-Id: Id603545b44a97df2a39dd1872fe1f38ad5059f03
|
| |
While reviewing a patch sent to the mailing list, I noticed there are a few
places where python code checks if a variable is 'None' or not by using the
comparison operators '==' and '!='. PEP8 [1], which GDB's coding standards
follow for Python code, recommends using 'is' / 'is not' when comparing to a
singleton such as 'None'.
This patch proposes to change the instances of '== None' by 'is None' and
'!= None' by 'is not None'.
[1] https://www.python.org/dev/peps/pep-0008/
gdb/doc/ChangeLog:
* python.texi (Writing a Pretty-Printer): Use 'is None' instead of
'== None'.
gdb/ChangeLog:
* python/lib/gdb/FrameDecorator.py (FrameDecorator): Use 'is None' instead of
'== None'.
(FrameVars): Use 'is not None' instead of '!= None'.
* python/lib/gdb/command/frame_filters.py (SetFrameFilterPriority): Use 'is None'
instead of '== None' and 'is not None' instead of '!= None'.
gdb/testsuite/ChangeLog:
* gdb.base/premature-dummy-frame-removal.py (TestUnwinder): Use
'is None' instead of '== None' and 'is not None' instead of
'!= None'.
* gdb.python/py-frame-args.py (lookup_function): Same.
* gdb.python/py-framefilter-invalidarg.py (Reverse_Function): Same.
* gdb.python/py-framefilter.py (Reverse_Function): Same.
* gdb.python/py-nested-maps.py (lookup_function): Same.
* gdb.python/py-objfile-script-gdb.py (lookup_function): Same.
* gdb.python/py-prettyprint.py (lookup_function): Same.
* gdb.python/py-section-script.py (lookup_function): Same.
* gdb.python/py-unwind-inline.py (dummy_unwinder): Same.
* gdb.python/python.exp: Same.
* gdb.rust/pp.py (lookup_function): Same.
|
| |
Re-format all Python files using black [1] version 21.4b0. The goal is
that from now on, we keep all Python files formatted using black. And
that we never have to discuss formatting during review (for these files
at least) ever again.
One change is needed in gdb.python/py-prettyprint.exp, because it
matches the string representation of an exception, which shows source
code. So the change in formatting must be replicated in the expected
regexp.
To document our usage of black I plan on adding this to the "GDB Python
Coding Standards" wiki page [2]:
--8<--
All Python source files under the `gdb/` directory must be formatted
using black version 21.4b0.
This specific version can be installed using:
$ pip3 install 'black == 21.4b0'
All you need to do to re-format files is run `black <file/directory>`,
and black will re-format any Python file it finds in there. It runs
quite fast, so the simplest is to do:
$ black gdb/
from the top-level.
If you notice that black produces changes unrelated to your patch, it's
probably because someone forgot to run it before you. In this case,
don't include unrelated hunks in your patch. Push an obvious patch
fixing the formatting and rebase your work on top of that.
-->8--
Once this is merged, I plan on setting a up an `ignoreRevsFile`
config so that git-blame ignores this commit, as described here:
https://github.com/psf/black#migrating-your-code-style-without-ruining-git-blame
I also plan on working on a git commit hook (checked in the repo) to
automatically check the formatting of the Python files on commit.
[1] https://pypi.org/project/black/
[2] https://sourceware.org/gdb/wiki/Internals%20GDB-Python-Coding-Standards
gdb/ChangeLog:
* Re-format all Python files using black.
gdb/testsuite/ChangeLog:
* Re-format all Python files using black.
* gdb.python/py-prettyprint.exp (run_lang_tests): Adjust.
Change-Id: I28588a22c2406afd6bc2703774ddfff47cd61919
|
| |
Python 3.4 has deprecated the imp module in favour of importlib. This
patch avoids the DeprecationWarning. This warning is visible to users
whose libpython.so has been compiled with --with-pydebug.
Considering that even python 3.5 has reached end of life, would it be
better to just use importlib and drop support for python 3.0 to 3.3?
2021-02-28 Boris Staletic <boris.staletic@gmail.com>
* gdb/python/lib/gdb/__init__.py: Use importlib on python 3.4+
to avoid deprecation warnings.
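The shape of the change is roughly this (a sketch, not the exact hunk):
import sys

if sys.version_info >= (3, 4):
    from importlib import reload as module_reload   # no DeprecationWarning
else:
    from imp import reload as module_reload         # older interpreters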
|
| |
This commits the result of running gdb/copyright.py as per our Start
of New Year procedure...
gdb/ChangeLog
Update copyright year range in copyright header of all GDB files.
|
| |
Consider the test-case gdb.base/async.exp. Using the executable, I run to
main, and land on a line advertised as line 26:
...
$ gdb outputs/gdb.base/async/async -ex start
Reading symbols from outputs/gdb.base/async/async...
Temporary breakpoint 1 at 0x4004e4: file gdb.base/async.c, line 26.
Starting program: outputs/gdb.base/async/async
Temporary breakpoint 1, main () at gdb.base/async.c:26
26 y = foo ();
...
But actually, the line turns out to be line 28:
...
$ cat -n gdb.base/async.c
...
26 y = 2;
27 z = 9;
28 y = foo ();
...
This is caused by the following: the python colorizer initializes the lexer
with default options (no second argument to get_lexer_for_filename):
...
def colorize(filename, contents):
    # Don't want any errors.
    try:
        lexer = lexers.get_lexer_for_filename(filename)
        formatter = formatters.TerminalFormatter()
        return highlight(contents, lexer, formatter)
...
which include the option stripnl=True, which strips leading and trailing newlines.
This causes the python colorizer to strip the two leading newlines of async.c.
Fix this by initializing the lexer with stripnl=False.
Build and reg-tested on x86_64-linux.
gdb/ChangeLog:
2020-04-10 Tom de Vries <tdevries@suse.de>
PR cli/25808
* python/lib/gdb/__init__.py: Initialize lexer with stripnl=False.
gdb/testsuite/ChangeLog:
2020-04-10 Tom de Vries <tdevries@suse.de>
PR cli/25808
* gdb.base/style.c: Add leading newlines.
* gdb.base/style.exp: Use gdb_get_line_number to get specific lines.
Check listing of main's one-line body.
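The essence of the fix, as a stand-alone sketch (file name invented):
from pygments import lexers

# stripnl=False keeps leading/trailing newlines, so the styled source
# stays line-for-line in sync with the original file.
lexer = lexers.get_lexer_for_filename("gdb.base/async.c", stripnl=False)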
|
| |
While GNU Source Highlight is good, it's also difficult to build and
distribute. For one thing, it needs Boost. For another, it has an
unusual configuration and installation setup.
Pygments, a Python library, doesn't suffer from these issues, and so I
thought it would be a reasonable fallback.
This patch implements this idea. GNU Source Highlight is preferred,
but if it is unavailable (or fails), the extension languages are
tried. This patch also implements support for Pygments.
Something similar could be done for Guile, using:
https://dthompson.us/projects/guile-syntax-highlight.html
However, I don't know enough about Guile internals to make this
happen, so I have not done it here.
gdb/ChangeLog
2020-01-21 Tom Tromey <tromey@adacore.com>
* source-cache.c (source_cache::ensure): Call ext_lang_colorize.
* python/python.c (python_extension_ops): Update.
(gdbpy_colorize): New function.
* python/lib/gdb/__init__.py (colorize): New function.
* extension.h (ext_lang_colorize): Declare.
* extension.c (ext_lang_colorize): New function.
* extension-priv.h (struct extension_language_ops) <colorize>: New
member.
* cli/cli-style.c (_initialize_cli_style): Update help text.
Change-Id: I5e21623ee05f1f66baaa6deaeca78b578c031bf4
|
| |
gdb/ChangeLog:
Update copyright year range in all GDB files.
|
| |
I'm seeing this failure:
...
(gdb) print /x $bnd0 = {0x10, 0x20}^M
$23 = {lbound = 0x10, ubound = 0x20}^M
(gdb) FAIL: gdb.arch/i386-mpx.exp: verify size for bnd0
...
The test expects a pretty printer to be activated, printing 'size 17':
...
set test_string ".*\\\: size 17.*"
gdb_test "print /x \$bnd0 = {0x10, 0x20}" "$test_string" "verify size for bnd0"
...
but that doesn't happen.
The pretty printer is for the type of the $bnd0 register, which is created
here in i386_bnd_type:
...
t = arch_composite_type (gdbarch,
                         "__gdb_builtin_type_bound128", TYPE_CODE_STRUCT);
append_composite_type_field (t, "lbound", bt->builtin_data_ptr);
append_composite_type_field (t, "ubound", bt->builtin_data_ptr);
TYPE_NAME (t) = "builtin_type_bound128";
...
And the pretty-printer is registered here in
gdb/python/lib/gdb/printer/bound_registers.py:
...
gdb.printing.add_builtin_pretty_printer ('mpx_bound128',
                                         '^__gdb_builtin_type_bound128',
                                         MpxBound128Printer)
...
Fix the pretty printer by changing the regexp argument of
add_builtin_pretty_printer to match "builtin_type_bound128", the TYPE_NAME.
Tested on x86_64-linux.
gdb/ChangeLog:
2019-10-09 Tom de Vries <tdevries@suse.de>
* python/lib/gdb/printer/bound_registers.py: Use
'^builtin_type_bound128' as regexp argument for
add_builtin_pretty_printer.
|
| |
PyFile_FromString and PyFile_AsFile have been removed in Python 3.
There is no obvious replacement that works here, and we can't just
pass our FILE* to a DLL in Windows because it may use a different
C runtime.
So we just call a Python function which reads and executes file
contents. Care must be taken to execute it in the context of
__main__.
Tested by inverting the ifdef and running the testsuite on Debian
Linux (even without the patch, I failed at running the testsuite
on Windows). I did test with both Python 2 and 3.
gdb/ChangeLog:
2019-08-22 Christian Biesinger <cbiesinger@google.com>
* python/lib/gdb/__init__.py (_execute_file): New function.
* python/python.c (python_run_simple_file): Call gdb._execute_file
on Windows.
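A sketch of what such a helper can look like (the real gdb._execute_file
may differ in details):
import sys

def _execute_file(filepath):
    # Read, compile and run FILEPATH in the context of __main__, which is
    # what PyRun_SimpleFile used to do for us.
    main_dict = sys.modules["__main__"].__dict__
    with open(filepath, "rb") as source:
        compiled = compile(source.read(), filepath, "exec")
    exec(compiled, main_dict)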
|
| |
I could not tell if GdbSetPythonDirectory is internal or not because
I could not find any references to it, so I left it as-is.
Tested by running the testsuite on gdb.python/*.exp; everything still
passes.
2019-08-15 Christian Biesinger <cbiesinger@google.com>
* python/lib/gdb/__init__.py (GdbOutputFile): Rename to have a
leading underscore.
(GdbOutputErrorFile): Likewise.
(global scope): Adjust constructor calls to GdbOutput{,Error}File
accordingly.
(execute_unwinders): Rename to have a leading underscore.
(auto_load_packages): Likewise.
(global scope): Adjust call to auto_load_packages accordingly.
(GdbSetPythonDirectory): Likewise.
* python/py-unwind.c (pyuw_sniffer): Call _execute_unwinders
instead of execute_unwinders.
gdb/testsuite/ChangeLog:
2019-08-15 Christian Biesinger <cbiesinger@google.com>
* gdb.python/python.exp: Expect a leading underscore on
GdbOutput{,Error}File.
|