abrtd is a daemon that watches for application crashes. When a crash occurs,
it collects the problem data (core file, application’s command line, …) and takes
action according to the configuration and the type of application that crashed.
By default it uses the inotify interface to monitor the dump location
(/var/spool/abrt/) for new directories created by the C/C++ hook, and a socket API
(/var/run/abrt/abrt.socket) used by other hooks such as the Python hook.
The reason for using a socket instead of direct filesystem access is security.
When a Python script throws an unhandled exception, the Python hook catches it
while running as part of the broken Python application. The application runs
with certain SELinux privileges; for example, it may not be allowed to execute
other programs, to create files in /var/spool/abrt, or to do anything else
required to properly fill a problem directory. Adding these privileges to every
application would weaken the security.
The most suitable solution for the Python application is to open the socket
where abrtd is listening, write all relevant data to it, and close it.
abrtd handles the rest of the processing.
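The hook-to-daemon handshake above can be sketched with a plain Unix socket. Note that the socket path and the key=value wire format below are illustrative stand-ins for the example, not ABRT's actual protocol (the real socket lives at /var/run/abrt/abrt.socket and abrtd defines the format):

```python
import os
import socket
import tempfile
import threading

# Illustrative path; the real socket is /var/run/abrt/abrt.socket.
sock_path = os.path.join(tempfile.mkdtemp(), "abrt.socket")

received = []

# Bind and listen before the client connects, so there is no race.
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(sock_path)
srv.listen(1)

def fake_abrtd():
    # Stand-in for abrtd: accept one connection and read until the hook
    # closes its end of the socket.
    conn, _ = srv.accept()
    with conn:
        chunks = []
        while chunk := conn.recv(4096):
            chunks.append(chunk)
        received.append(b"".join(chunks))

t = threading.Thread(target=fake_abrtd)
t.start()

# The hook's side: open the socket, write the problem data, close it.
# The key=value payload is a simplification, not ABRT's real wire format.
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as cli:
    cli.connect(sock_path)
    cli.sendall(b"TYPE=Python\0EXECUTABLE=/usr/bin/myapp\0REASON=ValueError\0")

t.join()
srv.close()
print(received[0])
```

Because the daemon owns the dump location, the unprivileged application only needs permission to connect to one socket.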
When a C/C++ application crashes, the kernel uses core_pattern to
handle the crash. ABRT overrides the default core_pattern with a pipe to the
abrt-hook-ccpp executable, which stores the core dump in ABRT's
dump location and notifies the daemon about the new crash. It also stores a
number of files from
/proc/<PID>/ that might be useful
for debugging.
The format and meaning of these files are described in the documentation
of the Linux kernel.
To enable the C/C++ hook, use:
systemctl enable --now abrt-ccpp
core_pattern is a kernel variable used to specify a core dump file name
template. If the first character of the pattern is
|, the kernel will treat
the rest of the pattern as a command to run. The core dump will be
written to the standard input of that program instead of to a file.
By default the kernel produces
core.* files in the crashed process's current directory.
Abrt’s C/C++ hook overrides this with:
|/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t e
which results in the kernel calling
abrt-hook-ccpp. A detailed description
can be found in the documentation of the Linux kernel.
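The handler side of this mechanism can be sketched as follows. The kernel passes the %-specifier values as arguments and streams the core dump to the helper's standard input; the function below is a simplified, hypothetical stand-in for abrt-hook-ccpp, not its real implementation:

```python
import io
import os
import tempfile

def handle_core(pid, signal, uid, stdin, dump_root):
    """Simplified sketch of a core_pattern pipe helper.

    The kernel hands the %-specifier values (pid, signal, uid, ...) to the
    helper as arguments and streams the core dump to its standard input.
    This is NOT the real abrt-hook-ccpp: a real helper must also enforce
    size limits, fix ownership and permissions, and notify the daemon.
    """
    dump_dir = os.path.join(dump_root, f"ccpp-{pid}")
    os.makedirs(dump_dir)
    # Stream the core dump from stdin into the problem directory.
    with open(os.path.join(dump_dir, "coredump"), "wb") as out:
        while chunk := stdin.read(65536):
            out.write(chunk)
    # Record a crash fact alongside the dump, abrt-style.
    with open(os.path.join(dump_dir, "reason"), "w") as f:
        f.write(f"killed by signal {signal} (uid {uid})\n")
    return dump_dir

# Simulate the kernel piping a core dump into the helper.
root = tempfile.mkdtemp()
d = handle_core(pid=1234, signal=11, uid=1000,
                stdin=io.BytesIO(b"\x7fELF..."), dump_root=root)
print(sorted(os.listdir(d)))  # ['coredump', 'reason']
```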
To be able to get a full-featured GDB backtrace from a core dump file, debuginfo
data must be available on the local file system. These data are usually
provided in the form of installable packages; however, ABRT needs to allow
non-privileged users to analyze the core dump file and report the
obtained backtrace to a bug tracking tool. Hence, ABRT maintains its own
debuginfo cache
/var/cache/abrt-di where all users can download and
unpack the required debuginfo packages through the
/usr/libexec/abrt-action-install-debuginfo-to-abrt-cache command line tool.
Upon detection of a new core dump file, ABRT generates a list of build-ids using
eu-unstrip -n --core=coredump. When a user decides to
report the core dump file, the ABRT debuginfo tool goes through that list and
remembers those build-ids for which the debuginfo file is missing both
in the system directories (
/usr/lib/.build-id) and in the ABRT debuginfo directory. Finally, packages
that provide the missing debug files are looked up,
downloaded, and unpacked to the ABRT debuginfo directory.
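The build-id extraction step can be sketched as follows. The sample eu-unstrip output and the build-id values below are invented for the example, but the column layout mirrors the tool's (address+size, build-id optionally suffixed with @ADDR, file, debug file, module name):

```python
# Hypothetical eu-unstrip -n --core=coredump output; the build-id values
# are made up for this example.
sample = """\
0x400000+0x209000 1e2d3c4b5a69788796a5b4c3d2e1f00112233445 /usr/bin/crashed - crashed
0x7f3b8c000000+0x3f0000 aabbccddeeff00112233445566778899aabbccdd@0x7f3b8c000284 /usr/lib64/libc.so.6 - libc.so.6
"""

def build_ids(unstrip_output):
    ids = []
    for line in unstrip_output.splitlines():
        fields = line.split()
        if len(fields) < 2:
            continue
        # The second field is the build-id, optionally suffixed with @ADDR.
        bid = fields[1].split("@", 1)[0]
        if bid != "-":
            ids.append(bid)
    return ids

print(build_ids(sample))
```

Each remembered build-id can then be checked against the system directories and the ABRT debuginfo directory before any download is attempted.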
The python3-abrt-addon package provides an exception handler for Python 3
applications. The Python interpreter automatically imports the
abrt.pth file installed in
/usr/lib64/python3.7/site-packages/. This file in turn imports
abrt_exception_handler.py, which overrides Python’s default
sys.excepthook with a custom handler that forwards unhandled exceptions to
abrtd via its socket API.
Automatic import of site-specific modules can be disabled by passing the
-S option to the Python interpreter:
python -S file.py
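The override mechanism can be sketched in a few lines. This is a minimal stand-in for what abrt_exception_handler.py does, with a local list playing the role of abrtd's socket so the example is self-contained:

```python
import sys

# Forwarded exceptions land here; the real hook writes them to abrtd's socket.
forwarded = []

default_hook = sys.excepthook

def abrt_style_hook(exc_type, exc_value, tb):
    # Record the exception, then fall back to the interpreter's default
    # behaviour (printing the traceback to stderr).
    forwarded.append((exc_type.__name__, str(exc_value)))
    default_hook(exc_type, exc_value, tb)

sys.excepthook = abrt_style_hook

# sys.excepthook only fires for truly unhandled exceptions, so for this
# demonstration we invoke the handler directly with a synthetic error.
try:
    raise ValueError("boom")
except ValueError:
    abrt_style_hook(*sys.exc_info())

print(forwarded)  # [('ValueError', 'boom')]
```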
Kernel oopses are detected by the
abrt-dump-journal-oops watcher. Typically this
process runs as a daemon and watches the systemd journal. When kernel oops logs
appear, the watcher extracts them and creates a problem directory, which is further
processed by the post-create event handler for the Kerneloops type.
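The extraction step can be illustrated with a toy filter. The journal lines and marker strings below are simplified examples; the real abrt-dump-journal-oops follows the journal continuously and recognizes many more oops patterns:

```python
# Hypothetical journal lines for the example.
journal = [
    "kernel: BUG: unable to handle kernel NULL pointer dereference at 00000000",
    "kernel: Oops: 0002 [#1] SMP",
    "kernel: Call Trace:",
    "systemd[1]: Started Session 2 of user alice.",
]

# A small subset of oops markers, for illustration only.
MARKERS = ("BUG:", "Oops:", "Call Trace:")

def extract_oops_lines(lines):
    # Keep kernel lines that match any oops marker; a real watcher groups
    # them into one problem directory per oops.
    return [l for l in lines
            if l.startswith("kernel:") and any(m in l for m in MARKERS)]

print(extract_oops_lines(journal))
```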
Xorg crashes are detected by the
abrt-dump-journal-xorg watcher. The mechanism is the
same as in the oops watcher: the systemd journal is watched and Xorg crashes are
extracted at the time of their occurrence. In addition, the Xorg watcher can be
configured through its configuration file.
A problem life cycle is driven by events in ABRT. For example:
- Event 1 — a problem data directory is created.
- Event 2 — problem data is analyzed.
- Event 3 — a problem is reported to Bugzilla.
When a problem is detected and its defining data is stored, the problem is processed by running events on the problem’s data directory. For event configuration how-to, refer to Event configuration.
A standard ABRT installation currently supports several default events that can be selected and used during the problem reporting process. Refer to Standard ABRT Installation Supported Events to see the list of these events.
Only the following three events are run automatically by ABRT:
- post-create: runs after the problem directory creation
- notify: runs after the processing chain is finished, to notify the user about a new problem
- notify-dup: similar to notify, for duplicate problems. See Deduplication.
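Events and the conditions under which they run are defined in libreport event configuration files (see report_event.conf(5)). A hypothetical rule binding an analysis command to the post-create event for C/C++ problems might look like this (the file name and command are illustrative, not a verbatim copy of a shipped configuration):

```
# Hypothetical /etc/libreport/events.d/example_event.conf fragment.
# The key=value part of the EVENT line matches items of the problem
# directory; the indented lines are shell commands run inside it.
EVENT=post-create analyzer=CCpp
        abrt-action-analyze-c
```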
When ABRT catches a new crash, it compares it to the rest of the stored problems to avoid storing duplicate crashes.
It first checks whether there is a
uuid item in the problem directory being processed.
If there is a
core_backtrace, it iterates over all other dump
directories and computes the similarity to their core backtraces (if any).
If one of them is similar enough to be considered a duplicate, event processing
is stopped and only the
notify-dup event is fired.
If there is a
uuid item (and no core backtrace), a simple comparison of the
uuid hashes is used for duplicate detection.
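The two routes can be illustrated with a toy model. The frame lists are hypothetical, the "similarity" below is just the fraction of shared frames (ABRT's real core-backtrace comparison is implemented in the satyr library), and the uuid follows the description above (a hash of the first three frames):

```python
import hashlib

def uuid_of(frames):
    # Hash of the first three backtrace frames, as described above.
    return hashlib.sha1("\n".join(frames[:3]).encode()).hexdigest()

def similarity(a, b):
    # Toy metric: fraction of shared frames.  ABRT's real metric (satyr)
    # is considerably more sophisticated.
    shared = len(set(a) & set(b))
    return shared / max(len(set(a) | set(b)), 1)

new    = ["raise", "abort", "handle_fatal", "main"]
stored = ["raise", "abort", "handle_fatal", "main"]
other  = ["memcpy", "do_copy", "main"]

# Backtrace route: similar enough -> duplicate, only notify-dup is fired.
print(similarity(new, stored), similarity(new, other))

# uuid route (no core backtrace): compare the hashes directly.
print(uuid_of(new) == uuid_of(stored))  # True
```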
Elements collected by ABRT

Commonly available elements:

- Executable path of the component which caused the problem. Used by the server to determine …
- Problem type, see Supported problem types.
- Component which caused this problem.
- Hostname of the affected machine.
- Operating system release string.
- Machine architecture string.
- Kernel version string.
- Time of the occurrence (unixtime).
- Number of times this problem occurred.
- Unique problem identifier, computed as a hash of the first three frames of the backtrace.
Elements dependent on problem type:

| Description | Problem types |
|---|---|
| ABRT version string | Crashes caught by ABRT |
| cgroup (control group) information for the crashed process | |
| Machine readable backtrace with no private data | C/C++, Python, Ruby, Kerneloops |
| Original backtrace, or backtrace produced by the retracing process | C/C++ (after retracing), Python, Ruby, Xorg, Kerneloops |
| List of dynamic libraries loaded at the time of crash | C/C++, Python |
| Likely crash reason and exploitable rating | C/C++ |
| | C/C++, Python, Ruby, Kerneloops |
| Core dump of the crashing process | C/C++ |
| Runtime environment of the process | C/C++, Python |
| List of file descriptors open at the time of crash | C/C++ |
| | C/C++, Python, Ruby |
| Part of the … | |
| If the problem was already reported, this item contains URLs of the services where it was reported | Reported problems |
| ABRT event log | Reported problems |
Supported problem types
Supported values for