Jul 16 12:31:31.876135 kernel: Linux version 5.15.188-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Jul 15 10:04:37 -00 2025 Jul 16 12:31:31.876171 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=3fdbb2e3469f90ee764ea38c6fc4332d45967696e3c4fd4a8c65f8d0125b235b Jul 16 12:31:31.876193 kernel: BIOS-provided physical RAM map: Jul 16 12:31:31.876200 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jul 16 12:31:31.876207 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jul 16 12:31:31.876214 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jul 16 12:31:31.876223 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Jul 16 12:31:31.876248 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Jul 16 12:31:31.876254 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jul 16 12:31:31.876261 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jul 16 12:31:31.876276 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 16 12:31:31.876283 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jul 16 12:31:31.876289 kernel: NX (Execute Disable) protection: active Jul 16 12:31:31.876296 kernel: SMBIOS 2.8 present. Jul 16 12:31:31.876305 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Jul 16 12:31:31.876319 kernel: Hypervisor detected: KVM Jul 16 12:31:31.876330 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 16 12:31:31.876337 kernel: kvm-clock: cpu 0, msr 4619b001, primary cpu clock Jul 16 12:31:31.876344 kernel: kvm-clock: using sched offset of 4110022780 cycles Jul 16 12:31:31.876351 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 16 12:31:31.876359 kernel: tsc: Detected 2294.608 MHz processor Jul 16 12:31:31.876366 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 16 12:31:31.876380 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 16 12:31:31.876387 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Jul 16 12:31:31.876395 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 16 12:31:31.876405 kernel: Using GB pages for direct mapping Jul 16 12:31:31.876412 kernel: ACPI: Early table checksum verification disabled Jul 16 12:31:31.876419 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Jul 16 12:31:31.876433 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 16 12:31:31.876441 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 16 12:31:31.876448 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 16 12:31:31.876455 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Jul 16 12:31:31.876462 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 16 12:31:31.876473 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 16 12:31:31.876487 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 
00000001) Jul 16 12:31:31.876494 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 16 12:31:31.876501 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Jul 16 12:31:31.876508 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Jul 16 12:31:31.876516 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Jul 16 12:31:31.876523 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Jul 16 12:31:31.876541 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Jul 16 12:31:31.876552 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Jul 16 12:31:31.876560 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Jul 16 12:31:31.876568 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jul 16 12:31:31.876576 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jul 16 12:31:31.876591 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jul 16 12:31:31.876598 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Jul 16 12:31:31.876606 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jul 16 12:31:31.876616 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Jul 16 12:31:31.876630 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jul 16 12:31:31.876638 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Jul 16 12:31:31.876646 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jul 16 12:31:31.876653 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Jul 16 12:31:31.876668 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jul 16 12:31:31.876676 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Jul 16 12:31:31.883714 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jul 16 12:31:31.883723 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Jul 16 12:31:31.883732 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jul 16 12:31:31.883746 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Jul 16 12:31:31.883754 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jul 16 12:31:31.883763 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jul 16 12:31:31.883772 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Jul 16 12:31:31.883781 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Jul 16 12:31:31.883790 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Jul 16 12:31:31.883799 kernel: Zone ranges: Jul 16 12:31:31.883808 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 16 12:31:31.883816 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Jul 16 12:31:31.883828 kernel: Normal empty Jul 16 12:31:31.883838 kernel: Movable zone start for each node Jul 16 12:31:31.883847 kernel: Early memory node ranges Jul 16 12:31:31.883855 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jul 16 12:31:31.883864 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Jul 16 12:31:31.883873 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Jul 16 12:31:31.883882 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 16 12:31:31.883890 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 16 12:31:31.883899 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Jul 16 12:31:31.883911 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 16 12:31:31.883919 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 16 12:31:31.883928 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 16 12:31:31.883937 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 16 12:31:31.883945 
kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 16 12:31:31.883954 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 16 12:31:31.883963 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 16 12:31:31.883971 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 16 12:31:31.883980 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 16 12:31:31.883992 kernel: TSC deadline timer available Jul 16 12:31:31.884000 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Jul 16 12:31:31.884009 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jul 16 12:31:31.884017 kernel: Booting paravirtualized kernel on KVM Jul 16 12:31:31.884026 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 16 12:31:31.884035 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1 Jul 16 12:31:31.884044 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144 Jul 16 12:31:31.884053 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152 Jul 16 12:31:31.884061 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jul 16 12:31:31.884073 kernel: kvm-guest: stealtime: cpu 0, msr 7da1c0c0 Jul 16 12:31:31.884082 kernel: kvm-guest: PV spinlocks enabled Jul 16 12:31:31.884091 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 16 12:31:31.884099 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Jul 16 12:31:31.884108 kernel: Policy zone: DMA32 Jul 16 12:31:31.884118 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=3fdbb2e3469f90ee764ea38c6fc4332d45967696e3c4fd4a8c65f8d0125b235b Jul 16 12:31:31.884127 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 16 12:31:31.884135 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 16 12:31:31.884147 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 16 12:31:31.884155 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 16 12:31:31.884165 kernel: Memory: 1903832K/2096616K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47476K init, 4104K bss, 192524K reserved, 0K cma-reserved) Jul 16 12:31:31.884173 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jul 16 12:31:31.884182 kernel: ftrace: allocating 34607 entries in 136 pages Jul 16 12:31:31.884191 kernel: ftrace: allocated 136 pages with 2 groups Jul 16 12:31:31.884199 kernel: rcu: Hierarchical RCU implementation. Jul 16 12:31:31.884209 kernel: rcu: RCU event tracing is enabled. Jul 16 12:31:31.884218 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jul 16 12:31:31.884229 kernel: Rude variant of Tasks RCU enabled. Jul 16 12:31:31.884238 kernel: Tracing variant of Tasks RCU enabled. Jul 16 12:31:31.884247 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 16 12:31:31.884256 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jul 16 12:31:31.884264 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Jul 16 12:31:31.884273 kernel: random: crng init done Jul 16 12:31:31.884282 kernel: Console: colour VGA+ 80x25 Jul 16 12:31:31.884302 kernel: printk: console [tty0] enabled Jul 16 12:31:31.884312 kernel: printk: console [ttyS0] enabled Jul 16 12:31:31.884321 kernel: ACPI: Core revision 20210730 Jul 16 12:31:31.884330 kernel: APIC: Switch to symmetric I/O mode setup Jul 16 12:31:31.884339 kernel: x2apic enabled Jul 16 12:31:31.884351 kernel: Switched APIC routing to physical x2apic. Jul 16 12:31:31.884360 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Jul 16 12:31:31.884370 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608) Jul 16 12:31:31.884379 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 16 12:31:31.884388 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jul 16 12:31:31.884400 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jul 16 12:31:31.884409 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 16 12:31:31.884418 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Jul 16 12:31:31.884428 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jul 16 12:31:31.884437 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jul 16 12:31:31.884446 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jul 16 12:31:31.884455 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jul 16 12:31:31.884464 kernel: RETBleed: Mitigation: Enhanced IBRS Jul 16 12:31:31.884473 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 16 12:31:31.884482 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Jul 16 12:31:31.884491 kernel: TAA: Mitigation: Clear CPU buffers Jul 16 12:31:31.884503 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 16 12:31:31.884513 kernel: GDS: Unknown: Dependent on hypervisor status Jul 16 12:31:31.884522 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 16 12:31:31.884531 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 16 12:31:31.884540 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 16 12:31:31.884549 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 16 12:31:31.884558 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jul 16 12:31:31.884567 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jul 16 12:31:31.884576 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jul 16 12:31:31.884585 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jul 16 12:31:31.884598 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 16 12:31:31.884607 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jul 16 12:31:31.884615 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jul 16 12:31:31.884624 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jul 16 12:31:31.884634 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8 Jul 16 12:31:31.884643 kernel: x86/fpu: Enabled xstate features 
0x2e7, context size is 2440 bytes, using 'compacted' format. Jul 16 12:31:31.884652 kernel: Freeing SMP alternatives memory: 32K Jul 16 12:31:31.884661 kernel: pid_max: default: 32768 minimum: 301 Jul 16 12:31:31.884685 kernel: LSM: Security Framework initializing Jul 16 12:31:31.884694 kernel: SELinux: Initializing. Jul 16 12:31:31.884712 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 16 12:31:31.884723 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 16 12:31:31.884737 kernel: smpboot: CPU0: Intel Xeon Processor (Cascadelake) (family: 0x6, model: 0x55, stepping: 0x6) Jul 16 12:31:31.884746 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jul 16 12:31:31.884755 kernel: signal: max sigframe size: 3632 Jul 16 12:31:31.884765 kernel: rcu: Hierarchical SRCU implementation. Jul 16 12:31:31.884774 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 16 12:31:31.884783 kernel: smp: Bringing up secondary CPUs ... Jul 16 12:31:31.884793 kernel: x86: Booting SMP configuration: Jul 16 12:31:31.884802 kernel: .... node #0, CPUs: #1 Jul 16 12:31:31.884811 kernel: kvm-clock: cpu 1, msr 4619b041, secondary cpu clock Jul 16 12:31:31.884824 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jul 16 12:31:31.884833 kernel: kvm-guest: stealtime: cpu 1, msr 7da5c0c0 Jul 16 12:31:31.884842 kernel: smp: Brought up 1 node, 2 CPUs Jul 16 12:31:31.884851 kernel: smpboot: Max logical packages: 16 Jul 16 12:31:31.884861 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS) Jul 16 12:31:31.884870 kernel: devtmpfs: initialized Jul 16 12:31:31.884879 kernel: x86/mm: Memory block size: 128MB Jul 16 12:31:31.884889 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 16 12:31:31.884898 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jul 16 12:31:31.884908 kernel: pinctrl core: initialized pinctrl subsystem Jul 16 12:31:31.884920 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 16 12:31:31.884929 kernel: audit: initializing netlink subsys (disabled) Jul 16 12:31:31.884938 kernel: audit: type=2000 audit(1752669090.582:1): state=initialized audit_enabled=0 res=1 Jul 16 12:31:31.884948 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 16 12:31:31.884957 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 16 12:31:31.884966 kernel: cpuidle: using governor menu Jul 16 12:31:31.884975 kernel: ACPI: bus type PCI registered Jul 16 12:31:31.884984 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 16 12:31:31.884993 kernel: dca service started, version 1.12.1 Jul 16 12:31:31.885006 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jul 16 12:31:31.885015 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Jul 16 12:31:31.885024 kernel: PCI: Using configuration type 1 for base access Jul 16 12:31:31.885034 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 16 12:31:31.885043 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 16 12:31:31.885052 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 16 12:31:31.885062 kernel: ACPI: Added _OSI(Module Device) Jul 16 12:31:31.885071 kernel: ACPI: Added _OSI(Processor Device) Jul 16 12:31:31.885080 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 16 12:31:31.885093 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 16 12:31:31.885112 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 16 12:31:31.885121 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 16 12:31:31.885129 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 16 12:31:31.885138 kernel: ACPI: Interpreter enabled Jul 16 12:31:31.885146 kernel: ACPI: PM: (supports S0 S5) Jul 16 12:31:31.885154 kernel: ACPI: Using IOAPIC for interrupt routing Jul 16 12:31:31.885162 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 16 12:31:31.885171 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 16 12:31:31.885182 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 16 12:31:31.885370 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 16 12:31:31.885463 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 16 12:31:31.885551 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 16 12:31:31.885563 kernel: PCI host bridge to bus 0000:00 Jul 16 12:31:31.885663 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 16 12:31:31.885766 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 16 12:31:31.885844 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 16 12:31:31.885923 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Jul 16 12:31:31.885999 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 16 12:31:31.886075 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Jul 16 12:31:31.886152 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 16 12:31:31.886255 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jul 16 12:31:31.886360 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Jul 16 12:31:31.886451 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Jul 16 12:31:31.886542 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Jul 16 12:31:31.886639 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Jul 16 12:31:31.886753 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 16 12:31:31.886858 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jul 16 12:31:31.886954 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Jul 16 12:31:31.887061 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jul 16 12:31:31.887153 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Jul 16 12:31:31.887250 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jul 16 12:31:31.887340 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Jul 16 12:31:31.887476 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jul 16 12:31:31.887566 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Jul 16 12:31:31.887713 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jul 16 12:31:31.887812 kernel: pci 
0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Jul 16 12:31:31.887925 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jul 16 12:31:31.888028 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Jul 16 12:31:31.888126 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jul 16 12:31:31.888214 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Jul 16 12:31:31.888331 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jul 16 12:31:31.888447 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Jul 16 12:31:31.888573 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jul 16 12:31:31.888680 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Jul 16 12:31:31.888789 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Jul 16 12:31:31.888883 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Jul 16 12:31:31.888977 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Jul 16 12:31:31.889074 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jul 16 12:31:31.889165 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jul 16 12:31:31.889252 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Jul 16 12:31:31.889340 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Jul 16 12:31:31.889435 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jul 16 12:31:31.889524 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 16 12:31:31.889629 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jul 16 12:31:31.889799 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Jul 16 12:31:31.889888 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Jul 16 12:31:31.889990 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jul 16 12:31:31.890078 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jul 16 12:31:31.890185 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Jul 16 12:31:31.890275 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Jul 16 12:31:31.890371 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jul 16 12:31:31.890459 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jul 16 12:31:31.890547 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jul 16 12:31:31.890648 kernel: pci_bus 0000:02: extended config space not accessible Jul 16 12:31:31.892838 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Jul 16 12:31:31.892940 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Jul 16 12:31:31.893039 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jul 16 12:31:31.893154 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jul 16 12:31:31.893256 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Jul 16 12:31:31.893348 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Jul 16 12:31:31.893439 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jul 16 12:31:31.893526 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jul 16 12:31:31.893614 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jul 16 12:31:31.893758 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Jul 16 12:31:31.893853 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Jul 16 12:31:31.893944 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jul 16 12:31:31.894031 kernel: pci 
0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jul 16 12:31:31.894116 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jul 16 12:31:31.894238 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jul 16 12:31:31.894325 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jul 16 12:31:31.894412 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jul 16 12:31:31.894508 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jul 16 12:31:31.894593 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jul 16 12:31:31.894689 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jul 16 12:31:31.894785 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jul 16 12:31:31.894872 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jul 16 12:31:31.894958 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jul 16 12:31:31.895063 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jul 16 12:31:31.895149 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jul 16 12:31:31.895240 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jul 16 12:31:31.895327 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jul 16 12:31:31.895413 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jul 16 12:31:31.895500 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jul 16 12:31:31.895513 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 16 12:31:31.895523 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 16 12:31:31.895532 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 16 12:31:31.895542 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 16 12:31:31.895555 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 16 12:31:31.895564 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 16 12:31:31.895574 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 16 12:31:31.895583 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 16 12:31:31.895592 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 16 12:31:31.895601 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jul 16 12:31:31.895611 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 16 12:31:31.895620 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jul 16 12:31:31.895629 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 16 12:31:31.895641 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 16 12:31:31.895651 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 16 12:31:31.895660 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 16 12:31:31.895677 kernel: iommu: Default domain type: Translated Jul 16 12:31:31.895686 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 16 12:31:31.895783 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 16 12:31:31.895870 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 16 12:31:31.895956 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 16 12:31:31.895972 kernel: vgaarb: loaded Jul 16 12:31:31.895981 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 16 12:31:31.895991 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 16 12:31:31.896001 kernel: PTP clock support registered Jul 16 12:31:31.896010 kernel: PCI: Using ACPI for IRQ routing Jul 16 12:31:31.896020 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 16 12:31:31.896029 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jul 16 12:31:31.896038 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Jul 16 12:31:31.896047 kernel: clocksource: Switched to clocksource kvm-clock Jul 16 12:31:31.896059 kernel: VFS: Disk quotas dquot_6.6.0 Jul 16 12:31:31.896069 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 16 12:31:31.896078 kernel: pnp: PnP ACPI init Jul 16 12:31:31.896173 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jul 16 12:31:31.896186 kernel: pnp: PnP ACPI: found 5 devices Jul 16 12:31:31.896196 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 16 12:31:31.896206 kernel: NET: Registered PF_INET protocol family Jul 16 12:31:31.896215 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 16 12:31:31.896228 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 16 12:31:31.896238 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 16 12:31:31.896247 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 16 12:31:31.896257 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Jul 16 12:31:31.896266 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 16 12:31:31.896276 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 16 12:31:31.896285 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 16 12:31:31.896295 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 16 12:31:31.896304 kernel: NET: Registered PF_XDP protocol family Jul 16 12:31:31.896396 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Jul 16 12:31:31.896484 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 16 12:31:31.896572 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 16 12:31:31.896661 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jul 16 12:31:31.905794 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jul 16 12:31:31.905878 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jul 16 12:31:31.905966 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jul 16 12:31:31.906044 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jul 16 12:31:31.906125 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Jul 16 12:31:31.906203 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Jul 16 12:31:31.906282 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Jul 16 12:31:31.906360 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Jul 16 12:31:31.906438 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Jul 16 12:31:31.906569 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Jul 16 12:31:31.906651 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Jul 16 12:31:31.906770 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Jul 16 
12:31:31.906855 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jul 16 12:31:31.906937 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jul 16 12:31:31.907017 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jul 16 12:31:31.907097 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jul 16 12:31:31.907176 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jul 16 12:31:31.907255 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jul 16 12:31:31.907355 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jul 16 12:31:31.907441 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jul 16 12:31:31.907529 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jul 16 12:31:31.907618 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jul 16 12:31:31.907726 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jul 16 12:31:31.907818 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jul 16 12:31:31.907904 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jul 16 12:31:31.907998 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jul 16 12:31:31.908086 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jul 16 12:31:31.908172 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jul 16 12:31:31.908259 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jul 16 12:31:31.908346 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jul 16 12:31:31.908432 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jul 16 12:31:31.908524 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jul 16 12:31:31.908617 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jul 16 12:31:31.908720 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jul 16 12:31:31.908807 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jul 16 12:31:31.908897 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jul 16 12:31:31.908985 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jul 16 12:31:31.909073 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jul 16 12:31:31.909159 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jul 16 12:31:31.909251 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jul 16 12:31:31.909338 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jul 16 12:31:31.909428 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jul 16 12:31:31.909516 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jul 16 12:31:31.909611 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jul 16 12:31:31.909717 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jul 16 12:31:31.909807 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jul 16 12:31:31.909898 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 16 12:31:31.909980 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 16 12:31:31.910060 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 16 12:31:31.910144 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Jul 16 12:31:31.910223 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jul 16 12:31:31.910302 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Jul 16 12:31:31.910409 kernel: pci_bus 0000:01: resource 0 [io 
0x1000-0x1fff] Jul 16 12:31:31.910497 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Jul 16 12:31:31.910586 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Jul 16 12:31:31.910688 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Jul 16 12:31:31.910788 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Jul 16 12:31:31.910873 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Jul 16 12:31:31.910959 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jul 16 12:31:31.911049 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Jul 16 12:31:31.911138 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Jul 16 12:31:31.911222 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jul 16 12:31:31.911316 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jul 16 12:31:31.911400 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Jul 16 12:31:31.911490 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jul 16 12:31:31.911582 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Jul 16 12:31:31.911674 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Jul 16 12:31:31.911769 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jul 16 12:31:31.911861 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Jul 16 12:31:31.911946 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Jul 16 12:31:31.912030 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jul 16 12:31:31.912126 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jul 16 12:31:31.912211 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jul 16 12:31:31.912294 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jul 16 12:31:31.912390 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jul 16 12:31:31.912473 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jul 16 12:31:31.912556 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jul 16 12:31:31.912570 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 16 12:31:31.912580 kernel: PCI: CLS 0 bytes, default 64 Jul 16 12:31:31.912591 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 16 12:31:31.912601 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jul 16 12:31:31.912611 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 16 12:31:31.912625 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Jul 16 12:31:31.912636 kernel: Initialise system trusted keyrings Jul 16 12:31:31.912646 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 16 12:31:31.912656 kernel: Key type asymmetric registered Jul 16 12:31:31.912666 kernel: Asymmetric key parser 'x509' registered Jul 16 12:31:31.916715 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 16 12:31:31.916726 kernel: io scheduler mq-deadline registered Jul 16 12:31:31.916736 kernel: io scheduler kyber registered Jul 16 12:31:31.916747 kernel: io scheduler bfq registered Jul 16 12:31:31.916865 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jul 16 12:31:31.916961 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jul 16 12:31:31.917052 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ 
Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 16 12:31:31.917143 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jul 16 12:31:31.917234 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jul 16 12:31:31.917323 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 16 12:31:31.917417 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jul 16 12:31:31.917505 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jul 16 12:31:31.917597 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 16 12:31:31.917704 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jul 16 12:31:31.917796 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jul 16 12:31:31.917884 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 16 12:31:31.917979 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jul 16 12:31:31.918068 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jul 16 12:31:31.918154 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 16 12:31:31.918252 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jul 16 12:31:31.918343 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jul 16 12:31:31.918432 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 16 12:31:31.918530 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jul 16 12:31:31.918625 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jul 16 12:31:31.918733 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 16 12:31:31.918824 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jul 16 12:31:31.918912 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jul 16 12:31:31.919006 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 16 12:31:31.919024 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 16 12:31:31.919038 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 16 12:31:31.919049 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 16 12:31:31.919059 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 16 12:31:31.919069 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 16 12:31:31.919080 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 16 12:31:31.919090 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 16 12:31:31.919100 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 16 12:31:31.919110 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 16 12:31:31.919210 kernel: rtc_cmos 00:03: RTC can wake from S4 Jul 16 12:31:31.919295 kernel: rtc_cmos 00:03: registered as rtc0 Jul 16 12:31:31.919381 kernel: rtc_cmos 00:03: setting system clock to 2025-07-16T12:31:31 UTC (1752669091) Jul 16 12:31:31.919466 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jul 16 12:31:31.919479 kernel: intel_pstate: CPU model not supported Jul 16 12:31:31.919490 kernel: 
NET: Registered PF_INET6 protocol family Jul 16 12:31:31.919500 kernel: Segment Routing with IPv6 Jul 16 12:31:31.919510 kernel: In-situ OAM (IOAM) with IPv6 Jul 16 12:31:31.919524 kernel: NET: Registered PF_PACKET protocol family Jul 16 12:31:31.919534 kernel: Key type dns_resolver registered Jul 16 12:31:31.919544 kernel: IPI shorthand broadcast: enabled Jul 16 12:31:31.919554 kernel: sched_clock: Marking stable (705336714, 115921904)->(1030034203, -208775585) Jul 16 12:31:31.919564 kernel: registered taskstats version 1 Jul 16 12:31:31.919574 kernel: Loading compiled-in X.509 certificates Jul 16 12:31:31.919585 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.188-flatcar: c4b3a19d3bd6de5654dc12075428550cf6251289' Jul 16 12:31:31.919595 kernel: Key type .fscrypt registered Jul 16 12:31:31.919604 kernel: Key type fscrypt-provisioning registered Jul 16 12:31:31.919618 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 16 12:31:31.919628 kernel: ima: Allocated hash algorithm: sha1 Jul 16 12:31:31.919638 kernel: ima: No architecture policies found Jul 16 12:31:31.919648 kernel: clk: Disabling unused clocks Jul 16 12:31:31.919658 kernel: Freeing unused kernel image (initmem) memory: 47476K Jul 16 12:31:31.919675 kernel: Write protecting the kernel read-only data: 28672k Jul 16 12:31:31.919685 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 16 12:31:31.919701 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Jul 16 12:31:31.919715 kernel: Run /init as init process Jul 16 12:31:31.919725 kernel: with arguments: Jul 16 12:31:31.919735 kernel: /init Jul 16 12:31:31.919745 kernel: with environment: Jul 16 12:31:31.919755 kernel: HOME=/ Jul 16 12:31:31.919764 kernel: TERM=linux Jul 16 12:31:31.919774 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 16 12:31:31.919790 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 16 12:31:31.919808 systemd[1]: Detected virtualization kvm. Jul 16 12:31:31.919819 systemd[1]: Detected architecture x86-64. Jul 16 12:31:31.919829 systemd[1]: Running in initrd. Jul 16 12:31:31.919839 systemd[1]: No hostname configured, using default hostname. Jul 16 12:31:31.919849 systemd[1]: Hostname set to . Jul 16 12:31:31.919860 systemd[1]: Initializing machine ID from VM UUID. Jul 16 12:31:31.919870 systemd[1]: Queued start job for default target initrd.target. Jul 16 12:31:31.919881 systemd[1]: Started systemd-ask-password-console.path. Jul 16 12:31:31.919894 systemd[1]: Reached target cryptsetup.target. Jul 16 12:31:31.919905 systemd[1]: Reached target paths.target. Jul 16 12:31:31.919915 systemd[1]: Reached target slices.target. Jul 16 12:31:31.919925 systemd[1]: Reached target swap.target. Jul 16 12:31:31.919935 systemd[1]: Reached target timers.target. Jul 16 12:31:31.919946 systemd[1]: Listening on iscsid.socket. Jul 16 12:31:31.919956 systemd[1]: Listening on iscsiuio.socket. Jul 16 12:31:31.919966 systemd[1]: Listening on systemd-journald-audit.socket. Jul 16 12:31:31.919980 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 16 12:31:31.919991 systemd[1]: Listening on systemd-journald.socket. Jul 16 12:31:31.920004 systemd[1]: Listening on systemd-networkd.socket. 
Jul 16 12:31:31.920015 systemd[1]: Listening on systemd-udevd-control.socket. Jul 16 12:31:31.920025 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 16 12:31:31.920036 systemd[1]: Reached target sockets.target. Jul 16 12:31:31.920046 systemd[1]: Starting kmod-static-nodes.service... Jul 16 12:31:31.920057 systemd[1]: Finished network-cleanup.service. Jul 16 12:31:31.920067 systemd[1]: Starting systemd-fsck-usr.service... Jul 16 12:31:31.920080 systemd[1]: Starting systemd-journald.service... Jul 16 12:31:31.920091 systemd[1]: Starting systemd-modules-load.service... Jul 16 12:31:31.920101 systemd[1]: Starting systemd-resolved.service... Jul 16 12:31:31.920111 systemd[1]: Starting systemd-vconsole-setup.service... Jul 16 12:31:31.920122 systemd[1]: Finished kmod-static-nodes.service. Jul 16 12:31:31.920132 systemd[1]: Finished systemd-fsck-usr.service. Jul 16 12:31:31.920142 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 16 12:31:31.920153 systemd[1]: Started systemd-resolved.service. Jul 16 12:31:31.920163 kernel: audit: type=1130 audit(1752669091.918:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:31.920185 systemd-journald[201]: Journal started Jul 16 12:31:31.920245 systemd-journald[201]: Runtime Journal (/run/log/journal/2e46cc9b9fab4c949cbe596ab2e80457) is 4.7M, max 38.1M, 33.3M free. Jul 16 12:31:31.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:31.871698 systemd-modules-load[202]: Inserted module 'overlay' Jul 16 12:31:31.923274 systemd[1]: Started systemd-journald.service. Jul 16 12:31:31.894581 systemd-resolved[203]: Positive Trust Anchors: Jul 16 12:31:31.928545 kernel: audit: type=1130 audit(1752669091.922:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:31.928575 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 16 12:31:31.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:31.894594 systemd-resolved[203]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 16 12:31:31.894630 systemd-resolved[203]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 16 12:31:31.906734 systemd-resolved[203]: Defaulting to hostname 'linux'. Jul 16 12:31:31.923905 systemd[1]: Finished systemd-vconsole-setup.service. Jul 16 12:31:31.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 16 12:31:31.935558 kernel: audit: type=1130 audit(1752669091.931:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:31.935590 kernel: Bridge firewalling registered Jul 16 12:31:31.934604 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 16 12:31:31.934950 systemd-modules-load[202]: Inserted module 'br_netfilter' Jul 16 12:31:31.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:31.939370 systemd[1]: Reached target nss-lookup.target. Jul 16 12:31:31.939814 kernel: audit: type=1130 audit(1752669091.935:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:31.940945 systemd[1]: Starting dracut-cmdline-ask.service... Jul 16 12:31:31.962175 kernel: SCSI subsystem initialized Jul 16 12:31:31.960706 systemd[1]: Finished dracut-cmdline-ask.service. Jul 16 12:31:31.961847 systemd[1]: Starting dracut-cmdline.service... Jul 16 12:31:31.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:31.965685 kernel: audit: type=1130 audit(1752669091.960:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:31.977014 dracut-cmdline[218]: dracut-dracut-053 Jul 16 12:31:31.979561 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=3fdbb2e3469f90ee764ea38c6fc4332d45967696e3c4fd4a8c65f8d0125b235b Jul 16 12:31:31.983150 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 16 12:31:31.983174 kernel: device-mapper: uevent: version 1.0.3 Jul 16 12:31:31.991999 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 16 12:31:31.995386 systemd-modules-load[202]: Inserted module 'dm_multipath' Jul 16 12:31:31.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:31.996481 systemd[1]: Finished systemd-modules-load.service. Jul 16 12:31:31.997564 systemd[1]: Starting systemd-sysctl.service... Jul 16 12:31:32.001989 kernel: audit: type=1130 audit(1752669091.996:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:32.008157 systemd[1]: Finished systemd-sysctl.service. 
Jul 16 12:31:32.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:32.012694 kernel: audit: type=1130 audit(1752669092.007:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:32.047700 kernel: Loading iSCSI transport class v2.0-870. Jul 16 12:31:32.066706 kernel: iscsi: registered transport (tcp) Jul 16 12:31:32.091700 kernel: iscsi: registered transport (qla4xxx) Jul 16 12:31:32.091728 kernel: QLogic iSCSI HBA Driver Jul 16 12:31:32.129334 systemd[1]: Finished dracut-cmdline.service. Jul 16 12:31:32.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:32.130542 systemd[1]: Starting dracut-pre-udev.service... Jul 16 12:31:32.133816 kernel: audit: type=1130 audit(1752669092.129:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:32.181771 kernel: raid6: avx512x4 gen() 18188 MB/s Jul 16 12:31:32.198772 kernel: raid6: avx512x4 xor() 8307 MB/s Jul 16 12:31:32.215718 kernel: raid6: avx512x2 gen() 17975 MB/s Jul 16 12:31:32.232786 kernel: raid6: avx512x2 xor() 23060 MB/s Jul 16 12:31:32.249917 kernel: raid6: avx512x1 gen() 18048 MB/s Jul 16 12:31:32.266733 kernel: raid6: avx512x1 xor() 19744 MB/s Jul 16 12:31:32.283733 kernel: raid6: avx2x4 gen() 17875 MB/s Jul 16 12:31:32.300765 kernel: raid6: avx2x4 xor() 7186 MB/s Jul 16 12:31:32.317742 kernel: raid6: avx2x2 gen() 17872 MB/s Jul 16 12:31:32.334745 kernel: raid6: avx2x2 xor() 16177 MB/s Jul 16 12:31:32.351741 kernel: raid6: avx2x1 gen() 13541 MB/s Jul 16 12:31:32.368761 kernel: raid6: avx2x1 xor() 13915 MB/s Jul 16 12:31:32.385729 kernel: raid6: sse2x4 gen() 8266 MB/s Jul 16 12:31:32.402745 kernel: raid6: sse2x4 xor() 5476 MB/s Jul 16 12:31:32.419735 kernel: raid6: sse2x2 gen() 9076 MB/s Jul 16 12:31:32.436731 kernel: raid6: sse2x2 xor() 5491 MB/s Jul 16 12:31:32.453734 kernel: raid6: sse2x1 gen() 8544 MB/s Jul 16 12:31:32.471359 kernel: raid6: sse2x1 xor() 4172 MB/s Jul 16 12:31:32.471438 kernel: raid6: using algorithm avx512x4 gen() 18188 MB/s Jul 16 12:31:32.471473 kernel: raid6: .... xor() 8307 MB/s, rmw enabled Jul 16 12:31:32.472091 kernel: raid6: using avx512x2 recovery algorithm Jul 16 12:31:32.486725 kernel: xor: automatically using best checksumming function avx Jul 16 12:31:32.587718 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 16 12:31:32.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:32.602142 systemd[1]: Finished dracut-pre-udev.service. Jul 16 12:31:32.608078 kernel: audit: type=1130 audit(1752669092.602:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:32.602000 audit: BPF prog-id=7 op=LOAD Jul 16 12:31:32.602000 audit: BPF prog-id=8 op=LOAD Jul 16 12:31:32.604182 systemd[1]: Starting systemd-udevd.service... 
Jul 16 12:31:32.619106 systemd-udevd[400]: Using default interface naming scheme 'v252'. Jul 16 12:31:32.624243 systemd[1]: Started systemd-udevd.service. Jul 16 12:31:32.628389 systemd[1]: Starting dracut-pre-trigger.service... Jul 16 12:31:32.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:32.641809 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Jul 16 12:31:32.670518 systemd[1]: Finished dracut-pre-trigger.service. Jul 16 12:31:32.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:32.671644 systemd[1]: Starting systemd-udev-trigger.service... Jul 16 12:31:32.721410 systemd[1]: Finished systemd-udev-trigger.service. Jul 16 12:31:32.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:32.792689 kernel: cryptd: max_cpu_qlen set to 1000 Jul 16 12:31:32.799867 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jul 16 12:31:32.821417 kernel: AVX2 version of gcm_enc/dec engaged. Jul 16 12:31:32.821434 kernel: AES CTR mode by8 optimization enabled Jul 16 12:31:32.821447 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 16 12:31:32.821458 kernel: GPT:17805311 != 125829119 Jul 16 12:31:32.821468 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 16 12:31:32.821478 kernel: GPT:17805311 != 125829119 Jul 16 12:31:32.821493 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 16 12:31:32.821504 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 16 12:31:32.846552 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 16 12:31:32.902006 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (453) Jul 16 12:31:32.902035 kernel: ACPI: bus type USB registered Jul 16 12:31:32.902049 kernel: usbcore: registered new interface driver usbfs Jul 16 12:31:32.902069 kernel: usbcore: registered new interface driver hub Jul 16 12:31:32.902081 kernel: usbcore: registered new device driver usb Jul 16 12:31:32.902093 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jul 16 12:31:32.937686 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jul 16 12:31:32.937841 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jul 16 12:31:32.937960 kernel: libata version 3.00 loaded. 
Jul 16 12:31:32.937977 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jul 16 12:31:32.938088 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jul 16 12:31:32.938205 kernel: ahci 0000:00:1f.2: version 3.0 Jul 16 12:31:32.960871 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 16 12:31:32.960915 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jul 16 12:31:32.961061 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 16 12:31:32.961167 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jul 16 12:31:32.961285 kernel: hub 1-0:1.0: USB hub found Jul 16 12:31:32.961418 kernel: hub 1-0:1.0: 4 ports detected Jul 16 12:31:32.961539 kernel: scsi host0: ahci Jul 16 12:31:32.961685 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jul 16 12:31:32.961924 kernel: hub 2-0:1.0: USB hub found Jul 16 12:31:32.962079 kernel: scsi host1: ahci Jul 16 12:31:32.962197 kernel: hub 2-0:1.0: 4 ports detected Jul 16 12:31:32.962315 kernel: scsi host2: ahci Jul 16 12:31:32.962431 kernel: scsi host3: ahci Jul 16 12:31:32.962542 kernel: scsi host4: ahci Jul 16 12:31:32.962665 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 16 12:31:32.962689 kernel: scsi host5: ahci Jul 16 12:31:32.962810 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Jul 16 12:31:32.962824 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Jul 16 12:31:32.962836 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Jul 16 12:31:32.962848 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Jul 16 12:31:32.962865 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Jul 16 12:31:32.962877 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Jul 16 12:31:32.962889 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 16 12:31:32.912717 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 16 12:31:32.913140 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 16 12:31:32.964380 disk-uuid[509]: Primary Header is updated. Jul 16 12:31:32.964380 disk-uuid[509]: Secondary Entries is updated. Jul 16 12:31:32.964380 disk-uuid[509]: Secondary Header is updated. Jul 16 12:31:32.919803 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 16 12:31:32.930611 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 16 12:31:32.932703 systemd[1]: Starting disk-uuid.service... 
Jul 16 12:31:33.173743 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jul 16 12:31:33.274722 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 16 12:31:33.277736 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 16 12:31:33.285715 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jul 16 12:31:33.285794 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 16 12:31:33.290640 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 16 12:31:33.292728 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 16 12:31:33.326696 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 16 12:31:33.335235 kernel: usbcore: registered new interface driver usbhid Jul 16 12:31:33.335319 kernel: usbhid: USB HID core driver Jul 16 12:31:33.349615 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jul 16 12:31:33.349700 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jul 16 12:31:33.953524 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 16 12:31:33.954520 disk-uuid[514]: The operation has completed successfully. Jul 16 12:31:33.991795 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 16 12:31:33.992442 systemd[1]: Finished disk-uuid.service. Jul 16 12:31:33.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:33.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:33.997067 systemd[1]: Starting verity-setup.service... Jul 16 12:31:34.019928 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 16 12:31:34.068800 systemd[1]: Found device dev-mapper-usr.device. Jul 16 12:31:34.070769 systemd[1]: Mounting sysusr-usr.mount... Jul 16 12:31:34.072440 systemd[1]: Finished verity-setup.service. Jul 16 12:31:34.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:34.155210 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 16 12:31:34.156357 systemd[1]: Mounted sysusr-usr.mount. Jul 16 12:31:34.157887 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 16 12:31:34.159743 systemd[1]: Starting ignition-setup.service... Jul 16 12:31:34.162413 systemd[1]: Starting parse-ip-for-networkd.service... Jul 16 12:31:34.189032 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 16 12:31:34.189079 kernel: BTRFS info (device vda6): using free space tree Jul 16 12:31:34.189092 kernel: BTRFS info (device vda6): has skinny extents Jul 16 12:31:34.202331 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 16 12:31:34.207327 systemd[1]: Finished ignition-setup.service. Jul 16 12:31:34.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:34.208593 systemd[1]: Starting ignition-fetch-offline.service... 
Jul 16 12:31:34.308145 systemd[1]: Finished parse-ip-for-networkd.service. Jul 16 12:31:34.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:34.309000 audit: BPF prog-id=9 op=LOAD Jul 16 12:31:34.310458 systemd[1]: Starting systemd-networkd.service... Jul 16 12:31:34.310995 ignition[635]: Ignition 2.14.0 Jul 16 12:31:34.311021 ignition[635]: Stage: fetch-offline Jul 16 12:31:34.311099 ignition[635]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 16 12:31:34.311142 ignition[635]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 16 12:31:34.312532 ignition[635]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 16 12:31:34.313083 ignition[635]: parsed url from cmdline: "" Jul 16 12:31:34.313089 ignition[635]: no config URL provided Jul 16 12:31:34.313097 ignition[635]: reading system config file "/usr/lib/ignition/user.ign" Jul 16 12:31:34.313108 ignition[635]: no config at "/usr/lib/ignition/user.ign" Jul 16 12:31:34.313125 ignition[635]: failed to fetch config: resource requires networking Jul 16 12:31:34.317489 systemd[1]: Finished ignition-fetch-offline.service. Jul 16 12:31:34.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:34.313248 ignition[635]: Ignition finished successfully Jul 16 12:31:34.335840 systemd-networkd[711]: lo: Link UP Jul 16 12:31:34.335853 systemd-networkd[711]: lo: Gained carrier Jul 16 12:31:34.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:34.336407 systemd-networkd[711]: Enumeration completed Jul 16 12:31:34.336497 systemd[1]: Started systemd-networkd.service. Jul 16 12:31:34.337141 systemd[1]: Reached target network.target. Jul 16 12:31:34.337291 systemd-networkd[711]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 16 12:31:34.342218 systemd[1]: Starting ignition-fetch.service... Jul 16 12:31:34.343620 systemd-networkd[711]: eth0: Link UP Jul 16 12:31:34.343632 systemd-networkd[711]: eth0: Gained carrier Jul 16 12:31:34.351455 ignition[713]: Ignition 2.14.0 Jul 16 12:31:34.351474 ignition[713]: Stage: fetch Jul 16 12:31:34.352832 systemd[1]: Starting iscsiuio.service... Jul 16 12:31:34.351598 ignition[713]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 16 12:31:34.351618 ignition[713]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 16 12:31:34.352641 ignition[713]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 16 12:31:34.352797 ignition[713]: parsed url from cmdline: "" Jul 16 12:31:34.352802 ignition[713]: no config URL provided Jul 16 12:31:34.352809 ignition[713]: reading system config file "/usr/lib/ignition/user.ign" Jul 16 12:31:34.352819 ignition[713]: no config at "/usr/lib/ignition/user.ign" Jul 16 12:31:34.355402 ignition[713]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... 
Jul 16 12:31:34.355453 ignition[713]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jul 16 12:31:34.356401 ignition[713]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jul 16 12:31:34.364718 ignition[713]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable Jul 16 12:31:34.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:34.366234 systemd[1]: Started iscsiuio.service. Jul 16 12:31:34.367598 systemd[1]: Starting iscsid.service... Jul 16 12:31:34.371836 systemd-networkd[711]: eth0: DHCPv4 address 10.244.89.194/30, gateway 10.244.89.193 acquired from 10.244.89.193 Jul 16 12:31:34.373457 iscsid[721]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 16 12:31:34.373457 iscsid[721]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 16 12:31:34.373457 iscsid[721]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 16 12:31:34.373457 iscsid[721]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 16 12:31:34.373457 iscsid[721]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 16 12:31:34.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:34.378922 iscsid[721]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 16 12:31:34.375296 systemd[1]: Started iscsid.service. Jul 16 12:31:34.377031 systemd[1]: Starting dracut-initqueue.service... Jul 16 12:31:34.398781 systemd[1]: Finished dracut-initqueue.service. Jul 16 12:31:34.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:34.399432 systemd[1]: Reached target remote-fs-pre.target. Jul 16 12:31:34.400162 systemd[1]: Reached target remote-cryptsetup.target. Jul 16 12:31:34.401022 systemd[1]: Reached target remote-fs.target. Jul 16 12:31:34.402854 systemd[1]: Starting dracut-pre-mount.service... Jul 16 12:31:34.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:34.412060 systemd[1]: Finished dracut-pre-mount.service. 
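The iscsid warnings above are harmless on this host (no software-iSCSI targets are configured), and the message itself spells out the remedy: create /etc/iscsi/initiatorname.iscsi containing a single InitiatorName= line in IQN form. A minimal sketch, with a made-up IQN for illustration:

    from pathlib import Path

    # Illustrative only: write an initiator name in the IQN form the iscsid
    # message describes (iqn.yyyy-mm.<reversed domain name>[:identifier]).
    iqn = "iqn.2025-07.com.example:srv-f25or"  # example value, not from this host
    Path("/etc/iscsi").mkdir(parents=True, exist_ok=True)
    Path("/etc/iscsi/initiatorname.iscsi").write_text(f"InitiatorName={iqn}\n")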
Jul 16 12:31:34.565284 ignition[713]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2 Jul 16 12:31:34.602032 ignition[713]: GET result: OK Jul 16 12:31:34.602611 ignition[713]: parsing config with SHA512: 074a06050fe67d8c7a0f2f4e9c054917353e5ff4a078fec094c07485abe982b303067594fd72937ae5a2253c6f4fa7575419d81d7104a907bff5206f30dcf838 Jul 16 12:31:34.623119 unknown[713]: fetched base config from "system" Jul 16 12:31:34.624214 unknown[713]: fetched base config from "system" Jul 16 12:31:34.624791 unknown[713]: fetched user config from "openstack" Jul 16 12:31:34.626032 ignition[713]: fetch: fetch complete Jul 16 12:31:34.626553 ignition[713]: fetch: fetch passed Jul 16 12:31:34.627079 ignition[713]: Ignition finished successfully Jul 16 12:31:34.630499 systemd[1]: Finished ignition-fetch.service. Jul 16 12:31:34.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:34.632340 systemd[1]: Starting ignition-kargs.service... Jul 16 12:31:34.645986 ignition[736]: Ignition 2.14.0 Jul 16 12:31:34.646566 ignition[736]: Stage: kargs Jul 16 12:31:34.647050 ignition[736]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 16 12:31:34.647596 ignition[736]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 16 12:31:34.648709 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 16 12:31:34.650470 ignition[736]: kargs: kargs passed Jul 16 12:31:34.650964 ignition[736]: Ignition finished successfully Jul 16 12:31:34.652148 systemd[1]: Finished ignition-kargs.service. Jul 16 12:31:34.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:34.653452 systemd[1]: Starting ignition-disks.service... Jul 16 12:31:34.661997 ignition[741]: Ignition 2.14.0 Jul 16 12:31:34.662006 ignition[741]: Stage: disks Jul 16 12:31:34.662125 ignition[741]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 16 12:31:34.662151 ignition[741]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 16 12:31:34.663172 ignition[741]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 16 12:31:34.664377 ignition[741]: disks: disks passed Jul 16 12:31:34.664440 ignition[741]: Ignition finished successfully Jul 16 12:31:34.665780 systemd[1]: Finished ignition-disks.service. Jul 16 12:31:34.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:34.666772 systemd[1]: Reached target initrd-root-device.target. Jul 16 12:31:34.667533 systemd[1]: Reached target local-fs-pre.target. Jul 16 12:31:34.667909 systemd[1]: Reached target local-fs.target. Jul 16 12:31:34.668660 systemd[1]: Reached target sysinit.target. Jul 16 12:31:34.669021 systemd[1]: Reached target basic.target. Jul 16 12:31:34.670611 systemd[1]: Starting systemd-fsck-root.service... 
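Ignition's fetch stage above fails on attempt #1 because eth0 has no address yet, then succeeds on attempt #2 once the DHCP lease arrives. A rough sketch of that retry loop against the OpenStack metadata endpoint named in the log (the timeout and back-off values are assumptions, not Ignition's exact behaviour):

    import time
    import urllib.request

    # Rough sketch: keep retrying the OpenStack user_data URL until
    # networking is up, the way the ignition[713] lines above retry their GET.
    URL = "http://169.254.169.254/openstack/latest/user_data"

    def fetch_user_data(attempts: int = 10, delay: float = 1.0) -> bytes:
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(URL, timeout=5) as resp:
                    return resp.read()
            except OSError as err:
                print(f"GET {URL}: attempt #{attempt} failed: {err}")
                time.sleep(delay)
        raise RuntimeError("metadata service unreachable")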
Jul 16 12:31:34.684619 systemd-fsck[748]: ROOT: clean, 619/1628000 files, 124060/1617920 blocks Jul 16 12:31:34.687520 systemd[1]: Finished systemd-fsck-root.service. Jul 16 12:31:34.688746 systemd[1]: Mounting sysroot.mount... Jul 16 12:31:34.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:34.699705 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 16 12:31:34.700699 systemd[1]: Mounted sysroot.mount. Jul 16 12:31:34.701117 systemd[1]: Reached target initrd-root-fs.target. Jul 16 12:31:34.702800 systemd[1]: Mounting sysroot-usr.mount... Jul 16 12:31:34.703640 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 16 12:31:34.704365 systemd[1]: Starting flatcar-openstack-hostname.service... Jul 16 12:31:34.706542 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 16 12:31:34.706569 systemd[1]: Reached target ignition-diskful.target. Jul 16 12:31:34.708498 systemd[1]: Mounted sysroot-usr.mount. Jul 16 12:31:34.710864 systemd[1]: Starting initrd-setup-root.service... Jul 16 12:31:34.717788 initrd-setup-root[759]: cut: /sysroot/etc/passwd: No such file or directory Jul 16 12:31:34.726262 initrd-setup-root[767]: cut: /sysroot/etc/group: No such file or directory Jul 16 12:31:34.731679 initrd-setup-root[775]: cut: /sysroot/etc/shadow: No such file or directory Jul 16 12:31:34.740686 initrd-setup-root[784]: cut: /sysroot/etc/gshadow: No such file or directory Jul 16 12:31:34.785665 systemd[1]: Finished initrd-setup-root.service. Jul 16 12:31:34.786983 systemd[1]: Starting ignition-mount.service... Jul 16 12:31:34.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:34.789429 systemd[1]: Starting sysroot-boot.service... Jul 16 12:31:34.804179 bash[802]: umount: /sysroot/usr/share/oem: not mounted. Jul 16 12:31:34.816346 coreos-metadata[754]: Jul 16 12:31:34.816 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 16 12:31:34.818446 ignition[803]: INFO : Ignition 2.14.0 Jul 16 12:31:34.819070 ignition[803]: INFO : Stage: mount Jul 16 12:31:34.819971 ignition[803]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 16 12:31:34.820532 ignition[803]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 16 12:31:34.822770 ignition[803]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 16 12:31:34.825140 ignition[803]: INFO : mount: mount passed Jul 16 12:31:34.825802 ignition[803]: INFO : Ignition finished successfully Jul 16 12:31:34.826660 systemd[1]: Finished sysroot-boot.service. Jul 16 12:31:34.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:34.827614 systemd[1]: Finished ignition-mount.service. 
Jul 16 12:31:34.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:34.831032 coreos-metadata[754]: Jul 16 12:31:34.830 INFO Fetch successful Jul 16 12:31:34.831032 coreos-metadata[754]: Jul 16 12:31:34.830 INFO wrote hostname srv-f25or.gb1.brightbox.com to /sysroot/etc/hostname Jul 16 12:31:34.835495 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jul 16 12:31:34.835778 systemd[1]: Finished flatcar-openstack-hostname.service. Jul 16 12:31:34.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:34.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:35.088017 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 16 12:31:35.101817 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (811) Jul 16 12:31:35.104870 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 16 12:31:35.104941 kernel: BTRFS info (device vda6): using free space tree Jul 16 12:31:35.104976 kernel: BTRFS info (device vda6): has skinny extents Jul 16 12:31:35.112819 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 16 12:31:35.115908 systemd[1]: Starting ignition-files.service... Jul 16 12:31:35.135875 ignition[831]: INFO : Ignition 2.14.0 Jul 16 12:31:35.136518 ignition[831]: INFO : Stage: files Jul 16 12:31:35.137055 ignition[831]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 16 12:31:35.137600 ignition[831]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 16 12:31:35.139493 ignition[831]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 16 12:31:35.142171 ignition[831]: DEBUG : files: compiled without relabeling support, skipping Jul 16 12:31:35.143855 ignition[831]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 16 12:31:35.144402 ignition[831]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 16 12:31:35.150456 ignition[831]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 16 12:31:35.151077 ignition[831]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 16 12:31:35.152563 unknown[831]: wrote ssh authorized keys file for user: core Jul 16 12:31:35.153153 ignition[831]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 16 12:31:35.153955 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 16 12:31:35.154515 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 16 12:31:35.154515 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 16 12:31:35.154515 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 16 12:31:35.394657 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 16 12:31:36.273062 systemd-networkd[711]: eth0: Gained IPv6LL Jul 16 12:31:36.400613 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 16 12:31:36.402255 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 16 12:31:36.402255 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 16 12:31:36.402255 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 16 12:31:36.402255 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 16 12:31:36.402255 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 16 12:31:36.402255 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 16 12:31:36.402255 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 16 12:31:36.402255 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 16 12:31:36.407295 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 16 12:31:36.407295 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 16 12:31:36.407295 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 16 12:31:36.407295 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 16 12:31:36.407295 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 16 12:31:36.407295 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 16 12:31:37.028258 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 16 12:31:37.784344 systemd-networkd[711]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:1670:24:19ff:fef4:59c2/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:1670:24:19ff:fef4:59c2/64 assigned by NDisc. Jul 16 12:31:37.784365 systemd-networkd[711]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
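The systemd-networkd notice above simply records that the DHCPv6-style /128 address duplicates the one already generated from the router advertisement, and it names two remedies: change the NDisc-generated address with IPv6Token=, or stop generating it with UseAutonomousPrefix=no. A hedged sketch of the latter as a drop-in for the default network unit (drop-in path and file name are assumptions; the option itself is quoted from the log and belongs to the [IPv6AcceptRA] section):

    from pathlib import Path

    # Hedged sketch: a drop-in that turns off SLAAC address generation for
    # the interface matched by zz-default.network, per the hint in the log.
    # Directory and file name are illustrative.
    dropin_dir = Path("/etc/systemd/network/zz-default.network.d")
    dropin_dir.mkdir(parents=True, exist_ok=True)
    (dropin_dir / "90-ipv6.conf").write_text(
        "[IPv6AcceptRA]\n"
        "UseAutonomousPrefix=no\n"
    )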
Jul 16 12:31:38.347906 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 16 12:31:38.349874 ignition[831]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Jul 16 12:31:38.349874 ignition[831]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Jul 16 12:31:38.349874 ignition[831]: INFO : files: op(d): [started] processing unit "containerd.service" Jul 16 12:31:38.352715 ignition[831]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 16 12:31:38.352715 ignition[831]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 16 12:31:38.352715 ignition[831]: INFO : files: op(d): [finished] processing unit "containerd.service" Jul 16 12:31:38.352715 ignition[831]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jul 16 12:31:38.352715 ignition[831]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 16 12:31:38.352715 ignition[831]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 16 12:31:38.352715 ignition[831]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jul 16 12:31:38.352715 ignition[831]: INFO : files: op(11): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 16 12:31:38.352715 ignition[831]: INFO : files: op(11): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 16 12:31:38.352715 ignition[831]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 16 12:31:38.352715 ignition[831]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 16 12:31:38.364623 ignition[831]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 16 12:31:38.364623 ignition[831]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 16 12:31:38.364623 ignition[831]: INFO : files: files passed Jul 16 12:31:38.364623 ignition[831]: INFO : Ignition finished successfully Jul 16 12:31:38.374631 kernel: kauditd_printk_skb: 26 callbacks suppressed Jul 16 12:31:38.374661 kernel: audit: type=1130 audit(1752669098.366:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.365967 systemd[1]: Finished ignition-files.service. Jul 16 12:31:38.369936 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 16 12:31:38.372258 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
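Everything the files stage writes above is driven by the user config fetched earlier from the metadata service; the log prints only the resulting operations, not the config itself. Purely as an illustration of the shape such a config takes (Ignition spec 3.x JSON; the version, file payload, and drop-in contents below are assumptions, not this host's actual configuration):

    import json

    # Purely illustrative: the rough shape of an Ignition (spec 3.x) config
    # that would produce a written file and a containerd drop-in like those
    # above. Contents are placeholders, not the config this host received.
    config = {
        "ignition": {"version": "3.3.0"},
        "storage": {
            "files": [
                {
                    "path": "/etc/flatcar/update.conf",
                    "mode": 420,  # 0644
                    "contents": {"source": "data:,GROUP%3Dstable%0A"},  # example payload
                }
            ]
        },
        "systemd": {
            "units": [
                {
                    "name": "containerd.service",
                    "dropins": [
                        {
                            "name": "10-use-cgroupfs.conf",
                            "contents": "[Service]\n# cgroup-driver overrides would go here\n",
                        }
                    ],
                }
            ]
        },
    }
    print(json.dumps(config, indent=2))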
Jul 16 12:31:38.383608 kernel: audit: type=1130 audit(1752669098.376:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.383639 kernel: audit: type=1131 audit(1752669098.376:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.373226 systemd[1]: Starting ignition-quench.service... Jul 16 12:31:38.376869 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 16 12:31:38.376962 systemd[1]: Finished ignition-quench.service. Jul 16 12:31:38.386810 initrd-setup-root-after-ignition[856]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 16 12:31:38.388342 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 16 12:31:38.397625 systemd[1]: Reached target ignition-complete.target. Jul 16 12:31:38.400925 kernel: audit: type=1130 audit(1752669098.396:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.404085 systemd[1]: Starting initrd-parse-etc.service... Jul 16 12:31:38.424133 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 16 12:31:38.424847 systemd[1]: Finished initrd-parse-etc.service. Jul 16 12:31:38.430945 kernel: audit: type=1130 audit(1752669098.424:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.430979 kernel: audit: type=1131 audit(1752669098.424:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.425457 systemd[1]: Reached target initrd-fs.target. Jul 16 12:31:38.431268 systemd[1]: Reached target initrd.target. Jul 16 12:31:38.431959 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 16 12:31:38.433102 systemd[1]: Starting dracut-pre-pivot.service... 
Jul 16 12:31:38.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.446102 systemd[1]: Finished dracut-pre-pivot.service. Jul 16 12:31:38.449834 kernel: audit: type=1130 audit(1752669098.445:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.450609 systemd[1]: Starting initrd-cleanup.service... Jul 16 12:31:38.461245 systemd[1]: Stopped target nss-lookup.target. Jul 16 12:31:38.462204 systemd[1]: Stopped target remote-cryptsetup.target. Jul 16 12:31:38.463100 systemd[1]: Stopped target timers.target. Jul 16 12:31:38.463917 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 16 12:31:38.464475 systemd[1]: Stopped dracut-pre-pivot.service. Jul 16 12:31:38.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.468268 systemd[1]: Stopped target initrd.target. Jul 16 12:31:38.468682 kernel: audit: type=1131 audit(1752669098.464:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.469147 systemd[1]: Stopped target basic.target. Jul 16 12:31:38.469998 systemd[1]: Stopped target ignition-complete.target. Jul 16 12:31:38.470858 systemd[1]: Stopped target ignition-diskful.target. Jul 16 12:31:38.471706 systemd[1]: Stopped target initrd-root-device.target. Jul 16 12:31:38.472546 systemd[1]: Stopped target remote-fs.target. Jul 16 12:31:38.473372 systemd[1]: Stopped target remote-fs-pre.target. Jul 16 12:31:38.474285 systemd[1]: Stopped target sysinit.target. Jul 16 12:31:38.475184 systemd[1]: Stopped target local-fs.target. Jul 16 12:31:38.476003 systemd[1]: Stopped target local-fs-pre.target. Jul 16 12:31:38.476875 systemd[1]: Stopped target swap.target. Jul 16 12:31:38.477650 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 16 12:31:38.478229 systemd[1]: Stopped dracut-pre-mount.service. Jul 16 12:31:38.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.481861 systemd[1]: Stopped target cryptsetup.target. Jul 16 12:31:38.482346 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 16 12:31:38.485905 kernel: audit: type=1131 audit(1752669098.478:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.485935 kernel: audit: type=1131 audit(1752669098.482:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.482463 systemd[1]: Stopped dracut-initqueue.service. 
Jul 16 12:31:38.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.483151 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 16 12:31:38.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.483255 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 16 12:31:38.486377 systemd[1]: ignition-files.service: Deactivated successfully. Jul 16 12:31:38.486475 systemd[1]: Stopped ignition-files.service. Jul 16 12:31:38.488238 systemd[1]: Stopping ignition-mount.service... Jul 16 12:31:38.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.503961 iscsid[721]: iscsid shutting down. Jul 16 12:31:38.490998 systemd[1]: Stopping iscsid.service... Jul 16 12:31:38.491343 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 16 12:31:38.491454 systemd[1]: Stopped kmod-static-nodes.service. Jul 16 12:31:38.492872 systemd[1]: Stopping sysroot-boot.service... Jul 16 12:31:38.493376 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 16 12:31:38.493539 systemd[1]: Stopped systemd-udev-trigger.service. Jul 16 12:31:38.509157 ignition[869]: INFO : Ignition 2.14.0 Jul 16 12:31:38.509157 ignition[869]: INFO : Stage: umount Jul 16 12:31:38.509157 ignition[869]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 16 12:31:38.509157 ignition[869]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 16 12:31:38.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.494114 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jul 16 12:31:38.516562 ignition[869]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 16 12:31:38.516562 ignition[869]: INFO : umount: umount passed Jul 16 12:31:38.516562 ignition[869]: INFO : Ignition finished successfully Jul 16 12:31:38.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.494217 systemd[1]: Stopped dracut-pre-trigger.service. Jul 16 12:31:38.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.499547 systemd[1]: iscsid.service: Deactivated successfully. Jul 16 12:31:38.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.499665 systemd[1]: Stopped iscsid.service. Jul 16 12:31:38.500334 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 16 12:31:38.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.500428 systemd[1]: Finished initrd-cleanup.service. Jul 16 12:31:38.508461 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 16 12:31:38.512085 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 16 12:31:38.512183 systemd[1]: Stopped sysroot-boot.service. Jul 16 12:31:38.512790 systemd[1]: Stopping iscsiuio.service... Jul 16 12:31:38.516372 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 16 12:31:38.516467 systemd[1]: Stopped iscsiuio.service. Jul 16 12:31:38.517028 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 16 12:31:38.517116 systemd[1]: Stopped ignition-mount.service. Jul 16 12:31:38.517722 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 16 12:31:38.517758 systemd[1]: Stopped ignition-disks.service. Jul 16 12:31:38.518358 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 16 12:31:38.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.518395 systemd[1]: Stopped ignition-kargs.service. Jul 16 12:31:38.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.519015 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 16 12:31:38.519050 systemd[1]: Stopped ignition-fetch.service. Jul 16 12:31:38.519654 systemd[1]: Stopped target network.target. Jul 16 12:31:38.520290 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Jul 16 12:31:38.520330 systemd[1]: Stopped ignition-fetch-offline.service. Jul 16 12:31:38.520999 systemd[1]: Stopped target paths.target. Jul 16 12:31:38.521616 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 16 12:31:38.523735 systemd[1]: Stopped systemd-ask-password-console.path. Jul 16 12:31:38.524218 systemd[1]: Stopped target slices.target. Jul 16 12:31:38.524781 systemd[1]: Stopped target sockets.target. Jul 16 12:31:38.525467 systemd[1]: iscsid.socket: Deactivated successfully. Jul 16 12:31:38.525500 systemd[1]: Closed iscsid.socket. Jul 16 12:31:38.526030 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 16 12:31:38.526066 systemd[1]: Closed iscsiuio.socket. Jul 16 12:31:38.526649 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 16 12:31:38.526708 systemd[1]: Stopped ignition-setup.service. Jul 16 12:31:38.527238 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 16 12:31:38.527271 systemd[1]: Stopped initrd-setup-root.service. Jul 16 12:31:38.528011 systemd[1]: Stopping systemd-networkd.service... Jul 16 12:31:38.529098 systemd[1]: Stopping systemd-resolved.service... Jul 16 12:31:38.532808 systemd-networkd[711]: eth0: DHCPv6 lease lost Jul 16 12:31:38.538406 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 16 12:31:38.538659 systemd[1]: Stopped systemd-resolved.service. Jul 16 12:31:38.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.548987 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 16 12:31:38.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.549105 systemd[1]: Stopped systemd-networkd.service. Jul 16 12:31:38.549853 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 16 12:31:38.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.551000 audit: BPF prog-id=6 op=UNLOAD Jul 16 12:31:38.549892 systemd[1]: Closed systemd-networkd.socket. Jul 16 12:31:38.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.551236 systemd[1]: Stopping network-cleanup.service... Jul 16 12:31:38.553000 audit: BPF prog-id=9 op=UNLOAD Jul 16 12:31:38.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.551653 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 16 12:31:38.551728 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 16 12:31:38.552606 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 16 12:31:38.552654 systemd[1]: Stopped systemd-sysctl.service. Jul 16 12:31:38.553353 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 16 12:31:38.553397 systemd[1]: Stopped systemd-modules-load.service. Jul 16 12:31:38.557922 systemd[1]: Stopping systemd-udevd.service... 
Jul 16 12:31:38.559425 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 16 12:31:38.562452 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 16 12:31:38.562547 systemd[1]: Stopped network-cleanup.service. Jul 16 12:31:38.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.565350 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 16 12:31:38.565484 systemd[1]: Stopped systemd-udevd.service. Jul 16 12:31:38.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.566574 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 16 12:31:38.566616 systemd[1]: Closed systemd-udevd-control.socket. Jul 16 12:31:38.567443 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 16 12:31:38.567475 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 16 12:31:38.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.568072 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 16 12:31:38.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.568108 systemd[1]: Stopped dracut-pre-udev.service. Jul 16 12:31:38.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.568838 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 16 12:31:38.568872 systemd[1]: Stopped dracut-cmdline.service. Jul 16 12:31:38.569465 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 16 12:31:38.569498 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 16 12:31:38.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.570866 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 16 12:31:38.572213 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 16 12:31:38.572260 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 16 12:31:38.587757 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 16 12:31:38.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:38.588466 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 16 12:31:38.589068 systemd[1]: Reached target initrd-switch-root.target. Jul 16 12:31:38.591254 systemd[1]: Starting initrd-switch-root.service... 
Jul 16 12:31:38.599740 systemd[1]: Switching root. Jul 16 12:31:38.601000 audit: BPF prog-id=5 op=UNLOAD Jul 16 12:31:38.601000 audit: BPF prog-id=4 op=UNLOAD Jul 16 12:31:38.601000 audit: BPF prog-id=3 op=UNLOAD Jul 16 12:31:38.602000 audit: BPF prog-id=8 op=UNLOAD Jul 16 12:31:38.602000 audit: BPF prog-id=7 op=UNLOAD Jul 16 12:31:38.619248 systemd-journald[201]: Journal stopped Jul 16 12:31:41.830488 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Jul 16 12:31:41.830605 kernel: SELinux: Class mctp_socket not defined in policy. Jul 16 12:31:41.830640 kernel: SELinux: Class anon_inode not defined in policy. Jul 16 12:31:41.830654 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 16 12:31:41.833751 kernel: SELinux: policy capability network_peer_controls=1 Jul 16 12:31:41.833781 kernel: SELinux: policy capability open_perms=1 Jul 16 12:31:41.833797 kernel: SELinux: policy capability extended_socket_class=1 Jul 16 12:31:41.833814 kernel: SELinux: policy capability always_check_network=0 Jul 16 12:31:41.833834 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 16 12:31:41.833846 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 16 12:31:41.833865 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 16 12:31:41.833884 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 16 12:31:41.833898 systemd[1]: Successfully loaded SELinux policy in 50.198ms. Jul 16 12:31:41.833920 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.926ms. Jul 16 12:31:41.833936 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 16 12:31:41.833951 systemd[1]: Detected virtualization kvm. Jul 16 12:31:41.833964 systemd[1]: Detected architecture x86-64. Jul 16 12:31:41.833988 systemd[1]: Detected first boot. Jul 16 12:31:41.834007 systemd[1]: Hostname set to . Jul 16 12:31:41.834025 systemd[1]: Initializing machine ID from VM UUID. Jul 16 12:31:41.834039 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 16 12:31:41.834053 systemd[1]: Populated /etc with preset unit settings. Jul 16 12:31:41.834067 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 16 12:31:41.834085 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 16 12:31:41.834100 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 16 12:31:41.834121 systemd[1]: Queued start job for default target multi-user.target. Jul 16 12:31:41.834139 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 16 12:31:41.834163 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 16 12:31:41.834176 systemd[1]: Created slice system-addon\x2drun.slice. Jul 16 12:31:41.834191 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Jul 16 12:31:41.834212 systemd[1]: Created slice system-getty.slice. 
Jul 16 12:31:41.834225 systemd[1]: Created slice system-modprobe.slice. Jul 16 12:31:41.834239 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 16 12:31:41.834257 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 16 12:31:41.834271 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 16 12:31:41.834290 systemd[1]: Created slice user.slice. Jul 16 12:31:41.834304 systemd[1]: Started systemd-ask-password-console.path. Jul 16 12:31:41.834317 systemd[1]: Started systemd-ask-password-wall.path. Jul 16 12:31:41.834331 systemd[1]: Set up automount boot.automount. Jul 16 12:31:41.834345 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 16 12:31:41.834360 systemd[1]: Reached target integritysetup.target. Jul 16 12:31:41.834378 systemd[1]: Reached target remote-cryptsetup.target. Jul 16 12:31:41.834392 systemd[1]: Reached target remote-fs.target. Jul 16 12:31:41.834406 systemd[1]: Reached target slices.target. Jul 16 12:31:41.834419 systemd[1]: Reached target swap.target. Jul 16 12:31:41.834435 systemd[1]: Reached target torcx.target. Jul 16 12:31:41.834449 systemd[1]: Reached target veritysetup.target. Jul 16 12:31:41.834465 systemd[1]: Listening on systemd-coredump.socket. Jul 16 12:31:41.834478 systemd[1]: Listening on systemd-initctl.socket. Jul 16 12:31:41.834496 systemd[1]: Listening on systemd-journald-audit.socket. Jul 16 12:31:41.834510 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 16 12:31:41.834526 systemd[1]: Listening on systemd-journald.socket. Jul 16 12:31:41.834540 systemd[1]: Listening on systemd-networkd.socket. Jul 16 12:31:41.834553 systemd[1]: Listening on systemd-udevd-control.socket. Jul 16 12:31:41.834567 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 16 12:31:41.834580 systemd[1]: Listening on systemd-userdbd.socket. Jul 16 12:31:41.834593 systemd[1]: Mounting dev-hugepages.mount... Jul 16 12:31:41.834608 systemd[1]: Mounting dev-mqueue.mount... Jul 16 12:31:41.834622 systemd[1]: Mounting media.mount... Jul 16 12:31:41.834641 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 16 12:31:41.834655 systemd[1]: Mounting sys-kernel-debug.mount... Jul 16 12:31:41.834677 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 16 12:31:41.834691 systemd[1]: Mounting tmp.mount... Jul 16 12:31:41.834705 systemd[1]: Starting flatcar-tmpfiles.service... Jul 16 12:31:41.834719 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 16 12:31:41.834732 systemd[1]: Starting kmod-static-nodes.service... Jul 16 12:31:41.834752 systemd[1]: Starting modprobe@configfs.service... Jul 16 12:31:41.834766 systemd[1]: Starting modprobe@dm_mod.service... Jul 16 12:31:41.834786 systemd[1]: Starting modprobe@drm.service... Jul 16 12:31:41.834799 systemd[1]: Starting modprobe@efi_pstore.service... Jul 16 12:31:41.834813 systemd[1]: Starting modprobe@fuse.service... Jul 16 12:31:41.834826 systemd[1]: Starting modprobe@loop.service... Jul 16 12:31:41.834839 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 16 12:31:41.834853 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 16 12:31:41.834867 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jul 16 12:31:41.834880 systemd[1]: Starting systemd-journald.service... 
Jul 16 12:31:41.834894 systemd[1]: Starting systemd-modules-load.service... Jul 16 12:31:41.834912 systemd[1]: Starting systemd-network-generator.service... Jul 16 12:31:41.834926 systemd[1]: Starting systemd-remount-fs.service... Jul 16 12:31:41.834939 systemd[1]: Starting systemd-udev-trigger.service... Jul 16 12:31:41.834953 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 16 12:31:41.834966 systemd[1]: Mounted dev-hugepages.mount. Jul 16 12:31:41.834980 systemd[1]: Mounted dev-mqueue.mount. Jul 16 12:31:41.834993 kernel: fuse: init (API version 7.34) Jul 16 12:31:41.835006 systemd[1]: Mounted media.mount. Jul 16 12:31:41.835020 systemd[1]: Mounted sys-kernel-debug.mount. Jul 16 12:31:41.835038 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 16 12:31:41.835052 systemd[1]: Mounted tmp.mount. Jul 16 12:31:41.835065 systemd[1]: Finished kmod-static-nodes.service. Jul 16 12:31:41.835080 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 16 12:31:41.835106 systemd-journald[1008]: Journal started Jul 16 12:31:41.835177 systemd-journald[1008]: Runtime Journal (/run/log/journal/2e46cc9b9fab4c949cbe596ab2e80457) is 4.7M, max 38.1M, 33.3M free. Jul 16 12:31:41.826000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 16 12:31:41.826000 audit[1008]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc4f965e90 a2=4000 a3=7ffc4f965f2c items=0 ppid=1 pid=1008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:31:41.826000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 16 12:31:41.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:41.841509 systemd[1]: Finished modprobe@configfs.service. Jul 16 12:31:41.841569 systemd[1]: Started systemd-journald.service. Jul 16 12:31:41.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:41.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:41.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:41.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:41.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 16 12:31:41.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:41.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:41.840130 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 16 12:31:41.840289 systemd[1]: Finished modprobe@dm_mod.service. Jul 16 12:31:41.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:41.840952 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 16 12:31:41.841105 systemd[1]: Finished modprobe@drm.service. Jul 16 12:31:41.841725 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 16 12:31:41.841888 systemd[1]: Finished modprobe@efi_pstore.service. Jul 16 12:31:41.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:41.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:41.843910 systemd[1]: Finished systemd-network-generator.service. Jul 16 12:31:41.845564 systemd[1]: Finished systemd-remount-fs.service. Jul 16 12:31:41.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:41.846636 systemd[1]: Reached target network-pre.target. Jul 16 12:31:41.848693 systemd[1]: Mounting sys-kernel-config.mount... Jul 16 12:31:41.849134 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 16 12:31:41.884855 kernel: loop: module loaded Jul 16 12:31:41.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:41.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:41.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:41.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 16 12:31:41.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:41.855659 systemd[1]: Starting systemd-hwdb-update.service... Jul 16 12:31:41.857266 systemd[1]: Starting systemd-journal-flush.service... Jul 16 12:31:41.858794 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 16 12:31:41.860134 systemd[1]: Starting systemd-random-seed.service... Jul 16 12:31:41.868760 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 16 12:31:41.868951 systemd[1]: Finished modprobe@fuse.service. Jul 16 12:31:41.869586 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 16 12:31:41.869781 systemd[1]: Finished modprobe@loop.service. Jul 16 12:31:41.870872 systemd[1]: Finished systemd-modules-load.service. Jul 16 12:31:41.871680 systemd[1]: Mounted sys-kernel-config.mount. Jul 16 12:31:41.873794 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 16 12:31:41.876765 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 16 12:31:41.877891 systemd[1]: Starting systemd-sysctl.service... Jul 16 12:31:41.879636 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 16 12:31:41.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:41.894153 systemd-journald[1008]: Time spent on flushing to /var/log/journal/2e46cc9b9fab4c949cbe596ab2e80457 is 32.724ms for 1247 entries. Jul 16 12:31:41.894153 systemd-journald[1008]: System Journal (/var/log/journal/2e46cc9b9fab4c949cbe596ab2e80457) is 8.0M, max 584.8M, 576.8M free. Jul 16 12:31:41.932403 systemd-journald[1008]: Received client request to flush runtime journal. Jul 16 12:31:41.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:41.893099 systemd[1]: Finished systemd-random-seed.service. Jul 16 12:31:41.893603 systemd[1]: Reached target first-boot-complete.target. Jul 16 12:31:41.906538 systemd[1]: Finished systemd-sysctl.service. Jul 16 12:31:41.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:41.933286 systemd[1]: Finished systemd-journal-flush.service. Jul 16 12:31:41.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:41.989989 systemd[1]: Finished flatcar-tmpfiles.service. Jul 16 12:31:41.992516 systemd[1]: Starting systemd-sysusers.service... Jul 16 12:31:42.015587 systemd[1]: Finished systemd-sysusers.service. Jul 16 12:31:42.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 16 12:31:42.018744 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 16 12:31:42.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:42.030903 systemd[1]: Finished systemd-udev-trigger.service. Jul 16 12:31:42.032801 systemd[1]: Starting systemd-udev-settle.service... Jul 16 12:31:42.042718 udevadm[1067]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 16 12:31:42.054927 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 16 12:31:42.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:42.466595 systemd[1]: Finished systemd-hwdb-update.service. Jul 16 12:31:42.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:42.471114 systemd[1]: Starting systemd-udevd.service... Jul 16 12:31:42.498940 systemd-udevd[1070]: Using default interface naming scheme 'v252'. Jul 16 12:31:42.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:42.520496 systemd[1]: Started systemd-udevd.service. Jul 16 12:31:42.525389 systemd[1]: Starting systemd-networkd.service... Jul 16 12:31:42.535450 systemd[1]: Starting systemd-userdbd.service... Jul 16 12:31:42.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:42.582021 systemd[1]: Started systemd-userdbd.service. Jul 16 12:31:42.608044 systemd[1]: Found device dev-ttyS0.device. Jul 16 12:31:42.636022 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 16 12:31:42.672694 kernel: mousedev: PS/2 mouse device common for all mice Jul 16 12:31:42.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:42.680080 systemd-networkd[1084]: lo: Link UP Jul 16 12:31:42.680087 systemd-networkd[1084]: lo: Gained carrier Jul 16 12:31:42.680570 systemd-networkd[1084]: Enumeration completed Jul 16 12:31:42.680721 systemd[1]: Started systemd-networkd.service. Jul 16 12:31:42.681706 systemd-networkd[1084]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 16 12:31:42.683329 systemd-networkd[1084]: eth0: Link UP Jul 16 12:31:42.683424 systemd-networkd[1084]: eth0: Gained carrier Jul 16 12:31:42.688877 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 16 12:31:42.696753 kernel: ACPI: button: Power Button [PWRF] Jul 16 12:31:42.713847 systemd-networkd[1084]: eth0: DHCPv4 address 10.244.89.194/30, gateway 10.244.89.193 acquired from 10.244.89.193 Jul 16 12:31:42.725000 audit[1082]: AVC avc: denied { confidentiality } for pid=1082 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 16 12:31:42.725000 audit[1082]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=560592d84de0 a1=338ac a2=7f080dde8bc5 a3=5 items=110 ppid=1070 pid=1082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:31:42.725000 audit: CWD cwd="/" Jul 16 12:31:42.725000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=1 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=2 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=3 name=(null) inode=14630 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=4 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=5 name=(null) inode=14631 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=6 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=7 name=(null) inode=14632 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=8 name=(null) inode=14632 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=9 name=(null) inode=14633 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=10 name=(null) inode=14632 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=11 name=(null) inode=14634 dev=00:0b mode=0100440 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=12 name=(null) inode=14632 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=13 name=(null) inode=14635 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=14 name=(null) inode=14632 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=15 name=(null) inode=14636 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=16 name=(null) inode=14632 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=17 name=(null) inode=14637 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=18 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=19 name=(null) inode=14638 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=20 name=(null) inode=14638 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=21 name=(null) inode=14639 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=22 name=(null) inode=14638 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=23 name=(null) inode=14640 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=24 name=(null) inode=14638 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=25 name=(null) inode=14641 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=26 name=(null) inode=14638 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=27 name=(null) inode=14642 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=28 name=(null) inode=14638 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=29 name=(null) inode=14643 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=30 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=31 name=(null) inode=14644 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=32 name=(null) inode=14644 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=33 name=(null) inode=14645 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=34 name=(null) inode=14644 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=35 name=(null) inode=14646 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=36 name=(null) inode=14644 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=37 name=(null) inode=14647 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=38 name=(null) inode=14644 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=39 name=(null) inode=14648 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=40 name=(null) inode=14644 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=41 name=(null) inode=14649 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=42 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=43 name=(null) inode=14650 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: 
PATH item=44 name=(null) inode=14650 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=45 name=(null) inode=14651 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=46 name=(null) inode=14650 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=47 name=(null) inode=14652 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=48 name=(null) inode=14650 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=49 name=(null) inode=14653 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=50 name=(null) inode=14650 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=51 name=(null) inode=14654 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=52 name=(null) inode=14650 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=53 name=(null) inode=14655 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=55 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=56 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=57 name=(null) inode=14657 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=58 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=59 name=(null) inode=14658 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=60 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=61 name=(null) inode=14659 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=62 name=(null) inode=14659 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=63 name=(null) inode=14660 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=64 name=(null) inode=14659 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=65 name=(null) inode=14661 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=66 name=(null) inode=14659 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=67 name=(null) inode=14662 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=68 name=(null) inode=14659 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=69 name=(null) inode=14663 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=70 name=(null) inode=14659 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=71 name=(null) inode=14664 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=72 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=73 name=(null) inode=14665 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=74 name=(null) inode=14665 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=75 name=(null) inode=14666 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=76 name=(null) inode=14665 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=77 name=(null) inode=14667 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=78 name=(null) inode=14665 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=79 name=(null) inode=14668 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=80 name=(null) inode=14665 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=81 name=(null) inode=14669 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=82 name=(null) inode=14665 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=83 name=(null) inode=14670 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=84 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=85 name=(null) inode=14671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=86 name=(null) inode=14671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=87 name=(null) inode=14672 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=88 name=(null) inode=14671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=89 name=(null) inode=14673 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=90 name=(null) inode=14671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=91 name=(null) inode=14674 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=92 name=(null) inode=14671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: 
PATH item=93 name=(null) inode=14675 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=94 name=(null) inode=14671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=95 name=(null) inode=14676 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=96 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=97 name=(null) inode=14677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=98 name=(null) inode=14677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=99 name=(null) inode=14678 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=100 name=(null) inode=14677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=101 name=(null) inode=14679 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=102 name=(null) inode=14677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=103 name=(null) inode=14680 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=104 name=(null) inode=14677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=105 name=(null) inode=14681 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=106 name=(null) inode=14677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=107 name=(null) inode=14682 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PATH item=109 name=(null) inode=14683 dev=00:07 mode=040755 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:31:42.725000 audit: PROCTITLE proctitle="(udev-worker)" Jul 16 12:31:42.758705 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jul 16 12:31:42.763693 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 16 12:31:42.780083 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 16 12:31:42.780231 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 16 12:31:42.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:42.933772 systemd[1]: Finished systemd-udev-settle.service. Jul 16 12:31:42.937560 systemd[1]: Starting lvm2-activation-early.service... Jul 16 12:31:42.958286 lvm[1100]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 16 12:31:42.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:42.987437 systemd[1]: Finished lvm2-activation-early.service. Jul 16 12:31:42.988823 systemd[1]: Reached target cryptsetup.target. Jul 16 12:31:42.992575 systemd[1]: Starting lvm2-activation.service... Jul 16 12:31:43.002234 lvm[1102]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 16 12:31:43.023271 systemd[1]: Finished lvm2-activation.service. Jul 16 12:31:43.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.025095 systemd[1]: Reached target local-fs-pre.target. Jul 16 12:31:43.026340 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 16 12:31:43.026403 systemd[1]: Reached target local-fs.target. Jul 16 12:31:43.027394 systemd[1]: Reached target machines.target. Jul 16 12:31:43.031721 systemd[1]: Starting ldconfig.service... Jul 16 12:31:43.033651 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 16 12:31:43.033745 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 16 12:31:43.037741 systemd[1]: Starting systemd-boot-update.service... Jul 16 12:31:43.040378 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 16 12:31:43.043117 systemd[1]: Starting systemd-machine-id-commit.service... Jul 16 12:31:43.045031 systemd[1]: Starting systemd-sysext.service... Jul 16 12:31:43.046777 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1105 (bootctl) Jul 16 12:31:43.048230 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 16 12:31:43.066207 systemd[1]: Unmounting usr-share-oem.mount... Jul 16 12:31:43.070377 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Jul 16 12:31:43.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.074854 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 16 12:31:43.075096 systemd[1]: Unmounted usr-share-oem.mount. Jul 16 12:31:43.092720 kernel: loop0: detected capacity change from 0 to 221472 Jul 16 12:31:43.094996 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 16 12:31:43.095663 systemd[1]: Finished systemd-machine-id-commit.service. Jul 16 12:31:43.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.113691 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 16 12:31:43.130688 kernel: loop1: detected capacity change from 0 to 221472 Jul 16 12:31:43.142840 (sd-sysext)[1121]: Using extensions 'kubernetes'. Jul 16 12:31:43.144347 (sd-sysext)[1121]: Merged extensions into '/usr'. Jul 16 12:31:43.165177 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 16 12:31:43.167266 systemd[1]: Mounting usr-share-oem.mount... Jul 16 12:31:43.168652 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 16 12:31:43.170866 systemd[1]: Starting modprobe@dm_mod.service... Jul 16 12:31:43.172635 systemd[1]: Starting modprobe@efi_pstore.service... Jul 16 12:31:43.174584 systemd[1]: Starting modprobe@loop.service... Jul 16 12:31:43.175049 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 16 12:31:43.175229 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 16 12:31:43.177913 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 16 12:31:43.179150 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 16 12:31:43.179458 systemd[1]: Finished modprobe@dm_mod.service. Jul 16 12:31:43.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.187346 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 16 12:31:43.187497 systemd[1]: Finished modprobe@efi_pstore.service. Jul 16 12:31:43.188145 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 16 12:31:43.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 16 12:31:43.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.191079 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 16 12:31:43.191236 systemd[1]: Finished modprobe@loop.service. Jul 16 12:31:43.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.191844 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 16 12:31:43.192085 systemd[1]: Mounted usr-share-oem.mount. Jul 16 12:31:43.194741 systemd[1]: Finished systemd-sysext.service. Jul 16 12:31:43.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.197085 systemd[1]: Starting ensure-sysext.service... Jul 16 12:31:43.199917 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 16 12:31:43.215189 systemd-fsck[1120]: fsck.fat 4.2 (2021-01-31) Jul 16 12:31:43.215189 systemd-fsck[1120]: /dev/vda1: 790 files, 120725/258078 clusters Jul 16 12:31:43.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.218380 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 16 12:31:43.221374 systemd[1]: Mounting boot.mount... Jul 16 12:31:43.226916 systemd[1]: Reloading. Jul 16 12:31:43.243554 systemd-tmpfiles[1136]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 16 12:31:43.248873 systemd-tmpfiles[1136]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 16 12:31:43.252846 systemd-tmpfiles[1136]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 16 12:31:43.283746 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2025-07-16T12:31:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 16 12:31:43.283773 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2025-07-16T12:31:43Z" level=info msg="torcx already run" Jul 16 12:31:43.384288 ldconfig[1104]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 16 12:31:43.430423 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 16 12:31:43.430448 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Jul 16 12:31:43.449630 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 16 12:31:43.516847 systemd[1]: Finished ldconfig.service. Jul 16 12:31:43.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.518387 kernel: kauditd_printk_skb: 208 callbacks suppressed Jul 16 12:31:43.518495 kernel: audit: type=1130 audit(1752669103.516:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.524950 systemd[1]: Mounted boot.mount. Jul 16 12:31:43.543652 kernel: audit: type=1130 audit(1752669103.537:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.538109 systemd[1]: Finished systemd-boot-update.service. Jul 16 12:31:43.546469 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 16 12:31:43.548168 systemd[1]: Starting modprobe@dm_mod.service... Jul 16 12:31:43.550420 systemd[1]: Starting modprobe@efi_pstore.service... Jul 16 12:31:43.552512 systemd[1]: Starting modprobe@loop.service... Jul 16 12:31:43.553307 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 16 12:31:43.553507 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 16 12:31:43.554593 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 16 12:31:43.554830 systemd[1]: Finished modprobe@efi_pstore.service. Jul 16 12:31:43.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.557982 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 16 12:31:43.558179 systemd[1]: Finished modprobe@dm_mod.service. Jul 16 12:31:43.563830 kernel: audit: type=1130 audit(1752669103.556:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.563901 kernel: audit: type=1131 audit(1752669103.556:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 16 12:31:43.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.566825 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 16 12:31:43.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.569833 systemd[1]: Finished modprobe@loop.service. Jul 16 12:31:43.572769 kernel: audit: type=1130 audit(1752669103.565:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.572823 kernel: audit: type=1131 audit(1752669103.565:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.574953 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 16 12:31:43.576514 systemd[1]: Starting modprobe@dm_mod.service... Jul 16 12:31:43.577690 kernel: audit: type=1130 audit(1752669103.572:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.584685 kernel: audit: type=1131 audit(1752669103.572:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.593618 systemd[1]: Starting modprobe@efi_pstore.service... Jul 16 12:31:43.595859 systemd[1]: Starting modprobe@loop.service... Jul 16 12:31:43.596315 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 16 12:31:43.596486 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 16 12:31:43.598293 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 16 12:31:43.598514 systemd[1]: Finished modprobe@dm_mod.service. Jul 16 12:31:43.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 16 12:31:43.601320 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 16 12:31:43.601526 systemd[1]: Finished modprobe@loop.service. Jul 16 12:31:43.604684 kernel: audit: type=1130 audit(1752669103.600:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.604734 kernel: audit: type=1131 audit(1752669103.600:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.609595 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 16 12:31:43.611975 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 16 12:31:43.612176 systemd[1]: Finished modprobe@efi_pstore.service. Jul 16 12:31:43.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.613002 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 16 12:31:43.615838 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 16 12:31:43.617288 systemd[1]: Starting modprobe@dm_mod.service... Jul 16 12:31:43.622161 systemd[1]: Starting modprobe@drm.service... Jul 16 12:31:43.624899 systemd[1]: Starting modprobe@efi_pstore.service... Jul 16 12:31:43.626662 systemd[1]: Starting modprobe@loop.service... Jul 16 12:31:43.627373 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 16 12:31:43.627556 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 16 12:31:43.629337 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 16 12:31:43.634117 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 16 12:31:43.634316 systemd[1]: Finished modprobe@dm_mod.service. Jul 16 12:31:43.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 16 12:31:43.635577 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 16 12:31:43.635750 systemd[1]: Finished modprobe@drm.service. Jul 16 12:31:43.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.636728 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 16 12:31:43.636874 systemd[1]: Finished modprobe@efi_pstore.service. Jul 16 12:31:43.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.638561 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 16 12:31:43.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.640582 systemd[1]: Finished ensure-sysext.service. Jul 16 12:31:43.645864 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 16 12:31:43.646029 systemd[1]: Finished modprobe@loop.service. Jul 16 12:31:43.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.646511 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 16 12:31:43.674851 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 16 12:31:43.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.679369 systemd[1]: Starting audit-rules.service... Jul 16 12:31:43.682291 systemd[1]: Starting clean-ca-certificates.service... Jul 16 12:31:43.685780 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 16 12:31:43.692006 systemd[1]: Starting systemd-resolved.service... Jul 16 12:31:43.695240 systemd[1]: Starting systemd-timesyncd.service... Jul 16 12:31:43.698096 systemd[1]: Starting systemd-update-utmp.service... Jul 16 12:31:43.705548 systemd[1]: Finished clean-ca-certificates.service. Jul 16 12:31:43.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 16 12:31:43.706110 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 16 12:31:43.708000 audit[1252]: SYSTEM_BOOT pid=1252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.711653 systemd[1]: Finished systemd-update-utmp.service. Jul 16 12:31:43.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.739232 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 16 12:31:43.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.741220 systemd[1]: Starting systemd-update-done.service... Jul 16 12:31:43.753041 systemd[1]: Finished systemd-update-done.service. Jul 16 12:31:43.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:31:43.770000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 16 12:31:43.770000 audit[1265]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdc87d7010 a2=420 a3=0 items=0 ppid=1241 pid=1265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:31:43.770000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 16 12:31:43.771353 augenrules[1265]: No rules Jul 16 12:31:43.771531 systemd[1]: Finished audit-rules.service. Jul 16 12:31:43.801022 systemd[1]: Started systemd-timesyncd.service. Jul 16 12:31:43.801507 systemd[1]: Reached target time-set.target. Jul 16 12:31:43.804276 systemd-resolved[1250]: Positive Trust Anchors: Jul 16 12:31:43.804585 systemd-resolved[1250]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 16 12:31:43.804771 systemd-resolved[1250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 16 12:31:43.810760 systemd-resolved[1250]: Using system hostname 'srv-f25or.gb1.brightbox.com'. Jul 16 12:31:43.812629 systemd[1]: Started systemd-resolved.service. Jul 16 12:31:43.813100 systemd[1]: Reached target network.target. Jul 16 12:31:43.813446 systemd[1]: Reached target nss-lookup.target. Jul 16 12:31:43.813824 systemd[1]: Reached target sysinit.target. Jul 16 12:31:43.814253 systemd[1]: Started motdgen.path. 
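The positive trust anchor logged by systemd-resolved above is the built-in DNSSEC DS record for the root zone. As a point of reference, a DS record carries a key tag, an algorithm number, a digest type and the digest itself; the short Python sketch below (an illustrative helper, not part of the boot process) just splits the logged record into those fields.

    # Split the DS record logged above into its standard DNSSEC fields.
    ds = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
    owner, rrclass, rrtype, key_tag, algorithm, digest_type, digest = ds.split()
    print(owner, key_tag, algorithm, digest_type)   # . 20326 8 2
    # key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (a SHA-256 digest of the root key)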
Jul 16 12:31:43.814624 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 16 12:31:43.815198 systemd[1]: Started logrotate.timer. Jul 16 12:31:43.815646 systemd[1]: Started mdadm.timer. Jul 16 12:31:43.815987 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 16 12:31:43.816342 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 16 12:31:43.816373 systemd[1]: Reached target paths.target. Jul 16 12:31:43.816722 systemd[1]: Reached target timers.target. Jul 16 12:31:43.817431 systemd[1]: Listening on dbus.socket. Jul 16 12:31:43.819231 systemd[1]: Starting docker.socket... Jul 16 12:31:43.821112 systemd[1]: Listening on sshd.socket. Jul 16 12:31:43.821711 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 16 12:31:43.822045 systemd[1]: Listening on docker.socket. Jul 16 12:31:43.822568 systemd[1]: Reached target sockets.target. Jul 16 12:31:43.823101 systemd[1]: Reached target basic.target. Jul 16 12:31:43.823744 systemd[1]: System is tainted: cgroupsv1 Jul 16 12:31:43.823802 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 16 12:31:43.823828 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 16 12:31:43.825134 systemd[1]: Starting containerd.service... Jul 16 12:31:43.827471 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Jul 16 12:31:43.829590 systemd[1]: Starting dbus.service... Jul 16 12:31:43.831592 systemd[1]: Starting enable-oem-cloudinit.service... Jul 16 12:31:43.833905 systemd[1]: Starting extend-filesystems.service... Jul 16 12:31:43.837895 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 16 12:31:43.839341 systemd[1]: Starting motdgen.service... Jul 16 12:31:43.851411 jq[1279]: false Jul 16 12:31:43.843685 systemd[1]: Starting prepare-helm.service... Jul 16 12:31:43.846992 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 16 12:31:43.850972 systemd[1]: Starting sshd-keygen.service... Jul 16 12:31:43.869199 systemd[1]: Starting systemd-logind.service... Jul 16 12:31:43.869653 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 16 12:31:43.869773 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 16 12:31:43.871130 extend-filesystems[1280]: Found loop1 Jul 16 12:31:43.871236 systemd[1]: Starting update-engine.service... Jul 16 12:31:43.872775 extend-filesystems[1280]: Found vda Jul 16 12:31:43.873186 systemd[1]: Starting update-ssh-keys-after-ignition.service... 
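Every entry in this dump shares one layout: a microsecond timestamp, the emitting unit or process (optionally with a PID in brackets), and the message text. A small parsing sketch for pulling that structure back out of lines like the ones above; the regex is an assumption about this dump's formatting, not any systemd interface.

    import re

    # Rough shape of an entry: "Jul 16 12:31:43.839341 systemd[1]: Starting motdgen.service..."
    LINE = re.compile(
        r"^(?P<stamp>\w{3} \d{1,2} \d{2}:\d{2}:\d{2}\.\d+) "
        r"(?P<source>[^:\[\s]+)(?:\[(?P<pid>\d+)\])?: "
        r"(?P<message>.*)$"
    )

    sample = "Jul 16 12:31:43.839341 systemd[1]: Starting motdgen.service..."
    m = LINE.match(sample)
    print(m.group("stamp"), "|", m.group("source"), "|", m.group("message"))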
Jul 16 12:31:43.873296 extend-filesystems[1280]: Found vda1 Jul 16 12:31:43.874017 extend-filesystems[1280]: Found vda2 Jul 16 12:31:43.874017 extend-filesystems[1280]: Found vda3 Jul 16 12:31:43.874017 extend-filesystems[1280]: Found usr Jul 16 12:31:43.874017 extend-filesystems[1280]: Found vda4 Jul 16 12:31:43.874017 extend-filesystems[1280]: Found vda6 Jul 16 12:31:43.874017 extend-filesystems[1280]: Found vda7 Jul 16 12:31:43.874017 extend-filesystems[1280]: Found vda9 Jul 16 12:31:43.874017 extend-filesystems[1280]: Checking size of /dev/vda9 Jul 16 12:31:43.879234 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 16 12:31:43.879490 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 16 12:31:43.890552 systemd-networkd[1084]: eth0: Gained IPv6LL Jul 16 12:31:43.901607 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 16 12:31:43.902316 systemd[1]: Reached target network-online.target. Jul 16 12:31:43.905110 systemd[1]: Starting kubelet.service... Jul 16 12:31:43.905843 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 16 12:31:43.906094 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 16 12:31:43.935200 jq[1297]: true Jul 16 12:31:43.935355 tar[1302]: linux-amd64/helm Jul 16 12:31:43.944693 jq[1316]: true Jul 16 12:31:43.956908 dbus-daemon[1277]: [system] SELinux support is enabled Jul 16 12:31:43.957111 systemd[1]: Started dbus.service. Jul 16 12:31:43.959790 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 16 12:31:43.959825 systemd[1]: Reached target system-config.target. Jul 16 12:31:43.960245 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 16 12:31:43.960265 systemd[1]: Reached target user-config.target. Jul 16 12:31:43.963512 systemd[1]: motdgen.service: Deactivated successfully. Jul 16 12:31:43.963779 dbus-daemon[1277]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1084 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 16 12:31:43.963780 systemd[1]: Finished motdgen.service. Jul 16 12:31:43.966362 dbus-daemon[1277]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 16 12:31:43.971114 systemd[1]: Starting systemd-hostnamed.service... Jul 16 12:31:43.995781 extend-filesystems[1280]: Resized partition /dev/vda9 Jul 16 12:31:44.000337 extend-filesystems[1335]: resize2fs 1.46.5 (30-Dec-2021) Jul 16 12:31:44.002690 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jul 16 12:31:44.035793 update_engine[1295]: I0716 12:31:44.034369 1295 main.cc:92] Flatcar Update Engine starting Jul 16 12:31:44.040424 systemd[1]: Started update-engine.service. Jul 16 12:31:44.043042 systemd[1]: Started locksmithd.service. Jul 16 12:31:44.044165 update_engine[1295]: I0716 12:31:44.043697 1295 update_check_scheduler.cc:74] Next update check in 3m16s Jul 16 12:31:44.062893 bash[1344]: Updated "/home/core/.ssh/authorized_keys" Jul 16 12:31:44.063645 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 16 12:31:44.073411 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
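The kernel line above records the online growth of /dev/vda9 from 1617920 to 15121403 ext4 blocks. With the 4 KiB block size that resize2fs reports a little further down, that is roughly 6.2 GiB expanding to about 57.7 GiB; a quick arithmetic check:

    # Convert the block counts from the resize message into sizes (4 KiB ext4 blocks).
    BLOCK = 4096
    for label, blocks in (("before", 1617920), ("after", 15121403)):
        print(label, f"{blocks * BLOCK / 2**30:.1f} GiB")
    # before 6.2 GiB
    # after 57.7 GiB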
Jul 16 12:31:44.073448 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 16 12:31:44.098048 env[1306]: time="2025-07-16T12:31:44.097978914Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 16 12:31:44.106684 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jul 16 12:31:44.118735 extend-filesystems[1335]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 16 12:31:44.118735 extend-filesystems[1335]: old_desc_blocks = 1, new_desc_blocks = 8 Jul 16 12:31:44.118735 extend-filesystems[1335]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jul 16 12:31:44.120964 extend-filesystems[1280]: Resized filesystem in /dev/vda9 Jul 16 12:31:44.119900 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 16 12:31:44.120137 systemd[1]: Finished extend-filesystems.service. Jul 16 12:31:44.140897 systemd-logind[1294]: Watching system buttons on /dev/input/event2 (Power Button) Jul 16 12:31:44.140919 systemd-logind[1294]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 16 12:31:44.142407 systemd-logind[1294]: New seat seat0. Jul 16 12:31:44.147226 systemd[1]: Started systemd-logind.service. Jul 16 12:31:44.192298 env[1306]: time="2025-07-16T12:31:44.191699430Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 16 12:31:44.192298 env[1306]: time="2025-07-16T12:31:44.191852860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 16 12:31:44.201382 env[1306]: time="2025-07-16T12:31:44.201338733Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.188-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 16 12:31:44.201382 env[1306]: time="2025-07-16T12:31:44.201376645Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 16 12:31:44.202274 env[1306]: time="2025-07-16T12:31:44.201646030Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 16 12:31:44.202274 env[1306]: time="2025-07-16T12:31:44.201667680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 16 12:31:44.202274 env[1306]: time="2025-07-16T12:31:44.201693541Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 16 12:31:44.202274 env[1306]: time="2025-07-16T12:31:44.201704145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 16 12:31:44.202274 env[1306]: time="2025-07-16T12:31:44.201781061Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 16 12:31:44.202274 env[1306]: time="2025-07-16T12:31:44.202027114Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jul 16 12:31:44.202274 env[1306]: time="2025-07-16T12:31:44.202189463Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 16 12:31:44.202274 env[1306]: time="2025-07-16T12:31:44.202205687Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 16 12:31:44.202274 env[1306]: time="2025-07-16T12:31:44.202251038Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 16 12:31:44.202274 env[1306]: time="2025-07-16T12:31:44.202263074Z" level=info msg="metadata content store policy set" policy=shared Jul 16 12:31:44.215407 env[1306]: time="2025-07-16T12:31:44.215345589Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 16 12:31:44.215407 env[1306]: time="2025-07-16T12:31:44.215378630Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 16 12:31:44.215407 env[1306]: time="2025-07-16T12:31:44.215393498Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 16 12:31:44.215563 env[1306]: time="2025-07-16T12:31:44.215446987Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 16 12:31:44.215563 env[1306]: time="2025-07-16T12:31:44.215462845Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 16 12:31:44.215563 env[1306]: time="2025-07-16T12:31:44.215476990Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 16 12:31:44.215563 env[1306]: time="2025-07-16T12:31:44.215526217Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 16 12:31:44.215563 env[1306]: time="2025-07-16T12:31:44.215541448Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 16 12:31:44.215563 env[1306]: time="2025-07-16T12:31:44.215560845Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 16 12:31:44.215706 env[1306]: time="2025-07-16T12:31:44.215573372Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 16 12:31:44.215706 env[1306]: time="2025-07-16T12:31:44.215585863Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 16 12:31:44.215706 env[1306]: time="2025-07-16T12:31:44.215600738Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 16 12:31:44.215774 env[1306]: time="2025-07-16T12:31:44.215708859Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 16 12:31:44.215799 env[1306]: time="2025-07-16T12:31:44.215780043Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 16 12:31:44.216197 env[1306]: time="2025-07-16T12:31:44.216173821Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jul 16 12:31:44.216275 env[1306]: time="2025-07-16T12:31:44.216212351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 16 12:31:44.216275 env[1306]: time="2025-07-16T12:31:44.216226550Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 16 12:31:44.216367 env[1306]: time="2025-07-16T12:31:44.216288670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 16 12:31:44.216367 env[1306]: time="2025-07-16T12:31:44.216306329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 16 12:31:44.216367 env[1306]: time="2025-07-16T12:31:44.216318571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 16 12:31:44.216367 env[1306]: time="2025-07-16T12:31:44.216330789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 16 12:31:44.216367 env[1306]: time="2025-07-16T12:31:44.216343492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 16 12:31:44.216367 env[1306]: time="2025-07-16T12:31:44.216355367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 16 12:31:44.216367 env[1306]: time="2025-07-16T12:31:44.216366721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 16 12:31:44.216545 env[1306]: time="2025-07-16T12:31:44.216378113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 16 12:31:44.216545 env[1306]: time="2025-07-16T12:31:44.216392641Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 16 12:31:44.216545 env[1306]: time="2025-07-16T12:31:44.216504528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 16 12:31:44.216640 env[1306]: time="2025-07-16T12:31:44.216546629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 16 12:31:44.216640 env[1306]: time="2025-07-16T12:31:44.216559661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 16 12:31:44.216640 env[1306]: time="2025-07-16T12:31:44.216571007Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 16 12:31:44.216640 env[1306]: time="2025-07-16T12:31:44.216585881Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 16 12:31:44.216640 env[1306]: time="2025-07-16T12:31:44.216598385Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 16 12:31:44.216640 env[1306]: time="2025-07-16T12:31:44.216626800Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 16 12:31:44.216810 env[1306]: time="2025-07-16T12:31:44.216682666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 16 12:31:44.217861 env[1306]: time="2025-07-16T12:31:44.216876430Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 16 12:31:44.217861 env[1306]: time="2025-07-16T12:31:44.216933733Z" level=info msg="Connect containerd service" Jul 16 12:31:44.217861 env[1306]: time="2025-07-16T12:31:44.216991610Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 16 12:31:44.217861 env[1306]: time="2025-07-16T12:31:44.217565558Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 16 12:31:44.220416 env[1306]: time="2025-07-16T12:31:44.217938592Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 16 12:31:44.220416 env[1306]: time="2025-07-16T12:31:44.217979038Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 16 12:31:44.220416 env[1306]: time="2025-07-16T12:31:44.218555803Z" level=info msg="Start subscribing containerd event" Jul 16 12:31:44.220416 env[1306]: time="2025-07-16T12:31:44.218604258Z" level=info msg="Start recovering state" Jul 16 12:31:44.220416 env[1306]: time="2025-07-16T12:31:44.219992353Z" level=info msg="Start event monitor" Jul 16 12:31:44.220416 env[1306]: time="2025-07-16T12:31:44.220027592Z" level=info msg="Start snapshots syncer" Jul 16 12:31:44.220416 env[1306]: time="2025-07-16T12:31:44.220038632Z" level=info msg="Start cni network conf syncer for default" Jul 16 12:31:44.220416 env[1306]: time="2025-07-16T12:31:44.220046682Z" level=info msg="Start streaming server" Jul 16 12:31:44.220416 env[1306]: time="2025-07-16T12:31:44.220219075Z" level=info msg="containerd successfully booted in 0.130459s" Jul 16 12:31:44.218150 systemd[1]: Started containerd.service. Jul 16 12:31:44.239685 dbus-daemon[1277]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 16 12:31:44.239816 systemd[1]: Started systemd-hostnamed.service. Jul 16 12:31:44.240601 dbus-daemon[1277]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1327 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 16 12:31:44.243615 systemd[1]: Starting polkit.service... Jul 16 12:31:44.260641 polkitd[1355]: Started polkitd version 121 Jul 16 12:31:44.273585 polkitd[1355]: Loading rules from directory /etc/polkit-1/rules.d Jul 16 12:31:44.273659 polkitd[1355]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 16 12:31:44.276530 polkitd[1355]: Finished loading, compiling and executing 2 rules Jul 16 12:31:44.276927 dbus-daemon[1277]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 16 12:31:44.277130 systemd[1]: Started polkit.service. Jul 16 12:31:44.278095 polkitd[1355]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 16 12:31:44.289422 systemd-hostnamed[1327]: Hostname set to (static) Jul 16 12:31:44.723723 tar[1302]: linux-amd64/LICENSE Jul 16 12:31:44.723922 tar[1302]: linux-amd64/README.md Jul 16 12:31:44.734116 systemd[1]: Finished prepare-helm.service. Jul 16 12:31:44.775410 systemd-timesyncd[1251]: Contacted time server 85.199.214.98:123 (0.flatcar.pool.ntp.org). Jul 16 12:31:44.776325 systemd-timesyncd[1251]: Initial clock synchronization to Wed 2025-07-16 12:31:44.801663 UTC. Jul 16 12:31:44.887337 locksmithd[1345]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 16 12:31:45.160505 systemd[1]: Created slice system-sshd.slice. Jul 16 12:31:45.243319 systemd[1]: Started kubelet.service. Jul 16 12:31:45.329724 sshd_keygen[1296]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 16 12:31:45.354509 systemd[1]: Finished sshd-keygen.service. Jul 16 12:31:45.356870 systemd[1]: Starting issuegen.service... Jul 16 12:31:45.358947 systemd[1]: Started sshd@0-10.244.89.194:22-147.75.109.163:46950.service. Jul 16 12:31:45.364801 systemd[1]: issuegen.service: Deactivated successfully. Jul 16 12:31:45.365033 systemd[1]: Finished issuegen.service. Jul 16 12:31:45.367114 systemd[1]: Starting systemd-user-sessions.service... Jul 16 12:31:45.377984 systemd[1]: Finished systemd-user-sessions.service. Jul 16 12:31:45.379802 systemd[1]: Started getty@tty1.service. Jul 16 12:31:45.381660 systemd[1]: Started serial-getty@ttyS0.service. 
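The initial clock synchronization above only nudged the clock: the journal stamps the entry at 12:31:44.776325 and systemd-timesyncd sets the time to 12:31:44.801663 UTC, an offset of about 25 ms (treating the two values as directly comparable is an assumption that holds only because the pre-sync clock was already on UTC).

    from datetime import datetime

    logged = datetime.fromisoformat("2025-07-16 12:31:44.776325")   # journal stamp of the sync entry
    synced = datetime.fromisoformat("2025-07-16 12:31:44.801663")   # time applied by systemd-timesyncd
    print(f"{(synced - logged).total_seconds() * 1000:.1f} ms")     # ~25.3 ms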
Jul 16 12:31:45.384376 systemd[1]: Reached target getty.target. Jul 16 12:31:45.856503 kubelet[1375]: E0716 12:31:45.856452 1375 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 16 12:31:45.858865 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 16 12:31:45.859067 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 16 12:31:46.279974 sshd[1390]: Accepted publickey for core from 147.75.109.163 port 46950 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:31:46.282255 sshd[1390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:31:46.299806 systemd[1]: Created slice user-500.slice. Jul 16 12:31:46.304153 systemd[1]: Starting user-runtime-dir@500.service... Jul 16 12:31:46.311164 systemd-logind[1294]: New session 1 of user core. Jul 16 12:31:46.317884 systemd[1]: Finished user-runtime-dir@500.service. Jul 16 12:31:46.319946 systemd[1]: Starting user@500.service... Jul 16 12:31:46.327368 (systemd)[1405]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:31:46.406353 systemd[1405]: Queued start job for default target default.target. Jul 16 12:31:46.407686 systemd[1405]: Reached target paths.target. Jul 16 12:31:46.407859 systemd[1405]: Reached target sockets.target. Jul 16 12:31:46.407974 systemd[1405]: Reached target timers.target. Jul 16 12:31:46.408065 systemd[1405]: Reached target basic.target. Jul 16 12:31:46.408290 systemd[1]: Started user@500.service. Jul 16 12:31:46.409778 systemd[1]: Started session-1.scope. Jul 16 12:31:46.412161 systemd[1405]: Reached target default.target. Jul 16 12:31:46.413539 systemd[1405]: Startup finished in 79ms. Jul 16 12:31:46.472652 systemd-networkd[1084]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:1670:24:19ff:fef4:59c2/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:1670:24:19ff:fef4:59c2/64 assigned by NDisc. Jul 16 12:31:46.473654 systemd-networkd[1084]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jul 16 12:31:47.040802 systemd[1]: Started sshd@1-10.244.89.194:22-147.75.109.163:46964.service. Jul 16 12:31:47.939302 sshd[1416]: Accepted publickey for core from 147.75.109.163 port 46964 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:31:47.942859 sshd[1416]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:31:47.954823 systemd[1]: Started session-2.scope. Jul 16 12:31:47.955396 systemd-logind[1294]: New session 2 of user core. Jul 16 12:31:48.562205 sshd[1416]: pam_unix(sshd:session): session closed for user core Jul 16 12:31:48.568810 systemd[1]: sshd@1-10.244.89.194:22-147.75.109.163:46964.service: Deactivated successfully. Jul 16 12:31:48.570293 systemd[1]: session-2.scope: Deactivated successfully. Jul 16 12:31:48.572208 systemd-logind[1294]: Session 2 logged out. Waiting for processes to exit. Jul 16 12:31:48.573928 systemd-logind[1294]: Removed session 2. Jul 16 12:31:48.712686 systemd[1]: Started sshd@2-10.244.89.194:22-147.75.109.163:42590.service. 
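The networkd message above concerns the same IPv6 address arriving twice with different prefix lengths: once as a /128 lease over DHCPv6 and once as part of a /64 learned from router advertisements (NDisc). A standard-library check confirms the DHCPv6 address falls inside the advertised /64, which is why networkd reports a conflict and ignores the DHCPv6 copy (the prefix below is copied from the log; treating the first 64 bits as the advertised prefix is the only assumption).

    import ipaddress

    dhcpv6 = ipaddress.ip_interface("2a02:1348:17d:1670:24:19ff:fef4:59c2/128")
    ndisc_prefix = ipaddress.ip_network("2a02:1348:17d:1670::/64")
    print(dhcpv6.ip in ndisc_prefix)   # True: same address, two prefix lengths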
Jul 16 12:31:49.624256 sshd[1423]: Accepted publickey for core from 147.75.109.163 port 42590 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:31:49.627930 sshd[1423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:31:49.639699 systemd-logind[1294]: New session 3 of user core. Jul 16 12:31:49.640565 systemd[1]: Started session-3.scope. Jul 16 12:31:50.258278 sshd[1423]: pam_unix(sshd:session): session closed for user core Jul 16 12:31:50.265719 systemd-logind[1294]: Session 3 logged out. Waiting for processes to exit. Jul 16 12:31:50.267262 systemd[1]: sshd@2-10.244.89.194:22-147.75.109.163:42590.service: Deactivated successfully. Jul 16 12:31:50.269734 systemd[1]: session-3.scope: Deactivated successfully. Jul 16 12:31:50.270599 systemd-logind[1294]: Removed session 3. Jul 16 12:31:50.937725 coreos-metadata[1276]: Jul 16 12:31:50.937 WARN failed to locate config-drive, using the metadata service API instead Jul 16 12:31:50.980594 coreos-metadata[1276]: Jul 16 12:31:50.980 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jul 16 12:31:51.009860 coreos-metadata[1276]: Jul 16 12:31:51.009 INFO Fetch successful Jul 16 12:31:51.010173 coreos-metadata[1276]: Jul 16 12:31:51.009 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 16 12:31:51.044641 coreos-metadata[1276]: Jul 16 12:31:51.044 INFO Fetch successful Jul 16 12:31:51.046522 unknown[1276]: wrote ssh authorized keys file for user: core Jul 16 12:31:51.060818 update-ssh-keys[1433]: Updated "/home/core/.ssh/authorized_keys" Jul 16 12:31:51.061256 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Jul 16 12:31:51.061597 systemd[1]: Reached target multi-user.target. Jul 16 12:31:51.063317 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 16 12:31:51.072622 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 16 12:31:51.072840 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 16 12:31:51.072978 systemd[1]: Startup finished in 7.952s (kernel) + 12.317s (userspace) = 20.270s. Jul 16 12:31:55.928950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 16 12:31:55.929430 systemd[1]: Stopped kubelet.service. Jul 16 12:31:55.933274 systemd[1]: Starting kubelet.service... Jul 16 12:31:56.068217 systemd[1]: Started kubelet.service. Jul 16 12:31:56.129845 kubelet[1445]: E0716 12:31:56.129776 1445 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 16 12:31:56.133353 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 16 12:31:56.133526 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 16 12:32:00.415315 systemd[1]: Started sshd@3-10.244.89.194:22-147.75.109.163:47554.service. Jul 16 12:32:01.314852 sshd[1453]: Accepted publickey for core from 147.75.109.163 port 47554 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:32:01.319538 sshd[1453]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:32:01.329627 systemd-logind[1294]: New session 4 of user core. Jul 16 12:32:01.330948 systemd[1]: Started session-4.scope. 
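coreos-metadata above fails to find a config drive and falls back to the EC2-style metadata service at 169.254.169.254, first listing the public keys and then fetching key 0 before writing it to /home/core/.ssh/authorized_keys. A minimal sketch of the same two requests, illustrative only and assuming it runs on the instance itself with the link-local metadata address reachable:

    from urllib.request import urlopen

    BASE = "http://169.254.169.254/latest/meta-data"

    # The same two lookups the agent logs above: list the public keys, then fetch key 0.
    index = urlopen(f"{BASE}/public-keys", timeout=5).read().decode()
    print(index)
    key = urlopen(f"{BASE}/public-keys/0/openssh-key", timeout=5).read().decode()
    print(key)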
Jul 16 12:32:01.942254 sshd[1453]: pam_unix(sshd:session): session closed for user core Jul 16 12:32:01.949853 systemd[1]: sshd@3-10.244.89.194:22-147.75.109.163:47554.service: Deactivated successfully. Jul 16 12:32:01.952794 systemd-logind[1294]: Session 4 logged out. Waiting for processes to exit. Jul 16 12:32:01.952945 systemd[1]: session-4.scope: Deactivated successfully. Jul 16 12:32:01.954590 systemd-logind[1294]: Removed session 4. Jul 16 12:32:02.092757 systemd[1]: Started sshd@4-10.244.89.194:22-147.75.109.163:47570.service. Jul 16 12:32:02.995023 sshd[1460]: Accepted publickey for core from 147.75.109.163 port 47570 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:32:02.998965 sshd[1460]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:32:03.009575 systemd[1]: Started session-5.scope. Jul 16 12:32:03.010210 systemd-logind[1294]: New session 5 of user core. Jul 16 12:32:03.620419 sshd[1460]: pam_unix(sshd:session): session closed for user core Jul 16 12:32:03.626075 systemd[1]: sshd@4-10.244.89.194:22-147.75.109.163:47570.service: Deactivated successfully. Jul 16 12:32:03.627471 systemd[1]: session-5.scope: Deactivated successfully. Jul 16 12:32:03.628911 systemd-logind[1294]: Session 5 logged out. Waiting for processes to exit. Jul 16 12:32:03.630017 systemd-logind[1294]: Removed session 5. Jul 16 12:32:03.769310 systemd[1]: Started sshd@5-10.244.89.194:22-147.75.109.163:47574.service. Jul 16 12:32:04.666396 sshd[1467]: Accepted publickey for core from 147.75.109.163 port 47574 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:32:04.670833 sshd[1467]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:32:04.680781 systemd-logind[1294]: New session 6 of user core. Jul 16 12:32:04.681177 systemd[1]: Started session-6.scope. Jul 16 12:32:05.297837 sshd[1467]: pam_unix(sshd:session): session closed for user core Jul 16 12:32:05.303948 systemd[1]: sshd@5-10.244.89.194:22-147.75.109.163:47574.service: Deactivated successfully. Jul 16 12:32:05.305627 systemd-logind[1294]: Session 6 logged out. Waiting for processes to exit. Jul 16 12:32:05.305789 systemd[1]: session-6.scope: Deactivated successfully. Jul 16 12:32:05.308280 systemd-logind[1294]: Removed session 6. Jul 16 12:32:05.473801 systemd[1]: Started sshd@6-10.244.89.194:22-147.75.109.163:47584.service. Jul 16 12:32:06.179065 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 16 12:32:06.179582 systemd[1]: Stopped kubelet.service. Jul 16 12:32:06.183472 systemd[1]: Starting kubelet.service... Jul 16 12:32:06.319568 systemd[1]: Started kubelet.service. Jul 16 12:32:06.371962 kubelet[1483]: E0716 12:32:06.371917 1483 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 16 12:32:06.375041 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 16 12:32:06.375355 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
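kubelet keeps crash-looping on the missing /var/lib/kubelet/config.yaml; that file is normally written later by whatever bootstraps the node (kubeadm, for example), so repeated failures at this stage are expected. systemd reschedules each attempt after a fixed hold-off: the gap between each "Failed with result 'exit-code'" entry and the next "Scheduled restart job" entry above is about 10 s, consistent with a restart delay of that order (the unit's actual RestartSec is not shown in this log).

    from datetime import datetime

    # Failure / reschedule timestamps copied from the entries above.
    pairs = [
        ("2025-07-16 12:31:45.859067", "2025-07-16 12:31:55.928950"),  # failure -> restart counter 1
        ("2025-07-16 12:31:56.133526", "2025-07-16 12:32:06.179065"),  # failure -> restart counter 2
    ]
    for failed, rescheduled in pairs:
        delta = datetime.fromisoformat(rescheduled) - datetime.fromisoformat(failed)
        print(f"{delta.total_seconds():.1f} s")   # ~10.1 s, ~10.0 s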
Jul 16 12:32:06.456497 sshd[1474]: Accepted publickey for core from 147.75.109.163 port 47584 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:32:06.460091 sshd[1474]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:32:06.470371 systemd[1]: Started session-7.scope. Jul 16 12:32:06.471749 systemd-logind[1294]: New session 7 of user core. Jul 16 12:32:06.993791 sudo[1493]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 16 12:32:06.994049 sudo[1493]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 16 12:32:07.006108 dbus-daemon[1277]: \xd0m\xbbèU: received setenforce notice (enforcing=-2075704192) Jul 16 12:32:07.008856 sudo[1493]: pam_unix(sudo:session): session closed for user root Jul 16 12:32:07.168419 sshd[1474]: pam_unix(sshd:session): session closed for user core Jul 16 12:32:07.175662 systemd-logind[1294]: Session 7 logged out. Waiting for processes to exit. Jul 16 12:32:07.177317 systemd[1]: sshd@6-10.244.89.194:22-147.75.109.163:47584.service: Deactivated successfully. Jul 16 12:32:07.178890 systemd[1]: session-7.scope: Deactivated successfully. Jul 16 12:32:07.181096 systemd-logind[1294]: Removed session 7. Jul 16 12:32:07.329297 systemd[1]: Started sshd@7-10.244.89.194:22-147.75.109.163:47590.service. Jul 16 12:32:08.306800 sshd[1497]: Accepted publickey for core from 147.75.109.163 port 47590 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:32:08.310567 sshd[1497]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:32:08.318332 systemd-logind[1294]: New session 8 of user core. Jul 16 12:32:08.319300 systemd[1]: Started session-8.scope. Jul 16 12:32:08.833658 sudo[1502]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 16 12:32:08.834415 sudo[1502]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 16 12:32:08.839516 sudo[1502]: pam_unix(sudo:session): session closed for user root Jul 16 12:32:08.852779 sudo[1501]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 16 12:32:08.853364 sudo[1501]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 16 12:32:08.865351 systemd[1]: Stopping audit-rules.service... 
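The sudo entries above form a compact audit trail of what the core user ran in these sessions: setenforce 1, removal of the two default audit rule files, and a restart of audit-rules.service. A small helper for pulling that trail out of a dump like this one; the regex is an assumption about sudo's log line layout as it appears here, not a documented format.

    import re

    # "sudo[PID]: USER : ... COMMAND=..."; in this wrapped dump the next entry can follow on the
    # same line, so a stricter boundary (e.g. a lookahead for the next timestamp) may be needed.
    SUDO = re.compile(r"sudo\[\d+\]: (\S+) : .*?COMMAND=([^\n]+)")

    sample = ("Jul 16 12:32:06.993791 sudo[1493]: core : PWD=/home/core ; USER=root ; "
              "COMMAND=/usr/sbin/setenforce 1")
    for user, command in SUDO.findall(sample):
        print(user, "->", command)   # core -> /usr/sbin/setenforce 1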
Jul 16 12:32:08.865000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 16 12:32:08.870167 kernel: kauditd_printk_skb: 22 callbacks suppressed Jul 16 12:32:08.870226 kernel: audit: type=1305 audit(1752669128.865:163): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 16 12:32:08.865000 audit[1505]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd73dca7d0 a2=420 a3=0 items=0 ppid=1 pid=1505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:08.878732 kernel: audit: type=1300 audit(1752669128.865:163): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd73dca7d0 a2=420 a3=0 items=0 ppid=1 pid=1505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:08.878802 kernel: audit: type=1327 audit(1752669128.865:163): proctitle=2F7362696E2F617564697463746C002D44 Jul 16 12:32:08.878829 kernel: audit: type=1131 audit(1752669128.875:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:08.865000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jul 16 12:32:08.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:08.876839 systemd[1]: audit-rules.service: Deactivated successfully. Jul 16 12:32:08.879623 auditctl[1505]: No rules Jul 16 12:32:08.877081 systemd[1]: Stopped audit-rules.service. Jul 16 12:32:08.880638 systemd[1]: Starting audit-rules.service... Jul 16 12:32:08.908287 augenrules[1523]: No rules Jul 16 12:32:08.910498 systemd[1]: Finished audit-rules.service. Jul 16 12:32:08.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:08.912888 sudo[1501]: pam_unix(sudo:session): session closed for user root Jul 16 12:32:08.917720 kernel: audit: type=1130 audit(1752669128.909:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:08.910000 audit[1501]: USER_END pid=1501 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 16 12:32:08.910000 audit[1501]: CRED_DISP pid=1501 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 16 12:32:08.926657 kernel: audit: type=1106 audit(1752669128.910:166): pid=1501 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? 
addr=? terminal=? res=success' Jul 16 12:32:08.926706 kernel: audit: type=1104 audit(1752669128.910:167): pid=1501 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 16 12:32:09.075261 sshd[1497]: pam_unix(sshd:session): session closed for user core Jul 16 12:32:09.076000 audit[1497]: USER_END pid=1497 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:32:09.087727 kernel: audit: type=1106 audit(1752669129.076:168): pid=1497 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:32:09.088040 systemd[1]: sshd@7-10.244.89.194:22-147.75.109.163:47590.service: Deactivated successfully. Jul 16 12:32:09.076000 audit[1497]: CRED_DISP pid=1497 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:32:09.094148 kernel: audit: type=1104 audit(1752669129.076:169): pid=1497 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:32:09.093485 systemd[1]: session-8.scope: Deactivated successfully. Jul 16 12:32:09.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.244.89.194:22-147.75.109.163:47590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:09.093891 systemd-logind[1294]: Session 8 logged out. Waiting for processes to exit. Jul 16 12:32:09.098591 systemd-logind[1294]: Removed session 8. Jul 16 12:32:09.098770 kernel: audit: type=1131 audit(1752669129.086:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.244.89.194:22-147.75.109.163:47590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:09.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.244.89.194:22-147.75.109.163:47910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:09.209665 systemd[1]: Started sshd@8-10.244.89.194:22-147.75.109.163:47910.service. 
Jul 16 12:32:10.120000 audit[1530]: USER_ACCT pid=1530 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:32:10.122157 sshd[1530]: Accepted publickey for core from 147.75.109.163 port 47910 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:32:10.124000 audit[1530]: CRED_ACQ pid=1530 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:32:10.124000 audit[1530]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd621c7660 a2=3 a3=0 items=0 ppid=1 pid=1530 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:10.124000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 16 12:32:10.126922 sshd[1530]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:32:10.137868 systemd-logind[1294]: New session 9 of user core. Jul 16 12:32:10.138539 systemd[1]: Started session-9.scope. Jul 16 12:32:10.145000 audit[1530]: USER_START pid=1530 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:32:10.147000 audit[1533]: CRED_ACQ pid=1533 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:32:10.604000 audit[1534]: USER_ACCT pid=1534 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 16 12:32:10.605593 sudo[1534]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 16 12:32:10.605000 audit[1534]: CRED_REFR pid=1534 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 16 12:32:10.605890 sudo[1534]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 16 12:32:10.607000 audit[1534]: USER_START pid=1534 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 16 12:32:10.642646 systemd[1]: Starting docker.service... 
Jul 16 12:32:10.701073 env[1544]: time="2025-07-16T12:32:10.701006182Z" level=info msg="Starting up" Jul 16 12:32:10.702978 env[1544]: time="2025-07-16T12:32:10.702938733Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 16 12:32:10.703156 env[1544]: time="2025-07-16T12:32:10.703134025Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 16 12:32:10.703290 env[1544]: time="2025-07-16T12:32:10.703266782Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 16 12:32:10.703415 env[1544]: time="2025-07-16T12:32:10.703396880Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 16 12:32:10.707800 env[1544]: time="2025-07-16T12:32:10.707767212Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 16 12:32:10.707800 env[1544]: time="2025-07-16T12:32:10.707785280Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 16 12:32:10.707800 env[1544]: time="2025-07-16T12:32:10.707801549Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 16 12:32:10.708030 env[1544]: time="2025-07-16T12:32:10.707811969Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 16 12:32:10.733874 env[1544]: time="2025-07-16T12:32:10.733840179Z" level=warning msg="Your kernel does not support cgroup blkio weight" Jul 16 12:32:10.733874 env[1544]: time="2025-07-16T12:32:10.733863360Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Jul 16 12:32:10.734148 env[1544]: time="2025-07-16T12:32:10.734130526Z" level=info msg="Loading containers: start." 
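With "Loading containers: start." dockerd begins programming its iptables chains, and the NETFILTER_CFG/PROCTITLE audit records that follow capture each command it runs, hex-encoded with NUL-separated arguments. Decoding the first one (the value is copied verbatim from the record below) recovers the usual chain setup; the remaining records decode the same way to the DOCKER, DOCKER-ISOLATION-STAGE-1/2 and DOCKER-USER chain and FORWARD rule commands.

    # Decode an audit PROCTITLE field: hex-encoded argv with NUL separators.
    proctitle = "2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"
    argv = [part.decode() for part in bytes.fromhex(proctitle).split(b"\x00")]
    print(" ".join(argv))   # /usr/sbin/iptables --wait -t nat -N DOCKER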
Jul 16 12:32:10.816000 audit[1576]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1576 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:10.816000 audit[1576]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffcd26f95c0 a2=0 a3=7ffcd26f95ac items=0 ppid=1544 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:10.816000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jul 16 12:32:10.819000 audit[1578]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:10.819000 audit[1578]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe9dce0500 a2=0 a3=7ffe9dce04ec items=0 ppid=1544 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:10.819000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jul 16 12:32:10.822000 audit[1580]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:10.822000 audit[1580]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc0b9ea010 a2=0 a3=7ffc0b9e9ffc items=0 ppid=1544 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:10.822000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 16 12:32:10.824000 audit[1582]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:10.824000 audit[1582]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc50547620 a2=0 a3=7ffc5054760c items=0 ppid=1544 pid=1582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:10.824000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 16 12:32:10.827000 audit[1585]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1585 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:10.827000 audit[1585]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff3e7067f0 a2=0 a3=7fff3e7067dc items=0 ppid=1544 pid=1585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:10.827000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jul 16 12:32:10.847000 audit[1590]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1590 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jul 16 12:32:10.847000 audit[1590]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc67059120 a2=0 a3=7ffc6705910c items=0 ppid=1544 pid=1590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:10.847000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jul 16 12:32:10.852000 audit[1592]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:10.852000 audit[1592]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc9f0e4fb0 a2=0 a3=7ffc9f0e4f9c items=0 ppid=1544 pid=1592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:10.852000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jul 16 12:32:10.855000 audit[1594]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:10.855000 audit[1594]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff2251e0d0 a2=0 a3=7fff2251e0bc items=0 ppid=1544 pid=1594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:10.855000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jul 16 12:32:10.859000 audit[1596]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1596 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:10.859000 audit[1596]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffe67ca6bb0 a2=0 a3=7ffe67ca6b9c items=0 ppid=1544 pid=1596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:10.859000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 16 12:32:10.867000 audit[1600]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1600 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:10.867000 audit[1600]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffce82325a0 a2=0 a3=7ffce823258c items=0 ppid=1544 pid=1600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:10.867000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 16 12:32:10.873000 audit[1601]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1601 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:10.873000 audit[1601]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffde5531d30 a2=0 a3=7ffde5531d1c items=0 ppid=1544 
pid=1601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:10.873000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 16 12:32:10.886698 kernel: Initializing XFRM netlink socket Jul 16 12:32:10.941068 env[1544]: time="2025-07-16T12:32:10.941008550Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 16 12:32:10.971000 audit[1609]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1609 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:10.971000 audit[1609]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7fffc7ab1980 a2=0 a3=7fffc7ab196c items=0 ppid=1544 pid=1609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:10.971000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jul 16 12:32:10.985000 audit[1612]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1612 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:10.985000 audit[1612]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe969f6390 a2=0 a3=7ffe969f637c items=0 ppid=1544 pid=1612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:10.985000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jul 16 12:32:10.990000 audit[1615]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1615 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:10.990000 audit[1615]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fffbc6f49b0 a2=0 a3=7fffbc6f499c items=0 ppid=1544 pid=1615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:10.990000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jul 16 12:32:10.994000 audit[1617]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1617 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:10.994000 audit[1617]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff3240de90 a2=0 a3=7fff3240de7c items=0 ppid=1544 pid=1617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:10.994000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jul 16 12:32:10.998000 audit[1619]: NETFILTER_CFG 
table=nat:17 family=2 entries=2 op=nft_register_chain pid=1619 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:10.998000 audit[1619]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffe677021a0 a2=0 a3=7ffe6770218c items=0 ppid=1544 pid=1619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:10.998000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jul 16 12:32:11.005000 audit[1621]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1621 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:11.005000 audit[1621]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffd6c42e600 a2=0 a3=7ffd6c42e5ec items=0 ppid=1544 pid=1621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:11.005000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jul 16 12:32:11.008000 audit[1623]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1623 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:11.008000 audit[1623]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffe8252f8f0 a2=0 a3=7ffe8252f8dc items=0 ppid=1544 pid=1623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:11.008000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jul 16 12:32:11.023000 audit[1626]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1626 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:11.023000 audit[1626]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffdc9a8ed00 a2=0 a3=7ffdc9a8ecec items=0 ppid=1544 pid=1626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:11.023000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jul 16 12:32:11.025000 audit[1628]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1628 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:11.025000 audit[1628]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffee2bb0960 a2=0 a3=7ffee2bb094c items=0 ppid=1544 pid=1628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:11.025000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 16 12:32:11.028000 audit[1630]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1630 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:11.028000 audit[1630]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff0e9a8a20 a2=0 a3=7fff0e9a8a0c items=0 ppid=1544 pid=1630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:11.028000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 16 12:32:11.030000 audit[1632]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1632 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:11.030000 audit[1632]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc13160a20 a2=0 a3=7ffc13160a0c items=0 ppid=1544 pid=1632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:11.030000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jul 16 12:32:11.031700 systemd-networkd[1084]: docker0: Link UP Jul 16 12:32:11.038000 audit[1636]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1636 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:11.038000 audit[1636]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe9d20c130 a2=0 a3=7ffe9d20c11c items=0 ppid=1544 pid=1636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:11.038000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 16 12:32:11.044000 audit[1637]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1637 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:11.044000 audit[1637]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc3c8d1b90 a2=0 a3=7ffc3c8d1b7c items=0 ppid=1544 pid=1637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:11.044000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 16 12:32:11.045253 env[1544]: time="2025-07-16T12:32:11.045215098Z" level=info msg="Loading containers: done." Jul 16 12:32:11.059086 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1205504628-merged.mount: Deactivated successfully. 
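Each audit record above ends in a PROCTITLE field: the command line of the iptables process dockerd spawned while building its DOCKER, DOCKER-ISOLATION-STAGE-1/2 and DOCKER-USER chains, hex-encoded with NUL bytes separating the arguments. The small Python sketch below turns those fields back into readable commands; the sample value is copied verbatim from the first nat-table record above and decodes to the chain-creation call shown in the comment.

#!/usr/bin/env python3
"""Decode hex-encoded audit PROCTITLE fields into readable command lines."""

def decode_proctitle(hex_value: str) -> str:
    """auditd stores the process title as hex, with NUL-separated argv entries."""
    raw = bytes.fromhex(hex_value)
    return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg)

if __name__ == "__main__":
    # PROCTITLE value from the first NETFILTER_CFG (table=nat) record above.
    sample = ("2F7573722F7362696E2F69707461626C6573002D2D77616974"
              "002D74006E6174002D4E00444F434B4552")
    print(decode_proctitle(sample))   # /usr/sbin/iptables --wait -t nat -N DOCKER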
Jul 16 12:32:11.068480 env[1544]: time="2025-07-16T12:32:11.068422437Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 16 12:32:11.068692 env[1544]: time="2025-07-16T12:32:11.068649546Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 16 12:32:11.068808 env[1544]: time="2025-07-16T12:32:11.068784048Z" level=info msg="Daemon has completed initialization" Jul 16 12:32:11.080324 systemd[1]: Started docker.service. Jul 16 12:32:11.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:11.091156 env[1544]: time="2025-07-16T12:32:11.091106258Z" level=info msg="API listen on /run/docker.sock" Jul 16 12:32:12.513079 env[1306]: time="2025-07-16T12:32:12.512882140Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Jul 16 12:32:13.200123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1837228172.mount: Deactivated successfully. Jul 16 12:32:14.842820 env[1306]: time="2025-07-16T12:32:14.842658585Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:14.845116 env[1306]: time="2025-07-16T12:32:14.845065576Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:14.847747 env[1306]: time="2025-07-16T12:32:14.847717673Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:14.851845 env[1306]: time="2025-07-16T12:32:14.851773930Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:14.853124 env[1306]: time="2025-07-16T12:32:14.853047269Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\"" Jul 16 12:32:14.854454 env[1306]: time="2025-07-16T12:32:14.854392759Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Jul 16 12:32:16.428365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 16 12:32:16.442092 kernel: kauditd_printk_skb: 84 callbacks suppressed Jul 16 12:32:16.442194 kernel: audit: type=1130 audit(1752669136.427:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:16.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:16.428705 systemd[1]: Stopped kubelet.service. 
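With docker.service started, the daemon reports "API listen on /run/docker.sock". The sketch below pokes that API with only the Python standard library; /version is a stock Engine API endpoint, and the values it returns should line up with the "Docker daemon" line above (version=20.10.23). It assumes root or docker-group membership so the socket is writable.

#!/usr/bin/env python3
"""Query the Docker Engine API over the unix socket the daemon listens on."""
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that speaks HTTP over a unix domain socket."""
    def __init__(self, sock_path: str):
        super().__init__("localhost")   # host header only; no TCP is used
        self.sock_path = sock_path

    def connect(self) -> None:
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.sock_path)

if __name__ == "__main__":
    conn = UnixHTTPConnection("/run/docker.sock")   # path from the log above
    conn.request("GET", "/version")
    body = json.loads(conn.getresponse().read())
    # Expect Version to match the daemon version logged above (20.10.23).
    print(body.get("Version"), body.get("ApiVersion"))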
Jul 16 12:32:16.448010 kernel: audit: type=1131 audit(1752669136.427:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:16.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:16.443560 systemd[1]: Starting kubelet.service... Jul 16 12:32:16.517062 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 16 12:32:16.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:16.520736 kernel: audit: type=1131 audit(1752669136.515:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:16.578997 systemd[1]: Started kubelet.service. Jul 16 12:32:16.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:16.582683 kernel: audit: type=1130 audit(1752669136.577:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:16.639268 kubelet[1683]: E0716 12:32:16.639210 1683 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 16 12:32:16.640976 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 16 12:32:16.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 16 12:32:16.641143 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 16 12:32:16.644712 kernel: audit: type=1131 audit(1752669136.639:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jul 16 12:32:17.337815 env[1306]: time="2025-07-16T12:32:17.337551579Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:17.346703 env[1306]: time="2025-07-16T12:32:17.344754427Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:17.348929 env[1306]: time="2025-07-16T12:32:17.348865644Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:17.350908 env[1306]: time="2025-07-16T12:32:17.350861839Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:17.352582 env[1306]: time="2025-07-16T12:32:17.352455782Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\"" Jul 16 12:32:17.354708 env[1306]: time="2025-07-16T12:32:17.354666422Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Jul 16 12:32:19.762513 env[1306]: time="2025-07-16T12:32:19.762403372Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:19.766026 env[1306]: time="2025-07-16T12:32:19.765974302Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:19.768906 env[1306]: time="2025-07-16T12:32:19.768862777Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:19.770322 env[1306]: time="2025-07-16T12:32:19.770276401Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\"" Jul 16 12:32:19.771064 env[1306]: time="2025-07-16T12:32:19.771034967Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Jul 16 12:32:19.771514 env[1306]: time="2025-07-16T12:32:19.771474691Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:21.739053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3745579591.mount: Deactivated successfully. 
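The kubelet failures above, and the kubelet[1834] entries that follow, use klog's header format: a severity letter, MMDD date, wall-clock time, the emitting PID and the source file:line, then the message (e.g. E0716 12:32:16.639210 1683 run.go:72] "command failed" ...). The parser sketch below pulls those fields out of a saved capture like this one; the sample line is the kubelet version entry that appears further down.

#!/usr/bin/env python3
"""Parse klog-format headers (severity, date, time, pid, file:line, message)."""
import re

KLOG_RE = re.compile(
    r'(?P<sev>[IWEF])(?P<month>\d{2})(?P<day>\d{2})\s+'
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+'
    r'(?P<pid>\d+)\s+'
    r'(?P<source>[^ \]]+:\d+)\]\s+'
    r'(?P<msg>.*)'
)

def parse_klog(line: str):
    """Return a dict of klog header fields, or None if the line does not match."""
    m = KLOG_RE.search(line)
    return m.groupdict() if m else None

if __name__ == "__main__":
    # Taken from the kubelet[1834] startup output later in this log.
    sample = 'I0716 12:32:34.954147 1834 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"'
    print(parse_klog(sample))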
Jul 16 12:32:22.464042 env[1306]: time="2025-07-16T12:32:22.463947119Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:22.465722 env[1306]: time="2025-07-16T12:32:22.465652318Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:22.467167 env[1306]: time="2025-07-16T12:32:22.467119486Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:22.468647 env[1306]: time="2025-07-16T12:32:22.468603102Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:22.469194 env[1306]: time="2025-07-16T12:32:22.469099102Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Jul 16 12:32:22.470704 env[1306]: time="2025-07-16T12:32:22.470625398Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 16 12:32:23.680152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3449914173.mount: Deactivated successfully. Jul 16 12:32:24.846877 env[1306]: time="2025-07-16T12:32:24.846782897Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:24.850413 env[1306]: time="2025-07-16T12:32:24.850349968Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:24.852282 env[1306]: time="2025-07-16T12:32:24.852248930Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:24.853842 env[1306]: time="2025-07-16T12:32:24.853799550Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 16 12:32:24.854664 env[1306]: time="2025-07-16T12:32:24.854630839Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 16 12:32:24.854865 env[1306]: time="2025-07-16T12:32:24.854836281Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:26.041414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1826220160.mount: Deactivated successfully. 
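By this point containerd has logged a matching "PullImage ... returns image reference ..." pair for the kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy and CoreDNS images, with pause (and later etcd) still in flight. The sketch below scrapes those pairs out of a saved copy of this log so the tag-to-image-ID mapping is visible at a glance; the sample is the kube-proxy line from just above, with the backslash-escaped quotes kept as they appear in the journal.

#!/usr/bin/env python3
"""Extract image tag -> image ID pairs from containerd PullImage log lines."""
import re

# Quotes inside the containerd messages are backslash-escaped in the journal,
# so the backslash is optional to cope with both escaped and plain forms.
PULL_RE = re.compile(
    r'PullImage \\?"(?P<image>[^"\\]+)\\?" returns image reference '
    r'\\?"(?P<ref>sha256:[0-9a-f]+)\\?"'
)

def pulled_images(lines):
    """Yield (image, reference) pairs from PullImage result lines."""
    for line in lines:
        m = PULL_RE.search(line)
        if m:
            yield m.group("image"), m.group("ref")

if __name__ == "__main__":
    sample = (r'msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image '
              r'reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\""')
    for image, ref in pulled_images([sample]):
        print(f"{image} -> {ref}")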
Jul 16 12:32:26.044955 env[1306]: time="2025-07-16T12:32:26.044919630Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:26.047279 env[1306]: time="2025-07-16T12:32:26.047254289Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:26.049598 env[1306]: time="2025-07-16T12:32:26.049577286Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:26.053634 env[1306]: time="2025-07-16T12:32:26.053573610Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 16 12:32:26.054025 env[1306]: time="2025-07-16T12:32:26.053815479Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:26.054275 env[1306]: time="2025-07-16T12:32:26.054255527Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 16 12:32:26.678457 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 16 12:32:26.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:26.678776 systemd[1]: Stopped kubelet.service. Jul 16 12:32:26.692716 kernel: audit: type=1130 audit(1752669146.678:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:26.688159 systemd[1]: Starting kubelet.service... Jul 16 12:32:26.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:26.696692 kernel: audit: type=1131 audit(1752669146.678:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:26.817151 systemd[1]: Started kubelet.service. Jul 16 12:32:26.820703 kernel: audit: type=1130 audit(1752669146.816:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:26.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 16 12:32:26.867565 kubelet[1699]: E0716 12:32:26.867502 1699 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 16 12:32:26.869265 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 16 12:32:26.869438 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 16 12:32:26.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 16 12:32:26.874721 kernel: audit: type=1131 audit(1752669146.869:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 16 12:32:27.602609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1965126350.mount: Deactivated successfully. Jul 16 12:32:28.805763 update_engine[1295]: I0716 12:32:28.804563 1295 update_attempter.cc:509] Updating boot flags... Jul 16 12:32:30.932277 env[1306]: time="2025-07-16T12:32:30.932173690Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:30.937008 env[1306]: time="2025-07-16T12:32:30.936936930Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:30.940428 env[1306]: time="2025-07-16T12:32:30.940396177Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:30.943183 env[1306]: time="2025-07-16T12:32:30.943150189Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:30.944794 env[1306]: time="2025-07-16T12:32:30.944754924Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 16 12:32:34.107439 systemd[1]: Stopped kubelet.service. Jul 16 12:32:34.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:34.116166 kernel: audit: type=1130 audit(1752669154.106:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:34.116242 kernel: audit: type=1131 audit(1752669154.106:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 16 12:32:34.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:34.114369 systemd[1]: Starting kubelet.service... Jul 16 12:32:34.148129 systemd[1]: Reloading. Jul 16 12:32:34.249637 /usr/lib/systemd/system-generators/torcx-generator[1764]: time="2025-07-16T12:32:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 16 12:32:34.249701 /usr/lib/systemd/system-generators/torcx-generator[1764]: time="2025-07-16T12:32:34Z" level=info msg="torcx already run" Jul 16 12:32:34.418106 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 16 12:32:34.418127 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 16 12:32:34.439814 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 16 12:32:34.537440 systemd[1]: Started kubelet.service. Jul 16 12:32:34.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:34.542684 kernel: audit: type=1130 audit(1752669154.537:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:34.548089 systemd[1]: Stopping kubelet.service... Jul 16 12:32:34.549320 systemd[1]: kubelet.service: Deactivated successfully. Jul 16 12:32:34.549560 systemd[1]: Stopped kubelet.service. Jul 16 12:32:34.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:34.551481 systemd[1]: Starting kubelet.service... Jul 16 12:32:34.552685 kernel: audit: type=1131 audit(1752669154.548:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:34.665019 systemd[1]: Started kubelet.service. Jul 16 12:32:34.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:34.671715 kernel: audit: type=1130 audit(1752669154.664:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:34.724024 kubelet[1834]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 16 12:32:34.724024 kubelet[1834]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 16 12:32:34.724024 kubelet[1834]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 16 12:32:34.725029 kubelet[1834]: I0716 12:32:34.724026 1834 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 16 12:32:34.954196 kubelet[1834]: I0716 12:32:34.954147 1834 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 16 12:32:34.954421 kubelet[1834]: I0716 12:32:34.954401 1834 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 16 12:32:34.955057 kubelet[1834]: I0716 12:32:34.955030 1834 server.go:934] "Client rotation is on, will bootstrap in background" Jul 16 12:32:35.001137 kubelet[1834]: E0716 12:32:35.001006 1834 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.89.194:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.89.194:6443: connect: connection refused" logger="UnhandledError" Jul 16 12:32:35.015491 kubelet[1834]: I0716 12:32:35.015448 1834 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 16 12:32:35.022141 kubelet[1834]: E0716 12:32:35.022103 1834 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 16 12:32:35.022141 kubelet[1834]: I0716 12:32:35.022133 1834 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 16 12:32:35.027426 kubelet[1834]: I0716 12:32:35.027404 1834 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 16 12:32:35.028345 kubelet[1834]: I0716 12:32:35.028320 1834 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 16 12:32:35.028495 kubelet[1834]: I0716 12:32:35.028466 1834 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 16 12:32:35.028709 kubelet[1834]: I0716 12:32:35.028491 1834 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-f25or.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 16 12:32:35.028898 kubelet[1834]: I0716 12:32:35.028737 1834 topology_manager.go:138] "Creating topology manager with none policy" Jul 16 12:32:35.028898 kubelet[1834]: I0716 12:32:35.028747 1834 container_manager_linux.go:300] "Creating device plugin manager" Jul 16 12:32:35.028898 kubelet[1834]: I0716 12:32:35.028872 1834 state_mem.go:36] "Initialized new in-memory state store" Jul 16 12:32:35.031456 kubelet[1834]: I0716 12:32:35.031435 1834 kubelet.go:408] "Attempting to sync node with API server" Jul 16 12:32:35.031515 kubelet[1834]: I0716 12:32:35.031476 1834 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 16 12:32:35.031550 kubelet[1834]: I0716 12:32:35.031521 1834 kubelet.go:314] "Adding apiserver pod source" Jul 16 12:32:35.031590 kubelet[1834]: I0716 12:32:35.031552 1834 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 16 12:32:35.056491 kubelet[1834]: W0716 12:32:35.055003 1834 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.89.194:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.89.194:6443: connect: connection refused Jul 16 12:32:35.056491 kubelet[1834]: E0716 12:32:35.055076 1834 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.244.89.194:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.89.194:6443: connect: connection refused" logger="UnhandledError" Jul 16 12:32:35.056491 kubelet[1834]: W0716 12:32:35.055335 1834 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.89.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-f25or.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.89.194:6443: connect: connection refused Jul 16 12:32:35.056491 kubelet[1834]: E0716 12:32:35.055367 1834 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.89.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-f25or.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.89.194:6443: connect: connection refused" logger="UnhandledError" Jul 16 12:32:35.057458 kubelet[1834]: I0716 12:32:35.057323 1834 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 16 12:32:35.057939 kubelet[1834]: I0716 12:32:35.057919 1834 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 16 12:32:35.058055 kubelet[1834]: W0716 12:32:35.058002 1834 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 16 12:32:35.062997 kubelet[1834]: I0716 12:32:35.062967 1834 server.go:1274] "Started kubelet" Jul 16 12:32:35.067879 kubelet[1834]: I0716 12:32:35.067850 1834 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 16 12:32:35.069116 kubelet[1834]: I0716 12:32:35.069098 1834 server.go:449] "Adding debug handlers to kubelet server" Jul 16 12:32:35.087876 kernel: audit: type=1400 audit(1752669155.079:219): avc: denied { mac_admin } for pid=1834 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:32:35.088106 kernel: audit: type=1401 audit(1752669155.079:219): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 16 12:32:35.079000 audit[1834]: AVC avc: denied { mac_admin } for pid=1834 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:32:35.079000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 16 12:32:35.088315 kubelet[1834]: I0716 12:32:35.080732 1834 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Jul 16 12:32:35.088315 kubelet[1834]: I0716 12:32:35.080768 1834 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Jul 16 12:32:35.088315 kubelet[1834]: I0716 12:32:35.083403 1834 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 16 12:32:35.088754 kubelet[1834]: I0716 12:32:35.088707 1834 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 16 12:32:35.089145 kubelet[1834]: I0716 12:32:35.089132 1834 server.go:236] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 16 12:32:35.079000 audit[1834]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b42120 a1=c0006ef830 a2=c000b420f0 a3=25 items=0 ppid=1 pid=1834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:35.093617 kubelet[1834]: E0716 12:32:35.092321 1834 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.89.194:6443/api/v1/namespaces/default/events\": dial tcp 10.244.89.194:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-f25or.gb1.brightbox.com.1852bb4f7c4f7a82 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-f25or.gb1.brightbox.com,UID:srv-f25or.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-f25or.gb1.brightbox.com,},FirstTimestamp:2025-07-16 12:32:35.062938242 +0000 UTC m=+0.385013369,LastTimestamp:2025-07-16 12:32:35.062938242 +0000 UTC m=+0.385013369,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-f25or.gb1.brightbox.com,}" Jul 16 12:32:35.079000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 16 12:32:35.097776 kernel: audit: type=1300 audit(1752669155.079:219): arch=c000003e syscall=188 success=no exit=-22 a0=c000b42120 a1=c0006ef830 a2=c000b420f0 a3=25 items=0 ppid=1 pid=1834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:35.097839 kernel: audit: type=1327 audit(1752669155.079:219): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 16 12:32:35.097874 kernel: audit: type=1400 audit(1752669155.080:220): avc: denied { mac_admin } for pid=1834 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:32:35.080000 audit[1834]: AVC avc: denied { mac_admin } for pid=1834 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:32:35.098806 kubelet[1834]: I0716 12:32:35.098787 1834 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 16 12:32:35.101042 kubelet[1834]: I0716 12:32:35.101028 1834 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 16 12:32:35.101387 kubelet[1834]: E0716 12:32:35.101367 1834 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-f25or.gb1.brightbox.com\" not found" Jul 16 12:32:35.080000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 16 12:32:35.080000 audit[1834]: 
SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000a8afa0 a1=c0006ef848 a2=c000b421b0 a3=25 items=0 ppid=1 pid=1834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:35.080000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 16 12:32:35.089000 audit[1847]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1847 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:35.089000 audit[1847]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffce4af6e10 a2=0 a3=7ffce4af6dfc items=0 ppid=1834 pid=1847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:35.089000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 16 12:32:35.100000 audit[1848]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1848 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:35.100000 audit[1848]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe98b9e260 a2=0 a3=7ffe98b9e24c items=0 ppid=1834 pid=1848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:35.100000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 16 12:32:35.102616 kubelet[1834]: I0716 12:32:35.102599 1834 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 16 12:32:35.102770 kubelet[1834]: I0716 12:32:35.102759 1834 reconciler.go:26] "Reconciler: start to sync state" Jul 16 12:32:35.103754 kubelet[1834]: W0716 12:32:35.103719 1834 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.89.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.89.194:6443: connect: connection refused Jul 16 12:32:35.103868 kubelet[1834]: E0716 12:32:35.103852 1834 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.89.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.89.194:6443: connect: connection refused" logger="UnhandledError" Jul 16 12:32:35.104029 kubelet[1834]: E0716 12:32:35.104009 1834 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.89.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-f25or.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.89.194:6443: connect: connection refused" interval="200ms" Jul 16 12:32:35.106000 audit[1850]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1850 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:35.106000 audit[1850]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc6648fb50 
a2=0 a3=7ffc6648fb3c items=0 ppid=1834 pid=1850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:35.106000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 16 12:32:35.113275 kubelet[1834]: I0716 12:32:35.113255 1834 factory.go:221] Registration of the containerd container factory successfully Jul 16 12:32:35.113386 kubelet[1834]: I0716 12:32:35.113376 1834 factory.go:221] Registration of the systemd container factory successfully Jul 16 12:32:35.113530 kubelet[1834]: I0716 12:32:35.113509 1834 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 16 12:32:35.119000 audit[1852]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1852 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:35.119000 audit[1852]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd1a755760 a2=0 a3=7ffd1a75574c items=0 ppid=1834 pid=1852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:35.119000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 16 12:32:35.130457 kubelet[1834]: E0716 12:32:35.130434 1834 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 16 12:32:35.140000 audit[1859]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1859 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:35.140000 audit[1859]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffcd6389530 a2=0 a3=7ffcd638951c items=0 ppid=1834 pid=1859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:35.140000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jul 16 12:32:35.141793 kubelet[1834]: I0716 12:32:35.141757 1834 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jul 16 12:32:35.143000 audit[1862]: NETFILTER_CFG table=mangle:31 family=2 entries=1 op=nft_register_chain pid=1862 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:35.143000 audit[1862]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe54ae5860 a2=0 a3=7ffe54ae584c items=0 ppid=1834 pid=1862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:35.143000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 16 12:32:35.144000 audit[1861]: NETFILTER_CFG table=mangle:32 family=10 entries=2 op=nft_register_chain pid=1861 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:35.144000 audit[1861]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe73ac2ca0 a2=0 a3=7ffe73ac2c8c items=0 ppid=1834 pid=1861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:35.144000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 16 12:32:35.145307 kubelet[1834]: I0716 12:32:35.145289 1834 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 16 12:32:35.145411 kubelet[1834]: I0716 12:32:35.145401 1834 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 16 12:32:35.145505 kubelet[1834]: I0716 12:32:35.145496 1834 kubelet.go:2321] "Starting kubelet main sync loop" Jul 16 12:32:35.145656 kubelet[1834]: E0716 12:32:35.145614 1834 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 16 12:32:35.145000 audit[1865]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1865 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:35.145000 audit[1865]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff5709b9f0 a2=0 a3=7fff5709b9dc items=0 ppid=1834 pid=1865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:35.145000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 16 12:32:35.147161 kubelet[1834]: W0716 12:32:35.147139 1834 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.89.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.89.194:6443: connect: connection refused Jul 16 12:32:35.147240 kubelet[1834]: E0716 12:32:35.147176 1834 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.89.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.89.194:6443: connect: connection refused" logger="UnhandledError" Jul 16 12:32:35.148074 kubelet[1834]: I0716 12:32:35.148053 1834 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 16 12:32:35.148074 kubelet[1834]: I0716 
12:32:35.148070 1834 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 16 12:32:35.148183 kubelet[1834]: I0716 12:32:35.148090 1834 state_mem.go:36] "Initialized new in-memory state store" Jul 16 12:32:35.147000 audit[1866]: NETFILTER_CFG table=mangle:34 family=10 entries=1 op=nft_register_chain pid=1866 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:35.147000 audit[1866]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe39d6e840 a2=0 a3=7ffe39d6e82c items=0 ppid=1834 pid=1866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:35.147000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 16 12:32:35.148000 audit[1867]: NETFILTER_CFG table=filter:35 family=2 entries=1 op=nft_register_chain pid=1867 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:35.148000 audit[1867]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcb1740740 a2=0 a3=7ffcb174072c items=0 ppid=1834 pid=1867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:35.148000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 16 12:32:35.149717 kubelet[1834]: I0716 12:32:35.149564 1834 policy_none.go:49] "None policy: Start" Jul 16 12:32:35.150390 kubelet[1834]: I0716 12:32:35.150372 1834 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 16 12:32:35.150508 kubelet[1834]: I0716 12:32:35.150498 1834 state_mem.go:35] "Initializing new in-memory state store" Jul 16 12:32:35.150000 audit[1868]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1868 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:35.150000 audit[1868]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffec3c2a340 a2=0 a3=7ffec3c2a32c items=0 ppid=1834 pid=1868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:35.150000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 16 12:32:35.151000 audit[1869]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1869 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:35.151000 audit[1869]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff8972f0f0 a2=0 a3=7fff8972f0dc items=0 ppid=1834 pid=1869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:35.151000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 16 12:32:35.155445 kubelet[1834]: I0716 12:32:35.155409 1834 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 16 12:32:35.154000 audit[1834]: AVC avc: denied { mac_admin } for 
pid=1834 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:32:35.154000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 16 12:32:35.154000 audit[1834]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000e4cae0 a1=c000c0d590 a2=c000e4cab0 a3=25 items=0 ppid=1 pid=1834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:35.154000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 16 12:32:35.156893 kubelet[1834]: I0716 12:32:35.156873 1834 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Jul 16 12:32:35.157093 kubelet[1834]: I0716 12:32:35.157081 1834 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 16 12:32:35.157216 kubelet[1834]: I0716 12:32:35.157166 1834 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 16 12:32:35.157595 kubelet[1834]: I0716 12:32:35.157581 1834 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 16 12:32:35.158955 kubelet[1834]: E0716 12:32:35.158936 1834 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-f25or.gb1.brightbox.com\" not found" Jul 16 12:32:35.259587 kubelet[1834]: I0716 12:32:35.259435 1834 kubelet_node_status.go:72] "Attempting to register node" node="srv-f25or.gb1.brightbox.com" Jul 16 12:32:35.267647 kubelet[1834]: E0716 12:32:35.267617 1834 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.89.194:6443/api/v1/nodes\": dial tcp 10.244.89.194:6443: connect: connection refused" node="srv-f25or.gb1.brightbox.com" Jul 16 12:32:35.304825 kubelet[1834]: E0716 12:32:35.304770 1834 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.89.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-f25or.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.89.194:6443: connect: connection refused" interval="400ms" Jul 16 12:32:35.405006 kubelet[1834]: I0716 12:32:35.404905 1834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e688e6895aa42f6104360444613cb1f5-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-f25or.gb1.brightbox.com\" (UID: \"e688e6895aa42f6104360444613cb1f5\") " pod="kube-system/kube-controller-manager-srv-f25or.gb1.brightbox.com" Jul 16 12:32:35.405006 kubelet[1834]: I0716 12:32:35.405014 1834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/09194960314de67f51639a287c6a7593-kubeconfig\") pod \"kube-scheduler-srv-f25or.gb1.brightbox.com\" (UID: \"09194960314de67f51639a287c6a7593\") " pod="kube-system/kube-scheduler-srv-f25or.gb1.brightbox.com" Jul 16 12:32:35.405587 kubelet[1834]: I0716 
12:32:35.405069 1834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7072e8d6d8e7c25dca96d68d664944fd-usr-share-ca-certificates\") pod \"kube-apiserver-srv-f25or.gb1.brightbox.com\" (UID: \"7072e8d6d8e7c25dca96d68d664944fd\") " pod="kube-system/kube-apiserver-srv-f25or.gb1.brightbox.com" Jul 16 12:32:35.405587 kubelet[1834]: I0716 12:32:35.405117 1834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e688e6895aa42f6104360444613cb1f5-flexvolume-dir\") pod \"kube-controller-manager-srv-f25or.gb1.brightbox.com\" (UID: \"e688e6895aa42f6104360444613cb1f5\") " pod="kube-system/kube-controller-manager-srv-f25or.gb1.brightbox.com" Jul 16 12:32:35.405587 kubelet[1834]: I0716 12:32:35.405161 1834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e688e6895aa42f6104360444613cb1f5-k8s-certs\") pod \"kube-controller-manager-srv-f25or.gb1.brightbox.com\" (UID: \"e688e6895aa42f6104360444613cb1f5\") " pod="kube-system/kube-controller-manager-srv-f25or.gb1.brightbox.com" Jul 16 12:32:35.405587 kubelet[1834]: I0716 12:32:35.405201 1834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e688e6895aa42f6104360444613cb1f5-kubeconfig\") pod \"kube-controller-manager-srv-f25or.gb1.brightbox.com\" (UID: \"e688e6895aa42f6104360444613cb1f5\") " pod="kube-system/kube-controller-manager-srv-f25or.gb1.brightbox.com" Jul 16 12:32:35.405587 kubelet[1834]: I0716 12:32:35.405281 1834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7072e8d6d8e7c25dca96d68d664944fd-ca-certs\") pod \"kube-apiserver-srv-f25or.gb1.brightbox.com\" (UID: \"7072e8d6d8e7c25dca96d68d664944fd\") " pod="kube-system/kube-apiserver-srv-f25or.gb1.brightbox.com" Jul 16 12:32:35.406100 kubelet[1834]: I0716 12:32:35.405329 1834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7072e8d6d8e7c25dca96d68d664944fd-k8s-certs\") pod \"kube-apiserver-srv-f25or.gb1.brightbox.com\" (UID: \"7072e8d6d8e7c25dca96d68d664944fd\") " pod="kube-system/kube-apiserver-srv-f25or.gb1.brightbox.com" Jul 16 12:32:35.406100 kubelet[1834]: I0716 12:32:35.405369 1834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e688e6895aa42f6104360444613cb1f5-ca-certs\") pod \"kube-controller-manager-srv-f25or.gb1.brightbox.com\" (UID: \"e688e6895aa42f6104360444613cb1f5\") " pod="kube-system/kube-controller-manager-srv-f25or.gb1.brightbox.com" Jul 16 12:32:35.472092 kubelet[1834]: I0716 12:32:35.472020 1834 kubelet_node_status.go:72] "Attempting to register node" node="srv-f25or.gb1.brightbox.com" Jul 16 12:32:35.472642 kubelet[1834]: E0716 12:32:35.472566 1834 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.89.194:6443/api/v1/nodes\": dial tcp 10.244.89.194:6443: connect: connection refused" node="srv-f25or.gb1.brightbox.com" Jul 16 12:32:35.567486 env[1306]: time="2025-07-16T12:32:35.567189674Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-srv-f25or.gb1.brightbox.com,Uid:7072e8d6d8e7c25dca96d68d664944fd,Namespace:kube-system,Attempt:0,}" Jul 16 12:32:35.572164 env[1306]: time="2025-07-16T12:32:35.571418922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-f25or.gb1.brightbox.com,Uid:09194960314de67f51639a287c6a7593,Namespace:kube-system,Attempt:0,}" Jul 16 12:32:35.575483 env[1306]: time="2025-07-16T12:32:35.575418474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-f25or.gb1.brightbox.com,Uid:e688e6895aa42f6104360444613cb1f5,Namespace:kube-system,Attempt:0,}" Jul 16 12:32:35.706494 kubelet[1834]: E0716 12:32:35.706392 1834 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.89.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-f25or.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.89.194:6443: connect: connection refused" interval="800ms" Jul 16 12:32:35.877033 kubelet[1834]: I0716 12:32:35.876852 1834 kubelet_node_status.go:72] "Attempting to register node" node="srv-f25or.gb1.brightbox.com" Jul 16 12:32:35.878138 kubelet[1834]: E0716 12:32:35.877609 1834 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.89.194:6443/api/v1/nodes\": dial tcp 10.244.89.194:6443: connect: connection refused" node="srv-f25or.gb1.brightbox.com" Jul 16 12:32:36.409072 kubelet[1834]: W0716 12:32:36.408908 1834 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.89.194:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.89.194:6443: connect: connection refused Jul 16 12:32:36.409072 kubelet[1834]: E0716 12:32:36.409072 1834 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.89.194:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.89.194:6443: connect: connection refused" logger="UnhandledError" Jul 16 12:32:36.508346 kubelet[1834]: E0716 12:32:36.508253 1834 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.89.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-f25or.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.89.194:6443: connect: connection refused" interval="1.6s" Jul 16 12:32:36.604601 kubelet[1834]: W0716 12:32:36.604483 1834 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.89.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-f25or.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.89.194:6443: connect: connection refused Jul 16 12:32:36.604601 kubelet[1834]: E0716 12:32:36.604563 1834 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.89.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-f25or.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.89.194:6443: connect: connection refused" logger="UnhandledError" Jul 16 12:32:36.660399 kubelet[1834]: W0716 12:32:36.660120 1834 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.89.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.89.194:6443: connect: 
connection refused Jul 16 12:32:36.660399 kubelet[1834]: E0716 12:32:36.660242 1834 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.89.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.89.194:6443: connect: connection refused" logger="UnhandledError" Jul 16 12:32:36.681015 kubelet[1834]: I0716 12:32:36.680980 1834 kubelet_node_status.go:72] "Attempting to register node" node="srv-f25or.gb1.brightbox.com" Jul 16 12:32:36.681377 kubelet[1834]: E0716 12:32:36.681352 1834 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.89.194:6443/api/v1/nodes\": dial tcp 10.244.89.194:6443: connect: connection refused" node="srv-f25or.gb1.brightbox.com" Jul 16 12:32:36.703307 kubelet[1834]: W0716 12:32:36.703213 1834 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.89.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.89.194:6443: connect: connection refused Jul 16 12:32:36.703307 kubelet[1834]: E0716 12:32:36.703313 1834 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.89.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.89.194:6443: connect: connection refused" logger="UnhandledError" Jul 16 12:32:36.768228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1902747903.mount: Deactivated successfully. Jul 16 12:32:36.771791 env[1306]: time="2025-07-16T12:32:36.771753618Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:36.774024 env[1306]: time="2025-07-16T12:32:36.773996864Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:36.775698 env[1306]: time="2025-07-16T12:32:36.775635950Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:36.777064 env[1306]: time="2025-07-16T12:32:36.777042600Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:36.778617 env[1306]: time="2025-07-16T12:32:36.778580996Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:36.781290 env[1306]: time="2025-07-16T12:32:36.781261321Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:36.784024 env[1306]: time="2025-07-16T12:32:36.783998373Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:36.785175 env[1306]: time="2025-07-16T12:32:36.785145233Z" level=info msg="ImageCreate 
event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:36.786704 env[1306]: time="2025-07-16T12:32:36.786664220Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:36.787471 env[1306]: time="2025-07-16T12:32:36.787447766Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:36.788200 env[1306]: time="2025-07-16T12:32:36.788178899Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:36.789383 env[1306]: time="2025-07-16T12:32:36.789360384Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:36.821292 env[1306]: time="2025-07-16T12:32:36.821182729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 16 12:32:36.821292 env[1306]: time="2025-07-16T12:32:36.821248043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 16 12:32:36.823111 env[1306]: time="2025-07-16T12:32:36.821269925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 16 12:32:36.823922 env[1306]: time="2025-07-16T12:32:36.823754999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 16 12:32:36.823922 env[1306]: time="2025-07-16T12:32:36.823784355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 16 12:32:36.823922 env[1306]: time="2025-07-16T12:32:36.823795026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 16 12:32:36.824979 env[1306]: time="2025-07-16T12:32:36.824896602Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/11e3907fb651de8ad505002c7974e04f6b9ad0855069f8448e4e9b14c5f35adf pid=1885 runtime=io.containerd.runc.v2 Jul 16 12:32:36.825149 env[1306]: time="2025-07-16T12:32:36.823539583Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6ed4f6ead812b8a58426237bdefd824ca57a71c3f9d78c8ec1004a395a68b16b pid=1881 runtime=io.containerd.runc.v2 Jul 16 12:32:36.834573 env[1306]: time="2025-07-16T12:32:36.834383079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 16 12:32:36.834573 env[1306]: time="2025-07-16T12:32:36.834427914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 16 12:32:36.834573 env[1306]: time="2025-07-16T12:32:36.834440340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 16 12:32:36.834966 env[1306]: time="2025-07-16T12:32:36.834883869Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c0ca7c7ae4fc19396d0002a72b7e40274e1344bceaa87b90ea79e90e1c21453d pid=1907 runtime=io.containerd.runc.v2 Jul 16 12:32:36.916824 env[1306]: time="2025-07-16T12:32:36.916721911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-f25or.gb1.brightbox.com,Uid:e688e6895aa42f6104360444613cb1f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"11e3907fb651de8ad505002c7974e04f6b9ad0855069f8448e4e9b14c5f35adf\"" Jul 16 12:32:36.921989 env[1306]: time="2025-07-16T12:32:36.921951563Z" level=info msg="CreateContainer within sandbox \"11e3907fb651de8ad505002c7974e04f6b9ad0855069f8448e4e9b14c5f35adf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 16 12:32:36.941101 env[1306]: time="2025-07-16T12:32:36.941060832Z" level=info msg="CreateContainer within sandbox \"11e3907fb651de8ad505002c7974e04f6b9ad0855069f8448e4e9b14c5f35adf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"93659bf48036e8f19ca31b202ecdeb619daf0318f0721da6eed4bfae60422a1b\"" Jul 16 12:32:36.941803 env[1306]: time="2025-07-16T12:32:36.941774900Z" level=info msg="StartContainer for \"93659bf48036e8f19ca31b202ecdeb619daf0318f0721da6eed4bfae60422a1b\"" Jul 16 12:32:36.953222 env[1306]: time="2025-07-16T12:32:36.953182603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-f25or.gb1.brightbox.com,Uid:09194960314de67f51639a287c6a7593,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ed4f6ead812b8a58426237bdefd824ca57a71c3f9d78c8ec1004a395a68b16b\"" Jul 16 12:32:36.955520 env[1306]: time="2025-07-16T12:32:36.955491686Z" level=info msg="CreateContainer within sandbox \"6ed4f6ead812b8a58426237bdefd824ca57a71c3f9d78c8ec1004a395a68b16b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 16 12:32:36.975239 env[1306]: time="2025-07-16T12:32:36.975185117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-f25or.gb1.brightbox.com,Uid:7072e8d6d8e7c25dca96d68d664944fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0ca7c7ae4fc19396d0002a72b7e40274e1344bceaa87b90ea79e90e1c21453d\"" Jul 16 12:32:36.978444 env[1306]: time="2025-07-16T12:32:36.978413056Z" level=info msg="CreateContainer within sandbox \"c0ca7c7ae4fc19396d0002a72b7e40274e1344bceaa87b90ea79e90e1c21453d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 16 12:32:36.984879 env[1306]: time="2025-07-16T12:32:36.984845151Z" level=info msg="CreateContainer within sandbox \"6ed4f6ead812b8a58426237bdefd824ca57a71c3f9d78c8ec1004a395a68b16b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8b65817a8a15cc270e702858f93409550aad357dec714e2f6d9b1ae7ceb9ed63\"" Jul 16 12:32:36.985326 env[1306]: time="2025-07-16T12:32:36.985304128Z" level=info msg="StartContainer for \"8b65817a8a15cc270e702858f93409550aad357dec714e2f6d9b1ae7ceb9ed63\"" Jul 16 12:32:36.994446 env[1306]: time="2025-07-16T12:32:36.994411331Z" level=info msg="CreateContainer within sandbox \"c0ca7c7ae4fc19396d0002a72b7e40274e1344bceaa87b90ea79e90e1c21453d\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"599f7b13c60c7e6081cb6e79ef8280bdfea66b9c2059c2bbfad6767e55b6875a\"" Jul 16 12:32:36.995103 env[1306]: time="2025-07-16T12:32:36.995073784Z" level=info msg="StartContainer for \"599f7b13c60c7e6081cb6e79ef8280bdfea66b9c2059c2bbfad6767e55b6875a\"" Jul 16 12:32:37.053780 env[1306]: time="2025-07-16T12:32:37.053737353Z" level=info msg="StartContainer for \"93659bf48036e8f19ca31b202ecdeb619daf0318f0721da6eed4bfae60422a1b\" returns successfully" Jul 16 12:32:37.068048 kubelet[1834]: E0716 12:32:37.067964 1834 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.89.194:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.89.194:6443: connect: connection refused" logger="UnhandledError" Jul 16 12:32:37.120618 env[1306]: time="2025-07-16T12:32:37.120581182Z" level=info msg="StartContainer for \"8b65817a8a15cc270e702858f93409550aad357dec714e2f6d9b1ae7ceb9ed63\" returns successfully" Jul 16 12:32:37.134028 env[1306]: time="2025-07-16T12:32:37.133988781Z" level=info msg="StartContainer for \"599f7b13c60c7e6081cb6e79ef8280bdfea66b9c2059c2bbfad6767e55b6875a\" returns successfully" Jul 16 12:32:37.264435 kubelet[1834]: E0716 12:32:37.264329 1834 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.89.194:6443/api/v1/namespaces/default/events\": dial tcp 10.244.89.194:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-f25or.gb1.brightbox.com.1852bb4f7c4f7a82 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-f25or.gb1.brightbox.com,UID:srv-f25or.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-f25or.gb1.brightbox.com,},FirstTimestamp:2025-07-16 12:32:35.062938242 +0000 UTC m=+0.385013369,LastTimestamp:2025-07-16 12:32:35.062938242 +0000 UTC m=+0.385013369,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-f25or.gb1.brightbox.com,}" Jul 16 12:32:38.109112 kubelet[1834]: E0716 12:32:38.109064 1834 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.89.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-f25or.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.89.194:6443: connect: connection refused" interval="3.2s" Jul 16 12:32:38.284288 kubelet[1834]: I0716 12:32:38.284259 1834 kubelet_node_status.go:72] "Attempting to register node" node="srv-f25or.gb1.brightbox.com" Jul 16 12:32:40.056254 kubelet[1834]: I0716 12:32:40.056209 1834 apiserver.go:52] "Watching apiserver" Jul 16 12:32:40.084987 kubelet[1834]: I0716 12:32:40.084959 1834 kubelet_node_status.go:75] "Successfully registered node" node="srv-f25or.gb1.brightbox.com" Jul 16 12:32:40.103296 kubelet[1834]: I0716 12:32:40.103266 1834 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 16 12:32:40.602872 kubelet[1834]: E0716 12:32:40.602755 1834 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-f25or.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-srv-f25or.gb1.brightbox.com" Jul 16 12:32:42.031170 kubelet[1834]: W0716 12:32:42.031124 1834 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 16 12:32:42.137509 systemd[1]: Reloading. Jul 16 12:32:42.243116 /usr/lib/systemd/system-generators/torcx-generator[2121]: time="2025-07-16T12:32:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 16 12:32:42.243791 /usr/lib/systemd/system-generators/torcx-generator[2121]: time="2025-07-16T12:32:42Z" level=info msg="torcx already run" Jul 16 12:32:42.361266 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 16 12:32:42.361293 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 16 12:32:42.381394 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 16 12:32:42.476816 systemd[1]: Stopping kubelet.service... Jul 16 12:32:42.497573 systemd[1]: kubelet.service: Deactivated successfully. Jul 16 12:32:42.498546 systemd[1]: Stopped kubelet.service. Jul 16 12:32:42.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:42.504178 kernel: kauditd_printk_skb: 43 callbacks suppressed Jul 16 12:32:42.504337 kernel: audit: type=1131 audit(1752669162.497:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:42.509028 systemd[1]: Starting kubelet.service... Jul 16 12:32:43.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:43.538055 systemd[1]: Started kubelet.service. Jul 16 12:32:43.546766 kernel: audit: type=1130 audit(1752669163.536:235): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:43.657599 kubelet[2183]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 16 12:32:43.659079 kubelet[2183]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 16 12:32:43.659079 kubelet[2183]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 16 12:32:43.659706 kubelet[2183]: I0716 12:32:43.658541 2183 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 16 12:32:43.678503 kubelet[2183]: I0716 12:32:43.678461 2183 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 16 12:32:43.678503 kubelet[2183]: I0716 12:32:43.678495 2183 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 16 12:32:43.678961 kubelet[2183]: I0716 12:32:43.678924 2183 server.go:934] "Client rotation is on, will bootstrap in background" Jul 16 12:32:43.682997 kubelet[2183]: I0716 12:32:43.682973 2183 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 16 12:32:43.701909 kubelet[2183]: I0716 12:32:43.701883 2183 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 16 12:32:43.709962 kubelet[2183]: E0716 12:32:43.709891 2183 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 16 12:32:43.710132 kubelet[2183]: I0716 12:32:43.710120 2183 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 16 12:32:43.714061 kubelet[2183]: I0716 12:32:43.714042 2183 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 16 12:32:43.714686 kubelet[2183]: I0716 12:32:43.714659 2183 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 16 12:32:43.714937 kubelet[2183]: I0716 12:32:43.714903 2183 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 16 12:32:43.715191 kubelet[2183]: I0716 12:32:43.715005 2183 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"srv-f25or.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 16 12:32:43.715416 kubelet[2183]: I0716 12:32:43.715403 2183 topology_manager.go:138] "Creating topology manager with none policy" Jul 16 12:32:43.715487 kubelet[2183]: I0716 12:32:43.715478 2183 container_manager_linux.go:300] "Creating device plugin manager" Jul 16 12:32:43.715594 kubelet[2183]: I0716 12:32:43.715585 2183 state_mem.go:36] "Initialized new in-memory state store" Jul 16 12:32:43.715820 kubelet[2183]: I0716 12:32:43.715811 2183 kubelet.go:408] "Attempting to sync node with API server" Jul 16 12:32:43.723515 kubelet[2183]: I0716 12:32:43.723495 2183 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 16 12:32:43.723744 kubelet[2183]: I0716 12:32:43.723732 2183 kubelet.go:314] "Adding apiserver pod source" Jul 16 12:32:43.723868 kubelet[2183]: I0716 12:32:43.723858 2183 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 16 12:32:43.727422 kubelet[2183]: I0716 12:32:43.727404 2183 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 16 12:32:43.728022 kubelet[2183]: I0716 12:32:43.728005 2183 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 16 12:32:43.728612 kubelet[2183]: I0716 12:32:43.728589 2183 server.go:1274] "Started kubelet" Jul 16 12:32:43.733180 kubelet[2183]: I0716 12:32:43.732636 2183 apiserver.go:52] "Watching apiserver" Jul 16 12:32:43.741552 kubelet[2183]: I0716 12:32:43.741194 2183 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 16 12:32:43.741870 kubelet[2183]: I0716 12:32:43.741845 2183 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 16 12:32:43.743000 audit[2183]: AVC avc: denied { mac_admin } for pid=2183 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:32:43.748137 kubelet[2183]: I0716 
12:32:43.748118 2183 server.go:449] "Adding debug handlers to kubelet server" Jul 16 12:32:43.743000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 16 12:32:43.751468 kernel: audit: type=1400 audit(1752669163.743:236): avc: denied { mac_admin } for pid=2183 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:32:43.751612 kernel: audit: type=1401 audit(1752669163.743:236): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 16 12:32:43.752704 kubelet[2183]: E0716 12:32:43.752650 2183 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 16 12:32:43.743000 audit[2183]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c9c180 a1=c0009a8f78 a2=c000c9c150 a3=25 items=0 ppid=1 pid=2183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:43.754826 kubelet[2183]: I0716 12:32:43.754794 2183 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 16 12:32:43.757689 kernel: audit: type=1300 audit(1752669163.743:236): arch=c000003e syscall=188 success=no exit=-22 a0=c000c9c180 a1=c0009a8f78 a2=c000c9c150 a3=25 items=0 ppid=1 pid=2183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:43.758185 kubelet[2183]: I0716 12:32:43.758161 2183 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Jul 16 12:32:43.758327 kubelet[2183]: I0716 12:32:43.758312 2183 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Jul 16 12:32:43.758420 kubelet[2183]: I0716 12:32:43.758411 2183 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 16 12:32:43.743000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 16 12:32:43.764439 kernel: audit: type=1327 audit(1752669163.743:236): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 16 12:32:43.764506 kubelet[2183]: I0716 12:32:43.764110 2183 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 16 12:32:43.756000 audit[2183]: AVC avc: denied { mac_admin } for pid=2183 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:32:43.768614 kubelet[2183]: I0716 12:32:43.768572 2183 
volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 16 12:32:43.769865 kernel: audit: type=1400 audit(1752669163.756:237): avc: denied { mac_admin } for pid=2183 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:32:43.756000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 16 12:32:43.777682 kernel: audit: type=1401 audit(1752669163.756:237): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 16 12:32:43.756000 audit[2183]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b93ec0 a1=c000b9f0f8 a2=c000bdd7a0 a3=25 items=0 ppid=1 pid=2183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:43.782695 kernel: audit: type=1300 audit(1752669163.756:237): arch=c000003e syscall=188 success=no exit=-22 a0=c000b93ec0 a1=c000b9f0f8 a2=c000bdd7a0 a3=25 items=0 ppid=1 pid=2183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:43.756000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 16 12:32:43.787682 kernel: audit: type=1327 audit(1752669163.756:237): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 16 12:32:43.788899 kubelet[2183]: I0716 12:32:43.788881 2183 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 16 12:32:43.789110 kubelet[2183]: I0716 12:32:43.789099 2183 reconciler.go:26] "Reconciler: start to sync state" Jul 16 12:32:43.793752 kubelet[2183]: I0716 12:32:43.793717 2183 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 16 12:32:43.795092 kubelet[2183]: I0716 12:32:43.795034 2183 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 16 12:32:43.795209 kubelet[2183]: I0716 12:32:43.795198 2183 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 16 12:32:43.795323 kubelet[2183]: I0716 12:32:43.795313 2183 kubelet.go:2321] "Starting kubelet main sync loop" Jul 16 12:32:43.795445 kubelet[2183]: E0716 12:32:43.795418 2183 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 16 12:32:43.802129 kubelet[2183]: I0716 12:32:43.802058 2183 factory.go:221] Registration of the containerd container factory successfully Jul 16 12:32:43.802230 kubelet[2183]: I0716 12:32:43.802219 2183 factory.go:221] Registration of the systemd container factory successfully Jul 16 12:32:43.802401 kubelet[2183]: I0716 12:32:43.802379 2183 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 16 12:32:43.884705 kubelet[2183]: I0716 12:32:43.884298 2183 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 16 12:32:43.884705 kubelet[2183]: I0716 12:32:43.884330 2183 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 16 12:32:43.884705 kubelet[2183]: I0716 12:32:43.884357 2183 state_mem.go:36] "Initialized new in-memory state store" Jul 16 12:32:43.884705 kubelet[2183]: I0716 12:32:43.884563 2183 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 16 12:32:43.884705 kubelet[2183]: I0716 12:32:43.884576 2183 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 16 12:32:43.884705 kubelet[2183]: I0716 12:32:43.884642 2183 policy_none.go:49] "None policy: Start" Jul 16 12:32:43.886374 kubelet[2183]: I0716 12:32:43.885484 2183 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 16 12:32:43.886374 kubelet[2183]: I0716 12:32:43.885579 2183 state_mem.go:35] "Initializing new in-memory state store" Jul 16 12:32:43.886374 kubelet[2183]: I0716 12:32:43.885820 2183 state_mem.go:75] "Updated machine memory state" Jul 16 12:32:43.886000 audit[2183]: AVC avc: denied { mac_admin } for pid=2183 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:32:43.886000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 16 12:32:43.886000 audit[2183]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b64570 a1=c000ef4990 a2=c000b64540 a3=25 items=0 ppid=1 pid=2183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:43.886000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 16 12:32:43.890657 kubelet[2183]: I0716 12:32:43.888560 2183 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 16 12:32:43.890657 kubelet[2183]: I0716 12:32:43.888777 2183 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Jul 16 12:32:43.890657 kubelet[2183]: I0716 12:32:43.888970 2183 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 16 12:32:43.890657 kubelet[2183]: I0716 12:32:43.888994 2183 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 16 12:32:43.890657 kubelet[2183]: I0716 12:32:43.889805 2183 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 16 12:32:43.906701 kubelet[2183]: W0716 12:32:43.904948 2183 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 16 12:32:43.912957 kubelet[2183]: W0716 12:32:43.911398 2183 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 16 12:32:43.966077 kubelet[2183]: I0716 12:32:43.965981 2183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-f25or.gb1.brightbox.com" podStartSLOduration=1.965948652 podStartE2EDuration="1.965948652s" podCreationTimestamp="2025-07-16 12:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 12:32:43.955839569 +0000 UTC m=+0.385881213" watchObservedRunningTime="2025-07-16 12:32:43.965948652 +0000 UTC m=+0.395990286" Jul 16 12:32:43.991391 kubelet[2183]: I0716 12:32:43.991355 2183 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 16 12:32:43.991743 kubelet[2183]: I0716 12:32:43.991721 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/09194960314de67f51639a287c6a7593-kubeconfig\") pod \"kube-scheduler-srv-f25or.gb1.brightbox.com\" (UID: \"09194960314de67f51639a287c6a7593\") " pod="kube-system/kube-scheduler-srv-f25or.gb1.brightbox.com" Jul 16 12:32:43.991902 kubelet[2183]: I0716 12:32:43.991883 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7072e8d6d8e7c25dca96d68d664944fd-usr-share-ca-certificates\") pod \"kube-apiserver-srv-f25or.gb1.brightbox.com\" (UID: \"7072e8d6d8e7c25dca96d68d664944fd\") " pod="kube-system/kube-apiserver-srv-f25or.gb1.brightbox.com" Jul 16 12:32:43.992000 kubelet[2183]: I0716 12:32:43.991985 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e688e6895aa42f6104360444613cb1f5-k8s-certs\") pod \"kube-controller-manager-srv-f25or.gb1.brightbox.com\" (UID: \"e688e6895aa42f6104360444613cb1f5\") " pod="kube-system/kube-controller-manager-srv-f25or.gb1.brightbox.com" Jul 16 12:32:43.992098 kubelet[2183]: I0716 12:32:43.992082 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e688e6895aa42f6104360444613cb1f5-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-f25or.gb1.brightbox.com\" (UID: \"e688e6895aa42f6104360444613cb1f5\") " pod="kube-system/kube-controller-manager-srv-f25or.gb1.brightbox.com" Jul 16 
12:32:43.992181 kubelet[2183]: I0716 12:32:43.992167 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e688e6895aa42f6104360444613cb1f5-kubeconfig\") pod \"kube-controller-manager-srv-f25or.gb1.brightbox.com\" (UID: \"e688e6895aa42f6104360444613cb1f5\") " pod="kube-system/kube-controller-manager-srv-f25or.gb1.brightbox.com" Jul 16 12:32:43.992267 kubelet[2183]: I0716 12:32:43.992254 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7072e8d6d8e7c25dca96d68d664944fd-ca-certs\") pod \"kube-apiserver-srv-f25or.gb1.brightbox.com\" (UID: \"7072e8d6d8e7c25dca96d68d664944fd\") " pod="kube-system/kube-apiserver-srv-f25or.gb1.brightbox.com" Jul 16 12:32:43.992346 kubelet[2183]: I0716 12:32:43.992333 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7072e8d6d8e7c25dca96d68d664944fd-k8s-certs\") pod \"kube-apiserver-srv-f25or.gb1.brightbox.com\" (UID: \"7072e8d6d8e7c25dca96d68d664944fd\") " pod="kube-system/kube-apiserver-srv-f25or.gb1.brightbox.com" Jul 16 12:32:43.992423 kubelet[2183]: I0716 12:32:43.992410 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e688e6895aa42f6104360444613cb1f5-ca-certs\") pod \"kube-controller-manager-srv-f25or.gb1.brightbox.com\" (UID: \"e688e6895aa42f6104360444613cb1f5\") " pod="kube-system/kube-controller-manager-srv-f25or.gb1.brightbox.com" Jul 16 12:32:43.992503 kubelet[2183]: I0716 12:32:43.992490 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e688e6895aa42f6104360444613cb1f5-flexvolume-dir\") pod \"kube-controller-manager-srv-f25or.gb1.brightbox.com\" (UID: \"e688e6895aa42f6104360444613cb1f5\") " pod="kube-system/kube-controller-manager-srv-f25or.gb1.brightbox.com" Jul 16 12:32:44.007832 kubelet[2183]: I0716 12:32:44.007802 2183 kubelet_node_status.go:72] "Attempting to register node" node="srv-f25or.gb1.brightbox.com" Jul 16 12:32:44.024130 kubelet[2183]: I0716 12:32:44.024072 2183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-f25or.gb1.brightbox.com" podStartSLOduration=1.024045178 podStartE2EDuration="1.024045178s" podCreationTimestamp="2025-07-16 12:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 12:32:43.994504103 +0000 UTC m=+0.424545747" watchObservedRunningTime="2025-07-16 12:32:44.024045178 +0000 UTC m=+0.454086839" Jul 16 12:32:44.041381 kubelet[2183]: I0716 12:32:44.041346 2183 kubelet_node_status.go:111] "Node was previously registered" node="srv-f25or.gb1.brightbox.com" Jul 16 12:32:44.041647 kubelet[2183]: I0716 12:32:44.041634 2183 kubelet_node_status.go:75] "Successfully registered node" node="srv-f25or.gb1.brightbox.com" Jul 16 12:32:44.264803 kubelet[2183]: I0716 12:32:44.264744 2183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-f25or.gb1.brightbox.com" podStartSLOduration=1.264723192 podStartE2EDuration="1.264723192s" podCreationTimestamp="2025-07-16 12:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 12:32:44.024620997 +0000 UTC m=+0.454662628" watchObservedRunningTime="2025-07-16 12:32:44.264723192 +0000 UTC m=+0.694764836" Jul 16 12:32:48.599261 kubelet[2183]: I0716 12:32:48.599187 2183 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 16 12:32:48.601084 env[1306]: time="2025-07-16T12:32:48.600993538Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 16 12:32:48.602870 kubelet[2183]: I0716 12:32:48.602791 2183 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 16 12:32:49.332075 kubelet[2183]: I0716 12:32:49.332017 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0d1b153-c241-4d11-a47d-ad9fb9cda6bb-lib-modules\") pod \"kube-proxy-5cbtt\" (UID: \"e0d1b153-c241-4d11-a47d-ad9fb9cda6bb\") " pod="kube-system/kube-proxy-5cbtt" Jul 16 12:32:49.332075 kubelet[2183]: I0716 12:32:49.332063 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e0d1b153-c241-4d11-a47d-ad9fb9cda6bb-kube-proxy\") pod \"kube-proxy-5cbtt\" (UID: \"e0d1b153-c241-4d11-a47d-ad9fb9cda6bb\") " pod="kube-system/kube-proxy-5cbtt" Jul 16 12:32:49.332443 kubelet[2183]: I0716 12:32:49.332096 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0d1b153-c241-4d11-a47d-ad9fb9cda6bb-xtables-lock\") pod \"kube-proxy-5cbtt\" (UID: \"e0d1b153-c241-4d11-a47d-ad9fb9cda6bb\") " pod="kube-system/kube-proxy-5cbtt" Jul 16 12:32:49.332443 kubelet[2183]: I0716 12:32:49.332119 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2vst\" (UniqueName: \"kubernetes.io/projected/e0d1b153-c241-4d11-a47d-ad9fb9cda6bb-kube-api-access-x2vst\") pod \"kube-proxy-5cbtt\" (UID: \"e0d1b153-c241-4d11-a47d-ad9fb9cda6bb\") " pod="kube-system/kube-proxy-5cbtt" Jul 16 12:32:49.446539 kubelet[2183]: E0716 12:32:49.446474 2183 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 16 12:32:49.446880 kubelet[2183]: E0716 12:32:49.446861 2183 projected.go:194] Error preparing data for projected volume kube-api-access-x2vst for pod kube-system/kube-proxy-5cbtt: configmap "kube-root-ca.crt" not found Jul 16 12:32:49.447160 kubelet[2183]: E0716 12:32:49.447116 2183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e0d1b153-c241-4d11-a47d-ad9fb9cda6bb-kube-api-access-x2vst podName:e0d1b153-c241-4d11-a47d-ad9fb9cda6bb nodeName:}" failed. No retries permitted until 2025-07-16 12:32:49.947069164 +0000 UTC m=+6.377110819 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-x2vst" (UniqueName: "kubernetes.io/projected/e0d1b153-c241-4d11-a47d-ad9fb9cda6bb-kube-api-access-x2vst") pod "kube-proxy-5cbtt" (UID: "e0d1b153-c241-4d11-a47d-ad9fb9cda6bb") : configmap "kube-root-ca.crt" not found Jul 16 12:32:49.837028 kubelet[2183]: I0716 12:32:49.836766 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fb186040-7b72-4df7-b370-07f6ed2a2fde-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-tv755\" (UID: \"fb186040-7b72-4df7-b370-07f6ed2a2fde\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-tv755" Jul 16 12:32:49.837028 kubelet[2183]: I0716 12:32:49.836865 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pbsm\" (UniqueName: \"kubernetes.io/projected/fb186040-7b72-4df7-b370-07f6ed2a2fde-kube-api-access-4pbsm\") pod \"tigera-operator-5bf8dfcb4-tv755\" (UID: \"fb186040-7b72-4df7-b370-07f6ed2a2fde\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-tv755" Jul 16 12:32:49.949521 kubelet[2183]: I0716 12:32:49.949433 2183 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 16 12:32:49.982150 env[1306]: time="2025-07-16T12:32:49.982064693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-tv755,Uid:fb186040-7b72-4df7-b370-07f6ed2a2fde,Namespace:tigera-operator,Attempt:0,}" Jul 16 12:32:50.021104 env[1306]: time="2025-07-16T12:32:50.021010334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 16 12:32:50.021104 env[1306]: time="2025-07-16T12:32:50.021066175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 16 12:32:50.021753 env[1306]: time="2025-07-16T12:32:50.021078156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 16 12:32:50.021753 env[1306]: time="2025-07-16T12:32:50.021363631Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/47e74cc078daff959d5ccaf0862430d21605519c8e84fded128419b3d6d44a7c pid=2238 runtime=io.containerd.runc.v2 Jul 16 12:32:50.112191 env[1306]: time="2025-07-16T12:32:50.111985551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-tv755,Uid:fb186040-7b72-4df7-b370-07f6ed2a2fde,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"47e74cc078daff959d5ccaf0862430d21605519c8e84fded128419b3d6d44a7c\"" Jul 16 12:32:50.117355 env[1306]: time="2025-07-16T12:32:50.115845052Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 16 12:32:50.199583 env[1306]: time="2025-07-16T12:32:50.199511206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5cbtt,Uid:e0d1b153-c241-4d11-a47d-ad9fb9cda6bb,Namespace:kube-system,Attempt:0,}" Jul 16 12:32:50.221083 env[1306]: time="2025-07-16T12:32:50.220958198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 16 12:32:50.221083 env[1306]: time="2025-07-16T12:32:50.221004038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 16 12:32:50.221083 env[1306]: time="2025-07-16T12:32:50.221018899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 16 12:32:50.222047 env[1306]: time="2025-07-16T12:32:50.221766919Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9f5b65a5d3f46818f2c3e0760185f2e15c05b5a6582512966b9e06b2547e1a0c pid=2281 runtime=io.containerd.runc.v2 Jul 16 12:32:50.267235 env[1306]: time="2025-07-16T12:32:50.267192769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5cbtt,Uid:e0d1b153-c241-4d11-a47d-ad9fb9cda6bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f5b65a5d3f46818f2c3e0760185f2e15c05b5a6582512966b9e06b2547e1a0c\"" Jul 16 12:32:50.270100 env[1306]: time="2025-07-16T12:32:50.269966639Z" level=info msg="CreateContainer within sandbox \"9f5b65a5d3f46818f2c3e0760185f2e15c05b5a6582512966b9e06b2547e1a0c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 16 12:32:50.278916 env[1306]: time="2025-07-16T12:32:50.278881432Z" level=info msg="CreateContainer within sandbox \"9f5b65a5d3f46818f2c3e0760185f2e15c05b5a6582512966b9e06b2547e1a0c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"99152eb5fc7d1eec24828821ca0959e4f705aeecc85f06dfdecae3dbf70a3344\"" Jul 16 12:32:50.280611 env[1306]: time="2025-07-16T12:32:50.279791682Z" level=info msg="StartContainer for \"99152eb5fc7d1eec24828821ca0959e4f705aeecc85f06dfdecae3dbf70a3344\"" Jul 16 12:32:50.353122 env[1306]: time="2025-07-16T12:32:50.353070761Z" level=info msg="StartContainer for \"99152eb5fc7d1eec24828821ca0959e4f705aeecc85f06dfdecae3dbf70a3344\" returns successfully" Jul 16 12:32:50.654010 kernel: kauditd_printk_skb: 4 callbacks suppressed Jul 16 12:32:50.654238 kernel: audit: type=1325 audit(1752669170.647:239): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2383 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:50.647000 audit[2383]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2383 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:50.650000 audit[2384]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2384 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:50.657740 kernel: audit: type=1325 audit(1752669170.650:240): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2384 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:50.650000 audit[2384]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd24fed1a0 a2=0 a3=7ffd24fed18c items=0 ppid=2334 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.664700 kernel: audit: type=1300 audit(1752669170.650:240): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd24fed1a0 a2=0 a3=7ffd24fed18c items=0 ppid=2334 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
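The audit records through this stretch are generated as kube-proxy creates its KUBE-* chains with iptables/ip6tables; each PROCTITLE field is the command line of the process that triggered the event, hex-encoded with NUL bytes separating the arguments. A minimal Python sketch, not part of the captured log, that decodes such a value (decode_proctitle is an illustrative name; the sample hex is copied from the PROCTITLE record for audit event :240 immediately below):

def decode_proctitle(hex_value: str) -> str:
    # auditd hex-encodes the command line; the arguments are NUL-separated.
    raw = bytes.fromhex(hex_value)
    return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg)

sample = ("6970367461626C6573002D770035002D5700313030303030"
          "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65")
print(decode_proctitle(sample))
# prints: ip6tables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle

Decoded this way, the long proctitle blobs that follow read as ordinary iptables/ip6tables and iptables-restore/ip6tables-restore invocations against the KUBE-PROXY-CANARY, KUBE-EXTERNAL-SERVICES, KUBE-NODEPORTS, KUBE-SERVICES, KUBE-FORWARD, KUBE-PROXY-FIREWALL and KUBE-POSTROUTING chains.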
Jul 16 12:32:50.650000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 16 12:32:50.668690 kernel: audit: type=1327 audit(1752669170.650:240): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 16 12:32:50.668751 kernel: audit: type=1325 audit(1752669170.650:241): table=nat:40 family=10 entries=1 op=nft_register_chain pid=2385 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:50.650000 audit[2385]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=2385 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:50.650000 audit[2385]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe6a960e90 a2=0 a3=7ffe6a960e7c items=0 ppid=2334 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.674608 kernel: audit: type=1300 audit(1752669170.650:241): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe6a960e90 a2=0 a3=7ffe6a960e7c items=0 ppid=2334 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.674716 kernel: audit: type=1327 audit(1752669170.650:241): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 16 12:32:50.650000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 16 12:32:50.650000 audit[2386]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=2386 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:50.678740 kernel: audit: type=1325 audit(1752669170.650:242): table=filter:41 family=10 entries=1 op=nft_register_chain pid=2386 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:50.678848 kernel: audit: type=1300 audit(1752669170.650:242): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffccb5aa4a0 a2=0 a3=7ffccb5aa48c items=0 ppid=2334 pid=2386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.650000 audit[2386]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffccb5aa4a0 a2=0 a3=7ffccb5aa48c items=0 ppid=2334 pid=2386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.650000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 16 12:32:50.685175 kernel: audit: type=1327 audit(1752669170.650:242): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 16 12:32:50.647000 audit[2383]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd0b618230 a2=0 a3=7ffd0b61821c items=0 ppid=2334 pid=2383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.647000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 16 12:32:50.664000 audit[2387]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=2387 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:50.664000 audit[2387]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd34f42610 a2=0 a3=7ffd34f425fc items=0 ppid=2334 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.664000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 16 12:32:50.665000 audit[2388]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2388 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:50.665000 audit[2388]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe081cbdb0 a2=0 a3=7ffe081cbd9c items=0 ppid=2334 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.665000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 16 12:32:50.757000 audit[2389]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2389 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:50.757000 audit[2389]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe9368d660 a2=0 a3=7ffe9368d64c items=0 ppid=2334 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.757000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 16 12:32:50.767000 audit[2391]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2391 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:50.767000 audit[2391]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffafdc0f70 a2=0 a3=7fffafdc0f5c items=0 ppid=2334 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.767000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jul 16 12:32:50.773000 audit[2394]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2394 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:50.773000 audit[2394]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe75ec12c0 a2=0 a3=7ffe75ec12ac items=0 ppid=2334 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.773000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jul 16 12:32:50.776000 audit[2395]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2395 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:50.776000 audit[2395]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd5d7fec70 a2=0 a3=7ffd5d7fec5c items=0 ppid=2334 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.776000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 16 12:32:50.784000 audit[2397]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2397 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:50.784000 audit[2397]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd3fb757d0 a2=0 a3=7ffd3fb757bc items=0 ppid=2334 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.784000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 16 12:32:50.786000 audit[2398]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2398 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:50.786000 audit[2398]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7d1fb670 a2=0 a3=7ffd7d1fb65c items=0 ppid=2334 pid=2398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.786000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 16 12:32:50.790000 audit[2400]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2400 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:50.790000 audit[2400]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcde0d05a0 a2=0 a3=7ffcde0d058c items=0 ppid=2334 pid=2400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.790000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 16 12:32:50.795000 audit[2403]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2403 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:50.795000 
audit[2403]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc2fad0340 a2=0 a3=7ffc2fad032c items=0 ppid=2334 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.795000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jul 16 12:32:50.796000 audit[2404]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2404 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:50.796000 audit[2404]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe7a004eb0 a2=0 a3=7ffe7a004e9c items=0 ppid=2334 pid=2404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.796000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 16 12:32:50.800000 audit[2406]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2406 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:50.800000 audit[2406]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff99076fd0 a2=0 a3=7fff99076fbc items=0 ppid=2334 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.800000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 16 12:32:50.801000 audit[2407]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2407 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:50.801000 audit[2407]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff50f082a0 a2=0 a3=7fff50f0828c items=0 ppid=2334 pid=2407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.801000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 16 12:32:50.804000 audit[2409]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2409 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:50.804000 audit[2409]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc4058ac80 a2=0 a3=7ffc4058ac6c items=0 ppid=2334 pid=2409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.804000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 16 12:32:50.814000 audit[2412]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2412 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:50.814000 audit[2412]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffbb2931c0 a2=0 a3=7fffbb2931ac items=0 ppid=2334 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.814000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 16 12:32:50.821000 audit[2415]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2415 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:50.821000 audit[2415]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff33c6e640 a2=0 a3=7fff33c6e62c items=0 ppid=2334 pid=2415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.821000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 16 12:32:50.823000 audit[2416]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2416 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:50.823000 audit[2416]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff4f2154a0 a2=0 a3=7fff4f21548c items=0 ppid=2334 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.823000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 16 12:32:50.826000 audit[2418]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2418 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:50.826000 audit[2418]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffedf49e790 a2=0 a3=7ffedf49e77c items=0 ppid=2334 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.826000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 16 12:32:50.830000 audit[2421]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2421 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:50.830000 audit[2421]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe9f732710 a2=0 a3=7ffe9f7326fc items=0 ppid=2334 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.830000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 16 12:32:50.832000 audit[2422]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2422 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:50.832000 audit[2422]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc1f102460 a2=0 a3=7ffc1f10244c items=0 ppid=2334 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.832000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 16 12:32:50.835000 audit[2424]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2424 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 16 12:32:50.835000 audit[2424]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffdef5631e0 a2=0 a3=7ffdef5631cc items=0 ppid=2334 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.835000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 16 12:32:50.852497 kubelet[2183]: I0716 12:32:50.852432 2183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5cbtt" podStartSLOduration=1.8524064139999998 podStartE2EDuration="1.852406414s" podCreationTimestamp="2025-07-16 12:32:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 12:32:50.852336006 +0000 UTC m=+7.282377649" watchObservedRunningTime="2025-07-16 12:32:50.852406414 +0000 UTC m=+7.282448055" Jul 16 12:32:50.877000 audit[2430]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2430 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:32:50.877000 audit[2430]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe4b492200 a2=0 a3=7ffe4b4921ec items=0 ppid=2334 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.877000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:32:50.892000 audit[2430]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2430 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:32:50.892000 audit[2430]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=5508 a0=3 a1=7ffe4b492200 a2=0 a3=7ffe4b4921ec items=0 ppid=2334 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.892000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:32:50.896000 audit[2435]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2435 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:50.896000 audit[2435]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff497fcb70 a2=0 a3=7fff497fcb5c items=0 ppid=2334 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.896000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 16 12:32:50.903000 audit[2437]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2437 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:50.903000 audit[2437]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe7852f730 a2=0 a3=7ffe7852f71c items=0 ppid=2334 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.903000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jul 16 12:32:50.910000 audit[2440]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2440 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:50.910000 audit[2440]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffdc10db340 a2=0 a3=7ffdc10db32c items=0 ppid=2334 pid=2440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.910000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jul 16 12:32:50.912000 audit[2441]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2441 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:50.912000 audit[2441]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8dad6df0 a2=0 a3=7ffc8dad6ddc items=0 ppid=2334 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.912000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 16 12:32:50.917000 audit[2443]: NETFILTER_CFG table=filter:69 family=10 entries=1 
op=nft_register_rule pid=2443 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:50.917000 audit[2443]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffce952e8c0 a2=0 a3=7ffce952e8ac items=0 ppid=2334 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.917000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 16 12:32:50.919000 audit[2444]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2444 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:50.919000 audit[2444]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeee79ae10 a2=0 a3=7ffeee79adfc items=0 ppid=2334 pid=2444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.919000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 16 12:32:50.923000 audit[2446]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2446 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:50.923000 audit[2446]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdc010f430 a2=0 a3=7ffdc010f41c items=0 ppid=2334 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.923000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jul 16 12:32:50.931000 audit[2449]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2449 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:50.931000 audit[2449]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff6551b220 a2=0 a3=7fff6551b20c items=0 ppid=2334 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.931000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 16 12:32:50.933000 audit[2450]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:50.933000 audit[2450]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd267cc820 a2=0 a3=7ffd267cc80c items=0 ppid=2334 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.933000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 16 12:32:50.937000 audit[2452]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2452 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:50.937000 audit[2452]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffaa3aef90 a2=0 a3=7fffaa3aef7c items=0 ppid=2334 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.937000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 16 12:32:50.939000 audit[2453]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2453 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:50.939000 audit[2453]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffede1b9b80 a2=0 a3=7ffede1b9b6c items=0 ppid=2334 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.939000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 16 12:32:50.941000 audit[2455]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2455 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:50.941000 audit[2455]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd5429a6c0 a2=0 a3=7ffd5429a6ac items=0 ppid=2334 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.941000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 16 12:32:50.952000 audit[2458]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2458 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:50.952000 audit[2458]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffee194c6d0 a2=0 a3=7ffee194c6bc items=0 ppid=2334 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.952000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 16 12:32:50.960000 audit[2461]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2461 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:50.960000 audit[2461]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd377c9db0 a2=0 a3=7ffd377c9d9c items=0 ppid=2334 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.960000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jul 16 12:32:50.961000 audit[2462]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2462 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:50.961000 audit[2462]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcad89ec40 a2=0 a3=7ffcad89ec2c items=0 ppid=2334 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.961000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 16 12:32:50.964000 audit[2464]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2464 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:50.964000 audit[2464]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffcbe1e5600 a2=0 a3=7ffcbe1e55ec items=0 ppid=2334 pid=2464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.964000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 16 12:32:50.975000 audit[2467]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2467 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:50.975000 audit[2467]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fffa935fde0 a2=0 a3=7fffa935fdcc items=0 ppid=2334 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.975000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 16 12:32:50.976000 audit[2468]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2468 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:50.976000 audit[2468]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc5e1e1db0 a2=0 a3=7ffc5e1e1d9c items=0 ppid=2334 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.976000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 16 12:32:50.980000 
audit[2470]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2470 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:50.980000 audit[2470]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff0eb2a320 a2=0 a3=7fff0eb2a30c items=0 ppid=2334 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.980000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 16 12:32:50.982000 audit[2471]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2471 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:50.982000 audit[2471]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff59695d10 a2=0 a3=7fff59695cfc items=0 ppid=2334 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.982000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 16 12:32:50.986000 audit[2473]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2473 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:50.986000 audit[2473]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdcc6b17c0 a2=0 a3=7ffdcc6b17ac items=0 ppid=2334 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.986000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 16 12:32:50.991000 audit[2476]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2476 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 16 12:32:50.991000 audit[2476]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd62ac8540 a2=0 a3=7ffd62ac852c items=0 ppid=2334 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.991000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 16 12:32:50.994000 audit[2478]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2478 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 16 12:32:50.994000 audit[2478]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffc4ee819e0 a2=0 a3=7ffc4ee819cc items=0 ppid=2334 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.994000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 
12:32:50.995000 audit[2478]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2478 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 16 12:32:50.995000 audit[2478]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc4ee819e0 a2=0 a3=7ffc4ee819cc items=0 ppid=2334 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:32:50.995000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:32:51.741368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1539598208.mount: Deactivated successfully. Jul 16 12:32:52.681024 env[1306]: time="2025-07-16T12:32:52.680861817Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:52.686442 env[1306]: time="2025-07-16T12:32:52.686363444Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:52.689163 env[1306]: time="2025-07-16T12:32:52.689094198Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:52.690693 env[1306]: time="2025-07-16T12:32:52.690606889Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 16 12:32:52.692621 env[1306]: time="2025-07-16T12:32:52.692573907Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:32:52.698882 env[1306]: time="2025-07-16T12:32:52.698852125Z" level=info msg="CreateContainer within sandbox \"47e74cc078daff959d5ccaf0862430d21605519c8e84fded128419b3d6d44a7c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 16 12:32:52.712842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3042563068.mount: Deactivated successfully. 
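The transient mount units deactivated above (var-lib-containerd-tmpmounts-containerd\x2dmount...mount) follow systemd's unit-name escaping for mount points: "/" separators in the path become "-", a literal "-" becomes "\x2d", and ".mount" is appended. A small sketch, not from the log, that reverses that encoding (mount_unit_to_path is an illustrative helper name; requires Python 3.9+ for removesuffix):

import re

def mount_unit_to_path(unit: str) -> str:
    # Drop the ".mount" suffix, turn "-" separators back into "/", then
    # undo the \xXX escapes (e.g. "\x2d" -> "-").
    name = unit.removesuffix(".mount")
    path = "/" + name.replace("-", "/")
    return re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), path)

print(mount_unit_to_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount3042563068.mount"))
# prints: /var/lib/containerd/tmpmounts/containerd-mount3042563068

which recovers the path of the temporary mount containerd set up under /var/lib/containerd/tmpmounts while handling the operator image.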
Jul 16 12:32:52.720166 env[1306]: time="2025-07-16T12:32:52.717921026Z" level=info msg="CreateContainer within sandbox \"47e74cc078daff959d5ccaf0862430d21605519c8e84fded128419b3d6d44a7c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4785096915b484934b878435ea9acbf7ed1d7078a2ba8920d506c6909114fee9\"" Jul 16 12:32:52.720166 env[1306]: time="2025-07-16T12:32:52.718661567Z" level=info msg="StartContainer for \"4785096915b484934b878435ea9acbf7ed1d7078a2ba8920d506c6909114fee9\"" Jul 16 12:32:52.790699 env[1306]: time="2025-07-16T12:32:52.790626499Z" level=info msg="StartContainer for \"4785096915b484934b878435ea9acbf7ed1d7078a2ba8920d506c6909114fee9\" returns successfully" Jul 16 12:32:54.749204 kubelet[2183]: I0716 12:32:54.749145 2183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-tv755" podStartSLOduration=3.170559334 podStartE2EDuration="5.749115603s" podCreationTimestamp="2025-07-16 12:32:49 +0000 UTC" firstStartedPulling="2025-07-16 12:32:50.114314401 +0000 UTC m=+6.544356021" lastFinishedPulling="2025-07-16 12:32:52.692870642 +0000 UTC m=+9.122912290" observedRunningTime="2025-07-16 12:32:52.867150156 +0000 UTC m=+9.297191803" watchObservedRunningTime="2025-07-16 12:32:54.749115603 +0000 UTC m=+11.179157246" Jul 16 12:32:59.786379 sudo[1534]: pam_unix(sudo:session): session closed for user root Jul 16 12:32:59.795280 kernel: kauditd_printk_skb: 143 callbacks suppressed Jul 16 12:32:59.796167 kernel: audit: type=1106 audit(1752669179.785:290): pid=1534 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 16 12:32:59.785000 audit[1534]: USER_END pid=1534 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 16 12:32:59.785000 audit[1534]: CRED_DISP pid=1534 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 16 12:32:59.802692 kernel: audit: type=1104 audit(1752669179.785:291): pid=1534 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jul 16 12:32:59.945653 sshd[1530]: pam_unix(sshd:session): session closed for user core Jul 16 12:32:59.949000 audit[1530]: USER_END pid=1530 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:32:59.958445 kernel: audit: type=1106 audit(1752669179.949:292): pid=1530 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:32:59.958505 kernel: audit: type=1104 audit(1752669179.949:293): pid=1530 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:32:59.949000 audit[1530]: CRED_DISP pid=1530 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:32:59.961524 systemd[1]: sshd@8-10.244.89.194:22-147.75.109.163:47910.service: Deactivated successfully. Jul 16 12:32:59.963343 systemd[1]: session-9.scope: Deactivated successfully. Jul 16 12:32:59.964316 systemd-logind[1294]: Session 9 logged out. Waiting for processes to exit. Jul 16 12:32:59.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.244.89.194:22-147.75.109.163:47910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:59.968905 kernel: audit: type=1131 audit(1752669179.961:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.244.89.194:22-147.75.109.163:47910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:32:59.969340 systemd-logind[1294]: Removed session 9. 
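The "Observed pod startup duration" entries from pod_startup_latency_tracker can be cross-checked against their own fields: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to subtract the image-pull window (lastFinishedPulling minus firstStartedPulling) from that. A quick arithmetic check in Python, not part of the log, using the timestamps from the tigera-operator entry above (parse is an illustrative helper; fractions are truncated to the microseconds that datetime keeps):

from datetime import datetime

def parse(s: str) -> datetime:
    # e.g. "2025-07-16 12:32:52.692870642" -> microsecond precision
    base, _, frac = s.partition(".")
    return datetime.strptime(f"{base}.{(frac + '000000')[:6]}", "%Y-%m-%d %H:%M:%S.%f")

created = parse("2025-07-16 12:32:49")
pull_a  = parse("2025-07-16 12:32:50.114314401")
pull_b  = parse("2025-07-16 12:32:52.692870642")
running = parse("2025-07-16 12:32:54.749115603")

e2e = (running - created).total_seconds()
slo = e2e - (pull_b - pull_a).total_seconds()
print(f"podStartE2EDuration ~ {e2e:.6f}s, podStartSLOduration ~ {slo:.6f}s")
# ~5.749115s and ~3.170559s

Both results agree with the logged values (5.749115603s and 3.170559334) to microsecond precision; pods whose images were never pulled (firstStartedPulling of 0001-01-01) report identical SLO and E2E durations, as in the kube-apiserver, kube-scheduler and kube-proxy entries earlier.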
Jul 16 12:33:00.663000 audit[2561]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2561 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:00.673173 kernel: audit: type=1325 audit(1752669180.663:295): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2561 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:00.673227 kernel: audit: type=1300 audit(1752669180.663:295): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffca08489f0 a2=0 a3=7ffca08489dc items=0 ppid=2334 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:00.663000 audit[2561]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffca08489f0 a2=0 a3=7ffca08489dc items=0 ppid=2334 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:00.663000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:00.695858 kernel: audit: type=1327 audit(1752669180.663:295): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:00.695918 kernel: audit: type=1325 audit(1752669180.676:296): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2561 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:00.676000 audit[2561]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2561 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:00.700349 kernel: audit: type=1300 audit(1752669180.676:296): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffca08489f0 a2=0 a3=0 items=0 ppid=2334 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:00.676000 audit[2561]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffca08489f0 a2=0 a3=0 items=0 ppid=2334 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:00.676000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:00.702000 audit[2563]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2563 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:00.702000 audit[2563]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc7792b090 a2=0 a3=7ffc7792b07c items=0 ppid=2334 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:00.702000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:00.706000 audit[2563]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2563 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:00.706000 audit[2563]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc7792b090 a2=0 a3=0 items=0 ppid=2334 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:00.706000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:03.220000 audit[2566]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=2566 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:03.220000 audit[2566]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffea758ea40 a2=0 a3=7ffea758ea2c items=0 ppid=2334 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:03.220000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:03.250000 audit[2566]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2566 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:03.250000 audit[2566]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffea758ea40 a2=0 a3=0 items=0 ppid=2334 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:03.250000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:03.367000 audit[2568]: NETFILTER_CFG table=filter:95 family=2 entries=17 op=nft_register_rule pid=2568 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:03.367000 audit[2568]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd61c064b0 a2=0 a3=7ffd61c0649c items=0 ppid=2334 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:03.367000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:03.370000 audit[2568]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2568 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:03.370000 audit[2568]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd61c064b0 a2=0 a3=0 items=0 ppid=2334 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:03.370000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:03.531510 kubelet[2183]: I0716 12:33:03.531333 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: 
\"kubernetes.io/secret/fdbeb6c2-4be6-476f-b2b6-7f9d9460c526-typha-certs\") pod \"calico-typha-7d46c79bf9-2gpd7\" (UID: \"fdbeb6c2-4be6-476f-b2b6-7f9d9460c526\") " pod="calico-system/calico-typha-7d46c79bf9-2gpd7" Jul 16 12:33:03.532356 kubelet[2183]: I0716 12:33:03.532332 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7rzw\" (UniqueName: \"kubernetes.io/projected/fdbeb6c2-4be6-476f-b2b6-7f9d9460c526-kube-api-access-q7rzw\") pod \"calico-typha-7d46c79bf9-2gpd7\" (UID: \"fdbeb6c2-4be6-476f-b2b6-7f9d9460c526\") " pod="calico-system/calico-typha-7d46c79bf9-2gpd7" Jul 16 12:33:03.532490 kubelet[2183]: I0716 12:33:03.532477 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdbeb6c2-4be6-476f-b2b6-7f9d9460c526-tigera-ca-bundle\") pod \"calico-typha-7d46c79bf9-2gpd7\" (UID: \"fdbeb6c2-4be6-476f-b2b6-7f9d9460c526\") " pod="calico-system/calico-typha-7d46c79bf9-2gpd7" Jul 16 12:33:03.725783 env[1306]: time="2025-07-16T12:33:03.724962145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d46c79bf9-2gpd7,Uid:fdbeb6c2-4be6-476f-b2b6-7f9d9460c526,Namespace:calico-system,Attempt:0,}" Jul 16 12:33:03.735712 kubelet[2183]: I0716 12:33:03.733887 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/900ad216-892e-4200-8749-37f3149f1157-policysync\") pod \"calico-node-5qrcn\" (UID: \"900ad216-892e-4200-8749-37f3149f1157\") " pod="calico-system/calico-node-5qrcn" Jul 16 12:33:03.735712 kubelet[2183]: I0716 12:33:03.733930 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/900ad216-892e-4200-8749-37f3149f1157-cni-log-dir\") pod \"calico-node-5qrcn\" (UID: \"900ad216-892e-4200-8749-37f3149f1157\") " pod="calico-system/calico-node-5qrcn" Jul 16 12:33:03.735712 kubelet[2183]: I0716 12:33:03.733950 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/900ad216-892e-4200-8749-37f3149f1157-cni-net-dir\") pod \"calico-node-5qrcn\" (UID: \"900ad216-892e-4200-8749-37f3149f1157\") " pod="calico-system/calico-node-5qrcn" Jul 16 12:33:03.735712 kubelet[2183]: I0716 12:33:03.733967 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/900ad216-892e-4200-8749-37f3149f1157-lib-modules\") pod \"calico-node-5qrcn\" (UID: \"900ad216-892e-4200-8749-37f3149f1157\") " pod="calico-system/calico-node-5qrcn" Jul 16 12:33:03.735712 kubelet[2183]: I0716 12:33:03.733988 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/900ad216-892e-4200-8749-37f3149f1157-cni-bin-dir\") pod \"calico-node-5qrcn\" (UID: \"900ad216-892e-4200-8749-37f3149f1157\") " pod="calico-system/calico-node-5qrcn" Jul 16 12:33:03.736047 kubelet[2183]: I0716 12:33:03.734009 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/900ad216-892e-4200-8749-37f3149f1157-node-certs\") pod \"calico-node-5qrcn\" (UID: \"900ad216-892e-4200-8749-37f3149f1157\") " 
pod="calico-system/calico-node-5qrcn" Jul 16 12:33:03.736047 kubelet[2183]: I0716 12:33:03.734038 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/900ad216-892e-4200-8749-37f3149f1157-flexvol-driver-host\") pod \"calico-node-5qrcn\" (UID: \"900ad216-892e-4200-8749-37f3149f1157\") " pod="calico-system/calico-node-5qrcn" Jul 16 12:33:03.736047 kubelet[2183]: I0716 12:33:03.734060 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/900ad216-892e-4200-8749-37f3149f1157-xtables-lock\") pod \"calico-node-5qrcn\" (UID: \"900ad216-892e-4200-8749-37f3149f1157\") " pod="calico-system/calico-node-5qrcn" Jul 16 12:33:03.736047 kubelet[2183]: I0716 12:33:03.734086 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/900ad216-892e-4200-8749-37f3149f1157-tigera-ca-bundle\") pod \"calico-node-5qrcn\" (UID: \"900ad216-892e-4200-8749-37f3149f1157\") " pod="calico-system/calico-node-5qrcn" Jul 16 12:33:03.736047 kubelet[2183]: I0716 12:33:03.734104 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/900ad216-892e-4200-8749-37f3149f1157-var-lib-calico\") pod \"calico-node-5qrcn\" (UID: \"900ad216-892e-4200-8749-37f3149f1157\") " pod="calico-system/calico-node-5qrcn" Jul 16 12:33:03.736203 kubelet[2183]: I0716 12:33:03.734123 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/900ad216-892e-4200-8749-37f3149f1157-var-run-calico\") pod \"calico-node-5qrcn\" (UID: \"900ad216-892e-4200-8749-37f3149f1157\") " pod="calico-system/calico-node-5qrcn" Jul 16 12:33:03.736203 kubelet[2183]: I0716 12:33:03.734142 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x6s5\" (UniqueName: \"kubernetes.io/projected/900ad216-892e-4200-8749-37f3149f1157-kube-api-access-2x6s5\") pod \"calico-node-5qrcn\" (UID: \"900ad216-892e-4200-8749-37f3149f1157\") " pod="calico-system/calico-node-5qrcn" Jul 16 12:33:03.754950 env[1306]: time="2025-07-16T12:33:03.754551121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 16 12:33:03.755234 env[1306]: time="2025-07-16T12:33:03.755203633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 16 12:33:03.755350 env[1306]: time="2025-07-16T12:33:03.755327662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 16 12:33:03.755768 env[1306]: time="2025-07-16T12:33:03.755720014Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/200ca28d9523fdb4719ac655b1ca4ceecb3f61a285cc3343742183d86a385c37 pid=2577 runtime=io.containerd.runc.v2 Jul 16 12:33:03.833416 kubelet[2183]: E0716 12:33:03.829991 2183 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b22q6" podUID="58f808a6-a7a4-4400-b1f3-561a7728fef5" Jul 16 12:33:03.848208 kubelet[2183]: E0716 12:33:03.848166 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.848208 kubelet[2183]: W0716 12:33:03.848204 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.848407 kubelet[2183]: E0716 12:33:03.848243 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:03.848617 kubelet[2183]: E0716 12:33:03.848597 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.848696 kubelet[2183]: W0716 12:33:03.848618 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.848696 kubelet[2183]: E0716 12:33:03.848635 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:03.848927 kubelet[2183]: E0716 12:33:03.848905 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.848977 kubelet[2183]: W0716 12:33:03.848926 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.848977 kubelet[2183]: E0716 12:33:03.848943 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:03.849195 kubelet[2183]: E0716 12:33:03.849177 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.849258 kubelet[2183]: W0716 12:33:03.849195 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.849258 kubelet[2183]: E0716 12:33:03.849211 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 12:33:03.850697 kubelet[2183]: E0716 12:33:03.849452 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.850697 kubelet[2183]: W0716 12:33:03.849471 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.850697 kubelet[2183]: E0716 12:33:03.849486 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:03.850697 kubelet[2183]: E0716 12:33:03.849720 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.850697 kubelet[2183]: W0716 12:33:03.849746 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.850697 kubelet[2183]: E0716 12:33:03.849761 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:03.850697 kubelet[2183]: E0716 12:33:03.849984 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.850697 kubelet[2183]: W0716 12:33:03.849995 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.850697 kubelet[2183]: E0716 12:33:03.850008 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:03.851039 kubelet[2183]: E0716 12:33:03.850899 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.851039 kubelet[2183]: W0716 12:33:03.850917 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.851039 kubelet[2183]: E0716 12:33:03.850938 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 12:33:03.851262 kubelet[2183]: E0716 12:33:03.851238 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.851262 kubelet[2183]: W0716 12:33:03.851259 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.851479 kubelet[2183]: E0716 12:33:03.851462 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.851479 kubelet[2183]: W0716 12:33:03.851474 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.851556 kubelet[2183]: E0716 12:33:03.851485 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:03.851556 kubelet[2183]: E0716 12:33:03.851507 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:03.851648 kubelet[2183]: E0716 12:33:03.851636 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.851719 kubelet[2183]: W0716 12:33:03.851651 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.851719 kubelet[2183]: E0716 12:33:03.851660 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:03.851838 kubelet[2183]: E0716 12:33:03.851826 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.851838 kubelet[2183]: W0716 12:33:03.851836 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.851917 kubelet[2183]: E0716 12:33:03.851845 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:03.851981 kubelet[2183]: E0716 12:33:03.851969 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.851981 kubelet[2183]: W0716 12:33:03.851979 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.852060 kubelet[2183]: E0716 12:33:03.851987 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 12:33:03.852125 kubelet[2183]: E0716 12:33:03.852114 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.852125 kubelet[2183]: W0716 12:33:03.852124 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.852196 kubelet[2183]: E0716 12:33:03.852132 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:03.852264 kubelet[2183]: E0716 12:33:03.852253 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.852264 kubelet[2183]: W0716 12:33:03.852263 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.852340 kubelet[2183]: E0716 12:33:03.852270 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:03.852397 kubelet[2183]: E0716 12:33:03.852387 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.852397 kubelet[2183]: W0716 12:33:03.852397 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.852469 kubelet[2183]: E0716 12:33:03.852404 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:03.852559 kubelet[2183]: E0716 12:33:03.852544 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.852559 kubelet[2183]: W0716 12:33:03.852552 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.852623 kubelet[2183]: E0716 12:33:03.852560 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:03.854691 kubelet[2183]: E0716 12:33:03.852765 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.854691 kubelet[2183]: W0716 12:33:03.852778 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.854691 kubelet[2183]: E0716 12:33:03.852787 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 12:33:03.854691 kubelet[2183]: E0716 12:33:03.852916 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.854691 kubelet[2183]: W0716 12:33:03.852922 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.854691 kubelet[2183]: E0716 12:33:03.852930 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:03.854691 kubelet[2183]: E0716 12:33:03.853047 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.854691 kubelet[2183]: W0716 12:33:03.853053 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.854691 kubelet[2183]: E0716 12:33:03.853061 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:03.854691 kubelet[2183]: E0716 12:33:03.853185 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.855076 kubelet[2183]: W0716 12:33:03.853191 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.855076 kubelet[2183]: E0716 12:33:03.853198 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:03.869476 kubelet[2183]: E0716 12:33:03.869451 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.869655 kubelet[2183]: W0716 12:33:03.869639 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.869773 kubelet[2183]: E0716 12:33:03.869760 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 12:33:03.918298 env[1306]: time="2025-07-16T12:33:03.917832844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5qrcn,Uid:900ad216-892e-4200-8749-37f3149f1157,Namespace:calico-system,Attempt:0,}" Jul 16 12:33:03.925982 env[1306]: time="2025-07-16T12:33:03.925909192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d46c79bf9-2gpd7,Uid:fdbeb6c2-4be6-476f-b2b6-7f9d9460c526,Namespace:calico-system,Attempt:0,} returns sandbox id \"200ca28d9523fdb4719ac655b1ca4ceecb3f61a285cc3343742183d86a385c37\"" Jul 16 12:33:03.930305 env[1306]: time="2025-07-16T12:33:03.927402896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 16 12:33:03.935645 kubelet[2183]: E0716 12:33:03.935613 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.935645 kubelet[2183]: W0716 12:33:03.935634 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.935645 kubelet[2183]: E0716 12:33:03.935654 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:03.935965 kubelet[2183]: I0716 12:33:03.935695 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/58f808a6-a7a4-4400-b1f3-561a7728fef5-varrun\") pod \"csi-node-driver-b22q6\" (UID: \"58f808a6-a7a4-4400-b1f3-561a7728fef5\") " pod="calico-system/csi-node-driver-b22q6" Jul 16 12:33:03.935965 kubelet[2183]: E0716 12:33:03.935904 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.935965 kubelet[2183]: W0716 12:33:03.935914 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.935965 kubelet[2183]: E0716 12:33:03.935926 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:03.935965 kubelet[2183]: I0716 12:33:03.935944 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/58f808a6-a7a4-4400-b1f3-561a7728fef5-kubelet-dir\") pod \"csi-node-driver-b22q6\" (UID: \"58f808a6-a7a4-4400-b1f3-561a7728fef5\") " pod="calico-system/csi-node-driver-b22q6" Jul 16 12:33:03.936243 kubelet[2183]: E0716 12:33:03.936090 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.936243 kubelet[2183]: W0716 12:33:03.936098 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.936243 kubelet[2183]: E0716 12:33:03.936107 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 12:33:03.936243 kubelet[2183]: I0716 12:33:03.936121 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z46q\" (UniqueName: \"kubernetes.io/projected/58f808a6-a7a4-4400-b1f3-561a7728fef5-kube-api-access-7z46q\") pod \"csi-node-driver-b22q6\" (UID: \"58f808a6-a7a4-4400-b1f3-561a7728fef5\") " pod="calico-system/csi-node-driver-b22q6" Jul 16 12:33:03.936467 kubelet[2183]: E0716 12:33:03.936258 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.936467 kubelet[2183]: W0716 12:33:03.936265 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.936467 kubelet[2183]: E0716 12:33:03.936273 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:03.936467 kubelet[2183]: I0716 12:33:03.936288 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/58f808a6-a7a4-4400-b1f3-561a7728fef5-registration-dir\") pod \"csi-node-driver-b22q6\" (UID: \"58f808a6-a7a4-4400-b1f3-561a7728fef5\") " pod="calico-system/csi-node-driver-b22q6" Jul 16 12:33:03.936467 kubelet[2183]: E0716 12:33:03.936430 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.936467 kubelet[2183]: W0716 12:33:03.936438 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.936467 kubelet[2183]: E0716 12:33:03.936446 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:03.936467 kubelet[2183]: I0716 12:33:03.936459 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/58f808a6-a7a4-4400-b1f3-561a7728fef5-socket-dir\") pod \"csi-node-driver-b22q6\" (UID: \"58f808a6-a7a4-4400-b1f3-561a7728fef5\") " pod="calico-system/csi-node-driver-b22q6" Jul 16 12:33:03.937435 kubelet[2183]: E0716 12:33:03.936604 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.937435 kubelet[2183]: W0716 12:33:03.936612 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.937435 kubelet[2183]: E0716 12:33:03.936620 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 12:33:03.937435 kubelet[2183]: E0716 12:33:03.936776 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.937435 kubelet[2183]: W0716 12:33:03.936784 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.937435 kubelet[2183]: E0716 12:33:03.936793 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:03.937435 kubelet[2183]: E0716 12:33:03.936948 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.937435 kubelet[2183]: W0716 12:33:03.936955 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.937435 kubelet[2183]: E0716 12:33:03.936967 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:03.939828 kubelet[2183]: E0716 12:33:03.939807 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.939828 kubelet[2183]: W0716 12:33:03.939826 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.940002 kubelet[2183]: E0716 12:33:03.939848 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:03.940067 kubelet[2183]: E0716 12:33:03.940035 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.940067 kubelet[2183]: W0716 12:33:03.940042 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.940067 kubelet[2183]: E0716 12:33:03.940051 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:03.940260 kubelet[2183]: E0716 12:33:03.940195 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.940260 kubelet[2183]: W0716 12:33:03.940201 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.940260 kubelet[2183]: E0716 12:33:03.940209 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 12:33:03.940430 kubelet[2183]: E0716 12:33:03.940335 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.940430 kubelet[2183]: W0716 12:33:03.940342 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.940430 kubelet[2183]: E0716 12:33:03.940353 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:03.940831 kubelet[2183]: E0716 12:33:03.940473 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.940831 kubelet[2183]: W0716 12:33:03.940480 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.940831 kubelet[2183]: E0716 12:33:03.940487 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:03.940831 kubelet[2183]: E0716 12:33:03.940619 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.940831 kubelet[2183]: W0716 12:33:03.940626 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.940831 kubelet[2183]: E0716 12:33:03.940634 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:03.940831 kubelet[2183]: E0716 12:33:03.940808 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:03.940831 kubelet[2183]: W0716 12:33:03.940815 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:03.940831 kubelet[2183]: E0716 12:33:03.940824 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:03.947162 env[1306]: time="2025-07-16T12:33:03.945358477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 16 12:33:03.947162 env[1306]: time="2025-07-16T12:33:03.945457595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 16 12:33:03.947162 env[1306]: time="2025-07-16T12:33:03.945481506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 16 12:33:03.947162 env[1306]: time="2025-07-16T12:33:03.945701361Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6ed3e49e2abc74b9d2f27dc6e0693ddfc0424b9597c2a50efed31c2e219be671 pid=2664 runtime=io.containerd.runc.v2 Jul 16 12:33:04.038985 kubelet[2183]: E0716 12:33:04.038373 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:04.038985 kubelet[2183]: W0716 12:33:04.038399 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:04.038985 kubelet[2183]: E0716 12:33:04.038431 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:04.038985 kubelet[2183]: E0716 12:33:04.038933 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:04.038985 kubelet[2183]: W0716 12:33:04.038952 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:04.038985 kubelet[2183]: E0716 12:33:04.038966 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:04.039347 kubelet[2183]: E0716 12:33:04.039193 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:04.039347 kubelet[2183]: W0716 12:33:04.039202 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:04.039347 kubelet[2183]: E0716 12:33:04.039212 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:04.039452 kubelet[2183]: E0716 12:33:04.039438 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:04.039452 kubelet[2183]: W0716 12:33:04.039451 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:04.039521 kubelet[2183]: E0716 12:33:04.039468 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 12:33:04.041135 kubelet[2183]: E0716 12:33:04.040914 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:04.041135 kubelet[2183]: W0716 12:33:04.040933 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:04.041135 kubelet[2183]: E0716 12:33:04.040951 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:04.042032 kubelet[2183]: E0716 12:33:04.041169 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:04.042032 kubelet[2183]: W0716 12:33:04.041177 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:04.042032 kubelet[2183]: E0716 12:33:04.041190 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:04.042032 kubelet[2183]: E0716 12:33:04.041468 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:04.042032 kubelet[2183]: W0716 12:33:04.041476 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:04.042472 kubelet[2183]: E0716 12:33:04.042293 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:04.042565 kubelet[2183]: E0716 12:33:04.042528 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:04.042565 kubelet[2183]: W0716 12:33:04.042538 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:04.042636 kubelet[2183]: E0716 12:33:04.042574 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 12:33:04.045648 env[1306]: time="2025-07-16T12:33:04.043858576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5qrcn,Uid:900ad216-892e-4200-8749-37f3149f1157,Namespace:calico-system,Attempt:0,} returns sandbox id \"6ed3e49e2abc74b9d2f27dc6e0693ddfc0424b9597c2a50efed31c2e219be671\"" Jul 16 12:33:04.045992 kubelet[2183]: E0716 12:33:04.045714 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:04.045992 kubelet[2183]: W0716 12:33:04.045761 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:04.045992 kubelet[2183]: E0716 12:33:04.045952 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:04.046303 kubelet[2183]: E0716 12:33:04.046143 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:04.046303 kubelet[2183]: W0716 12:33:04.046164 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:04.046303 kubelet[2183]: E0716 12:33:04.046218 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:04.051127 kubelet[2183]: E0716 12:33:04.046636 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:04.051127 kubelet[2183]: W0716 12:33:04.046690 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:04.051127 kubelet[2183]: E0716 12:33:04.046865 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:04.051127 kubelet[2183]: W0716 12:33:04.046873 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:04.051127 kubelet[2183]: E0716 12:33:04.047039 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:04.051127 kubelet[2183]: E0716 12:33:04.047112 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 12:33:04.051127 kubelet[2183]: E0716 12:33:04.047356 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:04.051127 kubelet[2183]: W0716 12:33:04.047368 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:04.051127 kubelet[2183]: E0716 12:33:04.047387 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:04.051127 kubelet[2183]: E0716 12:33:04.047557 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:04.056988 kubelet[2183]: W0716 12:33:04.047565 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:04.056988 kubelet[2183]: E0716 12:33:04.047577 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:04.056988 kubelet[2183]: E0716 12:33:04.047732 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:04.056988 kubelet[2183]: W0716 12:33:04.047739 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:04.056988 kubelet[2183]: E0716 12:33:04.047816 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:04.056988 kubelet[2183]: E0716 12:33:04.047925 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:04.056988 kubelet[2183]: W0716 12:33:04.047931 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:04.056988 kubelet[2183]: E0716 12:33:04.047988 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:04.056988 kubelet[2183]: E0716 12:33:04.048085 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:04.056988 kubelet[2183]: W0716 12:33:04.048093 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:04.057315 kubelet[2183]: E0716 12:33:04.048104 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 12:33:04.057315 kubelet[2183]: E0716 12:33:04.048251 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:04.057315 kubelet[2183]: W0716 12:33:04.048258 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:04.057315 kubelet[2183]: E0716 12:33:04.048316 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:04.057315 kubelet[2183]: E0716 12:33:04.048412 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:04.057315 kubelet[2183]: W0716 12:33:04.048419 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:04.057315 kubelet[2183]: E0716 12:33:04.048511 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:04.057315 kubelet[2183]: E0716 12:33:04.048601 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:04.057315 kubelet[2183]: W0716 12:33:04.048607 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:04.057315 kubelet[2183]: E0716 12:33:04.048618 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:04.057719 kubelet[2183]: E0716 12:33:04.048817 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:04.057719 kubelet[2183]: W0716 12:33:04.048826 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:04.057719 kubelet[2183]: E0716 12:33:04.048838 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:04.057719 kubelet[2183]: E0716 12:33:04.049012 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:04.057719 kubelet[2183]: W0716 12:33:04.049019 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:04.057719 kubelet[2183]: E0716 12:33:04.049031 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 12:33:04.057719 kubelet[2183]: E0716 12:33:04.049199 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:04.057719 kubelet[2183]: W0716 12:33:04.049212 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:04.057719 kubelet[2183]: E0716 12:33:04.049223 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:04.057719 kubelet[2183]: E0716 12:33:04.052822 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:04.058112 kubelet[2183]: W0716 12:33:04.052845 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:04.058112 kubelet[2183]: E0716 12:33:04.052859 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:04.058112 kubelet[2183]: E0716 12:33:04.053100 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:04.058112 kubelet[2183]: W0716 12:33:04.053109 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:04.058112 kubelet[2183]: E0716 12:33:04.053118 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:04.069023 kubelet[2183]: E0716 12:33:04.068986 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:04.069262 kubelet[2183]: W0716 12:33:04.069235 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:04.069461 kubelet[2183]: E0716 12:33:04.069438 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 12:33:04.411000 audit[2726]: NETFILTER_CFG table=filter:97 family=2 entries=20 op=nft_register_rule pid=2726 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:04.411000 audit[2726]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd21cbdfb0 a2=0 a3=7ffd21cbdf9c items=0 ppid=2334 pid=2726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:04.411000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:04.419000 audit[2726]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2726 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:04.419000 audit[2726]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd21cbdfb0 a2=0 a3=0 items=0 ppid=2334 pid=2726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:04.419000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:05.719459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2438540657.mount: Deactivated successfully. Jul 16 12:33:05.798957 kubelet[2183]: E0716 12:33:05.798825 2183 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b22q6" podUID="58f808a6-a7a4-4400-b1f3-561a7728fef5" Jul 16 12:33:07.089621 env[1306]: time="2025-07-16T12:33:07.089574111Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:07.092820 env[1306]: time="2025-07-16T12:33:07.092787943Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:07.095167 env[1306]: time="2025-07-16T12:33:07.095141316Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:07.097365 env[1306]: time="2025-07-16T12:33:07.097339186Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:07.098686 env[1306]: time="2025-07-16T12:33:07.098619506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 16 12:33:07.102794 env[1306]: time="2025-07-16T12:33:07.101970484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 16 12:33:07.118007 env[1306]: time="2025-07-16T12:33:07.117963603Z" level=info msg="CreateContainer within sandbox 
\"200ca28d9523fdb4719ac655b1ca4ceecb3f61a285cc3343742183d86a385c37\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 16 12:33:07.126975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount273719014.mount: Deactivated successfully. Jul 16 12:33:07.131889 env[1306]: time="2025-07-16T12:33:07.131837918Z" level=info msg="CreateContainer within sandbox \"200ca28d9523fdb4719ac655b1ca4ceecb3f61a285cc3343742183d86a385c37\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"63f4088b5db744cf63a64c91060d7c0dcc5e0f41c8db83ffb40de8805ecc765d\"" Jul 16 12:33:07.133998 env[1306]: time="2025-07-16T12:33:07.133973412Z" level=info msg="StartContainer for \"63f4088b5db744cf63a64c91060d7c0dcc5e0f41c8db83ffb40de8805ecc765d\"" Jul 16 12:33:07.216662 env[1306]: time="2025-07-16T12:33:07.216605424Z" level=info msg="StartContainer for \"63f4088b5db744cf63a64c91060d7c0dcc5e0f41c8db83ffb40de8805ecc765d\" returns successfully" Jul 16 12:33:07.797102 kubelet[2183]: E0716 12:33:07.796306 2183 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b22q6" podUID="58f808a6-a7a4-4400-b1f3-561a7728fef5" Jul 16 12:33:07.909712 kubelet[2183]: I0716 12:33:07.909124 2183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7d46c79bf9-2gpd7" podStartSLOduration=1.735797955 podStartE2EDuration="4.90909379s" podCreationTimestamp="2025-07-16 12:33:03 +0000 UTC" firstStartedPulling="2025-07-16 12:33:03.926937921 +0000 UTC m=+20.356979540" lastFinishedPulling="2025-07-16 12:33:07.100233743 +0000 UTC m=+23.530275375" observedRunningTime="2025-07-16 12:33:07.90758793 +0000 UTC m=+24.337629590" watchObservedRunningTime="2025-07-16 12:33:07.90909379 +0000 UTC m=+24.339135468" Jul 16 12:33:07.991637 kubelet[2183]: E0716 12:33:07.991592 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:07.991908 kubelet[2183]: W0716 12:33:07.991884 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:07.992258 kubelet[2183]: E0716 12:33:07.992001 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:07.992487 kubelet[2183]: E0716 12:33:07.992351 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:07.992487 kubelet[2183]: W0716 12:33:07.992362 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:07.992487 kubelet[2183]: E0716 12:33:07.992376 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 12:33:07.992823 kubelet[2183]: E0716 12:33:07.992804 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:07.992919 kubelet[2183]: W0716 12:33:07.992906 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:07.993009 kubelet[2183]: E0716 12:33:07.992997 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:07.993514 kubelet[2183]: E0716 12:33:07.993499 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:07.993692 kubelet[2183]: W0716 12:33:07.993665 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:07.993791 kubelet[2183]: E0716 12:33:07.993779 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:07.994102 kubelet[2183]: E0716 12:33:07.994089 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:07.994189 kubelet[2183]: W0716 12:33:07.994176 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:07.994268 kubelet[2183]: E0716 12:33:07.994256 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:07.994711 kubelet[2183]: E0716 12:33:07.994697 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:07.994804 kubelet[2183]: W0716 12:33:07.994792 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:07.994878 kubelet[2183]: E0716 12:33:07.994867 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:07.995442 kubelet[2183]: E0716 12:33:07.995423 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:07.995541 kubelet[2183]: W0716 12:33:07.995528 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:07.995822 kubelet[2183]: E0716 12:33:07.995602 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 12:33:07.995932 kubelet[2183]: E0716 12:33:07.995921 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:07.996009 kubelet[2183]: W0716 12:33:07.995997 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:07.996087 kubelet[2183]: E0716 12:33:07.996076 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:07.996703 kubelet[2183]: E0716 12:33:07.996689 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:07.996803 kubelet[2183]: W0716 12:33:07.996790 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:07.996873 kubelet[2183]: E0716 12:33:07.996862 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:07.997601 kubelet[2183]: E0716 12:33:07.997587 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:07.997763 kubelet[2183]: W0716 12:33:07.997722 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:07.997871 kubelet[2183]: E0716 12:33:07.997858 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:07.998170 kubelet[2183]: E0716 12:33:07.998158 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:07.998256 kubelet[2183]: W0716 12:33:07.998244 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:07.998352 kubelet[2183]: E0716 12:33:07.998340 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:07.998620 kubelet[2183]: E0716 12:33:07.998608 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:07.998722 kubelet[2183]: W0716 12:33:07.998710 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:07.998802 kubelet[2183]: E0716 12:33:07.998790 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 12:33:07.999605 kubelet[2183]: E0716 12:33:07.999585 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:07.999736 kubelet[2183]: W0716 12:33:07.999722 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:07.999814 kubelet[2183]: E0716 12:33:07.999802 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:08.000076 kubelet[2183]: E0716 12:33:08.000064 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.000169 kubelet[2183]: W0716 12:33:08.000158 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.000252 kubelet[2183]: E0716 12:33:08.000237 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:08.000895 kubelet[2183]: E0716 12:33:08.000881 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.001029 kubelet[2183]: W0716 12:33:08.001014 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.001112 kubelet[2183]: E0716 12:33:08.001100 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:08.078074 kubelet[2183]: E0716 12:33:08.077837 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.078074 kubelet[2183]: W0716 12:33:08.077907 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.078074 kubelet[2183]: E0716 12:33:08.078016 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:08.078602 kubelet[2183]: E0716 12:33:08.078552 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.078602 kubelet[2183]: W0716 12:33:08.078573 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.078602 kubelet[2183]: E0716 12:33:08.078599 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 12:33:08.080552 kubelet[2183]: E0716 12:33:08.080451 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.080947 kubelet[2183]: W0716 12:33:08.080920 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.081416 kubelet[2183]: E0716 12:33:08.081164 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:08.082018 kubelet[2183]: E0716 12:33:08.081997 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.082221 kubelet[2183]: W0716 12:33:08.082198 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.082701 kubelet[2183]: E0716 12:33:08.082404 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:08.082981 kubelet[2183]: E0716 12:33:08.082948 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.083142 kubelet[2183]: W0716 12:33:08.083120 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.083290 kubelet[2183]: E0716 12:33:08.083269 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:08.083794 kubelet[2183]: E0716 12:33:08.083773 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.083974 kubelet[2183]: W0716 12:33:08.083935 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.084124 kubelet[2183]: E0716 12:33:08.084104 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:08.084580 kubelet[2183]: E0716 12:33:08.084561 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.084796 kubelet[2183]: W0716 12:33:08.084773 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.084935 kubelet[2183]: E0716 12:33:08.084914 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 12:33:08.085449 kubelet[2183]: E0716 12:33:08.085427 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.085602 kubelet[2183]: W0716 12:33:08.085579 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.085775 kubelet[2183]: E0716 12:33:08.085754 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:08.086364 kubelet[2183]: E0716 12:33:08.086343 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.086521 kubelet[2183]: W0716 12:33:08.086496 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.086661 kubelet[2183]: E0716 12:33:08.086638 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:08.087278 kubelet[2183]: E0716 12:33:08.087243 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.087393 kubelet[2183]: W0716 12:33:08.087281 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.087393 kubelet[2183]: E0716 12:33:08.087327 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:08.088060 kubelet[2183]: E0716 12:33:08.088029 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.088197 kubelet[2183]: W0716 12:33:08.088062 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.088320 kubelet[2183]: E0716 12:33:08.088293 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 12:33:08.088510 kubelet[2183]: E0716 12:33:08.088485 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.088598 kubelet[2183]: W0716 12:33:08.088513 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.088948 kubelet[2183]: E0716 12:33:08.088921 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.089065 kubelet[2183]: W0716 12:33:08.088950 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.089065 kubelet[2183]: E0716 12:33:08.089000 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:08.089445 kubelet[2183]: E0716 12:33:08.089394 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:08.089649 kubelet[2183]: E0716 12:33:08.089435 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.089816 kubelet[2183]: W0716 12:33:08.089793 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.089988 kubelet[2183]: E0716 12:33:08.089953 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:08.090408 kubelet[2183]: E0716 12:33:08.090371 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.090533 kubelet[2183]: W0716 12:33:08.090407 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.090533 kubelet[2183]: E0716 12:33:08.090437 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:08.091325 kubelet[2183]: E0716 12:33:08.091301 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.091395 kubelet[2183]: W0716 12:33:08.091329 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.091395 kubelet[2183]: E0716 12:33:08.091370 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 12:33:08.091907 kubelet[2183]: E0716 12:33:08.091894 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.092016 kubelet[2183]: W0716 12:33:08.092001 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.092093 kubelet[2183]: E0716 12:33:08.092080 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:08.092448 kubelet[2183]: E0716 12:33:08.092438 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.092522 kubelet[2183]: W0716 12:33:08.092510 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.092598 kubelet[2183]: E0716 12:33:08.092587 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:08.815922 env[1306]: time="2025-07-16T12:33:08.815836979Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:08.817001 env[1306]: time="2025-07-16T12:33:08.816857035Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:08.818510 env[1306]: time="2025-07-16T12:33:08.818466897Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:08.820168 env[1306]: time="2025-07-16T12:33:08.820119146Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:08.821170 env[1306]: time="2025-07-16T12:33:08.820661629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 16 12:33:08.825709 env[1306]: time="2025-07-16T12:33:08.824529791Z" level=info msg="CreateContainer within sandbox \"6ed3e49e2abc74b9d2f27dc6e0693ddfc0424b9597c2a50efed31c2e219be671\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 16 12:33:08.840547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3545214691.mount: Deactivated successfully. 
Jul 16 12:33:08.853371 env[1306]: time="2025-07-16T12:33:08.853286710Z" level=info msg="CreateContainer within sandbox \"6ed3e49e2abc74b9d2f27dc6e0693ddfc0424b9597c2a50efed31c2e219be671\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a0e892d719207e9dfa591c70cb7d3b2cfbb6fc59e457f10223bb1883192f1932\"" Jul 16 12:33:08.854922 env[1306]: time="2025-07-16T12:33:08.854840858Z" level=info msg="StartContainer for \"a0e892d719207e9dfa591c70cb7d3b2cfbb6fc59e457f10223bb1883192f1932\"" Jul 16 12:33:08.894086 kubelet[2183]: I0716 12:33:08.893666 2183 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 12:33:08.910448 kubelet[2183]: E0716 12:33:08.910255 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.910448 kubelet[2183]: W0716 12:33:08.910278 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.910448 kubelet[2183]: E0716 12:33:08.910302 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:08.910864 kubelet[2183]: E0716 12:33:08.910745 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.910864 kubelet[2183]: W0716 12:33:08.910758 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.910864 kubelet[2183]: E0716 12:33:08.910771 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:08.911724 kubelet[2183]: E0716 12:33:08.911038 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.911724 kubelet[2183]: W0716 12:33:08.911049 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.911724 kubelet[2183]: E0716 12:33:08.911061 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:08.911724 kubelet[2183]: E0716 12:33:08.911261 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.911724 kubelet[2183]: W0716 12:33:08.911269 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.911724 kubelet[2183]: E0716 12:33:08.911278 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 12:33:08.911724 kubelet[2183]: E0716 12:33:08.911450 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.911724 kubelet[2183]: W0716 12:33:08.911458 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.911724 kubelet[2183]: E0716 12:33:08.911476 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:08.912201 kubelet[2183]: E0716 12:33:08.912079 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.912201 kubelet[2183]: W0716 12:33:08.912090 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.912201 kubelet[2183]: E0716 12:33:08.912102 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:08.912485 kubelet[2183]: E0716 12:33:08.912376 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.912485 kubelet[2183]: W0716 12:33:08.912385 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.912485 kubelet[2183]: E0716 12:33:08.912395 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:08.912930 kubelet[2183]: E0716 12:33:08.912803 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.912930 kubelet[2183]: W0716 12:33:08.912814 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.912930 kubelet[2183]: E0716 12:33:08.912826 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:08.913217 kubelet[2183]: E0716 12:33:08.913114 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.913217 kubelet[2183]: W0716 12:33:08.913124 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.913217 kubelet[2183]: E0716 12:33:08.913135 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 12:33:08.913495 kubelet[2183]: E0716 12:33:08.913393 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.913495 kubelet[2183]: W0716 12:33:08.913403 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.913495 kubelet[2183]: E0716 12:33:08.913413 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:08.913684 kubelet[2183]: E0716 12:33:08.913659 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.913839 kubelet[2183]: W0716 12:33:08.913747 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.913839 kubelet[2183]: E0716 12:33:08.913761 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:08.914102 kubelet[2183]: E0716 12:33:08.914092 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.914182 kubelet[2183]: W0716 12:33:08.914170 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.914249 kubelet[2183]: E0716 12:33:08.914238 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:08.914491 kubelet[2183]: E0716 12:33:08.914481 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.914564 kubelet[2183]: W0716 12:33:08.914553 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.914637 kubelet[2183]: E0716 12:33:08.914626 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:08.914961 kubelet[2183]: E0716 12:33:08.914950 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.915054 kubelet[2183]: W0716 12:33:08.915041 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.915122 kubelet[2183]: E0716 12:33:08.915111 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 12:33:08.915447 kubelet[2183]: E0716 12:33:08.915437 2183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 12:33:08.915539 kubelet[2183]: W0716 12:33:08.915527 2183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 12:33:08.915610 kubelet[2183]: E0716 12:33:08.915599 2183 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 12:33:08.937383 env[1306]: time="2025-07-16T12:33:08.937336103Z" level=info msg="StartContainer for \"a0e892d719207e9dfa591c70cb7d3b2cfbb6fc59e457f10223bb1883192f1932\" returns successfully" Jul 16 12:33:09.001044 env[1306]: time="2025-07-16T12:33:09.000972448Z" level=info msg="shim disconnected" id=a0e892d719207e9dfa591c70cb7d3b2cfbb6fc59e457f10223bb1883192f1932 Jul 16 12:33:09.001044 env[1306]: time="2025-07-16T12:33:09.001026027Z" level=warning msg="cleaning up after shim disconnected" id=a0e892d719207e9dfa591c70cb7d3b2cfbb6fc59e457f10223bb1883192f1932 namespace=k8s.io Jul 16 12:33:09.001044 env[1306]: time="2025-07-16T12:33:09.001036734Z" level=info msg="cleaning up dead shim" Jul 16 12:33:09.010159 env[1306]: time="2025-07-16T12:33:09.010114955Z" level=warning msg="cleanup warnings time=\"2025-07-16T12:33:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2868 runtime=io.containerd.runc.v2\n" Jul 16 12:33:09.112277 systemd[1]: run-containerd-runc-k8s.io-a0e892d719207e9dfa591c70cb7d3b2cfbb6fc59e457f10223bb1883192f1932-runc.zQctXo.mount: Deactivated successfully. Jul 16 12:33:09.112653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0e892d719207e9dfa591c70cb7d3b2cfbb6fc59e457f10223bb1883192f1932-rootfs.mount: Deactivated successfully. 
Jul 16 12:33:09.797096 kubelet[2183]: E0716 12:33:09.797007 2183 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b22q6" podUID="58f808a6-a7a4-4400-b1f3-561a7728fef5" Jul 16 12:33:09.903185 env[1306]: time="2025-07-16T12:33:09.902772346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 16 12:33:11.799153 kubelet[2183]: E0716 12:33:11.798932 2183 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b22q6" podUID="58f808a6-a7a4-4400-b1f3-561a7728fef5" Jul 16 12:33:13.796929 kubelet[2183]: E0716 12:33:13.796868 2183 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b22q6" podUID="58f808a6-a7a4-4400-b1f3-561a7728fef5" Jul 16 12:33:14.569305 env[1306]: time="2025-07-16T12:33:14.569204720Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:14.572211 env[1306]: time="2025-07-16T12:33:14.572151110Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:14.573966 env[1306]: time="2025-07-16T12:33:14.573918680Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:14.575527 env[1306]: time="2025-07-16T12:33:14.575484324Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:14.576102 env[1306]: time="2025-07-16T12:33:14.576052646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 16 12:33:14.580267 env[1306]: time="2025-07-16T12:33:14.579569989Z" level=info msg="CreateContainer within sandbox \"6ed3e49e2abc74b9d2f27dc6e0693ddfc0424b9597c2a50efed31c2e219be671\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 16 12:33:14.590553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4219997705.mount: Deactivated successfully. 
Jul 16 12:33:14.594959 env[1306]: time="2025-07-16T12:33:14.594918715Z" level=info msg="CreateContainer within sandbox \"6ed3e49e2abc74b9d2f27dc6e0693ddfc0424b9597c2a50efed31c2e219be671\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b6b5c34a86495f1f25a9e95b89bbbd56fed136d32691462272888b4ca4774aa4\"" Jul 16 12:33:14.596936 env[1306]: time="2025-07-16T12:33:14.596903843Z" level=info msg="StartContainer for \"b6b5c34a86495f1f25a9e95b89bbbd56fed136d32691462272888b4ca4774aa4\"" Jul 16 12:33:14.677152 env[1306]: time="2025-07-16T12:33:14.677110370Z" level=info msg="StartContainer for \"b6b5c34a86495f1f25a9e95b89bbbd56fed136d32691462272888b4ca4774aa4\" returns successfully" Jul 16 12:33:15.348701 env[1306]: time="2025-07-16T12:33:15.348626255Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 16 12:33:15.377131 env[1306]: time="2025-07-16T12:33:15.377082451Z" level=info msg="shim disconnected" id=b6b5c34a86495f1f25a9e95b89bbbd56fed136d32691462272888b4ca4774aa4 Jul 16 12:33:15.377131 env[1306]: time="2025-07-16T12:33:15.377123422Z" level=warning msg="cleaning up after shim disconnected" id=b6b5c34a86495f1f25a9e95b89bbbd56fed136d32691462272888b4ca4774aa4 namespace=k8s.io Jul 16 12:33:15.377131 env[1306]: time="2025-07-16T12:33:15.377132606Z" level=info msg="cleaning up dead shim" Jul 16 12:33:15.385285 env[1306]: time="2025-07-16T12:33:15.385248782Z" level=warning msg="cleanup warnings time=\"2025-07-16T12:33:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2934 runtime=io.containerd.runc.v2\n" Jul 16 12:33:15.456259 kubelet[2183]: I0716 12:33:15.455832 2183 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 16 12:33:15.588370 systemd[1]: run-containerd-runc-k8s.io-b6b5c34a86495f1f25a9e95b89bbbd56fed136d32691462272888b4ca4774aa4-runc.FEpg9x.mount: Deactivated successfully. Jul 16 12:33:15.588565 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6b5c34a86495f1f25a9e95b89bbbd56fed136d32691462272888b4ca4774aa4-rootfs.mount: Deactivated successfully. 
Jul 16 12:33:15.655776 kubelet[2183]: I0716 12:33:15.655598 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9xwr\" (UniqueName: \"kubernetes.io/projected/927b7472-bad5-428f-9ac3-ea2fe33052e3-kube-api-access-h9xwr\") pod \"calico-apiserver-66fbfc9dbd-7qrcc\" (UID: \"927b7472-bad5-428f-9ac3-ea2fe33052e3\") " pod="calico-apiserver/calico-apiserver-66fbfc9dbd-7qrcc" Jul 16 12:33:15.655776 kubelet[2183]: I0716 12:33:15.655690 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbpjb\" (UniqueName: \"kubernetes.io/projected/2e94965a-1e31-4dae-9e6f-fa9636bb6e1e-kube-api-access-zbpjb\") pod \"coredns-7c65d6cfc9-4vckm\" (UID: \"2e94965a-1e31-4dae-9e6f-fa9636bb6e1e\") " pod="kube-system/coredns-7c65d6cfc9-4vckm" Jul 16 12:33:15.655776 kubelet[2183]: I0716 12:33:15.655714 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckmkf\" (UniqueName: \"kubernetes.io/projected/ccdc85c1-7be8-495d-bb85-55cf9bd3cfe5-kube-api-access-ckmkf\") pod \"goldmane-58fd7646b9-qgcq4\" (UID: \"ccdc85c1-7be8-495d-bb85-55cf9bd3cfe5\") " pod="calico-system/goldmane-58fd7646b9-qgcq4" Jul 16 12:33:15.655776 kubelet[2183]: I0716 12:33:15.655749 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/927b7472-bad5-428f-9ac3-ea2fe33052e3-calico-apiserver-certs\") pod \"calico-apiserver-66fbfc9dbd-7qrcc\" (UID: \"927b7472-bad5-428f-9ac3-ea2fe33052e3\") " pod="calico-apiserver/calico-apiserver-66fbfc9dbd-7qrcc" Jul 16 12:33:15.655776 kubelet[2183]: I0716 12:33:15.655771 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ccdc85c1-7be8-495d-bb85-55cf9bd3cfe5-goldmane-key-pair\") pod \"goldmane-58fd7646b9-qgcq4\" (UID: \"ccdc85c1-7be8-495d-bb85-55cf9bd3cfe5\") " pod="calico-system/goldmane-58fd7646b9-qgcq4" Jul 16 12:33:15.656176 kubelet[2183]: I0716 12:33:15.655790 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzbxg\" (UniqueName: \"kubernetes.io/projected/18b5ae42-266d-4760-b2fc-63a368dee70d-kube-api-access-dzbxg\") pod \"coredns-7c65d6cfc9-psh6r\" (UID: \"18b5ae42-266d-4760-b2fc-63a368dee70d\") " pod="kube-system/coredns-7c65d6cfc9-psh6r" Jul 16 12:33:15.656176 kubelet[2183]: I0716 12:33:15.655820 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7ddab6da-2ee6-41f4-a22e-8303dfd78061-whisker-backend-key-pair\") pod \"whisker-6ddf456bc9-fznn8\" (UID: \"7ddab6da-2ee6-41f4-a22e-8303dfd78061\") " pod="calico-system/whisker-6ddf456bc9-fznn8" Jul 16 12:33:15.656176 kubelet[2183]: I0716 12:33:15.655838 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ddab6da-2ee6-41f4-a22e-8303dfd78061-whisker-ca-bundle\") pod \"whisker-6ddf456bc9-fznn8\" (UID: \"7ddab6da-2ee6-41f4-a22e-8303dfd78061\") " pod="calico-system/whisker-6ddf456bc9-fznn8" Jul 16 12:33:15.656176 kubelet[2183]: I0716 12:33:15.655871 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrdk6\" (UniqueName: 
\"kubernetes.io/projected/c753503c-8c32-4abd-adf0-0419aba85c9d-kube-api-access-hrdk6\") pod \"calico-kube-controllers-64fd67f98d-kkwfb\" (UID: \"c753503c-8c32-4abd-adf0-0419aba85c9d\") " pod="calico-system/calico-kube-controllers-64fd67f98d-kkwfb" Jul 16 12:33:15.656176 kubelet[2183]: I0716 12:33:15.655911 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccdc85c1-7be8-495d-bb85-55cf9bd3cfe5-config\") pod \"goldmane-58fd7646b9-qgcq4\" (UID: \"ccdc85c1-7be8-495d-bb85-55cf9bd3cfe5\") " pod="calico-system/goldmane-58fd7646b9-qgcq4" Jul 16 12:33:15.656398 kubelet[2183]: I0716 12:33:15.655933 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhkhc\" (UniqueName: \"kubernetes.io/projected/7ddab6da-2ee6-41f4-a22e-8303dfd78061-kube-api-access-xhkhc\") pod \"whisker-6ddf456bc9-fznn8\" (UID: \"7ddab6da-2ee6-41f4-a22e-8303dfd78061\") " pod="calico-system/whisker-6ddf456bc9-fznn8" Jul 16 12:33:15.656398 kubelet[2183]: I0716 12:33:15.655964 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bf32dfb7-052c-4edd-885b-1571df3da4fa-calico-apiserver-certs\") pod \"calico-apiserver-66fbfc9dbd-f9x9w\" (UID: \"bf32dfb7-052c-4edd-885b-1571df3da4fa\") " pod="calico-apiserver/calico-apiserver-66fbfc9dbd-f9x9w" Jul 16 12:33:15.656398 kubelet[2183]: I0716 12:33:15.655985 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e94965a-1e31-4dae-9e6f-fa9636bb6e1e-config-volume\") pod \"coredns-7c65d6cfc9-4vckm\" (UID: \"2e94965a-1e31-4dae-9e6f-fa9636bb6e1e\") " pod="kube-system/coredns-7c65d6cfc9-4vckm" Jul 16 12:33:15.656398 kubelet[2183]: I0716 12:33:15.656003 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccdc85c1-7be8-495d-bb85-55cf9bd3cfe5-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-qgcq4\" (UID: \"ccdc85c1-7be8-495d-bb85-55cf9bd3cfe5\") " pod="calico-system/goldmane-58fd7646b9-qgcq4" Jul 16 12:33:15.656398 kubelet[2183]: I0716 12:33:15.656035 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c753503c-8c32-4abd-adf0-0419aba85c9d-tigera-ca-bundle\") pod \"calico-kube-controllers-64fd67f98d-kkwfb\" (UID: \"c753503c-8c32-4abd-adf0-0419aba85c9d\") " pod="calico-system/calico-kube-controllers-64fd67f98d-kkwfb" Jul 16 12:33:15.656617 kubelet[2183]: I0716 12:33:15.656057 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18b5ae42-266d-4760-b2fc-63a368dee70d-config-volume\") pod \"coredns-7c65d6cfc9-psh6r\" (UID: \"18b5ae42-266d-4760-b2fc-63a368dee70d\") " pod="kube-system/coredns-7c65d6cfc9-psh6r" Jul 16 12:33:15.656617 kubelet[2183]: I0716 12:33:15.656076 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwjm9\" (UniqueName: \"kubernetes.io/projected/bf32dfb7-052c-4edd-885b-1571df3da4fa-kube-api-access-fwjm9\") pod \"calico-apiserver-66fbfc9dbd-f9x9w\" (UID: \"bf32dfb7-052c-4edd-885b-1571df3da4fa\") " 
pod="calico-apiserver/calico-apiserver-66fbfc9dbd-f9x9w" Jul 16 12:33:15.806300 env[1306]: time="2025-07-16T12:33:15.806257632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b22q6,Uid:58f808a6-a7a4-4400-b1f3-561a7728fef5,Namespace:calico-system,Attempt:0,}" Jul 16 12:33:15.825723 env[1306]: time="2025-07-16T12:33:15.825651582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4vckm,Uid:2e94965a-1e31-4dae-9e6f-fa9636bb6e1e,Namespace:kube-system,Attempt:0,}" Jul 16 12:33:15.826286 env[1306]: time="2025-07-16T12:33:15.826252674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-qgcq4,Uid:ccdc85c1-7be8-495d-bb85-55cf9bd3cfe5,Namespace:calico-system,Attempt:0,}" Jul 16 12:33:15.826514 env[1306]: time="2025-07-16T12:33:15.826488842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-psh6r,Uid:18b5ae42-266d-4760-b2fc-63a368dee70d,Namespace:kube-system,Attempt:0,}" Jul 16 12:33:15.833140 env[1306]: time="2025-07-16T12:33:15.833104592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66fbfc9dbd-f9x9w,Uid:bf32dfb7-052c-4edd-885b-1571df3da4fa,Namespace:calico-apiserver,Attempt:0,}" Jul 16 12:33:15.838514 env[1306]: time="2025-07-16T12:33:15.838374792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6ddf456bc9-fznn8,Uid:7ddab6da-2ee6-41f4-a22e-8303dfd78061,Namespace:calico-system,Attempt:0,}" Jul 16 12:33:15.838660 env[1306]: time="2025-07-16T12:33:15.838604309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64fd67f98d-kkwfb,Uid:c753503c-8c32-4abd-adf0-0419aba85c9d,Namespace:calico-system,Attempt:0,}" Jul 16 12:33:15.935787 env[1306]: time="2025-07-16T12:33:15.935752725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 16 12:33:16.066699 env[1306]: time="2025-07-16T12:33:16.066605418Z" level=error msg="Failed to destroy network for sandbox \"bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.067230 env[1306]: time="2025-07-16T12:33:16.067184709Z" level=error msg="encountered an error cleaning up failed sandbox \"bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.067412 env[1306]: time="2025-07-16T12:33:16.067357386Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-psh6r,Uid:18b5ae42-266d-4760-b2fc-63a368dee70d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.067995 kubelet[2183]: E0716 12:33:16.067645 2183 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.067995 kubelet[2183]: E0716 12:33:16.067750 2183 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-psh6r" Jul 16 12:33:16.067995 kubelet[2183]: E0716 12:33:16.067780 2183 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-psh6r" Jul 16 12:33:16.068192 kubelet[2183]: E0716 12:33:16.067836 2183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-psh6r_kube-system(18b5ae42-266d-4760-b2fc-63a368dee70d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-psh6r_kube-system(18b5ae42-266d-4760-b2fc-63a368dee70d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-psh6r" podUID="18b5ae42-266d-4760-b2fc-63a368dee70d" Jul 16 12:33:16.100815 env[1306]: time="2025-07-16T12:33:16.100770135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66fbfc9dbd-7qrcc,Uid:927b7472-bad5-428f-9ac3-ea2fe33052e3,Namespace:calico-apiserver,Attempt:0,}" Jul 16 12:33:16.123782 env[1306]: time="2025-07-16T12:33:16.123725615Z" level=error msg="Failed to destroy network for sandbox \"4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.124109 env[1306]: time="2025-07-16T12:33:16.124077625Z" level=error msg="encountered an error cleaning up failed sandbox \"4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.124163 env[1306]: time="2025-07-16T12:33:16.124130238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4vckm,Uid:2e94965a-1e31-4dae-9e6f-fa9636bb6e1e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.126252 kubelet[2183]: E0716 12:33:16.124404 2183 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.126252 kubelet[2183]: E0716 12:33:16.124488 2183 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4vckm" Jul 16 12:33:16.126252 kubelet[2183]: E0716 12:33:16.124530 2183 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4vckm" Jul 16 12:33:16.126414 kubelet[2183]: E0716 12:33:16.124574 2183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-4vckm_kube-system(2e94965a-1e31-4dae-9e6f-fa9636bb6e1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-4vckm_kube-system(2e94965a-1e31-4dae-9e6f-fa9636bb6e1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-4vckm" podUID="2e94965a-1e31-4dae-9e6f-fa9636bb6e1e" Jul 16 12:33:16.140438 env[1306]: time="2025-07-16T12:33:16.140379270Z" level=error msg="Failed to destroy network for sandbox \"805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.140752 env[1306]: time="2025-07-16T12:33:16.140720690Z" level=error msg="encountered an error cleaning up failed sandbox \"805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.140822 env[1306]: time="2025-07-16T12:33:16.140773636Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b22q6,Uid:58f808a6-a7a4-4400-b1f3-561a7728fef5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.141468 kubelet[2183]: 
E0716 12:33:16.141020 2183 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.141468 kubelet[2183]: E0716 12:33:16.141089 2183 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b22q6" Jul 16 12:33:16.141468 kubelet[2183]: E0716 12:33:16.141111 2183 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b22q6" Jul 16 12:33:16.141693 kubelet[2183]: E0716 12:33:16.141174 2183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b22q6_calico-system(58f808a6-a7a4-4400-b1f3-561a7728fef5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b22q6_calico-system(58f808a6-a7a4-4400-b1f3-561a7728fef5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b22q6" podUID="58f808a6-a7a4-4400-b1f3-561a7728fef5" Jul 16 12:33:16.168395 env[1306]: time="2025-07-16T12:33:16.168335141Z" level=error msg="Failed to destroy network for sandbox \"73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.168733 env[1306]: time="2025-07-16T12:33:16.168704059Z" level=error msg="encountered an error cleaning up failed sandbox \"73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.168791 env[1306]: time="2025-07-16T12:33:16.168754007Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66fbfc9dbd-f9x9w,Uid:bf32dfb7-052c-4edd-885b-1571df3da4fa,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 
16 12:33:16.169488 kubelet[2183]: E0716 12:33:16.169045 2183 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.169488 kubelet[2183]: E0716 12:33:16.169114 2183 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66fbfc9dbd-f9x9w" Jul 16 12:33:16.169488 kubelet[2183]: E0716 12:33:16.169135 2183 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66fbfc9dbd-f9x9w" Jul 16 12:33:16.170045 kubelet[2183]: E0716 12:33:16.169194 2183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66fbfc9dbd-f9x9w_calico-apiserver(bf32dfb7-052c-4edd-885b-1571df3da4fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66fbfc9dbd-f9x9w_calico-apiserver(bf32dfb7-052c-4edd-885b-1571df3da4fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66fbfc9dbd-f9x9w" podUID="bf32dfb7-052c-4edd-885b-1571df3da4fa" Jul 16 12:33:16.171912 env[1306]: time="2025-07-16T12:33:16.171872202Z" level=error msg="Failed to destroy network for sandbox \"09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.172328 env[1306]: time="2025-07-16T12:33:16.172280692Z" level=error msg="encountered an error cleaning up failed sandbox \"09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.172466 env[1306]: time="2025-07-16T12:33:16.172427493Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64fd67f98d-kkwfb,Uid:c753503c-8c32-4abd-adf0-0419aba85c9d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.172963 kubelet[2183]: E0716 12:33:16.172792 2183 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.172963 kubelet[2183]: E0716 12:33:16.172838 2183 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64fd67f98d-kkwfb" Jul 16 12:33:16.172963 kubelet[2183]: E0716 12:33:16.172864 2183 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64fd67f98d-kkwfb" Jul 16 12:33:16.173137 kubelet[2183]: E0716 12:33:16.172903 2183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64fd67f98d-kkwfb_calico-system(c753503c-8c32-4abd-adf0-0419aba85c9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64fd67f98d-kkwfb_calico-system(c753503c-8c32-4abd-adf0-0419aba85c9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64fd67f98d-kkwfb" podUID="c753503c-8c32-4abd-adf0-0419aba85c9d" Jul 16 12:33:16.183721 env[1306]: time="2025-07-16T12:33:16.183654657Z" level=error msg="Failed to destroy network for sandbox \"dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.184161 env[1306]: time="2025-07-16T12:33:16.184128925Z" level=error msg="encountered an error cleaning up failed sandbox \"dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.184299 env[1306]: time="2025-07-16T12:33:16.184270854Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-qgcq4,Uid:ccdc85c1-7be8-495d-bb85-55cf9bd3cfe5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.185061 kubelet[2183]: E0716 12:33:16.184558 2183 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.185061 kubelet[2183]: E0716 12:33:16.184629 2183 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-qgcq4" Jul 16 12:33:16.185061 kubelet[2183]: E0716 12:33:16.184649 2183 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-qgcq4" Jul 16 12:33:16.185296 kubelet[2183]: E0716 12:33:16.184717 2183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-qgcq4_calico-system(ccdc85c1-7be8-495d-bb85-55cf9bd3cfe5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-qgcq4_calico-system(ccdc85c1-7be8-495d-bb85-55cf9bd3cfe5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-qgcq4" podUID="ccdc85c1-7be8-495d-bb85-55cf9bd3cfe5" Jul 16 12:33:16.189579 env[1306]: time="2025-07-16T12:33:16.189495279Z" level=error msg="Failed to destroy network for sandbox \"c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.191115 env[1306]: time="2025-07-16T12:33:16.191062011Z" level=error msg="encountered an error cleaning up failed sandbox \"c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.191200 env[1306]: time="2025-07-16T12:33:16.191141426Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6ddf456bc9-fznn8,Uid:7ddab6da-2ee6-41f4-a22e-8303dfd78061,Namespace:calico-system,Attempt:0,} failed, error" 
error="failed to setup network for sandbox \"c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.191910 kubelet[2183]: E0716 12:33:16.191427 2183 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.191910 kubelet[2183]: E0716 12:33:16.191471 2183 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6ddf456bc9-fznn8" Jul 16 12:33:16.191910 kubelet[2183]: E0716 12:33:16.191500 2183 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6ddf456bc9-fznn8" Jul 16 12:33:16.192116 kubelet[2183]: E0716 12:33:16.191551 2183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6ddf456bc9-fznn8_calico-system(7ddab6da-2ee6-41f4-a22e-8303dfd78061)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6ddf456bc9-fznn8_calico-system(7ddab6da-2ee6-41f4-a22e-8303dfd78061)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6ddf456bc9-fznn8" podUID="7ddab6da-2ee6-41f4-a22e-8303dfd78061" Jul 16 12:33:16.225922 env[1306]: time="2025-07-16T12:33:16.225854657Z" level=error msg="Failed to destroy network for sandbox \"11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.226406 env[1306]: time="2025-07-16T12:33:16.226373007Z" level=error msg="encountered an error cleaning up failed sandbox \"11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.226532 env[1306]: time="2025-07-16T12:33:16.226506393Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-66fbfc9dbd-7qrcc,Uid:927b7472-bad5-428f-9ac3-ea2fe33052e3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.228557 kubelet[2183]: E0716 12:33:16.226900 2183 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:16.228557 kubelet[2183]: E0716 12:33:16.226968 2183 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66fbfc9dbd-7qrcc" Jul 16 12:33:16.228557 kubelet[2183]: E0716 12:33:16.226987 2183 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66fbfc9dbd-7qrcc" Jul 16 12:33:16.229095 kubelet[2183]: E0716 12:33:16.227061 2183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66fbfc9dbd-7qrcc_calico-apiserver(927b7472-bad5-428f-9ac3-ea2fe33052e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66fbfc9dbd-7qrcc_calico-apiserver(927b7472-bad5-428f-9ac3-ea2fe33052e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66fbfc9dbd-7qrcc" podUID="927b7472-bad5-428f-9ac3-ea2fe33052e3" Jul 16 12:33:16.928485 kubelet[2183]: I0716 12:33:16.928341 2183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" Jul 16 12:33:16.933595 kubelet[2183]: I0716 12:33:16.933532 2183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" Jul 16 12:33:16.940921 kubelet[2183]: I0716 12:33:16.940880 2183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" Jul 16 12:33:16.945400 kubelet[2183]: I0716 12:33:16.944380 2183 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" Jul 16 12:33:16.948250 env[1306]: time="2025-07-16T12:33:16.948189546Z" level=info msg="StopPodSandbox for \"bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733\"" Jul 16 12:33:16.949170 env[1306]: time="2025-07-16T12:33:16.949137031Z" level=info msg="StopPodSandbox for \"805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b\"" Jul 16 12:33:16.949630 env[1306]: time="2025-07-16T12:33:16.949604265Z" level=info msg="StopPodSandbox for \"4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6\"" Jul 16 12:33:16.954151 env[1306]: time="2025-07-16T12:33:16.954109587Z" level=info msg="StopPodSandbox for \"73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2\"" Jul 16 12:33:16.954614 kubelet[2183]: I0716 12:33:16.954586 2183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" Jul 16 12:33:16.957181 env[1306]: time="2025-07-16T12:33:16.957145604Z" level=info msg="StopPodSandbox for \"dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252\"" Jul 16 12:33:16.959388 kubelet[2183]: I0716 12:33:16.959351 2183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" Jul 16 12:33:16.962429 env[1306]: time="2025-07-16T12:33:16.961945466Z" level=info msg="StopPodSandbox for \"11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c\"" Jul 16 12:33:16.969555 kubelet[2183]: I0716 12:33:16.969521 2183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" Jul 16 12:33:16.973528 env[1306]: time="2025-07-16T12:33:16.973489959Z" level=info msg="StopPodSandbox for \"09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a\"" Jul 16 12:33:16.978501 kubelet[2183]: I0716 12:33:16.978470 2183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" Jul 16 12:33:16.981405 env[1306]: time="2025-07-16T12:33:16.981334269Z" level=info msg="StopPodSandbox for \"c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a\"" Jul 16 12:33:17.070215 env[1306]: time="2025-07-16T12:33:17.070132371Z" level=error msg="StopPodSandbox for \"bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733\" failed" error="failed to destroy network for sandbox \"bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:17.070497 kubelet[2183]: E0716 12:33:17.070450 2183 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" Jul 16 12:33:17.072043 kubelet[2183]: E0716 12:33:17.070527 2183 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733"} Jul 16 12:33:17.072183 kubelet[2183]: E0716 12:33:17.072081 2183 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"18b5ae42-266d-4760-b2fc-63a368dee70d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 16 12:33:17.072183 kubelet[2183]: E0716 12:33:17.072111 2183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"18b5ae42-266d-4760-b2fc-63a368dee70d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-psh6r" podUID="18b5ae42-266d-4760-b2fc-63a368dee70d" Jul 16 12:33:17.112848 env[1306]: time="2025-07-16T12:33:17.112738845Z" level=error msg="StopPodSandbox for \"73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2\" failed" error="failed to destroy network for sandbox \"73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:17.113269 kubelet[2183]: E0716 12:33:17.113200 2183 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" Jul 16 12:33:17.113363 kubelet[2183]: E0716 12:33:17.113296 2183 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2"} Jul 16 12:33:17.113403 kubelet[2183]: E0716 12:33:17.113356 2183 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bf32dfb7-052c-4edd-885b-1571df3da4fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 16 12:33:17.113500 kubelet[2183]: E0716 12:33:17.113390 2183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bf32dfb7-052c-4edd-885b-1571df3da4fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66fbfc9dbd-f9x9w" podUID="bf32dfb7-052c-4edd-885b-1571df3da4fa" Jul 16 12:33:17.117409 env[1306]: time="2025-07-16T12:33:17.117155341Z" level=error msg="StopPodSandbox for \"dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252\" failed" error="failed to destroy network for sandbox \"dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:17.118117 kubelet[2183]: E0716 12:33:17.118064 2183 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" Jul 16 12:33:17.118476 kubelet[2183]: E0716 12:33:17.118138 2183 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252"} Jul 16 12:33:17.118535 kubelet[2183]: E0716 12:33:17.118506 2183 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ccdc85c1-7be8-495d-bb85-55cf9bd3cfe5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 16 12:33:17.118617 kubelet[2183]: E0716 12:33:17.118547 2183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ccdc85c1-7be8-495d-bb85-55cf9bd3cfe5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-qgcq4" podUID="ccdc85c1-7be8-495d-bb85-55cf9bd3cfe5" Jul 16 12:33:17.124028 env[1306]: time="2025-07-16T12:33:17.123969520Z" level=error msg="StopPodSandbox for \"4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6\" failed" error="failed to destroy network for sandbox \"4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:17.124296 kubelet[2183]: E0716 12:33:17.124250 2183 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" podSandboxID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" Jul 16 12:33:17.124453 kubelet[2183]: E0716 12:33:17.124309 2183 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6"} Jul 16 12:33:17.124453 kubelet[2183]: E0716 12:33:17.124360 2183 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e94965a-1e31-4dae-9e6f-fa9636bb6e1e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 16 12:33:17.124453 kubelet[2183]: E0716 12:33:17.124390 2183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e94965a-1e31-4dae-9e6f-fa9636bb6e1e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-4vckm" podUID="2e94965a-1e31-4dae-9e6f-fa9636bb6e1e" Jul 16 12:33:17.148693 env[1306]: time="2025-07-16T12:33:17.148592644Z" level=error msg="StopPodSandbox for \"805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b\" failed" error="failed to destroy network for sandbox \"805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:17.149093 kubelet[2183]: E0716 12:33:17.149034 2183 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" Jul 16 12:33:17.149186 kubelet[2183]: E0716 12:33:17.149133 2183 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b"} Jul 16 12:33:17.149225 kubelet[2183]: E0716 12:33:17.149189 2183 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"58f808a6-a7a4-4400-b1f3-561a7728fef5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 16 12:33:17.149330 kubelet[2183]: E0716 12:33:17.149221 2183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"58f808a6-a7a4-4400-b1f3-561a7728fef5\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b22q6" podUID="58f808a6-a7a4-4400-b1f3-561a7728fef5" Jul 16 12:33:17.158539 env[1306]: time="2025-07-16T12:33:17.158461283Z" level=error msg="StopPodSandbox for \"11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c\" failed" error="failed to destroy network for sandbox \"11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:17.159244 kubelet[2183]: E0716 12:33:17.159185 2183 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" Jul 16 12:33:17.159380 kubelet[2183]: E0716 12:33:17.159288 2183 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c"} Jul 16 12:33:17.159380 kubelet[2183]: E0716 12:33:17.159340 2183 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"927b7472-bad5-428f-9ac3-ea2fe33052e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 16 12:33:17.159504 kubelet[2183]: E0716 12:33:17.159370 2183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"927b7472-bad5-428f-9ac3-ea2fe33052e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66fbfc9dbd-7qrcc" podUID="927b7472-bad5-428f-9ac3-ea2fe33052e3" Jul 16 12:33:17.183775 env[1306]: time="2025-07-16T12:33:17.183441627Z" level=error msg="StopPodSandbox for \"09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a\" failed" error="failed to destroy network for sandbox \"09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:17.185849 kubelet[2183]: E0716 12:33:17.185791 2183 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" Jul 16 12:33:17.186094 kubelet[2183]: E0716 12:33:17.185874 2183 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a"} Jul 16 12:33:17.186094 kubelet[2183]: E0716 12:33:17.185927 2183 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c753503c-8c32-4abd-adf0-0419aba85c9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 16 12:33:17.186094 kubelet[2183]: E0716 12:33:17.185959 2183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c753503c-8c32-4abd-adf0-0419aba85c9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64fd67f98d-kkwfb" podUID="c753503c-8c32-4abd-adf0-0419aba85c9d" Jul 16 12:33:17.186568 kubelet[2183]: E0716 12:33:17.186309 2183 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" Jul 16 12:33:17.186568 kubelet[2183]: E0716 12:33:17.186363 2183 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a"} Jul 16 12:33:17.186568 kubelet[2183]: E0716 12:33:17.186405 2183 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ddab6da-2ee6-41f4-a22e-8303dfd78061\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 16 12:33:17.186568 kubelet[2183]: E0716 12:33:17.186429 2183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ddab6da-2ee6-41f4-a22e-8303dfd78061\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6ddf456bc9-fznn8" podUID="7ddab6da-2ee6-41f4-a22e-8303dfd78061" Jul 16 12:33:17.187116 env[1306]: time="2025-07-16T12:33:17.186120049Z" level=error msg="StopPodSandbox for \"c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a\" failed" error="failed to destroy network for sandbox \"c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 12:33:19.691248 kubelet[2183]: I0716 12:33:19.690991 2183 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 12:33:19.764000 audit[3294]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=3294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:19.776780 kernel: kauditd_printk_skb: 25 callbacks suppressed Jul 16 12:33:19.776942 kernel: audit: type=1325 audit(1752669199.764:305): table=filter:99 family=2 entries=21 op=nft_register_rule pid=3294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:19.781699 kernel: audit: type=1300 audit(1752669199.764:305): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc4c52eae0 a2=0 a3=7ffc4c52eacc items=0 ppid=2334 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:19.764000 audit[3294]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc4c52eae0 a2=0 a3=7ffc4c52eacc items=0 ppid=2334 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:19.764000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:19.780000 audit[3294]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=3294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:19.799584 kernel: audit: type=1327 audit(1752669199.764:305): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:19.799664 kernel: audit: type=1325 audit(1752669199.780:306): table=nat:100 family=2 entries=19 op=nft_register_chain pid=3294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:19.799710 kernel: audit: type=1300 audit(1752669199.780:306): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc4c52eae0 a2=0 a3=7ffc4c52eacc items=0 ppid=2334 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:19.780000 audit[3294]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc4c52eae0 a2=0 a3=7ffc4c52eacc items=0 ppid=2334 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:19.780000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:19.806291 kernel: audit: type=1327 audit(1752669199.780:306): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:25.524561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3851631838.mount: Deactivated successfully. Jul 16 12:33:25.566883 env[1306]: time="2025-07-16T12:33:25.566801316Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:25.569825 env[1306]: time="2025-07-16T12:33:25.569770636Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:25.572655 env[1306]: time="2025-07-16T12:33:25.572570153Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:25.575591 env[1306]: time="2025-07-16T12:33:25.575529252Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:25.576850 env[1306]: time="2025-07-16T12:33:25.576778463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 16 12:33:25.626181 env[1306]: time="2025-07-16T12:33:25.626139692Z" level=info msg="CreateContainer within sandbox \"6ed3e49e2abc74b9d2f27dc6e0693ddfc0424b9597c2a50efed31c2e219be671\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 16 12:33:25.640096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount416147455.mount: Deactivated successfully. Jul 16 12:33:25.640991 env[1306]: time="2025-07-16T12:33:25.640958484Z" level=info msg="CreateContainer within sandbox \"6ed3e49e2abc74b9d2f27dc6e0693ddfc0424b9597c2a50efed31c2e219be671\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"702e814bc9b28efb7a610e8566f5593b10dc3c0a78903f8443fc9c4dc85c8216\"" Jul 16 12:33:25.643151 env[1306]: time="2025-07-16T12:33:25.643122848Z" level=info msg="StartContainer for \"702e814bc9b28efb7a610e8566f5593b10dc3c0a78903f8443fc9c4dc85c8216\"" Jul 16 12:33:25.710589 env[1306]: time="2025-07-16T12:33:25.710550215Z" level=info msg="StartContainer for \"702e814bc9b28efb7a610e8566f5593b10dc3c0a78903f8443fc9c4dc85c8216\" returns successfully" Jul 16 12:33:25.922745 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 16 12:33:25.923542 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Jul 16 12:33:26.078900 kubelet[2183]: I0716 12:33:26.076377 2183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5qrcn" podStartSLOduration=1.549447301 podStartE2EDuration="23.076308716s" podCreationTimestamp="2025-07-16 12:33:03 +0000 UTC" firstStartedPulling="2025-07-16 12:33:04.052389318 +0000 UTC m=+20.482430937" lastFinishedPulling="2025-07-16 12:33:25.579250691 +0000 UTC m=+42.009292352" observedRunningTime="2025-07-16 12:33:26.075041458 +0000 UTC m=+42.505083098" watchObservedRunningTime="2025-07-16 12:33:26.076308716 +0000 UTC m=+42.506350359" Jul 16 12:33:26.211950 env[1306]: time="2025-07-16T12:33:26.211902832Z" level=info msg="StopPodSandbox for \"c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a\"" Jul 16 12:33:26.493432 env[1306]: 2025-07-16 12:33:26.326 [INFO][3380] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" Jul 16 12:33:26.493432 env[1306]: 2025-07-16 12:33:26.328 [INFO][3380] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" iface="eth0" netns="/var/run/netns/cni-f64701be-20cb-ec77-6d90-88b0762a76f1" Jul 16 12:33:26.493432 env[1306]: 2025-07-16 12:33:26.328 [INFO][3380] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" iface="eth0" netns="/var/run/netns/cni-f64701be-20cb-ec77-6d90-88b0762a76f1" Jul 16 12:33:26.493432 env[1306]: 2025-07-16 12:33:26.334 [INFO][3380] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" iface="eth0" netns="/var/run/netns/cni-f64701be-20cb-ec77-6d90-88b0762a76f1" Jul 16 12:33:26.493432 env[1306]: 2025-07-16 12:33:26.334 [INFO][3380] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" Jul 16 12:33:26.493432 env[1306]: 2025-07-16 12:33:26.334 [INFO][3380] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" Jul 16 12:33:26.493432 env[1306]: 2025-07-16 12:33:26.463 [INFO][3389] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" HandleID="k8s-pod-network.c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" Workload="srv--f25or.gb1.brightbox.com-k8s-whisker--6ddf456bc9--fznn8-eth0" Jul 16 12:33:26.493432 env[1306]: 2025-07-16 12:33:26.466 [INFO][3389] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:26.493432 env[1306]: 2025-07-16 12:33:26.466 [INFO][3389] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 12:33:26.493432 env[1306]: 2025-07-16 12:33:26.482 [WARNING][3389] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" HandleID="k8s-pod-network.c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" Workload="srv--f25or.gb1.brightbox.com-k8s-whisker--6ddf456bc9--fznn8-eth0" Jul 16 12:33:26.493432 env[1306]: 2025-07-16 12:33:26.482 [INFO][3389] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" HandleID="k8s-pod-network.c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" Workload="srv--f25or.gb1.brightbox.com-k8s-whisker--6ddf456bc9--fznn8-eth0" Jul 16 12:33:26.493432 env[1306]: 2025-07-16 12:33:26.486 [INFO][3389] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 12:33:26.493432 env[1306]: 2025-07-16 12:33:26.489 [INFO][3380] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" Jul 16 12:33:26.497574 env[1306]: time="2025-07-16T12:33:26.493623193Z" level=info msg="TearDown network for sandbox \"c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a\" successfully" Jul 16 12:33:26.497574 env[1306]: time="2025-07-16T12:33:26.493703163Z" level=info msg="StopPodSandbox for \"c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a\" returns successfully" Jul 16 12:33:26.529432 systemd[1]: run-netns-cni\x2df64701be\x2d20cb\x2dec77\x2d6d90\x2d88b0762a76f1.mount: Deactivated successfully. Jul 16 12:33:26.659726 kubelet[2183]: I0716 12:33:26.659264 2183 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ddab6da-2ee6-41f4-a22e-8303dfd78061-whisker-ca-bundle\") pod \"7ddab6da-2ee6-41f4-a22e-8303dfd78061\" (UID: \"7ddab6da-2ee6-41f4-a22e-8303dfd78061\") " Jul 16 12:33:26.659726 kubelet[2183]: I0716 12:33:26.659419 2183 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhkhc\" (UniqueName: \"kubernetes.io/projected/7ddab6da-2ee6-41f4-a22e-8303dfd78061-kube-api-access-xhkhc\") pod \"7ddab6da-2ee6-41f4-a22e-8303dfd78061\" (UID: \"7ddab6da-2ee6-41f4-a22e-8303dfd78061\") " Jul 16 12:33:26.659726 kubelet[2183]: I0716 12:33:26.659482 2183 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7ddab6da-2ee6-41f4-a22e-8303dfd78061-whisker-backend-key-pair\") pod \"7ddab6da-2ee6-41f4-a22e-8303dfd78061\" (UID: \"7ddab6da-2ee6-41f4-a22e-8303dfd78061\") " Jul 16 12:33:26.678151 kubelet[2183]: I0716 12:33:26.671189 2183 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ddab6da-2ee6-41f4-a22e-8303dfd78061-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7ddab6da-2ee6-41f4-a22e-8303dfd78061" (UID: "7ddab6da-2ee6-41f4-a22e-8303dfd78061"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 16 12:33:26.691934 systemd[1]: var-lib-kubelet-pods-7ddab6da\x2d2ee6\x2d41f4\x2da22e\x2d8303dfd78061-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 16 12:33:26.692833 kubelet[2183]: I0716 12:33:26.692778 2183 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ddab6da-2ee6-41f4-a22e-8303dfd78061-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7ddab6da-2ee6-41f4-a22e-8303dfd78061" (UID: "7ddab6da-2ee6-41f4-a22e-8303dfd78061"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 16 12:33:26.697122 systemd[1]: var-lib-kubelet-pods-7ddab6da\x2d2ee6\x2d41f4\x2da22e\x2d8303dfd78061-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxhkhc.mount: Deactivated successfully. Jul 16 12:33:26.698089 kubelet[2183]: I0716 12:33:26.697997 2183 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ddab6da-2ee6-41f4-a22e-8303dfd78061-kube-api-access-xhkhc" (OuterVolumeSpecName: "kube-api-access-xhkhc") pod "7ddab6da-2ee6-41f4-a22e-8303dfd78061" (UID: "7ddab6da-2ee6-41f4-a22e-8303dfd78061"). InnerVolumeSpecName "kube-api-access-xhkhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 16 12:33:26.761335 kubelet[2183]: I0716 12:33:26.761000 2183 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhkhc\" (UniqueName: \"kubernetes.io/projected/7ddab6da-2ee6-41f4-a22e-8303dfd78061-kube-api-access-xhkhc\") on node \"srv-f25or.gb1.brightbox.com\" DevicePath \"\"" Jul 16 12:33:26.762771 kubelet[2183]: I0716 12:33:26.762648 2183 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7ddab6da-2ee6-41f4-a22e-8303dfd78061-whisker-backend-key-pair\") on node \"srv-f25or.gb1.brightbox.com\" DevicePath \"\"" Jul 16 12:33:26.763358 kubelet[2183]: I0716 12:33:26.763310 2183 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ddab6da-2ee6-41f4-a22e-8303dfd78061-whisker-ca-bundle\") on node \"srv-f25or.gb1.brightbox.com\" DevicePath \"\"" Jul 16 12:33:27.268618 kubelet[2183]: I0716 12:33:27.268478 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbv57\" (UniqueName: \"kubernetes.io/projected/33ac34c1-7a4a-4840-aa0e-6ff9fb2aeba6-kube-api-access-bbv57\") pod \"whisker-5d8559b6d9-4bwfz\" (UID: \"33ac34c1-7a4a-4840-aa0e-6ff9fb2aeba6\") " pod="calico-system/whisker-5d8559b6d9-4bwfz" Jul 16 12:33:27.269317 kubelet[2183]: I0716 12:33:27.268633 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/33ac34c1-7a4a-4840-aa0e-6ff9fb2aeba6-whisker-backend-key-pair\") pod \"whisker-5d8559b6d9-4bwfz\" (UID: \"33ac34c1-7a4a-4840-aa0e-6ff9fb2aeba6\") " pod="calico-system/whisker-5d8559b6d9-4bwfz" Jul 16 12:33:27.269317 kubelet[2183]: I0716 12:33:27.268727 2183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33ac34c1-7a4a-4840-aa0e-6ff9fb2aeba6-whisker-ca-bundle\") pod \"whisker-5d8559b6d9-4bwfz\" (UID: \"33ac34c1-7a4a-4840-aa0e-6ff9fb2aeba6\") " pod="calico-system/whisker-5d8559b6d9-4bwfz" Jul 16 12:33:27.433356 env[1306]: time="2025-07-16T12:33:27.433244957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d8559b6d9-4bwfz,Uid:33ac34c1-7a4a-4840-aa0e-6ff9fb2aeba6,Namespace:calico-system,Attempt:0,}" Jul 16 12:33:27.528022 systemd[1]: 
run-containerd-runc-k8s.io-702e814bc9b28efb7a610e8566f5593b10dc3c0a78903f8443fc9c4dc85c8216-runc.ZzuNsQ.mount: Deactivated successfully. Jul 16 12:33:27.584000 audit[3483]: AVC avc: denied { write } for pid=3483 comm="tee" name="fd" dev="proc" ino=29176 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 16 12:33:27.584000 audit[3483]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff0820e7bd a2=241 a3=1b6 items=1 ppid=3463 pid=3483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:27.597356 kernel: audit: type=1400 audit(1752669207.584:307): avc: denied { write } for pid=3483 comm="tee" name="fd" dev="proc" ino=29176 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 16 12:33:27.597507 kernel: audit: type=1300 audit(1752669207.584:307): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff0820e7bd a2=241 a3=1b6 items=1 ppid=3463 pid=3483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:27.584000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jul 16 12:33:27.584000 audit: PATH item=0 name="/dev/fd/63" inode=29173 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:33:27.613748 kernel: audit: type=1307 audit(1752669207.584:307): cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jul 16 12:33:27.615123 kernel: audit: type=1302 audit(1752669207.584:307): item=0 name="/dev/fd/63" inode=29173 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:33:27.584000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 16 12:33:27.623707 kernel: audit: type=1327 audit(1752669207.584:307): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 16 12:33:27.682000 audit[3506]: AVC avc: denied { write } for pid=3506 comm="tee" name="fd" dev="proc" ino=29988 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 16 12:33:27.686694 kernel: audit: type=1400 audit(1752669207.682:308): avc: denied { write } for pid=3506 comm="tee" name="fd" dev="proc" ino=29988 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 16 12:33:27.708156 systemd-networkd[1084]: calid65a4f398e1: Link UP Jul 16 12:33:27.712155 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 16 12:33:27.712575 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid65a4f398e1: link becomes ready Jul 16 12:33:27.719861 systemd-networkd[1084]: calid65a4f398e1: Gained carrier Jul 16 12:33:27.742663 env[1306]: 2025-07-16 12:33:27.486 [INFO][3431] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 16 12:33:27.742663 env[1306]: 2025-07-16 12:33:27.501 [INFO][3431] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{srv--f25or.gb1.brightbox.com-k8s-whisker--5d8559b6d9--4bwfz-eth0 whisker-5d8559b6d9- calico-system 33ac34c1-7a4a-4840-aa0e-6ff9fb2aeba6 893 0 2025-07-16 12:33:27 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5d8559b6d9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-f25or.gb1.brightbox.com whisker-5d8559b6d9-4bwfz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid65a4f398e1 [] [] }} ContainerID="4320fc2b6f5ff455e08a9debf95d8eaee87e019c6280fca90d39b62ca32e5198" Namespace="calico-system" Pod="whisker-5d8559b6d9-4bwfz" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-whisker--5d8559b6d9--4bwfz-" Jul 16 12:33:27.742663 env[1306]: 2025-07-16 12:33:27.501 [INFO][3431] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4320fc2b6f5ff455e08a9debf95d8eaee87e019c6280fca90d39b62ca32e5198" Namespace="calico-system" Pod="whisker-5d8559b6d9-4bwfz" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-whisker--5d8559b6d9--4bwfz-eth0" Jul 16 12:33:27.742663 env[1306]: 2025-07-16 12:33:27.604 [INFO][3442] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4320fc2b6f5ff455e08a9debf95d8eaee87e019c6280fca90d39b62ca32e5198" HandleID="k8s-pod-network.4320fc2b6f5ff455e08a9debf95d8eaee87e019c6280fca90d39b62ca32e5198" Workload="srv--f25or.gb1.brightbox.com-k8s-whisker--5d8559b6d9--4bwfz-eth0" Jul 16 12:33:27.742663 env[1306]: 2025-07-16 12:33:27.605 [INFO][3442] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4320fc2b6f5ff455e08a9debf95d8eaee87e019c6280fca90d39b62ca32e5198" HandleID="k8s-pod-network.4320fc2b6f5ff455e08a9debf95d8eaee87e019c6280fca90d39b62ca32e5198" Workload="srv--f25or.gb1.brightbox.com-k8s-whisker--5d8559b6d9--4bwfz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e110), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-f25or.gb1.brightbox.com", "pod":"whisker-5d8559b6d9-4bwfz", "timestamp":"2025-07-16 12:33:27.604532473 +0000 UTC"}, Hostname:"srv-f25or.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 12:33:27.742663 env[1306]: 2025-07-16 12:33:27.605 [INFO][3442] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:27.742663 env[1306]: 2025-07-16 12:33:27.605 [INFO][3442] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 16 12:33:27.742663 env[1306]: 2025-07-16 12:33:27.605 [INFO][3442] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-f25or.gb1.brightbox.com' Jul 16 12:33:27.742663 env[1306]: 2025-07-16 12:33:27.629 [INFO][3442] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4320fc2b6f5ff455e08a9debf95d8eaee87e019c6280fca90d39b62ca32e5198" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:27.742663 env[1306]: 2025-07-16 12:33:27.638 [INFO][3442] ipam/ipam.go 394: Looking up existing affinities for host host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:27.742663 env[1306]: 2025-07-16 12:33:27.644 [INFO][3442] ipam/ipam.go 511: Trying affinity for 192.168.20.64/26 host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:27.742663 env[1306]: 2025-07-16 12:33:27.647 [INFO][3442] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.64/26 host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:27.742663 env[1306]: 2025-07-16 12:33:27.650 [INFO][3442] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.64/26 host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:27.742663 env[1306]: 2025-07-16 12:33:27.650 [INFO][3442] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.20.64/26 handle="k8s-pod-network.4320fc2b6f5ff455e08a9debf95d8eaee87e019c6280fca90d39b62ca32e5198" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:27.742663 env[1306]: 2025-07-16 12:33:27.652 [INFO][3442] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4320fc2b6f5ff455e08a9debf95d8eaee87e019c6280fca90d39b62ca32e5198 Jul 16 12:33:27.742663 env[1306]: 2025-07-16 12:33:27.660 [INFO][3442] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.20.64/26 handle="k8s-pod-network.4320fc2b6f5ff455e08a9debf95d8eaee87e019c6280fca90d39b62ca32e5198" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:27.742663 env[1306]: 2025-07-16 12:33:27.668 [INFO][3442] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.20.65/26] block=192.168.20.64/26 handle="k8s-pod-network.4320fc2b6f5ff455e08a9debf95d8eaee87e019c6280fca90d39b62ca32e5198" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:27.742663 env[1306]: 2025-07-16 12:33:27.668 [INFO][3442] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.65/26] handle="k8s-pod-network.4320fc2b6f5ff455e08a9debf95d8eaee87e019c6280fca90d39b62ca32e5198" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:27.742663 env[1306]: 2025-07-16 12:33:27.668 [INFO][3442] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 16 12:33:27.742663 env[1306]: 2025-07-16 12:33:27.668 [INFO][3442] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.65/26] IPv6=[] ContainerID="4320fc2b6f5ff455e08a9debf95d8eaee87e019c6280fca90d39b62ca32e5198" HandleID="k8s-pod-network.4320fc2b6f5ff455e08a9debf95d8eaee87e019c6280fca90d39b62ca32e5198" Workload="srv--f25or.gb1.brightbox.com-k8s-whisker--5d8559b6d9--4bwfz-eth0" Jul 16 12:33:27.743000 audit[3509]: AVC avc: denied { write } for pid=3509 comm="tee" name="fd" dev="proc" ino=30021 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 16 12:33:27.682000 audit[3506]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd67ebc7ce a2=241 a3=1b6 items=1 ppid=3456 pid=3506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:27.748928 env[1306]: 2025-07-16 12:33:27.672 [INFO][3431] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4320fc2b6f5ff455e08a9debf95d8eaee87e019c6280fca90d39b62ca32e5198" Namespace="calico-system" Pod="whisker-5d8559b6d9-4bwfz" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-whisker--5d8559b6d9--4bwfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-whisker--5d8559b6d9--4bwfz-eth0", GenerateName:"whisker-5d8559b6d9-", Namespace:"calico-system", SelfLink:"", UID:"33ac34c1-7a4a-4840-aa0e-6ff9fb2aeba6", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 33, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d8559b6d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"", Pod:"whisker-5d8559b6d9-4bwfz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.20.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid65a4f398e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:27.748928 env[1306]: 2025-07-16 12:33:27.672 [INFO][3431] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.65/32] ContainerID="4320fc2b6f5ff455e08a9debf95d8eaee87e019c6280fca90d39b62ca32e5198" Namespace="calico-system" Pod="whisker-5d8559b6d9-4bwfz" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-whisker--5d8559b6d9--4bwfz-eth0" Jul 16 12:33:27.748928 env[1306]: 2025-07-16 12:33:27.672 [INFO][3431] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid65a4f398e1 ContainerID="4320fc2b6f5ff455e08a9debf95d8eaee87e019c6280fca90d39b62ca32e5198" Namespace="calico-system" Pod="whisker-5d8559b6d9-4bwfz" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-whisker--5d8559b6d9--4bwfz-eth0" Jul 16 12:33:27.748928 env[1306]: 2025-07-16 12:33:27.714 [INFO][3431] cni-plugin/dataplane_linux.go 
508: Disabling IPv4 forwarding ContainerID="4320fc2b6f5ff455e08a9debf95d8eaee87e019c6280fca90d39b62ca32e5198" Namespace="calico-system" Pod="whisker-5d8559b6d9-4bwfz" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-whisker--5d8559b6d9--4bwfz-eth0" Jul 16 12:33:27.748928 env[1306]: 2025-07-16 12:33:27.714 [INFO][3431] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4320fc2b6f5ff455e08a9debf95d8eaee87e019c6280fca90d39b62ca32e5198" Namespace="calico-system" Pod="whisker-5d8559b6d9-4bwfz" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-whisker--5d8559b6d9--4bwfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-whisker--5d8559b6d9--4bwfz-eth0", GenerateName:"whisker-5d8559b6d9-", Namespace:"calico-system", SelfLink:"", UID:"33ac34c1-7a4a-4840-aa0e-6ff9fb2aeba6", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 33, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d8559b6d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"4320fc2b6f5ff455e08a9debf95d8eaee87e019c6280fca90d39b62ca32e5198", Pod:"whisker-5d8559b6d9-4bwfz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.20.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid65a4f398e1", MAC:"0a:90:f3:f6:e1:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:27.748928 env[1306]: 2025-07-16 12:33:27.737 [INFO][3431] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4320fc2b6f5ff455e08a9debf95d8eaee87e019c6280fca90d39b62ca32e5198" Namespace="calico-system" Pod="whisker-5d8559b6d9-4bwfz" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-whisker--5d8559b6d9--4bwfz-eth0" Jul 16 12:33:27.752459 kernel: audit: type=1400 audit(1752669207.743:309): avc: denied { write } for pid=3509 comm="tee" name="fd" dev="proc" ino=30021 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 16 12:33:27.752539 kernel: audit: type=1300 audit(1752669207.682:308): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd67ebc7ce a2=241 a3=1b6 items=1 ppid=3456 pid=3506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:27.752567 kernel: audit: type=1307 audit(1752669207.682:308): cwd="/etc/service/enabled/bird/log" Jul 16 12:33:27.682000 audit: CWD cwd="/etc/service/enabled/bird/log" Jul 16 12:33:27.682000 audit: PATH item=0 name="/dev/fd/63" inode=29968 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:33:27.756612 kernel: audit: type=1302 
audit(1752669207.682:308): item=0 name="/dev/fd/63" inode=29968 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:33:27.682000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 16 12:33:27.743000 audit[3513]: AVC avc: denied { write } for pid=3513 comm="tee" name="fd" dev="proc" ino=29201 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 16 12:33:27.743000 audit[3513]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffeaed507be a2=241 a3=1b6 items=1 ppid=3464 pid=3513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:27.743000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jul 16 12:33:27.743000 audit: PATH item=0 name="/dev/fd/63" inode=29985 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:33:27.743000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 16 12:33:27.743000 audit[3509]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcb90e17cd a2=241 a3=1b6 items=1 ppid=3465 pid=3509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:27.743000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jul 16 12:33:27.743000 audit: PATH item=0 name="/dev/fd/63" inode=29982 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:33:27.743000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 16 12:33:27.769000 audit[3536]: AVC avc: denied { write } for pid=3536 comm="tee" name="fd" dev="proc" ino=30030 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 16 12:33:27.769000 audit[3536]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff929d77cf a2=241 a3=1b6 items=1 ppid=3467 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:27.769000 audit: CWD cwd="/etc/service/enabled/cni/log" Jul 16 12:33:27.769000 audit: PATH item=0 name="/dev/fd/63" inode=29210 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:33:27.769000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 16 12:33:27.798000 audit[3533]: AVC avc: denied { write } for pid=3533 comm="tee" name="fd" dev="proc" ino=30033 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 16 
12:33:27.798000 audit[3533]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffc38c67cd a2=241 a3=1b6 items=1 ppid=3458 pid=3533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:27.798000 audit: CWD cwd="/etc/service/enabled/confd/log" Jul 16 12:33:27.798000 audit: PATH item=0 name="/dev/fd/63" inode=29204 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:33:27.798000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 16 12:33:27.810995 env[1306]: time="2025-07-16T12:33:27.810955730Z" level=info msg="StopPodSandbox for \"09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a\"" Jul 16 12:33:27.811000 audit[3539]: AVC avc: denied { write } for pid=3539 comm="tee" name="fd" dev="proc" ino=29225 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 16 12:33:27.811000 audit[3539]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffaacc27cd a2=241 a3=1b6 items=1 ppid=3466 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:27.811000 audit: CWD cwd="/etc/service/enabled/felix/log" Jul 16 12:33:27.811000 audit: PATH item=0 name="/dev/fd/63" inode=30032 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:33:27.811000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 16 12:33:27.829462 kubelet[2183]: I0716 12:33:27.829181 2183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ddab6da-2ee6-41f4-a22e-8303dfd78061" path="/var/lib/kubelet/pods/7ddab6da-2ee6-41f4-a22e-8303dfd78061/volumes" Jul 16 12:33:27.836641 env[1306]: time="2025-07-16T12:33:27.836578459Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 16 12:33:27.836767 env[1306]: time="2025-07-16T12:33:27.836628180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 16 12:33:27.836767 env[1306]: time="2025-07-16T12:33:27.836640225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 16 12:33:27.837098 env[1306]: time="2025-07-16T12:33:27.837067011Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4320fc2b6f5ff455e08a9debf95d8eaee87e019c6280fca90d39b62ca32e5198 pid=3550 runtime=io.containerd.runc.v2 Jul 16 12:33:27.985837 env[1306]: time="2025-07-16T12:33:27.985798409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d8559b6d9-4bwfz,Uid:33ac34c1-7a4a-4840-aa0e-6ff9fb2aeba6,Namespace:calico-system,Attempt:0,} returns sandbox id \"4320fc2b6f5ff455e08a9debf95d8eaee87e019c6280fca90d39b62ca32e5198\"" Jul 16 12:33:27.991882 env[1306]: time="2025-07-16T12:33:27.991818924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 16 12:33:28.113869 env[1306]: 2025-07-16 12:33:28.011 [INFO][3578] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" Jul 16 12:33:28.113869 env[1306]: 2025-07-16 12:33:28.012 [INFO][3578] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" iface="eth0" netns="/var/run/netns/cni-8f8573e2-bf6c-881a-448e-82df07c99453" Jul 16 12:33:28.113869 env[1306]: 2025-07-16 12:33:28.012 [INFO][3578] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" iface="eth0" netns="/var/run/netns/cni-8f8573e2-bf6c-881a-448e-82df07c99453" Jul 16 12:33:28.113869 env[1306]: 2025-07-16 12:33:28.012 [INFO][3578] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" iface="eth0" netns="/var/run/netns/cni-8f8573e2-bf6c-881a-448e-82df07c99453" Jul 16 12:33:28.113869 env[1306]: 2025-07-16 12:33:28.012 [INFO][3578] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" Jul 16 12:33:28.113869 env[1306]: 2025-07-16 12:33:28.012 [INFO][3578] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" Jul 16 12:33:28.113869 env[1306]: 2025-07-16 12:33:28.087 [INFO][3602] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" HandleID="k8s-pod-network.09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--kube--controllers--64fd67f98d--kkwfb-eth0" Jul 16 12:33:28.113869 env[1306]: 2025-07-16 12:33:28.087 [INFO][3602] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:28.113869 env[1306]: 2025-07-16 12:33:28.087 [INFO][3602] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 12:33:28.113869 env[1306]: 2025-07-16 12:33:28.099 [WARNING][3602] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" HandleID="k8s-pod-network.09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--kube--controllers--64fd67f98d--kkwfb-eth0" Jul 16 12:33:28.113869 env[1306]: 2025-07-16 12:33:28.099 [INFO][3602] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" HandleID="k8s-pod-network.09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--kube--controllers--64fd67f98d--kkwfb-eth0" Jul 16 12:33:28.113869 env[1306]: 2025-07-16 12:33:28.104 [INFO][3602] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 12:33:28.113869 env[1306]: 2025-07-16 12:33:28.109 [INFO][3578] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" Jul 16 12:33:28.117065 systemd[1]: run-netns-cni\x2d8f8573e2\x2dbf6c\x2d881a\x2d448e\x2d82df07c99453.mount: Deactivated successfully. Jul 16 12:33:28.119756 env[1306]: time="2025-07-16T12:33:28.119508568Z" level=info msg="TearDown network for sandbox \"09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a\" successfully" Jul 16 12:33:28.119756 env[1306]: time="2025-07-16T12:33:28.119548385Z" level=info msg="StopPodSandbox for \"09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a\" returns successfully" Jul 16 12:33:28.132934 env[1306]: time="2025-07-16T12:33:28.132897359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64fd67f98d-kkwfb,Uid:c753503c-8c32-4abd-adf0-0419aba85c9d,Namespace:calico-system,Attempt:1,}" Jul 16 12:33:28.334000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.334000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.334000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.334000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.334000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.334000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.334000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.334000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.334000 audit[3678]: AVC avc: denied { bpf 
} for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.334000 audit: BPF prog-id=10 op=LOAD Jul 16 12:33:28.334000 audit[3678]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe40b6b1a0 a2=98 a3=1fffffffffffffff items=0 ppid=3475 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.334000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 16 12:33:28.335000 audit: BPF prog-id=10 op=UNLOAD Jul 16 12:33:28.337000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.337000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.337000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.337000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.337000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.337000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.337000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.337000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.337000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.337000 audit: BPF prog-id=11 op=LOAD Jul 16 12:33:28.337000 audit[3678]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe40b6b080 a2=94 a3=3 items=0 ppid=3475 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.337000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 16 12:33:28.338000 audit: 
BPF prog-id=11 op=UNLOAD Jul 16 12:33:28.340000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.340000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.340000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.340000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.340000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.340000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.340000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.340000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.340000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.340000 audit: BPF prog-id=12 op=LOAD Jul 16 12:33:28.340000 audit[3678]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe40b6b0c0 a2=94 a3=7ffe40b6b2a0 items=0 ppid=3475 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.340000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 16 12:33:28.341000 audit: BPF prog-id=12 op=UNLOAD Jul 16 12:33:28.341000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.341000 audit[3678]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffe40b6b190 a2=50 a3=a000000085 items=0 ppid=3475 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.341000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 16 12:33:28.352000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.352000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.352000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.352000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.352000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.352000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.352000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.352000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.352000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.352000 audit: BPF prog-id=13 op=LOAD Jul 16 12:33:28.352000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff051819f0 a2=98 a3=3 items=0 ppid=3475 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.352000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 16 12:33:28.352000 audit: BPF prog-id=13 op=UNLOAD Jul 16 12:33:28.353000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.353000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.353000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.353000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Jul 16 12:33:28.353000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.353000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.353000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.353000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.353000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.353000 audit: BPF prog-id=14 op=LOAD Jul 16 12:33:28.353000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff051817e0 a2=94 a3=54428f items=0 ppid=3475 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.353000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 16 12:33:28.353000 audit: BPF prog-id=14 op=UNLOAD Jul 16 12:33:28.353000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.353000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.353000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.353000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.353000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.353000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.353000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.353000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.353000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Jul 16 12:33:28.353000 audit: BPF prog-id=15 op=LOAD Jul 16 12:33:28.353000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff05181810 a2=94 a3=2 items=0 ppid=3475 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.353000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 16 12:33:28.353000 audit: BPF prog-id=15 op=UNLOAD Jul 16 12:33:28.390734 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic4b3a35b92c: link becomes ready Jul 16 12:33:28.390075 systemd-networkd[1084]: calic4b3a35b92c: Link UP Jul 16 12:33:28.390407 systemd-networkd[1084]: calic4b3a35b92c: Gained carrier Jul 16 12:33:28.410642 env[1306]: 2025-07-16 12:33:28.259 [INFO][3624] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--f25or.gb1.brightbox.com-k8s-calico--kube--controllers--64fd67f98d--kkwfb-eth0 calico-kube-controllers-64fd67f98d- calico-system c753503c-8c32-4abd-adf0-0419aba85c9d 901 0 2025-07-16 12:33:03 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64fd67f98d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-f25or.gb1.brightbox.com calico-kube-controllers-64fd67f98d-kkwfb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic4b3a35b92c [] [] }} ContainerID="ccc6093c7c10b84ca2b40d4f234646627043ba02038bddf016fe223c94915a21" Namespace="calico-system" Pod="calico-kube-controllers-64fd67f98d-kkwfb" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-calico--kube--controllers--64fd67f98d--kkwfb-" Jul 16 12:33:28.410642 env[1306]: 2025-07-16 12:33:28.259 [INFO][3624] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ccc6093c7c10b84ca2b40d4f234646627043ba02038bddf016fe223c94915a21" Namespace="calico-system" Pod="calico-kube-controllers-64fd67f98d-kkwfb" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-calico--kube--controllers--64fd67f98d--kkwfb-eth0" Jul 16 12:33:28.410642 env[1306]: 2025-07-16 12:33:28.316 [INFO][3666] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ccc6093c7c10b84ca2b40d4f234646627043ba02038bddf016fe223c94915a21" HandleID="k8s-pod-network.ccc6093c7c10b84ca2b40d4f234646627043ba02038bddf016fe223c94915a21" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--kube--controllers--64fd67f98d--kkwfb-eth0" Jul 16 12:33:28.410642 env[1306]: 2025-07-16 12:33:28.316 [INFO][3666] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ccc6093c7c10b84ca2b40d4f234646627043ba02038bddf016fe223c94915a21" HandleID="k8s-pod-network.ccc6093c7c10b84ca2b40d4f234646627043ba02038bddf016fe223c94915a21" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--kube--controllers--64fd67f98d--kkwfb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000251610), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-f25or.gb1.brightbox.com", "pod":"calico-kube-controllers-64fd67f98d-kkwfb", "timestamp":"2025-07-16 12:33:28.316622874 +0000 UTC"}, Hostname:"srv-f25or.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 12:33:28.410642 env[1306]: 2025-07-16 12:33:28.316 [INFO][3666] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:28.410642 env[1306]: 2025-07-16 12:33:28.316 [INFO][3666] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 12:33:28.410642 env[1306]: 2025-07-16 12:33:28.316 [INFO][3666] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-f25or.gb1.brightbox.com' Jul 16 12:33:28.410642 env[1306]: 2025-07-16 12:33:28.323 [INFO][3666] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ccc6093c7c10b84ca2b40d4f234646627043ba02038bddf016fe223c94915a21" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:28.410642 env[1306]: 2025-07-16 12:33:28.327 [INFO][3666] ipam/ipam.go 394: Looking up existing affinities for host host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:28.410642 env[1306]: 2025-07-16 12:33:28.333 [INFO][3666] ipam/ipam.go 511: Trying affinity for 192.168.20.64/26 host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:28.410642 env[1306]: 2025-07-16 12:33:28.336 [INFO][3666] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.64/26 host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:28.410642 env[1306]: 2025-07-16 12:33:28.340 [INFO][3666] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.64/26 host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:28.410642 env[1306]: 2025-07-16 12:33:28.340 [INFO][3666] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.20.64/26 handle="k8s-pod-network.ccc6093c7c10b84ca2b40d4f234646627043ba02038bddf016fe223c94915a21" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:28.410642 env[1306]: 2025-07-16 12:33:28.343 [INFO][3666] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ccc6093c7c10b84ca2b40d4f234646627043ba02038bddf016fe223c94915a21 Jul 16 12:33:28.410642 env[1306]: 2025-07-16 12:33:28.347 [INFO][3666] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.20.64/26 handle="k8s-pod-network.ccc6093c7c10b84ca2b40d4f234646627043ba02038bddf016fe223c94915a21" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:28.410642 env[1306]: 2025-07-16 12:33:28.356 [INFO][3666] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.20.66/26] block=192.168.20.64/26 handle="k8s-pod-network.ccc6093c7c10b84ca2b40d4f234646627043ba02038bddf016fe223c94915a21" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:28.410642 env[1306]: 2025-07-16 12:33:28.356 [INFO][3666] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.66/26] handle="k8s-pod-network.ccc6093c7c10b84ca2b40d4f234646627043ba02038bddf016fe223c94915a21" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:28.410642 env[1306]: 2025-07-16 12:33:28.356 [INFO][3666] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 16 12:33:28.410642 env[1306]: 2025-07-16 12:33:28.356 [INFO][3666] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.66/26] IPv6=[] ContainerID="ccc6093c7c10b84ca2b40d4f234646627043ba02038bddf016fe223c94915a21" HandleID="k8s-pod-network.ccc6093c7c10b84ca2b40d4f234646627043ba02038bddf016fe223c94915a21" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--kube--controllers--64fd67f98d--kkwfb-eth0" Jul 16 12:33:28.411521 env[1306]: 2025-07-16 12:33:28.366 [INFO][3624] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ccc6093c7c10b84ca2b40d4f234646627043ba02038bddf016fe223c94915a21" Namespace="calico-system" Pod="calico-kube-controllers-64fd67f98d-kkwfb" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-calico--kube--controllers--64fd67f98d--kkwfb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-calico--kube--controllers--64fd67f98d--kkwfb-eth0", GenerateName:"calico-kube-controllers-64fd67f98d-", Namespace:"calico-system", SelfLink:"", UID:"c753503c-8c32-4abd-adf0-0419aba85c9d", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 33, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64fd67f98d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-64fd67f98d-kkwfb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.20.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic4b3a35b92c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:28.411521 env[1306]: 2025-07-16 12:33:28.366 [INFO][3624] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.66/32] ContainerID="ccc6093c7c10b84ca2b40d4f234646627043ba02038bddf016fe223c94915a21" Namespace="calico-system" Pod="calico-kube-controllers-64fd67f98d-kkwfb" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-calico--kube--controllers--64fd67f98d--kkwfb-eth0" Jul 16 12:33:28.411521 env[1306]: 2025-07-16 12:33:28.366 [INFO][3624] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4b3a35b92c ContainerID="ccc6093c7c10b84ca2b40d4f234646627043ba02038bddf016fe223c94915a21" Namespace="calico-system" Pod="calico-kube-controllers-64fd67f98d-kkwfb" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-calico--kube--controllers--64fd67f98d--kkwfb-eth0" Jul 16 12:33:28.411521 env[1306]: 2025-07-16 12:33:28.394 [INFO][3624] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ccc6093c7c10b84ca2b40d4f234646627043ba02038bddf016fe223c94915a21" Namespace="calico-system" Pod="calico-kube-controllers-64fd67f98d-kkwfb" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-calico--kube--controllers--64fd67f98d--kkwfb-eth0" Jul 16 12:33:28.411521 
env[1306]: 2025-07-16 12:33:28.394 [INFO][3624] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ccc6093c7c10b84ca2b40d4f234646627043ba02038bddf016fe223c94915a21" Namespace="calico-system" Pod="calico-kube-controllers-64fd67f98d-kkwfb" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-calico--kube--controllers--64fd67f98d--kkwfb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-calico--kube--controllers--64fd67f98d--kkwfb-eth0", GenerateName:"calico-kube-controllers-64fd67f98d-", Namespace:"calico-system", SelfLink:"", UID:"c753503c-8c32-4abd-adf0-0419aba85c9d", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 33, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64fd67f98d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"ccc6093c7c10b84ca2b40d4f234646627043ba02038bddf016fe223c94915a21", Pod:"calico-kube-controllers-64fd67f98d-kkwfb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.20.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic4b3a35b92c", MAC:"ee:de:87:c8:dc:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:28.411521 env[1306]: 2025-07-16 12:33:28.408 [INFO][3624] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ccc6093c7c10b84ca2b40d4f234646627043ba02038bddf016fe223c94915a21" Namespace="calico-system" Pod="calico-kube-controllers-64fd67f98d-kkwfb" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-calico--kube--controllers--64fd67f98d--kkwfb-eth0" Jul 16 12:33:28.432694 env[1306]: time="2025-07-16T12:33:28.423099298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 16 12:33:28.432694 env[1306]: time="2025-07-16T12:33:28.423143073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 16 12:33:28.432694 env[1306]: time="2025-07-16T12:33:28.423153592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 16 12:33:28.432694 env[1306]: time="2025-07-16T12:33:28.423325566Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ccc6093c7c10b84ca2b40d4f234646627043ba02038bddf016fe223c94915a21 pid=3694 runtime=io.containerd.runc.v2 Jul 16 12:33:28.494444 env[1306]: time="2025-07-16T12:33:28.494383319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64fd67f98d-kkwfb,Uid:c753503c-8c32-4abd-adf0-0419aba85c9d,Namespace:calico-system,Attempt:1,} returns sandbox id \"ccc6093c7c10b84ca2b40d4f234646627043ba02038bddf016fe223c94915a21\"" Jul 16 12:33:28.526609 systemd[1]: run-containerd-runc-k8s.io-702e814bc9b28efb7a610e8566f5593b10dc3c0a78903f8443fc9c4dc85c8216-runc.rFbWx4.mount: Deactivated successfully. Jul 16 12:33:28.544000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.544000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.544000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.544000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.544000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.544000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.544000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.544000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.544000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.544000 audit: BPF prog-id=16 op=LOAD Jul 16 12:33:28.544000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff051816d0 a2=94 a3=1 items=0 ppid=3475 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.544000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 16 12:33:28.544000 audit: BPF prog-id=16 op=UNLOAD Jul 16 12:33:28.544000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 
12:33:28.544000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fff051817a0 a2=50 a3=7fff05181880 items=0 ppid=3475 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.544000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff051816e0 a2=28 a3=0 items=0 ppid=3475 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.556000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff05181710 a2=28 a3=0 items=0 ppid=3475 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.556000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff05181620 a2=28 a3=0 items=0 ppid=3475 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.556000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff05181730 a2=28 a3=0 items=0 ppid=3475 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.556000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff05181710 a2=28 a3=0 items=0 ppid=3475 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 
12:33:28.556000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff05181700 a2=28 a3=0 items=0 ppid=3475 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.556000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff05181730 a2=28 a3=0 items=0 ppid=3475 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.556000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff05181710 a2=28 a3=0 items=0 ppid=3475 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.556000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff05181730 a2=28 a3=0 items=0 ppid=3475 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.556000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff05181700 a2=28 a3=0 items=0 ppid=3475 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.556000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff05181770 a2=28 a3=0 items=0 ppid=3475 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.556000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff05181520 a2=50 a3=1 items=0 ppid=3475 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.556000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit: BPF prog-id=17 op=LOAD Jul 16 12:33:28.556000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff05181520 a2=94 a3=5 items=0 ppid=3475 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.556000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 16 12:33:28.556000 audit: BPF prog-id=17 op=UNLOAD Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { perfmon } 
for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff051815d0 a2=50 a3=1 items=0 ppid=3475 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.556000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fff051816f0 a2=4 a3=38 items=0 ppid=3475 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.556000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.556000 audit[3680]: AVC avc: denied { confidentiality } for pid=3680 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 16 12:33:28.556000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff05181740 a2=94 a3=6 items=0 ppid=3475 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.556000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 16 12:33:28.557000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.557000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.557000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.557000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.557000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.557000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.557000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.557000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.557000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.557000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.557000 audit[3680]: AVC avc: denied { confidentiality } for pid=3680 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 16 12:33:28.557000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff05180ef0 a2=94 a3=88 items=0 ppid=3475 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.557000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 16 12:33:28.557000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 
12:33:28.557000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.557000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.557000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.557000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.557000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.557000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.557000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.557000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.557000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.557000 audit[3680]: AVC avc: denied { confidentiality } for pid=3680 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 16 12:33:28.557000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff05180ef0 a2=94 a3=88 items=0 ppid=3475 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.557000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 16 12:33:28.571000 audit[3733]: AVC avc: denied { bpf } for pid=3733 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.571000 audit[3733]: AVC avc: denied { bpf } for pid=3733 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.571000 audit[3733]: AVC avc: denied { perfmon } for pid=3733 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.571000 audit[3733]: AVC avc: denied { perfmon } for pid=3733 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.571000 audit[3733]: AVC avc: denied { perfmon } for pid=3733 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.571000 audit[3733]: AVC avc: denied { perfmon } for pid=3733 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.571000 audit[3733]: AVC avc: denied { perfmon } for pid=3733 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.571000 audit[3733]: AVC avc: denied { bpf } for pid=3733 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.571000 audit[3733]: AVC avc: denied { bpf } for pid=3733 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.571000 audit: BPF prog-id=18 op=LOAD Jul 16 12:33:28.571000 audit[3733]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd40b656e0 a2=98 a3=1999999999999999 items=0 ppid=3475 pid=3733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.571000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 16 12:33:28.571000 audit: BPF prog-id=18 op=UNLOAD Jul 16 12:33:28.572000 audit[3733]: AVC avc: denied { bpf } for pid=3733 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.572000 audit[3733]: AVC avc: denied { bpf } for pid=3733 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.572000 audit[3733]: AVC avc: denied { perfmon } for pid=3733 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.572000 audit[3733]: AVC avc: denied { perfmon } for pid=3733 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.572000 audit[3733]: AVC avc: denied { perfmon } for pid=3733 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.572000 audit[3733]: AVC avc: denied { perfmon } for pid=3733 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.572000 audit[3733]: AVC avc: denied { perfmon } for pid=3733 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.572000 audit[3733]: AVC avc: denied { bpf } for pid=3733 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.572000 audit[3733]: AVC avc: denied { bpf } for pid=3733 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.572000 audit: BPF prog-id=19 op=LOAD Jul 16 12:33:28.572000 audit[3733]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd40b655c0 a2=94 a3=ffff items=0 ppid=3475 pid=3733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.572000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 16 12:33:28.572000 audit: BPF prog-id=19 op=UNLOAD Jul 16 12:33:28.572000 audit[3733]: AVC avc: denied { bpf } for pid=3733 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.572000 audit[3733]: AVC avc: denied { bpf } for pid=3733 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.572000 audit[3733]: AVC avc: denied { perfmon } for pid=3733 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.572000 audit[3733]: AVC avc: denied { perfmon } for pid=3733 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.572000 audit[3733]: AVC avc: denied { perfmon } for pid=3733 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.572000 audit[3733]: AVC avc: denied { perfmon } for pid=3733 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.572000 audit[3733]: AVC avc: denied { perfmon } for pid=3733 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.572000 audit[3733]: AVC avc: denied { bpf } for pid=3733 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.572000 audit[3733]: AVC avc: denied { bpf } for pid=3733 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.572000 audit: BPF prog-id=20 op=LOAD Jul 16 12:33:28.572000 audit[3733]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd40b65600 a2=94 a3=7ffd40b657e0 items=0 ppid=3475 pid=3733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.572000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 16 12:33:28.573000 audit: BPF 
prog-id=20 op=UNLOAD Jul 16 12:33:28.661021 systemd-networkd[1084]: vxlan.calico: Link UP Jul 16 12:33:28.661029 systemd-networkd[1084]: vxlan.calico: Gained carrier Jul 16 12:33:28.698000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.698000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.698000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.698000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.698000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.698000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.698000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.698000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.698000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.698000 audit: BPF prog-id=21 op=LOAD Jul 16 12:33:28.698000 audit[3760]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffefa13ed20 a2=98 a3=0 items=0 ppid=3475 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.698000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 16 12:33:28.699000 audit: BPF prog-id=21 op=UNLOAD Jul 16 12:33:28.700000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.700000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.700000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.700000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.700000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.700000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.700000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.700000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.700000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.700000 audit: BPF prog-id=22 op=LOAD Jul 16 12:33:28.700000 audit[3760]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffefa13eb30 a2=94 a3=54428f items=0 ppid=3475 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.700000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 16 12:33:28.700000 audit: BPF prog-id=22 op=UNLOAD Jul 16 12:33:28.700000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.700000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.700000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.700000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.700000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.700000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.700000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.700000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.700000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.700000 audit: BPF prog-id=23 op=LOAD Jul 16 12:33:28.700000 audit[3760]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffefa13eb60 a2=94 a3=2 items=0 ppid=3475 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.700000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 16 12:33:28.701000 audit: BPF prog-id=23 op=UNLOAD Jul 16 12:33:28.701000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.701000 audit[3760]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffefa13ea30 a2=28 a3=0 items=0 ppid=3475 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.701000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 16 12:33:28.702000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.702000 audit[3760]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffefa13ea60 a2=28 a3=0 items=0 ppid=3475 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.702000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 16 12:33:28.702000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.702000 audit[3760]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffefa13e970 a2=28 a3=0 items=0 ppid=3475 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.702000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 16 12:33:28.702000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.702000 audit[3760]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffefa13ea80 a2=28 a3=0 items=0 ppid=3475 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.702000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 16 12:33:28.703000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.703000 audit[3760]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffefa13ea60 a2=28 a3=0 items=0 ppid=3475 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.703000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 16 12:33:28.703000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.703000 audit[3760]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffefa13ea50 a2=28 a3=0 items=0 ppid=3475 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.703000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 16 12:33:28.703000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.703000 audit[3760]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffefa13ea80 a2=28 a3=0 items=0 ppid=3475 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.703000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 16 12:33:28.703000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.703000 audit[3760]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffefa13ea60 a2=28 a3=0 items=0 ppid=3475 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.703000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 16 12:33:28.703000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.703000 audit[3760]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffefa13ea80 a2=28 a3=0 items=0 ppid=3475 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.703000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 16 12:33:28.704000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.704000 audit[3760]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffefa13ea50 a2=28 a3=0 items=0 ppid=3475 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.704000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 16 12:33:28.704000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.704000 audit[3760]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffefa13eac0 a2=28 a3=0 items=0 ppid=3475 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.704000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 16 12:33:28.704000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.704000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.704000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.704000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.704000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.704000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.704000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.704000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.704000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.704000 audit: BPF prog-id=24 op=LOAD Jul 16 12:33:28.704000 audit[3760]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffefa13e930 a2=94 a3=0 items=0 ppid=3475 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.704000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 16 12:33:28.705000 audit: BPF prog-id=24 op=UNLOAD Jul 16 12:33:28.709000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.709000 audit[3760]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffefa13e920 a2=50 a3=2800 items=0 ppid=3475 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.709000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 16 12:33:28.709000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.709000 audit[3760]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffefa13e920 a2=50 a3=2800 items=0 ppid=3475 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.709000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 16 12:33:28.709000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.709000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.709000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.709000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.709000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.709000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.709000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.709000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.709000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.709000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.709000 audit: BPF prog-id=25 op=LOAD Jul 16 12:33:28.709000 audit[3760]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffefa13e140 a2=94 a3=2 items=0 ppid=3475 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.709000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 16 12:33:28.709000 audit: BPF prog-id=25 op=UNLOAD Jul 16 12:33:28.709000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.709000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.709000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.709000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Jul 16 12:33:28.709000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.709000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.709000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.709000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.709000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.709000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.709000 audit: BPF prog-id=26 op=LOAD Jul 16 12:33:28.709000 audit[3760]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffefa13e240 a2=94 a3=30 items=0 ppid=3475 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.709000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 16 12:33:28.738000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.738000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.738000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.738000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.738000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.738000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.738000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.738000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.738000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.738000 audit: BPF prog-id=27 op=LOAD Jul 16 12:33:28.738000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd75651eb0 a2=98 a3=0 items=0 ppid=3475 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.738000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 16 12:33:28.739000 audit: BPF prog-id=27 op=UNLOAD Jul 16 12:33:28.739000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.739000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.739000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.739000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.739000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.739000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.739000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.739000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.739000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.739000 audit: BPF prog-id=28 op=LOAD Jul 16 12:33:28.739000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd75651ca0 a2=94 a3=54428f items=0 ppid=3475 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.739000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 16 
12:33:28.740000 audit: BPF prog-id=28 op=UNLOAD Jul 16 12:33:28.740000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.740000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.740000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.740000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.740000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.740000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.740000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.740000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.740000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.740000 audit: BPF prog-id=29 op=LOAD Jul 16 12:33:28.740000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd75651cd0 a2=94 a3=2 items=0 ppid=3475 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.740000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 16 12:33:28.740000 audit: BPF prog-id=29 op=UNLOAD Jul 16 12:33:28.874000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.874000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.874000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.874000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.874000 audit[3774]: AVC avc: denied { 
perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.874000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.874000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.874000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.874000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.874000 audit: BPF prog-id=30 op=LOAD Jul 16 12:33:28.874000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd75651b90 a2=94 a3=1 items=0 ppid=3475 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.874000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 16 12:33:28.875000 audit: BPF prog-id=30 op=UNLOAD Jul 16 12:33:28.875000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.875000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffd75651c60 a2=50 a3=7ffd75651d40 items=0 ppid=3475 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.875000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd75651ba0 a2=28 a3=0 items=0 ppid=3475 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.887000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 
a1=7ffd75651bd0 a2=28 a3=0 items=0 ppid=3475 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.887000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd75651ae0 a2=28 a3=0 items=0 ppid=3475 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.887000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd75651bf0 a2=28 a3=0 items=0 ppid=3475 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.887000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd75651bd0 a2=28 a3=0 items=0 ppid=3475 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.887000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd75651bc0 a2=28 a3=0 items=0 ppid=3475 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.887000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 
Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd75651bf0 a2=28 a3=0 items=0 ppid=3475 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.887000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd75651bd0 a2=28 a3=0 items=0 ppid=3475 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.887000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd75651bf0 a2=28 a3=0 items=0 ppid=3475 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.887000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd75651bc0 a2=28 a3=0 items=0 ppid=3475 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.887000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd75651c30 a2=28 a3=0 items=0 ppid=3475 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.887000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd756519e0 a2=50 a3=1 items=0 ppid=3475 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.887000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit: BPF prog-id=31 op=LOAD Jul 16 12:33:28.887000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd756519e0 a2=94 a3=5 items=0 ppid=3475 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.887000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 16 12:33:28.887000 audit: BPF prog-id=31 op=UNLOAD Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd75651a90 a2=50 a3=1 items=0 ppid=3475 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.887000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffd75651bb0 a2=4 a3=38 items=0 ppid=3475 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.887000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: AVC avc: 
denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.887000 audit[3774]: AVC avc: denied { confidentiality } for pid=3774 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 16 12:33:28.887000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd75651c00 a2=94 a3=6 items=0 ppid=3475 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.887000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 16 12:33:28.888000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.888000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.888000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.888000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.888000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.888000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.888000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.888000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.888000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.888000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.888000 audit[3774]: AVC avc: denied { confidentiality } for pid=3774 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 16 12:33:28.888000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd756513b0 a2=94 a3=88 items=0 ppid=3475 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.888000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 16 12:33:28.888000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.888000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.888000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.888000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.888000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.888000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.888000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.888000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.888000 audit[3774]: AVC avc: denied { perfmon } for pid=3774 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.888000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.888000 audit[3774]: AVC avc: denied { confidentiality } for pid=3774 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 16 12:33:28.888000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd756513b0 a2=94 a3=88 items=0 ppid=3475 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.888000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 16 12:33:28.888000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.888000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd75652de0 a2=10 a3=f8f00800 items=0 ppid=3475 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.888000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 16 12:33:28.888000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.888000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd75652c80 a2=10 a3=3 items=0 ppid=3475 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.888000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 16 12:33:28.889000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.889000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd75652c20 a2=10 a3=3 items=0 ppid=3475 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.889000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 16 12:33:28.889000 audit[3774]: AVC avc: denied { bpf } for pid=3774 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 16 12:33:28.889000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd75652c20 a2=10 a3=7 items=0 ppid=3475 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:28.889000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 16 12:33:28.897000 audit: BPF prog-id=26 op=UNLOAD Jul 16 12:33:29.005000 audit[3798]: NETFILTER_CFG table=mangle:101 family=2 entries=16 op=nft_register_chain pid=3798 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 16 12:33:29.005000 audit[3798]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe37f0dac0 a2=0 a3=7ffe37f0daac items=0 ppid=3475 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:29.005000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 16 12:33:29.017000 audit[3799]: NETFILTER_CFG table=nat:102 family=2 entries=15 op=nft_register_chain pid=3799 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 16 12:33:29.017000 audit[3799]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffff76cd1b0 a2=0 a3=7ffff76cd19c items=0 ppid=3475 pid=3799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:29.017000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 16 12:33:29.021000 audit[3797]: NETFILTER_CFG table=raw:103 family=2 entries=21 op=nft_register_chain pid=3797 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 16 12:33:29.021000 audit[3797]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd9159cb10 a2=0 a3=7ffd9159cafc items=0 ppid=3475 pid=3797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:29.021000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 16 12:33:29.029000 audit[3801]: NETFILTER_CFG table=filter:104 family=2 entries=122 op=nft_register_chain pid=3801 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 16 12:33:29.029000 audit[3801]: SYSCALL arch=c000003e syscall=46 success=yes exit=69792 a0=3 a1=7ffed4d68e60 a2=0 a3=7ffed4d68e4c items=0 ppid=3475 pid=3801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:29.029000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 16 12:33:29.431921 systemd-networkd[1084]: calid65a4f398e1: Gained IPv6LL Jul 16 12:33:29.628163 env[1306]: time="2025-07-16T12:33:29.628066764Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:29.631089 env[1306]: time="2025-07-16T12:33:29.631018867Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:29.635772 env[1306]: time="2025-07-16T12:33:29.635708661Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
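The PROCTITLE values in the audit records above are the audited process's argv, hex-encoded with NUL separators. A minimal Python sketch for decoding them (the hex strings are copied verbatim from the pid 3760 bpftool event and the iptables-nft-re events above; the helper name is illustrative):

    def decode_proctitle(hex_argv: str) -> str:
        # Hex-decode the PROCTITLE payload and join the NUL-separated argv entries with spaces.
        return bytes.fromhex(hex_argv).replace(b"\x00", b" ").decode()

    # PROCTITLE attached to the pid 3760 record (BPF prog-id=26 op=LOAD):
    print(decode_proctitle("627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470"))
    # -> bpftool prog load /usr/lib/calico/bpf/filter.o /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp

    # PROCTITLE attached to the iptables-nft-re records (NETFILTER_CFG tables mangle/nat/raw/filter):
    print(decode_proctitle("69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030"))
    # -> iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000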
Jul 16 12:33:29.639490 env[1306]: time="2025-07-16T12:33:29.639442808Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:29.641101 env[1306]: time="2025-07-16T12:33:29.641040382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 16 12:33:29.647806 env[1306]: time="2025-07-16T12:33:29.647769993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 16 12:33:29.649644 env[1306]: time="2025-07-16T12:33:29.649558406Z" level=info msg="CreateContainer within sandbox \"4320fc2b6f5ff455e08a9debf95d8eaee87e019c6280fca90d39b62ca32e5198\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 16 12:33:29.660295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3435513795.mount: Deactivated successfully. Jul 16 12:33:29.671038 env[1306]: time="2025-07-16T12:33:29.670941729Z" level=info msg="CreateContainer within sandbox \"4320fc2b6f5ff455e08a9debf95d8eaee87e019c6280fca90d39b62ca32e5198\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"5d1f0f7d02facdfece5580eb4571013cdfd72414b9e6e592df92f45f4469475d\"" Jul 16 12:33:29.675130 env[1306]: time="2025-07-16T12:33:29.672389791Z" level=info msg="StartContainer for \"5d1f0f7d02facdfece5580eb4571013cdfd72414b9e6e592df92f45f4469475d\"" Jul 16 12:33:29.784878 env[1306]: time="2025-07-16T12:33:29.780829177Z" level=info msg="StartContainer for \"5d1f0f7d02facdfece5580eb4571013cdfd72414b9e6e592df92f45f4469475d\" returns successfully" Jul 16 12:33:29.802382 env[1306]: time="2025-07-16T12:33:29.802310402Z" level=info msg="StopPodSandbox for \"dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252\"" Jul 16 12:33:29.940842 env[1306]: 2025-07-16 12:33:29.886 [INFO][3856] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" Jul 16 12:33:29.940842 env[1306]: 2025-07-16 12:33:29.887 [INFO][3856] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" iface="eth0" netns="/var/run/netns/cni-ae36e405-5e63-2b34-1a78-a690b3fa9ca5" Jul 16 12:33:29.940842 env[1306]: 2025-07-16 12:33:29.887 [INFO][3856] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" iface="eth0" netns="/var/run/netns/cni-ae36e405-5e63-2b34-1a78-a690b3fa9ca5" Jul 16 12:33:29.940842 env[1306]: 2025-07-16 12:33:29.888 [INFO][3856] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" iface="eth0" netns="/var/run/netns/cni-ae36e405-5e63-2b34-1a78-a690b3fa9ca5" Jul 16 12:33:29.940842 env[1306]: 2025-07-16 12:33:29.888 [INFO][3856] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" Jul 16 12:33:29.940842 env[1306]: 2025-07-16 12:33:29.888 [INFO][3856] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" Jul 16 12:33:29.940842 env[1306]: 2025-07-16 12:33:29.925 [INFO][3866] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" HandleID="k8s-pod-network.dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" Workload="srv--f25or.gb1.brightbox.com-k8s-goldmane--58fd7646b9--qgcq4-eth0" Jul 16 12:33:29.940842 env[1306]: 2025-07-16 12:33:29.925 [INFO][3866] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:29.940842 env[1306]: 2025-07-16 12:33:29.925 [INFO][3866] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 12:33:29.940842 env[1306]: 2025-07-16 12:33:29.934 [WARNING][3866] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" HandleID="k8s-pod-network.dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" Workload="srv--f25or.gb1.brightbox.com-k8s-goldmane--58fd7646b9--qgcq4-eth0" Jul 16 12:33:29.940842 env[1306]: 2025-07-16 12:33:29.934 [INFO][3866] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" HandleID="k8s-pod-network.dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" Workload="srv--f25or.gb1.brightbox.com-k8s-goldmane--58fd7646b9--qgcq4-eth0" Jul 16 12:33:29.940842 env[1306]: 2025-07-16 12:33:29.936 [INFO][3866] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 12:33:29.940842 env[1306]: 2025-07-16 12:33:29.938 [INFO][3856] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" Jul 16 12:33:29.943940 env[1306]: time="2025-07-16T12:33:29.943861605Z" level=info msg="TearDown network for sandbox \"dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252\" successfully" Jul 16 12:33:29.944192 env[1306]: time="2025-07-16T12:33:29.944128611Z" level=info msg="StopPodSandbox for \"dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252\" returns successfully" Jul 16 12:33:29.946145 env[1306]: time="2025-07-16T12:33:29.946083530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-qgcq4,Uid:ccdc85c1-7be8-495d-bb85-55cf9bd3cfe5,Namespace:calico-system,Attempt:1,}" Jul 16 12:33:29.948967 systemd[1]: run-netns-cni\x2dae36e405\x2d5e63\x2d2b34\x2d1a78\x2da690b3fa9ca5.mount: Deactivated successfully. 
Jul 16 12:33:30.163995 systemd-networkd[1084]: cali4eb1dfe2f64: Link UP Jul 16 12:33:30.172899 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 16 12:33:30.173517 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali4eb1dfe2f64: link becomes ready Jul 16 12:33:30.178447 systemd-networkd[1084]: cali4eb1dfe2f64: Gained carrier Jul 16 12:33:30.199528 env[1306]: 2025-07-16 12:33:30.015 [INFO][3873] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--f25or.gb1.brightbox.com-k8s-goldmane--58fd7646b9--qgcq4-eth0 goldmane-58fd7646b9- calico-system ccdc85c1-7be8-495d-bb85-55cf9bd3cfe5 914 0 2025-07-16 12:33:03 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-f25or.gb1.brightbox.com goldmane-58fd7646b9-qgcq4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4eb1dfe2f64 [] [] }} ContainerID="1767a3b9c74a720b2daf2b4d9bf441b6ff95e6e9c647a6f148671c4249ddd377" Namespace="calico-system" Pod="goldmane-58fd7646b9-qgcq4" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-goldmane--58fd7646b9--qgcq4-" Jul 16 12:33:30.199528 env[1306]: 2025-07-16 12:33:30.015 [INFO][3873] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1767a3b9c74a720b2daf2b4d9bf441b6ff95e6e9c647a6f148671c4249ddd377" Namespace="calico-system" Pod="goldmane-58fd7646b9-qgcq4" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-goldmane--58fd7646b9--qgcq4-eth0" Jul 16 12:33:30.199528 env[1306]: 2025-07-16 12:33:30.078 [INFO][3886] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1767a3b9c74a720b2daf2b4d9bf441b6ff95e6e9c647a6f148671c4249ddd377" HandleID="k8s-pod-network.1767a3b9c74a720b2daf2b4d9bf441b6ff95e6e9c647a6f148671c4249ddd377" Workload="srv--f25or.gb1.brightbox.com-k8s-goldmane--58fd7646b9--qgcq4-eth0" Jul 16 12:33:30.199528 env[1306]: 2025-07-16 12:33:30.078 [INFO][3886] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1767a3b9c74a720b2daf2b4d9bf441b6ff95e6e9c647a6f148671c4249ddd377" HandleID="k8s-pod-network.1767a3b9c74a720b2daf2b4d9bf441b6ff95e6e9c647a6f148671c4249ddd377" Workload="srv--f25or.gb1.brightbox.com-k8s-goldmane--58fd7646b9--qgcq4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-f25or.gb1.brightbox.com", "pod":"goldmane-58fd7646b9-qgcq4", "timestamp":"2025-07-16 12:33:30.078443843 +0000 UTC"}, Hostname:"srv-f25or.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 12:33:30.199528 env[1306]: 2025-07-16 12:33:30.078 [INFO][3886] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:30.199528 env[1306]: 2025-07-16 12:33:30.079 [INFO][3886] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 16 12:33:30.199528 env[1306]: 2025-07-16 12:33:30.079 [INFO][3886] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-f25or.gb1.brightbox.com' Jul 16 12:33:30.199528 env[1306]: 2025-07-16 12:33:30.096 [INFO][3886] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1767a3b9c74a720b2daf2b4d9bf441b6ff95e6e9c647a6f148671c4249ddd377" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:30.199528 env[1306]: 2025-07-16 12:33:30.106 [INFO][3886] ipam/ipam.go 394: Looking up existing affinities for host host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:30.199528 env[1306]: 2025-07-16 12:33:30.116 [INFO][3886] ipam/ipam.go 511: Trying affinity for 192.168.20.64/26 host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:30.199528 env[1306]: 2025-07-16 12:33:30.127 [INFO][3886] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.64/26 host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:30.199528 env[1306]: 2025-07-16 12:33:30.131 [INFO][3886] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.64/26 host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:30.199528 env[1306]: 2025-07-16 12:33:30.131 [INFO][3886] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.20.64/26 handle="k8s-pod-network.1767a3b9c74a720b2daf2b4d9bf441b6ff95e6e9c647a6f148671c4249ddd377" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:30.199528 env[1306]: 2025-07-16 12:33:30.133 [INFO][3886] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1767a3b9c74a720b2daf2b4d9bf441b6ff95e6e9c647a6f148671c4249ddd377 Jul 16 12:33:30.199528 env[1306]: 2025-07-16 12:33:30.140 [INFO][3886] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.20.64/26 handle="k8s-pod-network.1767a3b9c74a720b2daf2b4d9bf441b6ff95e6e9c647a6f148671c4249ddd377" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:30.199528 env[1306]: 2025-07-16 12:33:30.150 [INFO][3886] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.20.67/26] block=192.168.20.64/26 handle="k8s-pod-network.1767a3b9c74a720b2daf2b4d9bf441b6ff95e6e9c647a6f148671c4249ddd377" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:30.199528 env[1306]: 2025-07-16 12:33:30.151 [INFO][3886] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.67/26] handle="k8s-pod-network.1767a3b9c74a720b2daf2b4d9bf441b6ff95e6e9c647a6f148671c4249ddd377" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:30.199528 env[1306]: 2025-07-16 12:33:30.151 [INFO][3886] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
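A quick illustrative check of the IPAM records above, not part of the log itself: the block 192.168.20.64/26 with confirmed affinity for host srv-f25or.gb1.brightbox.com spans 192.168.20.64 through 192.168.20.127, so the auto-assigned address 192.168.20.67/26 for goldmane-58fd7646b9-qgcq4 falls inside that block. A small Python sketch of the same arithmetic:

    import ipaddress

    block = ipaddress.ip_network("192.168.20.64/26")   # block claimed with affinity for this host
    addr = ipaddress.ip_address("192.168.20.67")       # address auto-assigned to the goldmane pod
    print(block.network_address, block.broadcast_address)  # 192.168.20.64 192.168.20.127
    print(addr in block)                                    # True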
Jul 16 12:33:30.199528 env[1306]: 2025-07-16 12:33:30.151 [INFO][3886] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.67/26] IPv6=[] ContainerID="1767a3b9c74a720b2daf2b4d9bf441b6ff95e6e9c647a6f148671c4249ddd377" HandleID="k8s-pod-network.1767a3b9c74a720b2daf2b4d9bf441b6ff95e6e9c647a6f148671c4249ddd377" Workload="srv--f25or.gb1.brightbox.com-k8s-goldmane--58fd7646b9--qgcq4-eth0" Jul 16 12:33:30.205707 env[1306]: 2025-07-16 12:33:30.154 [INFO][3873] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1767a3b9c74a720b2daf2b4d9bf441b6ff95e6e9c647a6f148671c4249ddd377" Namespace="calico-system" Pod="goldmane-58fd7646b9-qgcq4" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-goldmane--58fd7646b9--qgcq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-goldmane--58fd7646b9--qgcq4-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"ccdc85c1-7be8-495d-bb85-55cf9bd3cfe5", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 33, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-58fd7646b9-qgcq4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.20.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4eb1dfe2f64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:30.205707 env[1306]: 2025-07-16 12:33:30.155 [INFO][3873] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.67/32] ContainerID="1767a3b9c74a720b2daf2b4d9bf441b6ff95e6e9c647a6f148671c4249ddd377" Namespace="calico-system" Pod="goldmane-58fd7646b9-qgcq4" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-goldmane--58fd7646b9--qgcq4-eth0" Jul 16 12:33:30.205707 env[1306]: 2025-07-16 12:33:30.155 [INFO][3873] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4eb1dfe2f64 ContainerID="1767a3b9c74a720b2daf2b4d9bf441b6ff95e6e9c647a6f148671c4249ddd377" Namespace="calico-system" Pod="goldmane-58fd7646b9-qgcq4" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-goldmane--58fd7646b9--qgcq4-eth0" Jul 16 12:33:30.205707 env[1306]: 2025-07-16 12:33:30.180 [INFO][3873] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1767a3b9c74a720b2daf2b4d9bf441b6ff95e6e9c647a6f148671c4249ddd377" Namespace="calico-system" Pod="goldmane-58fd7646b9-qgcq4" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-goldmane--58fd7646b9--qgcq4-eth0" Jul 16 12:33:30.205707 env[1306]: 2025-07-16 12:33:30.180 [INFO][3873] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1767a3b9c74a720b2daf2b4d9bf441b6ff95e6e9c647a6f148671c4249ddd377" Namespace="calico-system" 
Pod="goldmane-58fd7646b9-qgcq4" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-goldmane--58fd7646b9--qgcq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-goldmane--58fd7646b9--qgcq4-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"ccdc85c1-7be8-495d-bb85-55cf9bd3cfe5", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 33, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"1767a3b9c74a720b2daf2b4d9bf441b6ff95e6e9c647a6f148671c4249ddd377", Pod:"goldmane-58fd7646b9-qgcq4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.20.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4eb1dfe2f64", MAC:"52:89:e8:59:ad:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:30.205707 env[1306]: 2025-07-16 12:33:30.193 [INFO][3873] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1767a3b9c74a720b2daf2b4d9bf441b6ff95e6e9c647a6f148671c4249ddd377" Namespace="calico-system" Pod="goldmane-58fd7646b9-qgcq4" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-goldmane--58fd7646b9--qgcq4-eth0" Jul 16 12:33:30.219935 env[1306]: time="2025-07-16T12:33:30.219854758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 16 12:33:30.219935 env[1306]: time="2025-07-16T12:33:30.219902050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 16 12:33:30.219935 env[1306]: time="2025-07-16T12:33:30.219913728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 16 12:33:30.220380 env[1306]: time="2025-07-16T12:33:30.220342194Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1767a3b9c74a720b2daf2b4d9bf441b6ff95e6e9c647a6f148671c4249ddd377 pid=3908 runtime=io.containerd.runc.v2 Jul 16 12:33:30.228000 audit[3925]: NETFILTER_CFG table=filter:105 family=2 entries=48 op=nft_register_chain pid=3925 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 16 12:33:30.228000 audit[3925]: SYSCALL arch=c000003e syscall=46 success=yes exit=26368 a0=3 a1=7fff259c7730 a2=0 a3=7fff259c771c items=0 ppid=3475 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:30.228000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 16 12:33:30.256903 systemd-networkd[1084]: calic4b3a35b92c: Gained IPv6LL Jul 16 12:33:30.311141 env[1306]: time="2025-07-16T12:33:30.311023860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-qgcq4,Uid:ccdc85c1-7be8-495d-bb85-55cf9bd3cfe5,Namespace:calico-system,Attempt:1,} returns sandbox id \"1767a3b9c74a720b2daf2b4d9bf441b6ff95e6e9c647a6f148671c4249ddd377\"" Jul 16 12:33:30.513219 systemd-networkd[1084]: vxlan.calico: Gained IPv6LL Jul 16 12:33:30.798634 env[1306]: time="2025-07-16T12:33:30.798425800Z" level=info msg="StopPodSandbox for \"73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2\"" Jul 16 12:33:30.800569 env[1306]: time="2025-07-16T12:33:30.800489945Z" level=info msg="StopPodSandbox for \"bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733\"" Jul 16 12:33:30.802384 env[1306]: time="2025-07-16T12:33:30.801421264Z" level=info msg="StopPodSandbox for \"805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b\"" Jul 16 12:33:31.051958 env[1306]: 2025-07-16 12:33:30.878 [INFO][3971] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" Jul 16 12:33:31.051958 env[1306]: 2025-07-16 12:33:30.878 [INFO][3971] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" iface="eth0" netns="/var/run/netns/cni-3e37ed49-2875-eaa6-9b47-14f86cb2d97d" Jul 16 12:33:31.051958 env[1306]: 2025-07-16 12:33:30.878 [INFO][3971] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" iface="eth0" netns="/var/run/netns/cni-3e37ed49-2875-eaa6-9b47-14f86cb2d97d" Jul 16 12:33:31.051958 env[1306]: 2025-07-16 12:33:30.880 [INFO][3971] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" iface="eth0" netns="/var/run/netns/cni-3e37ed49-2875-eaa6-9b47-14f86cb2d97d" Jul 16 12:33:31.051958 env[1306]: 2025-07-16 12:33:30.880 [INFO][3971] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" Jul 16 12:33:31.051958 env[1306]: 2025-07-16 12:33:30.880 [INFO][3971] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" Jul 16 12:33:31.051958 env[1306]: 2025-07-16 12:33:30.983 [INFO][3991] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" HandleID="k8s-pod-network.73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--f9x9w-eth0" Jul 16 12:33:31.051958 env[1306]: 2025-07-16 12:33:30.987 [INFO][3991] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:31.051958 env[1306]: 2025-07-16 12:33:30.987 [INFO][3991] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 12:33:31.051958 env[1306]: 2025-07-16 12:33:31.012 [WARNING][3991] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" HandleID="k8s-pod-network.73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--f9x9w-eth0" Jul 16 12:33:31.051958 env[1306]: 2025-07-16 12:33:31.012 [INFO][3991] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" HandleID="k8s-pod-network.73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--f9x9w-eth0" Jul 16 12:33:31.051958 env[1306]: 2025-07-16 12:33:31.015 [INFO][3991] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 12:33:31.051958 env[1306]: 2025-07-16 12:33:31.028 [INFO][3971] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" Jul 16 12:33:31.057205 systemd[1]: run-netns-cni\x2d3e37ed49\x2d2875\x2deaa6\x2d9b47\x2d14f86cb2d97d.mount: Deactivated successfully. Jul 16 12:33:31.059303 env[1306]: time="2025-07-16T12:33:31.058652651Z" level=info msg="TearDown network for sandbox \"73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2\" successfully" Jul 16 12:33:31.059435 env[1306]: time="2025-07-16T12:33:31.059414423Z" level=info msg="StopPodSandbox for \"73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2\" returns successfully" Jul 16 12:33:31.062146 env[1306]: time="2025-07-16T12:33:31.062111261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66fbfc9dbd-f9x9w,Uid:bf32dfb7-052c-4edd-885b-1571df3da4fa,Namespace:calico-apiserver,Attempt:1,}" Jul 16 12:33:31.068049 env[1306]: 2025-07-16 12:33:30.927 [INFO][3982] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" Jul 16 12:33:31.068049 env[1306]: 2025-07-16 12:33:30.933 [INFO][3982] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" iface="eth0" netns="/var/run/netns/cni-cd582081-2e24-555b-40f4-88d85fceafbb" Jul 16 12:33:31.068049 env[1306]: 2025-07-16 12:33:30.934 [INFO][3982] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" iface="eth0" netns="/var/run/netns/cni-cd582081-2e24-555b-40f4-88d85fceafbb" Jul 16 12:33:31.068049 env[1306]: 2025-07-16 12:33:30.934 [INFO][3982] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" iface="eth0" netns="/var/run/netns/cni-cd582081-2e24-555b-40f4-88d85fceafbb" Jul 16 12:33:31.068049 env[1306]: 2025-07-16 12:33:30.934 [INFO][3982] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" Jul 16 12:33:31.068049 env[1306]: 2025-07-16 12:33:30.934 [INFO][3982] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" Jul 16 12:33:31.068049 env[1306]: 2025-07-16 12:33:30.992 [INFO][4003] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" HandleID="k8s-pod-network.bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" Workload="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--psh6r-eth0" Jul 16 12:33:31.068049 env[1306]: 2025-07-16 12:33:30.992 [INFO][4003] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:31.068049 env[1306]: 2025-07-16 12:33:31.016 [INFO][4003] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 12:33:31.068049 env[1306]: 2025-07-16 12:33:31.038 [WARNING][4003] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" HandleID="k8s-pod-network.bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" Workload="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--psh6r-eth0" Jul 16 12:33:31.068049 env[1306]: 2025-07-16 12:33:31.038 [INFO][4003] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" HandleID="k8s-pod-network.bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" Workload="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--psh6r-eth0" Jul 16 12:33:31.068049 env[1306]: 2025-07-16 12:33:31.046 [INFO][4003] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 12:33:31.068049 env[1306]: 2025-07-16 12:33:31.062 [INFO][3982] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" Jul 16 12:33:31.072350 env[1306]: time="2025-07-16T12:33:31.072316743Z" level=info msg="TearDown network for sandbox \"bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733\" successfully" Jul 16 12:33:31.072524 env[1306]: time="2025-07-16T12:33:31.072503014Z" level=info msg="StopPodSandbox for \"bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733\" returns successfully" Jul 16 12:33:31.077311 env[1306]: time="2025-07-16T12:33:31.077278346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-psh6r,Uid:18b5ae42-266d-4760-b2fc-63a368dee70d,Namespace:kube-system,Attempt:1,}" Jul 16 12:33:31.082284 systemd[1]: run-netns-cni\x2dcd582081\x2d2e24\x2d555b\x2d40f4\x2d88d85fceafbb.mount: Deactivated successfully. Jul 16 12:33:31.120413 env[1306]: 2025-07-16 12:33:30.894 [INFO][3970] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" Jul 16 12:33:31.120413 env[1306]: 2025-07-16 12:33:30.895 [INFO][3970] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" iface="eth0" netns="/var/run/netns/cni-eae5c456-bbba-47fc-a022-4a27e98a6785" Jul 16 12:33:31.120413 env[1306]: 2025-07-16 12:33:30.896 [INFO][3970] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" iface="eth0" netns="/var/run/netns/cni-eae5c456-bbba-47fc-a022-4a27e98a6785" Jul 16 12:33:31.120413 env[1306]: 2025-07-16 12:33:30.896 [INFO][3970] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" iface="eth0" netns="/var/run/netns/cni-eae5c456-bbba-47fc-a022-4a27e98a6785" Jul 16 12:33:31.120413 env[1306]: 2025-07-16 12:33:30.896 [INFO][3970] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" Jul 16 12:33:31.120413 env[1306]: 2025-07-16 12:33:30.896 [INFO][3970] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" Jul 16 12:33:31.120413 env[1306]: 2025-07-16 12:33:31.081 [INFO][3994] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" HandleID="k8s-pod-network.805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" Workload="srv--f25or.gb1.brightbox.com-k8s-csi--node--driver--b22q6-eth0" Jul 16 12:33:31.120413 env[1306]: 2025-07-16 12:33:31.081 [INFO][3994] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:31.120413 env[1306]: 2025-07-16 12:33:31.081 [INFO][3994] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 12:33:31.120413 env[1306]: 2025-07-16 12:33:31.104 [WARNING][3994] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" HandleID="k8s-pod-network.805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" Workload="srv--f25or.gb1.brightbox.com-k8s-csi--node--driver--b22q6-eth0" Jul 16 12:33:31.120413 env[1306]: 2025-07-16 12:33:31.104 [INFO][3994] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" HandleID="k8s-pod-network.805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" Workload="srv--f25or.gb1.brightbox.com-k8s-csi--node--driver--b22q6-eth0" Jul 16 12:33:31.120413 env[1306]: 2025-07-16 12:33:31.106 [INFO][3994] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 12:33:31.120413 env[1306]: 2025-07-16 12:33:31.114 [INFO][3970] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" Jul 16 12:33:31.120413 env[1306]: time="2025-07-16T12:33:31.119978453Z" level=info msg="TearDown network for sandbox \"805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b\" successfully" Jul 16 12:33:31.120413 env[1306]: time="2025-07-16T12:33:31.120027801Z" level=info msg="StopPodSandbox for \"805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b\" returns successfully" Jul 16 12:33:31.154098 env[1306]: time="2025-07-16T12:33:31.153975939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b22q6,Uid:58f808a6-a7a4-4400-b1f3-561a7728fef5,Namespace:calico-system,Attempt:1,}" Jul 16 12:33:31.284812 systemd-networkd[1084]: cali4eb1dfe2f64: Gained IPv6LL Jul 16 12:33:31.386898 systemd-networkd[1084]: calie40ac9a092c: Link UP Jul 16 12:33:31.392886 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 16 12:33:31.392956 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie40ac9a092c: link becomes ready Jul 16 12:33:31.396296 systemd-networkd[1084]: calie40ac9a092c: Gained carrier Jul 16 12:33:31.434785 env[1306]: 2025-07-16 12:33:31.204 [INFO][4012] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--f9x9w-eth0 calico-apiserver-66fbfc9dbd- calico-apiserver bf32dfb7-052c-4edd-885b-1571df3da4fa 925 0 2025-07-16 12:33:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66fbfc9dbd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-f25or.gb1.brightbox.com calico-apiserver-66fbfc9dbd-f9x9w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie40ac9a092c [] [] }} ContainerID="e0f14ccce301fc96e52057e94187dbef234545cc28a99432219a16f15dd3b5ab" Namespace="calico-apiserver" Pod="calico-apiserver-66fbfc9dbd-f9x9w" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--f9x9w-" Jul 16 12:33:31.434785 env[1306]: 2025-07-16 12:33:31.204 [INFO][4012] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e0f14ccce301fc96e52057e94187dbef234545cc28a99432219a16f15dd3b5ab" Namespace="calico-apiserver" Pod="calico-apiserver-66fbfc9dbd-f9x9w" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--f9x9w-eth0" Jul 16 12:33:31.434785 env[1306]: 2025-07-16 12:33:31.308 [INFO][4044] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="e0f14ccce301fc96e52057e94187dbef234545cc28a99432219a16f15dd3b5ab" HandleID="k8s-pod-network.e0f14ccce301fc96e52057e94187dbef234545cc28a99432219a16f15dd3b5ab" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--f9x9w-eth0" Jul 16 12:33:31.434785 env[1306]: 2025-07-16 12:33:31.309 [INFO][4044] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e0f14ccce301fc96e52057e94187dbef234545cc28a99432219a16f15dd3b5ab" HandleID="k8s-pod-network.e0f14ccce301fc96e52057e94187dbef234545cc28a99432219a16f15dd3b5ab" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--f9x9w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000284630), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-f25or.gb1.brightbox.com", "pod":"calico-apiserver-66fbfc9dbd-f9x9w", "timestamp":"2025-07-16 12:33:31.307021515 +0000 UTC"}, Hostname:"srv-f25or.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 12:33:31.434785 env[1306]: 2025-07-16 12:33:31.310 [INFO][4044] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:31.434785 env[1306]: 2025-07-16 12:33:31.310 [INFO][4044] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 12:33:31.434785 env[1306]: 2025-07-16 12:33:31.310 [INFO][4044] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-f25or.gb1.brightbox.com' Jul 16 12:33:31.434785 env[1306]: 2025-07-16 12:33:31.322 [INFO][4044] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e0f14ccce301fc96e52057e94187dbef234545cc28a99432219a16f15dd3b5ab" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:31.434785 env[1306]: 2025-07-16 12:33:31.334 [INFO][4044] ipam/ipam.go 394: Looking up existing affinities for host host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:31.434785 env[1306]: 2025-07-16 12:33:31.344 [INFO][4044] ipam/ipam.go 511: Trying affinity for 192.168.20.64/26 host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:31.434785 env[1306]: 2025-07-16 12:33:31.348 [INFO][4044] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.64/26 host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:31.434785 env[1306]: 2025-07-16 12:33:31.359 [INFO][4044] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.64/26 host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:31.434785 env[1306]: 2025-07-16 12:33:31.360 [INFO][4044] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.20.64/26 handle="k8s-pod-network.e0f14ccce301fc96e52057e94187dbef234545cc28a99432219a16f15dd3b5ab" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:31.434785 env[1306]: 2025-07-16 12:33:31.361 [INFO][4044] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e0f14ccce301fc96e52057e94187dbef234545cc28a99432219a16f15dd3b5ab Jul 16 12:33:31.434785 env[1306]: 2025-07-16 12:33:31.367 [INFO][4044] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.20.64/26 handle="k8s-pod-network.e0f14ccce301fc96e52057e94187dbef234545cc28a99432219a16f15dd3b5ab" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:31.434785 env[1306]: 2025-07-16 12:33:31.376 [INFO][4044] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.20.68/26] block=192.168.20.64/26 handle="k8s-pod-network.e0f14ccce301fc96e52057e94187dbef234545cc28a99432219a16f15dd3b5ab" host="srv-f25or.gb1.brightbox.com" Jul 16 
12:33:31.434785 env[1306]: 2025-07-16 12:33:31.376 [INFO][4044] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.68/26] handle="k8s-pod-network.e0f14ccce301fc96e52057e94187dbef234545cc28a99432219a16f15dd3b5ab" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:31.434785 env[1306]: 2025-07-16 12:33:31.376 [INFO][4044] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 12:33:31.434785 env[1306]: 2025-07-16 12:33:31.376 [INFO][4044] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.68/26] IPv6=[] ContainerID="e0f14ccce301fc96e52057e94187dbef234545cc28a99432219a16f15dd3b5ab" HandleID="k8s-pod-network.e0f14ccce301fc96e52057e94187dbef234545cc28a99432219a16f15dd3b5ab" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--f9x9w-eth0" Jul 16 12:33:31.436346 env[1306]: 2025-07-16 12:33:31.379 [INFO][4012] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e0f14ccce301fc96e52057e94187dbef234545cc28a99432219a16f15dd3b5ab" Namespace="calico-apiserver" Pod="calico-apiserver-66fbfc9dbd-f9x9w" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--f9x9w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--f9x9w-eth0", GenerateName:"calico-apiserver-66fbfc9dbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"bf32dfb7-052c-4edd-885b-1571df3da4fa", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 33, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66fbfc9dbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-66fbfc9dbd-f9x9w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie40ac9a092c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:31.436346 env[1306]: 2025-07-16 12:33:31.379 [INFO][4012] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.68/32] ContainerID="e0f14ccce301fc96e52057e94187dbef234545cc28a99432219a16f15dd3b5ab" Namespace="calico-apiserver" Pod="calico-apiserver-66fbfc9dbd-f9x9w" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--f9x9w-eth0" Jul 16 12:33:31.436346 env[1306]: 2025-07-16 12:33:31.379 [INFO][4012] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie40ac9a092c ContainerID="e0f14ccce301fc96e52057e94187dbef234545cc28a99432219a16f15dd3b5ab" Namespace="calico-apiserver" Pod="calico-apiserver-66fbfc9dbd-f9x9w" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--f9x9w-eth0" Jul 16 12:33:31.436346 env[1306]: 2025-07-16 12:33:31.397 [INFO][4012] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e0f14ccce301fc96e52057e94187dbef234545cc28a99432219a16f15dd3b5ab" Namespace="calico-apiserver" Pod="calico-apiserver-66fbfc9dbd-f9x9w" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--f9x9w-eth0" Jul 16 12:33:31.436346 env[1306]: 2025-07-16 12:33:31.400 [INFO][4012] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e0f14ccce301fc96e52057e94187dbef234545cc28a99432219a16f15dd3b5ab" Namespace="calico-apiserver" Pod="calico-apiserver-66fbfc9dbd-f9x9w" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--f9x9w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--f9x9w-eth0", GenerateName:"calico-apiserver-66fbfc9dbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"bf32dfb7-052c-4edd-885b-1571df3da4fa", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 33, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66fbfc9dbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"e0f14ccce301fc96e52057e94187dbef234545cc28a99432219a16f15dd3b5ab", Pod:"calico-apiserver-66fbfc9dbd-f9x9w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie40ac9a092c", MAC:"d2:37:84:5e:a8:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:31.436346 env[1306]: 2025-07-16 12:33:31.432 [INFO][4012] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e0f14ccce301fc96e52057e94187dbef234545cc28a99432219a16f15dd3b5ab" Namespace="calico-apiserver" Pod="calico-apiserver-66fbfc9dbd-f9x9w" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--f9x9w-eth0" Jul 16 12:33:31.485111 env[1306]: time="2025-07-16T12:33:31.481866903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 16 12:33:31.485111 env[1306]: time="2025-07-16T12:33:31.481922922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 16 12:33:31.485111 env[1306]: time="2025-07-16T12:33:31.481945023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 16 12:33:31.485111 env[1306]: time="2025-07-16T12:33:31.482104893Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e0f14ccce301fc96e52057e94187dbef234545cc28a99432219a16f15dd3b5ab pid=4085 runtime=io.containerd.runc.v2 Jul 16 12:33:31.531535 systemd-networkd[1084]: cali76677a5b7d5: Link UP Jul 16 12:33:31.534800 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali76677a5b7d5: link becomes ready Jul 16 12:33:31.534355 systemd-networkd[1084]: cali76677a5b7d5: Gained carrier Jul 16 12:33:31.582009 env[1306]: 2025-07-16 12:33:31.233 [INFO][4020] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--psh6r-eth0 coredns-7c65d6cfc9- kube-system 18b5ae42-266d-4760-b2fc-63a368dee70d 927 0 2025-07-16 12:32:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-f25or.gb1.brightbox.com coredns-7c65d6cfc9-psh6r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali76677a5b7d5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="dd372419f2b0a3d8a2fa3c357c0669a78242e0ecbc508a279363fb61770448f8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-psh6r" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--psh6r-" Jul 16 12:33:31.582009 env[1306]: 2025-07-16 12:33:31.234 [INFO][4020] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dd372419f2b0a3d8a2fa3c357c0669a78242e0ecbc508a279363fb61770448f8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-psh6r" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--psh6r-eth0" Jul 16 12:33:31.582009 env[1306]: 2025-07-16 12:33:31.444 [INFO][4053] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dd372419f2b0a3d8a2fa3c357c0669a78242e0ecbc508a279363fb61770448f8" HandleID="k8s-pod-network.dd372419f2b0a3d8a2fa3c357c0669a78242e0ecbc508a279363fb61770448f8" Workload="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--psh6r-eth0" Jul 16 12:33:31.582009 env[1306]: 2025-07-16 12:33:31.444 [INFO][4053] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dd372419f2b0a3d8a2fa3c357c0669a78242e0ecbc508a279363fb61770448f8" HandleID="k8s-pod-network.dd372419f2b0a3d8a2fa3c357c0669a78242e0ecbc508a279363fb61770448f8" Workload="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--psh6r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6d0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-f25or.gb1.brightbox.com", "pod":"coredns-7c65d6cfc9-psh6r", "timestamp":"2025-07-16 12:33:31.444201496 +0000 UTC"}, Hostname:"srv-f25or.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 12:33:31.582009 env[1306]: 2025-07-16 12:33:31.444 [INFO][4053] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:31.582009 env[1306]: 2025-07-16 12:33:31.445 [INFO][4053] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 16 12:33:31.582009 env[1306]: 2025-07-16 12:33:31.445 [INFO][4053] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-f25or.gb1.brightbox.com' Jul 16 12:33:31.582009 env[1306]: 2025-07-16 12:33:31.461 [INFO][4053] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dd372419f2b0a3d8a2fa3c357c0669a78242e0ecbc508a279363fb61770448f8" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:31.582009 env[1306]: 2025-07-16 12:33:31.477 [INFO][4053] ipam/ipam.go 394: Looking up existing affinities for host host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:31.582009 env[1306]: 2025-07-16 12:33:31.482 [INFO][4053] ipam/ipam.go 511: Trying affinity for 192.168.20.64/26 host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:31.582009 env[1306]: 2025-07-16 12:33:31.484 [INFO][4053] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.64/26 host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:31.582009 env[1306]: 2025-07-16 12:33:31.487 [INFO][4053] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.64/26 host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:31.582009 env[1306]: 2025-07-16 12:33:31.487 [INFO][4053] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.20.64/26 handle="k8s-pod-network.dd372419f2b0a3d8a2fa3c357c0669a78242e0ecbc508a279363fb61770448f8" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:31.582009 env[1306]: 2025-07-16 12:33:31.489 [INFO][4053] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dd372419f2b0a3d8a2fa3c357c0669a78242e0ecbc508a279363fb61770448f8 Jul 16 12:33:31.582009 env[1306]: 2025-07-16 12:33:31.493 [INFO][4053] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.20.64/26 handle="k8s-pod-network.dd372419f2b0a3d8a2fa3c357c0669a78242e0ecbc508a279363fb61770448f8" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:31.582009 env[1306]: 2025-07-16 12:33:31.510 [INFO][4053] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.20.69/26] block=192.168.20.64/26 handle="k8s-pod-network.dd372419f2b0a3d8a2fa3c357c0669a78242e0ecbc508a279363fb61770448f8" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:31.582009 env[1306]: 2025-07-16 12:33:31.510 [INFO][4053] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.69/26] handle="k8s-pod-network.dd372419f2b0a3d8a2fa3c357c0669a78242e0ecbc508a279363fb61770448f8" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:31.582009 env[1306]: 2025-07-16 12:33:31.510 [INFO][4053] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 16 12:33:31.582009 env[1306]: 2025-07-16 12:33:31.510 [INFO][4053] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.69/26] IPv6=[] ContainerID="dd372419f2b0a3d8a2fa3c357c0669a78242e0ecbc508a279363fb61770448f8" HandleID="k8s-pod-network.dd372419f2b0a3d8a2fa3c357c0669a78242e0ecbc508a279363fb61770448f8" Workload="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--psh6r-eth0" Jul 16 12:33:31.582977 env[1306]: 2025-07-16 12:33:31.513 [INFO][4020] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dd372419f2b0a3d8a2fa3c357c0669a78242e0ecbc508a279363fb61770448f8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-psh6r" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--psh6r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--psh6r-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"18b5ae42-266d-4760-b2fc-63a368dee70d", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 32, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7c65d6cfc9-psh6r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali76677a5b7d5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:31.582977 env[1306]: 2025-07-16 12:33:31.513 [INFO][4020] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.69/32] ContainerID="dd372419f2b0a3d8a2fa3c357c0669a78242e0ecbc508a279363fb61770448f8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-psh6r" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--psh6r-eth0" Jul 16 12:33:31.582977 env[1306]: 2025-07-16 12:33:31.513 [INFO][4020] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali76677a5b7d5 ContainerID="dd372419f2b0a3d8a2fa3c357c0669a78242e0ecbc508a279363fb61770448f8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-psh6r" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--psh6r-eth0" Jul 16 12:33:31.582977 env[1306]: 2025-07-16 12:33:31.537 [INFO][4020] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dd372419f2b0a3d8a2fa3c357c0669a78242e0ecbc508a279363fb61770448f8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-psh6r" 
WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--psh6r-eth0" Jul 16 12:33:31.582977 env[1306]: 2025-07-16 12:33:31.537 [INFO][4020] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dd372419f2b0a3d8a2fa3c357c0669a78242e0ecbc508a279363fb61770448f8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-psh6r" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--psh6r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--psh6r-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"18b5ae42-266d-4760-b2fc-63a368dee70d", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 32, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"dd372419f2b0a3d8a2fa3c357c0669a78242e0ecbc508a279363fb61770448f8", Pod:"coredns-7c65d6cfc9-psh6r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali76677a5b7d5", MAC:"a6:d0:e3:bc:6c:de", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:31.582977 env[1306]: 2025-07-16 12:33:31.579 [INFO][4020] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dd372419f2b0a3d8a2fa3c357c0669a78242e0ecbc508a279363fb61770448f8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-psh6r" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--psh6r-eth0" Jul 16 12:33:31.617235 env[1306]: time="2025-07-16T12:33:31.617163381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 16 12:33:31.617443 env[1306]: time="2025-07-16T12:33:31.617420743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 16 12:33:31.617546 env[1306]: time="2025-07-16T12:33:31.617526727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 16 12:33:31.617849 env[1306]: time="2025-07-16T12:33:31.617821819Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd372419f2b0a3d8a2fa3c357c0669a78242e0ecbc508a279363fb61770448f8 pid=4125 runtime=io.containerd.runc.v2 Jul 16 12:33:31.664293 systemd[1]: run-netns-cni\x2deae5c456\x2dbbba\x2d47fc\x2da022\x2d4a27e98a6785.mount: Deactivated successfully. Jul 16 12:33:31.690170 systemd[1]: run-containerd-runc-k8s.io-dd372419f2b0a3d8a2fa3c357c0669a78242e0ecbc508a279363fb61770448f8-runc.51bxSw.mount: Deactivated successfully. Jul 16 12:33:31.739907 systemd-networkd[1084]: calie965d0ee399: Link UP Jul 16 12:33:31.742861 systemd-networkd[1084]: calie965d0ee399: Gained carrier Jul 16 12:33:31.747689 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie965d0ee399: link becomes ready Jul 16 12:33:31.789126 env[1306]: time="2025-07-16T12:33:31.789084546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-psh6r,Uid:18b5ae42-266d-4760-b2fc-63a368dee70d,Namespace:kube-system,Attempt:1,} returns sandbox id \"dd372419f2b0a3d8a2fa3c357c0669a78242e0ecbc508a279363fb61770448f8\"" Jul 16 12:33:31.791203 env[1306]: 2025-07-16 12:33:31.329 [INFO][4036] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--f25or.gb1.brightbox.com-k8s-csi--node--driver--b22q6-eth0 csi-node-driver- calico-system 58f808a6-a7a4-4400-b1f3-561a7728fef5 926 0 2025-07-16 12:33:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-f25or.gb1.brightbox.com csi-node-driver-b22q6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie965d0ee399 [] [] }} ContainerID="192ddec5c7acb557cc005567271e89429e36e491ba80a9ff91aebc582b9258fa" Namespace="calico-system" Pod="csi-node-driver-b22q6" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-csi--node--driver--b22q6-" Jul 16 12:33:31.791203 env[1306]: 2025-07-16 12:33:31.331 [INFO][4036] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="192ddec5c7acb557cc005567271e89429e36e491ba80a9ff91aebc582b9258fa" Namespace="calico-system" Pod="csi-node-driver-b22q6" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-csi--node--driver--b22q6-eth0" Jul 16 12:33:31.791203 env[1306]: 2025-07-16 12:33:31.466 [INFO][4066] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="192ddec5c7acb557cc005567271e89429e36e491ba80a9ff91aebc582b9258fa" HandleID="k8s-pod-network.192ddec5c7acb557cc005567271e89429e36e491ba80a9ff91aebc582b9258fa" Workload="srv--f25or.gb1.brightbox.com-k8s-csi--node--driver--b22q6-eth0" Jul 16 12:33:31.791203 env[1306]: 2025-07-16 12:33:31.466 [INFO][4066] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="192ddec5c7acb557cc005567271e89429e36e491ba80a9ff91aebc582b9258fa" HandleID="k8s-pod-network.192ddec5c7acb557cc005567271e89429e36e491ba80a9ff91aebc582b9258fa" Workload="srv--f25or.gb1.brightbox.com-k8s-csi--node--driver--b22q6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00025b020), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-f25or.gb1.brightbox.com", "pod":"csi-node-driver-b22q6", "timestamp":"2025-07-16 
12:33:31.465528381 +0000 UTC"}, Hostname:"srv-f25or.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 12:33:31.791203 env[1306]: 2025-07-16 12:33:31.468 [INFO][4066] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:31.791203 env[1306]: 2025-07-16 12:33:31.510 [INFO][4066] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 12:33:31.791203 env[1306]: 2025-07-16 12:33:31.510 [INFO][4066] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-f25or.gb1.brightbox.com' Jul 16 12:33:31.791203 env[1306]: 2025-07-16 12:33:31.568 [INFO][4066] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.192ddec5c7acb557cc005567271e89429e36e491ba80a9ff91aebc582b9258fa" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:31.791203 env[1306]: 2025-07-16 12:33:31.580 [INFO][4066] ipam/ipam.go 394: Looking up existing affinities for host host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:31.791203 env[1306]: 2025-07-16 12:33:31.589 [INFO][4066] ipam/ipam.go 511: Trying affinity for 192.168.20.64/26 host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:31.791203 env[1306]: 2025-07-16 12:33:31.593 [INFO][4066] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.64/26 host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:31.791203 env[1306]: 2025-07-16 12:33:31.596 [INFO][4066] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.64/26 host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:31.791203 env[1306]: 2025-07-16 12:33:31.596 [INFO][4066] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.20.64/26 handle="k8s-pod-network.192ddec5c7acb557cc005567271e89429e36e491ba80a9ff91aebc582b9258fa" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:31.791203 env[1306]: 2025-07-16 12:33:31.597 [INFO][4066] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.192ddec5c7acb557cc005567271e89429e36e491ba80a9ff91aebc582b9258fa Jul 16 12:33:31.791203 env[1306]: 2025-07-16 12:33:31.623 [INFO][4066] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.20.64/26 handle="k8s-pod-network.192ddec5c7acb557cc005567271e89429e36e491ba80a9ff91aebc582b9258fa" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:31.791203 env[1306]: 2025-07-16 12:33:31.653 [INFO][4066] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.20.70/26] block=192.168.20.64/26 handle="k8s-pod-network.192ddec5c7acb557cc005567271e89429e36e491ba80a9ff91aebc582b9258fa" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:31.791203 env[1306]: 2025-07-16 12:33:31.653 [INFO][4066] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.70/26] handle="k8s-pod-network.192ddec5c7acb557cc005567271e89429e36e491ba80a9ff91aebc582b9258fa" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:31.791203 env[1306]: 2025-07-16 12:33:31.653 [INFO][4066] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 16 12:33:31.791203 env[1306]: 2025-07-16 12:33:31.653 [INFO][4066] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.70/26] IPv6=[] ContainerID="192ddec5c7acb557cc005567271e89429e36e491ba80a9ff91aebc582b9258fa" HandleID="k8s-pod-network.192ddec5c7acb557cc005567271e89429e36e491ba80a9ff91aebc582b9258fa" Workload="srv--f25or.gb1.brightbox.com-k8s-csi--node--driver--b22q6-eth0" Jul 16 12:33:31.792024 env[1306]: 2025-07-16 12:33:31.718 [INFO][4036] cni-plugin/k8s.go 418: Populated endpoint ContainerID="192ddec5c7acb557cc005567271e89429e36e491ba80a9ff91aebc582b9258fa" Namespace="calico-system" Pod="csi-node-driver-b22q6" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-csi--node--driver--b22q6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-csi--node--driver--b22q6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"58f808a6-a7a4-4400-b1f3-561a7728fef5", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 33, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-b22q6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie965d0ee399", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:31.792024 env[1306]: 2025-07-16 12:33:31.719 [INFO][4036] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.70/32] ContainerID="192ddec5c7acb557cc005567271e89429e36e491ba80a9ff91aebc582b9258fa" Namespace="calico-system" Pod="csi-node-driver-b22q6" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-csi--node--driver--b22q6-eth0" Jul 16 12:33:31.792024 env[1306]: 2025-07-16 12:33:31.719 [INFO][4036] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie965d0ee399 ContainerID="192ddec5c7acb557cc005567271e89429e36e491ba80a9ff91aebc582b9258fa" Namespace="calico-system" Pod="csi-node-driver-b22q6" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-csi--node--driver--b22q6-eth0" Jul 16 12:33:31.792024 env[1306]: 2025-07-16 12:33:31.750 [INFO][4036] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="192ddec5c7acb557cc005567271e89429e36e491ba80a9ff91aebc582b9258fa" Namespace="calico-system" Pod="csi-node-driver-b22q6" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-csi--node--driver--b22q6-eth0" Jul 16 12:33:31.792024 env[1306]: 2025-07-16 12:33:31.751 [INFO][4036] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="192ddec5c7acb557cc005567271e89429e36e491ba80a9ff91aebc582b9258fa" Namespace="calico-system" Pod="csi-node-driver-b22q6" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-csi--node--driver--b22q6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-csi--node--driver--b22q6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"58f808a6-a7a4-4400-b1f3-561a7728fef5", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 33, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"192ddec5c7acb557cc005567271e89429e36e491ba80a9ff91aebc582b9258fa", Pod:"csi-node-driver-b22q6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie965d0ee399", MAC:"86:28:f1:7a:8d:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:31.792024 env[1306]: 2025-07-16 12:33:31.783 [INFO][4036] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="192ddec5c7acb557cc005567271e89429e36e491ba80a9ff91aebc582b9258fa" Namespace="calico-system" Pod="csi-node-driver-b22q6" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-csi--node--driver--b22q6-eth0" Jul 16 12:33:31.821923 env[1306]: time="2025-07-16T12:33:31.821727958Z" level=info msg="StopPodSandbox for \"4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6\"" Jul 16 12:33:31.831588 env[1306]: time="2025-07-16T12:33:31.831551325Z" level=info msg="CreateContainer within sandbox \"dd372419f2b0a3d8a2fa3c357c0669a78242e0ecbc508a279363fb61770448f8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 16 12:33:31.855575 env[1306]: time="2025-07-16T12:33:31.854732577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 16 12:33:31.855575 env[1306]: time="2025-07-16T12:33:31.854887149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 16 12:33:31.855575 env[1306]: time="2025-07-16T12:33:31.854898337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 16 12:33:31.856238 env[1306]: time="2025-07-16T12:33:31.856146510Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/192ddec5c7acb557cc005567271e89429e36e491ba80a9ff91aebc582b9258fa pid=4180 runtime=io.containerd.runc.v2 Jul 16 12:33:31.865769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3941334071.mount: Deactivated successfully. Jul 16 12:33:31.874459 env[1306]: time="2025-07-16T12:33:31.874418991Z" level=info msg="CreateContainer within sandbox \"dd372419f2b0a3d8a2fa3c357c0669a78242e0ecbc508a279363fb61770448f8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6528b34ced9799bdbc43185283a2ee9b5f55873859a9722f9d77157183ef1ee1\"" Jul 16 12:33:31.877340 env[1306]: time="2025-07-16T12:33:31.877317766Z" level=info msg="StartContainer for \"6528b34ced9799bdbc43185283a2ee9b5f55873859a9722f9d77157183ef1ee1\"" Jul 16 12:33:31.895000 audit[4196]: NETFILTER_CFG table=filter:106 family=2 entries=58 op=nft_register_chain pid=4196 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 16 12:33:31.895000 audit[4196]: SYSCALL arch=c000003e syscall=46 success=yes exit=30584 a0=3 a1=7ffc105b0be0 a2=0 a3=7ffc105b0bcc items=0 ppid=3475 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:31.895000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 16 12:33:31.968659 env[1306]: time="2025-07-16T12:33:31.968622184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66fbfc9dbd-f9x9w,Uid:bf32dfb7-052c-4edd-885b-1571df3da4fa,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e0f14ccce301fc96e52057e94187dbef234545cc28a99432219a16f15dd3b5ab\"" Jul 16 12:33:31.984000 audit[4228]: NETFILTER_CFG table=filter:107 family=2 entries=88 op=nft_register_chain pid=4228 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 16 12:33:31.984000 audit[4228]: SYSCALL arch=c000003e syscall=46 success=yes exit=45644 a0=3 a1=7ffc151a3740 a2=0 a3=7ffc151a372c items=0 ppid=3475 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:31.984000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 16 12:33:32.083262 env[1306]: time="2025-07-16T12:33:32.083171569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b22q6,Uid:58f808a6-a7a4-4400-b1f3-561a7728fef5,Namespace:calico-system,Attempt:1,} returns sandbox id \"192ddec5c7acb557cc005567271e89429e36e491ba80a9ff91aebc582b9258fa\"" Jul 16 12:33:32.101578 env[1306]: time="2025-07-16T12:33:32.101533630Z" level=info msg="StartContainer for \"6528b34ced9799bdbc43185283a2ee9b5f55873859a9722f9d77157183ef1ee1\" returns successfully" Jul 16 12:33:32.141189 env[1306]: 2025-07-16 12:33:32.005 [INFO][4184] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" Jul 16 12:33:32.141189 env[1306]: 2025-07-16 12:33:32.007 
[INFO][4184] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" iface="eth0" netns="/var/run/netns/cni-65c43871-fd33-207a-1213-2f83a1b7a9e1" Jul 16 12:33:32.141189 env[1306]: 2025-07-16 12:33:32.009 [INFO][4184] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" iface="eth0" netns="/var/run/netns/cni-65c43871-fd33-207a-1213-2f83a1b7a9e1" Jul 16 12:33:32.141189 env[1306]: 2025-07-16 12:33:32.011 [INFO][4184] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" iface="eth0" netns="/var/run/netns/cni-65c43871-fd33-207a-1213-2f83a1b7a9e1" Jul 16 12:33:32.141189 env[1306]: 2025-07-16 12:33:32.011 [INFO][4184] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" Jul 16 12:33:32.141189 env[1306]: 2025-07-16 12:33:32.011 [INFO][4184] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" Jul 16 12:33:32.141189 env[1306]: 2025-07-16 12:33:32.123 [INFO][4247] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" HandleID="k8s-pod-network.4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" Workload="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--4vckm-eth0" Jul 16 12:33:32.141189 env[1306]: 2025-07-16 12:33:32.126 [INFO][4247] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:32.141189 env[1306]: 2025-07-16 12:33:32.127 [INFO][4247] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 12:33:32.141189 env[1306]: 2025-07-16 12:33:32.135 [WARNING][4247] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" HandleID="k8s-pod-network.4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" Workload="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--4vckm-eth0" Jul 16 12:33:32.141189 env[1306]: 2025-07-16 12:33:32.135 [INFO][4247] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" HandleID="k8s-pod-network.4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" Workload="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--4vckm-eth0" Jul 16 12:33:32.141189 env[1306]: 2025-07-16 12:33:32.136 [INFO][4247] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 12:33:32.141189 env[1306]: 2025-07-16 12:33:32.138 [INFO][4184] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" Jul 16 12:33:32.141189 env[1306]: time="2025-07-16T12:33:32.140997566Z" level=info msg="TearDown network for sandbox \"4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6\" successfully" Jul 16 12:33:32.141189 env[1306]: time="2025-07-16T12:33:32.141031536Z" level=info msg="StopPodSandbox for \"4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6\" returns successfully" Jul 16 12:33:32.143291 env[1306]: time="2025-07-16T12:33:32.142365959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4vckm,Uid:2e94965a-1e31-4dae-9e6f-fa9636bb6e1e,Namespace:kube-system,Attempt:1,}" Jul 16 12:33:32.374447 systemd-networkd[1084]: cali22b83c5615e: Link UP Jul 16 12:33:32.383022 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali22b83c5615e: link becomes ready Jul 16 12:33:32.383508 systemd-networkd[1084]: cali22b83c5615e: Gained carrier Jul 16 12:33:32.427430 env[1306]: 2025-07-16 12:33:32.210 [INFO][4276] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--4vckm-eth0 coredns-7c65d6cfc9- kube-system 2e94965a-1e31-4dae-9e6f-fa9636bb6e1e 944 0 2025-07-16 12:32:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-f25or.gb1.brightbox.com coredns-7c65d6cfc9-4vckm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali22b83c5615e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a4a7841514126c2bf66bd715de47758199bb05821fc1e1ba4c5e91c76643adfb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4vckm" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--4vckm-" Jul 16 12:33:32.427430 env[1306]: 2025-07-16 12:33:32.210 [INFO][4276] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a4a7841514126c2bf66bd715de47758199bb05821fc1e1ba4c5e91c76643adfb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4vckm" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--4vckm-eth0" Jul 16 12:33:32.427430 env[1306]: 2025-07-16 12:33:32.289 [INFO][4292] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4a7841514126c2bf66bd715de47758199bb05821fc1e1ba4c5e91c76643adfb" HandleID="k8s-pod-network.a4a7841514126c2bf66bd715de47758199bb05821fc1e1ba4c5e91c76643adfb" Workload="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--4vckm-eth0" Jul 16 12:33:32.427430 env[1306]: 2025-07-16 12:33:32.289 [INFO][4292] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a4a7841514126c2bf66bd715de47758199bb05821fc1e1ba4c5e91c76643adfb" HandleID="k8s-pod-network.a4a7841514126c2bf66bd715de47758199bb05821fc1e1ba4c5e91c76643adfb" Workload="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--4vckm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b7490), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-f25or.gb1.brightbox.com", "pod":"coredns-7c65d6cfc9-4vckm", "timestamp":"2025-07-16 12:33:32.289112545 +0000 UTC"}, Hostname:"srv-f25or.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 12:33:32.427430 env[1306]: 2025-07-16 12:33:32.289 
[INFO][4292] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:32.427430 env[1306]: 2025-07-16 12:33:32.289 [INFO][4292] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 12:33:32.427430 env[1306]: 2025-07-16 12:33:32.289 [INFO][4292] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-f25or.gb1.brightbox.com' Jul 16 12:33:32.427430 env[1306]: 2025-07-16 12:33:32.303 [INFO][4292] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a4a7841514126c2bf66bd715de47758199bb05821fc1e1ba4c5e91c76643adfb" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:32.427430 env[1306]: 2025-07-16 12:33:32.311 [INFO][4292] ipam/ipam.go 394: Looking up existing affinities for host host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:32.427430 env[1306]: 2025-07-16 12:33:32.323 [INFO][4292] ipam/ipam.go 511: Trying affinity for 192.168.20.64/26 host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:32.427430 env[1306]: 2025-07-16 12:33:32.328 [INFO][4292] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.64/26 host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:32.427430 env[1306]: 2025-07-16 12:33:32.332 [INFO][4292] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.64/26 host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:32.427430 env[1306]: 2025-07-16 12:33:32.333 [INFO][4292] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.20.64/26 handle="k8s-pod-network.a4a7841514126c2bf66bd715de47758199bb05821fc1e1ba4c5e91c76643adfb" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:32.427430 env[1306]: 2025-07-16 12:33:32.346 [INFO][4292] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a4a7841514126c2bf66bd715de47758199bb05821fc1e1ba4c5e91c76643adfb Jul 16 12:33:32.427430 env[1306]: 2025-07-16 12:33:32.354 [INFO][4292] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.20.64/26 handle="k8s-pod-network.a4a7841514126c2bf66bd715de47758199bb05821fc1e1ba4c5e91c76643adfb" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:32.427430 env[1306]: 2025-07-16 12:33:32.364 [INFO][4292] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.20.71/26] block=192.168.20.64/26 handle="k8s-pod-network.a4a7841514126c2bf66bd715de47758199bb05821fc1e1ba4c5e91c76643adfb" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:32.427430 env[1306]: 2025-07-16 12:33:32.364 [INFO][4292] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.71/26] handle="k8s-pod-network.a4a7841514126c2bf66bd715de47758199bb05821fc1e1ba4c5e91c76643adfb" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:32.427430 env[1306]: 2025-07-16 12:33:32.364 [INFO][4292] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
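The IPAM trace above shows Calico confirming this node's affinity for the allocation block 192.168.20.64/26 and then claiming 192.168.20.71 from it for coredns-7c65d6cfc9-4vckm. A minimal sanity check of that containment with Python's standard ipaddress module, using values copied from the log (illustration only, not part of the Calico code path):

    import ipaddress

    block   = ipaddress.ip_network("192.168.20.64/26")   # block this host holds an affinity for
    claimed = ipaddress.ip_address("192.168.20.71")      # address assigned to coredns-7c65d6cfc9-4vckm

    assert claimed in block
    print(f"{claimed} is one of {block.num_addresses} addresses in {block}")
    # The endpoint itself is published as 192.168.20.71/32 (a per-pod host route);
    # the /26 is the per-node IPAM block that the /32 is carved from.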
Jul 16 12:33:32.427430 env[1306]: 2025-07-16 12:33:32.364 [INFO][4292] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.71/26] IPv6=[] ContainerID="a4a7841514126c2bf66bd715de47758199bb05821fc1e1ba4c5e91c76643adfb" HandleID="k8s-pod-network.a4a7841514126c2bf66bd715de47758199bb05821fc1e1ba4c5e91c76643adfb" Workload="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--4vckm-eth0" Jul 16 12:33:32.429031 env[1306]: 2025-07-16 12:33:32.370 [INFO][4276] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a4a7841514126c2bf66bd715de47758199bb05821fc1e1ba4c5e91c76643adfb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4vckm" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--4vckm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--4vckm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2e94965a-1e31-4dae-9e6f-fa9636bb6e1e", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 32, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7c65d6cfc9-4vckm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22b83c5615e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:32.429031 env[1306]: 2025-07-16 12:33:32.370 [INFO][4276] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.71/32] ContainerID="a4a7841514126c2bf66bd715de47758199bb05821fc1e1ba4c5e91c76643adfb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4vckm" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--4vckm-eth0" Jul 16 12:33:32.429031 env[1306]: 2025-07-16 12:33:32.370 [INFO][4276] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22b83c5615e ContainerID="a4a7841514126c2bf66bd715de47758199bb05821fc1e1ba4c5e91c76643adfb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4vckm" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--4vckm-eth0" Jul 16 12:33:32.429031 env[1306]: 2025-07-16 12:33:32.375 [INFO][4276] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4a7841514126c2bf66bd715de47758199bb05821fc1e1ba4c5e91c76643adfb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4vckm" 
WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--4vckm-eth0" Jul 16 12:33:32.429031 env[1306]: 2025-07-16 12:33:32.390 [INFO][4276] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a4a7841514126c2bf66bd715de47758199bb05821fc1e1ba4c5e91c76643adfb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4vckm" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--4vckm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--4vckm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2e94965a-1e31-4dae-9e6f-fa9636bb6e1e", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 32, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"a4a7841514126c2bf66bd715de47758199bb05821fc1e1ba4c5e91c76643adfb", Pod:"coredns-7c65d6cfc9-4vckm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22b83c5615e", MAC:"02:51:ad:36:f3:1b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:32.429031 env[1306]: 2025-07-16 12:33:32.420 [INFO][4276] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a4a7841514126c2bf66bd715de47758199bb05821fc1e1ba4c5e91c76643adfb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4vckm" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--4vckm-eth0" Jul 16 12:33:32.458862 env[1306]: time="2025-07-16T12:33:32.458789591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 16 12:33:32.459073 env[1306]: time="2025-07-16T12:33:32.459050380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 16 12:33:32.459168 env[1306]: time="2025-07-16T12:33:32.459149219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 16 12:33:32.459554 env[1306]: time="2025-07-16T12:33:32.459524197Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a4a7841514126c2bf66bd715de47758199bb05821fc1e1ba4c5e91c76643adfb pid=4317 runtime=io.containerd.runc.v2 Jul 16 12:33:32.484000 audit[4330]: NETFILTER_CFG table=filter:108 family=2 entries=48 op=nft_register_chain pid=4330 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 16 12:33:32.484000 audit[4330]: SYSCALL arch=c000003e syscall=46 success=yes exit=22704 a0=3 a1=7ffd8d3b0b90 a2=0 a3=7ffd8d3b0b7c items=0 ppid=3475 pid=4330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:32.484000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 16 12:33:32.579240 env[1306]: time="2025-07-16T12:33:32.579171039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4vckm,Uid:2e94965a-1e31-4dae-9e6f-fa9636bb6e1e,Namespace:kube-system,Attempt:1,} returns sandbox id \"a4a7841514126c2bf66bd715de47758199bb05821fc1e1ba4c5e91c76643adfb\"" Jul 16 12:33:32.586101 env[1306]: time="2025-07-16T12:33:32.586056829Z" level=info msg="CreateContainer within sandbox \"a4a7841514126c2bf66bd715de47758199bb05821fc1e1ba4c5e91c76643adfb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 16 12:33:32.595983 env[1306]: time="2025-07-16T12:33:32.595942619Z" level=info msg="CreateContainer within sandbox \"a4a7841514126c2bf66bd715de47758199bb05821fc1e1ba4c5e91c76643adfb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dc3eaf44f00a9600c6fddbaeab8bef35e10af3691c5e57527bc3926ed562a8f9\"" Jul 16 12:33:32.596753 env[1306]: time="2025-07-16T12:33:32.596722908Z" level=info msg="StartContainer for \"dc3eaf44f00a9600c6fddbaeab8bef35e10af3691c5e57527bc3926ed562a8f9\"" Jul 16 12:33:32.624997 systemd-networkd[1084]: calie40ac9a092c: Gained IPv6LL Jul 16 12:33:32.670061 systemd[1]: run-netns-cni\x2d65c43871\x2dfd33\x2d207a\x2d1213\x2d2f83a1b7a9e1.mount: Deactivated successfully. Jul 16 12:33:32.692628 systemd-networkd[1084]: cali76677a5b7d5: Gained IPv6LL Jul 16 12:33:32.720878 env[1306]: time="2025-07-16T12:33:32.720839622Z" level=info msg="StartContainer for \"dc3eaf44f00a9600c6fddbaeab8bef35e10af3691c5e57527bc3926ed562a8f9\" returns successfully" Jul 16 12:33:32.797928 env[1306]: time="2025-07-16T12:33:32.797873357Z" level=info msg="StopPodSandbox for \"11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c\"" Jul 16 12:33:32.969944 env[1306]: 2025-07-16 12:33:32.895 [INFO][4398] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" Jul 16 12:33:32.969944 env[1306]: 2025-07-16 12:33:32.895 [INFO][4398] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" iface="eth0" netns="/var/run/netns/cni-e0af6722-1aed-aa40-e9cc-442d88799c0b" Jul 16 12:33:32.969944 env[1306]: 2025-07-16 12:33:32.896 [INFO][4398] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" iface="eth0" netns="/var/run/netns/cni-e0af6722-1aed-aa40-e9cc-442d88799c0b" Jul 16 12:33:32.969944 env[1306]: 2025-07-16 12:33:32.898 [INFO][4398] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" iface="eth0" netns="/var/run/netns/cni-e0af6722-1aed-aa40-e9cc-442d88799c0b" Jul 16 12:33:32.969944 env[1306]: 2025-07-16 12:33:32.898 [INFO][4398] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" Jul 16 12:33:32.969944 env[1306]: 2025-07-16 12:33:32.898 [INFO][4398] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" Jul 16 12:33:32.969944 env[1306]: 2025-07-16 12:33:32.948 [INFO][4405] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" HandleID="k8s-pod-network.11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--7qrcc-eth0" Jul 16 12:33:32.969944 env[1306]: 2025-07-16 12:33:32.948 [INFO][4405] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:32.969944 env[1306]: 2025-07-16 12:33:32.948 [INFO][4405] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 12:33:32.969944 env[1306]: 2025-07-16 12:33:32.959 [WARNING][4405] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" HandleID="k8s-pod-network.11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--7qrcc-eth0" Jul 16 12:33:32.969944 env[1306]: 2025-07-16 12:33:32.959 [INFO][4405] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" HandleID="k8s-pod-network.11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--7qrcc-eth0" Jul 16 12:33:32.969944 env[1306]: 2025-07-16 12:33:32.962 [INFO][4405] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 12:33:32.969944 env[1306]: 2025-07-16 12:33:32.966 [INFO][4398] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" Jul 16 12:33:32.975093 systemd[1]: run-netns-cni\x2de0af6722\x2d1aed\x2daa40\x2de9cc\x2d442d88799c0b.mount: Deactivated successfully. 
Jul 16 12:33:32.977085 env[1306]: time="2025-07-16T12:33:32.976800027Z" level=info msg="TearDown network for sandbox \"11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c\" successfully" Jul 16 12:33:32.977085 env[1306]: time="2025-07-16T12:33:32.976872382Z" level=info msg="StopPodSandbox for \"11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c\" returns successfully" Jul 16 12:33:32.978092 env[1306]: time="2025-07-16T12:33:32.978061986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66fbfc9dbd-7qrcc,Uid:927b7472-bad5-428f-9ac3-ea2fe33052e3,Namespace:calico-apiserver,Attempt:1,}" Jul 16 12:33:33.008983 systemd-networkd[1084]: calie965d0ee399: Gained IPv6LL Jul 16 12:33:33.164330 kubelet[2183]: I0716 12:33:33.156438 2183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-4vckm" podStartSLOduration=44.156380678 podStartE2EDuration="44.156380678s" podCreationTimestamp="2025-07-16 12:32:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 12:33:33.123975164 +0000 UTC m=+49.554016799" watchObservedRunningTime="2025-07-16 12:33:33.156380678 +0000 UTC m=+49.586422356" Jul 16 12:33:33.231000 audit[4432]: NETFILTER_CFG table=filter:109 family=2 entries=20 op=nft_register_rule pid=4432 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:33.234227 kernel: kauditd_printk_skb: 559 callbacks suppressed Jul 16 12:33:33.234636 kernel: audit: type=1325 audit(1752669213.231:420): table=filter:109 family=2 entries=20 op=nft_register_rule pid=4432 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:33.231000 audit[4432]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffcd510640 a2=0 a3=7fffcd51062c items=0 ppid=2334 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:33.242739 kernel: audit: type=1300 audit(1752669213.231:420): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffcd510640 a2=0 a3=7fffcd51062c items=0 ppid=2334 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:33.231000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:33.259713 kernel: audit: type=1327 audit(1752669213.231:420): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:33.237000 audit[4432]: NETFILTER_CFG table=nat:110 family=2 entries=14 op=nft_register_rule pid=4432 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:33.270727 kernel: audit: type=1325 audit(1752669213.237:421): table=nat:110 family=2 entries=14 op=nft_register_rule pid=4432 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:33.237000 audit[4432]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffcd510640 a2=0 a3=0 items=0 ppid=2334 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jul 16 12:33:33.279708 kernel: audit: type=1300 audit(1752669213.237:421): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffcd510640 a2=0 a3=0 items=0 ppid=2334 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:33.237000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:33.288739 kernel: audit: type=1327 audit(1752669213.237:421): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:33.289259 systemd-networkd[1084]: cali63b782dfffe: Link UP Jul 16 12:33:33.292467 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 16 12:33:33.292889 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali63b782dfffe: link becomes ready Jul 16 12:33:33.292956 systemd-networkd[1084]: cali63b782dfffe: Gained carrier Jul 16 12:33:33.274000 audit[4434]: NETFILTER_CFG table=filter:111 family=2 entries=17 op=nft_register_rule pid=4434 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:33.296711 kernel: audit: type=1325 audit(1752669213.274:422): table=filter:111 family=2 entries=17 op=nft_register_rule pid=4434 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:33.274000 audit[4434]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff41b5acb0 a2=0 a3=7fff41b5ac9c items=0 ppid=2334 pid=4434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:33.307695 kernel: audit: type=1300 audit(1752669213.274:422): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff41b5acb0 a2=0 a3=7fff41b5ac9c items=0 ppid=2334 pid=4434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:33.319701 kubelet[2183]: I0716 12:33:33.311883 2183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-psh6r" podStartSLOduration=44.311858033 podStartE2EDuration="44.311858033s" podCreationTimestamp="2025-07-16 12:32:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 12:33:33.187617864 +0000 UTC m=+49.617659507" watchObservedRunningTime="2025-07-16 12:33:33.311858033 +0000 UTC m=+49.741899677" Jul 16 12:33:33.274000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:33.325739 kernel: audit: type=1327 audit(1752669213.274:422): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:33.343217 kernel: audit: type=1325 audit(1752669213.317:423): table=nat:112 family=2 entries=47 op=nft_register_chain pid=4434 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:33.317000 audit[4434]: NETFILTER_CFG table=nat:112 family=2 entries=47 op=nft_register_chain pid=4434 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:33.317000 audit[4434]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=19860 a0=3 a1=7fff41b5acb0 a2=0 a3=7fff41b5ac9c items=0 ppid=2334 pid=4434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:33.317000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:33.348246 env[1306]: 2025-07-16 12:33:33.073 [INFO][4411] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--7qrcc-eth0 calico-apiserver-66fbfc9dbd- calico-apiserver 927b7472-bad5-428f-9ac3-ea2fe33052e3 956 0 2025-07-16 12:33:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66fbfc9dbd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-f25or.gb1.brightbox.com calico-apiserver-66fbfc9dbd-7qrcc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali63b782dfffe [] [] }} ContainerID="f3679c4331dbd943929a7af4cb8e5556d638aa6d69d24931e85ed6aa97dfb774" Namespace="calico-apiserver" Pod="calico-apiserver-66fbfc9dbd-7qrcc" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--7qrcc-" Jul 16 12:33:33.348246 env[1306]: 2025-07-16 12:33:33.073 [INFO][4411] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f3679c4331dbd943929a7af4cb8e5556d638aa6d69d24931e85ed6aa97dfb774" Namespace="calico-apiserver" Pod="calico-apiserver-66fbfc9dbd-7qrcc" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--7qrcc-eth0" Jul 16 12:33:33.348246 env[1306]: 2025-07-16 12:33:33.177 [INFO][4425] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f3679c4331dbd943929a7af4cb8e5556d638aa6d69d24931e85ed6aa97dfb774" HandleID="k8s-pod-network.f3679c4331dbd943929a7af4cb8e5556d638aa6d69d24931e85ed6aa97dfb774" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--7qrcc-eth0" Jul 16 12:33:33.348246 env[1306]: 2025-07-16 12:33:33.177 [INFO][4425] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f3679c4331dbd943929a7af4cb8e5556d638aa6d69d24931e85ed6aa97dfb774" HandleID="k8s-pod-network.f3679c4331dbd943929a7af4cb8e5556d638aa6d69d24931e85ed6aa97dfb774" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--7qrcc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd640), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-f25or.gb1.brightbox.com", "pod":"calico-apiserver-66fbfc9dbd-7qrcc", "timestamp":"2025-07-16 12:33:33.177040553 +0000 UTC"}, Hostname:"srv-f25or.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 12:33:33.348246 env[1306]: 2025-07-16 12:33:33.177 [INFO][4425] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:33.348246 env[1306]: 2025-07-16 12:33:33.177 [INFO][4425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 16 12:33:33.348246 env[1306]: 2025-07-16 12:33:33.177 [INFO][4425] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-f25or.gb1.brightbox.com' Jul 16 12:33:33.348246 env[1306]: 2025-07-16 12:33:33.196 [INFO][4425] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f3679c4331dbd943929a7af4cb8e5556d638aa6d69d24931e85ed6aa97dfb774" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:33.348246 env[1306]: 2025-07-16 12:33:33.208 [INFO][4425] ipam/ipam.go 394: Looking up existing affinities for host host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:33.348246 env[1306]: 2025-07-16 12:33:33.227 [INFO][4425] ipam/ipam.go 511: Trying affinity for 192.168.20.64/26 host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:33.348246 env[1306]: 2025-07-16 12:33:33.231 [INFO][4425] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.64/26 host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:33.348246 env[1306]: 2025-07-16 12:33:33.234 [INFO][4425] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.64/26 host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:33.348246 env[1306]: 2025-07-16 12:33:33.234 [INFO][4425] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.20.64/26 handle="k8s-pod-network.f3679c4331dbd943929a7af4cb8e5556d638aa6d69d24931e85ed6aa97dfb774" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:33.348246 env[1306]: 2025-07-16 12:33:33.246 [INFO][4425] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f3679c4331dbd943929a7af4cb8e5556d638aa6d69d24931e85ed6aa97dfb774 Jul 16 12:33:33.348246 env[1306]: 2025-07-16 12:33:33.257 [INFO][4425] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.20.64/26 handle="k8s-pod-network.f3679c4331dbd943929a7af4cb8e5556d638aa6d69d24931e85ed6aa97dfb774" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:33.348246 env[1306]: 2025-07-16 12:33:33.271 [INFO][4425] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.20.72/26] block=192.168.20.64/26 handle="k8s-pod-network.f3679c4331dbd943929a7af4cb8e5556d638aa6d69d24931e85ed6aa97dfb774" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:33.348246 env[1306]: 2025-07-16 12:33:33.271 [INFO][4425] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.72/26] handle="k8s-pod-network.f3679c4331dbd943929a7af4cb8e5556d638aa6d69d24931e85ed6aa97dfb774" host="srv-f25or.gb1.brightbox.com" Jul 16 12:33:33.348246 env[1306]: 2025-07-16 12:33:33.271 [INFO][4425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 16 12:33:33.348246 env[1306]: 2025-07-16 12:33:33.271 [INFO][4425] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.72/26] IPv6=[] ContainerID="f3679c4331dbd943929a7af4cb8e5556d638aa6d69d24931e85ed6aa97dfb774" HandleID="k8s-pod-network.f3679c4331dbd943929a7af4cb8e5556d638aa6d69d24931e85ed6aa97dfb774" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--7qrcc-eth0" Jul 16 12:33:33.349320 env[1306]: 2025-07-16 12:33:33.284 [INFO][4411] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f3679c4331dbd943929a7af4cb8e5556d638aa6d69d24931e85ed6aa97dfb774" Namespace="calico-apiserver" Pod="calico-apiserver-66fbfc9dbd-7qrcc" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--7qrcc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--7qrcc-eth0", GenerateName:"calico-apiserver-66fbfc9dbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"927b7472-bad5-428f-9ac3-ea2fe33052e3", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 33, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66fbfc9dbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-66fbfc9dbd-7qrcc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali63b782dfffe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:33.349320 env[1306]: 2025-07-16 12:33:33.284 [INFO][4411] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.72/32] ContainerID="f3679c4331dbd943929a7af4cb8e5556d638aa6d69d24931e85ed6aa97dfb774" Namespace="calico-apiserver" Pod="calico-apiserver-66fbfc9dbd-7qrcc" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--7qrcc-eth0" Jul 16 12:33:33.349320 env[1306]: 2025-07-16 12:33:33.284 [INFO][4411] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63b782dfffe ContainerID="f3679c4331dbd943929a7af4cb8e5556d638aa6d69d24931e85ed6aa97dfb774" Namespace="calico-apiserver" Pod="calico-apiserver-66fbfc9dbd-7qrcc" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--7qrcc-eth0" Jul 16 12:33:33.349320 env[1306]: 2025-07-16 12:33:33.294 [INFO][4411] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f3679c4331dbd943929a7af4cb8e5556d638aa6d69d24931e85ed6aa97dfb774" Namespace="calico-apiserver" Pod="calico-apiserver-66fbfc9dbd-7qrcc" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--7qrcc-eth0" Jul 16 12:33:33.349320 env[1306]: 2025-07-16 12:33:33.295 [INFO][4411] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="f3679c4331dbd943929a7af4cb8e5556d638aa6d69d24931e85ed6aa97dfb774" Namespace="calico-apiserver" Pod="calico-apiserver-66fbfc9dbd-7qrcc" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--7qrcc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--7qrcc-eth0", GenerateName:"calico-apiserver-66fbfc9dbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"927b7472-bad5-428f-9ac3-ea2fe33052e3", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 33, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66fbfc9dbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"f3679c4331dbd943929a7af4cb8e5556d638aa6d69d24931e85ed6aa97dfb774", Pod:"calico-apiserver-66fbfc9dbd-7qrcc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali63b782dfffe", MAC:"1e:00:dd:5d:1b:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:33.349320 env[1306]: 2025-07-16 12:33:33.329 [INFO][4411] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f3679c4331dbd943929a7af4cb8e5556d638aa6d69d24931e85ed6aa97dfb774" Namespace="calico-apiserver" Pod="calico-apiserver-66fbfc9dbd-7qrcc" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--7qrcc-eth0" Jul 16 12:33:33.353000 audit[4440]: NETFILTER_CFG table=filter:113 family=2 entries=57 op=nft_register_chain pid=4440 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 16 12:33:33.353000 audit[4440]: SYSCALL arch=c000003e syscall=46 success=yes exit=27812 a0=3 a1=7ffcbd6d33c0 a2=0 a3=7ffcbd6d33ac items=0 ppid=3475 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:33.353000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 16 12:33:33.373977 env[1306]: time="2025-07-16T12:33:33.372851868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 16 12:33:33.373977 env[1306]: time="2025-07-16T12:33:33.372891885Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 16 12:33:33.373977 env[1306]: time="2025-07-16T12:33:33.372903673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 16 12:33:33.373977 env[1306]: time="2025-07-16T12:33:33.373033280Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f3679c4331dbd943929a7af4cb8e5556d638aa6d69d24931e85ed6aa97dfb774 pid=4453 runtime=io.containerd.runc.v2 Jul 16 12:33:33.473597 env[1306]: time="2025-07-16T12:33:33.473553054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66fbfc9dbd-7qrcc,Uid:927b7472-bad5-428f-9ac3-ea2fe33052e3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f3679c4331dbd943929a7af4cb8e5556d638aa6d69d24931e85ed6aa97dfb774\"" Jul 16 12:33:33.843067 systemd-networkd[1084]: cali22b83c5615e: Gained IPv6LL Jul 16 12:33:34.508573 env[1306]: time="2025-07-16T12:33:34.508464264Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:34.513397 env[1306]: time="2025-07-16T12:33:34.513326738Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:34.516504 env[1306]: time="2025-07-16T12:33:34.516452958Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:34.517472 env[1306]: time="2025-07-16T12:33:34.517363116Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:34.518198 env[1306]: time="2025-07-16T12:33:34.517887648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 16 12:33:34.525334 env[1306]: time="2025-07-16T12:33:34.525299483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 16 12:33:34.574834 env[1306]: time="2025-07-16T12:33:34.574773340Z" level=info msg="CreateContainer within sandbox \"ccc6093c7c10b84ca2b40d4f234646627043ba02038bddf016fe223c94915a21\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 16 12:33:34.587735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1056906857.mount: Deactivated successfully. 
Jul 16 12:33:34.591727 env[1306]: time="2025-07-16T12:33:34.591683950Z" level=info msg="CreateContainer within sandbox \"ccc6093c7c10b84ca2b40d4f234646627043ba02038bddf016fe223c94915a21\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"fab454efbfe4481fa747ba040a62985a98a975ecc0f79a35d3fcd73c6e4343cd\"" Jul 16 12:33:34.594363 env[1306]: time="2025-07-16T12:33:34.592442925Z" level=info msg="StartContainer for \"fab454efbfe4481fa747ba040a62985a98a975ecc0f79a35d3fcd73c6e4343cd\"" Jul 16 12:33:34.685703 env[1306]: time="2025-07-16T12:33:34.683783363Z" level=info msg="StartContainer for \"fab454efbfe4481fa747ba040a62985a98a975ecc0f79a35d3fcd73c6e4343cd\" returns successfully" Jul 16 12:33:35.172838 kubelet[2183]: I0716 12:33:35.172753 2183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-64fd67f98d-kkwfb" podStartSLOduration=26.14622789 podStartE2EDuration="32.172720256s" podCreationTimestamp="2025-07-16 12:33:03 +0000 UTC" firstStartedPulling="2025-07-16 12:33:28.495450568 +0000 UTC m=+44.925492184" lastFinishedPulling="2025-07-16 12:33:34.521942879 +0000 UTC m=+50.951984550" observedRunningTime="2025-07-16 12:33:35.166958439 +0000 UTC m=+51.597000085" watchObservedRunningTime="2025-07-16 12:33:35.172720256 +0000 UTC m=+51.602761899" Jul 16 12:33:35.194925 systemd-networkd[1084]: cali63b782dfffe: Gained IPv6LL Jul 16 12:33:35.249686 systemd[1]: run-containerd-runc-k8s.io-fab454efbfe4481fa747ba040a62985a98a975ecc0f79a35d3fcd73c6e4343cd-runc.rMUxoL.mount: Deactivated successfully. Jul 16 12:33:37.549898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3842090635.mount: Deactivated successfully. Jul 16 12:33:37.566837 env[1306]: time="2025-07-16T12:33:37.566760038Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:37.571056 env[1306]: time="2025-07-16T12:33:37.571002042Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:37.572973 env[1306]: time="2025-07-16T12:33:37.572939935Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:37.575275 env[1306]: time="2025-07-16T12:33:37.575229056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 16 12:33:37.576717 env[1306]: time="2025-07-16T12:33:37.575954054Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:37.612849 env[1306]: time="2025-07-16T12:33:37.612718623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 16 12:33:37.613862 env[1306]: time="2025-07-16T12:33:37.613824745Z" level=info msg="CreateContainer within sandbox \"4320fc2b6f5ff455e08a9debf95d8eaee87e019c6280fca90d39b62ca32e5198\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 16 12:33:37.635697 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2729646820.mount: Deactivated successfully. Jul 16 12:33:37.641610 env[1306]: time="2025-07-16T12:33:37.641531678Z" level=info msg="CreateContainer within sandbox \"4320fc2b6f5ff455e08a9debf95d8eaee87e019c6280fca90d39b62ca32e5198\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"741d8dabde3f3c7c974dc3ea0f0100e1559bdeec0473c157675be96bf015da0d\"" Jul 16 12:33:37.646631 env[1306]: time="2025-07-16T12:33:37.642861988Z" level=info msg="StartContainer for \"741d8dabde3f3c7c974dc3ea0f0100e1559bdeec0473c157675be96bf015da0d\"" Jul 16 12:33:37.780847 env[1306]: time="2025-07-16T12:33:37.777173134Z" level=info msg="StartContainer for \"741d8dabde3f3c7c974dc3ea0f0100e1559bdeec0473c157675be96bf015da0d\" returns successfully" Jul 16 12:33:38.236852 kubelet[2183]: I0716 12:33:38.236700 2183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5d8559b6d9-4bwfz" podStartSLOduration=1.6332474879999999 podStartE2EDuration="11.23659123s" podCreationTimestamp="2025-07-16 12:33:27 +0000 UTC" firstStartedPulling="2025-07-16 12:33:27.989608612 +0000 UTC m=+44.419650231" lastFinishedPulling="2025-07-16 12:33:37.592952205 +0000 UTC m=+54.022993973" observedRunningTime="2025-07-16 12:33:38.230232357 +0000 UTC m=+54.660273991" watchObservedRunningTime="2025-07-16 12:33:38.23659123 +0000 UTC m=+54.666632935" Jul 16 12:33:38.280000 audit[4597]: NETFILTER_CFG table=filter:114 family=2 entries=13 op=nft_register_rule pid=4597 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:38.285370 kernel: kauditd_printk_skb: 5 callbacks suppressed Jul 16 12:33:38.285515 kernel: audit: type=1325 audit(1752669218.280:425): table=filter:114 family=2 entries=13 op=nft_register_rule pid=4597 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:38.280000 audit[4597]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd8a5ebfb0 a2=0 a3=7ffd8a5ebf9c items=0 ppid=2334 pid=4597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:38.292012 kernel: audit: type=1300 audit(1752669218.280:425): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd8a5ebfb0 a2=0 a3=7ffd8a5ebf9c items=0 ppid=2334 pid=4597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:38.292270 kernel: audit: type=1327 audit(1752669218.280:425): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:38.280000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:38.292000 audit[4597]: NETFILTER_CFG table=nat:115 family=2 entries=27 op=nft_register_chain pid=4597 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:38.296691 kernel: audit: type=1325 audit(1752669218.292:426): table=nat:115 family=2 entries=27 op=nft_register_chain pid=4597 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:38.296762 kernel: audit: type=1300 audit(1752669218.292:426): arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffd8a5ebfb0 a2=0 a3=7ffd8a5ebf9c items=0 ppid=2334 pid=4597 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:38.292000 audit[4597]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffd8a5ebfb0 a2=0 a3=7ffd8a5ebf9c items=0 ppid=2334 pid=4597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:38.292000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:38.308470 kernel: audit: type=1327 audit(1752669218.292:426): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:41.006320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4278904191.mount: Deactivated successfully. Jul 16 12:33:41.854339 env[1306]: time="2025-07-16T12:33:41.854232045Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:41.858505 env[1306]: time="2025-07-16T12:33:41.856754692Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:41.862422 env[1306]: time="2025-07-16T12:33:41.862340915Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:41.868708 env[1306]: time="2025-07-16T12:33:41.866424338Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:41.868708 env[1306]: time="2025-07-16T12:33:41.867945982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 16 12:33:41.894925 env[1306]: time="2025-07-16T12:33:41.894032130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 16 12:33:41.920370 env[1306]: time="2025-07-16T12:33:41.920272065Z" level=info msg="CreateContainer within sandbox \"1767a3b9c74a720b2daf2b4d9bf441b6ff95e6e9c647a6f148671c4249ddd377\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 16 12:33:41.956880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2551837059.mount: Deactivated successfully. 
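The kubelet pod_startup_latency_tracker lines above report two durations per pod: podStartE2EDuration (observed running time minus the pod's creation timestamp) and podStartSLOduration, which, going by the figures here, additionally discounts the time spent pulling images (the coredns entries report identical values because their pull timestamps are zero). A quick check against the whisker-5d8559b6d9-4bwfz entry, with timestamps copied from the log; the ts() helper is ad hoc, not kubelet code, and the kubelet's own bookkeeping differs by well under a millisecond, as the residual shows:

    from datetime import datetime, timezone

    def ts(s: str) -> float:
        """Parse log timestamps like '2025-07-16 12:33:38.23659123 +0000 UTC'
        (fractional part optional) into epoch seconds."""
        base, _, _ = s.partition(" +")                 # drop the ' +0000 UTC' suffix
        whole, _, frac = base.partition(".")
        t = datetime.strptime(whole, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
        return t.timestamp() + float("0." + (frac or "0"))

    created   = ts("2025-07-16 12:33:27 +0000 UTC")            # podCreationTimestamp
    pull_from = ts("2025-07-16 12:33:27.989608612 +0000 UTC")  # firstStartedPulling
    pull_to   = ts("2025-07-16 12:33:37.592952205 +0000 UTC")  # lastFinishedPulling
    running   = ts("2025-07-16 12:33:38.23659123 +0000 UTC")   # watchObservedRunningTime

    e2e = running - created              # ~11.2366 s -> podStartE2EDuration="11.23659123s"
    slo = e2e - (pull_to - pull_from)    # ~1.6332 s  -> podStartSLOduration=1.633247488
    print(f"E2E={e2e:.6f}s  SLO (excluding image pulls)={slo:.6f}s")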
Jul 16 12:33:41.967836 env[1306]: time="2025-07-16T12:33:41.967787927Z" level=info msg="CreateContainer within sandbox \"1767a3b9c74a720b2daf2b4d9bf441b6ff95e6e9c647a6f148671c4249ddd377\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"fc56ef80903e360a57bf0a61b253ba4c2642b9cb894d88f610690ea9139b8f01\"" Jul 16 12:33:41.970347 env[1306]: time="2025-07-16T12:33:41.970318831Z" level=info msg="StartContainer for \"fc56ef80903e360a57bf0a61b253ba4c2642b9cb894d88f610690ea9139b8f01\"" Jul 16 12:33:42.082281 env[1306]: time="2025-07-16T12:33:42.082224850Z" level=info msg="StartContainer for \"fc56ef80903e360a57bf0a61b253ba4c2642b9cb894d88f610690ea9139b8f01\" returns successfully" Jul 16 12:33:42.279000 audit[4644]: NETFILTER_CFG table=filter:116 family=2 entries=12 op=nft_register_rule pid=4644 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:42.279000 audit[4644]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd650d2d40 a2=0 a3=7ffd650d2d2c items=0 ppid=2334 pid=4644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:42.295163 kernel: audit: type=1325 audit(1752669222.279:427): table=filter:116 family=2 entries=12 op=nft_register_rule pid=4644 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:42.295507 kernel: audit: type=1300 audit(1752669222.279:427): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd650d2d40 a2=0 a3=7ffd650d2d2c items=0 ppid=2334 pid=4644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:42.301215 kernel: audit: type=1327 audit(1752669222.279:427): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:42.301266 kernel: audit: type=1325 audit(1752669222.295:428): table=nat:117 family=2 entries=22 op=nft_register_rule pid=4644 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:42.279000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:42.295000 audit[4644]: NETFILTER_CFG table=nat:117 family=2 entries=22 op=nft_register_rule pid=4644 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:42.295000 audit[4644]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffd650d2d40 a2=0 a3=7ffd650d2d2c items=0 ppid=2334 pid=4644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:42.295000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:42.955649 systemd[1]: run-containerd-runc-k8s.io-fc56ef80903e360a57bf0a61b253ba4c2642b9cb894d88f610690ea9139b8f01-runc.KFaNS8.mount: Deactivated successfully. Jul 16 12:33:43.199005 systemd[1]: run-containerd-runc-k8s.io-702e814bc9b28efb7a610e8566f5593b10dc3c0a78903f8443fc9c4dc85c8216-runc.nzfrNi.mount: Deactivated successfully. 
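The audit PROCTITLE fields in the netfilter events above carry the triggering command line as NUL-separated hex (interpreting viewers such as ausearch -i render it the same way). A short sketch that decodes the value recorded for the iptables-restor events, with the hex copied verbatim from the log:

    # Hex string copied from the PROCTITLE records above.
    hexstr = ("69707461626C65732D726573746F7265002D770035002D5700313030303030"
              "002D2D6E6F666C757368002D2D636F756E74657273")
    argv = [a.decode() for a in bytes.fromhex(hexstr).split(b"\x00")]
    print(argv)
    # ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']
    # The iptables-nft-re events' proctitle decodes the same way to
    # ['iptables-nft-restore', '--noflush', '--verbose', '--wait', '10', '--wait-interval', '50000'].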
Jul 16 12:33:43.604845 kubelet[2183]: I0716 12:33:43.597205 2183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-qgcq4" podStartSLOduration=29.017418585 podStartE2EDuration="40.589104862s" podCreationTimestamp="2025-07-16 12:33:03 +0000 UTC" firstStartedPulling="2025-07-16 12:33:30.316502489 +0000 UTC m=+46.746544138" lastFinishedPulling="2025-07-16 12:33:41.888188797 +0000 UTC m=+58.318230415" observedRunningTime="2025-07-16 12:33:42.250728124 +0000 UTC m=+58.680769768" watchObservedRunningTime="2025-07-16 12:33:43.589104862 +0000 UTC m=+60.019146505" Jul 16 12:33:43.761198 env[1306]: time="2025-07-16T12:33:43.760769802Z" level=info msg="StopPodSandbox for \"11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c\"" Jul 16 12:33:43.959152 systemd[1]: run-containerd-runc-k8s.io-fc56ef80903e360a57bf0a61b253ba4c2642b9cb894d88f610690ea9139b8f01-runc.hd7b4o.mount: Deactivated successfully. Jul 16 12:33:44.394986 env[1306]: 2025-07-16 12:33:44.035 [WARNING][4701] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--7qrcc-eth0", GenerateName:"calico-apiserver-66fbfc9dbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"927b7472-bad5-428f-9ac3-ea2fe33052e3", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 33, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66fbfc9dbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"f3679c4331dbd943929a7af4cb8e5556d638aa6d69d24931e85ed6aa97dfb774", Pod:"calico-apiserver-66fbfc9dbd-7qrcc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali63b782dfffe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:44.394986 env[1306]: 2025-07-16 12:33:44.036 [INFO][4701] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" Jul 16 12:33:44.394986 env[1306]: 2025-07-16 12:33:44.037 [INFO][4701] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" iface="eth0" netns="" Jul 16 12:33:44.394986 env[1306]: 2025-07-16 12:33:44.037 [INFO][4701] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" Jul 16 12:33:44.394986 env[1306]: 2025-07-16 12:33:44.037 [INFO][4701] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" Jul 16 12:33:44.394986 env[1306]: 2025-07-16 12:33:44.315 [INFO][4710] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" HandleID="k8s-pod-network.11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--7qrcc-eth0" Jul 16 12:33:44.394986 env[1306]: 2025-07-16 12:33:44.331 [INFO][4710] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:44.394986 env[1306]: 2025-07-16 12:33:44.332 [INFO][4710] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 12:33:44.394986 env[1306]: 2025-07-16 12:33:44.365 [WARNING][4710] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" HandleID="k8s-pod-network.11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--7qrcc-eth0" Jul 16 12:33:44.394986 env[1306]: 2025-07-16 12:33:44.365 [INFO][4710] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" HandleID="k8s-pod-network.11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--7qrcc-eth0" Jul 16 12:33:44.394986 env[1306]: 2025-07-16 12:33:44.367 [INFO][4710] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 12:33:44.394986 env[1306]: 2025-07-16 12:33:44.383 [INFO][4701] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" Jul 16 12:33:44.394986 env[1306]: time="2025-07-16T12:33:44.388256708Z" level=info msg="TearDown network for sandbox \"11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c\" successfully" Jul 16 12:33:44.394986 env[1306]: time="2025-07-16T12:33:44.388286226Z" level=info msg="StopPodSandbox for \"11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c\" returns successfully" Jul 16 12:33:44.439634 env[1306]: time="2025-07-16T12:33:44.439597555Z" level=info msg="RemovePodSandbox for \"11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c\"" Jul 16 12:33:44.439879 env[1306]: time="2025-07-16T12:33:44.439825566Z" level=info msg="Forcibly stopping sandbox \"11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c\"" Jul 16 12:33:44.453695 systemd[1]: run-containerd-runc-k8s.io-fc56ef80903e360a57bf0a61b253ba4c2642b9cb894d88f610690ea9139b8f01-runc.EhsVZf.mount: Deactivated successfully. Jul 16 12:33:44.690486 env[1306]: 2025-07-16 12:33:44.580 [WARNING][4733] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--7qrcc-eth0", GenerateName:"calico-apiserver-66fbfc9dbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"927b7472-bad5-428f-9ac3-ea2fe33052e3", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 33, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66fbfc9dbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"f3679c4331dbd943929a7af4cb8e5556d638aa6d69d24931e85ed6aa97dfb774", Pod:"calico-apiserver-66fbfc9dbd-7qrcc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali63b782dfffe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:44.690486 env[1306]: 2025-07-16 12:33:44.580 [INFO][4733] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" Jul 16 12:33:44.690486 env[1306]: 2025-07-16 12:33:44.580 [INFO][4733] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" iface="eth0" netns="" Jul 16 12:33:44.690486 env[1306]: 2025-07-16 12:33:44.580 [INFO][4733] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" Jul 16 12:33:44.690486 env[1306]: 2025-07-16 12:33:44.580 [INFO][4733] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" Jul 16 12:33:44.690486 env[1306]: 2025-07-16 12:33:44.672 [INFO][4747] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" HandleID="k8s-pod-network.11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--7qrcc-eth0" Jul 16 12:33:44.690486 env[1306]: 2025-07-16 12:33:44.673 [INFO][4747] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:44.690486 env[1306]: 2025-07-16 12:33:44.673 [INFO][4747] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 12:33:44.690486 env[1306]: 2025-07-16 12:33:44.680 [WARNING][4747] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" HandleID="k8s-pod-network.11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--7qrcc-eth0" Jul 16 12:33:44.690486 env[1306]: 2025-07-16 12:33:44.680 [INFO][4747] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" HandleID="k8s-pod-network.11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--7qrcc-eth0" Jul 16 12:33:44.690486 env[1306]: 2025-07-16 12:33:44.683 [INFO][4747] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 12:33:44.690486 env[1306]: 2025-07-16 12:33:44.687 [INFO][4733] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c" Jul 16 12:33:44.692129 env[1306]: time="2025-07-16T12:33:44.691067576Z" level=info msg="TearDown network for sandbox \"11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c\" successfully" Jul 16 12:33:44.699003 env[1306]: time="2025-07-16T12:33:44.698911738Z" level=info msg="RemovePodSandbox \"11e89da5b6c2eb074cf86ccaec73c7a2240c745742446cc8d95272274deec69c\" returns successfully" Jul 16 12:33:44.699982 env[1306]: time="2025-07-16T12:33:44.699832433Z" level=info msg="StopPodSandbox for \"805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b\"" Jul 16 12:33:44.886698 env[1306]: 2025-07-16 12:33:44.805 [WARNING][4765] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-csi--node--driver--b22q6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"58f808a6-a7a4-4400-b1f3-561a7728fef5", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 33, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"192ddec5c7acb557cc005567271e89429e36e491ba80a9ff91aebc582b9258fa", Pod:"csi-node-driver-b22q6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie965d0ee399", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:44.886698 env[1306]: 2025-07-16 12:33:44.806 [INFO][4765] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" Jul 16 12:33:44.886698 env[1306]: 2025-07-16 12:33:44.806 [INFO][4765] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" iface="eth0" netns="" Jul 16 12:33:44.886698 env[1306]: 2025-07-16 12:33:44.806 [INFO][4765] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" Jul 16 12:33:44.886698 env[1306]: 2025-07-16 12:33:44.806 [INFO][4765] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" Jul 16 12:33:44.886698 env[1306]: 2025-07-16 12:33:44.853 [INFO][4772] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" HandleID="k8s-pod-network.805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" Workload="srv--f25or.gb1.brightbox.com-k8s-csi--node--driver--b22q6-eth0" Jul 16 12:33:44.886698 env[1306]: 2025-07-16 12:33:44.854 [INFO][4772] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:44.886698 env[1306]: 2025-07-16 12:33:44.855 [INFO][4772] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 12:33:44.886698 env[1306]: 2025-07-16 12:33:44.865 [WARNING][4772] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" HandleID="k8s-pod-network.805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" Workload="srv--f25or.gb1.brightbox.com-k8s-csi--node--driver--b22q6-eth0" Jul 16 12:33:44.886698 env[1306]: 2025-07-16 12:33:44.865 [INFO][4772] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" HandleID="k8s-pod-network.805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" Workload="srv--f25or.gb1.brightbox.com-k8s-csi--node--driver--b22q6-eth0" Jul 16 12:33:44.886698 env[1306]: 2025-07-16 12:33:44.867 [INFO][4772] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 12:33:44.886698 env[1306]: 2025-07-16 12:33:44.872 [INFO][4765] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" Jul 16 12:33:44.886698 env[1306]: time="2025-07-16T12:33:44.876127921Z" level=info msg="TearDown network for sandbox \"805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b\" successfully" Jul 16 12:33:44.886698 env[1306]: time="2025-07-16T12:33:44.876156681Z" level=info msg="StopPodSandbox for \"805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b\" returns successfully" Jul 16 12:33:44.890007 env[1306]: time="2025-07-16T12:33:44.889786669Z" level=info msg="RemovePodSandbox for \"805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b\"" Jul 16 12:33:44.890007 env[1306]: time="2025-07-16T12:33:44.889829222Z" level=info msg="Forcibly stopping sandbox \"805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b\"" Jul 16 12:33:45.281160 env[1306]: 2025-07-16 12:33:45.014 [WARNING][4786] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-csi--node--driver--b22q6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"58f808a6-a7a4-4400-b1f3-561a7728fef5", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 33, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"192ddec5c7acb557cc005567271e89429e36e491ba80a9ff91aebc582b9258fa", Pod:"csi-node-driver-b22q6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie965d0ee399", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:45.281160 env[1306]: 2025-07-16 12:33:45.014 [INFO][4786] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" Jul 16 12:33:45.281160 env[1306]: 2025-07-16 12:33:45.014 [INFO][4786] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" iface="eth0" netns="" Jul 16 12:33:45.281160 env[1306]: 2025-07-16 12:33:45.014 [INFO][4786] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" Jul 16 12:33:45.281160 env[1306]: 2025-07-16 12:33:45.014 [INFO][4786] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" Jul 16 12:33:45.281160 env[1306]: 2025-07-16 12:33:45.221 [INFO][4793] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" HandleID="k8s-pod-network.805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" Workload="srv--f25or.gb1.brightbox.com-k8s-csi--node--driver--b22q6-eth0" Jul 16 12:33:45.281160 env[1306]: 2025-07-16 12:33:45.226 [INFO][4793] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:45.281160 env[1306]: 2025-07-16 12:33:45.227 [INFO][4793] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 12:33:45.281160 env[1306]: 2025-07-16 12:33:45.257 [WARNING][4793] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" HandleID="k8s-pod-network.805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" Workload="srv--f25or.gb1.brightbox.com-k8s-csi--node--driver--b22q6-eth0" Jul 16 12:33:45.281160 env[1306]: 2025-07-16 12:33:45.257 [INFO][4793] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" HandleID="k8s-pod-network.805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" Workload="srv--f25or.gb1.brightbox.com-k8s-csi--node--driver--b22q6-eth0" Jul 16 12:33:45.281160 env[1306]: 2025-07-16 12:33:45.260 [INFO][4793] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 12:33:45.281160 env[1306]: 2025-07-16 12:33:45.267 [INFO][4786] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b" Jul 16 12:33:45.281160 env[1306]: time="2025-07-16T12:33:45.280842558Z" level=info msg="TearDown network for sandbox \"805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b\" successfully" Jul 16 12:33:45.287508 env[1306]: time="2025-07-16T12:33:45.287475340Z" level=info msg="RemovePodSandbox \"805a824932a5d3730822f5958e6169a0060a38f16cf2e339d7d32cdb47cf126b\" returns successfully" Jul 16 12:33:45.499521 env[1306]: time="2025-07-16T12:33:45.498394726Z" level=info msg="StopPodSandbox for \"4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6\"" Jul 16 12:33:45.760196 env[1306]: 2025-07-16 12:33:45.690 [WARNING][4808] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--4vckm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2e94965a-1e31-4dae-9e6f-fa9636bb6e1e", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 32, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"a4a7841514126c2bf66bd715de47758199bb05821fc1e1ba4c5e91c76643adfb", Pod:"coredns-7c65d6cfc9-4vckm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22b83c5615e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:45.760196 env[1306]: 2025-07-16 12:33:45.691 [INFO][4808] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" Jul 16 12:33:45.760196 env[1306]: 2025-07-16 12:33:45.691 [INFO][4808] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" iface="eth0" netns="" Jul 16 12:33:45.760196 env[1306]: 2025-07-16 12:33:45.691 [INFO][4808] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" Jul 16 12:33:45.760196 env[1306]: 2025-07-16 12:33:45.691 [INFO][4808] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" Jul 16 12:33:45.760196 env[1306]: 2025-07-16 12:33:45.744 [INFO][4816] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" HandleID="k8s-pod-network.4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" Workload="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--4vckm-eth0" Jul 16 12:33:45.760196 env[1306]: 2025-07-16 12:33:45.745 [INFO][4816] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:45.760196 env[1306]: 2025-07-16 12:33:45.745 [INFO][4816] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 12:33:45.760196 env[1306]: 2025-07-16 12:33:45.751 [WARNING][4816] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" HandleID="k8s-pod-network.4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" Workload="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--4vckm-eth0" Jul 16 12:33:45.760196 env[1306]: 2025-07-16 12:33:45.751 [INFO][4816] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" HandleID="k8s-pod-network.4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" Workload="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--4vckm-eth0" Jul 16 12:33:45.760196 env[1306]: 2025-07-16 12:33:45.753 [INFO][4816] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 12:33:45.760196 env[1306]: 2025-07-16 12:33:45.756 [INFO][4808] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" Jul 16 12:33:45.762293 env[1306]: time="2025-07-16T12:33:45.760248637Z" level=info msg="TearDown network for sandbox \"4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6\" successfully" Jul 16 12:33:45.762293 env[1306]: time="2025-07-16T12:33:45.760282100Z" level=info msg="StopPodSandbox for \"4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6\" returns successfully" Jul 16 12:33:45.762293 env[1306]: time="2025-07-16T12:33:45.760736061Z" level=info msg="RemovePodSandbox for \"4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6\"" Jul 16 12:33:45.762293 env[1306]: time="2025-07-16T12:33:45.760765072Z" level=info msg="Forcibly stopping sandbox \"4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6\"" Jul 16 12:33:45.909832 kernel: kauditd_printk_skb: 2 callbacks suppressed Jul 16 12:33:45.938918 kernel: audit: type=1325 audit(1752669225.898:429): table=filter:118 family=2 entries=11 op=nft_register_rule pid=4848 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:45.942501 kernel: audit: type=1300 audit(1752669225.898:429): arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffd6801eec0 a2=0 a3=7ffd6801eeac items=0 ppid=2334 pid=4848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:45.944723 kernel: audit: type=1327 audit(1752669225.898:429): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:45.898000 audit[4848]: NETFILTER_CFG table=filter:118 family=2 entries=11 op=nft_register_rule pid=4848 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:45.962725 kernel: audit: type=1325 audit(1752669225.911:430): table=nat:119 family=2 entries=29 op=nft_register_chain pid=4848 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:45.898000 audit[4848]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffd6801eec0 a2=0 a3=7ffd6801eeac items=0 ppid=2334 pid=4848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:45.898000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:45.911000 audit[4848]: NETFILTER_CFG table=nat:119 family=2 entries=29 op=nft_register_chain pid=4848 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:45.911000 audit[4848]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7ffd6801eec0 a2=0 a3=7ffd6801eeac items=0 ppid=2334 pid=4848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:45.955198 systemd[1]: run-containerd-runc-k8s.io-fab454efbfe4481fa747ba040a62985a98a975ecc0f79a35d3fcd73c6e4343cd-runc.QtyLTH.mount: Deactivated successfully. 
Jul 16 12:33:45.970748 kernel: audit: type=1300 audit(1752669225.911:430): arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7ffd6801eec0 a2=0 a3=7ffd6801eeac items=0 ppid=2334 pid=4848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:45.983704 kernel: audit: type=1327 audit(1752669225.911:430): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:45.911000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:46.011599 systemd[1]: run-containerd-runc-k8s.io-fc56ef80903e360a57bf0a61b253ba4c2642b9cb894d88f610690ea9139b8f01-runc.vufc6c.mount: Deactivated successfully. Jul 16 12:33:46.136899 env[1306]: 2025-07-16 12:33:45.895 [WARNING][4830] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--4vckm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2e94965a-1e31-4dae-9e6f-fa9636bb6e1e", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 32, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"a4a7841514126c2bf66bd715de47758199bb05821fc1e1ba4c5e91c76643adfb", Pod:"coredns-7c65d6cfc9-4vckm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22b83c5615e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:46.136899 env[1306]: 2025-07-16 12:33:45.896 [INFO][4830] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" Jul 16 12:33:46.136899 env[1306]: 2025-07-16 12:33:45.896 [INFO][4830] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" iface="eth0" netns="" Jul 16 12:33:46.136899 env[1306]: 2025-07-16 12:33:45.896 [INFO][4830] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" Jul 16 12:33:46.136899 env[1306]: 2025-07-16 12:33:45.896 [INFO][4830] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" Jul 16 12:33:46.136899 env[1306]: 2025-07-16 12:33:46.101 [INFO][4862] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" HandleID="k8s-pod-network.4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" Workload="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--4vckm-eth0" Jul 16 12:33:46.136899 env[1306]: 2025-07-16 12:33:46.102 [INFO][4862] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:46.136899 env[1306]: 2025-07-16 12:33:46.102 [INFO][4862] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 12:33:46.136899 env[1306]: 2025-07-16 12:33:46.121 [WARNING][4862] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" HandleID="k8s-pod-network.4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" Workload="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--4vckm-eth0" Jul 16 12:33:46.136899 env[1306]: 2025-07-16 12:33:46.121 [INFO][4862] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" HandleID="k8s-pod-network.4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" Workload="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--4vckm-eth0" Jul 16 12:33:46.136899 env[1306]: 2025-07-16 12:33:46.123 [INFO][4862] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 12:33:46.136899 env[1306]: 2025-07-16 12:33:46.128 [INFO][4830] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6" Jul 16 12:33:46.136899 env[1306]: time="2025-07-16T12:33:46.133866285Z" level=info msg="TearDown network for sandbox \"4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6\" successfully" Jul 16 12:33:46.140620 env[1306]: time="2025-07-16T12:33:46.139078985Z" level=info msg="RemovePodSandbox \"4c36594d2306a403edfb3ae357c3f48f0845d54a9a3a80c7bd1a3f2793563ac6\" returns successfully" Jul 16 12:33:46.140620 env[1306]: time="2025-07-16T12:33:46.139786763Z" level=info msg="StopPodSandbox for \"dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252\"" Jul 16 12:33:46.409333 env[1306]: 2025-07-16 12:33:46.273 [WARNING][4889] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-goldmane--58fd7646b9--qgcq4-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"ccdc85c1-7be8-495d-bb85-55cf9bd3cfe5", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 33, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"1767a3b9c74a720b2daf2b4d9bf441b6ff95e6e9c647a6f148671c4249ddd377", Pod:"goldmane-58fd7646b9-qgcq4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.20.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4eb1dfe2f64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:46.409333 env[1306]: 2025-07-16 12:33:46.273 [INFO][4889] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" Jul 16 12:33:46.409333 env[1306]: 2025-07-16 12:33:46.273 [INFO][4889] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" iface="eth0" netns="" Jul 16 12:33:46.409333 env[1306]: 2025-07-16 12:33:46.273 [INFO][4889] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" Jul 16 12:33:46.409333 env[1306]: 2025-07-16 12:33:46.273 [INFO][4889] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" Jul 16 12:33:46.409333 env[1306]: 2025-07-16 12:33:46.346 [INFO][4900] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" HandleID="k8s-pod-network.dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" Workload="srv--f25or.gb1.brightbox.com-k8s-goldmane--58fd7646b9--qgcq4-eth0" Jul 16 12:33:46.409333 env[1306]: 2025-07-16 12:33:46.346 [INFO][4900] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:46.409333 env[1306]: 2025-07-16 12:33:46.347 [INFO][4900] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 12:33:46.409333 env[1306]: 2025-07-16 12:33:46.358 [WARNING][4900] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" HandleID="k8s-pod-network.dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" Workload="srv--f25or.gb1.brightbox.com-k8s-goldmane--58fd7646b9--qgcq4-eth0" Jul 16 12:33:46.409333 env[1306]: 2025-07-16 12:33:46.358 [INFO][4900] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" HandleID="k8s-pod-network.dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" Workload="srv--f25or.gb1.brightbox.com-k8s-goldmane--58fd7646b9--qgcq4-eth0" Jul 16 12:33:46.409333 env[1306]: 2025-07-16 12:33:46.388 [INFO][4900] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 12:33:46.409333 env[1306]: 2025-07-16 12:33:46.391 [INFO][4889] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" Jul 16 12:33:46.409333 env[1306]: time="2025-07-16T12:33:46.400029909Z" level=info msg="TearDown network for sandbox \"dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252\" successfully" Jul 16 12:33:46.409333 env[1306]: time="2025-07-16T12:33:46.400072142Z" level=info msg="StopPodSandbox for \"dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252\" returns successfully" Jul 16 12:33:46.409333 env[1306]: time="2025-07-16T12:33:46.400508203Z" level=info msg="RemovePodSandbox for \"dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252\"" Jul 16 12:33:46.409333 env[1306]: time="2025-07-16T12:33:46.400534015Z" level=info msg="Forcibly stopping sandbox \"dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252\"" Jul 16 12:33:46.595768 env[1306]: 2025-07-16 12:33:46.516 [WARNING][4915] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-goldmane--58fd7646b9--qgcq4-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"ccdc85c1-7be8-495d-bb85-55cf9bd3cfe5", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 33, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"1767a3b9c74a720b2daf2b4d9bf441b6ff95e6e9c647a6f148671c4249ddd377", Pod:"goldmane-58fd7646b9-qgcq4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.20.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4eb1dfe2f64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:46.595768 env[1306]: 2025-07-16 12:33:46.516 [INFO][4915] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" Jul 16 12:33:46.595768 env[1306]: 2025-07-16 12:33:46.516 [INFO][4915] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" iface="eth0" netns="" Jul 16 12:33:46.595768 env[1306]: 2025-07-16 12:33:46.516 [INFO][4915] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" Jul 16 12:33:46.595768 env[1306]: 2025-07-16 12:33:46.516 [INFO][4915] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" Jul 16 12:33:46.595768 env[1306]: 2025-07-16 12:33:46.576 [INFO][4922] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" HandleID="k8s-pod-network.dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" Workload="srv--f25or.gb1.brightbox.com-k8s-goldmane--58fd7646b9--qgcq4-eth0" Jul 16 12:33:46.595768 env[1306]: 2025-07-16 12:33:46.576 [INFO][4922] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:46.595768 env[1306]: 2025-07-16 12:33:46.576 [INFO][4922] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 12:33:46.595768 env[1306]: 2025-07-16 12:33:46.587 [WARNING][4922] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" HandleID="k8s-pod-network.dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" Workload="srv--f25or.gb1.brightbox.com-k8s-goldmane--58fd7646b9--qgcq4-eth0" Jul 16 12:33:46.595768 env[1306]: 2025-07-16 12:33:46.587 [INFO][4922] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" HandleID="k8s-pod-network.dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" Workload="srv--f25or.gb1.brightbox.com-k8s-goldmane--58fd7646b9--qgcq4-eth0" Jul 16 12:33:46.595768 env[1306]: 2025-07-16 12:33:46.591 [INFO][4922] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 12:33:46.595768 env[1306]: 2025-07-16 12:33:46.593 [INFO][4915] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252" Jul 16 12:33:46.598776 env[1306]: time="2025-07-16T12:33:46.595801129Z" level=info msg="TearDown network for sandbox \"dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252\" successfully" Jul 16 12:33:46.604575 env[1306]: time="2025-07-16T12:33:46.604297341Z" level=info msg="RemovePodSandbox \"dc5a6bc05f9b38150d11be11fec5a2e699e61aee5bf92b7373421f35d1f1b252\" returns successfully" Jul 16 12:33:46.605392 env[1306]: time="2025-07-16T12:33:46.604938675Z" level=info msg="StopPodSandbox for \"09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a\"" Jul 16 12:33:46.783891 env[1306]: 2025-07-16 12:33:46.717 [WARNING][4937] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-calico--kube--controllers--64fd67f98d--kkwfb-eth0", GenerateName:"calico-kube-controllers-64fd67f98d-", Namespace:"calico-system", SelfLink:"", UID:"c753503c-8c32-4abd-adf0-0419aba85c9d", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 33, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64fd67f98d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"ccc6093c7c10b84ca2b40d4f234646627043ba02038bddf016fe223c94915a21", Pod:"calico-kube-controllers-64fd67f98d-kkwfb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.20.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic4b3a35b92c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:46.783891 env[1306]: 2025-07-16 12:33:46.717 [INFO][4937] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" Jul 16 12:33:46.783891 env[1306]: 2025-07-16 12:33:46.717 [INFO][4937] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" iface="eth0" netns="" Jul 16 12:33:46.783891 env[1306]: 2025-07-16 12:33:46.717 [INFO][4937] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" Jul 16 12:33:46.783891 env[1306]: 2025-07-16 12:33:46.717 [INFO][4937] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" Jul 16 12:33:46.783891 env[1306]: 2025-07-16 12:33:46.758 [INFO][4944] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" HandleID="k8s-pod-network.09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--kube--controllers--64fd67f98d--kkwfb-eth0" Jul 16 12:33:46.783891 env[1306]: 2025-07-16 12:33:46.758 [INFO][4944] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:46.783891 env[1306]: 2025-07-16 12:33:46.758 [INFO][4944] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 12:33:46.783891 env[1306]: 2025-07-16 12:33:46.767 [WARNING][4944] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" HandleID="k8s-pod-network.09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--kube--controllers--64fd67f98d--kkwfb-eth0" Jul 16 12:33:46.783891 env[1306]: 2025-07-16 12:33:46.767 [INFO][4944] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" HandleID="k8s-pod-network.09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--kube--controllers--64fd67f98d--kkwfb-eth0" Jul 16 12:33:46.783891 env[1306]: 2025-07-16 12:33:46.770 [INFO][4944] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 12:33:46.783891 env[1306]: 2025-07-16 12:33:46.776 [INFO][4937] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" Jul 16 12:33:46.783891 env[1306]: time="2025-07-16T12:33:46.783585101Z" level=info msg="TearDown network for sandbox \"09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a\" successfully" Jul 16 12:33:46.783891 env[1306]: time="2025-07-16T12:33:46.783618852Z" level=info msg="StopPodSandbox for \"09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a\" returns successfully" Jul 16 12:33:46.806278 env[1306]: time="2025-07-16T12:33:46.806233088Z" level=info msg="RemovePodSandbox for \"09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a\"" Jul 16 12:33:46.806469 env[1306]: time="2025-07-16T12:33:46.806278633Z" level=info msg="Forcibly stopping sandbox \"09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a\"" Jul 16 12:33:47.049609 env[1306]: 2025-07-16 12:33:46.940 [WARNING][4960] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-calico--kube--controllers--64fd67f98d--kkwfb-eth0", GenerateName:"calico-kube-controllers-64fd67f98d-", Namespace:"calico-system", SelfLink:"", UID:"c753503c-8c32-4abd-adf0-0419aba85c9d", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 33, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64fd67f98d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"ccc6093c7c10b84ca2b40d4f234646627043ba02038bddf016fe223c94915a21", Pod:"calico-kube-controllers-64fd67f98d-kkwfb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.20.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic4b3a35b92c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:47.049609 env[1306]: 2025-07-16 12:33:46.940 [INFO][4960] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" Jul 16 12:33:47.049609 env[1306]: 2025-07-16 12:33:46.940 [INFO][4960] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" iface="eth0" netns="" Jul 16 12:33:47.049609 env[1306]: 2025-07-16 12:33:46.940 [INFO][4960] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" Jul 16 12:33:47.049609 env[1306]: 2025-07-16 12:33:46.940 [INFO][4960] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" Jul 16 12:33:47.049609 env[1306]: 2025-07-16 12:33:46.997 [INFO][4967] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" HandleID="k8s-pod-network.09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--kube--controllers--64fd67f98d--kkwfb-eth0" Jul 16 12:33:47.049609 env[1306]: 2025-07-16 12:33:46.997 [INFO][4967] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:47.049609 env[1306]: 2025-07-16 12:33:46.997 [INFO][4967] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 12:33:47.049609 env[1306]: 2025-07-16 12:33:47.036 [WARNING][4967] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" HandleID="k8s-pod-network.09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--kube--controllers--64fd67f98d--kkwfb-eth0" Jul 16 12:33:47.049609 env[1306]: 2025-07-16 12:33:47.036 [INFO][4967] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" HandleID="k8s-pod-network.09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--kube--controllers--64fd67f98d--kkwfb-eth0" Jul 16 12:33:47.049609 env[1306]: 2025-07-16 12:33:47.038 [INFO][4967] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 12:33:47.049609 env[1306]: 2025-07-16 12:33:47.040 [INFO][4960] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a" Jul 16 12:33:47.049609 env[1306]: time="2025-07-16T12:33:47.042378992Z" level=info msg="TearDown network for sandbox \"09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a\" successfully" Jul 16 12:33:47.049609 env[1306]: time="2025-07-16T12:33:47.045549083Z" level=info msg="RemovePodSandbox \"09cefd71fd403974af9438079acc54ee787d2334171c9e9736a0d79979f6de2a\" returns successfully" Jul 16 12:33:47.056054 env[1306]: time="2025-07-16T12:33:47.049711994Z" level=info msg="StopPodSandbox for \"bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733\"" Jul 16 12:33:47.385732 env[1306]: 2025-07-16 12:33:47.244 [WARNING][4983] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--psh6r-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"18b5ae42-266d-4760-b2fc-63a368dee70d", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 32, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"dd372419f2b0a3d8a2fa3c357c0669a78242e0ecbc508a279363fb61770448f8", Pod:"coredns-7c65d6cfc9-psh6r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali76677a5b7d5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, 
NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:47.385732 env[1306]: 2025-07-16 12:33:47.244 [INFO][4983] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" Jul 16 12:33:47.385732 env[1306]: 2025-07-16 12:33:47.244 [INFO][4983] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" iface="eth0" netns="" Jul 16 12:33:47.385732 env[1306]: 2025-07-16 12:33:47.244 [INFO][4983] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" Jul 16 12:33:47.385732 env[1306]: 2025-07-16 12:33:47.244 [INFO][4983] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" Jul 16 12:33:47.385732 env[1306]: 2025-07-16 12:33:47.348 [INFO][4990] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" HandleID="k8s-pod-network.bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" Workload="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--psh6r-eth0" Jul 16 12:33:47.385732 env[1306]: 2025-07-16 12:33:47.350 [INFO][4990] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:47.385732 env[1306]: 2025-07-16 12:33:47.351 [INFO][4990] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 12:33:47.385732 env[1306]: 2025-07-16 12:33:47.371 [WARNING][4990] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" HandleID="k8s-pod-network.bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" Workload="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--psh6r-eth0" Jul 16 12:33:47.385732 env[1306]: 2025-07-16 12:33:47.371 [INFO][4990] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" HandleID="k8s-pod-network.bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" Workload="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--psh6r-eth0" Jul 16 12:33:47.385732 env[1306]: 2025-07-16 12:33:47.380 [INFO][4990] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 12:33:47.385732 env[1306]: 2025-07-16 12:33:47.383 [INFO][4983] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" Jul 16 12:33:47.385732 env[1306]: time="2025-07-16T12:33:47.385160327Z" level=info msg="TearDown network for sandbox \"bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733\" successfully" Jul 16 12:33:47.385732 env[1306]: time="2025-07-16T12:33:47.385187777Z" level=info msg="StopPodSandbox for \"bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733\" returns successfully" Jul 16 12:33:47.388330 env[1306]: time="2025-07-16T12:33:47.388246956Z" level=info msg="RemovePodSandbox for \"bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733\"" Jul 16 12:33:47.388420 env[1306]: time="2025-07-16T12:33:47.388340122Z" level=info msg="Forcibly stopping sandbox \"bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733\"" Jul 16 12:33:47.391129 env[1306]: time="2025-07-16T12:33:47.391104272Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:47.393177 env[1306]: time="2025-07-16T12:33:47.393155130Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:47.394989 env[1306]: time="2025-07-16T12:33:47.394967387Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:47.396233 env[1306]: time="2025-07-16T12:33:47.396209142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 16 12:33:47.396878 env[1306]: time="2025-07-16T12:33:47.396857367Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:47.458836 env[1306]: time="2025-07-16T12:33:47.458799895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 16 12:33:47.546199 env[1306]: time="2025-07-16T12:33:47.545152159Z" level=info msg="CreateContainer within sandbox \"e0f14ccce301fc96e52057e94187dbef234545cc28a99432219a16f15dd3b5ab\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 16 12:33:47.574441 env[1306]: time="2025-07-16T12:33:47.574396022Z" level=info msg="CreateContainer within sandbox \"e0f14ccce301fc96e52057e94187dbef234545cc28a99432219a16f15dd3b5ab\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d12ca17ab069e2df87cf89f560be20e53c76b2dbf4a44e82368ffa591bc3054e\"" Jul 16 12:33:47.576809 env[1306]: time="2025-07-16T12:33:47.576766953Z" level=info msg="StartContainer for \"d12ca17ab069e2df87cf89f560be20e53c76b2dbf4a44e82368ffa591bc3054e\"" Jul 16 12:33:47.624089 env[1306]: 2025-07-16 12:33:47.505 [WARNING][5004] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--psh6r-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"18b5ae42-266d-4760-b2fc-63a368dee70d", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 32, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"dd372419f2b0a3d8a2fa3c357c0669a78242e0ecbc508a279363fb61770448f8", Pod:"coredns-7c65d6cfc9-psh6r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali76677a5b7d5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:47.624089 env[1306]: 2025-07-16 12:33:47.506 [INFO][5004] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" Jul 16 12:33:47.624089 env[1306]: 2025-07-16 12:33:47.506 [INFO][5004] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" iface="eth0" netns="" Jul 16 12:33:47.624089 env[1306]: 2025-07-16 12:33:47.506 [INFO][5004] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" Jul 16 12:33:47.624089 env[1306]: 2025-07-16 12:33:47.506 [INFO][5004] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" Jul 16 12:33:47.624089 env[1306]: 2025-07-16 12:33:47.580 [INFO][5011] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" HandleID="k8s-pod-network.bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" Workload="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--psh6r-eth0" Jul 16 12:33:47.624089 env[1306]: 2025-07-16 12:33:47.581 [INFO][5011] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:47.624089 env[1306]: 2025-07-16 12:33:47.581 [INFO][5011] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 16 12:33:47.624089 env[1306]: 2025-07-16 12:33:47.605 [WARNING][5011] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" HandleID="k8s-pod-network.bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" Workload="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--psh6r-eth0" Jul 16 12:33:47.624089 env[1306]: 2025-07-16 12:33:47.605 [INFO][5011] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" HandleID="k8s-pod-network.bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" Workload="srv--f25or.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--psh6r-eth0" Jul 16 12:33:47.624089 env[1306]: 2025-07-16 12:33:47.610 [INFO][5011] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 12:33:47.624089 env[1306]: 2025-07-16 12:33:47.620 [INFO][5004] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733" Jul 16 12:33:47.625408 env[1306]: time="2025-07-16T12:33:47.624679687Z" level=info msg="TearDown network for sandbox \"bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733\" successfully" Jul 16 12:33:47.633494 env[1306]: time="2025-07-16T12:33:47.633444227Z" level=info msg="RemovePodSandbox \"bb7bd8cac581ceff48e188876e662f988140e70ed4696a1714be56757272f733\" returns successfully" Jul 16 12:33:47.638218 env[1306]: time="2025-07-16T12:33:47.636540801Z" level=info msg="StopPodSandbox for \"c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a\"" Jul 16 12:33:47.651369 systemd[1]: run-containerd-runc-k8s.io-d12ca17ab069e2df87cf89f560be20e53c76b2dbf4a44e82368ffa591bc3054e-runc.nteIaM.mount: Deactivated successfully. Jul 16 12:33:47.843415 env[1306]: time="2025-07-16T12:33:47.843360549Z" level=info msg="StartContainer for \"d12ca17ab069e2df87cf89f560be20e53c76b2dbf4a44e82368ffa591bc3054e\" returns successfully" Jul 16 12:33:47.851741 env[1306]: 2025-07-16 12:33:47.765 [WARNING][5043] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-whisker--6ddf456bc9--fznn8-eth0" Jul 16 12:33:47.851741 env[1306]: 2025-07-16 12:33:47.765 [INFO][5043] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" Jul 16 12:33:47.851741 env[1306]: 2025-07-16 12:33:47.765 [INFO][5043] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" iface="eth0" netns="" Jul 16 12:33:47.851741 env[1306]: 2025-07-16 12:33:47.765 [INFO][5043] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" Jul 16 12:33:47.851741 env[1306]: 2025-07-16 12:33:47.765 [INFO][5043] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" Jul 16 12:33:47.851741 env[1306]: 2025-07-16 12:33:47.834 [INFO][5055] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" HandleID="k8s-pod-network.c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" Workload="srv--f25or.gb1.brightbox.com-k8s-whisker--6ddf456bc9--fznn8-eth0" Jul 16 12:33:47.851741 env[1306]: 2025-07-16 12:33:47.835 [INFO][5055] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:47.851741 env[1306]: 2025-07-16 12:33:47.836 [INFO][5055] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 12:33:47.851741 env[1306]: 2025-07-16 12:33:47.842 [WARNING][5055] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" HandleID="k8s-pod-network.c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" Workload="srv--f25or.gb1.brightbox.com-k8s-whisker--6ddf456bc9--fznn8-eth0" Jul 16 12:33:47.851741 env[1306]: 2025-07-16 12:33:47.843 [INFO][5055] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" HandleID="k8s-pod-network.c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" Workload="srv--f25or.gb1.brightbox.com-k8s-whisker--6ddf456bc9--fznn8-eth0" Jul 16 12:33:47.851741 env[1306]: 2025-07-16 12:33:47.846 [INFO][5055] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 12:33:47.851741 env[1306]: 2025-07-16 12:33:47.849 [INFO][5043] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" Jul 16 12:33:47.852322 env[1306]: time="2025-07-16T12:33:47.851778833Z" level=info msg="TearDown network for sandbox \"c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a\" successfully" Jul 16 12:33:47.852322 env[1306]: time="2025-07-16T12:33:47.851836779Z" level=info msg="StopPodSandbox for \"c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a\" returns successfully" Jul 16 12:33:47.852918 env[1306]: time="2025-07-16T12:33:47.852886041Z" level=info msg="RemovePodSandbox for \"c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a\"" Jul 16 12:33:47.853207 env[1306]: time="2025-07-16T12:33:47.853153789Z" level=info msg="Forcibly stopping sandbox \"c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a\"" Jul 16 12:33:48.068188 env[1306]: 2025-07-16 12:33:47.919 [WARNING][5076] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" WorkloadEndpoint="srv--f25or.gb1.brightbox.com-k8s-whisker--6ddf456bc9--fznn8-eth0" Jul 16 12:33:48.068188 env[1306]: 2025-07-16 12:33:47.920 [INFO][5076] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" Jul 16 12:33:48.068188 env[1306]: 2025-07-16 12:33:47.920 [INFO][5076] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" iface="eth0" netns="" Jul 16 12:33:48.068188 env[1306]: 2025-07-16 12:33:47.920 [INFO][5076] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" Jul 16 12:33:48.068188 env[1306]: 2025-07-16 12:33:47.920 [INFO][5076] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" Jul 16 12:33:48.068188 env[1306]: 2025-07-16 12:33:48.028 [INFO][5083] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" HandleID="k8s-pod-network.c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" Workload="srv--f25or.gb1.brightbox.com-k8s-whisker--6ddf456bc9--fznn8-eth0" Jul 16 12:33:48.068188 env[1306]: 2025-07-16 12:33:48.028 [INFO][5083] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:48.068188 env[1306]: 2025-07-16 12:33:48.029 [INFO][5083] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 12:33:48.068188 env[1306]: 2025-07-16 12:33:48.039 [WARNING][5083] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" HandleID="k8s-pod-network.c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" Workload="srv--f25or.gb1.brightbox.com-k8s-whisker--6ddf456bc9--fznn8-eth0" Jul 16 12:33:48.068188 env[1306]: 2025-07-16 12:33:48.039 [INFO][5083] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" HandleID="k8s-pod-network.c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" Workload="srv--f25or.gb1.brightbox.com-k8s-whisker--6ddf456bc9--fznn8-eth0" Jul 16 12:33:48.068188 env[1306]: 2025-07-16 12:33:48.048 [INFO][5083] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 12:33:48.068188 env[1306]: 2025-07-16 12:33:48.061 [INFO][5076] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a" Jul 16 12:33:48.070382 env[1306]: time="2025-07-16T12:33:48.070296303Z" level=info msg="TearDown network for sandbox \"c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a\" successfully" Jul 16 12:33:48.086050 env[1306]: time="2025-07-16T12:33:48.085983745Z" level=info msg="RemovePodSandbox \"c903c42c4988d6a45987d1590adbb26596f3fbc8ebced55bf7a7cccc70c5968a\" returns successfully" Jul 16 12:33:48.087109 env[1306]: time="2025-07-16T12:33:48.087087675Z" level=info msg="StopPodSandbox for \"73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2\"" Jul 16 12:33:48.186391 env[1306]: 2025-07-16 12:33:48.134 [WARNING][5102] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--f9x9w-eth0", GenerateName:"calico-apiserver-66fbfc9dbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"bf32dfb7-052c-4edd-885b-1571df3da4fa", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 33, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66fbfc9dbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"e0f14ccce301fc96e52057e94187dbef234545cc28a99432219a16f15dd3b5ab", Pod:"calico-apiserver-66fbfc9dbd-f9x9w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie40ac9a092c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:48.186391 env[1306]: 2025-07-16 12:33:48.135 [INFO][5102] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" Jul 16 12:33:48.186391 env[1306]: 2025-07-16 12:33:48.135 [INFO][5102] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" iface="eth0" netns="" Jul 16 12:33:48.186391 env[1306]: 2025-07-16 12:33:48.135 [INFO][5102] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" Jul 16 12:33:48.186391 env[1306]: 2025-07-16 12:33:48.135 [INFO][5102] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" Jul 16 12:33:48.186391 env[1306]: 2025-07-16 12:33:48.168 [INFO][5109] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" HandleID="k8s-pod-network.73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--f9x9w-eth0" Jul 16 12:33:48.186391 env[1306]: 2025-07-16 12:33:48.168 [INFO][5109] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:48.186391 env[1306]: 2025-07-16 12:33:48.168 [INFO][5109] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 12:33:48.186391 env[1306]: 2025-07-16 12:33:48.177 [WARNING][5109] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" HandleID="k8s-pod-network.73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--f9x9w-eth0" Jul 16 12:33:48.186391 env[1306]: 2025-07-16 12:33:48.177 [INFO][5109] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" HandleID="k8s-pod-network.73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--f9x9w-eth0" Jul 16 12:33:48.186391 env[1306]: 2025-07-16 12:33:48.182 [INFO][5109] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 12:33:48.186391 env[1306]: 2025-07-16 12:33:48.184 [INFO][5102] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" Jul 16 12:33:48.187643 env[1306]: time="2025-07-16T12:33:48.186567311Z" level=info msg="TearDown network for sandbox \"73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2\" successfully" Jul 16 12:33:48.187643 env[1306]: time="2025-07-16T12:33:48.186602675Z" level=info msg="StopPodSandbox for \"73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2\" returns successfully" Jul 16 12:33:48.187643 env[1306]: time="2025-07-16T12:33:48.187082118Z" level=info msg="RemovePodSandbox for \"73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2\"" Jul 16 12:33:48.187643 env[1306]: time="2025-07-16T12:33:48.187196298Z" level=info msg="Forcibly stopping sandbox \"73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2\"" Jul 16 12:33:48.341003 env[1306]: 2025-07-16 12:33:48.277 [WARNING][5123] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--f9x9w-eth0", GenerateName:"calico-apiserver-66fbfc9dbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"bf32dfb7-052c-4edd-885b-1571df3da4fa", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 12, 33, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66fbfc9dbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f25or.gb1.brightbox.com", ContainerID:"e0f14ccce301fc96e52057e94187dbef234545cc28a99432219a16f15dd3b5ab", Pod:"calico-apiserver-66fbfc9dbd-f9x9w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie40ac9a092c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 12:33:48.341003 env[1306]: 2025-07-16 12:33:48.277 [INFO][5123] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" Jul 16 12:33:48.341003 env[1306]: 2025-07-16 12:33:48.278 [INFO][5123] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" iface="eth0" netns="" Jul 16 12:33:48.341003 env[1306]: 2025-07-16 12:33:48.278 [INFO][5123] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" Jul 16 12:33:48.341003 env[1306]: 2025-07-16 12:33:48.278 [INFO][5123] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" Jul 16 12:33:48.341003 env[1306]: 2025-07-16 12:33:48.326 [INFO][5130] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" HandleID="k8s-pod-network.73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--f9x9w-eth0" Jul 16 12:33:48.341003 env[1306]: 2025-07-16 12:33:48.326 [INFO][5130] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 12:33:48.341003 env[1306]: 2025-07-16 12:33:48.326 [INFO][5130] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 12:33:48.341003 env[1306]: 2025-07-16 12:33:48.334 [WARNING][5130] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" HandleID="k8s-pod-network.73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--f9x9w-eth0" Jul 16 12:33:48.341003 env[1306]: 2025-07-16 12:33:48.334 [INFO][5130] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" HandleID="k8s-pod-network.73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" Workload="srv--f25or.gb1.brightbox.com-k8s-calico--apiserver--66fbfc9dbd--f9x9w-eth0" Jul 16 12:33:48.341003 env[1306]: 2025-07-16 12:33:48.335 [INFO][5130] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 12:33:48.341003 env[1306]: 2025-07-16 12:33:48.337 [INFO][5123] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2" Jul 16 12:33:48.342509 env[1306]: time="2025-07-16T12:33:48.341268766Z" level=info msg="TearDown network for sandbox \"73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2\" successfully" Jul 16 12:33:48.353247 env[1306]: time="2025-07-16T12:33:48.353201083Z" level=info msg="RemovePodSandbox \"73283894bd66a5a94292593420d0caf9d32039f9084b57605732ede20a5691a2\" returns successfully" Jul 16 12:33:49.001147 kubelet[2183]: I0716 12:33:48.992357 2183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-66fbfc9dbd-f9x9w" podStartSLOduration=33.533885556 podStartE2EDuration="48.963232592s" podCreationTimestamp="2025-07-16 12:33:00 +0000 UTC" firstStartedPulling="2025-07-16 12:33:31.978882473 +0000 UTC m=+48.408924092" lastFinishedPulling="2025-07-16 12:33:47.408229512 +0000 UTC m=+63.838271128" observedRunningTime="2025-07-16 12:33:48.956214906 +0000 UTC m=+65.386256543" watchObservedRunningTime="2025-07-16 12:33:48.963232592 +0000 UTC m=+65.393274231" Jul 16 12:33:49.051000 audit[5141]: NETFILTER_CFG table=filter:120 family=2 entries=10 op=nft_register_rule pid=5141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:49.067035 kernel: audit: type=1325 audit(1752669229.051:431): table=filter:120 family=2 entries=10 op=nft_register_rule pid=5141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:49.070867 kernel: audit: type=1300 audit(1752669229.051:431): arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffc2ad89f30 a2=0 a3=7ffc2ad89f1c items=0 ppid=2334 pid=5141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:49.051000 audit[5141]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffc2ad89f30 a2=0 a3=7ffc2ad89f1c items=0 ppid=2334 pid=5141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:49.082584 kernel: audit: type=1327 audit(1752669229.051:431): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:49.082645 kernel: audit: type=1325 audit(1752669229.076:432): table=nat:121 family=2 entries=24 op=nft_register_rule pid=5141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:49.051000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:49.076000 audit[5141]: NETFILTER_CFG table=nat:121 family=2 entries=24 op=nft_register_rule pid=5141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:49.076000 audit[5141]: SYSCALL arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7ffc2ad89f30 a2=0 a3=7ffc2ad89f1c items=0 ppid=2334 pid=5141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:49.076000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:49.411705 env[1306]: time="2025-07-16T12:33:49.411542942Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:49.413508 env[1306]: time="2025-07-16T12:33:49.413478625Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:49.415282 env[1306]: time="2025-07-16T12:33:49.415253315Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:49.416749 env[1306]: time="2025-07-16T12:33:49.416723737Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:49.417330 env[1306]: time="2025-07-16T12:33:49.417279589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 16 12:33:49.419721 env[1306]: time="2025-07-16T12:33:49.419665124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 16 12:33:49.426050 env[1306]: time="2025-07-16T12:33:49.426023503Z" level=info msg="CreateContainer within sandbox \"192ddec5c7acb557cc005567271e89429e36e491ba80a9ff91aebc582b9258fa\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 16 12:33:49.454389 env[1306]: time="2025-07-16T12:33:49.454348804Z" level=info msg="CreateContainer within sandbox \"192ddec5c7acb557cc005567271e89429e36e491ba80a9ff91aebc582b9258fa\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d0bae59a4c9587f8e9667f6829e455134e9d5030639c8fb6c1ad50a2549168e6\"" Jul 16 12:33:49.455142 env[1306]: time="2025-07-16T12:33:49.455110056Z" level=info msg="StartContainer for \"d0bae59a4c9587f8e9667f6829e455134e9d5030639c8fb6c1ad50a2549168e6\"" Jul 16 12:33:49.516713 systemd[1]: run-containerd-runc-k8s.io-d0bae59a4c9587f8e9667f6829e455134e9d5030639c8fb6c1ad50a2549168e6-runc.w1kEpH.mount: Deactivated successfully. 
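The audit PROCTITLE records above store the command line as hex with NUL-separated arguments. A small standard-library sketch decoding the value copied from the record above; the decoded output shows the iptables-restore invocation behind these NETFILTER_CFG events.

    # Decode the audit PROCTITLE value seen above: argv encoded as hex,
    # with NUL bytes separating the arguments.
    hex_proctitle = (
        "69707461626C65732D726573746F7265002D770035002D5700"
        "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
    )
    argv = bytes.fromhex(hex_proctitle).split(b"\x00")
    print(" ".join(a.decode() for a in argv))
    # iptables-restore -w 5 -W 100000 --noflush --counters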
Jul 16 12:33:49.624692 env[1306]: time="2025-07-16T12:33:49.622264096Z" level=info msg="StartContainer for \"d0bae59a4c9587f8e9667f6829e455134e9d5030639c8fb6c1ad50a2549168e6\" returns successfully" Jul 16 12:33:49.748152 kubelet[2183]: I0716 12:33:49.748120 2183 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 12:33:49.752010 env[1306]: time="2025-07-16T12:33:49.751618959Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:49.755117 env[1306]: time="2025-07-16T12:33:49.755088992Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:49.758242 env[1306]: time="2025-07-16T12:33:49.758212697Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:49.761615 env[1306]: time="2025-07-16T12:33:49.761550644Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:49.762215 env[1306]: time="2025-07-16T12:33:49.762175626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 16 12:33:49.765374 env[1306]: time="2025-07-16T12:33:49.765336192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 16 12:33:49.767955 env[1306]: time="2025-07-16T12:33:49.767916058Z" level=info msg="CreateContainer within sandbox \"f3679c4331dbd943929a7af4cb8e5556d638aa6d69d24931e85ed6aa97dfb774\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 16 12:33:49.777869 env[1306]: time="2025-07-16T12:33:49.777824261Z" level=info msg="CreateContainer within sandbox \"f3679c4331dbd943929a7af4cb8e5556d638aa6d69d24931e85ed6aa97dfb774\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"080e7b2875a37924aa224d735baa85d4473b72eb055946cb4078cd4a5a987558\"" Jul 16 12:33:49.778426 env[1306]: time="2025-07-16T12:33:49.778394513Z" level=info msg="StartContainer for \"080e7b2875a37924aa224d735baa85d4473b72eb055946cb4078cd4a5a987558\"" Jul 16 12:33:49.868413 env[1306]: time="2025-07-16T12:33:49.868364227Z" level=info msg="StartContainer for \"080e7b2875a37924aa224d735baa85d4473b72eb055946cb4078cd4a5a987558\" returns successfully" Jul 16 12:33:50.817000 audit[5219]: NETFILTER_CFG table=filter:122 family=2 entries=10 op=nft_register_rule pid=5219 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:50.817000 audit[5219]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffe6d466900 a2=0 a3=7ffe6d4668ec items=0 ppid=2334 pid=5219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:50.817000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:50.822000 audit[5219]: NETFILTER_CFG table=nat:123 family=2 
entries=24 op=nft_register_rule pid=5219 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:50.822000 audit[5219]: SYSCALL arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7ffe6d466900 a2=0 a3=7ffe6d4668ec items=0 ppid=2334 pid=5219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:50.822000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:51.905856 env[1306]: time="2025-07-16T12:33:51.905316403Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:51.909377 env[1306]: time="2025-07-16T12:33:51.909344828Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:51.911285 env[1306]: time="2025-07-16T12:33:51.911260043Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:51.912448 env[1306]: time="2025-07-16T12:33:51.912421204Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:33:51.912883 env[1306]: time="2025-07-16T12:33:51.912858999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 16 12:33:51.915642 env[1306]: time="2025-07-16T12:33:51.915592584Z" level=info msg="CreateContainer within sandbox \"192ddec5c7acb557cc005567271e89429e36e491ba80a9ff91aebc582b9258fa\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 16 12:33:51.930505 env[1306]: time="2025-07-16T12:33:51.930465023Z" level=info msg="CreateContainer within sandbox \"192ddec5c7acb557cc005567271e89429e36e491ba80a9ff91aebc582b9258fa\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"45e486192e1a907c8fe6d8433a9b86b2525ba4caa10dc1fa71b9821894a0d53c\"" Jul 16 12:33:51.931316 env[1306]: time="2025-07-16T12:33:51.931292218Z" level=info msg="StartContainer for \"45e486192e1a907c8fe6d8433a9b86b2525ba4caa10dc1fa71b9821894a0d53c\"" Jul 16 12:33:51.932311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2150868918.mount: Deactivated successfully. 
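The env[1306] entries above all follow containerd's logfmt-style layout, time="..." level=... msg="...". A rough parsing sketch; the sample line reproduces that layout (the container ID is taken from the entries above) but is not a verbatim copy of any single journal line, and real msg fields may contain escaped quotes as shown.

    import re

    # Pull the time/level/msg fields out of a containerd-style log entry.
    # The sample mirrors the layout of the entries above; it is not a
    # verbatim copy of any one journal line.
    sample = ('time="2025-07-16T12:33:52.000000000Z" level=info '
              'msg="StartContainer for \\"45e486192e1a907c8fe6d8433a9b86b2525ba4caa10dc1fa71b9821894a0d53c\\" returns successfully"')

    pattern = re.compile(r'time="(?P<time>[^"]*)" level=(?P<level>\w+) msg="(?P<msg>.*)"$')
    m = pattern.match(sample)
    if m:
        print(m.group("level"))  # info
        print(m.group("msg"))    # StartContainer for \"45e48619...\" returns successfully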
Jul 16 12:33:52.062700 env[1306]: time="2025-07-16T12:33:52.062650943Z" level=info msg="StartContainer for \"45e486192e1a907c8fe6d8433a9b86b2525ba4caa10dc1fa71b9821894a0d53c\" returns successfully" Jul 16 12:33:52.366117 kubelet[2183]: I0716 12:33:52.366078 2183 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 12:33:52.759181 kubelet[2183]: I0716 12:33:52.759151 2183 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 12:33:52.805927 kubelet[2183]: I0716 12:33:52.805858 2183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-66fbfc9dbd-7qrcc" podStartSLOduration=36.513783857 podStartE2EDuration="52.802951265s" podCreationTimestamp="2025-07-16 12:33:00 +0000 UTC" firstStartedPulling="2025-07-16 12:33:33.474872604 +0000 UTC m=+49.904914223" lastFinishedPulling="2025-07-16 12:33:49.764040012 +0000 UTC m=+66.194081631" observedRunningTime="2025-07-16 12:33:50.786559185 +0000 UTC m=+67.216600825" watchObservedRunningTime="2025-07-16 12:33:52.802951265 +0000 UTC m=+69.232992909" Jul 16 12:33:52.829976 kubelet[2183]: I0716 12:33:52.829921 2183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-b22q6" podStartSLOduration=30.000959537 podStartE2EDuration="49.829900299s" podCreationTimestamp="2025-07-16 12:33:03 +0000 UTC" firstStartedPulling="2025-07-16 12:33:32.085098612 +0000 UTC m=+48.515140231" lastFinishedPulling="2025-07-16 12:33:51.914039375 +0000 UTC m=+68.344080993" observedRunningTime="2025-07-16 12:33:52.806134145 +0000 UTC m=+69.236175779" watchObservedRunningTime="2025-07-16 12:33:52.829900299 +0000 UTC m=+69.259941942" Jul 16 12:33:52.926858 kernel: kauditd_printk_skb: 8 callbacks suppressed Jul 16 12:33:52.930698 kernel: audit: type=1325 audit(1752669232.918:435): table=filter:124 family=2 entries=9 op=nft_register_rule pid=5257 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:52.918000 audit[5257]: NETFILTER_CFG table=filter:124 family=2 entries=9 op=nft_register_rule pid=5257 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:52.928560 systemd[1]: run-containerd-runc-k8s.io-45e486192e1a907c8fe6d8433a9b86b2525ba4caa10dc1fa71b9821894a0d53c-runc.2hRlt2.mount: Deactivated successfully. 
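The pod_startup_latency_tracker entries above report three related numbers per pod. For csi-node-driver-b22q6, the SLO duration is the end-to-end startup time with the image-pull window subtracted; a quick check using the monotonic m=+ offsets copied from that entry (this relationship is read off the numbers themselves, not taken from kubelet source).

    # Values copied from the csi-node-driver-b22q6 entry above; the m=+
    # numbers are monotonic-clock offsets in seconds.
    first_started_pulling = 48.515140231   # firstStartedPulling, m=+
    last_finished_pulling = 68.344080993   # lastFinishedPulling, m=+
    pod_start_e2e         = 49.829900299   # podStartE2EDuration (s)
    pod_start_slo         = 30.000959537   # podStartSLOduration (s)

    pull_window = last_finished_pulling - first_started_pulling
    print(f"pull window        : {pull_window:.9f} s")             # ~19.828940762
    print(f"E2E minus pull     : {pod_start_e2e - pull_window:.9f} s")
    print(f"reported SLO value : {pod_start_slo:.9f} s")
    # The last two lines agree (up to float rounding): ~30.000959537 s.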
Jul 16 12:33:52.918000 audit[5257]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7fff250ec350 a2=0 a3=7fff250ec33c items=0 ppid=2334 pid=5257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:52.918000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:52.942007 kernel: audit: type=1300 audit(1752669232.918:435): arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7fff250ec350 a2=0 a3=7fff250ec33c items=0 ppid=2334 pid=5257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:52.942509 kernel: audit: type=1327 audit(1752669232.918:435): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:52.934000 audit[5257]: NETFILTER_CFG table=nat:125 family=2 entries=31 op=nft_register_chain pid=5257 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:52.948692 kernel: audit: type=1325 audit(1752669232.934:436): table=nat:125 family=2 entries=31 op=nft_register_chain pid=5257 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:52.934000 audit[5257]: SYSCALL arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7fff250ec350 a2=0 a3=7fff250ec33c items=0 ppid=2334 pid=5257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:52.934000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:52.959759 kernel: audit: type=1300 audit(1752669232.934:436): arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7fff250ec350 a2=0 a3=7fff250ec33c items=0 ppid=2334 pid=5257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:52.959837 kernel: audit: type=1327 audit(1752669232.934:436): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:53.168746 kubelet[2183]: I0716 12:33:53.164179 2183 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 16 12:33:53.171876 kubelet[2183]: I0716 12:33:53.171845 2183 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 16 12:33:53.174000 audit[5259]: NETFILTER_CFG table=filter:126 family=2 entries=8 op=nft_register_rule pid=5259 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:53.174000 audit[5259]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffd1ddb0180 a2=0 a3=7ffd1ddb016c items=0 ppid=2334 pid=5259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 
16 12:33:53.191129 kernel: audit: type=1325 audit(1752669233.174:437): table=filter:126 family=2 entries=8 op=nft_register_rule pid=5259 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:53.191251 kernel: audit: type=1300 audit(1752669233.174:437): arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffd1ddb0180 a2=0 a3=7ffd1ddb016c items=0 ppid=2334 pid=5259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:53.174000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:53.219689 kernel: audit: type=1327 audit(1752669233.174:437): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:33:53.205000 audit[5259]: NETFILTER_CFG table=nat:127 family=2 entries=38 op=nft_register_chain pid=5259 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:53.227782 kernel: audit: type=1325 audit(1752669233.205:438): table=nat:127 family=2 entries=38 op=nft_register_chain pid=5259 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:33:53.205000 audit[5259]: SYSCALL arch=c000003e syscall=46 success=yes exit=12772 a0=3 a1=7ffd1ddb0180 a2=0 a3=7ffd1ddb016c items=0 ppid=2334 pid=5259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:33:53.205000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:34:11.004641 systemd[1]: Started sshd@9-10.244.89.194:22-147.75.109.163:40896.service. Jul 16 12:34:11.036377 kernel: kauditd_printk_skb: 2 callbacks suppressed Jul 16 12:34:11.037157 kernel: audit: type=1130 audit(1752669251.005:439): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.244.89.194:22-147.75.109.163:40896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:34:11.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.244.89.194:22-147.75.109.163:40896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 16 12:34:11.947000 audit[5289]: USER_ACCT pid=5289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:11.958569 sshd[5289]: Accepted publickey for core from 147.75.109.163 port 40896 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:34:11.961021 kernel: audit: type=1101 audit(1752669251.947:440): pid=5289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:11.961931 sshd[5289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:34:11.959000 audit[5289]: CRED_ACQ pid=5289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:11.966797 kernel: audit: type=1103 audit(1752669251.959:441): pid=5289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:11.966907 kernel: audit: type=1006 audit(1752669251.959:442): pid=5289 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jul 16 12:34:11.959000 audit[5289]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe72b171e0 a2=3 a3=0 items=0 ppid=1 pid=5289 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:34:11.972797 kernel: audit: type=1300 audit(1752669251.959:442): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe72b171e0 a2=3 a3=0 items=0 ppid=1 pid=5289 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:34:11.973342 kernel: audit: type=1327 audit(1752669251.959:442): proctitle=737368643A20636F7265205B707269765D Jul 16 12:34:11.959000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 16 12:34:12.000877 systemd-logind[1294]: New session 10 of user core. Jul 16 12:34:12.006412 systemd[1]: Started session-10.scope. 
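The audit records above (USER_ACCT, CRED_ACQ, USER_START) are flat key=value lines, with a second key=value layer nested inside the quoted msg='...' blob. A simple two-pass sketch; the record below is shortened from the USER_START entry above, and this is not a substitute for ausearch.

    import re

    # Two-pass parse of an audit record: outer key=value fields first, then
    # the key=value pairs nested inside msg='...'.  The record is shortened
    # from the USER_START entry above.
    record = ("pid=5289 uid=0 auid=500 ses=10 "
              "msg='op=PAM:session_open acct=\"core\" exe=\"/usr/sbin/sshd\" "
              "hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'")

    field = re.compile(r"(\w+)=('[^']*'|\"[^\"]*\"|\S+)")
    outer = dict(field.findall(record))
    inner = dict(field.findall(outer["msg"].strip("'")))

    print(outer["ses"])              # 10
    print(inner["acct"].strip('"'))  # core
    print(inner["res"])              # success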
Jul 16 12:34:12.011000 audit[5289]: USER_START pid=5289 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:12.017699 kernel: audit: type=1105 audit(1752669252.011:443): pid=5289 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:12.017000 audit[5292]: CRED_ACQ pid=5292 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:12.021726 kernel: audit: type=1103 audit(1752669252.017:444): pid=5292 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:13.368450 sshd[5289]: pam_unix(sshd:session): session closed for user core Jul 16 12:34:13.369000 audit[5289]: USER_END pid=5289 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:13.376705 kernel: audit: type=1106 audit(1752669253.369:445): pid=5289 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:13.375000 audit[5289]: CRED_DISP pid=5289 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:13.381729 kernel: audit: type=1104 audit(1752669253.375:446): pid=5289 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:13.388254 systemd[1]: sshd@9-10.244.89.194:22-147.75.109.163:40896.service: Deactivated successfully. Jul 16 12:34:13.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.244.89.194:22-147.75.109.163:40896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:34:13.390007 systemd[1]: session-10.scope: Deactivated successfully. Jul 16 12:34:13.390623 systemd-logind[1294]: Session 10 logged out. Waiting for processes to exit. Jul 16 12:34:13.391665 systemd-logind[1294]: Removed session 10. Jul 16 12:34:13.493407 systemd[1]: run-containerd-runc-k8s.io-fab454efbfe4481fa747ba040a62985a98a975ecc0f79a35d3fcd73c6e4343cd-runc.RVp3XA.mount: Deactivated successfully. 
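Session 10 above is opened and closed by pam_unix within a couple of seconds. A small sketch computing its length from the two journal timestamps quoted above; the journal prefix carries no year, so both stamps parse year-less and only their difference is meaningful.

    from datetime import datetime

    # Length of SSH session 10 (sshd[5289]) from the pam_unix open/close
    # timestamps above.
    FMT = "%b %d %H:%M:%S.%f"
    opened = datetime.strptime("Jul 16 12:34:11.961931", FMT)
    closed = datetime.strptime("Jul 16 12:34:13.368450", FMT)
    print(closed - opened)   # 0:00:01.406519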
Jul 16 12:34:15.932601 systemd[1]: run-containerd-runc-k8s.io-fc56ef80903e360a57bf0a61b253ba4c2642b9cb894d88f610690ea9139b8f01-runc.Dg3OKH.mount: Deactivated successfully. Jul 16 12:34:15.960877 systemd[1]: run-containerd-runc-k8s.io-fab454efbfe4481fa747ba040a62985a98a975ecc0f79a35d3fcd73c6e4343cd-runc.hJ8Uah.mount: Deactivated successfully. Jul 16 12:34:18.531299 systemd[1]: Started sshd@10-10.244.89.194:22-147.75.109.163:38668.service. Jul 16 12:34:18.543335 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 16 12:34:18.543986 kernel: audit: type=1130 audit(1752669258.534:448): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.244.89.194:22-147.75.109.163:38668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:34:18.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.244.89.194:22-147.75.109.163:38668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:34:19.494998 sshd[5383]: Accepted publickey for core from 147.75.109.163 port 38668 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:34:19.504419 kernel: audit: type=1101 audit(1752669259.494:449): pid=5383 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:19.494000 audit[5383]: USER_ACCT pid=5383 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:19.506087 sshd[5383]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:34:19.502000 audit[5383]: CRED_ACQ pid=5383 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:19.519902 kernel: audit: type=1103 audit(1752669259.502:450): pid=5383 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:19.525011 kernel: audit: type=1006 audit(1752669259.502:451): pid=5383 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jul 16 12:34:19.502000 audit[5383]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb134a2e0 a2=3 a3=0 items=0 ppid=1 pid=5383 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:34:19.531651 kernel: audit: type=1300 audit(1752669259.502:451): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb134a2e0 a2=3 a3=0 items=0 ppid=1 pid=5383 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:34:19.502000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 16 12:34:19.534135 kernel: audit: 
type=1327 audit(1752669259.502:451): proctitle=737368643A20636F7265205B707269765D Jul 16 12:34:19.538589 systemd[1]: Started session-11.scope. Jul 16 12:34:19.538827 systemd-logind[1294]: New session 11 of user core. Jul 16 12:34:19.544000 audit[5383]: USER_START pid=5383 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:19.550699 kernel: audit: type=1105 audit(1752669259.544:452): pid=5383 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:19.549000 audit[5386]: CRED_ACQ pid=5386 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:19.554692 kernel: audit: type=1103 audit(1752669259.549:453): pid=5386 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:20.784019 sshd[5383]: pam_unix(sshd:session): session closed for user core Jul 16 12:34:20.784000 audit[5383]: USER_END pid=5383 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:20.795187 kernel: audit: type=1106 audit(1752669260.784:454): pid=5383 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:20.790000 audit[5383]: CRED_DISP pid=5383 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:20.803939 kernel: audit: type=1104 audit(1752669260.790:455): pid=5383 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:20.802479 systemd[1]: sshd@10-10.244.89.194:22-147.75.109.163:38668.service: Deactivated successfully. Jul 16 12:34:20.804610 systemd[1]: session-11.scope: Deactivated successfully. Jul 16 12:34:20.804981 systemd-logind[1294]: Session 11 logged out. Waiting for processes to exit. Jul 16 12:34:20.806449 systemd-logind[1294]: Removed session 11. Jul 16 12:34:20.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.244.89.194:22-147.75.109.163:38668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 16 12:34:25.940251 systemd[1]: Started sshd@11-10.244.89.194:22-147.75.109.163:38676.service. Jul 16 12:34:25.952278 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 16 12:34:25.952638 kernel: audit: type=1130 audit(1752669265.939:457): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.244.89.194:22-147.75.109.163:38676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:34:25.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.244.89.194:22-147.75.109.163:38676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:34:26.887000 audit[5398]: USER_ACCT pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:26.898219 kernel: audit: type=1101 audit(1752669266.887:458): pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:26.898291 sshd[5398]: Accepted publickey for core from 147.75.109.163 port 38676 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:34:26.899767 sshd[5398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:34:26.905090 kernel: audit: type=1103 audit(1752669266.896:459): pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:26.896000 audit[5398]: CRED_ACQ pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:26.913260 kernel: audit: type=1006 audit(1752669266.896:460): pid=5398 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jul 16 12:34:26.913349 kernel: audit: type=1300 audit(1752669266.896:460): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9b6df960 a2=3 a3=0 items=0 ppid=1 pid=5398 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:34:26.896000 audit[5398]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9b6df960 a2=3 a3=0 items=0 ppid=1 pid=5398 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:34:26.916366 kernel: audit: type=1327 audit(1752669266.896:460): proctitle=737368643A20636F7265205B707269765D Jul 16 12:34:26.896000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 16 12:34:26.918707 systemd-logind[1294]: New session 12 of user core. Jul 16 12:34:26.919478 systemd[1]: Started session-12.scope. 
Jul 16 12:34:26.922000 audit[5398]: USER_START pid=5398 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:26.932847 kernel: audit: type=1105 audit(1752669266.922:461): pid=5398 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:26.927000 audit[5401]: CRED_ACQ pid=5401 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:26.938022 kernel: audit: type=1103 audit(1752669266.927:462): pid=5401 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:28.104785 sshd[5398]: pam_unix(sshd:session): session closed for user core Jul 16 12:34:28.106000 audit[5398]: USER_END pid=5398 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:28.113474 systemd[1]: sshd@11-10.244.89.194:22-147.75.109.163:38676.service: Deactivated successfully. Jul 16 12:34:28.117284 systemd[1]: session-12.scope: Deactivated successfully. Jul 16 12:34:28.119881 kernel: audit: type=1106 audit(1752669268.106:463): pid=5398 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:28.122373 systemd-logind[1294]: Session 12 logged out. Waiting for processes to exit. Jul 16 12:34:28.125044 systemd-logind[1294]: Removed session 12. Jul 16 12:34:28.106000 audit[5398]: CRED_DISP pid=5398 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:28.129739 kernel: audit: type=1104 audit(1752669268.106:464): pid=5398 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:28.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.244.89.194:22-147.75.109.163:38676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:34:33.265175 systemd[1]: Started sshd@12-10.244.89.194:22-147.75.109.163:41720.service. 
Jul 16 12:34:33.273508 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 16 12:34:33.274774 kernel: audit: type=1130 audit(1752669273.264:466): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.244.89.194:22-147.75.109.163:41720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:34:33.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.244.89.194:22-147.75.109.163:41720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:34:34.226000 audit[5412]: USER_ACCT pid=5412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:34.239272 kernel: audit: type=1101 audit(1752669274.226:467): pid=5412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:34.239578 sshd[5412]: Accepted publickey for core from 147.75.109.163 port 41720 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:34:34.238000 audit[5412]: CRED_ACQ pid=5412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:34.241092 sshd[5412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:34:34.245043 kernel: audit: type=1103 audit(1752669274.238:468): pid=5412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:34.247512 kernel: audit: type=1006 audit(1752669274.239:469): pid=5412 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jul 16 12:34:34.251990 kernel: audit: type=1300 audit(1752669274.239:469): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffceccf7070 a2=3 a3=0 items=0 ppid=1 pid=5412 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:34:34.239000 audit[5412]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffceccf7070 a2=3 a3=0 items=0 ppid=1 pid=5412 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:34:34.253963 kernel: audit: type=1327 audit(1752669274.239:469): proctitle=737368643A20636F7265205B707269765D Jul 16 12:34:34.239000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 16 12:34:34.260046 systemd-logind[1294]: New session 13 of user core. Jul 16 12:34:34.260908 systemd[1]: Started session-13.scope. 
Jul 16 12:34:34.266000 audit[5412]: USER_START pid=5412 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:34.273768 kernel: audit: type=1105 audit(1752669274.266:470): pid=5412 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:34.275998 kernel: audit: type=1103 audit(1752669274.270:471): pid=5415 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:34.270000 audit[5415]: CRED_ACQ pid=5415 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:35.288122 sshd[5412]: pam_unix(sshd:session): session closed for user core Jul 16 12:34:35.291000 audit[5412]: USER_END pid=5412 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:35.302694 kernel: audit: type=1106 audit(1752669275.291:472): pid=5412 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:35.303124 systemd[1]: sshd@12-10.244.89.194:22-147.75.109.163:41720.service: Deactivated successfully. Jul 16 12:34:35.304808 systemd[1]: session-13.scope: Deactivated successfully. Jul 16 12:34:35.304827 systemd-logind[1294]: Session 13 logged out. Waiting for processes to exit. Jul 16 12:34:35.306104 systemd-logind[1294]: Removed session 13. Jul 16 12:34:35.291000 audit[5412]: CRED_DISP pid=5412 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:35.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.244.89.194:22-147.75.109.163:41720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:34:35.311703 kernel: audit: type=1104 audit(1752669275.291:473): pid=5412 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:35.433197 systemd[1]: Started sshd@13-10.244.89.194:22-147.75.109.163:41724.service. 
Jul 16 12:34:35.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.244.89.194:22-147.75.109.163:41724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:34:36.336141 sshd[5425]: Accepted publickey for core from 147.75.109.163 port 41724 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:34:36.335000 audit[5425]: USER_ACCT pid=5425 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:36.338000 audit[5425]: CRED_ACQ pid=5425 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:36.339000 audit[5425]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdc30d6e40 a2=3 a3=0 items=0 ppid=1 pid=5425 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:34:36.339000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 16 12:34:36.340706 sshd[5425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:34:36.351472 systemd-logind[1294]: New session 14 of user core. Jul 16 12:34:36.351919 systemd[1]: Started session-14.scope. Jul 16 12:34:36.358000 audit[5425]: USER_START pid=5425 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:36.360000 audit[5428]: CRED_ACQ pid=5428 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:37.238340 sshd[5425]: pam_unix(sshd:session): session closed for user core Jul 16 12:34:37.242000 audit[5425]: USER_END pid=5425 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:37.243000 audit[5425]: CRED_DISP pid=5425 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:37.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.244.89.194:22-147.75.109.163:41724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:34:37.246833 systemd[1]: sshd@13-10.244.89.194:22-147.75.109.163:41724.service: Deactivated successfully. Jul 16 12:34:37.249790 systemd-logind[1294]: Session 14 logged out. Waiting for processes to exit. Jul 16 12:34:37.249865 systemd[1]: session-14.scope: Deactivated successfully. 
Jul 16 12:34:37.256293 systemd-logind[1294]: Removed session 14. Jul 16 12:34:37.386435 systemd[1]: Started sshd@14-10.244.89.194:22-147.75.109.163:41734.service. Jul 16 12:34:37.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.244.89.194:22-147.75.109.163:41734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:34:38.318375 kernel: kauditd_printk_skb: 13 callbacks suppressed Jul 16 12:34:38.318726 kernel: audit: type=1101 audit(1752669278.302:485): pid=5436 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:38.302000 audit[5436]: USER_ACCT pid=5436 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:38.319188 sshd[5436]: Accepted publickey for core from 147.75.109.163 port 41734 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:34:38.322000 audit[5436]: CRED_ACQ pid=5436 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:38.327034 sshd[5436]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:34:38.337581 kernel: audit: type=1103 audit(1752669278.322:486): pid=5436 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:38.337714 kernel: audit: type=1006 audit(1752669278.322:487): pid=5436 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jul 16 12:34:38.322000 audit[5436]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe7f299c40 a2=3 a3=0 items=0 ppid=1 pid=5436 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:34:38.322000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 16 12:34:38.345246 kernel: audit: type=1300 audit(1752669278.322:487): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe7f299c40 a2=3 a3=0 items=0 ppid=1 pid=5436 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:34:38.345396 kernel: audit: type=1327 audit(1752669278.322:487): proctitle=737368643A20636F7265205B707269765D Jul 16 12:34:38.352557 systemd-logind[1294]: New session 15 of user core. Jul 16 12:34:38.354971 systemd[1]: Started session-15.scope. 
Jul 16 12:34:38.360000 audit[5436]: USER_START pid=5436 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:38.370125 kernel: audit: type=1105 audit(1752669278.360:488): pid=5436 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:38.370195 kernel: audit: type=1103 audit(1752669278.365:489): pid=5439 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:38.365000 audit[5439]: CRED_ACQ pid=5439 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:39.123349 sshd[5436]: pam_unix(sshd:session): session closed for user core Jul 16 12:34:39.126000 audit[5436]: USER_END pid=5436 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:39.130456 systemd[1]: sshd@14-10.244.89.194:22-147.75.109.163:41734.service: Deactivated successfully. Jul 16 12:34:39.131688 systemd[1]: session-15.scope: Deactivated successfully. Jul 16 12:34:39.135714 kernel: audit: type=1106 audit(1752669279.126:490): pid=5436 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:39.144777 kernel: audit: type=1104 audit(1752669279.126:491): pid=5436 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:39.126000 audit[5436]: CRED_DISP pid=5436 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:39.142470 systemd-logind[1294]: Session 15 logged out. Waiting for processes to exit. Jul 16 12:34:39.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.244.89.194:22-147.75.109.163:41734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:34:39.150708 kernel: audit: type=1131 audit(1752669279.130:492): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.244.89.194:22-147.75.109.163:41734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 16 12:34:39.154729 systemd-logind[1294]: Removed session 15. Jul 16 12:34:44.271529 systemd[1]: Started sshd@15-10.244.89.194:22-147.75.109.163:45960.service. Jul 16 12:34:44.282328 kernel: audit: type=1130 audit(1752669284.271:493): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.244.89.194:22-147.75.109.163:45960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:34:44.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.244.89.194:22-147.75.109.163:45960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:34:45.231000 audit[5474]: USER_ACCT pid=5474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:45.247201 kernel: audit: type=1101 audit(1752669285.231:494): pid=5474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:45.247719 kernel: audit: type=1103 audit(1752669285.243:495): pid=5474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:45.243000 audit[5474]: CRED_ACQ pid=5474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:45.250026 sshd[5474]: Accepted publickey for core from 147.75.109.163 port 45960 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:34:45.258693 kernel: audit: type=1006 audit(1752669285.243:496): pid=5474 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jul 16 12:34:45.254257 sshd[5474]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:34:45.243000 audit[5474]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdff1ef9d0 a2=3 a3=0 items=0 ppid=1 pid=5474 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:34:45.263706 kernel: audit: type=1300 audit(1752669285.243:496): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdff1ef9d0 a2=3 a3=0 items=0 ppid=1 pid=5474 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:34:45.243000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 16 12:34:45.275686 kernel: audit: type=1327 audit(1752669285.243:496): proctitle=737368643A20636F7265205B707269765D Jul 16 12:34:45.283957 systemd[1]: Started session-16.scope. Jul 16 12:34:45.285050 systemd-logind[1294]: New session 16 of user core. 
Jul 16 12:34:45.300000 audit[5474]: USER_START pid=5474 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:45.306766 kernel: audit: type=1105 audit(1752669285.300:497): pid=5474 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:45.308000 audit[5477]: CRED_ACQ pid=5477 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:45.314541 kernel: audit: type=1103 audit(1752669285.308:498): pid=5477 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:45.883935 systemd[1]: run-containerd-runc-k8s.io-fc56ef80903e360a57bf0a61b253ba4c2642b9cb894d88f610690ea9139b8f01-runc.CBs6yE.mount: Deactivated successfully. Jul 16 12:34:45.953714 systemd[1]: run-containerd-runc-k8s.io-fab454efbfe4481fa747ba040a62985a98a975ecc0f79a35d3fcd73c6e4343cd-runc.IsZWfM.mount: Deactivated successfully. Jul 16 12:34:46.908176 sshd[5474]: pam_unix(sshd:session): session closed for user core Jul 16 12:34:46.920573 kernel: audit: type=1106 audit(1752669286.910:499): pid=5474 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:46.921399 kernel: audit: type=1104 audit(1752669286.917:500): pid=5474 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:46.910000 audit[5474]: USER_END pid=5474 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:46.917000 audit[5474]: CRED_DISP pid=5474 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:46.933918 systemd[1]: sshd@15-10.244.89.194:22-147.75.109.163:45960.service: Deactivated successfully. Jul 16 12:34:46.939461 systemd[1]: session-16.scope: Deactivated successfully. Jul 16 12:34:46.940138 systemd-logind[1294]: Session 16 logged out. Waiting for processes to exit. Jul 16 12:34:46.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.244.89.194:22-147.75.109.163:45960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 16 12:34:46.942088 systemd-logind[1294]: Removed session 16. Jul 16 12:34:52.076247 systemd[1]: Started sshd@16-10.244.89.194:22-147.75.109.163:49502.service. Jul 16 12:34:52.098363 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 16 12:34:52.100003 kernel: audit: type=1130 audit(1752669292.074:502): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.244.89.194:22-147.75.109.163:49502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:34:52.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.244.89.194:22-147.75.109.163:49502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:34:53.021000 audit[5534]: USER_ACCT pid=5534 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:53.028894 sshd[5534]: Accepted publickey for core from 147.75.109.163 port 49502 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:34:53.033749 kernel: audit: type=1101 audit(1752669293.021:503): pid=5534 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:53.032000 audit[5534]: CRED_ACQ pid=5534 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:53.045742 kernel: audit: type=1103 audit(1752669293.032:504): pid=5534 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:53.048437 sshd[5534]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:34:53.056493 kernel: audit: type=1006 audit(1752669293.032:505): pid=5534 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jul 16 12:34:53.056585 kernel: audit: type=1300 audit(1752669293.032:505): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe9553e4b0 a2=3 a3=0 items=0 ppid=1 pid=5534 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:34:53.032000 audit[5534]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe9553e4b0 a2=3 a3=0 items=0 ppid=1 pid=5534 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:34:53.032000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 16 12:34:53.063773 kernel: audit: type=1327 audit(1752669293.032:505): proctitle=737368643A20636F7265205B707269765D Jul 16 12:34:53.091306 systemd[1]: Started session-17.scope. Jul 16 12:34:53.092757 systemd-logind[1294]: New session 17 of user core. 
Jul 16 12:34:53.099000 audit[5534]: USER_START pid=5534 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:53.102000 audit[5537]: CRED_ACQ pid=5537 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:53.109879 kernel: audit: type=1105 audit(1752669293.099:506): pid=5534 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:53.109948 kernel: audit: type=1103 audit(1752669293.102:507): pid=5537 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:55.115378 sshd[5534]: pam_unix(sshd:session): session closed for user core Jul 16 12:34:55.119000 audit[5534]: USER_END pid=5534 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:55.129692 kernel: audit: type=1106 audit(1752669295.119:508): pid=5534 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:55.133960 systemd[1]: sshd@16-10.244.89.194:22-147.75.109.163:49502.service: Deactivated successfully. Jul 16 12:34:55.136817 systemd-logind[1294]: Session 17 logged out. Waiting for processes to exit. Jul 16 12:34:55.128000 audit[5534]: CRED_DISP pid=5534 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:55.136823 systemd[1]: session-17.scope: Deactivated successfully. Jul 16 12:34:55.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.244.89.194:22-147.75.109.163:49502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:34:55.146591 kernel: audit: type=1104 audit(1752669295.128:509): pid=5534 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:34:55.141418 systemd-logind[1294]: Removed session 17. 
Jul 16 12:34:59.817762 update_engine[1295]: I0716 12:34:59.816520 1295 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 16 12:34:59.817762 update_engine[1295]: I0716 12:34:59.817777 1295 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 16 12:34:59.821072 update_engine[1295]: I0716 12:34:59.821045 1295 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 16 12:34:59.822050 update_engine[1295]: I0716 12:34:59.822022 1295 omaha_request_params.cc:62] Current group set to lts Jul 16 12:34:59.825847 update_engine[1295]: I0716 12:34:59.825773 1295 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 16 12:34:59.825847 update_engine[1295]: I0716 12:34:59.825789 1295 update_attempter.cc:643] Scheduling an action processor start. Jul 16 12:34:59.825847 update_engine[1295]: I0716 12:34:59.825821 1295 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 16 12:34:59.827514 update_engine[1295]: I0716 12:34:59.827461 1295 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 16 12:34:59.828012 update_engine[1295]: I0716 12:34:59.827555 1295 omaha_request_action.cc:270] Posting an Omaha request to disabled Jul 16 12:34:59.828012 update_engine[1295]: I0716 12:34:59.827562 1295 omaha_request_action.cc:271] Request: Jul 16 12:34:59.828012 update_engine[1295]: Jul 16 12:34:59.828012 update_engine[1295]: Jul 16 12:34:59.828012 update_engine[1295]: Jul 16 12:34:59.828012 update_engine[1295]: Jul 16 12:34:59.828012 update_engine[1295]: Jul 16 12:34:59.828012 update_engine[1295]: Jul 16 12:34:59.828012 update_engine[1295]: Jul 16 12:34:59.828012 update_engine[1295]: Jul 16 12:34:59.828012 update_engine[1295]: I0716 12:34:59.827567 1295 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 16 12:34:59.875975 locksmithd[1345]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 16 12:34:59.879603 update_engine[1295]: I0716 12:34:59.879528 1295 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 16 12:34:59.881706 update_engine[1295]: I0716 12:34:59.881430 1295 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 16 12:34:59.887918 update_engine[1295]: E0716 12:34:59.887880 1295 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 16 12:34:59.888610 update_engine[1295]: I0716 12:34:59.888556 1295 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 16 12:35:00.270819 systemd[1]: Started sshd@17-10.244.89.194:22-147.75.109.163:40040.service. Jul 16 12:35:00.281504 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 16 12:35:00.281947 kernel: audit: type=1130 audit(1752669300.270:511): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.244.89.194:22-147.75.109.163:40040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:35:00.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.244.89.194:22-147.75.109.163:40040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 16 12:35:01.242792 kernel: audit: type=1101 audit(1752669301.232:512): pid=5549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:01.232000 audit[5549]: USER_ACCT pid=5549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:01.245921 sshd[5549]: Accepted publickey for core from 147.75.109.163 port 40040 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:35:01.252509 sshd[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:35:01.266823 kernel: audit: type=1103 audit(1752669301.247:513): pid=5549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:01.247000 audit[5549]: CRED_ACQ pid=5549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:01.273695 kernel: audit: type=1006 audit(1752669301.248:514): pid=5549 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jul 16 12:35:01.248000 audit[5549]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd5a2fa8f0 a2=3 a3=0 items=0 ppid=1 pid=5549 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:35:01.282442 kernel: audit: type=1300 audit(1752669301.248:514): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd5a2fa8f0 a2=3 a3=0 items=0 ppid=1 pid=5549 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:35:01.282854 kernel: audit: type=1327 audit(1752669301.248:514): proctitle=737368643A20636F7265205B707269765D Jul 16 12:35:01.248000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 16 12:35:01.289068 systemd[1]: Started session-18.scope. Jul 16 12:35:01.289816 systemd-logind[1294]: New session 18 of user core. 
Jul 16 12:35:01.305000 audit[5549]: USER_START pid=5549 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:01.315893 kernel: audit: type=1105 audit(1752669301.305:515): pid=5549 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:01.308000 audit[5559]: CRED_ACQ pid=5559 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:01.328703 kernel: audit: type=1103 audit(1752669301.308:516): pid=5559 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:02.593980 sshd[5549]: pam_unix(sshd:session): session closed for user core Jul 16 12:35:02.594000 audit[5549]: USER_END pid=5549 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:02.606969 kernel: audit: type=1106 audit(1752669302.594:517): pid=5549 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:02.603859 systemd-logind[1294]: Session 18 logged out. Waiting for processes to exit. Jul 16 12:35:02.605323 systemd[1]: sshd@17-10.244.89.194:22-147.75.109.163:40040.service: Deactivated successfully. Jul 16 12:35:02.595000 audit[5549]: CRED_DISP pid=5549 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:02.606412 systemd[1]: session-18.scope: Deactivated successfully. Jul 16 12:35:02.607785 systemd-logind[1294]: Removed session 18. Jul 16 12:35:02.612705 kernel: audit: type=1104 audit(1752669302.595:518): pid=5549 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:02.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.244.89.194:22-147.75.109.163:40040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:35:02.740046 systemd[1]: Started sshd@18-10.244.89.194:22-147.75.109.163:40042.service. 
Jul 16 12:35:02.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.244.89.194:22-147.75.109.163:40042 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:35:03.627616 sshd[5569]: Accepted publickey for core from 147.75.109.163 port 40042 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:35:03.625000 audit[5569]: USER_ACCT pid=5569 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:03.627000 audit[5569]: CRED_ACQ pid=5569 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:03.627000 audit[5569]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc2a64d580 a2=3 a3=0 items=0 ppid=1 pid=5569 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:35:03.627000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 16 12:35:03.630306 sshd[5569]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:35:03.638660 systemd-logind[1294]: New session 19 of user core. Jul 16 12:35:03.639470 systemd[1]: Started session-19.scope. Jul 16 12:35:03.642000 audit[5569]: USER_START pid=5569 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:03.644000 audit[5572]: CRED_ACQ pid=5572 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:04.689030 sshd[5569]: pam_unix(sshd:session): session closed for user core Jul 16 12:35:04.695000 audit[5569]: USER_END pid=5569 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:04.696000 audit[5569]: CRED_DISP pid=5569 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:04.700562 systemd[1]: sshd@18-10.244.89.194:22-147.75.109.163:40042.service: Deactivated successfully. Jul 16 12:35:04.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.244.89.194:22-147.75.109.163:40042 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:35:04.701921 systemd[1]: session-19.scope: Deactivated successfully. Jul 16 12:35:04.702475 systemd-logind[1294]: Session 19 logged out. Waiting for processes to exit. 
Jul 16 12:35:04.703788 systemd-logind[1294]: Removed session 19. Jul 16 12:35:04.821510 systemd[1]: Started sshd@19-10.244.89.194:22-147.75.109.163:40044.service. Jul 16 12:35:04.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.244.89.194:22-147.75.109.163:40044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:35:05.731000 audit[5593]: USER_ACCT pid=5593 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:05.739180 kernel: kauditd_printk_skb: 13 callbacks suppressed Jul 16 12:35:05.740346 kernel: audit: type=1101 audit(1752669305.731:530): pid=5593 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:05.746204 sshd[5593]: Accepted publickey for core from 147.75.109.163 port 40044 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:35:05.745000 audit[5593]: CRED_ACQ pid=5593 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:05.754111 kernel: audit: type=1103 audit(1752669305.745:531): pid=5593 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:05.754465 kernel: audit: type=1006 audit(1752669305.745:532): pid=5593 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jul 16 12:35:05.754557 kernel: audit: type=1300 audit(1752669305.745:532): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdd623c680 a2=3 a3=0 items=0 ppid=1 pid=5593 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:35:05.745000 audit[5593]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdd623c680 a2=3 a3=0 items=0 ppid=1 pid=5593 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:35:05.754374 sshd[5593]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:35:05.758746 kernel: audit: type=1327 audit(1752669305.745:532): proctitle=737368643A20636F7265205B707269765D Jul 16 12:35:05.745000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 16 12:35:05.770561 systemd[1]: Started session-20.scope. Jul 16 12:35:05.770819 systemd-logind[1294]: New session 20 of user core. 
Jul 16 12:35:05.776000 audit[5593]: USER_START pid=5593 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:05.783764 kernel: audit: type=1105 audit(1752669305.776:533): pid=5593 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:05.782000 audit[5596]: CRED_ACQ pid=5596 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:05.787694 kernel: audit: type=1103 audit(1752669305.782:534): pid=5596 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:08.717000 audit[5605]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=5605 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:35:08.754162 kernel: audit: type=1325 audit(1752669308.717:535): table=filter:128 family=2 entries=20 op=nft_register_rule pid=5605 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:35:08.756631 kernel: audit: type=1300 audit(1752669308.717:535): arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7fff8fca07f0 a2=0 a3=7fff8fca07dc items=0 ppid=2334 pid=5605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:35:08.756792 kernel: audit: type=1327 audit(1752669308.717:535): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:35:08.717000 audit[5605]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7fff8fca07f0 a2=0 a3=7fff8fca07dc items=0 ppid=2334 pid=5605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:35:08.717000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:35:08.744000 audit[5605]: NETFILTER_CFG table=nat:129 family=2 entries=26 op=nft_register_rule pid=5605 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:35:08.744000 audit[5605]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7fff8fca07f0 a2=0 a3=0 items=0 ppid=2334 pid=5605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:35:08.744000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:35:08.762000 audit[5607]: NETFILTER_CFG table=filter:130 family=2 entries=32 op=nft_register_rule 
pid=5607 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:35:08.762000 audit[5607]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffdcf69c8f0 a2=0 a3=7ffdcf69c8dc items=0 ppid=2334 pid=5607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:35:08.762000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:35:08.767000 audit[5607]: NETFILTER_CFG table=nat:131 family=2 entries=26 op=nft_register_rule pid=5607 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:35:08.767000 audit[5607]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffdcf69c8f0 a2=0 a3=0 items=0 ppid=2334 pid=5607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:35:08.767000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:35:08.797109 sshd[5593]: pam_unix(sshd:session): session closed for user core Jul 16 12:35:08.828000 audit[5593]: USER_END pid=5593 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:08.831000 audit[5593]: CRED_DISP pid=5593 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:08.844594 systemd[1]: sshd@19-10.244.89.194:22-147.75.109.163:40044.service: Deactivated successfully. Jul 16 12:35:08.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.244.89.194:22-147.75.109.163:40044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:35:08.851334 systemd[1]: session-20.scope: Deactivated successfully. Jul 16 12:35:08.851943 systemd-logind[1294]: Session 20 logged out. Waiting for processes to exit. Jul 16 12:35:08.860991 systemd-logind[1294]: Removed session 20. Jul 16 12:35:08.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.244.89.194:22-147.75.109.163:50638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:35:08.930487 systemd[1]: Started sshd@20-10.244.89.194:22-147.75.109.163:50638.service. Jul 16 12:35:09.819998 update_engine[1295]: I0716 12:35:09.818217 1295 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 16 12:35:09.840401 update_engine[1295]: I0716 12:35:09.835253 1295 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 16 12:35:09.840401 update_engine[1295]: I0716 12:35:09.836930 1295 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 16 12:35:09.840401 update_engine[1295]: E0716 12:35:09.839838 1295 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 16 12:35:09.840401 update_engine[1295]: I0716 12:35:09.839951 1295 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 16 12:35:09.869128 sshd[5610]: Accepted publickey for core from 147.75.109.163 port 50638 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:35:09.867000 audit[5610]: USER_ACCT pid=5610 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:09.878000 audit[5610]: CRED_ACQ pid=5610 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:09.880000 audit[5610]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffda378a50 a2=3 a3=0 items=0 ppid=1 pid=5610 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:35:09.880000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 16 12:35:09.884240 sshd[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:35:09.922897 systemd[1]: Started session-21.scope. Jul 16 12:35:09.923871 systemd-logind[1294]: New session 21 of user core. Jul 16 12:35:09.928000 audit[5610]: USER_START pid=5610 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:09.935000 audit[5632]: CRED_ACQ pid=5632 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:11.617942 sshd[5610]: pam_unix(sshd:session): session closed for user core Jul 16 12:35:11.644000 audit[5610]: USER_END pid=5610 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:11.649537 kernel: kauditd_printk_skb: 20 callbacks suppressed Jul 16 12:35:11.650559 kernel: audit: type=1106 audit(1752669311.644:548): pid=5610 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:11.644000 audit[5610]: CRED_DISP pid=5610 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:11.657018 kernel: audit: type=1104 audit(1752669311.644:549): pid=5610 uid=0 auid=500 ses=21 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:11.664993 systemd[1]: sshd@20-10.244.89.194:22-147.75.109.163:50638.service: Deactivated successfully. Jul 16 12:35:11.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.244.89.194:22-147.75.109.163:50638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:35:11.672310 kernel: audit: type=1131 audit(1752669311.667:550): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.244.89.194:22-147.75.109.163:50638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:35:11.677898 systemd[1]: session-21.scope: Deactivated successfully. Jul 16 12:35:11.677930 systemd-logind[1294]: Session 21 logged out. Waiting for processes to exit. Jul 16 12:35:11.687388 systemd-logind[1294]: Removed session 21. Jul 16 12:35:11.754388 systemd[1]: Started sshd@21-10.244.89.194:22-147.75.109.163:50642.service. Jul 16 12:35:11.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.244.89.194:22-147.75.109.163:50642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:35:11.764805 kernel: audit: type=1130 audit(1752669311.754:551): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.244.89.194:22-147.75.109.163:50642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:35:12.697000 audit[5640]: USER_ACCT pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:12.709122 sshd[5640]: Accepted publickey for core from 147.75.109.163 port 50642 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:35:12.713689 kernel: audit: type=1101 audit(1752669312.697:552): pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:12.715000 audit[5640]: CRED_ACQ pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:12.720686 kernel: audit: type=1103 audit(1752669312.715:553): pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:12.723431 sshd[5640]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:35:12.715000 audit[5640]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffec73e4050 a2=3 a3=0 items=0 ppid=1 pid=5640 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:35:12.727289 kernel: audit: type=1006 audit(1752669312.715:554): pid=5640 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jul 16 12:35:12.727529 kernel: audit: type=1300 audit(1752669312.715:554): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffec73e4050 a2=3 a3=0 items=0 ppid=1 pid=5640 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:35:12.715000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 16 12:35:12.730301 kernel: audit: type=1327 audit(1752669312.715:554): proctitle=737368643A20636F7265205B707269765D Jul 16 12:35:12.740817 systemd[1]: Started session-22.scope. Jul 16 12:35:12.741382 systemd-logind[1294]: New session 22 of user core. Jul 16 12:35:12.747000 audit[5640]: USER_START pid=5640 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:12.753946 kernel: audit: type=1105 audit(1752669312.747:555): pid=5640 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:12.753000 audit[5643]: CRED_ACQ pid=5643 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:15.478909 sshd[5640]: pam_unix(sshd:session): session closed for user core Jul 16 12:35:15.500000 audit[5640]: USER_END pid=5640 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:15.500000 audit[5640]: CRED_DISP pid=5640 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:15.514833 systemd[1]: sshd@21-10.244.89.194:22-147.75.109.163:50642.service: Deactivated successfully. Jul 16 12:35:15.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.244.89.194:22-147.75.109.163:50642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:35:15.520422 systemd[1]: session-22.scope: Deactivated successfully. Jul 16 12:35:15.520935 systemd-logind[1294]: Session 22 logged out. Waiting for processes to exit. Jul 16 12:35:15.528226 systemd-logind[1294]: Removed session 22. Jul 16 12:35:15.926618 systemd[1]: run-containerd-runc-k8s.io-fab454efbfe4481fa747ba040a62985a98a975ecc0f79a35d3fcd73c6e4343cd-runc.9TIK0n.mount: Deactivated successfully. 
Jul 16 12:35:18.211000 audit[5731]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=5731 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:35:18.234265 kernel: kauditd_printk_skb: 4 callbacks suppressed Jul 16 12:35:18.236407 kernel: audit: type=1325 audit(1752669318.211:560): table=filter:132 family=2 entries=20 op=nft_register_rule pid=5731 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:35:18.236641 kernel: audit: type=1300 audit(1752669318.211:560): arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffdd881ad90 a2=0 a3=7ffdd881ad7c items=0 ppid=2334 pid=5731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:35:18.236954 kernel: audit: type=1327 audit(1752669318.211:560): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:35:18.211000 audit[5731]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffdd881ad90 a2=0 a3=7ffdd881ad7c items=0 ppid=2334 pid=5731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:35:18.248862 kernel: audit: type=1325 audit(1752669318.238:561): table=nat:133 family=2 entries=110 op=nft_register_chain pid=5731 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:35:18.249762 kernel: audit: type=1300 audit(1752669318.238:561): arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7ffdd881ad90 a2=0 a3=7ffdd881ad7c items=0 ppid=2334 pid=5731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:35:18.211000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:35:18.238000 audit[5731]: NETFILTER_CFG table=nat:133 family=2 entries=110 op=nft_register_chain pid=5731 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 16 12:35:18.238000 audit[5731]: SYSCALL arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7ffdd881ad90 a2=0 a3=7ffdd881ad7c items=0 ppid=2334 pid=5731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:35:18.238000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:35:18.252682 kernel: audit: type=1327 audit(1752669318.238:561): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 16 12:35:19.823751 update_engine[1295]: I0716 12:35:19.818663 1295 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 16 12:35:19.830632 update_engine[1295]: I0716 12:35:19.830299 1295 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 16 12:35:19.831634 update_engine[1295]: I0716 12:35:19.831568 1295 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 16 12:35:19.832357 update_engine[1295]: E0716 12:35:19.832207 1295 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 16 12:35:19.832357 update_engine[1295]: I0716 12:35:19.832330 1295 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 16 12:35:20.628118 systemd[1]: Started sshd@22-10.244.89.194:22-147.75.109.163:60722.service. Jul 16 12:35:20.643744 kernel: audit: type=1130 audit(1752669320.629:562): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.244.89.194:22-147.75.109.163:60722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:35:20.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.244.89.194:22-147.75.109.163:60722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:35:21.606000 audit[5733]: USER_ACCT pid=5733 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:21.631973 kernel: audit: type=1101 audit(1752669321.606:563): pid=5733 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:21.632329 kernel: audit: type=1103 audit(1752669321.623:564): pid=5733 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:21.623000 audit[5733]: CRED_ACQ pid=5733 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:21.630108 sshd[5733]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:35:21.636977 kernel: audit: type=1006 audit(1752669321.623:565): pid=5733 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jul 16 12:35:21.637037 sshd[5733]: Accepted publickey for core from 147.75.109.163 port 60722 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:35:21.623000 audit[5733]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe36f9c280 a2=3 a3=0 items=0 ppid=1 pid=5733 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:35:21.623000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 16 12:35:21.668713 systemd-logind[1294]: New session 23 of user core. Jul 16 12:35:21.671520 systemd[1]: Started session-23.scope. 
Jul 16 12:35:21.678000 audit[5733]: USER_START pid=5733 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:21.680000 audit[5738]: CRED_ACQ pid=5738 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:22.885574 sshd[5733]: pam_unix(sshd:session): session closed for user core Jul 16 12:35:22.891000 audit[5733]: USER_END pid=5733 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:22.891000 audit[5733]: CRED_DISP pid=5733 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:22.902353 systemd[1]: sshd@22-10.244.89.194:22-147.75.109.163:60722.service: Deactivated successfully. Jul 16 12:35:22.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.244.89.194:22-147.75.109.163:60722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:35:22.908259 systemd[1]: session-23.scope: Deactivated successfully. Jul 16 12:35:22.908941 systemd-logind[1294]: Session 23 logged out. Waiting for processes to exit. Jul 16 12:35:22.910022 systemd-logind[1294]: Removed session 23. Jul 16 12:35:28.046379 systemd[1]: Started sshd@23-10.244.89.194:22-147.75.109.163:60724.service. Jul 16 12:35:28.060173 kernel: kauditd_printk_skb: 7 callbacks suppressed Jul 16 12:35:28.065474 kernel: audit: type=1130 audit(1752669328.047:571): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.244.89.194:22-147.75.109.163:60724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:35:28.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.244.89.194:22-147.75.109.163:60724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 16 12:35:29.010000 audit[5748]: USER_ACCT pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:29.029528 kernel: audit: type=1101 audit(1752669329.010:572): pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:29.030065 sshd[5748]: Accepted publickey for core from 147.75.109.163 port 60724 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:35:29.048537 kernel: audit: type=1103 audit(1752669329.030:573): pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:29.048610 kernel: audit: type=1006 audit(1752669329.030:574): pid=5748 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jul 16 12:35:29.049081 kernel: audit: type=1300 audit(1752669329.030:574): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffddd889af0 a2=3 a3=0 items=0 ppid=1 pid=5748 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:35:29.049606 kernel: audit: type=1327 audit(1752669329.030:574): proctitle=737368643A20636F7265205B707269765D Jul 16 12:35:29.030000 audit[5748]: CRED_ACQ pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:29.030000 audit[5748]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffddd889af0 a2=3 a3=0 items=0 ppid=1 pid=5748 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:35:29.030000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 16 12:35:29.043429 sshd[5748]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:35:29.066319 systemd-logind[1294]: New session 24 of user core. Jul 16 12:35:29.067554 systemd[1]: Started session-24.scope. 
Jul 16 12:35:29.072000 audit[5748]: USER_START pid=5748 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:29.082980 kernel: audit: type=1105 audit(1752669329.072:575): pid=5748 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:29.083055 kernel: audit: type=1103 audit(1752669329.077:576): pid=5751 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:29.077000 audit[5751]: CRED_ACQ pid=5751 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:29.813016 update_engine[1295]: I0716 12:35:29.811509 1295 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 16 12:35:29.820535 update_engine[1295]: I0716 12:35:29.820501 1295 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 16 12:35:29.824861 update_engine[1295]: I0716 12:35:29.824728 1295 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 16 12:35:29.825133 update_engine[1295]: E0716 12:35:29.825045 1295 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 16 12:35:29.825836 update_engine[1295]: I0716 12:35:29.825774 1295 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 16 12:35:29.825836 update_engine[1295]: I0716 12:35:29.825793 1295 omaha_request_action.cc:621] Omaha request response: Jul 16 12:35:29.826902 update_engine[1295]: E0716 12:35:29.826819 1295 omaha_request_action.cc:640] Omaha request network transfer failed. Jul 16 12:35:29.826902 update_engine[1295]: I0716 12:35:29.826871 1295 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 16 12:35:29.826902 update_engine[1295]: I0716 12:35:29.826876 1295 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 16 12:35:29.826902 update_engine[1295]: I0716 12:35:29.826879 1295 update_attempter.cc:306] Processing Done. Jul 16 12:35:29.827851 update_engine[1295]: E0716 12:35:29.827725 1295 update_attempter.cc:619] Update failed. Jul 16 12:35:29.827851 update_engine[1295]: I0716 12:35:29.827739 1295 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 16 12:35:29.827851 update_engine[1295]: I0716 12:35:29.827742 1295 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 16 12:35:29.827851 update_engine[1295]: I0716 12:35:29.827747 1295 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jul 16 12:35:29.830061 update_engine[1295]: I0716 12:35:29.829998 1295 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 16 12:35:29.831356 update_engine[1295]: I0716 12:35:29.831261 1295 omaha_request_action.cc:270] Posting an Omaha request to disabled Jul 16 12:35:29.831356 update_engine[1295]: I0716 12:35:29.831295 1295 omaha_request_action.cc:271] Request: Jul 16 12:35:29.831356 update_engine[1295]: Jul 16 12:35:29.831356 update_engine[1295]: Jul 16 12:35:29.831356 update_engine[1295]: Jul 16 12:35:29.831356 update_engine[1295]: Jul 16 12:35:29.831356 update_engine[1295]: Jul 16 12:35:29.831356 update_engine[1295]: Jul 16 12:35:29.831356 update_engine[1295]: I0716 12:35:29.831301 1295 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 16 12:35:29.832070 update_engine[1295]: I0716 12:35:29.831722 1295 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 16 12:35:29.832070 update_engine[1295]: I0716 12:35:29.831874 1295 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 16 12:35:29.833119 update_engine[1295]: E0716 12:35:29.833062 1295 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 16 12:35:29.833863 update_engine[1295]: I0716 12:35:29.833267 1295 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 16 12:35:29.833863 update_engine[1295]: I0716 12:35:29.833280 1295 omaha_request_action.cc:621] Omaha request response: Jul 16 12:35:29.833863 update_engine[1295]: I0716 12:35:29.833284 1295 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 16 12:35:29.833863 update_engine[1295]: I0716 12:35:29.833288 1295 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 16 12:35:29.833863 update_engine[1295]: I0716 12:35:29.833303 1295 update_attempter.cc:306] Processing Done. Jul 16 12:35:29.833863 update_engine[1295]: I0716 12:35:29.833308 1295 update_attempter.cc:310] Error event sent. 
Jul 16 12:35:29.833863 update_engine[1295]: I0716 12:35:29.833316 1295 update_check_scheduler.cc:74] Next update check in 46m0s Jul 16 12:35:29.856040 locksmithd[1345]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 16 12:35:29.856732 locksmithd[1345]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 16 12:35:30.316871 sshd[5748]: pam_unix(sshd:session): session closed for user core Jul 16 12:35:30.335963 kernel: audit: type=1106 audit(1752669330.319:577): pid=5748 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:30.339664 kernel: audit: type=1104 audit(1752669330.328:578): pid=5748 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:30.319000 audit[5748]: USER_END pid=5748 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:30.328000 audit[5748]: CRED_DISP pid=5748 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:30.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.244.89.194:22-147.75.109.163:60724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:35:30.336081 systemd[1]: sshd@23-10.244.89.194:22-147.75.109.163:60724.service: Deactivated successfully. Jul 16 12:35:30.340035 systemd[1]: session-24.scope: Deactivated successfully. Jul 16 12:35:30.340076 systemd-logind[1294]: Session 24 logged out. Waiting for processes to exit. Jul 16 12:35:30.344709 systemd-logind[1294]: Removed session 24. Jul 16 12:35:35.493050 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 16 12:35:35.493842 kernel: audit: type=1130 audit(1752669335.484:580): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.244.89.194:22-147.75.109.163:53072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:35:35.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.244.89.194:22-147.75.109.163:53072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:35:35.485143 systemd[1]: Started sshd@24-10.244.89.194:22-147.75.109.163:53072.service. 
Jul 16 12:35:36.440000 audit[5761]: USER_ACCT pid=5761 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:36.444594 sshd[5761]: Accepted publickey for core from 147.75.109.163 port 53072 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:35:36.451716 kernel: audit: type=1101 audit(1752669336.440:581): pid=5761 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:36.452339 kernel: audit: type=1103 audit(1752669336.451:582): pid=5761 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:36.451000 audit[5761]: CRED_ACQ pid=5761 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:36.454267 sshd[5761]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:35:36.452000 audit[5761]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcfa77d670 a2=3 a3=0 items=0 ppid=1 pid=5761 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:35:36.469194 kernel: audit: type=1006 audit(1752669336.452:583): pid=5761 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jul 16 12:35:36.469277 kernel: audit: type=1300 audit(1752669336.452:583): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcfa77d670 a2=3 a3=0 items=0 ppid=1 pid=5761 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:35:36.469311 kernel: audit: type=1327 audit(1752669336.452:583): proctitle=737368643A20636F7265205B707269765D Jul 16 12:35:36.452000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 16 12:35:36.478044 systemd-logind[1294]: New session 25 of user core. Jul 16 12:35:36.479625 systemd[1]: Started session-25.scope. 
Jul 16 12:35:36.485000 audit[5761]: USER_START pid=5761 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:36.490685 kernel: audit: type=1105 audit(1752669336.485:584): pid=5761 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:36.491128 kernel: audit: type=1103 audit(1752669336.487:585): pid=5764 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:36.487000 audit[5764]: CRED_ACQ pid=5764 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:37.572701 sshd[5761]: pam_unix(sshd:session): session closed for user core Jul 16 12:35:37.576000 audit[5761]: USER_END pid=5761 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:37.590467 kernel: audit: type=1106 audit(1752669337.576:586): pid=5761 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:37.590541 kernel: audit: type=1104 audit(1752669337.576:587): pid=5761 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:37.576000 audit[5761]: CRED_DISP pid=5761 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 16 12:35:37.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.244.89.194:22-147.75.109.163:53072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:35:37.586140 systemd[1]: sshd@24-10.244.89.194:22-147.75.109.163:53072.service: Deactivated successfully. Jul 16 12:35:37.588501 systemd[1]: session-25.scope: Deactivated successfully. Jul 16 12:35:37.589831 systemd-logind[1294]: Session 25 logged out. Waiting for processes to exit. Jul 16 12:35:37.591155 systemd-logind[1294]: Removed session 25.