Sep 13 00:53:23.907169 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 12 23:13:49 -00 2025
Sep 13 00:53:23.907197 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:53:23.907210 kernel: BIOS-provided physical RAM map:
Sep 13 00:53:23.907217 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 13 00:53:23.907223 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 13 00:53:23.907230 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 13 00:53:23.907238 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Sep 13 00:53:23.907244 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Sep 13 00:53:23.907253 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 00:53:23.907260 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 13 00:53:23.907267 kernel: NX (Execute Disable) protection: active
Sep 13 00:53:23.907273 kernel: SMBIOS 2.8 present.
Sep 13 00:53:23.907280 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Sep 13 00:53:23.907287 kernel: Hypervisor detected: KVM
Sep 13 00:53:23.907295 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:53:23.907304 kernel: kvm-clock: cpu 0, msr 3a19f001, primary cpu clock
Sep 13 00:53:23.907312 kernel: kvm-clock: using sched offset of 3598804616 cycles
Sep 13 00:53:23.907320 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:53:23.907330 kernel: tsc: Detected 2494.136 MHz processor
Sep 13 00:53:23.907387 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:53:23.907395 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:53:23.907402 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Sep 13 00:53:23.907409 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:53:23.907420 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:53:23.907427 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Sep 13 00:53:23.907434 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:53:23.907441 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:53:23.907449 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:53:23.907456 kernel: ACPI: FACS 0x000000007FFE0000 000040
Sep 13 00:53:23.907463 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:53:23.907470 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:53:23.907477 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:53:23.907487 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:53:23.907494 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Sep 13 00:53:23.907502 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Sep 13 00:53:23.907509 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Sep 13 00:53:23.907516 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Sep 13 00:53:23.907523 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Sep 13 00:53:23.907532 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Sep 13 00:53:23.907543 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Sep 13 00:53:23.907562 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 13 00:53:23.907570 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 13 00:53:23.907578 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 13 00:53:23.907586 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 13 00:53:23.907594 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Sep 13 00:53:23.907602 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Sep 13 00:53:23.907612 kernel: Zone ranges:
Sep 13 00:53:23.907620 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:53:23.907628 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Sep 13 00:53:23.907635 kernel: Normal empty
Sep 13 00:53:23.907643 kernel: Movable zone start for each node
Sep 13 00:53:23.907651 kernel: Early memory node ranges
Sep 13 00:53:23.907659 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 13 00:53:23.907666 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Sep 13 00:53:23.907674 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Sep 13 00:53:23.907685 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:53:23.907696 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 13 00:53:23.907704 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Sep 13 00:53:23.907712 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 00:53:23.907720 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:53:23.907728 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:53:23.907735 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 00:53:23.907743 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:53:23.907751 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:53:23.907762 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:53:23.907771 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:53:23.907779 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:53:23.907787 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:53:23.907795 kernel: TSC deadline timer available
Sep 13 00:53:23.907803 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 13 00:53:23.907810 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Sep 13 00:53:23.907818 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:53:23.907827 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:53:23.907844 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Sep 13 00:53:23.907854 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Sep 13 00:53:23.907862 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Sep 13 00:53:23.907870 kernel: pcpu-alloc: [0] 0 1
Sep 13 00:53:23.907877 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Sep 13 00:53:23.907885 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 13 00:53:23.907893 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Sep 13 00:53:23.907901 kernel: Policy zone: DMA32
Sep 13 00:53:23.907910 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:53:23.907921 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:53:23.907929 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:53:23.907937 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 00:53:23.907945 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:53:23.907957 kernel: Memory: 1973276K/2096612K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 123076K reserved, 0K cma-reserved)
Sep 13 00:53:23.907969 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:53:23.907977 kernel: Kernel/User page tables isolation: enabled
Sep 13 00:53:23.907985 kernel: ftrace: allocating 34614 entries in 136 pages
Sep 13 00:53:23.907996 kernel: ftrace: allocated 136 pages with 2 groups
Sep 13 00:53:23.908004 kernel: rcu: Hierarchical RCU implementation.
Sep 13 00:53:23.908013 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:53:23.908021 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:53:23.908044 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:53:23.908052 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:53:23.908061 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:53:23.908072 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:53:23.908080 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 13 00:53:23.908091 kernel: random: crng init done
Sep 13 00:53:23.908099 kernel: Console: colour VGA+ 80x25
Sep 13 00:53:23.908107 kernel: printk: console [tty0] enabled
Sep 13 00:53:23.908115 kernel: printk: console [ttyS0] enabled
Sep 13 00:53:23.908122 kernel: ACPI: Core revision 20210730
Sep 13 00:53:23.908130 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 13 00:53:23.908138 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:53:23.908146 kernel: x2apic enabled
Sep 13 00:53:23.908154 kernel: Switched APIC routing to physical x2apic.
Sep 13 00:53:23.908162 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 00:53:23.908173 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39654230, max_idle_ns: 440795207432 ns
Sep 13 00:53:23.908181 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494136)
Sep 13 00:53:23.908193 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 13 00:53:23.908201 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 13 00:53:23.908209 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:53:23.908217 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:53:23.908225 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:53:23.908235 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 13 00:53:23.908250 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:53:23.908266 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Sep 13 00:53:23.908275 kernel: MDS: Mitigation: Clear CPU buffers
Sep 13 00:53:23.908286 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:53:23.908294 kernel: active return thunk: its_return_thunk
Sep 13 00:53:23.908303 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 00:53:23.908311 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:53:23.908320 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:53:23.908328 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:53:23.908336 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:53:23.908347 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 13 00:53:23.908373 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:53:23.908385 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:53:23.908397 kernel: LSM: Security Framework initializing
Sep 13 00:53:23.908410 kernel: SELinux: Initializing.
Sep 13 00:53:23.908422 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:53:23.908434 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:53:23.908447 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Sep 13 00:53:23.908456 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Sep 13 00:53:23.908464 kernel: signal: max sigframe size: 1776
Sep 13 00:53:23.908473 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:53:23.908481 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 13 00:53:23.908490 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:53:23.908498 kernel: x86: Booting SMP configuration:
Sep 13 00:53:23.908506 kernel: .... node #0, CPUs: #1
Sep 13 00:53:23.908515 kernel: kvm-clock: cpu 1, msr 3a19f041, secondary cpu clock
Sep 13 00:53:23.908526 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Sep 13 00:53:23.908534 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:53:23.908543 kernel: smpboot: Max logical packages: 1
Sep 13 00:53:23.908551 kernel: smpboot: Total of 2 processors activated (9976.54 BogoMIPS)
Sep 13 00:53:23.908560 kernel: devtmpfs: initialized
Sep 13 00:53:23.908568 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:53:23.908577 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:53:23.908586 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:53:23.908594 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:53:23.908606 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:53:23.908614 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:53:23.908623 kernel: audit: type=2000 audit(1757724803.884:1): state=initialized audit_enabled=0 res=1
Sep 13 00:53:23.908631 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:53:23.908639 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:53:23.908648 kernel: cpuidle: using governor menu
Sep 13 00:53:23.908659 kernel: ACPI: bus type PCI registered
Sep 13 00:53:23.908667 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:53:23.908676 kernel: dca service started, version 1.12.1
Sep 13 00:53:23.908687 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:53:23.908698 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:53:23.908711 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:53:23.908720 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:53:23.908728 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:53:23.908737 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:53:23.908745 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 13 00:53:23.908753 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 13 00:53:23.908762 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 13 00:53:23.908773 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:53:23.908782 kernel: ACPI: Interpreter enabled
Sep 13 00:53:23.908790 kernel: ACPI: PM: (supports S0 S5)
Sep 13 00:53:23.908798 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:53:23.908807 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:53:23.908815 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 13 00:53:23.908824 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:53:23.909058 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:53:23.909160 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Sep 13 00:53:23.909172 kernel: acpiphp: Slot [3] registered
Sep 13 00:53:23.909181 kernel: acpiphp: Slot [4] registered
Sep 13 00:53:23.909189 kernel: acpiphp: Slot [5] registered
Sep 13 00:53:23.909198 kernel: acpiphp: Slot [6] registered
Sep 13 00:53:23.909206 kernel: acpiphp: Slot [7] registered
Sep 13 00:53:23.909215 kernel: acpiphp: Slot [8] registered
Sep 13 00:53:23.909223 kernel: acpiphp: Slot [9] registered
Sep 13 00:53:23.909231 kernel: acpiphp: Slot [10] registered
Sep 13 00:53:23.909243 kernel: acpiphp: Slot [11] registered
Sep 13 00:53:23.909252 kernel: acpiphp: Slot [12] registered
Sep 13 00:53:23.909260 kernel: acpiphp: Slot [13] registered
Sep 13 00:53:23.909268 kernel: acpiphp: Slot [14] registered
Sep 13 00:53:23.909276 kernel: acpiphp: Slot [15] registered
Sep 13 00:53:23.909285 kernel: acpiphp: Slot [16] registered
Sep 13 00:53:23.909293 kernel: acpiphp: Slot [17] registered
Sep 13 00:53:23.909301 kernel: acpiphp: Slot [18] registered
Sep 13 00:53:23.909310 kernel: acpiphp: Slot [19] registered
Sep 13 00:53:23.909320 kernel: acpiphp: Slot [20] registered
Sep 13 00:53:23.909328 kernel: acpiphp: Slot [21] registered
Sep 13 00:53:23.909337 kernel: acpiphp: Slot [22] registered
Sep 13 00:53:23.909345 kernel: acpiphp: Slot [23] registered
Sep 13 00:53:23.909366 kernel: acpiphp: Slot [24] registered
Sep 13 00:53:23.909375 kernel: acpiphp: Slot [25] registered
Sep 13 00:53:23.909383 kernel: acpiphp: Slot [26] registered
Sep 13 00:53:23.909391 kernel: acpiphp: Slot [27] registered
Sep 13 00:53:23.909400 kernel: acpiphp: Slot [28] registered
Sep 13 00:53:23.909408 kernel: acpiphp: Slot [29] registered
Sep 13 00:53:23.909420 kernel: acpiphp: Slot [30] registered
Sep 13 00:53:23.909428 kernel: acpiphp: Slot [31] registered
Sep 13 00:53:23.909437 kernel: PCI host bridge to bus 0000:00
Sep 13 00:53:23.909564 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:53:23.909651 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:53:23.909732 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:53:23.909844 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 13 00:53:23.909930 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep 13 00:53:23.910011 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:53:23.910126 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 13 00:53:23.910229 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 13 00:53:23.910336 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Sep 13 00:53:23.910441 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Sep 13 00:53:23.910533 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Sep 13 00:53:23.910620 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Sep 13 00:53:23.910707 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Sep 13 00:53:23.910794 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Sep 13 00:53:23.910896 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Sep 13 00:53:23.911018 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Sep 13 00:53:23.914495 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 13 00:53:23.914646 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Sep 13 00:53:23.914742 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Sep 13 00:53:23.914853 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Sep 13 00:53:23.914951 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Sep 13 00:53:23.915051 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Sep 13 00:53:23.915145 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Sep 13 00:53:23.915239 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Sep 13 00:53:23.915336 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:53:23.915489 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Sep 13 00:53:23.915585 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Sep 13 00:53:23.915675 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Sep 13 00:53:23.915764 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Sep 13 00:53:23.915872 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 13 00:53:23.915970 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Sep 13 00:53:23.916091 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Sep 13 00:53:23.916182 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Sep 13 00:53:23.916285 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Sep 13 00:53:23.916390 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Sep 13 00:53:23.916481 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Sep 13 00:53:23.916571 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Sep 13 00:53:23.916672 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Sep 13 00:53:23.916770 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Sep 13 00:53:23.916862 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Sep 13 00:53:23.916953 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Sep 13 00:53:23.917064 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Sep 13 00:53:23.917154 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Sep 13 00:53:23.917244 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Sep 13 00:53:23.917336 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Sep 13 00:53:23.917446 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Sep 13 00:53:23.917535 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Sep 13 00:53:23.917646 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Sep 13 00:53:23.917658 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:53:23.917667 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:53:23.917676 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:53:23.917688 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:53:23.917697 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 13 00:53:23.917706 kernel: iommu: Default domain type: Translated
Sep 13 00:53:23.917714 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:53:23.917847 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 13 00:53:23.917942 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:53:23.918057 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 13 00:53:23.918070 kernel: vgaarb: loaded
Sep 13 00:53:23.918080 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 00:53:23.918093 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 13 00:53:23.918102 kernel: PTP clock support registered
Sep 13 00:53:23.918111 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:53:23.918120 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:53:23.918142 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 13 00:53:23.918151 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Sep 13 00:53:23.918159 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 13 00:53:23.918168 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 13 00:53:23.918176 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:53:23.918188 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:53:23.918197 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:53:23.918205 kernel: pnp: PnP ACPI init
Sep 13 00:53:23.918222 kernel: pnp: PnP ACPI: found 4 devices
Sep 13 00:53:23.918230 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:53:23.918239 kernel: NET: Registered PF_INET protocol family
Sep 13 00:53:23.918248 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:53:23.918257 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 13 00:53:23.918269 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:53:23.918278 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:53:23.918286 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Sep 13 00:53:23.918295 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 13 00:53:23.918303 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:53:23.918312 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:53:23.918321 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:53:23.918329 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:53:23.918449 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:53:23.918539 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:53:23.918629 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:53:23.918711 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 13 00:53:23.918792 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep 13 00:53:23.918886 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 13 00:53:23.918980 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 13 00:53:23.919092 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Sep 13 00:53:23.919106 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 13 00:53:23.919204 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x740 took 30094 usecs
Sep 13 00:53:23.919215 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:53:23.919225 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 13 00:53:23.919234 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39654230, max_idle_ns: 440795207432 ns
Sep 13 00:53:23.919242 kernel: Initialise system trusted keyrings
Sep 13 00:53:23.919251 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 13 00:53:23.919260 kernel: Key type asymmetric registered
Sep 13 00:53:23.919268 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:53:23.919277 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 13 00:53:23.919288 kernel: io scheduler mq-deadline registered
Sep 13 00:53:23.919297 kernel: io scheduler kyber registered
Sep 13 00:53:23.919308 kernel: io scheduler bfq registered
Sep 13 00:53:23.919321 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:53:23.919333 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Sep 13 00:53:23.919342 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 13 00:53:23.919351 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 13 00:53:23.919370 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:53:23.919379 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:53:23.919392 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:53:23.919401 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:53:23.919409 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:53:23.919418 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 00:53:23.919556 kernel: rtc_cmos 00:03: RTC can wake from S4
Sep 13 00:53:23.919643 kernel: rtc_cmos 00:03: registered as rtc0
Sep 13 00:53:23.919727 kernel: rtc_cmos 00:03: setting system clock to 2025-09-13T00:53:23 UTC (1757724803)
Sep 13 00:53:23.919811 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Sep 13 00:53:23.919828 kernel: intel_pstate: CPU model not supported
Sep 13 00:53:23.919841 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:53:23.919854 kernel: Segment Routing with IPv6
Sep 13 00:53:23.919866 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:53:23.919878 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:53:23.919891 kernel: Key type dns_resolver registered
Sep 13 00:53:23.919905 kernel: IPI shorthand broadcast: enabled
Sep 13 00:53:23.919915 kernel: sched_clock: Marking stable (627370597, 79660621)->(825340304, -118309086)
Sep 13 00:53:23.919924 kernel: registered taskstats version 1
Sep 13 00:53:23.919941 kernel: Loading compiled-in X.509 certificates
Sep 13 00:53:23.919950 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: d4931373bb0d9b9f95da11f02ae07d3649cc6c37'
Sep 13 00:53:23.919958 kernel: Key type .fscrypt registered
Sep 13 00:53:23.919967 kernel: Key type fscrypt-provisioning registered
Sep 13 00:53:23.919975 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:53:23.919984 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:53:23.919993 kernel: ima: No architecture policies found
Sep 13 00:53:23.920001 kernel: clk: Disabling unused clocks
Sep 13 00:53:23.920012 kernel: Freeing unused kernel image (initmem) memory: 47492K
Sep 13 00:53:23.920021 kernel: Write protecting the kernel read-only data: 28672k
Sep 13 00:53:23.920029 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Sep 13 00:53:23.920038 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Sep 13 00:53:23.920047 kernel: Run /init as init process
Sep 13 00:53:23.920056 kernel: with arguments:
Sep 13 00:53:23.920084 kernel: /init
Sep 13 00:53:23.920097 kernel: with environment:
Sep 13 00:53:23.920105 kernel: HOME=/
Sep 13 00:53:23.920117 kernel: TERM=linux
Sep 13 00:53:23.920126 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:53:23.920138 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:53:23.920151 systemd[1]: Detected virtualization kvm.
Sep 13 00:53:23.920160 systemd[1]: Detected architecture x86-64.
Sep 13 00:53:23.920170 systemd[1]: Running in initrd.
Sep 13 00:53:23.920179 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:53:23.920188 systemd[1]: Hostname set to .
Sep 13 00:53:23.920201 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:53:23.920210 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:53:23.920220 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:53:23.920229 systemd[1]: Reached target cryptsetup.target.
Sep 13 00:53:23.920238 systemd[1]: Reached target paths.target.
Sep 13 00:53:23.920247 systemd[1]: Reached target slices.target.
Sep 13 00:53:23.920256 systemd[1]: Reached target swap.target.
Sep 13 00:53:23.920265 systemd[1]: Reached target timers.target.
Sep 13 00:53:23.920277 systemd[1]: Listening on iscsid.socket.
Sep 13 00:53:23.920291 systemd[1]: Listening on iscsiuio.socket.
Sep 13 00:53:23.920304 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 13 00:53:23.920313 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 13 00:53:23.920323 systemd[1]: Listening on systemd-journald.socket.
Sep 13 00:53:23.920332 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:53:23.920341 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:53:23.920350 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:53:23.920376 systemd[1]: Reached target sockets.target.
Sep 13 00:53:23.920386 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:53:23.920398 systemd[1]: Finished network-cleanup.service.
Sep 13 00:53:23.920407 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:53:23.920417 systemd[1]: Starting systemd-journald.service...
Sep 13 00:53:23.920428 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:53:23.920438 systemd[1]: Starting systemd-resolved.service...
Sep 13 00:53:23.920448 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 13 00:53:23.920457 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:53:23.920473 systemd-journald[184]: Journal started
Sep 13 00:53:23.920541 systemd-journald[184]: Runtime Journal (/run/log/journal/02ed8da5f50d41879b5ac005e8555e49) is 4.9M, max 39.5M, 34.5M free.
Sep 13 00:53:23.915492 systemd-modules-load[185]: Inserted module 'overlay'
Sep 13 00:53:23.950106 systemd[1]: Started systemd-journald.service.
Sep 13 00:53:23.950136 kernel: audit: type=1130 audit(1757724803.940:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:23.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:23.927128 systemd-resolved[186]: Positive Trust Anchors:
Sep 13 00:53:23.927138 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:53:23.927172 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 00:53:23.930057 systemd-resolved[186]: Defaulting to hostname 'linux'.
Sep 13 00:53:23.957061 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:53:23.957126 kernel: audit: type=1130 audit(1757724803.953:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:23.957145 kernel: audit: type=1130 audit(1757724803.954:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:23.957161 kernel: audit: type=1130 audit(1757724803.954:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:23.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:23.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:23.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:23.954180 systemd[1]: Started systemd-resolved.service.
Sep 13 00:53:23.964190 kernel: Bridge firewalling registered
Sep 13 00:53:23.954818 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:53:23.974103 kernel: audit: type=1130 audit(1757724803.970:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:23.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:23.955328 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 13 00:53:23.964808 systemd-modules-load[185]: Inserted module 'br_netfilter'
Sep 13 00:53:23.973653 systemd[1]: Reached target nss-lookup.target.
Sep 13 00:53:23.976464 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 13 00:53:23.978175 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 00:53:23.993389 kernel: SCSI subsystem initialized
Sep 13 00:53:23.994082 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 00:53:23.997104 kernel: audit: type=1130 audit(1757724803.994:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:23.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:24.004805 systemd[1]: Finished dracut-cmdline-ask.service. Sep 13 00:53:24.007219 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 00:53:24.007244 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:53:24.007264 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 13 00:53:24.010522 kernel: audit: type=1130 audit(1757724804.006:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:24.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:24.007883 systemd[1]: Starting dracut-cmdline.service... Sep 13 00:53:24.011467 systemd-modules-load[185]: Inserted module 'dm_multipath' Sep 13 00:53:24.012606 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:53:24.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:24.019508 systemd[1]: Starting systemd-sysctl.service... 
Sep 13 00:53:24.022635 kernel: audit: type=1130 audit(1757724804.018:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:24.028155 dracut-cmdline[202]: dracut-dracut-053 Sep 13 00:53:24.031318 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:53:24.034818 kernel: audit: type=1130 audit(1757724804.031:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:24.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:24.034922 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:53:24.127403 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:53:24.148418 kernel: iscsi: registered transport (tcp) Sep 13 00:53:24.175430 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:53:24.175557 kernel: QLogic iSCSI HBA Driver Sep 13 00:53:24.226145 systemd[1]: Finished dracut-cmdline.service. Sep 13 00:53:24.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:24.227813 systemd[1]: Starting dracut-pre-udev.service... 
Sep 13 00:53:24.289439 kernel: raid6: avx2x4 gen() 23510 MB/s Sep 13 00:53:24.306442 kernel: raid6: avx2x4 xor() 5747 MB/s Sep 13 00:53:24.323430 kernel: raid6: avx2x2 gen() 24540 MB/s Sep 13 00:53:24.340438 kernel: raid6: avx2x2 xor() 20698 MB/s Sep 13 00:53:24.357434 kernel: raid6: avx2x1 gen() 20944 MB/s Sep 13 00:53:24.374434 kernel: raid6: avx2x1 xor() 17493 MB/s Sep 13 00:53:24.391446 kernel: raid6: sse2x4 gen() 11211 MB/s Sep 13 00:53:24.408435 kernel: raid6: sse2x4 xor() 6213 MB/s Sep 13 00:53:24.425435 kernel: raid6: sse2x2 gen() 11359 MB/s Sep 13 00:53:24.442437 kernel: raid6: sse2x2 xor() 8359 MB/s Sep 13 00:53:24.459436 kernel: raid6: sse2x1 gen() 10294 MB/s Sep 13 00:53:24.476529 kernel: raid6: sse2x1 xor() 6078 MB/s Sep 13 00:53:24.476656 kernel: raid6: using algorithm avx2x2 gen() 24540 MB/s Sep 13 00:53:24.476679 kernel: raid6: .... xor() 20698 MB/s, rmw enabled Sep 13 00:53:24.477614 kernel: raid6: using avx2x2 recovery algorithm Sep 13 00:53:24.491395 kernel: xor: automatically using best checksumming function avx Sep 13 00:53:24.595405 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 13 00:53:24.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:24.606000 audit: BPF prog-id=7 op=LOAD Sep 13 00:53:24.606000 audit: BPF prog-id=8 op=LOAD Sep 13 00:53:24.606280 systemd[1]: Finished dracut-pre-udev.service. Sep 13 00:53:24.607795 systemd[1]: Starting systemd-udevd.service... Sep 13 00:53:24.622750 systemd-udevd[385]: Using default interface naming scheme 'v252'. Sep 13 00:53:24.628286 systemd[1]: Started systemd-udevd.service. Sep 13 00:53:24.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:24.632266 systemd[1]: Starting dracut-pre-trigger.service... Sep 13 00:53:24.649590 dracut-pre-trigger[396]: rd.md=0: removing MD RAID activation Sep 13 00:53:24.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:24.690594 systemd[1]: Finished dracut-pre-trigger.service. Sep 13 00:53:24.692215 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:53:24.742537 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:53:24.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:24.794538 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Sep 13 00:53:24.827698 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:53:24.827723 kernel: GPT:9289727 != 125829119 Sep 13 00:53:24.827740 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:53:24.827756 kernel: GPT:9289727 != 125829119 Sep 13 00:53:24.827772 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:53:24.827789 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:53:24.827803 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:53:24.827813 kernel: scsi host0: Virtio SCSI HBA Sep 13 00:53:24.830433 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Sep 13 00:53:24.871282 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 13 00:53:24.872764 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (436) Sep 13 00:53:24.880408 kernel: AVX2 version of gcm_enc/dec engaged. 
Sep 13 00:53:24.885274 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 13 00:53:24.980060 kernel: AES CTR mode by8 optimization enabled Sep 13 00:53:24.980086 kernel: ACPI: bus type USB registered Sep 13 00:53:24.980098 kernel: usbcore: registered new interface driver usbfs Sep 13 00:53:24.980109 kernel: usbcore: registered new interface driver hub Sep 13 00:53:24.980119 kernel: usbcore: registered new device driver usb Sep 13 00:53:24.980130 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver Sep 13 00:53:24.980141 kernel: ehci-pci: EHCI PCI platform driver Sep 13 00:53:24.980151 kernel: libata version 3.00 loaded. Sep 13 00:53:24.980165 kernel: ata_piix 0000:00:01.1: version 2.13 Sep 13 00:53:24.980442 kernel: uhci_hcd: USB Universal Host Controller Interface driver Sep 13 00:53:24.980462 kernel: scsi host1: ata_piix Sep 13 00:53:24.980608 kernel: scsi host2: ata_piix Sep 13 00:53:24.980742 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Sep 13 00:53:24.980754 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Sep 13 00:53:24.980765 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Sep 13 00:53:24.980883 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Sep 13 00:53:24.980993 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Sep 13 00:53:24.981142 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180 Sep 13 00:53:24.981256 kernel: hub 1-0:1.0: USB hub found Sep 13 00:53:24.981458 kernel: hub 1-0:1.0: 2 ports detected Sep 13 00:53:24.980431 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 13 00:53:24.984177 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 13 00:53:24.987874 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:53:24.991023 systemd[1]: Starting disk-uuid.service... Sep 13 00:53:24.997708 disk-uuid[505]: Primary Header is updated. 
Sep 13 00:53:24.997708 disk-uuid[505]: Secondary Entries is updated. Sep 13 00:53:24.997708 disk-uuid[505]: Secondary Header is updated. Sep 13 00:53:25.011458 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:53:25.018397 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:53:25.035397 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:53:26.023434 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:53:26.024006 disk-uuid[506]: The operation has completed successfully. Sep 13 00:53:26.064021 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:53:26.064133 systemd[1]: Finished disk-uuid.service. Sep 13 00:53:26.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:26.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:26.065972 systemd[1]: Starting verity-setup.service... Sep 13 00:53:26.084392 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 13 00:53:26.136026 systemd[1]: Found device dev-mapper-usr.device. Sep 13 00:53:26.138909 systemd[1]: Mounting sysusr-usr.mount... Sep 13 00:53:26.140735 systemd[1]: Finished verity-setup.service. Sep 13 00:53:26.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:26.225379 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 13 00:53:26.226084 systemd[1]: Mounted sysusr-usr.mount. Sep 13 00:53:26.227138 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. 
Sep 13 00:53:26.228627 systemd[1]: Starting ignition-setup.service... Sep 13 00:53:26.230351 systemd[1]: Starting parse-ip-for-networkd.service... Sep 13 00:53:26.246711 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:53:26.246790 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:53:26.246809 kernel: BTRFS info (device vda6): has skinny extents Sep 13 00:53:26.262586 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:53:26.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:26.269703 systemd[1]: Finished ignition-setup.service. Sep 13 00:53:26.271196 systemd[1]: Starting ignition-fetch-offline.service... Sep 13 00:53:26.373304 systemd[1]: Finished parse-ip-for-networkd.service. Sep 13 00:53:26.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:26.374000 audit: BPF prog-id=9 op=LOAD Sep 13 00:53:26.375154 systemd[1]: Starting systemd-networkd.service... 
Sep 13 00:53:26.392592 ignition[616]: Ignition 2.14.0 Sep 13 00:53:26.392605 ignition[616]: Stage: fetch-offline Sep 13 00:53:26.392674 ignition[616]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:53:26.392707 ignition[616]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 13 00:53:26.397880 ignition[616]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 13 00:53:26.398034 ignition[616]: parsed url from cmdline: "" Sep 13 00:53:26.398038 ignition[616]: no config URL provided Sep 13 00:53:26.398045 ignition[616]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:53:26.398055 ignition[616]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:53:26.398061 ignition[616]: failed to fetch config: resource requires networking Sep 13 00:53:26.399503 ignition[616]: Ignition finished successfully Sep 13 00:53:26.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:26.401545 systemd[1]: Finished ignition-fetch-offline.service. Sep 13 00:53:26.403162 systemd-networkd[691]: lo: Link UP Sep 13 00:53:26.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:26.403172 systemd-networkd[691]: lo: Gained carrier Sep 13 00:53:26.403780 systemd-networkd[691]: Enumeration completed Sep 13 00:53:26.404104 systemd-networkd[691]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:53:26.404254 systemd[1]: Started systemd-networkd.service. Sep 13 00:53:26.404964 systemd-networkd[691]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. 
Sep 13 00:53:26.405104 systemd[1]: Reached target network.target. Sep 13 00:53:26.405999 systemd-networkd[691]: eth1: Link UP Sep 13 00:53:26.406004 systemd-networkd[691]: eth1: Gained carrier Sep 13 00:53:26.407191 systemd[1]: Starting ignition-fetch.service... Sep 13 00:53:26.408322 systemd[1]: Starting iscsiuio.service... Sep 13 00:53:26.419225 systemd-networkd[691]: eth0: Link UP Sep 13 00:53:26.419230 systemd-networkd[691]: eth0: Gained carrier Sep 13 00:53:26.433444 ignition[693]: Ignition 2.14.0 Sep 13 00:53:26.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:26.433455 ignition[693]: Stage: fetch Sep 13 00:53:26.436366 systemd[1]: Started iscsiuio.service. Sep 13 00:53:26.433589 ignition[693]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:53:26.437943 systemd[1]: Starting iscsid.service... Sep 13 00:53:26.433631 ignition[693]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 13 00:53:26.435547 ignition[693]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 13 00:53:26.435687 ignition[693]: parsed url from cmdline: "" Sep 13 00:53:26.435694 ignition[693]: no config URL provided Sep 13 00:53:26.435702 ignition[693]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:53:26.435715 ignition[693]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:53:26.435751 ignition[693]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Sep 13 00:53:26.442876 iscsid[701]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:53:26.442876 iscsid[701]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. 
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 13 00:53:26.442876 iscsid[701]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 13 00:53:26.442876 iscsid[701]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 13 00:53:26.442876 iscsid[701]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:53:26.442876 iscsid[701]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 13 00:53:26.444476 systemd-networkd[691]: eth1: DHCPv4 address 10.124.0.35/20 acquired from 169.254.169.253 Sep 13 00:53:26.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:26.445202 systemd[1]: Started iscsid.service. Sep 13 00:53:26.448235 systemd[1]: Starting dracut-initqueue.service... Sep 13 00:53:26.448636 systemd-networkd[691]: eth0: DHCPv4 address 161.35.238.92/20, gateway 161.35.224.1 acquired from 169.254.169.253 Sep 13 00:53:26.464140 systemd[1]: Finished dracut-initqueue.service. Sep 13 00:53:26.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:26.465232 systemd[1]: Reached target remote-fs-pre.target. Sep 13 00:53:26.466078 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:53:26.466806 systemd[1]: Reached target remote-fs.target. Sep 13 00:53:26.468364 systemd[1]: Starting dracut-pre-mount.service... 
Sep 13 00:53:26.472570 ignition[693]: GET result: OK Sep 13 00:53:26.477119 ignition[693]: parsing config with SHA512: 603206791e5f63836d9c5050660b83c9481a1812019e08fb6ae8c11deefd502b4e9079156032122cbf75e1d0a757868b624b8be1c7917ed99c8302edc565def1 Sep 13 00:53:26.483064 systemd[1]: Finished dracut-pre-mount.service. Sep 13 00:53:26.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:26.487717 unknown[693]: fetched base config from "system" Sep 13 00:53:26.488218 unknown[693]: fetched base config from "system" Sep 13 00:53:26.488618 unknown[693]: fetched user config from "digitalocean" Sep 13 00:53:26.489627 ignition[693]: fetch: fetch complete Sep 13 00:53:26.490069 ignition[693]: fetch: fetch passed Sep 13 00:53:26.490579 ignition[693]: Ignition finished successfully Sep 13 00:53:26.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:26.492395 systemd[1]: Finished ignition-fetch.service. Sep 13 00:53:26.493679 systemd[1]: Starting ignition-kargs.service... Sep 13 00:53:26.505330 ignition[717]: Ignition 2.14.0 Sep 13 00:53:26.505344 ignition[717]: Stage: kargs Sep 13 00:53:26.505530 ignition[717]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:53:26.505550 ignition[717]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 13 00:53:26.507405 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 13 00:53:26.510014 ignition[717]: kargs: kargs passed Sep 13 00:53:26.510076 ignition[717]: Ignition finished successfully Sep 13 00:53:26.511664 systemd[1]: Finished ignition-kargs.service. 
Sep 13 00:53:26.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:26.513153 systemd[1]: Starting ignition-disks.service... Sep 13 00:53:26.523139 ignition[723]: Ignition 2.14.0 Sep 13 00:53:26.523802 ignition[723]: Stage: disks Sep 13 00:53:26.524272 ignition[723]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:53:26.524775 ignition[723]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 13 00:53:26.526703 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 13 00:53:26.529289 ignition[723]: disks: disks passed Sep 13 00:53:26.530585 ignition[723]: Ignition finished successfully Sep 13 00:53:26.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:26.531870 systemd[1]: Finished ignition-disks.service. Sep 13 00:53:26.532398 systemd[1]: Reached target initrd-root-device.target. Sep 13 00:53:26.532775 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:53:26.533325 systemd[1]: Reached target local-fs.target. Sep 13 00:53:26.533948 systemd[1]: Reached target sysinit.target. Sep 13 00:53:26.534496 systemd[1]: Reached target basic.target. Sep 13 00:53:26.536245 systemd[1]: Starting systemd-fsck-root.service... Sep 13 00:53:26.551725 systemd-fsck[732]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 13 00:53:26.555808 systemd[1]: Finished systemd-fsck-root.service. Sep 13 00:53:26.557125 systemd[1]: Mounting sysroot.mount... 
Sep 13 00:53:26.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:26.569380 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 13 00:53:26.569619 systemd[1]: Mounted sysroot.mount. Sep 13 00:53:26.570175 systemd[1]: Reached target initrd-root-fs.target. Sep 13 00:53:26.571913 systemd[1]: Mounting sysroot-usr.mount... Sep 13 00:53:26.573107 systemd[1]: Starting flatcar-digitalocean-network.service... Sep 13 00:53:26.574823 systemd[1]: Starting flatcar-metadata-hostname.service... Sep 13 00:53:26.575250 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:53:26.575285 systemd[1]: Reached target ignition-diskful.target. Sep 13 00:53:26.579173 systemd[1]: Mounted sysroot-usr.mount. Sep 13 00:53:26.583299 systemd[1]: Starting initrd-setup-root.service... Sep 13 00:53:26.593700 initrd-setup-root[744]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:53:26.606231 initrd-setup-root[752]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:53:26.615549 initrd-setup-root[762]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:53:26.625547 initrd-setup-root[772]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:53:26.698948 coreos-metadata[739]: Sep 13 00:53:26.698 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 13 00:53:26.700043 systemd[1]: Finished initrd-setup-root.service. Sep 13 00:53:26.701121 coreos-metadata[738]: Sep 13 00:53:26.700 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 13 00:53:26.701698 systemd[1]: Starting ignition-mount.service... 
Sep 13 00:53:26.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:26.703077 systemd[1]: Starting sysroot-boot.service... Sep 13 00:53:26.714289 bash[789]: umount: /sysroot/usr/share/oem: not mounted. Sep 13 00:53:26.720382 coreos-metadata[739]: Sep 13 00:53:26.717 INFO Fetch successful Sep 13 00:53:26.723464 coreos-metadata[738]: Sep 13 00:53:26.723 INFO Fetch successful Sep 13 00:53:26.724964 ignition[790]: INFO : Ignition 2.14.0 Sep 13 00:53:26.724964 ignition[790]: INFO : Stage: mount Sep 13 00:53:26.725931 ignition[790]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:53:26.725931 ignition[790]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 13 00:53:26.728379 ignition[790]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 13 00:53:26.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:26.731201 ignition[790]: INFO : mount: mount passed Sep 13 00:53:26.731201 ignition[790]: INFO : Ignition finished successfully Sep 13 00:53:26.730618 systemd[1]: Finished ignition-mount.service. Sep 13 00:53:26.733161 coreos-metadata[739]: Sep 13 00:53:26.732 INFO wrote hostname ci-3510.3.8-n-b7c626372f to /sysroot/etc/hostname Sep 13 00:53:26.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:26.734396 systemd[1]: Finished flatcar-metadata-hostname.service. 
Sep 13 00:53:26.736316 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Sep 13 00:53:26.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:26.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:26.736413 systemd[1]: Finished flatcar-digitalocean-network.service. Sep 13 00:53:26.751459 systemd[1]: Finished sysroot-boot.service. Sep 13 00:53:26.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:27.154956 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 00:53:27.165411 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (798) Sep 13 00:53:27.174045 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:53:27.174115 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:53:27.174129 kernel: BTRFS info (device vda6): has skinny extents Sep 13 00:53:27.179101 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 13 00:53:27.180877 systemd[1]: Starting ignition-files.service... 
Sep 13 00:53:27.202650 ignition[818]: INFO : Ignition 2.14.0
Sep 13 00:53:27.203402 ignition[818]: INFO : Stage: files
Sep 13 00:53:27.203924 ignition[818]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:53:27.204426 ignition[818]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Sep 13 00:53:27.207000 ignition[818]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 13 00:53:27.210193 ignition[818]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:53:27.211819 ignition[818]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:53:27.211819 ignition[818]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:53:27.214481 ignition[818]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:53:27.215017 ignition[818]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:53:27.215532 ignition[818]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:53:27.215484 unknown[818]: wrote ssh authorized keys file for user: core
Sep 13 00:53:27.216628 ignition[818]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 13 00:53:27.216628 ignition[818]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 13 00:53:27.216628 ignition[818]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:53:27.216628 ignition[818]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 13 00:53:27.257280 ignition[818]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 13 00:53:27.452946 ignition[818]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:53:27.452946 ignition[818]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:53:27.454733 ignition[818]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:53:27.454733 ignition[818]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:53:27.454733 ignition[818]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:53:27.454733 ignition[818]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:53:27.454733 ignition[818]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:53:27.454733 ignition[818]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:53:27.454733 ignition[818]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:53:27.454733 ignition[818]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:53:27.454733 ignition[818]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:53:27.454733 ignition[818]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:53:27.454733 ignition[818]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:53:27.454733 ignition[818]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:53:27.454733 ignition[818]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 13 00:53:27.744811 ignition[818]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 13 00:53:27.879545 systemd-networkd[691]: eth0: Gained IPv6LL
Sep 13 00:53:28.149232 ignition[818]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:53:28.149232 ignition[818]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service"
Sep 13 00:53:28.149232 ignition[818]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service"
Sep 13 00:53:28.149232 ignition[818]: INFO : files: op(d): [started] processing unit "containerd.service"
Sep 13 00:53:28.152061 ignition[818]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 13 00:53:28.152061 ignition[818]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 13 00:53:28.152061 ignition[818]: INFO : files: op(d): [finished] processing unit "containerd.service"
Sep 13 00:53:28.152061 ignition[818]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Sep 13 00:53:28.152061 ignition[818]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:53:28.152061 ignition[818]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:53:28.152061 ignition[818]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Sep 13 00:53:28.152061 ignition[818]: INFO : files: op(11): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Sep 13 00:53:28.152061 ignition[818]: INFO : files: op(11): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Sep 13 00:53:28.152061 ignition[818]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:53:28.152061 ignition[818]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:53:28.157935 ignition[818]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:53:28.158460 ignition[818]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:53:28.158460 ignition[818]: INFO : files: files passed
Sep 13 00:53:28.158460 ignition[818]: INFO : Ignition finished successfully
Sep 13 00:53:28.160227 systemd[1]: Finished ignition-files.service.
Sep 13 00:53:28.166640 kernel: kauditd_printk_skb: 29 callbacks suppressed
Sep 13 00:53:28.166666 kernel: audit: type=1130 audit(1757724808.160:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.161724 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Sep 13 00:53:28.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.164256 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Sep 13 00:53:28.174803 kernel: audit: type=1130 audit(1757724808.169:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.174831 kernel: audit: type=1131 audit(1757724808.169:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.165289 systemd[1]: Starting ignition-quench.service...
Sep 13 00:53:28.175411 initrd-setup-root-after-ignition[843]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:53:28.169184 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:53:28.179655 kernel: audit: type=1130 audit(1757724808.176:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.169280 systemd[1]: Finished ignition-quench.service.
Sep 13 00:53:28.175897 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Sep 13 00:53:28.176552 systemd[1]: Reached target ignition-complete.target.
Sep 13 00:53:28.180843 systemd[1]: Starting initrd-parse-etc.service...
Sep 13 00:53:28.199392 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:53:28.200075 systemd[1]: Finished initrd-parse-etc.service.
Sep 13 00:53:28.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.205421 kernel: audit: type=1130 audit(1757724808.200:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.205478 kernel: audit: type=1131 audit(1757724808.202:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.205621 systemd[1]: Reached target initrd-fs.target.
Sep 13 00:53:28.206113 systemd[1]: Reached target initrd.target.
Sep 13 00:53:28.206778 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Sep 13 00:53:28.207768 systemd[1]: Starting dracut-pre-pivot.service...
Sep 13 00:53:28.222658 systemd[1]: Finished dracut-pre-pivot.service.
Sep 13 00:53:28.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.225048 systemd[1]: Starting initrd-cleanup.service...
Sep 13 00:53:28.226762 kernel: audit: type=1130 audit(1757724808.223:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.236573 systemd[1]: Stopped target nss-lookup.target.
Sep 13 00:53:28.237541 systemd[1]: Stopped target remote-cryptsetup.target.
Sep 13 00:53:28.238405 systemd[1]: Stopped target timers.target.
Sep 13 00:53:28.245016 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:53:28.245149 systemd[1]: Stopped dracut-pre-pivot.service.
Sep 13 00:53:28.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.246147 systemd[1]: Stopped target initrd.target.
Sep 13 00:53:28.249438 kernel: audit: type=1131 audit(1757724808.245:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.249920 systemd[1]: Stopped target basic.target.
Sep 13 00:53:28.250842 systemd[1]: Stopped target ignition-complete.target.
Sep 13 00:53:28.251717 systemd[1]: Stopped target ignition-diskful.target.
Sep 13 00:53:28.252585 systemd[1]: Stopped target initrd-root-device.target.
Sep 13 00:53:28.253411 systemd[1]: Stopped target remote-fs.target.
Sep 13 00:53:28.254209 systemd[1]: Stopped target remote-fs-pre.target.
Sep 13 00:53:28.255056 systemd[1]: Stopped target sysinit.target.
Sep 13 00:53:28.255917 systemd[1]: Stopped target local-fs.target.
Sep 13 00:53:28.256733 systemd[1]: Stopped target local-fs-pre.target.
Sep 13 00:53:28.257668 systemd[1]: Stopped target swap.target.
Sep 13 00:53:28.258428 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:53:28.258978 systemd[1]: Stopped dracut-pre-mount.service.
Sep 13 00:53:28.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.260010 systemd[1]: Stopped target cryptsetup.target.
Sep 13 00:53:28.262487 kernel: audit: type=1131 audit(1757724808.259:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.262802 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:53:28.262935 systemd[1]: Stopped dracut-initqueue.service.
Sep 13 00:53:28.263729 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:53:28.267159 kernel: audit: type=1131 audit(1757724808.263:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.263829 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 13 00:53:28.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.266813 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:53:28.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.266923 systemd[1]: Stopped ignition-files.service.
Sep 13 00:53:28.267540 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 13 00:53:28.267656 systemd[1]: Stopped flatcar-metadata-hostname.service.
Sep 13 00:53:28.269158 systemd[1]: Stopping ignition-mount.service...
Sep 13 00:53:28.269879 systemd[1]: Stopping iscsiuio.service...
Sep 13 00:53:28.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.274871 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:53:28.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.275059 systemd[1]: Stopped kmod-static-nodes.service.
Sep 13 00:53:28.290645 ignition[856]: INFO : Ignition 2.14.0
Sep 13 00:53:28.290645 ignition[856]: INFO : Stage: umount
Sep 13 00:53:28.290645 ignition[856]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:53:28.290645 ignition[856]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Sep 13 00:53:28.290645 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 13 00:53:28.290645 ignition[856]: INFO : umount: umount passed
Sep 13 00:53:28.290645 ignition[856]: INFO : Ignition finished successfully
Sep 13 00:53:28.276754 systemd[1]: Stopping sysroot-boot.service...
Sep 13 00:53:28.285863 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:53:28.286097 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 13 00:53:28.286601 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:53:28.286697 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 13 00:53:28.288879 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 13 00:53:28.288981 systemd[1]: Stopped iscsiuio.service.
Sep 13 00:53:28.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.298048 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:53:28.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.298140 systemd[1]: Stopped ignition-mount.service.
Sep 13 00:53:28.299765 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:53:28.301533 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:53:28.301655 systemd[1]: Finished initrd-cleanup.service.
Sep 13 00:53:28.310530 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:53:28.310601 systemd[1]: Stopped ignition-disks.service.
Sep 13 00:53:28.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.310959 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:53:28.310995 systemd[1]: Stopped ignition-kargs.service.
Sep 13 00:53:28.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.312205 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 13 00:53:28.312239 systemd[1]: Stopped ignition-fetch.service.
Sep 13 00:53:28.312543 systemd[1]: Stopped target network.target.
Sep 13 00:53:28.312798 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:53:28.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.312832 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 13 00:53:28.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.313134 systemd[1]: Stopped target paths.target.
Sep 13 00:53:28.315112 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:53:28.318438 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 13 00:53:28.334000 audit: BPF prog-id=6 op=UNLOAD
Sep 13 00:53:28.323326 systemd[1]: Stopped target slices.target.
Sep 13 00:53:28.324583 systemd[1]: Stopped target sockets.target.
Sep 13 00:53:28.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.325155 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:53:28.325189 systemd[1]: Closed iscsid.socket.
Sep 13 00:53:28.325771 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:53:28.325825 systemd[1]: Closed iscsiuio.socket.
Sep 13 00:53:28.326144 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:53:28.326192 systemd[1]: Stopped ignition-setup.service.
Sep 13 00:53:28.326856 systemd[1]: Stopping systemd-networkd.service...
Sep 13 00:53:28.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.327427 systemd[1]: Stopping systemd-resolved.service...
Sep 13 00:53:28.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.329471 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:53:28.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.329563 systemd[1]: Stopped systemd-resolved.service.
Sep 13 00:53:28.330418 systemd-networkd[691]: eth1: DHCPv6 lease lost
Sep 13 00:53:28.332148 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:53:28.332231 systemd[1]: Stopped sysroot-boot.service.
Sep 13 00:53:28.333081 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:53:28.333122 systemd[1]: Stopped initrd-setup-root.service.
Sep 13 00:53:28.334498 systemd-networkd[691]: eth0: DHCPv6 lease lost
Sep 13 00:53:28.348000 audit: BPF prog-id=9 op=UNLOAD
Sep 13 00:53:28.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.335502 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:53:28.335593 systemd[1]: Stopped systemd-networkd.service.
Sep 13 00:53:28.336480 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:53:28.336513 systemd[1]: Closed systemd-networkd.socket.
Sep 13 00:53:28.338164 systemd[1]: Stopping network-cleanup.service...
Sep 13 00:53:28.338802 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:53:28.338862 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 13 00:53:28.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.339487 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:53:28.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.339528 systemd[1]: Stopped systemd-sysctl.service.
Sep 13 00:53:28.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.340158 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:53:28.340199 systemd[1]: Stopped systemd-modules-load.service.
Sep 13 00:53:28.340901 systemd[1]: Stopping systemd-udevd.service...
Sep 13 00:53:28.347308 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 13 00:53:28.348978 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:53:28.349141 systemd[1]: Stopped systemd-udevd.service.
Sep 13 00:53:28.350204 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:53:28.350253 systemd[1]: Closed systemd-udevd-control.socket.
Sep 13 00:53:28.352326 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:53:28.352423 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 13 00:53:28.353020 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:53:28.353064 systemd[1]: Stopped dracut-pre-udev.service.
Sep 13 00:53:28.353726 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:53:28.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.353821 systemd[1]: Stopped dracut-cmdline.service.
Sep 13 00:53:28.354309 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:53:28.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.354347 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 13 00:53:28.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:28.356098 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 13 00:53:28.365532 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:53:28.365607 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 13 00:53:28.366632 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:53:28.366727 systemd[1]: Stopped network-cleanup.service.
Sep 13 00:53:28.367301 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:53:28.367409 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 13 00:53:28.367874 systemd[1]: Reached target initrd-switch-root.target.
Sep 13 00:53:28.369236 systemd[1]: Starting initrd-switch-root.service...
Sep 13 00:53:28.378493 systemd[1]: Switching root.
Sep 13 00:53:28.380000 audit: BPF prog-id=8 op=UNLOAD
Sep 13 00:53:28.380000 audit: BPF prog-id=7 op=UNLOAD
Sep 13 00:53:28.383000 audit: BPF prog-id=5 op=UNLOAD
Sep 13 00:53:28.383000 audit: BPF prog-id=4 op=UNLOAD
Sep 13 00:53:28.383000 audit: BPF prog-id=3 op=UNLOAD
Sep 13 00:53:28.397020 iscsid[701]: iscsid shutting down.
Sep 13 00:53:28.397686 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:53:28.397824 systemd-journald[184]: Journal stopped
Sep 13 00:53:31.694823 kernel: SELinux: Class mctp_socket not defined in policy.
Sep 13 00:53:31.694893 kernel: SELinux: Class anon_inode not defined in policy.
Sep 13 00:53:31.694913 kernel: SELinux: the above unknown classes and permissions will be allowed
Sep 13 00:53:31.694926 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:53:31.694944 kernel: SELinux: policy capability open_perms=1
Sep 13 00:53:31.694959 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:53:31.694972 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:53:31.694994 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:53:31.695012 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:53:31.695023 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:53:31.695041 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:53:31.695056 systemd[1]: Successfully loaded SELinux policy in 42.235ms.
Sep 13 00:53:31.695081 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.272ms.
Sep 13 00:53:31.695095 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:53:31.695112 systemd[1]: Detected virtualization kvm.
Sep 13 00:53:31.695130 systemd[1]: Detected architecture x86-64.
Sep 13 00:53:31.695142 systemd[1]: Detected first boot.
Sep 13 00:53:31.695155 systemd[1]: Hostname set to .
Sep 13 00:53:31.695167 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:53:31.695181 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 13 00:53:31.695197 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:53:31.695209 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:53:31.695227 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:53:31.695241 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:53:31.695256 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:53:31.695273 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Sep 13 00:53:31.695285 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 13 00:53:31.695298 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 13 00:53:31.695317 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Sep 13 00:53:31.695334 systemd[1]: Created slice system-getty.slice.
Sep 13 00:53:31.695346 systemd[1]: Created slice system-modprobe.slice.
Sep 13 00:53:31.695377 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 13 00:53:31.695392 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 13 00:53:31.695406 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 13 00:53:31.695418 systemd[1]: Created slice user.slice.
Sep 13 00:53:31.695431 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:53:31.695443 systemd[1]: Started systemd-ask-password-wall.path.
Sep 13 00:53:31.695457 systemd[1]: Set up automount boot.automount.
Sep 13 00:53:31.695473 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 13 00:53:31.695486 systemd[1]: Reached target integritysetup.target.
Sep 13 00:53:31.695499 systemd[1]: Reached target remote-cryptsetup.target.
Sep 13 00:53:31.695513 systemd[1]: Reached target remote-fs.target.
Sep 13 00:53:31.695525 systemd[1]: Reached target slices.target.
Sep 13 00:53:31.695538 systemd[1]: Reached target swap.target.
Sep 13 00:53:31.695554 systemd[1]: Reached target torcx.target.
Sep 13 00:53:31.695566 systemd[1]: Reached target veritysetup.target.
Sep 13 00:53:31.695578 systemd[1]: Listening on systemd-coredump.socket.
Sep 13 00:53:31.695590 systemd[1]: Listening on systemd-initctl.socket.
Sep 13 00:53:31.695604 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 13 00:53:31.695616 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 13 00:53:31.695628 systemd[1]: Listening on systemd-journald.socket.
Sep 13 00:53:31.695647 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:53:31.695659 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:53:31.695671 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:53:31.695687 systemd[1]: Listening on systemd-userdbd.socket.
Sep 13 00:53:31.695700 systemd[1]: Mounting dev-hugepages.mount...
Sep 13 00:53:31.695712 systemd[1]: Mounting dev-mqueue.mount...
Sep 13 00:53:31.695724 systemd[1]: Mounting media.mount...
Sep 13 00:53:31.695736 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:53:31.695748 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 13 00:53:31.695761 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 13 00:53:31.695773 systemd[1]: Mounting tmp.mount...
Sep 13 00:53:31.695786 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 13 00:53:31.695802 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:53:31.695814 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:53:31.695827 systemd[1]: Starting modprobe@configfs.service...
Sep 13 00:53:31.695839 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:53:31.695852 systemd[1]: Starting modprobe@drm.service...
Sep 13 00:53:31.695864 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:53:31.695877 systemd[1]: Starting modprobe@fuse.service...
Sep 13 00:53:31.695889 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:53:31.695902 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:53:31.695923 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 13 00:53:31.695935 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Sep 13 00:53:31.695947 systemd[1]: Starting systemd-journald.service...
Sep 13 00:53:31.695959 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:53:31.695972 kernel: fuse: init (API version 7.34)
Sep 13 00:53:31.695984 systemd[1]: Starting systemd-network-generator.service...
Sep 13 00:53:31.695997 systemd[1]: Starting systemd-remount-fs.service...
Sep 13 00:53:31.696010 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 00:53:31.696023 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:53:31.696039 systemd[1]: Mounted dev-hugepages.mount.
Sep 13 00:53:31.696052 kernel: loop: module loaded
Sep 13 00:53:31.696064 systemd[1]: Mounted dev-mqueue.mount.
Sep 13 00:53:31.696077 systemd[1]: Mounted media.mount.
Sep 13 00:53:31.696090 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 13 00:53:31.696103 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 13 00:53:31.696116 systemd[1]: Mounted tmp.mount.
Sep 13 00:53:31.696128 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:53:31.696141 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:53:31.696156 systemd[1]: Finished modprobe@configfs.service.
Sep 13 00:53:31.696168 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:53:31.696180 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:53:31.696198 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:53:31.696210 systemd[1]: Finished modprobe@drm.service.
Sep 13 00:53:31.696222 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:53:31.696234 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:53:31.696248 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:53:31.696260 systemd[1]: Finished modprobe@fuse.service.
Sep 13 00:53:31.696276 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:53:31.696288 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:53:31.696307 systemd-journald[997]: Journal started
Sep 13 00:53:31.700131 systemd-journald[997]: Runtime Journal (/run/log/journal/02ed8da5f50d41879b5ac005e8555e49) is 4.9M, max 39.5M, 34.5M free.
Sep 13 00:53:31.700220 systemd[1]: Finished systemd-network-generator.service.
Sep 13 00:53:31.700255 systemd[1]: Finished systemd-remount-fs.service.
Sep 13 00:53:31.501000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 00:53:31.501000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Sep 13 00:53:31.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:31.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:31.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:31.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:31.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:31.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:31.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:31.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:31.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:31.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:31.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:31.693000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 13 00:53:31.693000 audit[997]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffc97f38110 a2=4000 a3=7ffc97f381ac items=0 ppid=1 pid=997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:31.693000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 13 00:53:31.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:31.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:31.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:31.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:31.707390 systemd[1]: Started systemd-journald.service.
Sep 13 00:53:31.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:31.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:31.705889 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 13 00:53:31.706662 systemd[1]: Finished systemd-modules-load.service.
Sep 13 00:53:31.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:31.708862 systemd[1]: Reached target network-pre.target.
Sep 13 00:53:31.713468 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 13 00:53:31.716056 systemd[1]: Mounting sys-kernel-config.mount...
Sep 13 00:53:31.718715 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:53:31.720900 systemd[1]: Starting systemd-hwdb-update.service...
Sep 13 00:53:31.723210 systemd[1]: Starting systemd-journal-flush.service...
Sep 13 00:53:31.756109 systemd-journald[997]: Time spent on flushing to /var/log/journal/02ed8da5f50d41879b5ac005e8555e49 is 53.666ms for 1078 entries.
Sep 13 00:53:31.756109 systemd-journald[997]: System Journal (/var/log/journal/02ed8da5f50d41879b5ac005e8555e49) is 8.0M, max 195.6M, 187.6M free.
Sep 13 00:53:31.828015 systemd-journald[997]: Received client request to flush runtime journal.
Sep 13 00:53:31.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:31.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:31.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:31.727536 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:53:31.729204 systemd[1]: Starting systemd-random-seed.service...
Sep 13 00:53:31.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:31.734133 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:53:31.739572 systemd[1]: Starting systemd-sysctl.service...
Sep 13 00:53:31.743055 systemd[1]: Starting systemd-sysusers.service...
Sep 13 00:53:31.747757 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 13 00:53:31.748219 systemd[1]: Mounted sys-kernel-config.mount.
Sep 13 00:53:31.763563 systemd[1]: Finished systemd-random-seed.service.
Sep 13 00:53:31.764077 systemd[1]: Reached target first-boot-complete.target.
Sep 13 00:53:31.791213 systemd[1]: Finished systemd-sysctl.service.
Sep 13 00:53:31.812821 systemd[1]: Finished systemd-sysusers.service.
Sep 13 00:53:31.814972 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 00:53:31.829209 systemd[1]: Finished systemd-journal-flush.service.
Sep 13 00:53:31.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:31.841277 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 00:53:31.843222 systemd[1]: Starting systemd-udev-settle.service...
Sep 13 00:53:31.865704 udevadm[1054]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 13 00:53:31.866807 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 00:53:31.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:32.426985 systemd[1]: Finished systemd-hwdb-update.service.
Sep 13 00:53:32.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:32.429325 systemd[1]: Starting systemd-udevd.service...
Sep 13 00:53:32.454581 systemd-udevd[1057]: Using default interface naming scheme 'v252'.
Sep 13 00:53:32.480703 systemd[1]: Started systemd-udevd.service.
Sep 13 00:53:32.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:32.483392 systemd[1]: Starting systemd-networkd.service...
Sep 13 00:53:32.505810 systemd[1]: Starting systemd-userdbd.service...
Sep 13 00:53:32.570826 systemd[1]: Found device dev-ttyS0.device.
Sep 13 00:53:32.574270 systemd[1]: Started systemd-userdbd.service.
Sep 13 00:53:32.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:32.595087 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:53:32.595345 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:53:32.597099 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:53:32.600144 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:53:32.601998 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:53:32.604408 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:53:32.604479 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:53:32.604594 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:53:32.605159 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:53:32.605515 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:53:32.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:32.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:32.606474 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:53:32.606651 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:53:32.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:32.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:32.607260 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:53:32.607453 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:53:32.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:32.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:32.618105 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:53:32.618178 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:53:32.661333 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 13 00:53:32.721290 systemd-networkd[1061]: lo: Link UP
Sep 13 00:53:32.721303 systemd-networkd[1061]: lo: Gained carrier
Sep 13 00:53:32.722018 systemd-networkd[1061]: Enumeration completed
Sep 13 00:53:32.722196 systemd[1]: Started systemd-networkd.service.
Sep 13 00:53:32.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:32.722753 systemd-networkd[1061]: eth1: Configuring with /run/systemd/network/10-6a:07:46:8f:1d:45.network.
Sep 13 00:53:32.724037 systemd-networkd[1061]: eth0: Configuring with /run/systemd/network/10-22:b0:7f:fa:a2:6c.network.
Sep 13 00:53:32.724819 systemd-networkd[1061]: eth1: Link UP
Sep 13 00:53:32.724827 systemd-networkd[1061]: eth1: Gained carrier
Sep 13 00:53:32.729418 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 13 00:53:32.729841 systemd-networkd[1061]: eth0: Link UP
Sep 13 00:53:32.729852 systemd-networkd[1061]: eth0: Gained carrier
Sep 13 00:53:32.742424 kernel: ACPI: button: Power Button [PWRF]
Sep 13 00:53:32.753000 audit[1067]: AVC avc: denied { confidentiality } for pid=1067 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Sep 13 00:53:32.753000 audit[1067]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=563ec525bc20 a1=338ec a2=7f54d695ebc5 a3=5 items=110 ppid=1057 pid=1067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:32.753000 audit: CWD cwd="/"
Sep 13 00:53:32.753000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=1 name=(null) inode=13756 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=2 name=(null) inode=13756 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=3 name=(null) inode=13757 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=4 name=(null) inode=13756 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=5 name=(null) inode=13758 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=6 name=(null) inode=13756 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=7 name=(null) inode=13759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=8 name=(null) inode=13759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=9 name=(null) inode=13760 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=10 name=(null) inode=13759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=11 name=(null) inode=13761 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=12 name=(null) inode=13759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=13 name=(null) inode=13762 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=14 name=(null) inode=13759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=15 name=(null) inode=13763 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=16 name=(null) inode=13759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=17 name=(null) inode=13764 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=18 name=(null) inode=13756 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=19 name=(null) inode=13765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=20 name=(null) inode=13765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=21 name=(null) inode=13766 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=22 name=(null) inode=13765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=23 name=(null) inode=13767 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=24 name=(null) inode=13765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=25 name=(null) inode=13768 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=26 name=(null) inode=13765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=27 name=(null) inode=13769 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=28 name=(null) inode=13765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=29 name=(null) inode=13770 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=30 name=(null) inode=13756 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=31 name=(null) inode=13771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=32 name=(null) inode=13771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=33 name=(null) inode=13772 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=34 name=(null) inode=13771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=35 name=(null) inode=13773 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=36 name=(null) inode=13771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=37 name=(null) inode=13774 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=38 name=(null) inode=13771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=39 name=(null) inode=13775 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=40 name=(null) inode=13771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=41 name=(null) inode=13776 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=42 name=(null) inode=13756 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=43 name=(null) inode=13777 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=44 name=(null) inode=13777 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=45 name=(null) inode=13778 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=46 name=(null) inode=13777 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=47 name=(null) inode=13779 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=48 name=(null) inode=13777 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=49 name=(null) inode=13780 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=50 name=(null) inode=13777 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=51 name=(null) inode=13781 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=52 name=(null) inode=13777 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=53 name=(null) inode=13782 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=55 name=(null) inode=13783 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=56 name=(null) inode=13783 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=57 name=(null) inode=13784 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=58 name=(null) inode=13783 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=59 name=(null) inode=13785 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=60 name=(null) inode=13783 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=61 name=(null) inode=13786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=62 name=(null) inode=13786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=63 name=(null) inode=13787 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=64 name=(null) inode=13786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=65 name=(null) inode=13788 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=66 name=(null) inode=13786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=67 name=(null) inode=13789 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=68 name=(null) inode=13786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=69 name=(null) inode=13790 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=70 name=(null) inode=13786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=71 name=(null) inode=13791 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=72 name=(null) inode=13783 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=73 name=(null) inode=13792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=74 name=(null) inode=13792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=75 name=(null) inode=13793 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=76 name=(null) inode=13792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:53:32.753000 audit: PATH item=77 name=(null) inode=13794 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=78 name=(null) inode=13792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=79 name=(null) inode=13795 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=80 name=(null) inode=13792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=81 name=(null) inode=13796 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=82 name=(null) inode=13792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=83 name=(null) inode=13797 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=84 name=(null) inode=13783 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=85 name=(null) inode=13798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=86 name=(null) inode=13798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=87 name=(null) inode=13799 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=88 name=(null) inode=13798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=89 name=(null) inode=13800 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=90 name=(null) inode=13798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=91 name=(null) inode=13801 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=92 name=(null) inode=13798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=93 name=(null) inode=13802 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=94 name=(null) inode=13798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=95 name=(null) inode=13803 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 
00:53:32.753000 audit: PATH item=96 name=(null) inode=13783 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=97 name=(null) inode=13804 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=98 name=(null) inode=13804 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=99 name=(null) inode=13805 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=100 name=(null) inode=13804 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=101 name=(null) inode=13806 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=102 name=(null) inode=13804 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=103 name=(null) inode=13807 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=104 name=(null) inode=13804 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=105 
name=(null) inode=13808 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=106 name=(null) inode=13804 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=107 name=(null) inode=13809 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PATH item=109 name=(null) inode=13810 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.753000 audit: PROCTITLE proctitle="(udev-worker)" Sep 13 00:53:32.774499 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Sep 13 00:53:32.820426 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 13 00:53:32.825396 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:53:32.937385 kernel: EDAC MC: Ver: 3.0.0 Sep 13 00:53:32.958052 systemd[1]: Finished systemd-udev-settle.service. Sep 13 00:53:32.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:32.960268 systemd[1]: Starting lvm2-activation-early.service... Sep 13 00:53:32.980107 lvm[1100]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Sep 13 00:53:33.008225 systemd[1]: Finished lvm2-activation-early.service.
Sep 13 00:53:33.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.008798 systemd[1]: Reached target cryptsetup.target.
Sep 13 00:53:33.011039 systemd[1]: Starting lvm2-activation.service...
Sep 13 00:53:33.018334 lvm[1102]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:53:33.046197 systemd[1]: Finished lvm2-activation.service.
Sep 13 00:53:33.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.046832 systemd[1]: Reached target local-fs-pre.target.
Sep 13 00:53:33.049431 systemd[1]: Mounting media-configdrive.mount...
Sep 13 00:53:33.050005 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 00:53:33.050078 systemd[1]: Reached target machines.target.
Sep 13 00:53:33.053980 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Sep 13 00:53:33.067651 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Sep 13 00:53:33.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.072379 kernel: ISO 9660 Extensions: RRIP_1991A
Sep 13 00:53:33.074037 systemd[1]: Mounted media-configdrive.mount.
Sep 13 00:53:33.074602 systemd[1]: Reached target local-fs.target.
Sep 13 00:53:33.076765 systemd[1]: Starting ldconfig.service...
Sep 13 00:53:33.077741 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:53:33.077846 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:53:33.079875 systemd[1]: Starting systemd-boot-update.service...
Sep 13 00:53:33.082584 systemd[1]: Starting systemd-machine-id-commit.service...
Sep 13 00:53:33.090898 systemd[1]: Starting systemd-sysext.service...
Sep 13 00:53:33.101517 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1112 (bootctl)
Sep 13 00:53:33.103588 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Sep 13 00:53:33.125343 systemd[1]: Unmounting usr-share-oem.mount...
Sep 13 00:53:33.136505 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Sep 13 00:53:33.138271 systemd[1]: Unmounted usr-share-oem.mount.
Sep 13 00:53:33.147640 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 00:53:33.149944 systemd[1]: Finished systemd-machine-id-commit.service.
Sep 13 00:53:33.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.177388 kernel: loop0: detected capacity change from 0 to 221472
Sep 13 00:53:33.204396 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 00:53:33.226116 systemd-fsck[1119]: fsck.fat 4.2 (2021-01-31)
Sep 13 00:53:33.226116 systemd-fsck[1119]: /dev/vda1: 790 files, 120761/258078 clusters
Sep 13 00:53:33.233848 kernel: loop1: detected capacity change from 0 to 221472
Sep 13 00:53:33.233973 kernel: kauditd_printk_skb: 206 callbacks suppressed
Sep 13 00:53:33.234006 kernel: audit: type=1130 audit(1757724813.231:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.231075 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Sep 13 00:53:33.233653 systemd[1]: Mounting boot.mount...
Sep 13 00:53:33.260641 systemd[1]: Mounted boot.mount.
Sep 13 00:53:33.276747 (sd-sysext)[1125]: Using extensions 'kubernetes'.
Sep 13 00:53:33.279450 (sd-sysext)[1125]: Merged extensions into '/usr'.
Sep 13 00:53:33.305927 systemd[1]: Finished systemd-boot-update.service.
Sep 13 00:53:33.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.307341 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:53:33.309400 kernel: audit: type=1130 audit(1757724813.306:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.309984 systemd[1]: Mounting usr-share-oem.mount...
Sep 13 00:53:33.310818 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:53:33.313082 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:53:33.315840 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:53:33.325627 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:53:33.328601 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:53:33.328841 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:53:33.328992 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:53:33.330269 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:53:33.330484 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:53:33.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.332516 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:53:33.332688 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:53:33.341300 kernel: audit: type=1130 audit(1757724813.331:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.341431 kernel: audit: type=1131 audit(1757724813.331:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.341464 kernel: audit: type=1130 audit(1757724813.338:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.341485 kernel: audit: type=1131 audit(1757724813.338:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.341194 systemd[1]: Mounted usr-share-oem.mount.
Sep 13 00:53:33.346964 systemd[1]: Finished systemd-sysext.service.
Sep 13 00:53:33.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.349976 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:53:33.350470 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:53:33.352952 kernel: audit: type=1130 audit(1757724813.349:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.356710 systemd[1]: Starting ensure-sysext.service...
Sep 13 00:53:33.357408 kernel: audit: type=1130 audit(1757724813.352:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.357657 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:53:33.358155 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:53:33.362931 systemd[1]: Starting systemd-tmpfiles-setup.service...
Sep 13 00:53:33.369416 kernel: audit: type=1131 audit(1757724813.352:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.378556 systemd[1]: Reloading.
Sep 13 00:53:33.407969 systemd-tmpfiles[1144]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Sep 13 00:53:33.411225 systemd-tmpfiles[1144]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:53:33.414758 systemd-tmpfiles[1144]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 00:53:33.528898 ldconfig[1111]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 00:53:33.561153 /usr/lib/systemd/system-generators/torcx-generator[1163]: time="2025-09-13T00:53:33Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 00:53:33.561186 /usr/lib/systemd/system-generators/torcx-generator[1163]: time="2025-09-13T00:53:33Z" level=info msg="torcx already run"
Sep 13 00:53:33.681547 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:53:33.681579 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:53:33.708032 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:53:33.767618 systemd-networkd[1061]: eth1: Gained IPv6LL
Sep 13 00:53:33.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.789259 systemd[1]: Finished ldconfig.service.
Sep 13 00:53:33.794046 kernel: audit: type=1130 audit(1757724813.789:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.793656 systemd[1]: Finished systemd-tmpfiles-setup.service.
Sep 13 00:53:33.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.798149 systemd[1]: Starting audit-rules.service...
Sep 13 00:53:33.800834 systemd[1]: Starting clean-ca-certificates.service...
Sep 13 00:53:33.803704 systemd[1]: Starting systemd-journal-catalog-update.service...
Sep 13 00:53:33.808213 systemd[1]: Starting systemd-resolved.service...
Sep 13 00:53:33.813907 systemd[1]: Starting systemd-timesyncd.service...
Sep 13 00:53:33.818282 systemd[1]: Starting systemd-update-utmp.service...
Sep 13 00:53:33.825818 systemd[1]: Finished clean-ca-certificates.service.
Sep 13 00:53:33.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.828817 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:53:33.834296 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:53:33.838895 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:53:33.843126 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:53:33.848613 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:53:33.849154 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:53:33.849345 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:53:33.849549 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:53:33.851977 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:53:33.852185 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:53:33.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.867292 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:53:33.867494 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:53:33.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.870792 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:53:33.871007 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:53:33.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.872924 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:53:33.873047 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:53:33.875005 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:53:33.879052 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:53:33.883441 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:53:33.891329 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:53:33.892565 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:53:33.892766 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:53:33.892929 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:53:33.894162 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:53:33.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.894385 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:53:33.896850 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:53:33.897064 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:53:33.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.898026 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:53:33.903310 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:53:33.912000 audit[1224]: SYSTEM_BOOT pid=1224 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:33.914571 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:53:33.916995 systemd[1]: Starting modprobe@drm.service...
Sep 13 00:53:33.919131 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:53:33.920390 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:53:33.920605 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:53:33.923373 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 13 00:53:33.924746 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:53:33.927726 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:53:33.927953 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:53:33.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:33.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:33.941848 systemd[1]: Finished ensure-sysext.service. Sep 13 00:53:33.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:33.942893 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 13 00:53:33.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:33.943675 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:53:33.943840 systemd[1]: Finished modprobe@drm.service. 
Sep 13 00:53:33.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:33.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:33.944586 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:53:33.944783 systemd[1]: Finished modprobe@loop.service. Sep 13 00:53:33.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:33.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:33.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:33.945622 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 13 00:53:33.947665 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:53:33.950621 systemd[1]: Starting systemd-update-done.service... Sep 13 00:53:33.957947 systemd[1]: Finished systemd-update-utmp.service. Sep 13 00:53:33.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:33.983124 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:53:33.983312 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:53:33.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:33.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:33.984158 systemd[1]: Finished systemd-update-done.service. Sep 13 00:53:33.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:33.984769 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:53:34.010000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 13 00:53:34.010000 audit[1268]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc41b231b0 a2=420 a3=0 items=0 ppid=1219 pid=1268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:34.010000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 13 00:53:34.011038 augenrules[1268]: No rules Sep 13 00:53:34.012313 systemd[1]: Finished audit-rules.service. Sep 13 00:53:34.036961 systemd-resolved[1222]: Positive Trust Anchors: Sep 13 00:53:34.036977 systemd-resolved[1222]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:53:34.037009 systemd-resolved[1222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 00:53:34.043479 systemd-resolved[1222]: Using system hostname 'ci-3510.3.8-n-b7c626372f'. Sep 13 00:53:34.045895 systemd[1]: Started systemd-resolved.service. Sep 13 00:53:34.046339 systemd[1]: Reached target network.target. Sep 13 00:53:34.046639 systemd[1]: Reached target network-online.target. Sep 13 00:53:34.046982 systemd[1]: Reached target nss-lookup.target. Sep 13 00:53:34.055583 systemd[1]: Started systemd-timesyncd.service. Sep 13 00:53:34.056150 systemd[1]: Reached target sysinit.target. Sep 13 00:53:34.056574 systemd[1]: Started motdgen.path. Sep 13 00:53:34.056984 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 13 00:53:34.057424 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 13 00:53:34.057728 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:53:34.057835 systemd[1]: Reached target paths.target. Sep 13 00:53:34.058146 systemd[1]: Reached target time-set.target. Sep 13 00:53:34.058777 systemd[1]: Started logrotate.timer. Sep 13 00:53:34.059331 systemd[1]: Started mdadm.timer. Sep 13 00:53:34.059720 systemd[1]: Reached target timers.target. Sep 13 00:53:34.060625 systemd[1]: Listening on dbus.socket. Sep 13 00:53:34.062780 systemd[1]: Starting docker.socket... Sep 13 00:53:34.065261 systemd[1]: Listening on sshd.socket. 
Sep 13 00:53:34.065910 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:53:34.066527 systemd[1]: Listening on docker.socket. Sep 13 00:53:34.067007 systemd[1]: Reached target sockets.target. Sep 13 00:53:34.067454 systemd[1]: Reached target basic.target. Sep 13 00:53:34.068082 systemd[1]: System is tainted: cgroupsv1 Sep 13 00:53:34.068134 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 00:53:34.068165 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 00:53:34.069865 systemd[1]: Starting containerd.service... Sep 13 00:53:34.071570 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Sep 13 00:53:34.073452 systemd[1]: Starting dbus.service... Sep 13 00:53:34.075534 systemd[1]: Starting enable-oem-cloudinit.service... Sep 13 00:53:34.080081 systemd[1]: Starting extend-filesystems.service... Sep 13 00:53:34.083759 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 13 00:53:34.085658 systemd[1]: Starting kubelet.service... Sep 13 00:53:34.089618 systemd[1]: Starting motdgen.service... Sep 13 00:53:34.094930 systemd[1]: Starting prepare-helm.service... Sep 13 00:53:34.097106 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 13 00:53:34.099465 systemd[1]: Starting sshd-keygen.service... Sep 13 00:53:34.105493 systemd[1]: Starting systemd-logind.service... Sep 13 00:53:34.105951 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Sep 13 00:53:34.106064 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:53:34.109197 systemd[1]: Starting update-engine.service... Sep 13 00:53:34.116507 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 13 00:53:34.127791 jq[1280]: false Sep 13 00:53:34.131598 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:53:34.131928 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 13 00:53:34.132368 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:53:34.133177 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:53:34.158013 jq[1294]: true Sep 13 00:53:34.161474 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:53:34.161787 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 13 00:53:34.173260 tar[1300]: linux-amd64/helm Sep 13 00:53:34.187351 dbus-daemon[1279]: [system] SELinux support is enabled Sep 13 00:53:34.187924 systemd[1]: Started dbus.service. Sep 13 00:53:34.191072 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:53:34.191117 systemd[1]: Reached target system-config.target. Sep 13 00:53:34.191562 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:53:34.191582 systemd[1]: Reached target user-config.target. 
Sep 13 00:53:34.218076 jq[1316]: true Sep 13 00:53:34.220524 extend-filesystems[1282]: Found loop1 Sep 13 00:53:34.223489 extend-filesystems[1282]: Found vda Sep 13 00:53:34.229188 extend-filesystems[1282]: Found vda1 Sep 13 00:53:34.229688 extend-filesystems[1282]: Found vda2 Sep 13 00:53:34.230179 extend-filesystems[1282]: Found vda3 Sep 13 00:53:34.230617 extend-filesystems[1282]: Found usr Sep 13 00:53:34.230998 extend-filesystems[1282]: Found vda4 Sep 13 00:53:34.231665 extend-filesystems[1282]: Found vda6 Sep 13 00:53:34.231665 extend-filesystems[1282]: Found vda7 Sep 13 00:53:34.231665 extend-filesystems[1282]: Found vda9 Sep 13 00:53:34.231665 extend-filesystems[1282]: Checking size of /dev/vda9 Sep 13 00:53:34.250166 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:53:34.250530 systemd[1]: Finished motdgen.service. Sep 13 00:53:34.267447 extend-filesystems[1282]: Resized partition /dev/vda9 Sep 13 00:53:34.295063 extend-filesystems[1336]: resize2fs 1.46.5 (30-Dec-2021) Sep 13 00:53:34.304384 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Sep 13 00:53:34.318436 update_engine[1293]: I0913 00:53:34.317436 1293 main.cc:92] Flatcar Update Engine starting Sep 13 00:53:34.327984 systemd[1]: Started update-engine.service. Sep 13 00:53:34.330946 systemd[1]: Started locksmithd.service. Sep 13 00:53:34.331814 update_engine[1293]: I0913 00:53:34.331727 1293 update_check_scheduler.cc:74] Next update check in 11m38s Sep 13 00:53:35.450987 systemd-resolved[1222]: Clock change detected. Flushing caches. Sep 13 00:53:35.451243 systemd-timesyncd[1223]: Contacted time server 23.95.49.216:123 (0.flatcar.pool.ntp.org). Sep 13 00:53:35.451494 systemd-timesyncd[1223]: Initial clock synchronization to Sat 2025-09-13 00:53:35.450470 UTC. 
Sep 13 00:53:35.500759 env[1311]: time="2025-09-13T00:53:35.500672098Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 13 00:53:35.504941 bash[1345]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:53:35.506190 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 13 00:53:35.524479 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Sep 13 00:53:35.548030 extend-filesystems[1336]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 13 00:53:35.548030 extend-filesystems[1336]: old_desc_blocks = 1, new_desc_blocks = 8 Sep 13 00:53:35.548030 extend-filesystems[1336]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Sep 13 00:53:35.549575 extend-filesystems[1282]: Resized filesystem in /dev/vda9 Sep 13 00:53:35.549575 extend-filesystems[1282]: Found vdb Sep 13 00:53:35.548611 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:53:35.549087 systemd[1]: Finished extend-filesystems.service. Sep 13 00:53:35.559777 systemd-logind[1291]: Watching system buttons on /dev/input/event1 (Power Button) Sep 13 00:53:35.559809 systemd-logind[1291]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 00:53:35.563635 systemd-logind[1291]: New seat seat0. Sep 13 00:53:35.563864 coreos-metadata[1278]: Sep 13 00:53:35.558 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 13 00:53:35.569792 systemd[1]: Started systemd-logind.service. Sep 13 00:53:35.582769 systemd-networkd[1061]: eth0: Gained IPv6LL Sep 13 00:53:35.584424 coreos-metadata[1278]: Sep 13 00:53:35.584 INFO Fetch successful Sep 13 00:53:35.593561 unknown[1278]: wrote ssh authorized keys file for user: core Sep 13 00:53:35.609510 update-ssh-keys[1352]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:53:35.610387 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
Sep 13 00:53:35.645124 env[1311]: time="2025-09-13T00:53:35.645052492Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:53:35.645320 env[1311]: time="2025-09-13T00:53:35.645292631Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:53:35.657369 env[1311]: time="2025-09-13T00:53:35.655606550Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:53:35.657369 env[1311]: time="2025-09-13T00:53:35.657317612Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:53:35.657979 env[1311]: time="2025-09-13T00:53:35.657938192Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:53:35.658040 env[1311]: time="2025-09-13T00:53:35.657979829Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:53:35.658040 env[1311]: time="2025-09-13T00:53:35.658005814Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 13 00:53:35.658040 env[1311]: time="2025-09-13T00:53:35.658021564Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:53:35.658209 env[1311]: time="2025-09-13T00:53:35.658182339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Sep 13 00:53:35.659910 env[1311]: time="2025-09-13T00:53:35.659870014Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:53:35.663518 env[1311]: time="2025-09-13T00:53:35.663447413Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:53:35.663518 env[1311]: time="2025-09-13T00:53:35.663517938Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:53:35.663705 env[1311]: time="2025-09-13T00:53:35.663675909Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 13 00:53:35.663751 env[1311]: time="2025-09-13T00:53:35.663705799Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:53:35.669849 env[1311]: time="2025-09-13T00:53:35.669784686Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:53:35.669976 env[1311]: time="2025-09-13T00:53:35.669861469Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:53:35.669976 env[1311]: time="2025-09-13T00:53:35.669885774Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:53:35.669976 env[1311]: time="2025-09-13T00:53:35.669964216Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:53:35.670066 env[1311]: time="2025-09-13T00:53:35.669990720Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Sep 13 00:53:35.670066 env[1311]: time="2025-09-13T00:53:35.670012908Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:53:35.670066 env[1311]: time="2025-09-13T00:53:35.670033656Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:53:35.670066 env[1311]: time="2025-09-13T00:53:35.670053783Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:53:35.670168 env[1311]: time="2025-09-13T00:53:35.670074403Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 13 00:53:35.670168 env[1311]: time="2025-09-13T00:53:35.670095827Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:53:35.670168 env[1311]: time="2025-09-13T00:53:35.670117790Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:53:35.670168 env[1311]: time="2025-09-13T00:53:35.670136227Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:53:35.670405 env[1311]: time="2025-09-13T00:53:35.670360525Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:53:35.670590 env[1311]: time="2025-09-13T00:53:35.670562713Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:53:35.671131 env[1311]: time="2025-09-13T00:53:35.671098036Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:53:35.671181 env[1311]: time="2025-09-13T00:53:35.671157033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Sep 13 00:53:35.671212 env[1311]: time="2025-09-13T00:53:35.671181140Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:53:35.671265 env[1311]: time="2025-09-13T00:53:35.671245332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:53:35.671301 env[1311]: time="2025-09-13T00:53:35.671272162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:53:35.671403 env[1311]: time="2025-09-13T00:53:35.671305029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:53:35.671403 env[1311]: time="2025-09-13T00:53:35.671328083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:53:35.671403 env[1311]: time="2025-09-13T00:53:35.671346726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:53:35.671403 env[1311]: time="2025-09-13T00:53:35.671365794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:53:35.671403 env[1311]: time="2025-09-13T00:53:35.671384434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:53:35.671527 env[1311]: time="2025-09-13T00:53:35.671418711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:53:35.671527 env[1311]: time="2025-09-13T00:53:35.671440322Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:53:35.671676 env[1311]: time="2025-09-13T00:53:35.671648984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Sep 13 00:53:35.671725 env[1311]: time="2025-09-13T00:53:35.671679374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:53:35.671725 env[1311]: time="2025-09-13T00:53:35.671699583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:53:35.671725 env[1311]: time="2025-09-13T00:53:35.671717674Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:53:35.671812 env[1311]: time="2025-09-13T00:53:35.671742492Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 13 00:53:35.671812 env[1311]: time="2025-09-13T00:53:35.671762008Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:53:35.671812 env[1311]: time="2025-09-13T00:53:35.671788096Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 13 00:53:35.671889 env[1311]: time="2025-09-13T00:53:35.671834827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 00:53:35.672181 env[1311]: time="2025-09-13T00:53:35.672103935Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:53:35.675341 env[1311]: time="2025-09-13T00:53:35.672196746Z" level=info msg="Connect containerd service" Sep 13 00:53:35.675341 env[1311]: time="2025-09-13T00:53:35.672263918Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:53:35.679415 env[1311]: time="2025-09-13T00:53:35.678135628Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:53:35.679415 env[1311]: time="2025-09-13T00:53:35.678528939Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:53:35.679415 env[1311]: time="2025-09-13T00:53:35.678573016Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:53:35.679415 env[1311]: time="2025-09-13T00:53:35.678621015Z" level=info msg="containerd successfully booted in 0.190486s" Sep 13 00:53:35.679157 systemd[1]: Started containerd.service. 
Sep 13 00:53:35.680457 env[1311]: time="2025-09-13T00:53:35.680359444Z" level=info msg="Start subscribing containerd event" Sep 13 00:53:35.680536 env[1311]: time="2025-09-13T00:53:35.680498836Z" level=info msg="Start recovering state" Sep 13 00:53:35.680658 env[1311]: time="2025-09-13T00:53:35.680623898Z" level=info msg="Start event monitor" Sep 13 00:53:35.680698 env[1311]: time="2025-09-13T00:53:35.680662664Z" level=info msg="Start snapshots syncer" Sep 13 00:53:35.680698 env[1311]: time="2025-09-13T00:53:35.680680539Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:53:35.680698 env[1311]: time="2025-09-13T00:53:35.680691196Z" level=info msg="Start streaming server" Sep 13 00:53:36.463721 locksmithd[1341]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:53:36.538295 tar[1300]: linux-amd64/LICENSE Sep 13 00:53:36.540954 tar[1300]: linux-amd64/README.md Sep 13 00:53:36.554362 systemd[1]: Finished prepare-helm.service. Sep 13 00:53:36.894667 sshd_keygen[1312]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:53:36.925858 systemd[1]: Finished sshd-keygen.service. Sep 13 00:53:36.928416 systemd[1]: Starting issuegen.service... Sep 13 00:53:36.938676 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:53:36.938964 systemd[1]: Finished issuegen.service. Sep 13 00:53:36.942084 systemd[1]: Starting systemd-user-sessions.service... Sep 13 00:53:36.954422 systemd[1]: Finished systemd-user-sessions.service. Sep 13 00:53:36.956730 systemd[1]: Started getty@tty1.service. Sep 13 00:53:36.958920 systemd[1]: Started serial-getty@ttyS0.service. Sep 13 00:53:36.959715 systemd[1]: Reached target getty.target. Sep 13 00:53:36.979807 systemd[1]: Started kubelet.service. Sep 13 00:53:36.980944 systemd[1]: Reached target multi-user.target. Sep 13 00:53:36.983369 systemd[1]: Starting systemd-update-utmp-runlevel.service... 
Sep 13 00:53:36.995730 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 13 00:53:36.995978 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 13 00:53:37.004743 systemd[1]: Startup finished in 5.643s (kernel) + 7.370s (userspace) = 13.014s. Sep 13 00:53:37.597698 systemd[1]: Created slice system-sshd.slice. Sep 13 00:53:37.599292 systemd[1]: Started sshd@0-161.35.238.92:22-147.75.109.163:60818.service. Sep 13 00:53:37.671823 sshd[1396]: Accepted publickey for core from 147.75.109.163 port 60818 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:53:37.677267 kubelet[1388]: E0913 00:53:37.677130 1388 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:53:37.678052 sshd[1396]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:37.681590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:53:37.681760 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:53:37.692730 systemd[1]: Created slice user-500.slice. Sep 13 00:53:37.694095 systemd[1]: Starting user-runtime-dir@500.service... Sep 13 00:53:37.700470 systemd-logind[1291]: New session 1 of user core. Sep 13 00:53:37.706085 systemd[1]: Finished user-runtime-dir@500.service. Sep 13 00:53:37.707958 systemd[1]: Starting user@500.service... Sep 13 00:53:37.714376 (systemd)[1402]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:37.801835 systemd[1402]: Queued start job for default target default.target. Sep 13 00:53:37.802118 systemd[1402]: Reached target paths.target. Sep 13 00:53:37.802136 systemd[1402]: Reached target sockets.target. 
Sep 13 00:53:37.802150 systemd[1402]: Reached target timers.target. Sep 13 00:53:37.802162 systemd[1402]: Reached target basic.target. Sep 13 00:53:37.802211 systemd[1402]: Reached target default.target. Sep 13 00:53:37.802239 systemd[1402]: Startup finished in 78ms. Sep 13 00:53:37.803482 systemd[1]: Started user@500.service. Sep 13 00:53:37.805192 systemd[1]: Started session-1.scope. Sep 13 00:53:37.866369 systemd[1]: Started sshd@1-161.35.238.92:22-147.75.109.163:60822.service. Sep 13 00:53:37.917155 sshd[1411]: Accepted publickey for core from 147.75.109.163 port 60822 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:53:37.918893 sshd[1411]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:37.925517 systemd[1]: Started session-2.scope. Sep 13 00:53:37.925944 systemd-logind[1291]: New session 2 of user core. Sep 13 00:53:37.996754 sshd[1411]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:38.002491 systemd[1]: Started sshd@2-161.35.238.92:22-147.75.109.163:60834.service. Sep 13 00:53:38.003196 systemd[1]: sshd@1-161.35.238.92:22-147.75.109.163:60822.service: Deactivated successfully. Sep 13 00:53:38.008154 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:53:38.008833 systemd-logind[1291]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:53:38.010785 systemd-logind[1291]: Removed session 2. Sep 13 00:53:38.054773 sshd[1416]: Accepted publickey for core from 147.75.109.163 port 60834 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:53:38.056771 sshd[1416]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:38.062382 systemd-logind[1291]: New session 3 of user core. Sep 13 00:53:38.063749 systemd[1]: Started session-3.scope. 
Sep 13 00:53:38.125054 sshd[1416]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:38.129999 systemd[1]: Started sshd@3-161.35.238.92:22-147.75.109.163:60842.service. Sep 13 00:53:38.130647 systemd[1]: sshd@2-161.35.238.92:22-147.75.109.163:60834.service: Deactivated successfully. Sep 13 00:53:38.131932 systemd-logind[1291]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:53:38.132010 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:53:38.138732 systemd-logind[1291]: Removed session 3. Sep 13 00:53:38.184289 sshd[1424]: Accepted publickey for core from 147.75.109.163 port 60842 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:53:38.187217 sshd[1424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:38.193936 systemd[1]: Started session-4.scope. Sep 13 00:53:38.194347 systemd-logind[1291]: New session 4 of user core. Sep 13 00:53:38.260939 sshd[1424]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:38.265182 systemd[1]: sshd@3-161.35.238.92:22-147.75.109.163:60842.service: Deactivated successfully. Sep 13 00:53:38.268666 systemd-logind[1291]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:53:38.270654 systemd[1]: Started sshd@4-161.35.238.92:22-147.75.109.163:60846.service. Sep 13 00:53:38.271915 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:53:38.273747 systemd-logind[1291]: Removed session 4. Sep 13 00:53:38.323640 sshd[1432]: Accepted publickey for core from 147.75.109.163 port 60846 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:53:38.325203 sshd[1432]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:38.330621 systemd[1]: Started session-5.scope. Sep 13 00:53:38.330973 systemd-logind[1291]: New session 5 of user core. 
Sep 13 00:53:38.403514 sudo[1436]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:53:38.404420 sudo[1436]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:53:38.413662 dbus-daemon[1279]: avc: received setenforce notice (enforcing=141841840) Sep 13 00:53:38.416652 sudo[1436]: pam_unix(sudo:session): session closed for user root Sep 13 00:53:38.421684 sshd[1432]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:38.427095 systemd[1]: Started sshd@5-161.35.238.92:22-147.75.109.163:60858.service. Sep 13 00:53:38.427989 systemd[1]: sshd@4-161.35.238.92:22-147.75.109.163:60846.service: Deactivated successfully. Sep 13 00:53:38.429565 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:53:38.429988 systemd-logind[1291]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:53:38.433797 systemd-logind[1291]: Removed session 5. Sep 13 00:53:38.485297 sshd[1439]: Accepted publickey for core from 147.75.109.163 port 60858 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:53:38.486901 sshd[1439]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:38.492385 systemd[1]: Started session-6.scope. Sep 13 00:53:38.493469 systemd-logind[1291]: New session 6 of user core. Sep 13 00:53:38.554681 sudo[1445]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:53:38.555477 sudo[1445]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:53:38.559580 sudo[1445]: pam_unix(sudo:session): session closed for user root Sep 13 00:53:38.565971 sudo[1444]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:53:38.566224 sudo[1444]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:53:38.579195 systemd[1]: Stopping audit-rules.service... 
Sep 13 00:53:38.579000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Sep 13 00:53:38.579000 audit[1448]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdc208c820 a2=420 a3=0 items=0 ppid=1 pid=1448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:38.579000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Sep 13 00:53:38.581058 auditctl[1448]: No rules Sep 13 00:53:38.581377 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:53:38.581636 systemd[1]: Stopped audit-rules.service. Sep 13 00:53:38.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:38.583655 systemd[1]: Starting audit-rules.service... Sep 13 00:53:38.608487 augenrules[1466]: No rules Sep 13 00:53:38.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:38.609000 audit[1444]: USER_END pid=1444 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:53:38.609000 audit[1444]: CRED_DISP pid=1444 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:38.611138 sudo[1444]: pam_unix(sudo:session): session closed for user root Sep 13 00:53:38.609778 systemd[1]: Finished audit-rules.service. Sep 13 00:53:38.614932 sshd[1439]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:38.614000 audit[1439]: USER_END pid=1439 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:53:38.614000 audit[1439]: CRED_DISP pid=1439 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:53:38.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-161.35.238.92:22-147.75.109.163:60858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:38.617921 systemd[1]: sshd@5-161.35.238.92:22-147.75.109.163:60858.service: Deactivated successfully. Sep 13 00:53:38.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-161.35.238.92:22-147.75.109.163:60872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:38.622195 systemd[1]: Started sshd@6-161.35.238.92:22-147.75.109.163:60872.service. Sep 13 00:53:38.627840 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:53:38.628792 systemd-logind[1291]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:53:38.630258 systemd-logind[1291]: Removed session 6. 
Sep 13 00:53:38.672000 audit[1473]: USER_ACCT pid=1473 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:53:38.675154 sshd[1473]: Accepted publickey for core from 147.75.109.163 port 60872 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:53:38.674000 audit[1473]: CRED_ACQ pid=1473 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:53:38.674000 audit[1473]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe0cc4e60 a2=3 a3=0 items=0 ppid=1 pid=1473 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:38.674000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:53:38.676346 sshd[1473]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:38.681471 systemd-logind[1291]: New session 7 of user core. Sep 13 00:53:38.682874 systemd[1]: Started session-7.scope. 
Sep 13 00:53:38.686000 audit[1473]: USER_START pid=1473 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:53:38.688000 audit[1476]: CRED_ACQ pid=1476 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:53:38.743000 audit[1477]: USER_ACCT pid=1477 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:53:38.745196 sudo[1477]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:53:38.743000 audit[1477]: CRED_REFR pid=1477 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:53:38.745593 sudo[1477]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:53:38.746000 audit[1477]: USER_START pid=1477 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:53:38.786714 systemd[1]: Starting docker.service... 
Sep 13 00:53:38.842549 env[1487]: time="2025-09-13T00:53:38.842477581Z" level=info msg="Starting up" Sep 13 00:53:38.845011 env[1487]: time="2025-09-13T00:53:38.844976451Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 00:53:38.845011 env[1487]: time="2025-09-13T00:53:38.845005771Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 00:53:38.845189 env[1487]: time="2025-09-13T00:53:38.845027633Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 00:53:38.845189 env[1487]: time="2025-09-13T00:53:38.845041008Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 00:53:38.849361 env[1487]: time="2025-09-13T00:53:38.849317175Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 00:53:38.849661 env[1487]: time="2025-09-13T00:53:38.849638899Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 00:53:38.849821 env[1487]: time="2025-09-13T00:53:38.849798708Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 00:53:38.849917 env[1487]: time="2025-09-13T00:53:38.849898061Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 00:53:38.857947 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1643084042-merged.mount: Deactivated successfully. Sep 13 00:53:38.933166 env[1487]: time="2025-09-13T00:53:38.933038711Z" level=warning msg="Your kernel does not support cgroup blkio weight" Sep 13 00:53:38.933166 env[1487]: time="2025-09-13T00:53:38.933069295Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Sep 13 00:53:38.933471 env[1487]: time="2025-09-13T00:53:38.933326478Z" level=info msg="Loading containers: start." 
Sep 13 00:53:39.018000 audit[1518]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1518 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:39.018000 audit[1518]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe81004df0 a2=0 a3=7ffe81004ddc items=0 ppid=1487 pid=1518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:39.018000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Sep 13 00:53:39.021000 audit[1520]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1520 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:39.021000 audit[1520]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffec0ee9a30 a2=0 a3=7ffec0ee9a1c items=0 ppid=1487 pid=1520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:39.021000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Sep 13 00:53:39.023000 audit[1522]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1522 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:39.023000 audit[1522]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffeaa54c3f0 a2=0 a3=7ffeaa54c3dc items=0 ppid=1487 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:39.023000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Sep 13 00:53:39.026000 audit[1524]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1524 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:39.026000 audit[1524]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffff02c9e50 a2=0 a3=7ffff02c9e3c items=0 ppid=1487 pid=1524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:39.026000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Sep 13 00:53:39.030000 audit[1526]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1526 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:39.030000 audit[1526]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdfd02e560 a2=0 a3=7ffdfd02e54c items=0 ppid=1487 pid=1526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:39.030000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Sep 13 00:53:39.050000 audit[1531]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1531 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:39.050000 audit[1531]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc6535a330 a2=0 a3=7ffc6535a31c items=0 ppid=1487 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:39.050000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Sep 13 00:53:39.056000 audit[1533]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:39.056000 audit[1533]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd61224b00 a2=0 a3=7ffd61224aec items=0 ppid=1487 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:39.056000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Sep 13 00:53:39.059000 audit[1535]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1535 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:39.059000 audit[1535]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffea2b8c3d0 a2=0 a3=7ffea2b8c3bc items=0 ppid=1487 pid=1535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:39.059000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Sep 13 00:53:39.062000 audit[1537]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1537 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:39.062000 audit[1537]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffdb2cfe3f0 a2=0 a3=7ffdb2cfe3dc items=0 ppid=1487 pid=1537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:39.062000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:53:39.070000 audit[1541]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1541 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:39.070000 audit[1541]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fffd3f3c930 a2=0 a3=7fffd3f3c91c items=0 ppid=1487 pid=1541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:39.070000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:53:39.076000 audit[1542]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1542 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:39.076000 audit[1542]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdf2c5b6b0 a2=0 a3=7ffdf2c5b69c items=0 ppid=1487 pid=1542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:39.076000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:53:39.090448 kernel: Initializing XFRM netlink socket Sep 13 00:53:39.131592 env[1487]: time="2025-09-13T00:53:39.131553738Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Sep 13 00:53:39.161000 audit[1550]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1550 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:39.161000 audit[1550]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7fff66482670 a2=0 a3=7fff6648265c items=0 ppid=1487 pid=1550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:39.161000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Sep 13 00:53:39.174000 audit[1553]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1553 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:39.174000 audit[1553]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffdc0453450 a2=0 a3=7ffdc045343c items=0 ppid=1487 pid=1553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:39.174000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Sep 13 00:53:39.179000 audit[1556]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1556 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:39.179000 audit[1556]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff05f5b110 a2=0 a3=7fff05f5b0fc items=0 ppid=1487 pid=1556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:39.179000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Sep 13 00:53:39.182000 audit[1558]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1558 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:39.182000 audit[1558]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffcd5377b30 a2=0 a3=7ffcd5377b1c items=0 ppid=1487 pid=1558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:39.182000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Sep 13 00:53:39.186000 audit[1560]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1560 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:39.186000 audit[1560]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffc8a4dca80 a2=0 a3=7ffc8a4dca6c items=0 ppid=1487 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:39.186000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Sep 13 00:53:39.189000 audit[1562]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1562 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:39.189000 audit[1562]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffe9e1b9870 a2=0 a3=7ffe9e1b985c items=0 ppid=1487 
pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:39.189000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Sep 13 00:53:39.191000 audit[1564]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1564 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:39.191000 audit[1564]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffee8206660 a2=0 a3=7ffee820664c items=0 ppid=1487 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:39.191000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Sep 13 00:53:39.201000 audit[1567]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1567 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:39.201000 audit[1567]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7fff23a58dc0 a2=0 a3=7fff23a58dac items=0 ppid=1487 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:39.201000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Sep 13 00:53:39.204000 audit[1569]: NETFILTER_CFG table=filter:21 family=2 entries=1 
op=nft_register_rule pid=1569 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:39.204000 audit[1569]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffde15b5de0 a2=0 a3=7ffde15b5dcc items=0 ppid=1487 pid=1569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:39.204000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Sep 13 00:53:39.207000 audit[1571]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:39.207000 audit[1571]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc70309200 a2=0 a3=7ffc703091ec items=0 ppid=1487 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:39.207000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Sep 13 00:53:39.209000 audit[1573]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1573 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:39.209000 audit[1573]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdc701d440 a2=0 a3=7ffdc701d42c items=0 ppid=1487 pid=1573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:39.209000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Sep 13 00:53:39.211945 systemd-networkd[1061]: docker0: Link UP Sep 13 00:53:39.219000 audit[1577]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1577 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:39.219000 audit[1577]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffeaa5c8df0 a2=0 a3=7ffeaa5c8ddc items=0 ppid=1487 pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:39.219000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:53:39.225000 audit[1578]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:39.225000 audit[1578]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd83bcb0a0 a2=0 a3=7ffd83bcb08c items=0 ppid=1487 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:39.225000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:53:39.227802 env[1487]: time="2025-09-13T00:53:39.227753408Z" level=info msg="Loading containers: done." Sep 13 00:53:39.247560 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2121998059-merged.mount: Deactivated successfully. 
Sep 13 00:53:39.252417 env[1487]: time="2025-09-13T00:53:39.252353795Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 13 00:53:39.252915 env[1487]: time="2025-09-13T00:53:39.252885059Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Sep 13 00:53:39.253161 env[1487]: time="2025-09-13T00:53:39.253142140Z" level=info msg="Daemon has completed initialization"
Sep 13 00:53:39.267742 systemd[1]: Started docker.service.
Sep 13 00:53:39.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:39.274868 env[1487]: time="2025-09-13T00:53:39.274650898Z" level=info msg="API listen on /run/docker.sock"
Sep 13 00:53:39.297025 systemd[1]: Starting coreos-metadata.service...
Sep 13 00:53:39.342127 coreos-metadata[1603]: Sep 13 00:53:39.342 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 13 00:53:39.353142 coreos-metadata[1603]: Sep 13 00:53:39.353 INFO Fetch successful
Sep 13 00:53:39.368448 systemd[1]: Finished coreos-metadata.service.
Sep 13 00:53:39.372087 kernel: kauditd_printk_skb: 123 callbacks suppressed
Sep 13 00:53:39.372217 kernel: audit: type=1130 audit(1757724819.367:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:39.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:40.224940 env[1311]: time="2025-09-13T00:53:40.224755746Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\""
Sep 13 00:53:40.759740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3298182795.mount: Deactivated successfully.
Sep 13 00:53:42.147155 env[1311]: time="2025-09-13T00:53:42.147086052Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:42.148923 env[1311]: time="2025-09-13T00:53:42.148875077Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:42.151079 env[1311]: time="2025-09-13T00:53:42.151032399Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:42.153186 env[1311]: time="2025-09-13T00:53:42.153138794Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:42.154229 env[1311]: time="2025-09-13T00:53:42.154191696Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\""
Sep 13 00:53:42.155236 env[1311]: time="2025-09-13T00:53:42.155205341Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 13 00:53:43.778714 env[1311]: time="2025-09-13T00:53:43.778647343Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:43.781336 env[1311]: time="2025-09-13T00:53:43.781280608Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:43.783932 env[1311]: time="2025-09-13T00:53:43.783878879Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:43.786473 env[1311]: time="2025-09-13T00:53:43.786425470Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:43.787403 env[1311]: time="2025-09-13T00:53:43.787353764Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\""
Sep 13 00:53:43.788669 env[1311]: time="2025-09-13T00:53:43.788640798Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 13 00:53:45.134922 env[1311]: time="2025-09-13T00:53:45.134861847Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:45.136380 env[1311]: time="2025-09-13T00:53:45.136341808Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:45.138226 env[1311]: time="2025-09-13T00:53:45.138179867Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:45.140112 env[1311]: time="2025-09-13T00:53:45.140073954Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:45.141211 env[1311]: time="2025-09-13T00:53:45.141159024Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\""
Sep 13 00:53:45.141938 env[1311]: time="2025-09-13T00:53:45.141915070Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 13 00:53:46.217626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1877442706.mount: Deactivated successfully.
Sep 13 00:53:46.962477 env[1311]: time="2025-09-13T00:53:46.962402173Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:46.963787 env[1311]: time="2025-09-13T00:53:46.963750946Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:46.965337 env[1311]: time="2025-09-13T00:53:46.965302342Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:46.967326 env[1311]: time="2025-09-13T00:53:46.967283796Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:46.968836 env[1311]: time="2025-09-13T00:53:46.968166191Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\""
Sep 13 00:53:46.969529 env[1311]: time="2025-09-13T00:53:46.969496934Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 13 00:53:47.476597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount230713119.mount: Deactivated successfully.
Sep 13 00:53:47.932719 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:53:47.937477 kernel: audit: type=1130 audit(1757724827.931:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:47.937604 kernel: audit: type=1131 audit(1757724827.931:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:47.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:47.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:47.932970 systemd[1]: Stopped kubelet.service.
Sep 13 00:53:47.934913 systemd[1]: Starting kubelet.service...
Sep 13 00:53:48.078716 systemd[1]: Started kubelet.service.
Sep 13 00:53:48.081461 kernel: audit: type=1130 audit(1757724828.077:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:48.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:48.172042 kubelet[1632]: E0913 00:53:48.171984    1632 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:53:48.175078 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:53:48.175250 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:53:48.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Sep 13 00:53:48.179432 kernel: audit: type=1131 audit(1757724828.174:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Sep 13 00:53:48.501030 env[1311]: time="2025-09-13T00:53:48.500972262Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:48.502590 env[1311]: time="2025-09-13T00:53:48.502553250Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:48.505121 env[1311]: time="2025-09-13T00:53:48.505081104Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:48.506901 env[1311]: time="2025-09-13T00:53:48.506863654Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:48.508327 env[1311]: time="2025-09-13T00:53:48.508230760Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 13 00:53:48.509218 env[1311]: time="2025-09-13T00:53:48.509179114Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 00:53:48.902117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1159947618.mount: Deactivated successfully.
Sep 13 00:53:48.906557 env[1311]: time="2025-09-13T00:53:48.906499745Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:48.908547 env[1311]: time="2025-09-13T00:53:48.908509337Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:48.910486 env[1311]: time="2025-09-13T00:53:48.910452173Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:48.912482 env[1311]: time="2025-09-13T00:53:48.912449746Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:48.913276 env[1311]: time="2025-09-13T00:53:48.913228339Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 13 00:53:48.913974 env[1311]: time="2025-09-13T00:53:48.913944078Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 13 00:53:49.408795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1569042193.mount: Deactivated successfully.
Sep 13 00:53:51.733006 env[1311]: time="2025-09-13T00:53:51.732955230Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:51.735575 env[1311]: time="2025-09-13T00:53:51.735512241Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:51.737371 env[1311]: time="2025-09-13T00:53:51.737326419Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:51.739445 env[1311]: time="2025-09-13T00:53:51.739374347Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:53:51.740953 env[1311]: time="2025-09-13T00:53:51.740906450Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Sep 13 00:53:54.680149 systemd[1]: Stopped kubelet.service.
Sep 13 00:53:54.687026 kernel: audit: type=1130 audit(1757724834.678:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:54.687199 kernel: audit: type=1131 audit(1757724834.678:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:54.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:54.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:54.682951 systemd[1]: Starting kubelet.service...
Sep 13 00:53:54.721363 systemd[1]: Reloading.
Sep 13 00:53:54.845718 /usr/lib/systemd/system-generators/torcx-generator[1684]: time="2025-09-13T00:53:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 00:53:54.845746 /usr/lib/systemd/system-generators/torcx-generator[1684]: time="2025-09-13T00:53:54Z" level=info msg="torcx already run"
Sep 13 00:53:54.960064 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:53:54.960085 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:53:54.985093 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:53:55.092970 systemd[1]: Started kubelet.service.
Sep 13 00:53:55.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:55.096432 kernel: audit: type=1130 audit(1757724835.092:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:55.099502 systemd[1]: Stopping kubelet.service...
Sep 13 00:53:55.101458 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 00:53:55.101709 systemd[1]: Stopped kubelet.service.
Sep 13 00:53:55.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:55.103939 systemd[1]: Starting kubelet.service...
Sep 13 00:53:55.106224 kernel: audit: type=1131 audit(1757724835.100:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:55.241339 systemd[1]: Started kubelet.service.
Sep 13 00:53:55.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:55.245409 kernel: audit: type=1130 audit(1757724835.240:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:55.298169 kubelet[1755]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:53:55.298169 kubelet[1755]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:53:55.298169 kubelet[1755]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:53:55.298800 kubelet[1755]: I0913 00:53:55.298213    1755 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:53:55.900762 kubelet[1755]: I0913 00:53:55.900711    1755 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 13 00:53:55.900762 kubelet[1755]: I0913 00:53:55.900747    1755 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:53:55.901094 kubelet[1755]: I0913 00:53:55.901076    1755 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 13 00:53:55.930465 kubelet[1755]: E0913 00:53:55.930427    1755 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://161.35.238.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 161.35.238.92:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:53:55.931649 kubelet[1755]: I0913 00:53:55.931617    1755 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:53:55.941905 kubelet[1755]: E0913 00:53:55.941842    1755 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:53:55.941905 kubelet[1755]: I0913 00:53:55.941901    1755 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:53:55.948359 kubelet[1755]: I0913 00:53:55.948300    1755 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:53:55.949426 kubelet[1755]: I0913 00:53:55.949372    1755 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 13 00:53:55.949602 kubelet[1755]: I0913 00:53:55.949558    1755 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:53:55.949789 kubelet[1755]: I0913 00:53:55.949598    1755 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-b7c626372f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Sep 13 00:53:55.949906 kubelet[1755]: I0913 00:53:55.949799    1755 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:53:55.949906 kubelet[1755]: I0913 00:53:55.949809    1755 container_manager_linux.go:300] "Creating device plugin manager"
Sep 13 00:53:55.949977 kubelet[1755]: I0913 00:53:55.949914    1755 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:53:55.957331 kubelet[1755]: I0913 00:53:55.957275    1755 kubelet.go:408] "Attempting to sync node with API server"
Sep 13 00:53:55.957331 kubelet[1755]: I0913 00:53:55.957335    1755 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:53:55.957590 kubelet[1755]: I0913 00:53:55.957380    1755 kubelet.go:314] "Adding apiserver pod source"
Sep 13 00:53:55.957590 kubelet[1755]: I0913 00:53:55.957423    1755 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:53:55.972559 kubelet[1755]: W0913 00:53:55.972505    1755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://161.35.238.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-b7c626372f&limit=500&resourceVersion=0": dial tcp 161.35.238.92:6443: connect: connection refused
Sep 13 00:53:55.972930 kubelet[1755]: E0913 00:53:55.972891    1755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://161.35.238.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-b7c626372f&limit=500&resourceVersion=0\": dial tcp 161.35.238.92:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:53:55.973196 kubelet[1755]: I0913 00:53:55.973176    1755 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 13 00:53:55.973853 kubelet[1755]: I0913 00:53:55.973826    1755 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 00:53:55.979146 kubelet[1755]: W0913 00:53:55.979091    1755 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 13 00:53:55.980978 kubelet[1755]: W0913 00:53:55.980883    1755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://161.35.238.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 161.35.238.92:6443: connect: connection refused
Sep 13 00:53:55.981145 kubelet[1755]: E0913 00:53:55.980979    1755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://161.35.238.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 161.35.238.92:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:53:55.982955 kubelet[1755]: I0913 00:53:55.982914    1755 server.go:1274] "Started kubelet"
Sep 13 00:53:55.992016 kernel: audit: type=1400 audit(1757724835.983:223): avc:  denied  { mac_admin } for  pid=1755 comm="kubelet" capability=33  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Sep 13 00:53:55.992154 kernel: audit: type=1401 audit(1757724835.983:223): op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Sep 13 00:53:55.992181 kernel: audit: type=1300 audit(1757724835.983:223): arch=c000003e syscall=188 success=no exit=-22 a0=c000769d70 a1=c00082aa20 a2=c000769d40 a3=25 items=0 ppid=1 pid=1755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:55.983000 audit[1755]: AVC avc:  denied  { mac_admin } for  pid=1755 comm="kubelet" capability=33  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Sep 13 00:53:55.983000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Sep 13 00:53:55.983000 audit[1755]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000769d70 a1=c00082aa20 a2=c000769d40 a3=25 items=0 ppid=1 pid=1755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:55.992427 kubelet[1755]: I0913 00:53:55.984617    1755 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument"
Sep 13 00:53:55.992427 kubelet[1755]: I0913 00:53:55.984683    1755 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument"
Sep 13 00:53:55.992427 kubelet[1755]: I0913 00:53:55.984783    1755 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:53:55.992427 kubelet[1755]: I0913 00:53:55.991868    1755 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:53:55.993231 kubelet[1755]: I0913 00:53:55.993193    1755 server.go:449] "Adding debug handlers to kubelet server"
Sep 13 00:53:55.983000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Sep 13 00:53:55.996412 kernel: audit: type=1327 audit(1757724835.983:223): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Sep 13 00:53:55.996506 kernel: audit: type=1400 audit(1757724835.983:224): avc:  denied  { mac_admin } for  pid=1755 comm="kubelet" capability=33  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Sep 13 00:53:55.983000 audit[1755]: AVC avc:  denied  { mac_admin } for  pid=1755 comm="kubelet" capability=33  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Sep 13 00:53:55.983000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Sep 13 00:53:55.983000 audit[1755]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000828a60 a1=c00082aa38 a2=c000769e00 a3=25 items=0 ppid=1 pid=1755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:55.983000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Sep 13 00:53:55.987000 audit[1767]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1767 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:55.987000 audit[1767]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd35b3c140 a2=0 a3=7ffd35b3c12c items=0 ppid=1755 pid=1767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:55.987000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Sep 13 00:53:55.988000 audit[1768]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1768 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:55.988000 audit[1768]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc9737e1a0 a2=0 a3=7ffc9737e18c items=0 ppid=1755 pid=1768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:55.988000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Sep 13 00:53:55.999956 kubelet[1755]: I0913 00:53:55.999897    1755 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:53:56.001224 kubelet[1755]: I0913 00:53:56.000209    1755 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:53:56.001224 kubelet[1755]: I0913 00:53:56.000515    1755 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:53:56.002638 kubelet[1755]: I0913 00:53:56.002382    1755 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 13 00:53:56.002795 kubelet[1755]: E0913 00:53:56.002771    1755 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-b7c626372f\" not found"
Sep 13 00:53:56.007760 kubelet[1755]: E0913 00:53:56.005941    1755 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://161.35.238.92:6443/api/v1/namespaces/default/events\": dial tcp 161.35.238.92:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-b7c626372f.1864b16ceda3e4ba  default    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-b7c626372f,UID:ci-3510.3.8-n-b7c626372f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-b7c626372f,},FirstTimestamp:2025-09-13 00:53:55.982861498 +0000 UTC m=+0.733736503,LastTimestamp:2025-09-13 00:53:55.982861498 +0000 UTC m=+0.733736503,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-b7c626372f,}"
Sep 13 00:53:56.008234 kubelet[1755]: I0913 00:53:56.008212    1755 factory.go:221] Registration of the systemd container factory successfully
Sep 13 00:53:56.008483 kubelet[1755]: I0913 00:53:56.008461    1755 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:53:56.009098 kubelet[1755]: E0913 00:53:56.009066    1755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://161.35.238.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-b7c626372f?timeout=10s\": dial tcp 161.35.238.92:6443: connect: connection refused" interval="200ms"
Sep 13 00:53:56.012293 kubelet[1755]: I0913 00:53:56.012272    1755 factory.go:221] Registration of the containerd container factory successfully
Sep 13 00:53:56.014000 audit[1770]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1770 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:56.014000 audit[1770]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd9f4f7580 a2=0 a3=7ffd9f4f756c items=0 ppid=1755 pid=1770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:56.014000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Sep 13 00:53:56.020128 kubelet[1755]: I0913 00:53:56.020094    1755 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 13 00:53:56.020260 kubelet[1755]: I0913 00:53:56.020155    1755 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:53:56.019000 audit[1772]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1772 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:56.019000 audit[1772]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffca43694d0 a2=0 a3=7ffca43694bc items=0 ppid=1755 pid=1772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:56.019000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Sep 13 00:53:56.033728 kubelet[1755]: W0913 00:53:56.033667    1755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://161.35.238.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 161.35.238.92:6443: connect: connection refused
Sep 13 00:53:56.033957 kubelet[1755]: E0913 00:53:56.033923    1755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://161.35.238.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 161.35.238.92:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:53:56.036783 kubelet[1755]: E0913 00:53:56.036749    1755 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:53:56.041630 kubelet[1755]: I0913 00:53:56.041601    1755 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 13 00:53:56.042320 kubelet[1755]: I0913 00:53:56.042292    1755 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 13 00:53:56.042320 kubelet[1755]: I0913 00:53:56.042325    1755 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:53:56.042000 audit[1778]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1778 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:56.042000 audit[1778]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffcb0171360 a2=0 a3=7ffcb017134c items=0 ppid=1755 pid=1778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:56.042000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38
Sep 13 00:53:56.044321 kubelet[1755]: I0913 00:53:56.044304    1755 policy_none.go:49] "None policy: Start"
Sep 13 00:53:56.044576 kubelet[1755]: I0913 00:53:56.044545    1755 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:53:56.044000 audit[1781]: NETFILTER_CFG table=mangle:31 family=2 entries=1 op=nft_register_chain pid=1781 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 00:53:56.044000 audit[1781]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe23fabfa0 a2=0 a3=7ffe23fabf8c items=0 ppid=1755 pid=1781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:56.044000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Sep 13 00:53:56.045000 audit[1780]: NETFILTER_CFG table=mangle:32 family=10 entries=2 op=nft_register_chain pid=1780 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 00:53:56.045000 audit[1780]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd6e342070 a2=0 a3=7ffd6e34205c items=0 ppid=1755 pid=1780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:56.045000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Sep 13 00:53:56.047060 kubelet[1755]: I0913 00:53:56.047042    1755 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 13 00:53:56.047405 kubelet[1755]: I0913 00:53:56.047369    1755 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:53:56.047941 kubelet[1755]: I0913 00:53:56.047222    1755 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Sep 13 00:53:56.048339 kubelet[1755]: I0913 00:53:56.048323 1755 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:53:56.048744 kubelet[1755]: I0913 00:53:56.048697 1755 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:53:56.047000 audit[1783]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1783 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:56.047000 audit[1783]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe53b6b8e0 a2=0 a3=7ffe53b6b8cc items=0 ppid=1755 pid=1783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:56.047000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 13 00:53:56.051883 kubelet[1755]: E0913 00:53:56.051857 1755 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:53:56.051000 audit[1784]: NETFILTER_CFG table=mangle:34 family=10 entries=1 op=nft_register_chain pid=1784 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:56.051000 audit[1784]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe7b840550 a2=0 a3=7ffe7b84053c items=0 ppid=1755 pid=1784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:56.051000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 13 00:53:56.053000 audit[1785]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=1785 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:56.053000 audit[1785]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fff034511a0 a2=0 a3=7fff0345118c items=0 ppid=1755 pid=1785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:56.053000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 13 00:53:56.055302 kubelet[1755]: W0913 00:53:56.053370 1755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://161.35.238.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 161.35.238.92:6443: connect: connection refused Sep 13 00:53:56.055376 kubelet[1755]: E0913 00:53:56.055330 1755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://161.35.238.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 161.35.238.92:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:56.054000 audit[1786]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=1786 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:53:56.054000 audit[1786]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffef11ed270 a2=0 a3=7ffef11ed25c items=0 ppid=1755 pid=1786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:56.054000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 13 00:53:56.056427 kubelet[1755]: 
I0913 00:53:56.056383 1755 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:53:56.055000 audit[1755]: AVC avc: denied { mac_admin } for pid=1755 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:53:56.055000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:53:56.055000 audit[1755]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0008a4960 a1=c000754798 a2=c0008a4930 a3=25 items=0 ppid=1 pid=1755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:56.055000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:53:56.056655 kubelet[1755]: I0913 00:53:56.056474 1755 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 13 00:53:56.056655 kubelet[1755]: I0913 00:53:56.056582 1755 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:53:56.056655 kubelet[1755]: I0913 00:53:56.056592 1755 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:53:56.058973 kubelet[1755]: I0913 00:53:56.058953 1755 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:53:56.061094 kubelet[1755]: E0913 00:53:56.061070 1755 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-b7c626372f\" not found" Sep 13 00:53:56.060000 audit[1787]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1787 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:53:56.060000 audit[1787]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe2dbee9c0 a2=0 a3=7ffe2dbee9ac items=0 ppid=1755 pid=1787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:56.060000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 13 00:53:56.158511 kubelet[1755]: I0913 00:53:56.158411 1755 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-b7c626372f" Sep 13 00:53:56.159165 kubelet[1755]: E0913 00:53:56.159131 1755 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://161.35.238.92:6443/api/v1/nodes\": dial tcp 161.35.238.92:6443: connect: connection refused" node="ci-3510.3.8-n-b7c626372f" Sep 13 00:53:56.210321 kubelet[1755]: E0913 00:53:56.210271 1755 controller.go:145] "Failed 
to ensure lease exists, will retry" err="Get \"https://161.35.238.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-b7c626372f?timeout=10s\": dial tcp 161.35.238.92:6443: connect: connection refused" interval="400ms" Sep 13 00:53:56.221829 kubelet[1755]: I0913 00:53:56.221743 1755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/479bf21babad8b7fd37eb815f26ce363-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-b7c626372f\" (UID: \"479bf21babad8b7fd37eb815f26ce363\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-b7c626372f" Sep 13 00:53:56.222006 kubelet[1755]: I0913 00:53:56.221933 1755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/107e98a924cfa2ef434487cb3ebb3013-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-b7c626372f\" (UID: \"107e98a924cfa2ef434487cb3ebb3013\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-b7c626372f" Sep 13 00:53:56.222006 kubelet[1755]: I0913 00:53:56.221958 1755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c146bb97688a16e1d6fb79b63958cb0b-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-b7c626372f\" (UID: \"c146bb97688a16e1d6fb79b63958cb0b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-b7c626372f" Sep 13 00:53:56.222006 kubelet[1755]: I0913 00:53:56.221974 1755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c146bb97688a16e1d6fb79b63958cb0b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-b7c626372f\" (UID: \"c146bb97688a16e1d6fb79b63958cb0b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-b7c626372f" Sep 13 00:53:56.222110 kubelet[1755]: I0913 
00:53:56.222014 1755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/479bf21babad8b7fd37eb815f26ce363-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-b7c626372f\" (UID: \"479bf21babad8b7fd37eb815f26ce363\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-b7c626372f" Sep 13 00:53:56.222110 kubelet[1755]: I0913 00:53:56.222029 1755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/479bf21babad8b7fd37eb815f26ce363-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-b7c626372f\" (UID: \"479bf21babad8b7fd37eb815f26ce363\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-b7c626372f" Sep 13 00:53:56.222110 kubelet[1755]: I0913 00:53:56.222044 1755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c146bb97688a16e1d6fb79b63958cb0b-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-b7c626372f\" (UID: \"c146bb97688a16e1d6fb79b63958cb0b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-b7c626372f" Sep 13 00:53:56.222110 kubelet[1755]: I0913 00:53:56.222085 1755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/479bf21babad8b7fd37eb815f26ce363-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-b7c626372f\" (UID: \"479bf21babad8b7fd37eb815f26ce363\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-b7c626372f" Sep 13 00:53:56.222110 kubelet[1755]: I0913 00:53:56.222101 1755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/479bf21babad8b7fd37eb815f26ce363-k8s-certs\") pod 
\"kube-controller-manager-ci-3510.3.8-n-b7c626372f\" (UID: \"479bf21babad8b7fd37eb815f26ce363\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-b7c626372f" Sep 13 00:53:56.361216 kubelet[1755]: I0913 00:53:56.361174 1755 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-b7c626372f" Sep 13 00:53:56.362249 kubelet[1755]: E0913 00:53:56.362211 1755 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://161.35.238.92:6443/api/v1/nodes\": dial tcp 161.35.238.92:6443: connect: connection refused" node="ci-3510.3.8-n-b7c626372f" Sep 13 00:53:56.464889 kubelet[1755]: E0913 00:53:56.464718 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:53:56.465194 kubelet[1755]: E0913 00:53:56.464715 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:53:56.466116 env[1311]: time="2025-09-13T00:53:56.466075101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-b7c626372f,Uid:c146bb97688a16e1d6fb79b63958cb0b,Namespace:kube-system,Attempt:0,}" Sep 13 00:53:56.466822 env[1311]: time="2025-09-13T00:53:56.466093826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-b7c626372f,Uid:107e98a924cfa2ef434487cb3ebb3013,Namespace:kube-system,Attempt:0,}" Sep 13 00:53:56.468269 kubelet[1755]: E0913 00:53:56.468244 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:53:56.469619 env[1311]: time="2025-09-13T00:53:56.469586270Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-b7c626372f,Uid:479bf21babad8b7fd37eb815f26ce363,Namespace:kube-system,Attempt:0,}" Sep 13 00:53:56.611998 kubelet[1755]: E0913 00:53:56.611944 1755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://161.35.238.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-b7c626372f?timeout=10s\": dial tcp 161.35.238.92:6443: connect: connection refused" interval="800ms" Sep 13 00:53:56.764492 kubelet[1755]: I0913 00:53:56.764382 1755 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-b7c626372f" Sep 13 00:53:56.765223 kubelet[1755]: E0913 00:53:56.765195 1755 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://161.35.238.92:6443/api/v1/nodes\": dial tcp 161.35.238.92:6443: connect: connection refused" node="ci-3510.3.8-n-b7c626372f" Sep 13 00:53:56.938343 kubelet[1755]: W0913 00:53:56.938239 1755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://161.35.238.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-b7c626372f&limit=500&resourceVersion=0": dial tcp 161.35.238.92:6443: connect: connection refused Sep 13 00:53:56.938343 kubelet[1755]: E0913 00:53:56.938306 1755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://161.35.238.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-b7c626372f&limit=500&resourceVersion=0\": dial tcp 161.35.238.92:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:56.987768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1680141678.mount: Deactivated successfully. 
Sep 13 00:53:56.992291 env[1311]: time="2025-09-13T00:53:56.992243120Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:56.994412 env[1311]: time="2025-09-13T00:53:56.994348387Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:56.999825 env[1311]: time="2025-09-13T00:53:56.999773173Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:57.003785 env[1311]: time="2025-09-13T00:53:57.003731585Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:57.006918 env[1311]: time="2025-09-13T00:53:57.006860504Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:57.010309 env[1311]: time="2025-09-13T00:53:57.010257154Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:57.010940 env[1311]: time="2025-09-13T00:53:57.010904812Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:57.011742 env[1311]: time="2025-09-13T00:53:57.011693841Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 13 00:53:57.012481 env[1311]: time="2025-09-13T00:53:57.012442713Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:57.013338 env[1311]: time="2025-09-13T00:53:57.013288588Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:57.013984 env[1311]: time="2025-09-13T00:53:57.013957406Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:57.014939 env[1311]: time="2025-09-13T00:53:57.014525885Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:57.046959 env[1311]: time="2025-09-13T00:53:57.046871395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:53:57.050198 env[1311]: time="2025-09-13T00:53:57.046925348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:53:57.050198 env[1311]: time="2025-09-13T00:53:57.046936540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:53:57.050668 env[1311]: time="2025-09-13T00:53:57.050614833Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ed790882603faa9a2123619e2b43cec87c0017b362d2a212d81d2571a1390d4 pid=1800 runtime=io.containerd.runc.v2 Sep 13 00:53:57.052205 env[1311]: time="2025-09-13T00:53:57.052037687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:53:57.052205 env[1311]: time="2025-09-13T00:53:57.052084956Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:53:57.052205 env[1311]: time="2025-09-13T00:53:57.052095992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:53:57.052446 env[1311]: time="2025-09-13T00:53:57.052222820Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3b86eb37a5e79c53300efb4f4d1ec241c72256c1e6e406e8cbb27494e0e4fa1 pid=1820 runtime=io.containerd.runc.v2 Sep 13 00:53:57.056860 env[1311]: time="2025-09-13T00:53:57.056741831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:53:57.056860 env[1311]: time="2025-09-13T00:53:57.056788338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:53:57.056860 env[1311]: time="2025-09-13T00:53:57.056799439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:53:57.058602 env[1311]: time="2025-09-13T00:53:57.058536741Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d837163cb4b79f852d1fc2a755d2399f44e76f0e3cff6ffd3583542f96a8fdca pid=1803 runtime=io.containerd.runc.v2 Sep 13 00:53:57.064405 kubelet[1755]: W0913 00:53:57.064329 1755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://161.35.238.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 161.35.238.92:6443: connect: connection refused Sep 13 00:53:57.064405 kubelet[1755]: E0913 00:53:57.064406 1755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://161.35.238.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 161.35.238.92:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:57.162904 env[1311]: time="2025-09-13T00:53:57.162833725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-b7c626372f,Uid:c146bb97688a16e1d6fb79b63958cb0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3b86eb37a5e79c53300efb4f4d1ec241c72256c1e6e406e8cbb27494e0e4fa1\"" Sep 13 00:53:57.166890 kubelet[1755]: E0913 00:53:57.166854 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:53:57.172787 env[1311]: time="2025-09-13T00:53:57.172744079Z" level=info msg="CreateContainer within sandbox \"d3b86eb37a5e79c53300efb4f4d1ec241c72256c1e6e406e8cbb27494e0e4fa1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:53:57.182850 env[1311]: time="2025-09-13T00:53:57.182799714Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-b7c626372f,Uid:479bf21babad8b7fd37eb815f26ce363,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ed790882603faa9a2123619e2b43cec87c0017b362d2a212d81d2571a1390d4\"" Sep 13 00:53:57.183928 kubelet[1755]: E0913 00:53:57.183896 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:53:57.185802 env[1311]: time="2025-09-13T00:53:57.185739804Z" level=info msg="CreateContainer within sandbox \"7ed790882603faa9a2123619e2b43cec87c0017b362d2a212d81d2571a1390d4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:53:57.198487 env[1311]: time="2025-09-13T00:53:57.198253017Z" level=info msg="CreateContainer within sandbox \"d3b86eb37a5e79c53300efb4f4d1ec241c72256c1e6e406e8cbb27494e0e4fa1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"57d0501a8c645d1686fc22c9f94c977eecc01ee21ce41a4526f6568d70389d94\"" Sep 13 00:53:57.199536 env[1311]: time="2025-09-13T00:53:57.199495329Z" level=info msg="StartContainer for \"57d0501a8c645d1686fc22c9f94c977eecc01ee21ce41a4526f6568d70389d94\"" Sep 13 00:53:57.201454 env[1311]: time="2025-09-13T00:53:57.201389764Z" level=info msg="CreateContainer within sandbox \"7ed790882603faa9a2123619e2b43cec87c0017b362d2a212d81d2571a1390d4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1763ec25fecae9dd678dacfd61c75d04b0d7b8c8f5323276f58b1028572bedc8\"" Sep 13 00:53:57.202355 env[1311]: time="2025-09-13T00:53:57.202304901Z" level=info msg="StartContainer for \"1763ec25fecae9dd678dacfd61c75d04b0d7b8c8f5323276f58b1028572bedc8\"" Sep 13 00:53:57.213184 env[1311]: time="2025-09-13T00:53:57.213134878Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-b7c626372f,Uid:107e98a924cfa2ef434487cb3ebb3013,Namespace:kube-system,Attempt:0,} returns sandbox id \"d837163cb4b79f852d1fc2a755d2399f44e76f0e3cff6ffd3583542f96a8fdca\"" Sep 13 00:53:57.213967 kubelet[1755]: E0913 00:53:57.213940 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:53:57.215861 env[1311]: time="2025-09-13T00:53:57.215821669Z" level=info msg="CreateContainer within sandbox \"d837163cb4b79f852d1fc2a755d2399f44e76f0e3cff6ffd3583542f96a8fdca\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:53:57.225639 env[1311]: time="2025-09-13T00:53:57.225575099Z" level=info msg="CreateContainer within sandbox \"d837163cb4b79f852d1fc2a755d2399f44e76f0e3cff6ffd3583542f96a8fdca\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f092ff0315135a081babf1519963456988dd1ce2a4c72297bfa898945f0df580\"" Sep 13 00:53:57.226227 env[1311]: time="2025-09-13T00:53:57.226198875Z" level=info msg="StartContainer for \"f092ff0315135a081babf1519963456988dd1ce2a4c72297bfa898945f0df580\"" Sep 13 00:53:57.259032 kubelet[1755]: W0913 00:53:57.258939 1755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://161.35.238.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 161.35.238.92:6443: connect: connection refused Sep 13 00:53:57.259032 kubelet[1755]: E0913 00:53:57.258994 1755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://161.35.238.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 161.35.238.92:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:57.272378 kubelet[1755]: W0913 
00:53:57.271561 1755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://161.35.238.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 161.35.238.92:6443: connect: connection refused Sep 13 00:53:57.272378 kubelet[1755]: E0913 00:53:57.271607 1755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://161.35.238.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 161.35.238.92:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:53:57.324638 env[1311]: time="2025-09-13T00:53:57.324588201Z" level=info msg="StartContainer for \"57d0501a8c645d1686fc22c9f94c977eecc01ee21ce41a4526f6568d70389d94\" returns successfully" Sep 13 00:53:57.337541 env[1311]: time="2025-09-13T00:53:57.337490908Z" level=info msg="StartContainer for \"1763ec25fecae9dd678dacfd61c75d04b0d7b8c8f5323276f58b1028572bedc8\" returns successfully" Sep 13 00:53:57.367646 env[1311]: time="2025-09-13T00:53:57.366187508Z" level=info msg="StartContainer for \"f092ff0315135a081babf1519963456988dd1ce2a4c72297bfa898945f0df580\" returns successfully" Sep 13 00:53:57.413329 kubelet[1755]: E0913 00:53:57.413267 1755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://161.35.238.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-b7c626372f?timeout=10s\": dial tcp 161.35.238.92:6443: connect: connection refused" interval="1.6s" Sep 13 00:53:57.566536 kubelet[1755]: I0913 00:53:57.566434 1755 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-b7c626372f" Sep 13 00:53:57.566890 kubelet[1755]: E0913 00:53:57.566764 1755 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://161.35.238.92:6443/api/v1/nodes\": dial tcp 161.35.238.92:6443: connect: connection refused" 
node="ci-3510.3.8-n-b7c626372f" Sep 13 00:53:58.065882 kubelet[1755]: E0913 00:53:58.065813 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:53:58.068260 kubelet[1755]: E0913 00:53:58.068228 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:53:58.070174 kubelet[1755]: E0913 00:53:58.070144 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:53:59.073977 kubelet[1755]: E0913 00:53:59.073943 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:53:59.168281 kubelet[1755]: I0913 00:53:59.168249 1755 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-b7c626372f" Sep 13 00:53:59.220982 kubelet[1755]: E0913 00:53:59.220937 1755 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-b7c626372f\" not found" node="ci-3510.3.8-n-b7c626372f" Sep 13 00:53:59.295952 kubelet[1755]: I0913 00:53:59.295916 1755 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-n-b7c626372f" Sep 13 00:53:59.983215 kubelet[1755]: I0913 00:53:59.983162 1755 apiserver.go:52] "Watching apiserver" Sep 13 00:54:00.021384 kubelet[1755]: I0913 00:54:00.021321 1755 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:54:00.533145 kubelet[1755]: W0913 00:54:00.533095 1755 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising 
behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:54:00.533700 kubelet[1755]: E0913 00:54:00.533492 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:00.960210 kubelet[1755]: W0913 00:54:00.960167 1755 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:54:00.960945 kubelet[1755]: E0913 00:54:00.960811 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:01.076241 kubelet[1755]: E0913 00:54:01.076191 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:01.076626 kubelet[1755]: E0913 00:54:01.076597 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:01.706523 systemd[1]: Reloading. Sep 13 00:54:01.817620 /usr/lib/systemd/system-generators/torcx-generator[2044]: time="2025-09-13T00:54:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:54:01.817649 /usr/lib/systemd/system-generators/torcx-generator[2044]: time="2025-09-13T00:54:01Z" level=info msg="torcx already run" Sep 13 00:54:01.962382 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. 
Support for CPUShares= will be removed soon. Sep 13 00:54:01.963040 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:54:01.993241 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:54:02.131125 systemd[1]: Stopping kubelet.service... Sep 13 00:54:02.150533 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:54:02.151367 systemd[1]: Stopped kubelet.service. Sep 13 00:54:02.155608 kernel: kauditd_printk_skb: 43 callbacks suppressed Sep 13 00:54:02.155785 kernel: audit: type=1131 audit(1757724842.150:238): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:02.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:02.161950 systemd[1]: Starting kubelet.service... Sep 13 00:54:03.297811 kernel: audit: type=1130 audit(1757724843.290:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:03.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:03.288795 systemd[1]: Started kubelet.service. 
Sep 13 00:54:03.399968 kubelet[2107]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:54:03.400561 kubelet[2107]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:54:03.400657 kubelet[2107]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:54:03.406662 kubelet[2107]: I0913 00:54:03.403526 2107 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:54:03.436656 kubelet[2107]: I0913 00:54:03.436610 2107 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:54:03.436880 kubelet[2107]: I0913 00:54:03.436863 2107 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:54:03.440716 kubelet[2107]: I0913 00:54:03.440517 2107 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:54:03.443560 kubelet[2107]: I0913 00:54:03.442740 2107 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 13 00:54:03.474823 kubelet[2107]: I0913 00:54:03.474771 2107 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:54:03.487273 kubelet[2107]: E0913 00:54:03.487224 2107 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:54:03.487523 kubelet[2107]: I0913 00:54:03.487507 2107 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:54:03.491289 kubelet[2107]: I0913 00:54:03.491255 2107 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:54:03.492080 kubelet[2107]: I0913 00:54:03.492062 2107 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:54:03.492362 kubelet[2107]: I0913 00:54:03.492325 2107 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:54:03.492664 kubelet[2107]: I0913 00:54:03.492456 2107 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-3510.3.8-n-b7c626372f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 13 00:54:03.492863 kubelet[2107]: I0913 00:54:03.492848 2107 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:54:03.492944 kubelet[2107]: I0913 00:54:03.492934 2107 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:54:03.493053 kubelet[2107]: I0913 00:54:03.493043 2107 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:54:03.493259 kubelet[2107]: I0913 00:54:03.493248 2107 
kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:54:03.493348 kubelet[2107]: I0913 00:54:03.493336 2107 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:54:03.493492 kubelet[2107]: I0913 00:54:03.493481 2107 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:54:03.493604 kubelet[2107]: I0913 00:54:03.493593 2107 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:54:03.500724 kubelet[2107]: I0913 00:54:03.500694 2107 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:54:03.511975 kubelet[2107]: I0913 00:54:03.511950 2107 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:54:03.517492 kubelet[2107]: I0913 00:54:03.517462 2107 server.go:1274] "Started kubelet" Sep 13 00:54:03.524500 kubelet[2107]: I0913 00:54:03.524451 2107 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:54:03.525000 audit[2107]: AVC avc: denied { mac_admin } for pid=2107 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:03.525000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:54:03.531371 kernel: audit: type=1400 audit(1757724843.525:240): avc: denied { mac_admin } for pid=2107 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:03.531508 kernel: audit: type=1401 audit(1757724843.525:240): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:54:03.525000 audit[2107]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c36d80 a1=c000bc8090 a2=c000c36d50 a3=25 items=0 ppid=1 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:03.534841 kubelet[2107]: I0913 00:54:03.534804 2107 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:54:03.535204 kubelet[2107]: I0913 00:54:03.535188 2107 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:54:03.535413 kubelet[2107]: I0913 00:54:03.529648 2107 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 13 00:54:03.535566 kubelet[2107]: I0913 00:54:03.535550 2107 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 13 00:54:03.535650 kubelet[2107]: I0913 00:54:03.535640 2107 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:54:03.537422 kernel: audit: type=1300 audit(1757724843.525:240): arch=c000003e syscall=188 success=no exit=-22 a0=c000c36d80 a1=c000bc8090 a2=c000c36d50 a3=25 items=0 ppid=1 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:03.540162 kubelet[2107]: I0913 00:54:03.540125 2107 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:54:03.525000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:54:03.542477 kubelet[2107]: I0913 
00:54:03.542450 2107 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:54:03.545662 kernel: audit: type=1327 audit(1757724843.525:240): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:54:03.545867 kubelet[2107]: I0913 00:54:03.545846 2107 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:54:03.534000 audit[2107]: AVC avc: denied { mac_admin } for pid=2107 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:03.554160 kubelet[2107]: I0913 00:54:03.554134 2107 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:54:03.554436 kernel: audit: type=1400 audit(1757724843.534:241): avc: denied { mac_admin } for pid=2107 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:03.554661 kubelet[2107]: I0913 00:54:03.554647 2107 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:54:03.534000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:54:03.560477 kernel: audit: type=1401 audit(1757724843.534:241): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:54:03.534000 audit[2107]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b5dfe0 a1=c00052bf50 a2=c00070d620 a3=25 items=0 ppid=1 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:03.564482 
kernel: audit: type=1300 audit(1757724843.534:241): arch=c000003e syscall=188 success=no exit=-22 a0=c000b5dfe0 a1=c00052bf50 a2=c00070d620 a3=25 items=0 ppid=1 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:03.569559 kernel: audit: type=1327 audit(1757724843.534:241): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:54:03.534000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:54:03.573166 kubelet[2107]: I0913 00:54:03.573123 2107 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:54:03.574805 kubelet[2107]: I0913 00:54:03.574772 2107 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:54:03.574982 kubelet[2107]: I0913 00:54:03.574969 2107 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:54:03.575081 kubelet[2107]: I0913 00:54:03.575070 2107 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:54:03.575211 kubelet[2107]: E0913 00:54:03.575191 2107 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:54:03.589442 kubelet[2107]: E0913 00:54:03.589381 2107 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:54:03.593865 kubelet[2107]: I0913 00:54:03.593831 2107 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:54:03.596072 kubelet[2107]: I0913 00:54:03.596046 2107 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:54:03.596237 kubelet[2107]: I0913 00:54:03.596225 2107 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:54:03.676314 kubelet[2107]: E0913 00:54:03.676263 2107 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:54:03.682740 kubelet[2107]: I0913 00:54:03.682715 2107 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:54:03.682966 kubelet[2107]: I0913 00:54:03.682951 2107 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:54:03.683063 kubelet[2107]: I0913 00:54:03.683053 2107 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:54:03.683321 kubelet[2107]: I0913 00:54:03.683306 2107 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:54:03.683467 kubelet[2107]: I0913 00:54:03.683439 2107 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:54:03.683546 kubelet[2107]: I0913 00:54:03.683537 2107 policy_none.go:49] "None policy: Start" Sep 13 00:54:03.684457 kubelet[2107]: I0913 00:54:03.684384 2107 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:54:03.684577 kubelet[2107]: I0913 00:54:03.684567 2107 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:54:03.684820 kubelet[2107]: I0913 00:54:03.684809 2107 state_mem.go:75] "Updated machine memory state" Sep 13 00:54:03.686290 kubelet[2107]: I0913 00:54:03.686267 2107 manager.go:513] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:54:03.684000 audit[2107]: AVC avc: denied { mac_admin } for pid=2107 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:03.684000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:54:03.684000 audit[2107]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000a16a50 a1=c000ff9428 a2=c000a16a20 a3=25 items=0 ppid=1 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:03.684000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:54:03.686823 kubelet[2107]: I0913 00:54:03.686803 2107 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 13 00:54:03.687090 kubelet[2107]: I0913 00:54:03.687035 2107 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:54:03.687219 kubelet[2107]: I0913 00:54:03.687176 2107 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:54:03.689663 kubelet[2107]: I0913 00:54:03.689647 2107 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:54:03.791218 kubelet[2107]: I0913 00:54:03.791062 2107 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:03.806457 kubelet[2107]: I0913 00:54:03.806318 2107 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:03.806457 kubelet[2107]: I0913 00:54:03.806423 2107 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:03.887063 kubelet[2107]: W0913 00:54:03.887022 2107 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:54:03.887817 kubelet[2107]: W0913 00:54:03.887777 2107 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:54:03.888076 kubelet[2107]: E0913 00:54:03.888051 2107 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.8-n-b7c626372f\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-b7c626372f" Sep 13 00:54:03.888767 kubelet[2107]: W0913 00:54:03.888717 2107 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:54:03.889161 kubelet[2107]: 
E0913 00:54:03.888989 2107 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.8-n-b7c626372f\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-b7c626372f" Sep 13 00:54:03.956745 kubelet[2107]: I0913 00:54:03.956657 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/479bf21babad8b7fd37eb815f26ce363-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-b7c626372f\" (UID: \"479bf21babad8b7fd37eb815f26ce363\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-b7c626372f" Sep 13 00:54:03.957339 kubelet[2107]: I0913 00:54:03.957306 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/479bf21babad8b7fd37eb815f26ce363-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-b7c626372f\" (UID: \"479bf21babad8b7fd37eb815f26ce363\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-b7c626372f" Sep 13 00:54:03.957546 kubelet[2107]: I0913 00:54:03.957517 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/107e98a924cfa2ef434487cb3ebb3013-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-b7c626372f\" (UID: \"107e98a924cfa2ef434487cb3ebb3013\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-b7c626372f" Sep 13 00:54:03.957731 kubelet[2107]: I0913 00:54:03.957710 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c146bb97688a16e1d6fb79b63958cb0b-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-b7c626372f\" (UID: \"c146bb97688a16e1d6fb79b63958cb0b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-b7c626372f" Sep 13 00:54:03.957869 kubelet[2107]: I0913 
00:54:03.957849 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c146bb97688a16e1d6fb79b63958cb0b-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-b7c626372f\" (UID: \"c146bb97688a16e1d6fb79b63958cb0b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-b7c626372f" Sep 13 00:54:03.957998 kubelet[2107]: I0913 00:54:03.957977 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/479bf21babad8b7fd37eb815f26ce363-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-b7c626372f\" (UID: \"479bf21babad8b7fd37eb815f26ce363\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-b7c626372f" Sep 13 00:54:03.958132 kubelet[2107]: I0913 00:54:03.958114 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/479bf21babad8b7fd37eb815f26ce363-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-b7c626372f\" (UID: \"479bf21babad8b7fd37eb815f26ce363\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-b7c626372f" Sep 13 00:54:03.958262 kubelet[2107]: I0913 00:54:03.958242 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c146bb97688a16e1d6fb79b63958cb0b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-b7c626372f\" (UID: \"c146bb97688a16e1d6fb79b63958cb0b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-b7c626372f" Sep 13 00:54:03.958467 kubelet[2107]: I0913 00:54:03.958386 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/479bf21babad8b7fd37eb815f26ce363-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-b7c626372f\" (UID: 
\"479bf21babad8b7fd37eb815f26ce363\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-b7c626372f" Sep 13 00:54:04.188028 kubelet[2107]: E0913 00:54:04.187985 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:04.188742 kubelet[2107]: E0913 00:54:04.188663 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:04.190608 kubelet[2107]: E0913 00:54:04.190051 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:04.494801 kubelet[2107]: I0913 00:54:04.494685 2107 apiserver.go:52] "Watching apiserver" Sep 13 00:54:04.554796 kubelet[2107]: I0913 00:54:04.554746 2107 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:54:04.639236 kubelet[2107]: E0913 00:54:04.639192 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:04.640046 kubelet[2107]: E0913 00:54:04.639991 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:04.640276 kubelet[2107]: E0913 00:54:04.640247 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:04.725168 kubelet[2107]: I0913 00:54:04.725089 2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-ci-3510.3.8-n-b7c626372f" podStartSLOduration=1.7250681380000001 podStartE2EDuration="1.725068138s" podCreationTimestamp="2025-09-13 00:54:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:54:04.703430019 +0000 UTC m=+1.387568503" watchObservedRunningTime="2025-09-13 00:54:04.725068138 +0000 UTC m=+1.409206614" Sep 13 00:54:04.725403 kubelet[2107]: I0913 00:54:04.725221 2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-b7c626372f" podStartSLOduration=4.725214759 podStartE2EDuration="4.725214759s" podCreationTimestamp="2025-09-13 00:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:54:04.723885849 +0000 UTC m=+1.408024332" watchObservedRunningTime="2025-09-13 00:54:04.725214759 +0000 UTC m=+1.409353243" Sep 13 00:54:05.641692 kubelet[2107]: E0913 00:54:05.641645 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:05.642456 kubelet[2107]: E0913 00:54:05.642429 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:05.969478 kubelet[2107]: I0913 00:54:05.969122 2107 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:54:05.969626 env[1311]: time="2025-09-13T00:54:05.969584918Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 13 00:54:05.969954 kubelet[2107]: I0913 00:54:05.969825 2107 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:54:06.845204 kubelet[2107]: I0913 00:54:06.845143 2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-b7c626372f" podStartSLOduration=6.845122607 podStartE2EDuration="6.845122607s" podCreationTimestamp="2025-09-13 00:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:54:04.743796525 +0000 UTC m=+1.427935012" watchObservedRunningTime="2025-09-13 00:54:06.845122607 +0000 UTC m=+3.529261092" Sep 13 00:54:06.875759 kubelet[2107]: I0913 00:54:06.875708 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43ee1b2f-afd4-4d12-b53c-c5c2fe7ea46a-xtables-lock\") pod \"kube-proxy-tts7m\" (UID: \"43ee1b2f-afd4-4d12-b53c-c5c2fe7ea46a\") " pod="kube-system/kube-proxy-tts7m" Sep 13 00:54:06.875759 kubelet[2107]: I0913 00:54:06.875754 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/43ee1b2f-afd4-4d12-b53c-c5c2fe7ea46a-kube-proxy\") pod \"kube-proxy-tts7m\" (UID: \"43ee1b2f-afd4-4d12-b53c-c5c2fe7ea46a\") " pod="kube-system/kube-proxy-tts7m" Sep 13 00:54:06.875759 kubelet[2107]: I0913 00:54:06.875772 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43ee1b2f-afd4-4d12-b53c-c5c2fe7ea46a-lib-modules\") pod \"kube-proxy-tts7m\" (UID: \"43ee1b2f-afd4-4d12-b53c-c5c2fe7ea46a\") " pod="kube-system/kube-proxy-tts7m" Sep 13 00:54:06.876018 kubelet[2107]: I0913 00:54:06.875789 2107 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncksj\" (UniqueName: \"kubernetes.io/projected/43ee1b2f-afd4-4d12-b53c-c5c2fe7ea46a-kube-api-access-ncksj\") pod \"kube-proxy-tts7m\" (UID: \"43ee1b2f-afd4-4d12-b53c-c5c2fe7ea46a\") " pod="kube-system/kube-proxy-tts7m" Sep 13 00:54:06.986871 kubelet[2107]: I0913 00:54:06.986830 2107 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 13 00:54:07.077087 kubelet[2107]: I0913 00:54:07.077039 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/82c014b4-223d-42e5-b43c-9931c8854cb9-var-lib-calico\") pod \"tigera-operator-58fc44c59b-5ms8b\" (UID: \"82c014b4-223d-42e5-b43c-9931c8854cb9\") " pod="tigera-operator/tigera-operator-58fc44c59b-5ms8b" Sep 13 00:54:07.077363 kubelet[2107]: I0913 00:54:07.077342 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnv49\" (UniqueName: \"kubernetes.io/projected/82c014b4-223d-42e5-b43c-9931c8854cb9-kube-api-access-gnv49\") pod \"tigera-operator-58fc44c59b-5ms8b\" (UID: \"82c014b4-223d-42e5-b43c-9931c8854cb9\") " pod="tigera-operator/tigera-operator-58fc44c59b-5ms8b" Sep 13 00:54:07.150779 kubelet[2107]: E0913 00:54:07.148383 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:07.151024 env[1311]: time="2025-09-13T00:54:07.149988237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tts7m,Uid:43ee1b2f-afd4-4d12-b53c-c5c2fe7ea46a,Namespace:kube-system,Attempt:0,}" Sep 13 00:54:07.216384 env[1311]: time="2025-09-13T00:54:07.216243681Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:07.216770 env[1311]: time="2025-09-13T00:54:07.216722517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:07.216918 env[1311]: time="2025-09-13T00:54:07.216888528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:07.217334 env[1311]: time="2025-09-13T00:54:07.217270858Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8c62b3f5b62e7521cf285da6e8e9a68c9ed38b6b4980f94f8c730c5432e5519e pid=2155 runtime=io.containerd.runc.v2 Sep 13 00:54:07.286881 env[1311]: time="2025-09-13T00:54:07.286565702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tts7m,Uid:43ee1b2f-afd4-4d12-b53c-c5c2fe7ea46a,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c62b3f5b62e7521cf285da6e8e9a68c9ed38b6b4980f94f8c730c5432e5519e\"" Sep 13 00:54:07.288283 kubelet[2107]: E0913 00:54:07.288029 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:07.293754 env[1311]: time="2025-09-13T00:54:07.293383939Z" level=info msg="CreateContainer within sandbox \"8c62b3f5b62e7521cf285da6e8e9a68c9ed38b6b4980f94f8c730c5432e5519e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:54:07.308957 env[1311]: time="2025-09-13T00:54:07.308771309Z" level=info msg="CreateContainer within sandbox \"8c62b3f5b62e7521cf285da6e8e9a68c9ed38b6b4980f94f8c730c5432e5519e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a66963d07d169a6bb69a188ed37410f7d16ca307323e15785b3d779e37856622\"" Sep 13 00:54:07.312096 env[1311]: 
time="2025-09-13T00:54:07.312015450Z" level=info msg="StartContainer for \"a66963d07d169a6bb69a188ed37410f7d16ca307323e15785b3d779e37856622\"" Sep 13 00:54:07.364528 env[1311]: time="2025-09-13T00:54:07.362418123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-5ms8b,Uid:82c014b4-223d-42e5-b43c-9931c8854cb9,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:54:07.393192 env[1311]: time="2025-09-13T00:54:07.393070706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:07.393492 env[1311]: time="2025-09-13T00:54:07.393458528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:07.393617 env[1311]: time="2025-09-13T00:54:07.393593029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:07.394272 env[1311]: time="2025-09-13T00:54:07.394210861Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/089526ba0a1700686ef4ef04e43771c38c04a9f525c8f9aff39848792ab244c6 pid=2223 runtime=io.containerd.runc.v2 Sep 13 00:54:07.407069 env[1311]: time="2025-09-13T00:54:07.406235069Z" level=info msg="StartContainer for \"a66963d07d169a6bb69a188ed37410f7d16ca307323e15785b3d779e37856622\" returns successfully" Sep 13 00:54:07.513545 env[1311]: time="2025-09-13T00:54:07.513473444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-5ms8b,Uid:82c014b4-223d-42e5-b43c-9931c8854cb9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"089526ba0a1700686ef4ef04e43771c38c04a9f525c8f9aff39848792ab244c6\"" Sep 13 00:54:07.516096 env[1311]: time="2025-09-13T00:54:07.516058454Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:54:07.646975 kubelet[2107]: E0913 
00:54:07.646928 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:07.666857 kernel: kauditd_printk_skb: 4 callbacks suppressed Sep 13 00:54:07.667012 kernel: audit: type=1325 audit(1757724847.663:243): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2300 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:07.663000 audit[2300]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2300 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:07.667227 kubelet[2107]: E0913 00:54:07.665054 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:07.663000 audit[2300]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd701b79c0 a2=0 a3=7ffd701b79ac items=0 ppid=2209 pid=2300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.673427 kernel: audit: type=1300 audit(1757724847.663:243): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd701b79c0 a2=0 a3=7ffd701b79ac items=0 ppid=2209 pid=2300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.663000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:54:07.676421 kernel: audit: type=1327 audit(1757724847.663:243): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:54:07.675000 
audit[2301]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2301 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:07.679433 kernel: audit: type=1325 audit(1757724847.675:244): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2301 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:07.675000 audit[2301]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe78f099c0 a2=0 a3=7ffe78f099ac items=0 ppid=2209 pid=2301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.685428 kernel: audit: type=1300 audit(1757724847.675:244): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe78f099c0 a2=0 a3=7ffe78f099ac items=0 ppid=2209 pid=2301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.687789 kubelet[2107]: I0913 00:54:07.687729 2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tts7m" podStartSLOduration=1.6877106670000002 podStartE2EDuration="1.687710667s" podCreationTimestamp="2025-09-13 00:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:54:07.660144106 +0000 UTC m=+4.344282591" watchObservedRunningTime="2025-09-13 00:54:07.687710667 +0000 UTC m=+4.371849147" Sep 13 00:54:07.675000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:54:07.692436 kernel: audit: type=1327 audit(1757724847.675:244): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 
13 00:54:07.696000 audit[2302]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=2302 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:07.696000 audit[2302]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff53ad5af0 a2=0 a3=7fff53ad5adc items=0 ppid=2209 pid=2302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.704468 kernel: audit: type=1325 audit(1757724847.696:245): table=nat:40 family=10 entries=1 op=nft_register_chain pid=2302 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:07.704632 kernel: audit: type=1300 audit(1757724847.696:245): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff53ad5af0 a2=0 a3=7fff53ad5adc items=0 ppid=2209 pid=2302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.704687 kernel: audit: type=1327 audit(1757724847.696:245): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 00:54:07.696000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 00:54:07.699000 audit[2305]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=2305 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:07.699000 audit[2305]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdec6607b0 a2=0 a3=7ffdec66079c items=0 ppid=2209 pid=2305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.712550 kernel: audit: 
type=1325 audit(1757724847.699:246): table=filter:41 family=10 entries=1 op=nft_register_chain pid=2305 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:07.699000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 13 00:54:07.706000 audit[2303]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=2303 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:07.706000 audit[2303]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9d0d5070 a2=0 a3=7fff9d0d505c items=0 ppid=2209 pid=2303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.706000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 00:54:07.708000 audit[2306]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2306 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:07.708000 audit[2306]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff6ff93ea0 a2=0 a3=7fff6ff93e8c items=0 ppid=2209 pid=2306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.708000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 13 00:54:07.785000 audit[2307]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2307 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:07.785000 audit[2307]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd77e79e20 a2=0 a3=7ffd77e79e0c items=0 ppid=2209 pid=2307 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.785000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 13 00:54:07.791000 audit[2309]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2309 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:07.791000 audit[2309]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc114c2bf0 a2=0 a3=7ffc114c2bdc items=0 ppid=2209 pid=2309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.791000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Sep 13 00:54:07.797000 audit[2312]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2312 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:07.797000 audit[2312]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcfe56e2d0 a2=0 a3=7ffcfe56e2bc items=0 ppid=2209 pid=2312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.797000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Sep 13 00:54:07.798000 audit[2313]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2313 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:07.798000 audit[2313]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff68d6d220 a2=0 a3=7fff68d6d20c items=0 ppid=2209 pid=2313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.798000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 13 00:54:07.802000 audit[2315]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2315 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:07.802000 audit[2315]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcdddb8310 a2=0 a3=7ffcdddb82fc items=0 ppid=2209 pid=2315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.802000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 13 00:54:07.804000 audit[2316]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2316 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:07.804000 audit[2316]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=100 a0=3 a1=7fffbe6afd40 a2=0 a3=7fffbe6afd2c items=0 ppid=2209 pid=2316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.804000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 13 00:54:07.808000 audit[2318]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2318 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:07.808000 audit[2318]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe7ec8df80 a2=0 a3=7ffe7ec8df6c items=0 ppid=2209 pid=2318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.808000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 13 00:54:07.814000 audit[2321]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2321 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:07.814000 audit[2321]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdd7922c20 a2=0 a3=7ffdd7922c0c items=0 ppid=2209 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.814000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Sep 13 00:54:07.816000 audit[2322]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2322 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:07.816000 audit[2322]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff44ab74f0 a2=0 a3=7fff44ab74dc items=0 ppid=2209 pid=2322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.816000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 13 00:54:07.820000 audit[2324]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2324 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:07.820000 audit[2324]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcaf258ae0 a2=0 a3=7ffcaf258acc items=0 ppid=2209 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.820000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 13 00:54:07.822000 audit[2325]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2325 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:07.822000 audit[2325]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7fff92665c40 a2=0 a3=7fff92665c2c items=0 ppid=2209 pid=2325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.822000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 13 00:54:07.826000 audit[2327]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2327 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:07.826000 audit[2327]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd5d015850 a2=0 a3=7ffd5d01583c items=0 ppid=2209 pid=2327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.826000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 13 00:54:07.837000 audit[2330]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2330 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:07.837000 audit[2330]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeb2ec3770 a2=0 a3=7ffeb2ec375c items=0 ppid=2209 pid=2330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.837000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 13 00:54:07.842000 audit[2333]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2333 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:07.842000 audit[2333]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcb04af810 a2=0 a3=7ffcb04af7fc items=0 ppid=2209 pid=2333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.842000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 13 00:54:07.843000 audit[2334]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2334 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:07.843000 audit[2334]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc48112ed0 a2=0 a3=7ffc48112ebc items=0 ppid=2209 pid=2334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.843000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 13 00:54:07.847000 audit[2336]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2336 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:07.847000 audit[2336]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=524 a0=3 a1=7ffc67b1ca70 a2=0 a3=7ffc67b1ca5c items=0 ppid=2209 pid=2336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.847000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:54:07.852000 audit[2339]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2339 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:07.852000 audit[2339]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffdbd83990 a2=0 a3=7fffdbd8397c items=0 ppid=2209 pid=2339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.852000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:54:07.854000 audit[2340]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2340 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:07.854000 audit[2340]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc1ee1e020 a2=0 a3=7ffc1ee1e00c items=0 ppid=2209 pid=2340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.854000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 13 
00:54:07.858000 audit[2342]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2342 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:07.858000 audit[2342]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffda35ef030 a2=0 a3=7ffda35ef01c items=0 ppid=2209 pid=2342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.858000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 13 00:54:07.895000 audit[2348]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2348 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:07.895000 audit[2348]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc76508330 a2=0 a3=7ffc7650831c items=0 ppid=2209 pid=2348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.895000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:07.905000 audit[2348]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2348 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:07.905000 audit[2348]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc76508330 a2=0 a3=7ffc7650831c items=0 ppid=2209 pid=2348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.905000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:07.908000 audit[2353]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2353 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:07.908000 audit[2353]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffcfe2b2860 a2=0 a3=7ffcfe2b284c items=0 ppid=2209 pid=2353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.908000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 13 00:54:07.912000 audit[2355]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2355 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:07.912000 audit[2355]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffcffd234a0 a2=0 a3=7ffcffd2348c items=0 ppid=2209 pid=2355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.912000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Sep 13 00:54:07.917000 audit[2358]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2358 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:07.917000 audit[2358]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 
a1=7ffe034300f0 a2=0 a3=7ffe034300dc items=0 ppid=2209 pid=2358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.917000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Sep 13 00:54:07.921000 audit[2359]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2359 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:07.921000 audit[2359]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdaf8e0670 a2=0 a3=7ffdaf8e065c items=0 ppid=2209 pid=2359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.921000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 13 00:54:07.927000 audit[2361]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2361 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:07.927000 audit[2361]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd1bc2f610 a2=0 a3=7ffd1bc2f5fc items=0 ppid=2209 pid=2361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.927000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 13 00:54:07.928000 audit[2362]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2362 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:07.928000 audit[2362]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8aecc510 a2=0 a3=7fff8aecc4fc items=0 ppid=2209 pid=2362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.928000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 13 00:54:07.936000 audit[2364]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2364 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:07.936000 audit[2364]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd8bae4a20 a2=0 a3=7ffd8bae4a0c items=0 ppid=2209 pid=2364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.936000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Sep 13 00:54:07.942000 audit[2367]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2367 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:07.942000 audit[2367]: SYSCALL arch=c000003e syscall=46 
success=yes exit=828 a0=3 a1=7ffef4fe9790 a2=0 a3=7ffef4fe977c items=0 ppid=2209 pid=2367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.942000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 13 00:54:07.944000 audit[2368]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2368 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:07.944000 audit[2368]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc21a6ab0 a2=0 a3=7ffdc21a6a9c items=0 ppid=2209 pid=2368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.944000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 13 00:54:07.949000 audit[2370]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2370 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:07.949000 audit[2370]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe89d7fe50 a2=0 a3=7ffe89d7fe3c items=0 ppid=2209 pid=2370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.949000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 13 00:54:07.950000 audit[2371]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2371 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:07.950000 audit[2371]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff8b645dd0 a2=0 a3=7fff8b645dbc items=0 ppid=2209 pid=2371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.950000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 13 00:54:07.954000 audit[2373]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2373 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:07.954000 audit[2373]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcbb3d1b50 a2=0 a3=7ffcbb3d1b3c items=0 ppid=2209 pid=2373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.954000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 13 00:54:07.960000 audit[2376]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2376 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:07.960000 audit[2376]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=748 a0=3 a1=7fff9903cfb0 a2=0 a3=7fff9903cf9c items=0 ppid=2209 pid=2376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.960000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 13 00:54:07.965000 audit[2379]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2379 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:07.965000 audit[2379]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc58e87930 a2=0 a3=7ffc58e8791c items=0 ppid=2209 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.965000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Sep 13 00:54:07.967000 audit[2380]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2380 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:07.967000 audit[2380]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff70580e90 a2=0 a3=7fff70580e7c items=0 ppid=2209 pid=2380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.967000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 13 00:54:07.970000 audit[2382]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2382 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:07.970000 audit[2382]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffc66b7fe00 a2=0 a3=7ffc66b7fdec items=0 ppid=2209 pid=2382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.970000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:54:07.975000 audit[2385]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2385 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:07.975000 audit[2385]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffe5ba8d9f0 a2=0 a3=7ffe5ba8d9dc items=0 ppid=2209 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.975000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:54:07.977000 audit[2386]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2386 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:07.977000 audit[2386]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9383f9d0 a2=0 a3=7fff9383f9bc items=0 ppid=2209 
pid=2386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.977000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 13 00:54:07.984000 audit[2388]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2388 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:07.984000 audit[2388]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffecbbb2250 a2=0 a3=7ffecbbb223c items=0 ppid=2209 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.984000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 13 00:54:07.986000 audit[2389]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2389 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:07.986000 audit[2389]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff10d2ffc0 a2=0 a3=7fff10d2ffac items=0 ppid=2209 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.986000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 13 00:54:07.997000 audit[2391]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2391 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Sep 13 00:54:07.997000 audit[2391]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcbbda04c0 a2=0 a3=7ffcbbda04ac items=0 ppid=2209 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.997000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:54:08.003000 audit[2394]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2394 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:08.003000 audit[2394]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffef9a874c0 a2=0 a3=7ffef9a874ac items=0 ppid=2209 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:08.003000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:54:08.008000 audit[2396]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2396 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 13 00:54:08.008000 audit[2396]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffee1fb0ae0 a2=0 a3=7ffee1fb0acc items=0 ppid=2209 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:08.008000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:08.009000 audit[2396]: NETFILTER_CFG table=nat:88 
family=10 entries=7 op=nft_register_chain pid=2396 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 13 00:54:08.009000 audit[2396]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffee1fb0ae0 a2=0 a3=7ffee1fb0acc items=0 ppid=2209 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:08.009000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:08.651700 kubelet[2107]: E0913 00:54:08.651665 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:08.799375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4242705998.mount: Deactivated successfully. 
Sep 13 00:54:09.979014 env[1311]: time="2025-09-13T00:54:09.978940604Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:09.980984 env[1311]: time="2025-09-13T00:54:09.980904993Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:09.985372 env[1311]: time="2025-09-13T00:54:09.985325497Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:09.987894 env[1311]: time="2025-09-13T00:54:09.987851022Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:09.989413 env[1311]: time="2025-09-13T00:54:09.988813556Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 13 00:54:09.993342 env[1311]: time="2025-09-13T00:54:09.993287091Z" level=info msg="CreateContainer within sandbox \"089526ba0a1700686ef4ef04e43771c38c04a9f525c8f9aff39848792ab244c6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 00:54:10.007634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount235543574.mount: Deactivated successfully. Sep 13 00:54:10.019169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3702858558.mount: Deactivated successfully. 
Sep 13 00:54:10.022380 env[1311]: time="2025-09-13T00:54:10.022325761Z" level=info msg="CreateContainer within sandbox \"089526ba0a1700686ef4ef04e43771c38c04a9f525c8f9aff39848792ab244c6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e74f0d99314403bf04ceaa92befccaec0affed1d130b0cb90ac1c59f757bbdfa\"" Sep 13 00:54:10.025243 env[1311]: time="2025-09-13T00:54:10.025166618Z" level=info msg="StartContainer for \"e74f0d99314403bf04ceaa92befccaec0affed1d130b0cb90ac1c59f757bbdfa\"" Sep 13 00:54:10.072838 kubelet[2107]: E0913 00:54:10.072088 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:10.117803 env[1311]: time="2025-09-13T00:54:10.117737851Z" level=info msg="StartContainer for \"e74f0d99314403bf04ceaa92befccaec0affed1d130b0cb90ac1c59f757bbdfa\" returns successfully" Sep 13 00:54:10.656495 kubelet[2107]: E0913 00:54:10.656463 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:10.687768 kubelet[2107]: I0913 00:54:10.687713 2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-5ms8b" podStartSLOduration=1.2124227730000001 podStartE2EDuration="3.68768423s" podCreationTimestamp="2025-09-13 00:54:07 +0000 UTC" firstStartedPulling="2025-09-13 00:54:07.515123422 +0000 UTC m=+4.199261894" lastFinishedPulling="2025-09-13 00:54:09.990384872 +0000 UTC m=+6.674523351" observedRunningTime="2025-09-13 00:54:10.672119178 +0000 UTC m=+7.356257653" watchObservedRunningTime="2025-09-13 00:54:10.68768423 +0000 UTC m=+7.371822714" Sep 13 00:54:11.657166 kubelet[2107]: E0913 00:54:11.657129 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:14.964325 kubelet[2107]: E0913 00:54:14.964072 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:16.878962 sudo[1477]: pam_unix(sudo:session): session closed for user root Sep 13 00:54:16.882781 kernel: kauditd_printk_skb: 143 callbacks suppressed Sep 13 00:54:16.882924 kernel: audit: type=1106 audit(1757724856.877:294): pid=1477 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:54:16.877000 audit[1477]: USER_END pid=1477 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:54:16.877000 audit[1477]: CRED_DISP pid=1477 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:54:16.886799 kernel: audit: type=1104 audit(1757724856.877:295): pid=1477 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Sep 13 00:54:16.890354 sshd[1473]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:16.891000 audit[1473]: USER_END pid=1473 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:54:16.896456 kernel: audit: type=1106 audit(1757724856.891:296): pid=1473 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:54:16.894353 systemd[1]: sshd@6-161.35.238.92:22-147.75.109.163:60872.service: Deactivated successfully. Sep 13 00:54:16.895231 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:54:16.897196 systemd-logind[1291]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:54:16.891000 audit[1473]: CRED_DISP pid=1473 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:54:16.901445 kernel: audit: type=1104 audit(1757724856.891:297): pid=1473 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:54:16.901927 systemd-logind[1291]: Removed session 7. Sep 13 00:54:16.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-161.35.238.92:22-147.75.109.163:60872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:54:16.905419 kernel: audit: type=1131 audit(1757724856.891:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-161.35.238.92:22-147.75.109.163:60872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:17.414914 kernel: audit: type=1325 audit(1757724857.407:299): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2479 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:17.415097 kernel: audit: type=1300 audit(1757724857.407:299): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc35b93e50 a2=0 a3=7ffc35b93e3c items=0 ppid=2209 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:17.407000 audit[2479]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2479 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:17.407000 audit[2479]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc35b93e50 a2=0 a3=7ffc35b93e3c items=0 ppid=2209 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:17.407000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:17.420426 kernel: audit: type=1327 audit(1757724857.407:299): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:17.415000 audit[2479]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2479 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:17.422425 kernel: audit: type=1325 
audit(1757724857.415:300): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2479 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:17.415000 audit[2479]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc35b93e50 a2=0 a3=0 items=0 ppid=2209 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:17.430425 kernel: audit: type=1300 audit(1757724857.415:300): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc35b93e50 a2=0 a3=0 items=0 ppid=2209 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:17.415000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:17.457000 audit[2481]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2481 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:17.457000 audit[2481]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc2092ace0 a2=0 a3=7ffc2092accc items=0 ppid=2209 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:17.457000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:17.470000 audit[2481]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2481 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:17.470000 audit[2481]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 
a1=7ffc2092ace0 a2=0 a3=0 items=0 ppid=2209 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:17.470000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:20.251000 audit[2483]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:20.251000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffff37bb4d0 a2=0 a3=7ffff37bb4bc items=0 ppid=2209 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:20.251000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:20.260000 audit[2483]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:20.260000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffff37bb4d0 a2=0 a3=0 items=0 ppid=2209 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:20.260000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:20.306000 audit[2485]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=2485 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:20.306000 audit[2485]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffcd38ad1d0 a2=0 a3=7ffcd38ad1bc items=0 ppid=2209 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:20.306000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:20.320000 audit[2485]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2485 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:20.320000 audit[2485]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcd38ad1d0 a2=0 a3=0 items=0 ppid=2209 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:20.320000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:20.475298 kubelet[2107]: I0913 00:54:20.475231 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6ac932b1-b784-4a65-8fda-1126f2677c1b-typha-certs\") pod \"calico-typha-55bd5d4b96-qcmst\" (UID: \"6ac932b1-b784-4a65-8fda-1126f2677c1b\") " pod="calico-system/calico-typha-55bd5d4b96-qcmst" Sep 13 00:54:20.475902 kubelet[2107]: I0913 00:54:20.475303 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ac932b1-b784-4a65-8fda-1126f2677c1b-tigera-ca-bundle\") pod \"calico-typha-55bd5d4b96-qcmst\" (UID: \"6ac932b1-b784-4a65-8fda-1126f2677c1b\") " pod="calico-system/calico-typha-55bd5d4b96-qcmst" Sep 13 00:54:20.475902 
kubelet[2107]: I0913 00:54:20.475352 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8rsz\" (UniqueName: \"kubernetes.io/projected/6ac932b1-b784-4a65-8fda-1126f2677c1b-kube-api-access-f8rsz\") pod \"calico-typha-55bd5d4b96-qcmst\" (UID: \"6ac932b1-b784-4a65-8fda-1126f2677c1b\") " pod="calico-system/calico-typha-55bd5d4b96-qcmst" Sep 13 00:54:20.674672 kubelet[2107]: E0913 00:54:20.674636 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:20.682818 kubelet[2107]: I0913 00:54:20.682518 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3528cbe7-e35e-46dd-aab9-10d81fa30ebb-lib-modules\") pod \"calico-node-pnpwg\" (UID: \"3528cbe7-e35e-46dd-aab9-10d81fa30ebb\") " pod="calico-system/calico-node-pnpwg" Sep 13 00:54:20.682818 kubelet[2107]: I0913 00:54:20.682566 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3528cbe7-e35e-46dd-aab9-10d81fa30ebb-policysync\") pod \"calico-node-pnpwg\" (UID: \"3528cbe7-e35e-46dd-aab9-10d81fa30ebb\") " pod="calico-system/calico-node-pnpwg" Sep 13 00:54:20.682818 kubelet[2107]: I0913 00:54:20.682589 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3528cbe7-e35e-46dd-aab9-10d81fa30ebb-var-run-calico\") pod \"calico-node-pnpwg\" (UID: \"3528cbe7-e35e-46dd-aab9-10d81fa30ebb\") " pod="calico-system/calico-node-pnpwg" Sep 13 00:54:20.682818 kubelet[2107]: I0913 00:54:20.682611 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/3528cbe7-e35e-46dd-aab9-10d81fa30ebb-cni-net-dir\") pod \"calico-node-pnpwg\" (UID: \"3528cbe7-e35e-46dd-aab9-10d81fa30ebb\") " pod="calico-system/calico-node-pnpwg" Sep 13 00:54:20.682818 kubelet[2107]: I0913 00:54:20.682642 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3528cbe7-e35e-46dd-aab9-10d81fa30ebb-flexvol-driver-host\") pod \"calico-node-pnpwg\" (UID: \"3528cbe7-e35e-46dd-aab9-10d81fa30ebb\") " pod="calico-system/calico-node-pnpwg" Sep 13 00:54:20.683186 kubelet[2107]: I0913 00:54:20.682660 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3528cbe7-e35e-46dd-aab9-10d81fa30ebb-xtables-lock\") pod \"calico-node-pnpwg\" (UID: \"3528cbe7-e35e-46dd-aab9-10d81fa30ebb\") " pod="calico-system/calico-node-pnpwg" Sep 13 00:54:20.683186 kubelet[2107]: I0913 00:54:20.682679 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3528cbe7-e35e-46dd-aab9-10d81fa30ebb-cni-bin-dir\") pod \"calico-node-pnpwg\" (UID: \"3528cbe7-e35e-46dd-aab9-10d81fa30ebb\") " pod="calico-system/calico-node-pnpwg" Sep 13 00:54:20.683186 kubelet[2107]: I0913 00:54:20.682698 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3528cbe7-e35e-46dd-aab9-10d81fa30ebb-cni-log-dir\") pod \"calico-node-pnpwg\" (UID: \"3528cbe7-e35e-46dd-aab9-10d81fa30ebb\") " pod="calico-system/calico-node-pnpwg" Sep 13 00:54:20.683186 kubelet[2107]: I0913 00:54:20.682719 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3528cbe7-e35e-46dd-aab9-10d81fa30ebb-node-certs\") pod 
\"calico-node-pnpwg\" (UID: \"3528cbe7-e35e-46dd-aab9-10d81fa30ebb\") " pod="calico-system/calico-node-pnpwg" Sep 13 00:54:20.683186 kubelet[2107]: I0913 00:54:20.682764 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3528cbe7-e35e-46dd-aab9-10d81fa30ebb-tigera-ca-bundle\") pod \"calico-node-pnpwg\" (UID: \"3528cbe7-e35e-46dd-aab9-10d81fa30ebb\") " pod="calico-system/calico-node-pnpwg" Sep 13 00:54:20.683333 kubelet[2107]: I0913 00:54:20.682782 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-447sr\" (UniqueName: \"kubernetes.io/projected/3528cbe7-e35e-46dd-aab9-10d81fa30ebb-kube-api-access-447sr\") pod \"calico-node-pnpwg\" (UID: \"3528cbe7-e35e-46dd-aab9-10d81fa30ebb\") " pod="calico-system/calico-node-pnpwg" Sep 13 00:54:20.683333 kubelet[2107]: I0913 00:54:20.682802 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3528cbe7-e35e-46dd-aab9-10d81fa30ebb-var-lib-calico\") pod \"calico-node-pnpwg\" (UID: \"3528cbe7-e35e-46dd-aab9-10d81fa30ebb\") " pod="calico-system/calico-node-pnpwg" Sep 13 00:54:20.684036 env[1311]: time="2025-09-13T00:54:20.683653650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55bd5d4b96-qcmst,Uid:6ac932b1-b784-4a65-8fda-1126f2677c1b,Namespace:calico-system,Attempt:0,}" Sep 13 00:54:20.712455 env[1311]: time="2025-09-13T00:54:20.710288635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:20.712455 env[1311]: time="2025-09-13T00:54:20.710323042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:20.712455 env[1311]: time="2025-09-13T00:54:20.710333847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:20.712455 env[1311]: time="2025-09-13T00:54:20.710494467Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0077aef51403d60caf64fe0dfe7ca9961ca9e3999d3b7126549c0503859fe3fd pid=2495 runtime=io.containerd.runc.v2 Sep 13 00:54:20.793240 kubelet[2107]: E0913 00:54:20.788439 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:20.793240 kubelet[2107]: W0913 00:54:20.788480 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:20.793240 kubelet[2107]: E0913 00:54:20.788505 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:20.839124 env[1311]: time="2025-09-13T00:54:20.838064750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55bd5d4b96-qcmst,Uid:6ac932b1-b784-4a65-8fda-1126f2677c1b,Namespace:calico-system,Attempt:0,} returns sandbox id \"0077aef51403d60caf64fe0dfe7ca9961ca9e3999d3b7126549c0503859fe3fd\"" Sep 13 00:54:20.843694 kubelet[2107]: E0913 00:54:20.841538 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:20.843694 kubelet[2107]: W0913 00:54:20.841568 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:20.843694 kubelet[2107]: E0913 00:54:20.841607 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:20.844273 kubelet[2107]: E0913 00:54:20.844251 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:20.845489 env[1311]: time="2025-09-13T00:54:20.845448623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 00:54:20.950947 env[1311]: time="2025-09-13T00:54:20.950773196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pnpwg,Uid:3528cbe7-e35e-46dd-aab9-10d81fa30ebb,Namespace:calico-system,Attempt:0,}" Sep 13 00:54:20.967876 env[1311]: time="2025-09-13T00:54:20.967751182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:20.967876 env[1311]: time="2025-09-13T00:54:20.967805971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:20.967876 env[1311]: time="2025-09-13T00:54:20.967820185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:20.969505 env[1311]: time="2025-09-13T00:54:20.968378031Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f849ed86883a75819fe737ca6fbd7e1c99bbc29bebeb5a33138951ec05a5e664 pid=2539 runtime=io.containerd.runc.v2 Sep 13 00:54:20.997284 update_engine[1293]: I0913 00:54:20.995997 1293 update_attempter.cc:509] Updating boot flags... Sep 13 00:54:21.002372 kubelet[2107]: E0913 00:54:21.002174 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4d8z6" podUID="5f057288-90ee-4889-a341-9af038f7cf7a" Sep 13 00:54:21.068370 kubelet[2107]: E0913 00:54:21.068284 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.068370 kubelet[2107]: W0913 00:54:21.068310 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.068880 kubelet[2107]: E0913 00:54:21.068637 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:21.069286 kubelet[2107]: E0913 00:54:21.069188 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.069286 kubelet[2107]: W0913 00:54:21.069222 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.069286 kubelet[2107]: E0913 00:54:21.069240 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:21.071047 kubelet[2107]: E0913 00:54:21.070271 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.071047 kubelet[2107]: W0913 00:54:21.070304 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.071047 kubelet[2107]: E0913 00:54:21.070324 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:21.073124 kubelet[2107]: E0913 00:54:21.071547 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.073124 kubelet[2107]: W0913 00:54:21.071562 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.073124 kubelet[2107]: E0913 00:54:21.071576 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:21.075001 kubelet[2107]: E0913 00:54:21.074671 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.075001 kubelet[2107]: W0913 00:54:21.074686 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.075001 kubelet[2107]: E0913 00:54:21.074704 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:21.075001 kubelet[2107]: E0913 00:54:21.074869 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.075001 kubelet[2107]: W0913 00:54:21.074877 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.075001 kubelet[2107]: E0913 00:54:21.074886 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:21.075422 kubelet[2107]: E0913 00:54:21.075260 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.075422 kubelet[2107]: W0913 00:54:21.075273 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.075422 kubelet[2107]: E0913 00:54:21.075284 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:21.075666 kubelet[2107]: E0913 00:54:21.075654 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.075739 kubelet[2107]: W0913 00:54:21.075726 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.075904 kubelet[2107]: E0913 00:54:21.075788 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:21.076071 kubelet[2107]: E0913 00:54:21.076060 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.076144 kubelet[2107]: W0913 00:54:21.076131 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.076212 kubelet[2107]: E0913 00:54:21.076200 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:21.076553 kubelet[2107]: E0913 00:54:21.076521 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.076634 kubelet[2107]: W0913 00:54:21.076619 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.076696 kubelet[2107]: E0913 00:54:21.076684 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:21.076967 kubelet[2107]: E0913 00:54:21.076953 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.077054 kubelet[2107]: W0913 00:54:21.077041 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.077221 kubelet[2107]: E0913 00:54:21.077207 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:21.086297 kubelet[2107]: E0913 00:54:21.086266 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.086522 kubelet[2107]: W0913 00:54:21.086498 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.090755 kubelet[2107]: E0913 00:54:21.086596 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:21.091596 kubelet[2107]: E0913 00:54:21.091571 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.091787 kubelet[2107]: W0913 00:54:21.091764 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.091886 kubelet[2107]: E0913 00:54:21.091863 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:21.092141 kubelet[2107]: E0913 00:54:21.092129 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.101116 kubelet[2107]: W0913 00:54:21.101072 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.101307 kubelet[2107]: E0913 00:54:21.101291 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:21.101662 kubelet[2107]: E0913 00:54:21.101644 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.101830 kubelet[2107]: W0913 00:54:21.101812 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.101905 kubelet[2107]: E0913 00:54:21.101891 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:21.114153 kubelet[2107]: E0913 00:54:21.114122 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.114339 kubelet[2107]: W0913 00:54:21.114317 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.114437 kubelet[2107]: E0913 00:54:21.114421 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:21.115275 env[1311]: time="2025-09-13T00:54:21.115232373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pnpwg,Uid:3528cbe7-e35e-46dd-aab9-10d81fa30ebb,Namespace:calico-system,Attempt:0,} returns sandbox id \"f849ed86883a75819fe737ca6fbd7e1c99bbc29bebeb5a33138951ec05a5e664\"" Sep 13 00:54:21.115777 kubelet[2107]: E0913 00:54:21.115758 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.115891 kubelet[2107]: W0913 00:54:21.115874 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.115964 kubelet[2107]: E0913 00:54:21.115950 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:21.124341 kubelet[2107]: E0913 00:54:21.122463 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.124341 kubelet[2107]: W0913 00:54:21.122490 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.124341 kubelet[2107]: E0913 00:54:21.122519 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:21.124341 kubelet[2107]: E0913 00:54:21.122767 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.124341 kubelet[2107]: W0913 00:54:21.122777 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.124341 kubelet[2107]: E0913 00:54:21.122788 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:21.124341 kubelet[2107]: E0913 00:54:21.122942 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.124341 kubelet[2107]: W0913 00:54:21.122948 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.124341 kubelet[2107]: E0913 00:54:21.122958 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:21.124341 kubelet[2107]: E0913 00:54:21.123215 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.124811 kubelet[2107]: W0913 00:54:21.123223 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.124811 kubelet[2107]: E0913 00:54:21.123233 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:21.124811 kubelet[2107]: I0913 00:54:21.123258 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5f057288-90ee-4889-a341-9af038f7cf7a-kubelet-dir\") pod \"csi-node-driver-4d8z6\" (UID: \"5f057288-90ee-4889-a341-9af038f7cf7a\") " pod="calico-system/csi-node-driver-4d8z6" Sep 13 00:54:21.124811 kubelet[2107]: E0913 00:54:21.123456 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.124811 kubelet[2107]: W0913 00:54:21.123464 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.124811 kubelet[2107]: E0913 00:54:21.123477 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:21.124811 kubelet[2107]: I0913 00:54:21.123506 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5f057288-90ee-4889-a341-9af038f7cf7a-socket-dir\") pod \"csi-node-driver-4d8z6\" (UID: \"5f057288-90ee-4889-a341-9af038f7cf7a\") " pod="calico-system/csi-node-driver-4d8z6" Sep 13 00:54:21.124811 kubelet[2107]: E0913 00:54:21.123720 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.125025 kubelet[2107]: W0913 00:54:21.123730 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.125025 kubelet[2107]: E0913 00:54:21.123742 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:21.125025 kubelet[2107]: I0913 00:54:21.123762 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t9kx\" (UniqueName: \"kubernetes.io/projected/5f057288-90ee-4889-a341-9af038f7cf7a-kube-api-access-8t9kx\") pod \"csi-node-driver-4d8z6\" (UID: \"5f057288-90ee-4889-a341-9af038f7cf7a\") " pod="calico-system/csi-node-driver-4d8z6" Sep 13 00:54:21.125025 kubelet[2107]: E0913 00:54:21.124018 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.125025 kubelet[2107]: W0913 00:54:21.124028 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.125025 kubelet[2107]: E0913 00:54:21.124045 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:21.125025 kubelet[2107]: I0913 00:54:21.124061 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5f057288-90ee-4889-a341-9af038f7cf7a-varrun\") pod \"csi-node-driver-4d8z6\" (UID: \"5f057288-90ee-4889-a341-9af038f7cf7a\") " pod="calico-system/csi-node-driver-4d8z6" Sep 13 00:54:21.131345 kubelet[2107]: E0913 00:54:21.128960 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.131345 kubelet[2107]: W0913 00:54:21.128992 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.131345 kubelet[2107]: E0913 00:54:21.129110 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:21.131345 kubelet[2107]: I0913 00:54:21.129143 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5f057288-90ee-4889-a341-9af038f7cf7a-registration-dir\") pod \"csi-node-driver-4d8z6\" (UID: \"5f057288-90ee-4889-a341-9af038f7cf7a\") " pod="calico-system/csi-node-driver-4d8z6" Sep 13 00:54:21.131345 kubelet[2107]: E0913 00:54:21.129281 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.131345 kubelet[2107]: W0913 00:54:21.129289 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.131345 kubelet[2107]: E0913 00:54:21.129358 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:21.131345 kubelet[2107]: E0913 00:54:21.129531 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.131345 kubelet[2107]: W0913 00:54:21.129538 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.132990 kubelet[2107]: E0913 00:54:21.129602 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:21.132990 kubelet[2107]: E0913 00:54:21.129705 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.132990 kubelet[2107]: W0913 00:54:21.129711 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.132990 kubelet[2107]: E0913 00:54:21.129772 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:21.132990 kubelet[2107]: E0913 00:54:21.129888 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.132990 kubelet[2107]: W0913 00:54:21.129895 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.132990 kubelet[2107]: E0913 00:54:21.129905 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:21.132990 kubelet[2107]: E0913 00:54:21.130054 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.132990 kubelet[2107]: W0913 00:54:21.130061 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.132990 kubelet[2107]: E0913 00:54:21.130071 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:21.136736 kubelet[2107]: E0913 00:54:21.130212 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:21.136736 kubelet[2107]: W0913 00:54:21.130219 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:21.136736 kubelet[2107]: E0913 00:54:21.130226 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 13 00:54:21.136736 kubelet[2107]: E0913 00:54:21.130378 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.136736 kubelet[2107]: W0913 00:54:21.130385 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.136736 kubelet[2107]: E0913 00:54:21.132448 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.136736 kubelet[2107]: E0913 00:54:21.132728 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.136736 kubelet[2107]: W0913 00:54:21.132741 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.136736 kubelet[2107]: E0913 00:54:21.132755 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.136736 kubelet[2107]: E0913 00:54:21.132943 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.137078 kubelet[2107]: W0913 00:54:21.132952 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.137078 kubelet[2107]: E0913 00:54:21.132961 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.137078 kubelet[2107]: E0913 00:54:21.133117 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.137078 kubelet[2107]: W0913 00:54:21.133124 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.137078 kubelet[2107]: E0913 00:54:21.133132 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.236678 kubelet[2107]: E0913 00:54:21.232544 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.236678 kubelet[2107]: W0913 00:54:21.232570 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.236678 kubelet[2107]: E0913 00:54:21.232596 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.236678 kubelet[2107]: E0913 00:54:21.235475 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.236678 kubelet[2107]: W0913 00:54:21.235497 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.236678 kubelet[2107]: E0913 00:54:21.235537 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.236678 kubelet[2107]: E0913 00:54:21.235847 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.236678 kubelet[2107]: W0913 00:54:21.235859 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.236678 kubelet[2107]: E0913 00:54:21.235941 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.236678 kubelet[2107]: E0913 00:54:21.236070 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.237110 kubelet[2107]: W0913 00:54:21.236077 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.237110 kubelet[2107]: E0913 00:54:21.236140 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.237110 kubelet[2107]: E0913 00:54:21.236284 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.237110 kubelet[2107]: W0913 00:54:21.236292 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.237110 kubelet[2107]: E0913 00:54:21.236307 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.237110 kubelet[2107]: E0913 00:54:21.236487 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.237110 kubelet[2107]: W0913 00:54:21.236494 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.237110 kubelet[2107]: E0913 00:54:21.236504 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.237110 kubelet[2107]: E0913 00:54:21.236659 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.237110 kubelet[2107]: W0913 00:54:21.236666 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.241710 kubelet[2107]: E0913 00:54:21.236675 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.241710 kubelet[2107]: E0913 00:54:21.236947 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.241710 kubelet[2107]: W0913 00:54:21.236956 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.241710 kubelet[2107]: E0913 00:54:21.236969 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.241710 kubelet[2107]: E0913 00:54:21.237349 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.241710 kubelet[2107]: W0913 00:54:21.237359 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.241710 kubelet[2107]: E0913 00:54:21.237374 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.241710 kubelet[2107]: E0913 00:54:21.237648 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.241710 kubelet[2107]: W0913 00:54:21.237657 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.241710 kubelet[2107]: E0913 00:54:21.237671 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.242033 kubelet[2107]: E0913 00:54:21.237867 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.242033 kubelet[2107]: W0913 00:54:21.237875 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.242033 kubelet[2107]: E0913 00:54:21.237943 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.242033 kubelet[2107]: E0913 00:54:21.238076 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.242033 kubelet[2107]: W0913 00:54:21.238085 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.242033 kubelet[2107]: E0913 00:54:21.238150 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.242033 kubelet[2107]: E0913 00:54:21.238322 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.242033 kubelet[2107]: W0913 00:54:21.238331 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.242033 kubelet[2107]: E0913 00:54:21.238345 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.242033 kubelet[2107]: E0913 00:54:21.238564 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.242307 kubelet[2107]: W0913 00:54:21.238571 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.242307 kubelet[2107]: E0913 00:54:21.238583 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.242307 kubelet[2107]: E0913 00:54:21.238739 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.242307 kubelet[2107]: W0913 00:54:21.238746 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.242307 kubelet[2107]: E0913 00:54:21.238810 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.242307 kubelet[2107]: E0913 00:54:21.238988 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.242307 kubelet[2107]: W0913 00:54:21.238997 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.242307 kubelet[2107]: E0913 00:54:21.239164 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.242307 kubelet[2107]: E0913 00:54:21.239306 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.242307 kubelet[2107]: W0913 00:54:21.239314 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.242659 kubelet[2107]: E0913 00:54:21.239423 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.242659 kubelet[2107]: E0913 00:54:21.239548 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.242659 kubelet[2107]: W0913 00:54:21.239555 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.242659 kubelet[2107]: E0913 00:54:21.239618 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.242659 kubelet[2107]: E0913 00:54:21.239760 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.242659 kubelet[2107]: W0913 00:54:21.239767 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.242659 kubelet[2107]: E0913 00:54:21.239776 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.242659 kubelet[2107]: E0913 00:54:21.239961 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.242659 kubelet[2107]: W0913 00:54:21.239968 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.242659 kubelet[2107]: E0913 00:54:21.239982 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.242925 kubelet[2107]: E0913 00:54:21.240243 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.242925 kubelet[2107]: W0913 00:54:21.240251 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.242925 kubelet[2107]: E0913 00:54:21.240268 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.242925 kubelet[2107]: E0913 00:54:21.240491 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.242925 kubelet[2107]: W0913 00:54:21.240499 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.242925 kubelet[2107]: E0913 00:54:21.240591 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.242925 kubelet[2107]: E0913 00:54:21.240705 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.242925 kubelet[2107]: W0913 00:54:21.240714 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.242925 kubelet[2107]: E0913 00:54:21.240819 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.242925 kubelet[2107]: E0913 00:54:21.240932 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.243192 kubelet[2107]: W0913 00:54:21.240939 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.243192 kubelet[2107]: E0913 00:54:21.240951 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.243192 kubelet[2107]: E0913 00:54:21.241111 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.243192 kubelet[2107]: W0913 00:54:21.241117 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.243192 kubelet[2107]: E0913 00:54:21.241125 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.248400 kubelet[2107]: E0913 00:54:21.248360 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:21.248400 kubelet[2107]: W0913 00:54:21.248383 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:21.248565 kubelet[2107]: E0913 00:54:21.248428 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:21.343000 audit[2658]: NETFILTER_CFG table=filter:97 family=2 entries=20 op=nft_register_rule pid=2658 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:54:21.343000 audit[2658]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd4f42d6a0 a2=0 a3=7ffd4f42d68c items=0 ppid=2209 pid=2658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:54:21.343000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:54:21.347000 audit[2658]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2658 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:54:21.347000 audit[2658]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd4f42d6a0 a2=0 a3=0 items=0 ppid=2209 pid=2658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:54:21.347000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:54:21.594723 systemd[1]: run-containerd-runc-k8s.io-0077aef51403d60caf64fe0dfe7ca9961ca9e3999d3b7126549c0503859fe3fd-runc.iLZnQI.mount: Deactivated successfully.
Sep 13 00:54:22.095457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3594380508.mount: Deactivated successfully.
Sep 13 00:54:22.576169 kubelet[2107]: E0913 00:54:22.576032 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4d8z6" podUID="5f057288-90ee-4889-a341-9af038f7cf7a"
Sep 13 00:54:23.041994 env[1311]: time="2025-09-13T00:54:23.041929721Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:54:23.044145 env[1311]: time="2025-09-13T00:54:23.044107673Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:54:23.045994 env[1311]: time="2025-09-13T00:54:23.045959588Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:54:23.047868 env[1311]: time="2025-09-13T00:54:23.047826073Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:54:23.048583 env[1311]: time="2025-09-13T00:54:23.048550982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 13 00:54:23.052452 env[1311]: time="2025-09-13T00:54:23.052380662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 13 00:54:23.074948 env[1311]: time="2025-09-13T00:54:23.074909294Z" level=info msg="CreateContainer within sandbox \"0077aef51403d60caf64fe0dfe7ca9961ca9e3999d3b7126549c0503859fe3fd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 13 00:54:23.086109 env[1311]: time="2025-09-13T00:54:23.086057152Z" level=info msg="CreateContainer within sandbox \"0077aef51403d60caf64fe0dfe7ca9961ca9e3999d3b7126549c0503859fe3fd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ac47c1e7b9300e264ed5d4caccfd1a0d7aaca2fcd72abda0d3a147fc61e4c403\""
Sep 13 00:54:23.087257 env[1311]: time="2025-09-13T00:54:23.087227902Z" level=info msg="StartContainer for \"ac47c1e7b9300e264ed5d4caccfd1a0d7aaca2fcd72abda0d3a147fc61e4c403\""
Sep 13 00:54:23.185339 env[1311]: time="2025-09-13T00:54:23.185275655Z" level=info msg="StartContainer for \"ac47c1e7b9300e264ed5d4caccfd1a0d7aaca2fcd72abda0d3a147fc61e4c403\" returns successfully"
Sep 13 00:54:23.680897 kubelet[2107]: E0913 00:54:23.680866 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:54:23.693572 kubelet[2107]: I0913 00:54:23.693504 2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55bd5d4b96-qcmst" podStartSLOduration=1.488458526 podStartE2EDuration="3.693466005s" podCreationTimestamp="2025-09-13 00:54:20 +0000 UTC" firstStartedPulling="2025-09-13 00:54:20.845076827 +0000 UTC m=+17.529215290" lastFinishedPulling="2025-09-13 00:54:23.050084294 +0000 UTC m=+19.734222769" observedRunningTime="2025-09-13 00:54:23.69297914 +0000 UTC m=+20.377117625" watchObservedRunningTime="2025-09-13 00:54:23.693466005 +0000 UTC m=+20.377604515"
Sep 13 00:54:23.741925 kubelet[2107]: E0913 00:54:23.741879 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:23.741925 kubelet[2107]: W0913 00:54:23.741910 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:23.742150 kubelet[2107]: E0913 00:54:23.741939 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:23.742212 kubelet[2107]: E0913 00:54:23.742197 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:23.742253 kubelet[2107]: W0913 00:54:23.742212 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:23.742253 kubelet[2107]: E0913 00:54:23.742224 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:23.742445 kubelet[2107]: E0913 00:54:23.742426 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:23.742445 kubelet[2107]: W0913 00:54:23.742440 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:23.742521 kubelet[2107]: E0913 00:54:23.742455 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:23.742683 kubelet[2107]: E0913 00:54:23.742642 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:23.742683 kubelet[2107]: W0913 00:54:23.742655 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:23.742683 kubelet[2107]: E0913 00:54:23.742663 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:23.742859 kubelet[2107]: E0913 00:54:23.742843 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:23.742859 kubelet[2107]: W0913 00:54:23.742850 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:23.742859 kubelet[2107]: E0913 00:54:23.742858 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:23.743023 kubelet[2107]: E0913 00:54:23.743011 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:23.743023 kubelet[2107]: W0913 00:54:23.743021 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:23.743085 kubelet[2107]: E0913 00:54:23.743029 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:23.743185 kubelet[2107]: E0913 00:54:23.743174 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:23.743185 kubelet[2107]: W0913 00:54:23.743184 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:23.743251 kubelet[2107]: E0913 00:54:23.743191 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:23.743350 kubelet[2107]: E0913 00:54:23.743340 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:23.743403 kubelet[2107]: W0913 00:54:23.743350 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:23.743403 kubelet[2107]: E0913 00:54:23.743357 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:23.743529 kubelet[2107]: E0913 00:54:23.743518 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:23.743529 kubelet[2107]: W0913 00:54:23.743529 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:23.743604 kubelet[2107]: E0913 00:54:23.743536 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:23.743683 kubelet[2107]: E0913 00:54:23.743674 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:23.743721 kubelet[2107]: W0913 00:54:23.743683 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:23.743721 kubelet[2107]: E0913 00:54:23.743690 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:23.743843 kubelet[2107]: E0913 00:54:23.743833 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:23.743843 kubelet[2107]: W0913 00:54:23.743842 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:23.743909 kubelet[2107]: E0913 00:54:23.743849 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:23.743996 kubelet[2107]: E0913 00:54:23.743985 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:23.744033 kubelet[2107]: W0913 00:54:23.743999 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:23.744033 kubelet[2107]: E0913 00:54:23.744006 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:23.744166 kubelet[2107]: E0913 00:54:23.744156 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:23.744206 kubelet[2107]: W0913 00:54:23.744166 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:23.744206 kubelet[2107]: E0913 00:54:23.744174 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:23.744329 kubelet[2107]: E0913 00:54:23.744319 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:23.744329 kubelet[2107]: W0913 00:54:23.744329 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:23.744418 kubelet[2107]: E0913 00:54:23.744336 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:23.744525 kubelet[2107]: E0913 00:54:23.744510 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:23.744525 kubelet[2107]: W0913 00:54:23.744524 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:23.744610 kubelet[2107]: E0913 00:54:23.744536 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:23.757438 kubelet[2107]: E0913 00:54:23.755053 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:23.757438 kubelet[2107]: W0913 00:54:23.755079 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:23.757438 kubelet[2107]: E0913 00:54:23.755103 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:23.757438 kubelet[2107]: E0913 00:54:23.755318 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:23.757438 kubelet[2107]: W0913 00:54:23.755329 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:23.757438 kubelet[2107]: E0913 00:54:23.755344 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:23.757438 kubelet[2107]: E0913 00:54:23.755559 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:23.757438 kubelet[2107]: W0913 00:54:23.755570 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:23.757438 kubelet[2107]: E0913 00:54:23.755580 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:54:23.757438 kubelet[2107]: E0913 00:54:23.755760 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:54:23.757881 kubelet[2107]: W0913 00:54:23.755767 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:54:23.757881 kubelet[2107]: E0913 00:54:23.755775 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 13 00:54:23.757881 kubelet[2107]: E0913 00:54:23.755951 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:23.757881 kubelet[2107]: W0913 00:54:23.755957 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:23.757881 kubelet[2107]: E0913 00:54:23.755973 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:23.757881 kubelet[2107]: E0913 00:54:23.756126 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:23.757881 kubelet[2107]: W0913 00:54:23.756133 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:23.757881 kubelet[2107]: E0913 00:54:23.756141 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:23.757881 kubelet[2107]: E0913 00:54:23.756311 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:23.757881 kubelet[2107]: W0913 00:54:23.756318 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:23.758172 kubelet[2107]: E0913 00:54:23.756325 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:23.759453 kubelet[2107]: E0913 00:54:23.759434 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:23.759595 kubelet[2107]: W0913 00:54:23.759579 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:23.759665 kubelet[2107]: E0913 00:54:23.759653 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:23.759939 kubelet[2107]: E0913 00:54:23.759924 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:23.760042 kubelet[2107]: W0913 00:54:23.760028 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:23.760106 kubelet[2107]: E0913 00:54:23.760094 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:23.760324 kubelet[2107]: E0913 00:54:23.760313 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:23.760416 kubelet[2107]: W0913 00:54:23.760387 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:23.760492 kubelet[2107]: E0913 00:54:23.760481 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:23.760744 kubelet[2107]: E0913 00:54:23.760732 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:23.760852 kubelet[2107]: W0913 00:54:23.760837 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:23.760940 kubelet[2107]: E0913 00:54:23.760925 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:23.761484 kubelet[2107]: E0913 00:54:23.761470 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:23.761592 kubelet[2107]: W0913 00:54:23.761577 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:23.761708 kubelet[2107]: E0913 00:54:23.761695 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:23.763330 kubelet[2107]: E0913 00:54:23.763304 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:23.763471 kubelet[2107]: W0913 00:54:23.763455 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:23.763543 kubelet[2107]: E0913 00:54:23.763531 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:23.763778 kubelet[2107]: E0913 00:54:23.763767 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:23.763852 kubelet[2107]: W0913 00:54:23.763839 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:23.763914 kubelet[2107]: E0913 00:54:23.763902 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:23.764142 kubelet[2107]: E0913 00:54:23.764127 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:23.764219 kubelet[2107]: W0913 00:54:23.764206 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:23.764289 kubelet[2107]: E0913 00:54:23.764276 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:23.765771 kubelet[2107]: E0913 00:54:23.764967 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:23.765771 kubelet[2107]: W0913 00:54:23.765031 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:23.767577 kubelet[2107]: E0913 00:54:23.765059 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:23.769200 kubelet[2107]: E0913 00:54:23.769183 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:23.772583 kubelet[2107]: W0913 00:54:23.772544 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:23.772716 kubelet[2107]: E0913 00:54:23.772700 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:54:23.773274 kubelet[2107]: E0913 00:54:23.773257 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:54:23.773382 kubelet[2107]: W0913 00:54:23.773362 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:54:23.773515 kubelet[2107]: E0913 00:54:23.773502 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:54:24.349628 env[1311]: time="2025-09-13T00:54:24.349557138Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:24.351581 env[1311]: time="2025-09-13T00:54:24.351525973Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:24.353688 env[1311]: time="2025-09-13T00:54:24.353643997Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:24.355530 env[1311]: time="2025-09-13T00:54:24.355486210Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:24.356239 env[1311]: time="2025-09-13T00:54:24.356201558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 00:54:24.362383 env[1311]: time="2025-09-13T00:54:24.362321961Z" level=info msg="CreateContainer within sandbox \"f849ed86883a75819fe737ca6fbd7e1c99bbc29bebeb5a33138951ec05a5e664\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:54:24.377351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount123126390.mount: Deactivated successfully. 
Sep 13 00:54:24.382867 env[1311]: time="2025-09-13T00:54:24.382774442Z" level=info msg="CreateContainer within sandbox \"f849ed86883a75819fe737ca6fbd7e1c99bbc29bebeb5a33138951ec05a5e664\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"52d1b36b088ecb2e88725cf22a06cb34324212edb95c1b4e919f4d80c9b4433d\"" Sep 13 00:54:24.385968 env[1311]: time="2025-09-13T00:54:24.385905512Z" level=info msg="StartContainer for \"52d1b36b088ecb2e88725cf22a06cb34324212edb95c1b4e919f4d80c9b4433d\"" Sep 13 00:54:24.470337 env[1311]: time="2025-09-13T00:54:24.470267710Z" level=info msg="StartContainer for \"52d1b36b088ecb2e88725cf22a06cb34324212edb95c1b4e919f4d80c9b4433d\" returns successfully" Sep 13 00:54:24.498980 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52d1b36b088ecb2e88725cf22a06cb34324212edb95c1b4e919f4d80c9b4433d-rootfs.mount: Deactivated successfully. Sep 13 00:54:24.515827 env[1311]: time="2025-09-13T00:54:24.515759665Z" level=info msg="shim disconnected" id=52d1b36b088ecb2e88725cf22a06cb34324212edb95c1b4e919f4d80c9b4433d Sep 13 00:54:24.516202 env[1311]: time="2025-09-13T00:54:24.516163664Z" level=warning msg="cleaning up after shim disconnected" id=52d1b36b088ecb2e88725cf22a06cb34324212edb95c1b4e919f4d80c9b4433d namespace=k8s.io Sep 13 00:54:24.516319 env[1311]: time="2025-09-13T00:54:24.516299380Z" level=info msg="cleaning up dead shim" Sep 13 00:54:24.528679 env[1311]: time="2025-09-13T00:54:24.528581798Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:54:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2779 runtime=io.containerd.runc.v2\n" Sep 13 00:54:24.576085 kubelet[2107]: E0913 00:54:24.576019 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4d8z6" 
podUID="5f057288-90ee-4889-a341-9af038f7cf7a" Sep 13 00:54:24.684003 kubelet[2107]: I0913 00:54:24.683963 2107 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:54:24.684659 kubelet[2107]: E0913 00:54:24.684415 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:24.685973 env[1311]: time="2025-09-13T00:54:24.685938034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:54:26.576231 kubelet[2107]: E0913 00:54:26.576174 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4d8z6" podUID="5f057288-90ee-4889-a341-9af038f7cf7a" Sep 13 00:54:28.070197 env[1311]: time="2025-09-13T00:54:28.070134527Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:28.072779 env[1311]: time="2025-09-13T00:54:28.072706866Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:28.074978 env[1311]: time="2025-09-13T00:54:28.074940008Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:28.076738 env[1311]: time="2025-09-13T00:54:28.076704186Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 13 00:54:28.077533 env[1311]: time="2025-09-13T00:54:28.077501860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 13 00:54:28.081509 env[1311]: time="2025-09-13T00:54:28.081472318Z" level=info msg="CreateContainer within sandbox \"f849ed86883a75819fe737ca6fbd7e1c99bbc29bebeb5a33138951ec05a5e664\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:54:28.094348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount300311656.mount: Deactivated successfully. Sep 13 00:54:28.101853 env[1311]: time="2025-09-13T00:54:28.101802029Z" level=info msg="CreateContainer within sandbox \"f849ed86883a75819fe737ca6fbd7e1c99bbc29bebeb5a33138951ec05a5e664\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"824a5ecfea8a616955ff56f545970b1adb29502b3c9cfc7678cffaf991b9251d\"" Sep 13 00:54:28.106687 env[1311]: time="2025-09-13T00:54:28.103598197Z" level=info msg="StartContainer for \"824a5ecfea8a616955ff56f545970b1adb29502b3c9cfc7678cffaf991b9251d\"" Sep 13 00:54:28.197126 env[1311]: time="2025-09-13T00:54:28.197075241Z" level=info msg="StartContainer for \"824a5ecfea8a616955ff56f545970b1adb29502b3c9cfc7678cffaf991b9251d\" returns successfully" Sep 13 00:54:28.575627 kubelet[2107]: E0913 00:54:28.575569 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4d8z6" podUID="5f057288-90ee-4889-a341-9af038f7cf7a" Sep 13 00:54:28.853895 env[1311]: time="2025-09-13T00:54:28.853756280Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in 
/etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:54:28.888865 env[1311]: time="2025-09-13T00:54:28.888716470Z" level=info msg="shim disconnected" id=824a5ecfea8a616955ff56f545970b1adb29502b3c9cfc7678cffaf991b9251d Sep 13 00:54:28.888865 env[1311]: time="2025-09-13T00:54:28.888859493Z" level=warning msg="cleaning up after shim disconnected" id=824a5ecfea8a616955ff56f545970b1adb29502b3c9cfc7678cffaf991b9251d namespace=k8s.io Sep 13 00:54:28.888865 env[1311]: time="2025-09-13T00:54:28.888871351Z" level=info msg="cleaning up dead shim" Sep 13 00:54:28.898493 env[1311]: time="2025-09-13T00:54:28.898443929Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:54:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2848 runtime=io.containerd.runc.v2\n" Sep 13 00:54:28.946798 kubelet[2107]: I0913 00:54:28.946765 2107 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:54:28.992380 kubelet[2107]: W0913 00:54:28.992340 2107 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.8-n-b7c626372f" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3510.3.8-n-b7c626372f' and this object Sep 13 00:54:28.992380 kubelet[2107]: E0913 00:54:28.992388 2107 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-3510.3.8-n-b7c626372f\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-3510.3.8-n-b7c626372f' and this object" logger="UnhandledError" Sep 13 00:54:29.007624 kubelet[2107]: I0913 00:54:29.002868 2107 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pvtw\" (UniqueName: \"kubernetes.io/projected/4a90c3d8-bff3-4795-ac7c-5bfe09cf7345-kube-api-access-7pvtw\") pod \"coredns-7c65d6cfc9-b7mc7\" (UID: \"4a90c3d8-bff3-4795-ac7c-5bfe09cf7345\") " pod="kube-system/coredns-7c65d6cfc9-b7mc7" Sep 13 00:54:29.008827 kubelet[2107]: I0913 00:54:29.008792 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a90c3d8-bff3-4795-ac7c-5bfe09cf7345-config-volume\") pod \"coredns-7c65d6cfc9-b7mc7\" (UID: \"4a90c3d8-bff3-4795-ac7c-5bfe09cf7345\") " pod="kube-system/coredns-7c65d6cfc9-b7mc7" Sep 13 00:54:29.009047 kubelet[2107]: I0913 00:54:29.009031 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxfvk\" (UniqueName: \"kubernetes.io/projected/e2eac8c3-d7f4-4255-85a6-44ee22635692-kube-api-access-mxfvk\") pod \"calico-kube-controllers-64dcf69d7d-d9zgr\" (UID: \"e2eac8c3-d7f4-4255-85a6-44ee22635692\") " pod="calico-system/calico-kube-controllers-64dcf69d7d-d9zgr" Sep 13 00:54:29.009160 kubelet[2107]: I0913 00:54:29.009144 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m76rd\" (UniqueName: \"kubernetes.io/projected/cec0420a-0ebf-4565-8d09-fd0c2c488b56-kube-api-access-m76rd\") pod \"calico-apiserver-86766b5d6c-z4fvv\" (UID: \"cec0420a-0ebf-4565-8d09-fd0c2c488b56\") " pod="calico-apiserver/calico-apiserver-86766b5d6c-z4fvv" Sep 13 00:54:29.009258 kubelet[2107]: I0913 00:54:29.009245 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2eac8c3-d7f4-4255-85a6-44ee22635692-tigera-ca-bundle\") pod \"calico-kube-controllers-64dcf69d7d-d9zgr\" (UID: \"e2eac8c3-d7f4-4255-85a6-44ee22635692\") " 
pod="calico-system/calico-kube-controllers-64dcf69d7d-d9zgr" Sep 13 00:54:29.009353 kubelet[2107]: I0913 00:54:29.009337 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cec0420a-0ebf-4565-8d09-fd0c2c488b56-calico-apiserver-certs\") pod \"calico-apiserver-86766b5d6c-z4fvv\" (UID: \"cec0420a-0ebf-4565-8d09-fd0c2c488b56\") " pod="calico-apiserver/calico-apiserver-86766b5d6c-z4fvv" Sep 13 00:54:29.091704 systemd[1]: run-containerd-runc-k8s.io-824a5ecfea8a616955ff56f545970b1adb29502b3c9cfc7678cffaf991b9251d-runc.wkg0DC.mount: Deactivated successfully. Sep 13 00:54:29.093685 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-824a5ecfea8a616955ff56f545970b1adb29502b3c9cfc7678cffaf991b9251d-rootfs.mount: Deactivated successfully. Sep 13 00:54:29.110555 kubelet[2107]: I0913 00:54:29.110426 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdzhz\" (UniqueName: \"kubernetes.io/projected/8833f507-515d-400e-9991-59b6f2cca14f-kube-api-access-mdzhz\") pod \"calico-apiserver-86766b5d6c-6f24s\" (UID: \"8833f507-515d-400e-9991-59b6f2cca14f\") " pod="calico-apiserver/calico-apiserver-86766b5d6c-6f24s" Sep 13 00:54:29.110834 kubelet[2107]: I0913 00:54:29.110815 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8833f507-515d-400e-9991-59b6f2cca14f-calico-apiserver-certs\") pod \"calico-apiserver-86766b5d6c-6f24s\" (UID: \"8833f507-515d-400e-9991-59b6f2cca14f\") " pod="calico-apiserver/calico-apiserver-86766b5d6c-6f24s" Sep 13 00:54:29.110942 kubelet[2107]: I0913 00:54:29.110928 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/86879210-53e1-4a0a-87e7-2bb62916a082-goldmane-ca-bundle\") pod \"goldmane-7988f88666-zx8vp\" (UID: \"86879210-53e1-4a0a-87e7-2bb62916a082\") " pod="calico-system/goldmane-7988f88666-zx8vp" Sep 13 00:54:29.111050 kubelet[2107]: I0913 00:54:29.111035 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9682k\" (UniqueName: \"kubernetes.io/projected/8f31ef69-711d-48a0-989a-767140cb31a7-kube-api-access-9682k\") pod \"whisker-64f68966d7-rz62c\" (UID: \"8f31ef69-711d-48a0-989a-767140cb31a7\") " pod="calico-system/whisker-64f68966d7-rz62c" Sep 13 00:54:29.111190 kubelet[2107]: I0913 00:54:29.111175 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f31ef69-711d-48a0-989a-767140cb31a7-whisker-ca-bundle\") pod \"whisker-64f68966d7-rz62c\" (UID: \"8f31ef69-711d-48a0-989a-767140cb31a7\") " pod="calico-system/whisker-64f68966d7-rz62c" Sep 13 00:54:29.111293 kubelet[2107]: I0913 00:54:29.111278 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86879210-53e1-4a0a-87e7-2bb62916a082-config\") pod \"goldmane-7988f88666-zx8vp\" (UID: \"86879210-53e1-4a0a-87e7-2bb62916a082\") " pod="calico-system/goldmane-7988f88666-zx8vp" Sep 13 00:54:29.111380 kubelet[2107]: I0913 00:54:29.111367 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7b7k\" (UniqueName: \"kubernetes.io/projected/86879210-53e1-4a0a-87e7-2bb62916a082-kube-api-access-l7b7k\") pod \"goldmane-7988f88666-zx8vp\" (UID: \"86879210-53e1-4a0a-87e7-2bb62916a082\") " pod="calico-system/goldmane-7988f88666-zx8vp" Sep 13 00:54:29.111511 kubelet[2107]: I0913 00:54:29.111497 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c733dfc1-fe7d-49df-84e8-9292d570b93c-config-volume\") pod \"coredns-7c65d6cfc9-rmqpc\" (UID: \"c733dfc1-fe7d-49df-84e8-9292d570b93c\") " pod="kube-system/coredns-7c65d6cfc9-rmqpc" Sep 13 00:54:29.111620 kubelet[2107]: I0913 00:54:29.111606 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/86879210-53e1-4a0a-87e7-2bb62916a082-goldmane-key-pair\") pod \"goldmane-7988f88666-zx8vp\" (UID: \"86879210-53e1-4a0a-87e7-2bb62916a082\") " pod="calico-system/goldmane-7988f88666-zx8vp" Sep 13 00:54:29.111742 kubelet[2107]: I0913 00:54:29.111728 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8f31ef69-711d-48a0-989a-767140cb31a7-whisker-backend-key-pair\") pod \"whisker-64f68966d7-rz62c\" (UID: \"8f31ef69-711d-48a0-989a-767140cb31a7\") " pod="calico-system/whisker-64f68966d7-rz62c" Sep 13 00:54:29.111849 kubelet[2107]: I0913 00:54:29.111824 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49bxz\" (UniqueName: \"kubernetes.io/projected/c733dfc1-fe7d-49df-84e8-9292d570b93c-kube-api-access-49bxz\") pod \"coredns-7c65d6cfc9-rmqpc\" (UID: \"c733dfc1-fe7d-49df-84e8-9292d570b93c\") " pod="kube-system/coredns-7c65d6cfc9-rmqpc" Sep 13 00:54:29.287864 kubelet[2107]: E0913 00:54:29.287828 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:29.289868 env[1311]: time="2025-09-13T00:54:29.288851861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64dcf69d7d-d9zgr,Uid:e2eac8c3-d7f4-4255-85a6-44ee22635692,Namespace:calico-system,Attempt:0,}" Sep 13 
00:54:29.290887 env[1311]: time="2025-09-13T00:54:29.290616033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-b7mc7,Uid:4a90c3d8-bff3-4795-ac7c-5bfe09cf7345,Namespace:kube-system,Attempt:0,}" Sep 13 00:54:29.314638 kubelet[2107]: E0913 00:54:29.314591 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:29.317333 env[1311]: time="2025-09-13T00:54:29.316495514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rmqpc,Uid:c733dfc1-fe7d-49df-84e8-9292d570b93c,Namespace:kube-system,Attempt:0,}" Sep 13 00:54:29.325297 env[1311]: time="2025-09-13T00:54:29.325257877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-zx8vp,Uid:86879210-53e1-4a0a-87e7-2bb62916a082,Namespace:calico-system,Attempt:0,}" Sep 13 00:54:29.326180 env[1311]: time="2025-09-13T00:54:29.326149997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64f68966d7-rz62c,Uid:8f31ef69-711d-48a0-989a-767140cb31a7,Namespace:calico-system,Attempt:0,}" Sep 13 00:54:29.526827 env[1311]: time="2025-09-13T00:54:29.526705215Z" level=error msg="Failed to destroy network for sandbox \"17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:29.527738 env[1311]: time="2025-09-13T00:54:29.527696391Z" level=error msg="encountered an error cleaning up failed sandbox \"17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 
13 00:54:29.528166 env[1311]: time="2025-09-13T00:54:29.528128021Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64dcf69d7d-d9zgr,Uid:e2eac8c3-d7f4-4255-85a6-44ee22635692,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:29.528836 kubelet[2107]: E0913 00:54:29.528541 2107 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:29.528836 kubelet[2107]: E0913 00:54:29.528622 2107 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64dcf69d7d-d9zgr" Sep 13 00:54:29.528836 kubelet[2107]: E0913 00:54:29.528650 2107 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-64dcf69d7d-d9zgr" Sep 13 00:54:29.529116 kubelet[2107]: E0913 00:54:29.528702 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64dcf69d7d-d9zgr_calico-system(e2eac8c3-d7f4-4255-85a6-44ee22635692)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64dcf69d7d-d9zgr_calico-system(e2eac8c3-d7f4-4255-85a6-44ee22635692)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64dcf69d7d-d9zgr" podUID="e2eac8c3-d7f4-4255-85a6-44ee22635692" Sep 13 00:54:29.553646 env[1311]: time="2025-09-13T00:54:29.553541325Z" level=error msg="Failed to destroy network for sandbox \"dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:29.554221 env[1311]: time="2025-09-13T00:54:29.554177621Z" level=error msg="encountered an error cleaning up failed sandbox \"dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:29.554437 env[1311]: time="2025-09-13T00:54:29.554406773Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-b7mc7,Uid:4a90c3d8-bff3-4795-ac7c-5bfe09cf7345,Namespace:kube-system,Attempt:0,} failed, error" error="failed to 
setup network for sandbox \"dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:29.555973 kubelet[2107]: E0913 00:54:29.554721 2107 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:29.555973 kubelet[2107]: E0913 00:54:29.554784 2107 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-b7mc7" Sep 13 00:54:29.555973 kubelet[2107]: E0913 00:54:29.554811 2107 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-b7mc7" Sep 13 00:54:29.556227 kubelet[2107]: E0913 00:54:29.554850 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-b7mc7_kube-system(4a90c3d8-bff3-4795-ac7c-5bfe09cf7345)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7c65d6cfc9-b7mc7_kube-system(4a90c3d8-bff3-4795-ac7c-5bfe09cf7345)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-b7mc7" podUID="4a90c3d8-bff3-4795-ac7c-5bfe09cf7345" Sep 13 00:54:29.565993 env[1311]: time="2025-09-13T00:54:29.565929745Z" level=error msg="Failed to destroy network for sandbox \"d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:29.566329 env[1311]: time="2025-09-13T00:54:29.566296688Z" level=error msg="encountered an error cleaning up failed sandbox \"d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:29.566427 env[1311]: time="2025-09-13T00:54:29.566351218Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rmqpc,Uid:c733dfc1-fe7d-49df-84e8-9292d570b93c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:29.567453 kubelet[2107]: E0913 00:54:29.566590 2107 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:29.567453 kubelet[2107]: E0913 00:54:29.566690 2107 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rmqpc" Sep 13 00:54:29.567453 kubelet[2107]: E0913 00:54:29.566712 2107 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rmqpc" Sep 13 00:54:29.567670 kubelet[2107]: E0913 00:54:29.566767 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-rmqpc_kube-system(c733dfc1-fe7d-49df-84e8-9292d570b93c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-rmqpc_kube-system(c733dfc1-fe7d-49df-84e8-9292d570b93c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-7c65d6cfc9-rmqpc" podUID="c733dfc1-fe7d-49df-84e8-9292d570b93c" Sep 13 00:54:29.576162 env[1311]: time="2025-09-13T00:54:29.576101527Z" level=error msg="Failed to destroy network for sandbox \"345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:29.577322 env[1311]: time="2025-09-13T00:54:29.577271006Z" level=error msg="encountered an error cleaning up failed sandbox \"345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:29.578730 env[1311]: time="2025-09-13T00:54:29.577544517Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-zx8vp,Uid:86879210-53e1-4a0a-87e7-2bb62916a082,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:29.579051 kubelet[2107]: E0913 00:54:29.579014 2107 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:29.579623 kubelet[2107]: E0913 00:54:29.579070 2107 kuberuntime_sandbox.go:72] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-zx8vp" Sep 13 00:54:29.579623 kubelet[2107]: E0913 00:54:29.579091 2107 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-zx8vp" Sep 13 00:54:29.579623 kubelet[2107]: E0913 00:54:29.579136 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-zx8vp_calico-system(86879210-53e1-4a0a-87e7-2bb62916a082)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-zx8vp_calico-system(86879210-53e1-4a0a-87e7-2bb62916a082)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-zx8vp" podUID="86879210-53e1-4a0a-87e7-2bb62916a082" Sep 13 00:54:29.581776 env[1311]: time="2025-09-13T00:54:29.581727117Z" level=error msg="Failed to destroy network for sandbox \"859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:29.582262 env[1311]: time="2025-09-13T00:54:29.582223140Z" level=error msg="encountered an error cleaning up failed sandbox \"859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:29.582428 env[1311]: time="2025-09-13T00:54:29.582384572Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64f68966d7-rz62c,Uid:8f31ef69-711d-48a0-989a-767140cb31a7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:29.583162 kubelet[2107]: E0913 00:54:29.582704 2107 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:29.583162 kubelet[2107]: E0913 00:54:29.582756 2107 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64f68966d7-rz62c" Sep 13 
00:54:29.583162 kubelet[2107]: E0913 00:54:29.582784 2107 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64f68966d7-rz62c" Sep 13 00:54:29.583324 kubelet[2107]: E0913 00:54:29.582831 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-64f68966d7-rz62c_calico-system(8f31ef69-711d-48a0-989a-767140cb31a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-64f68966d7-rz62c_calico-system(8f31ef69-711d-48a0-989a-767140cb31a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-64f68966d7-rz62c" podUID="8f31ef69-711d-48a0-989a-767140cb31a7" Sep 13 00:54:29.697125 kubelet[2107]: I0913 00:54:29.696846 2107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" Sep 13 00:54:29.698237 env[1311]: time="2025-09-13T00:54:29.698177289Z" level=info msg="StopPodSandbox for \"17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da\"" Sep 13 00:54:29.700120 kubelet[2107]: I0913 00:54:29.699447 2107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" Sep 13 00:54:29.700288 env[1311]: time="2025-09-13T00:54:29.700229818Z" level=info msg="StopPodSandbox 
for \"d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb\"" Sep 13 00:54:29.708572 kubelet[2107]: I0913 00:54:29.708459 2107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" Sep 13 00:54:29.710456 env[1311]: time="2025-09-13T00:54:29.710413443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:54:29.712123 env[1311]: time="2025-09-13T00:54:29.712084406Z" level=info msg="StopPodSandbox for \"dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0\"" Sep 13 00:54:29.723732 kubelet[2107]: I0913 00:54:29.722630 2107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" Sep 13 00:54:29.724936 env[1311]: time="2025-09-13T00:54:29.724894886Z" level=info msg="StopPodSandbox for \"345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd\"" Sep 13 00:54:29.731557 kubelet[2107]: I0913 00:54:29.730564 2107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" Sep 13 00:54:29.731948 env[1311]: time="2025-09-13T00:54:29.731913619Z" level=info msg="StopPodSandbox for \"859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03\"" Sep 13 00:54:29.814454 env[1311]: time="2025-09-13T00:54:29.812047168Z" level=error msg="StopPodSandbox for \"d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb\" failed" error="failed to destroy network for sandbox \"d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:29.815285 kubelet[2107]: E0913 00:54:29.814990 2107 log.go:32] "StopPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to destroy network for sandbox \"d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" Sep 13 00:54:29.815285 kubelet[2107]: E0913 00:54:29.815071 2107 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb"} Sep 13 00:54:29.815285 kubelet[2107]: E0913 00:54:29.815172 2107 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c733dfc1-fe7d-49df-84e8-9292d570b93c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:54:29.815285 kubelet[2107]: E0913 00:54:29.815207 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c733dfc1-fe7d-49df-84e8-9292d570b93c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rmqpc" podUID="c733dfc1-fe7d-49df-84e8-9292d570b93c" Sep 13 00:54:29.844856 env[1311]: time="2025-09-13T00:54:29.844780416Z" level=error msg="StopPodSandbox for 
\"859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03\" failed" error="failed to destroy network for sandbox \"859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:29.845331 env[1311]: time="2025-09-13T00:54:29.845134056Z" level=error msg="StopPodSandbox for \"345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd\" failed" error="failed to destroy network for sandbox \"345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:29.845915 kubelet[2107]: E0913 00:54:29.845681 2107 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" Sep 13 00:54:29.845915 kubelet[2107]: E0913 00:54:29.845747 2107 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03"} Sep 13 00:54:29.845915 kubelet[2107]: E0913 00:54:29.845795 2107 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8f31ef69-711d-48a0-989a-767140cb31a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:54:29.845915 kubelet[2107]: E0913 00:54:29.845827 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8f31ef69-711d-48a0-989a-767140cb31a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-64f68966d7-rz62c" podUID="8f31ef69-711d-48a0-989a-767140cb31a7" Sep 13 00:54:29.846785 kubelet[2107]: E0913 00:54:29.846615 2107 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" Sep 13 00:54:29.846785 kubelet[2107]: E0913 00:54:29.846667 2107 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd"} Sep 13 00:54:29.846785 kubelet[2107]: E0913 00:54:29.846709 2107 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"86879210-53e1-4a0a-87e7-2bb62916a082\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd\\\": plugin type=\\\"calico\\\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:54:29.846785 kubelet[2107]: E0913 00:54:29.846746 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"86879210-53e1-4a0a-87e7-2bb62916a082\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-zx8vp" podUID="86879210-53e1-4a0a-87e7-2bb62916a082" Sep 13 00:54:29.850160 env[1311]: time="2025-09-13T00:54:29.850091836Z" level=error msg="StopPodSandbox for \"17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da\" failed" error="failed to destroy network for sandbox \"17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:29.850836 kubelet[2107]: E0913 00:54:29.850606 2107 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" Sep 13 00:54:29.850836 kubelet[2107]: E0913 00:54:29.850674 2107 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da"} Sep 13 00:54:29.850836 kubelet[2107]: E0913 00:54:29.850728 2107 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e2eac8c3-d7f4-4255-85a6-44ee22635692\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:54:29.850836 kubelet[2107]: E0913 00:54:29.850761 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e2eac8c3-d7f4-4255-85a6-44ee22635692\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64dcf69d7d-d9zgr" podUID="e2eac8c3-d7f4-4255-85a6-44ee22635692" Sep 13 00:54:29.851438 env[1311]: time="2025-09-13T00:54:29.851347632Z" level=error msg="StopPodSandbox for \"dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0\" failed" error="failed to destroy network for sandbox \"dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:29.851978 kubelet[2107]: E0913 00:54:29.851797 2107 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for 
sandbox \"dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" Sep 13 00:54:29.851978 kubelet[2107]: E0913 00:54:29.851856 2107 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0"} Sep 13 00:54:29.851978 kubelet[2107]: E0913 00:54:29.851902 2107 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4a90c3d8-bff3-4795-ac7c-5bfe09cf7345\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:54:29.851978 kubelet[2107]: E0913 00:54:29.851933 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4a90c3d8-bff3-4795-ac7c-5bfe09cf7345\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-b7mc7" podUID="4a90c3d8-bff3-4795-ac7c-5bfe09cf7345" Sep 13 00:54:30.126792 kubelet[2107]: E0913 00:54:30.126723 2107 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 13 
00:54:30.127121 kubelet[2107]: E0913 00:54:30.127096 2107 projected.go:194] Error preparing data for projected volume kube-api-access-m76rd for pod calico-apiserver/calico-apiserver-86766b5d6c-z4fvv: failed to sync configmap cache: timed out waiting for the condition Sep 13 00:54:30.127359 kubelet[2107]: E0913 00:54:30.127336 2107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cec0420a-0ebf-4565-8d09-fd0c2c488b56-kube-api-access-m76rd podName:cec0420a-0ebf-4565-8d09-fd0c2c488b56 nodeName:}" failed. No retries permitted until 2025-09-13 00:54:30.627297555 +0000 UTC m=+27.311436039 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-m76rd" (UniqueName: "kubernetes.io/projected/cec0420a-0ebf-4565-8d09-fd0c2c488b56-kube-api-access-m76rd") pod "calico-apiserver-86766b5d6c-z4fvv" (UID: "cec0420a-0ebf-4565-8d09-fd0c2c488b56") : failed to sync configmap cache: timed out waiting for the condition Sep 13 00:54:30.231580 kubelet[2107]: E0913 00:54:30.231513 2107 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 13 00:54:30.231855 kubelet[2107]: E0913 00:54:30.231828 2107 projected.go:194] Error preparing data for projected volume kube-api-access-mdzhz for pod calico-apiserver/calico-apiserver-86766b5d6c-6f24s: failed to sync configmap cache: timed out waiting for the condition Sep 13 00:54:30.232071 kubelet[2107]: E0913 00:54:30.232049 2107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8833f507-515d-400e-9991-59b6f2cca14f-kube-api-access-mdzhz podName:8833f507-515d-400e-9991-59b6f2cca14f nodeName:}" failed. No retries permitted until 2025-09-13 00:54:30.732022165 +0000 UTC m=+27.416160663 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-mdzhz" (UniqueName: "kubernetes.io/projected/8833f507-515d-400e-9991-59b6f2cca14f-kube-api-access-mdzhz") pod "calico-apiserver-86766b5d6c-6f24s" (UID: "8833f507-515d-400e-9991-59b6f2cca14f") : failed to sync configmap cache: timed out waiting for the condition Sep 13 00:54:30.579646 env[1311]: time="2025-09-13T00:54:30.579011560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4d8z6,Uid:5f057288-90ee-4889-a341-9af038f7cf7a,Namespace:calico-system,Attempt:0,}" Sep 13 00:54:30.654882 env[1311]: time="2025-09-13T00:54:30.654805318Z" level=error msg="Failed to destroy network for sandbox \"49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:30.657817 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e-shm.mount: Deactivated successfully. 
Sep 13 00:54:30.659373 env[1311]: time="2025-09-13T00:54:30.659311950Z" level=error msg="encountered an error cleaning up failed sandbox \"49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:30.659597 env[1311]: time="2025-09-13T00:54:30.659559040Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4d8z6,Uid:5f057288-90ee-4889-a341-9af038f7cf7a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:30.660001 kubelet[2107]: E0913 00:54:30.659948 2107 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:30.660298 kubelet[2107]: E0913 00:54:30.660028 2107 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4d8z6" Sep 13 00:54:30.660298 kubelet[2107]: E0913 00:54:30.660061 2107 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4d8z6" Sep 13 00:54:30.660298 kubelet[2107]: E0913 00:54:30.660110 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4d8z6_calico-system(5f057288-90ee-4889-a341-9af038f7cf7a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4d8z6_calico-system(5f057288-90ee-4889-a341-9af038f7cf7a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4d8z6" podUID="5f057288-90ee-4889-a341-9af038f7cf7a" Sep 13 00:54:30.737438 kubelet[2107]: I0913 00:54:30.737080 2107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" Sep 13 00:54:30.738571 env[1311]: time="2025-09-13T00:54:30.738521274Z" level=info msg="StopPodSandbox for \"49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e\"" Sep 13 00:54:30.771094 env[1311]: time="2025-09-13T00:54:30.771026537Z" level=error msg="StopPodSandbox for \"49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e\" failed" error="failed to destroy network for sandbox \"49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:30.771730 kubelet[2107]: E0913 00:54:30.771467 2107 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" Sep 13 00:54:30.771730 kubelet[2107]: E0913 00:54:30.771582 2107 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e"} Sep 13 00:54:30.771730 kubelet[2107]: E0913 00:54:30.771654 2107 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5f057288-90ee-4889-a341-9af038f7cf7a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:54:30.771730 kubelet[2107]: E0913 00:54:30.771679 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5f057288-90ee-4889-a341-9af038f7cf7a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4d8z6" 
podUID="5f057288-90ee-4889-a341-9af038f7cf7a" Sep 13 00:54:30.787526 env[1311]: time="2025-09-13T00:54:30.787484795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86766b5d6c-z4fvv,Uid:cec0420a-0ebf-4565-8d09-fd0c2c488b56,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:54:30.855753 env[1311]: time="2025-09-13T00:54:30.855608204Z" level=error msg="Failed to destroy network for sandbox \"9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:30.856658 env[1311]: time="2025-09-13T00:54:30.856613570Z" level=error msg="encountered an error cleaning up failed sandbox \"9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:30.856824 env[1311]: time="2025-09-13T00:54:30.856687682Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86766b5d6c-z4fvv,Uid:cec0420a-0ebf-4565-8d09-fd0c2c488b56,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:30.857016 kubelet[2107]: E0913 00:54:30.856983 2107 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:30.857093 kubelet[2107]: E0913 00:54:30.857041 2107 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86766b5d6c-z4fvv" Sep 13 00:54:30.857093 kubelet[2107]: E0913 00:54:30.857064 2107 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86766b5d6c-z4fvv" Sep 13 00:54:30.857166 kubelet[2107]: E0913 00:54:30.857108 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86766b5d6c-z4fvv_calico-apiserver(cec0420a-0ebf-4565-8d09-fd0c2c488b56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86766b5d6c-z4fvv_calico-apiserver(cec0420a-0ebf-4565-8d09-fd0c2c488b56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86766b5d6c-z4fvv" podUID="cec0420a-0ebf-4565-8d09-fd0c2c488b56" Sep 13 00:54:31.129224 env[1311]: 
time="2025-09-13T00:54:31.129168114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86766b5d6c-6f24s,Uid:8833f507-515d-400e-9991-59b6f2cca14f,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:54:31.266646 env[1311]: time="2025-09-13T00:54:31.266587559Z" level=error msg="Failed to destroy network for sandbox \"d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:31.269695 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9-shm.mount: Deactivated successfully. Sep 13 00:54:31.271091 env[1311]: time="2025-09-13T00:54:31.271037247Z" level=error msg="encountered an error cleaning up failed sandbox \"d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:31.271309 env[1311]: time="2025-09-13T00:54:31.271274512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86766b5d6c-6f24s,Uid:8833f507-515d-400e-9991-59b6f2cca14f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:31.271728 kubelet[2107]: E0913 00:54:31.271686 2107 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:31.271865 kubelet[2107]: E0913 00:54:31.271753 2107 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86766b5d6c-6f24s" Sep 13 00:54:31.271865 kubelet[2107]: E0913 00:54:31.271780 2107 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86766b5d6c-6f24s" Sep 13 00:54:31.271865 kubelet[2107]: E0913 00:54:31.271823 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86766b5d6c-6f24s_calico-apiserver(8833f507-515d-400e-9991-59b6f2cca14f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86766b5d6c-6f24s_calico-apiserver(8833f507-515d-400e-9991-59b6f2cca14f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-86766b5d6c-6f24s" podUID="8833f507-515d-400e-9991-59b6f2cca14f" Sep 13 00:54:31.740308 kubelet[2107]: I0913 00:54:31.740234 2107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" Sep 13 00:54:31.743027 env[1311]: time="2025-09-13T00:54:31.742994221Z" level=info msg="StopPodSandbox for \"d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9\"" Sep 13 00:54:31.760904 kubelet[2107]: I0913 00:54:31.760869 2107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" Sep 13 00:54:31.761903 env[1311]: time="2025-09-13T00:54:31.761823374Z" level=info msg="StopPodSandbox for \"9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161\"" Sep 13 00:54:31.829248 env[1311]: time="2025-09-13T00:54:31.829177614Z" level=error msg="StopPodSandbox for \"9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161\" failed" error="failed to destroy network for sandbox \"9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:31.829515 kubelet[2107]: E0913 00:54:31.829465 2107 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" Sep 13 00:54:31.829638 kubelet[2107]: E0913 00:54:31.829530 2107 kuberuntime_manager.go:1479] "Failed to 
stop sandbox" podSandboxID={"Type":"containerd","ID":"9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161"} Sep 13 00:54:31.829638 kubelet[2107]: E0913 00:54:31.829580 2107 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cec0420a-0ebf-4565-8d09-fd0c2c488b56\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:54:31.829638 kubelet[2107]: E0913 00:54:31.829605 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cec0420a-0ebf-4565-8d09-fd0c2c488b56\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86766b5d6c-z4fvv" podUID="cec0420a-0ebf-4565-8d09-fd0c2c488b56" Sep 13 00:54:31.834493 env[1311]: time="2025-09-13T00:54:31.834418246Z" level=error msg="StopPodSandbox for \"d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9\" failed" error="failed to destroy network for sandbox \"d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:54:31.836321 kubelet[2107]: E0913 00:54:31.836022 2107 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy 
network for sandbox \"d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" Sep 13 00:54:31.836321 kubelet[2107]: E0913 00:54:31.836072 2107 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9"} Sep 13 00:54:31.836321 kubelet[2107]: E0913 00:54:31.836106 2107 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8833f507-515d-400e-9991-59b6f2cca14f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:54:31.837585 kubelet[2107]: E0913 00:54:31.836853 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8833f507-515d-400e-9991-59b6f2cca14f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86766b5d6c-6f24s" podUID="8833f507-515d-400e-9991-59b6f2cca14f" Sep 13 00:54:36.401195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1439972674.mount: Deactivated successfully. 
Sep 13 00:54:36.429022 env[1311]: time="2025-09-13T00:54:36.428944448Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:36.430476 env[1311]: time="2025-09-13T00:54:36.430439670Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:36.432224 env[1311]: time="2025-09-13T00:54:36.432173674Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:36.433913 env[1311]: time="2025-09-13T00:54:36.433868044Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:36.434747 env[1311]: time="2025-09-13T00:54:36.434683877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:54:36.478651 env[1311]: time="2025-09-13T00:54:36.478601973Z" level=info msg="CreateContainer within sandbox \"f849ed86883a75819fe737ca6fbd7e1c99bbc29bebeb5a33138951ec05a5e664\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:54:36.495152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3464918138.mount: Deactivated successfully. 
Sep 13 00:54:36.501240 env[1311]: time="2025-09-13T00:54:36.501107279Z" level=info msg="CreateContainer within sandbox \"f849ed86883a75819fe737ca6fbd7e1c99bbc29bebeb5a33138951ec05a5e664\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0d7d231652388582a5043b91f1063acf3a7758f8977bf0d6e4b109a9e5eaa68b\"" Sep 13 00:54:36.503835 env[1311]: time="2025-09-13T00:54:36.503725656Z" level=info msg="StartContainer for \"0d7d231652388582a5043b91f1063acf3a7758f8977bf0d6e4b109a9e5eaa68b\"" Sep 13 00:54:36.585312 env[1311]: time="2025-09-13T00:54:36.583957009Z" level=info msg="StartContainer for \"0d7d231652388582a5043b91f1063acf3a7758f8977bf0d6e4b109a9e5eaa68b\" returns successfully" Sep 13 00:54:36.848137 kubelet[2107]: I0913 00:54:36.846129 2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pnpwg" podStartSLOduration=1.5225224480000001 podStartE2EDuration="16.841324838s" podCreationTimestamp="2025-09-13 00:54:20 +0000 UTC" firstStartedPulling="2025-09-13 00:54:21.117199192 +0000 UTC m=+17.801337655" lastFinishedPulling="2025-09-13 00:54:36.436001569 +0000 UTC m=+33.120140045" observedRunningTime="2025-09-13 00:54:36.828079034 +0000 UTC m=+33.512217520" watchObservedRunningTime="2025-09-13 00:54:36.841324838 +0000 UTC m=+33.525463322" Sep 13 00:54:36.921880 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:54:36.922914 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 13 00:54:37.169740 env[1311]: time="2025-09-13T00:54:37.169697324Z" level=info msg="StopPodSandbox for \"859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03\"" Sep 13 00:54:37.439974 systemd[1]: run-containerd-runc-k8s.io-0d7d231652388582a5043b91f1063acf3a7758f8977bf0d6e4b109a9e5eaa68b-runc.ZqnKRO.mount: Deactivated successfully. 
Sep 13 00:54:37.540179 env[1311]: 2025-09-13 00:54:37.330 [INFO][3274] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" Sep 13 00:54:37.540179 env[1311]: 2025-09-13 00:54:37.331 [INFO][3274] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" iface="eth0" netns="/var/run/netns/cni-d6ee74c6-d931-ded3-d421-72c5861ace29" Sep 13 00:54:37.540179 env[1311]: 2025-09-13 00:54:37.331 [INFO][3274] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" iface="eth0" netns="/var/run/netns/cni-d6ee74c6-d931-ded3-d421-72c5861ace29" Sep 13 00:54:37.540179 env[1311]: 2025-09-13 00:54:37.332 [INFO][3274] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" iface="eth0" netns="/var/run/netns/cni-d6ee74c6-d931-ded3-d421-72c5861ace29" Sep 13 00:54:37.540179 env[1311]: 2025-09-13 00:54:37.332 [INFO][3274] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" Sep 13 00:54:37.540179 env[1311]: 2025-09-13 00:54:37.332 [INFO][3274] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" Sep 13 00:54:37.540179 env[1311]: 2025-09-13 00:54:37.520 [INFO][3282] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" HandleID="k8s-pod-network.859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" Workload="ci--3510.3.8--n--b7c626372f-k8s-whisker--64f68966d7--rz62c-eth0" Sep 13 00:54:37.540179 env[1311]: 2025-09-13 00:54:37.522 [INFO][3282] ipam/ipam_plugin.go 353: About to acquire host-wide 
IPAM lock. Sep 13 00:54:37.540179 env[1311]: 2025-09-13 00:54:37.522 [INFO][3282] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:37.540179 env[1311]: 2025-09-13 00:54:37.533 [WARNING][3282] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" HandleID="k8s-pod-network.859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" Workload="ci--3510.3.8--n--b7c626372f-k8s-whisker--64f68966d7--rz62c-eth0" Sep 13 00:54:37.540179 env[1311]: 2025-09-13 00:54:37.533 [INFO][3282] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" HandleID="k8s-pod-network.859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" Workload="ci--3510.3.8--n--b7c626372f-k8s-whisker--64f68966d7--rz62c-eth0" Sep 13 00:54:37.540179 env[1311]: 2025-09-13 00:54:37.535 [INFO][3282] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:37.540179 env[1311]: 2025-09-13 00:54:37.538 [INFO][3274] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" Sep 13 00:54:37.545812 env[1311]: time="2025-09-13T00:54:37.544149947Z" level=info msg="TearDown network for sandbox \"859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03\" successfully" Sep 13 00:54:37.545812 env[1311]: time="2025-09-13T00:54:37.544196605Z" level=info msg="StopPodSandbox for \"859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03\" returns successfully" Sep 13 00:54:37.543224 systemd[1]: run-netns-cni\x2dd6ee74c6\x2dd931\x2dded3\x2dd421\x2d72c5861ace29.mount: Deactivated successfully. 
Sep 13 00:54:37.583052 kubelet[2107]: I0913 00:54:37.583005 2107 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8f31ef69-711d-48a0-989a-767140cb31a7-whisker-backend-key-pair\") pod \"8f31ef69-711d-48a0-989a-767140cb31a7\" (UID: \"8f31ef69-711d-48a0-989a-767140cb31a7\") " Sep 13 00:54:37.583286 kubelet[2107]: I0913 00:54:37.583090 2107 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f31ef69-711d-48a0-989a-767140cb31a7-whisker-ca-bundle\") pod \"8f31ef69-711d-48a0-989a-767140cb31a7\" (UID: \"8f31ef69-711d-48a0-989a-767140cb31a7\") " Sep 13 00:54:37.583286 kubelet[2107]: I0913 00:54:37.583111 2107 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9682k\" (UniqueName: \"kubernetes.io/projected/8f31ef69-711d-48a0-989a-767140cb31a7-kube-api-access-9682k\") pod \"8f31ef69-711d-48a0-989a-767140cb31a7\" (UID: \"8f31ef69-711d-48a0-989a-767140cb31a7\") " Sep 13 00:54:37.585864 kubelet[2107]: I0913 00:54:37.584771 2107 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f31ef69-711d-48a0-989a-767140cb31a7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8f31ef69-711d-48a0-989a-767140cb31a7" (UID: "8f31ef69-711d-48a0-989a-767140cb31a7"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:54:37.592064 kubelet[2107]: I0913 00:54:37.591676 2107 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f31ef69-711d-48a0-989a-767140cb31a7-kube-api-access-9682k" (OuterVolumeSpecName: "kube-api-access-9682k") pod "8f31ef69-711d-48a0-989a-767140cb31a7" (UID: "8f31ef69-711d-48a0-989a-767140cb31a7"). InnerVolumeSpecName "kube-api-access-9682k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:54:37.596126 kubelet[2107]: I0913 00:54:37.595951 2107 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f31ef69-711d-48a0-989a-767140cb31a7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8f31ef69-711d-48a0-989a-767140cb31a7" (UID: "8f31ef69-711d-48a0-989a-767140cb31a7"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:54:37.688535 kubelet[2107]: I0913 00:54:37.688487 2107 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8f31ef69-711d-48a0-989a-767140cb31a7-whisker-backend-key-pair\") on node \"ci-3510.3.8-n-b7c626372f\" DevicePath \"\"" Sep 13 00:54:37.688535 kubelet[2107]: I0913 00:54:37.688531 2107 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9682k\" (UniqueName: \"kubernetes.io/projected/8f31ef69-711d-48a0-989a-767140cb31a7-kube-api-access-9682k\") on node \"ci-3510.3.8-n-b7c626372f\" DevicePath \"\"" Sep 13 00:54:37.688535 kubelet[2107]: I0913 00:54:37.688547 2107 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f31ef69-711d-48a0-989a-767140cb31a7-whisker-ca-bundle\") on node \"ci-3510.3.8-n-b7c626372f\" DevicePath \"\"" Sep 13 00:54:37.891710 kubelet[2107]: I0913 00:54:37.891649 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2l4p\" (UniqueName: \"kubernetes.io/projected/c07e111b-07db-4c84-a43b-823904516c61-kube-api-access-x2l4p\") pod \"whisker-76fdf687db-wbs6w\" (UID: \"c07e111b-07db-4c84-a43b-823904516c61\") " pod="calico-system/whisker-76fdf687db-wbs6w" Sep 13 00:54:37.892144 kubelet[2107]: I0913 00:54:37.891738 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c07e111b-07db-4c84-a43b-823904516c61-whisker-ca-bundle\") pod \"whisker-76fdf687db-wbs6w\" (UID: \"c07e111b-07db-4c84-a43b-823904516c61\") " pod="calico-system/whisker-76fdf687db-wbs6w" Sep 13 00:54:37.892144 kubelet[2107]: I0913 00:54:37.891782 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c07e111b-07db-4c84-a43b-823904516c61-whisker-backend-key-pair\") pod \"whisker-76fdf687db-wbs6w\" (UID: \"c07e111b-07db-4c84-a43b-823904516c61\") " pod="calico-system/whisker-76fdf687db-wbs6w" Sep 13 00:54:38.174859 env[1311]: time="2025-09-13T00:54:38.174548201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76fdf687db-wbs6w,Uid:c07e111b-07db-4c84-a43b-823904516c61,Namespace:calico-system,Attempt:0,}" Sep 13 00:54:38.349254 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:54:38.349615 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali62a4e152145: link becomes ready Sep 13 00:54:38.353015 systemd-networkd[1061]: cali62a4e152145: Link UP Sep 13 00:54:38.353241 systemd-networkd[1061]: cali62a4e152145: Gained carrier Sep 13 00:54:38.383456 env[1311]: 2025-09-13 00:54:38.217 [INFO][3369] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:54:38.383456 env[1311]: 2025-09-13 00:54:38.236 [INFO][3369] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--b7c626372f-k8s-whisker--76fdf687db--wbs6w-eth0 whisker-76fdf687db- calico-system c07e111b-07db-4c84-a43b-823904516c61 940 0 2025-09-13 00:54:37 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:76fdf687db projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-3510.3.8-n-b7c626372f whisker-76fdf687db-wbs6w eth0 
whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali62a4e152145 [] [] }} ContainerID="386645597e51da8d4834d3bf6470f05ffc673dcf48636f24f52fcae3b1453129" Namespace="calico-system" Pod="whisker-76fdf687db-wbs6w" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-whisker--76fdf687db--wbs6w-" Sep 13 00:54:38.383456 env[1311]: 2025-09-13 00:54:38.236 [INFO][3369] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="386645597e51da8d4834d3bf6470f05ffc673dcf48636f24f52fcae3b1453129" Namespace="calico-system" Pod="whisker-76fdf687db-wbs6w" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-whisker--76fdf687db--wbs6w-eth0" Sep 13 00:54:38.383456 env[1311]: 2025-09-13 00:54:38.272 [INFO][3381] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="386645597e51da8d4834d3bf6470f05ffc673dcf48636f24f52fcae3b1453129" HandleID="k8s-pod-network.386645597e51da8d4834d3bf6470f05ffc673dcf48636f24f52fcae3b1453129" Workload="ci--3510.3.8--n--b7c626372f-k8s-whisker--76fdf687db--wbs6w-eth0" Sep 13 00:54:38.383456 env[1311]: 2025-09-13 00:54:38.274 [INFO][3381] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="386645597e51da8d4834d3bf6470f05ffc673dcf48636f24f52fcae3b1453129" HandleID="k8s-pod-network.386645597e51da8d4834d3bf6470f05ffc673dcf48636f24f52fcae3b1453129" Workload="ci--3510.3.8--n--b7c626372f-k8s-whisker--76fdf687db--wbs6w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d50c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-b7c626372f", "pod":"whisker-76fdf687db-wbs6w", "timestamp":"2025-09-13 00:54:38.272578878 +0000 UTC"}, Hostname:"ci-3510.3.8-n-b7c626372f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:54:38.383456 env[1311]: 2025-09-13 00:54:38.274 [INFO][3381] ipam/ipam_plugin.go 353: About to 
acquire host-wide IPAM lock. Sep 13 00:54:38.383456 env[1311]: 2025-09-13 00:54:38.274 [INFO][3381] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:38.383456 env[1311]: 2025-09-13 00:54:38.274 [INFO][3381] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-b7c626372f' Sep 13 00:54:38.383456 env[1311]: 2025-09-13 00:54:38.287 [INFO][3381] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.386645597e51da8d4834d3bf6470f05ffc673dcf48636f24f52fcae3b1453129" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:38.383456 env[1311]: 2025-09-13 00:54:38.298 [INFO][3381] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:38.383456 env[1311]: 2025-09-13 00:54:38.304 [INFO][3381] ipam/ipam.go 511: Trying affinity for 192.168.56.0/26 host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:38.383456 env[1311]: 2025-09-13 00:54:38.307 [INFO][3381] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.0/26 host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:38.383456 env[1311]: 2025-09-13 00:54:38.310 [INFO][3381] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.0/26 host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:38.383456 env[1311]: 2025-09-13 00:54:38.310 [INFO][3381] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.56.0/26 handle="k8s-pod-network.386645597e51da8d4834d3bf6470f05ffc673dcf48636f24f52fcae3b1453129" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:38.383456 env[1311]: 2025-09-13 00:54:38.312 [INFO][3381] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.386645597e51da8d4834d3bf6470f05ffc673dcf48636f24f52fcae3b1453129 Sep 13 00:54:38.383456 env[1311]: 2025-09-13 00:54:38.318 [INFO][3381] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.56.0/26 handle="k8s-pod-network.386645597e51da8d4834d3bf6470f05ffc673dcf48636f24f52fcae3b1453129" host="ci-3510.3.8-n-b7c626372f" Sep 13 
00:54:38.383456 env[1311]: 2025-09-13 00:54:38.326 [INFO][3381] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.56.1/26] block=192.168.56.0/26 handle="k8s-pod-network.386645597e51da8d4834d3bf6470f05ffc673dcf48636f24f52fcae3b1453129" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:38.383456 env[1311]: 2025-09-13 00:54:38.326 [INFO][3381] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.1/26] handle="k8s-pod-network.386645597e51da8d4834d3bf6470f05ffc673dcf48636f24f52fcae3b1453129" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:38.383456 env[1311]: 2025-09-13 00:54:38.326 [INFO][3381] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:38.383456 env[1311]: 2025-09-13 00:54:38.326 [INFO][3381] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.56.1/26] IPv6=[] ContainerID="386645597e51da8d4834d3bf6470f05ffc673dcf48636f24f52fcae3b1453129" HandleID="k8s-pod-network.386645597e51da8d4834d3bf6470f05ffc673dcf48636f24f52fcae3b1453129" Workload="ci--3510.3.8--n--b7c626372f-k8s-whisker--76fdf687db--wbs6w-eth0" Sep 13 00:54:38.385614 env[1311]: 2025-09-13 00:54:38.331 [INFO][3369] cni-plugin/k8s.go 418: Populated endpoint ContainerID="386645597e51da8d4834d3bf6470f05ffc673dcf48636f24f52fcae3b1453129" Namespace="calico-system" Pod="whisker-76fdf687db-wbs6w" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-whisker--76fdf687db--wbs6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-whisker--76fdf687db--wbs6w-eth0", GenerateName:"whisker-76fdf687db-", Namespace:"calico-system", SelfLink:"", UID:"c07e111b-07db-4c84-a43b-823904516c61", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", 
"pod-template-hash":"76fdf687db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"", Pod:"whisker-76fdf687db-wbs6w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.56.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali62a4e152145", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:38.385614 env[1311]: 2025-09-13 00:54:38.331 [INFO][3369] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.1/32] ContainerID="386645597e51da8d4834d3bf6470f05ffc673dcf48636f24f52fcae3b1453129" Namespace="calico-system" Pod="whisker-76fdf687db-wbs6w" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-whisker--76fdf687db--wbs6w-eth0" Sep 13 00:54:38.385614 env[1311]: 2025-09-13 00:54:38.332 [INFO][3369] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali62a4e152145 ContainerID="386645597e51da8d4834d3bf6470f05ffc673dcf48636f24f52fcae3b1453129" Namespace="calico-system" Pod="whisker-76fdf687db-wbs6w" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-whisker--76fdf687db--wbs6w-eth0" Sep 13 00:54:38.385614 env[1311]: 2025-09-13 00:54:38.350 [INFO][3369] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="386645597e51da8d4834d3bf6470f05ffc673dcf48636f24f52fcae3b1453129" Namespace="calico-system" Pod="whisker-76fdf687db-wbs6w" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-whisker--76fdf687db--wbs6w-eth0" Sep 13 00:54:38.385614 env[1311]: 2025-09-13 00:54:38.351 [INFO][3369] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="386645597e51da8d4834d3bf6470f05ffc673dcf48636f24f52fcae3b1453129" Namespace="calico-system" Pod="whisker-76fdf687db-wbs6w" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-whisker--76fdf687db--wbs6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-whisker--76fdf687db--wbs6w-eth0", GenerateName:"whisker-76fdf687db-", Namespace:"calico-system", SelfLink:"", UID:"c07e111b-07db-4c84-a43b-823904516c61", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76fdf687db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"386645597e51da8d4834d3bf6470f05ffc673dcf48636f24f52fcae3b1453129", Pod:"whisker-76fdf687db-wbs6w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.56.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali62a4e152145", MAC:"de:8c:47:80:c5:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:38.385614 env[1311]: 2025-09-13 00:54:38.377 [INFO][3369] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="386645597e51da8d4834d3bf6470f05ffc673dcf48636f24f52fcae3b1453129" 
Namespace="calico-system" Pod="whisker-76fdf687db-wbs6w" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-whisker--76fdf687db--wbs6w-eth0" Sep 13 00:54:38.406270 systemd[1]: var-lib-kubelet-pods-8f31ef69\x2d711d\x2d48a0\x2d989a\x2d767140cb31a7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9682k.mount: Deactivated successfully. Sep 13 00:54:38.407517 systemd[1]: var-lib-kubelet-pods-8f31ef69\x2d711d\x2d48a0\x2d989a\x2d767140cb31a7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 13 00:54:38.411945 env[1311]: time="2025-09-13T00:54:38.411833869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:38.411945 env[1311]: time="2025-09-13T00:54:38.411890822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:38.411945 env[1311]: time="2025-09-13T00:54:38.411901849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:38.412441 env[1311]: time="2025-09-13T00:54:38.412360350Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/386645597e51da8d4834d3bf6470f05ffc673dcf48636f24f52fcae3b1453129 pid=3403 runtime=io.containerd.runc.v2 Sep 13 00:54:38.490101 env[1311]: time="2025-09-13T00:54:38.490052767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76fdf687db-wbs6w,Uid:c07e111b-07db-4c84-a43b-823904516c61,Namespace:calico-system,Attempt:0,} returns sandbox id \"386645597e51da8d4834d3bf6470f05ffc673dcf48636f24f52fcae3b1453129\"" Sep 13 00:54:38.499578 env[1311]: time="2025-09-13T00:54:38.499538471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:54:38.738000 audit[3478]: AVC avc: denied { write } for pid=3478 comm="tee" name="fd" dev="proc" ino=24872 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:54:38.740560 kernel: kauditd_printk_skb: 25 callbacks suppressed Sep 13 00:54:38.740946 kernel: audit: type=1400 audit(1757724878.738:309): avc: denied { write } for pid=3478 comm="tee" name="fd" dev="proc" ino=24872 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:54:38.738000 audit[3478]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd050887c4 a2=241 a3=1b6 items=1 ppid=3451 pid=3478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:38.751076 kernel: audit: type=1300 audit(1757724878.738:309): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd050887c4 a2=241 a3=1b6 items=1 ppid=3451 pid=3478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" 
exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:38.745000 audit[3482]: AVC avc: denied { write } for pid=3482 comm="tee" name="fd" dev="proc" ino=24508 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:54:38.770746 kernel: audit: type=1400 audit(1757724878.745:310): avc: denied { write } for pid=3482 comm="tee" name="fd" dev="proc" ino=24508 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:54:38.745000 audit[3482]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd98c1f7b5 a2=241 a3=1b6 items=1 ppid=3448 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:38.745000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Sep 13 00:54:38.779523 kernel: audit: type=1300 audit(1757724878.745:310): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd98c1f7b5 a2=241 a3=1b6 items=1 ppid=3448 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:38.779642 kernel: audit: type=1307 audit(1757724878.745:310): cwd="/etc/service/enabled/node-status-reporter/log" Sep 13 00:54:38.779683 kernel: audit: type=1302 audit(1757724878.745:310): item=0 name="/dev/fd/63" inode=24861 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:38.745000 audit: PATH item=0 name="/dev/fd/63" inode=24861 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:38.745000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:54:38.784685 kernel: audit: type=1327 audit(1757724878.745:310): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:54:38.784848 kernel: audit: type=1400 audit(1757724878.754:311): avc: denied { write } for pid=3485 comm="tee" name="fd" dev="proc" ino=24516 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:54:38.754000 audit[3485]: AVC avc: denied { write } for pid=3485 comm="tee" name="fd" dev="proc" ino=24516 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:54:38.787427 kernel: audit: type=1300 audit(1757724878.754:311): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe71ca47c4 a2=241 a3=1b6 items=1 ppid=3456 pid=3485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:38.754000 audit[3485]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe71ca47c4 a2=241 a3=1b6 items=1 ppid=3456 pid=3485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:38.793548 kernel: audit: type=1307 audit(1757724878.754:311): cwd="/etc/service/enabled/felix/log" Sep 13 00:54:38.754000 audit: CWD cwd="/etc/service/enabled/felix/log" Sep 13 00:54:38.754000 audit: PATH item=0 name="/dev/fd/63" inode=24866 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:38.754000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:54:38.738000 audit: CWD cwd="/etc/service/enabled/bird6/log" Sep 13 00:54:38.738000 audit: PATH item=0 name="/dev/fd/63" inode=24493 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:38.738000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:54:38.769000 audit[3487]: AVC avc: denied { write } for pid=3487 comm="tee" name="fd" dev="proc" ino=24523 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:54:38.769000 audit[3487]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdf760c7b4 a2=241 a3=1b6 items=1 ppid=3454 pid=3487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:38.769000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Sep 13 00:54:38.769000 audit: PATH item=0 name="/dev/fd/63" inode=24867 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:38.769000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:54:38.800000 audit[3499]: AVC avc: denied { write } for pid=3499 comm="tee" name="fd" dev="proc" ino=24532 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:54:38.826000 audit[3492]: AVC avc: denied { write } for pid=3492 
comm="tee" name="fd" dev="proc" ino=24890 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:54:38.826000 audit[3492]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd299767c4 a2=241 a3=1b6 items=1 ppid=3447 pid=3492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:38.826000 audit: CWD cwd="/etc/service/enabled/confd/log" Sep 13 00:54:38.826000 audit: PATH item=0 name="/dev/fd/63" inode=24882 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:38.826000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:54:38.833000 audit[3494]: AVC avc: denied { write } for pid=3494 comm="tee" name="fd" dev="proc" ino=24894 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:54:38.833000 audit[3494]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdb10a27c6 a2=241 a3=1b6 items=1 ppid=3457 pid=3494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:38.833000 audit: CWD cwd="/etc/service/enabled/cni/log" Sep 13 00:54:38.833000 audit: PATH item=0 name="/dev/fd/63" inode=24518 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:38.833000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:54:38.800000 audit[3499]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd3cc1a7c5 a2=241 a3=1b6 items=1 ppid=3445 pid=3499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:38.800000 audit: CWD cwd="/etc/service/enabled/bird/log" Sep 13 00:54:38.800000 audit: PATH item=0 name="/dev/fd/63" inode=24522 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:38.800000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:54:39.403072 systemd[1]: run-containerd-runc-k8s.io-0d7d231652388582a5043b91f1063acf3a7758f8977bf0d6e4b109a9e5eaa68b-runc.zeAADh.mount: Deactivated successfully. 
Sep 13 00:54:39.582282 kubelet[2107]: I0913 00:54:39.582223 2107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f31ef69-711d-48a0-989a-767140cb31a7" path="/var/lib/kubelet/pods/8f31ef69-711d-48a0-989a-767140cb31a7/volumes" Sep 13 00:54:40.052697 env[1311]: time="2025-09-13T00:54:40.052628901Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:40.054644 env[1311]: time="2025-09-13T00:54:40.054598696Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:40.056593 env[1311]: time="2025-09-13T00:54:40.056542894Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:40.058367 env[1311]: time="2025-09-13T00:54:40.058322053Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:40.059679 env[1311]: time="2025-09-13T00:54:40.059551780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 00:54:40.068485 env[1311]: time="2025-09-13T00:54:40.068444556Z" level=info msg="CreateContainer within sandbox \"386645597e51da8d4834d3bf6470f05ffc673dcf48636f24f52fcae3b1453129\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:54:40.082536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4274282965.mount: Deactivated successfully. 
Sep 13 00:54:40.087485 env[1311]: time="2025-09-13T00:54:40.087415608Z" level=info msg="CreateContainer within sandbox \"386645597e51da8d4834d3bf6470f05ffc673dcf48636f24f52fcae3b1453129\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"aa7cd91d2fad65fdf94f17c4126346547f0c843369f73d05403b633b2b4e550d\"" Sep 13 00:54:40.088637 env[1311]: time="2025-09-13T00:54:40.088606045Z" level=info msg="StartContainer for \"aa7cd91d2fad65fdf94f17c4126346547f0c843369f73d05403b633b2b4e550d\"" Sep 13 00:54:40.227502 env[1311]: time="2025-09-13T00:54:40.227438092Z" level=info msg="StartContainer for \"aa7cd91d2fad65fdf94f17c4126346547f0c843369f73d05403b633b2b4e550d\" returns successfully" Sep 13 00:54:40.229156 env[1311]: time="2025-09-13T00:54:40.229112722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:54:40.350548 systemd-networkd[1061]: cali62a4e152145: Gained IPv6LL Sep 13 00:54:41.579178 env[1311]: time="2025-09-13T00:54:41.579134454Z" level=info msg="StopPodSandbox for \"d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb\"" Sep 13 00:54:41.579969 env[1311]: time="2025-09-13T00:54:41.579894265Z" level=info msg="StopPodSandbox for \"17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da\"" Sep 13 00:54:41.580851 env[1311]: time="2025-09-13T00:54:41.579591352Z" level=info msg="StopPodSandbox for \"dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0\"" Sep 13 00:54:41.826194 env[1311]: 2025-09-13 00:54:41.732 [INFO][3655] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" Sep 13 00:54:41.826194 env[1311]: 2025-09-13 00:54:41.732 [INFO][3655] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" iface="eth0" netns="/var/run/netns/cni-50bbe2f2-1e8d-fbe6-6d4f-a11cc2c621b3" Sep 13 00:54:41.826194 env[1311]: 2025-09-13 00:54:41.732 [INFO][3655] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" iface="eth0" netns="/var/run/netns/cni-50bbe2f2-1e8d-fbe6-6d4f-a11cc2c621b3" Sep 13 00:54:41.826194 env[1311]: 2025-09-13 00:54:41.732 [INFO][3655] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" iface="eth0" netns="/var/run/netns/cni-50bbe2f2-1e8d-fbe6-6d4f-a11cc2c621b3" Sep 13 00:54:41.826194 env[1311]: 2025-09-13 00:54:41.732 [INFO][3655] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" Sep 13 00:54:41.826194 env[1311]: 2025-09-13 00:54:41.732 [INFO][3655] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" Sep 13 00:54:41.826194 env[1311]: 2025-09-13 00:54:41.807 [INFO][3670] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" HandleID="k8s-pod-network.d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" Workload="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--rmqpc-eth0" Sep 13 00:54:41.826194 env[1311]: 2025-09-13 00:54:41.807 [INFO][3670] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:41.826194 env[1311]: 2025-09-13 00:54:41.807 [INFO][3670] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:41.826194 env[1311]: 2025-09-13 00:54:41.818 [WARNING][3670] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" HandleID="k8s-pod-network.d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" Workload="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--rmqpc-eth0" Sep 13 00:54:41.826194 env[1311]: 2025-09-13 00:54:41.818 [INFO][3670] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" HandleID="k8s-pod-network.d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" Workload="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--rmqpc-eth0" Sep 13 00:54:41.826194 env[1311]: 2025-09-13 00:54:41.820 [INFO][3670] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:41.826194 env[1311]: 2025-09-13 00:54:41.823 [INFO][3655] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" Sep 13 00:54:41.832141 env[1311]: time="2025-09-13T00:54:41.830456444Z" level=info msg="TearDown network for sandbox \"d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb\" successfully" Sep 13 00:54:41.832141 env[1311]: time="2025-09-13T00:54:41.830533912Z" level=info msg="StopPodSandbox for \"d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb\" returns successfully" Sep 13 00:54:41.829253 systemd[1]: run-netns-cni\x2d50bbe2f2\x2d1e8d\x2dfbe6\x2d6d4f\x2da11cc2c621b3.mount: Deactivated successfully. 
Sep 13 00:54:41.833800 kubelet[2107]: E0913 00:54:41.832132 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:41.835370 env[1311]: time="2025-09-13T00:54:41.833168676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rmqpc,Uid:c733dfc1-fe7d-49df-84e8-9292d570b93c,Namespace:kube-system,Attempt:1,}" Sep 13 00:54:41.895752 env[1311]: 2025-09-13 00:54:41.755 [INFO][3654] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" Sep 13 00:54:41.895752 env[1311]: 2025-09-13 00:54:41.755 [INFO][3654] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" iface="eth0" netns="/var/run/netns/cni-d4dc3b2f-9038-ecbd-43c3-2e6963694d42" Sep 13 00:54:41.895752 env[1311]: 2025-09-13 00:54:41.756 [INFO][3654] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" iface="eth0" netns="/var/run/netns/cni-d4dc3b2f-9038-ecbd-43c3-2e6963694d42" Sep 13 00:54:41.895752 env[1311]: 2025-09-13 00:54:41.756 [INFO][3654] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" iface="eth0" netns="/var/run/netns/cni-d4dc3b2f-9038-ecbd-43c3-2e6963694d42" Sep 13 00:54:41.895752 env[1311]: 2025-09-13 00:54:41.756 [INFO][3654] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" Sep 13 00:54:41.895752 env[1311]: 2025-09-13 00:54:41.757 [INFO][3654] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" Sep 13 00:54:41.895752 env[1311]: 2025-09-13 00:54:41.870 [INFO][3679] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" HandleID="k8s-pod-network.dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" Workload="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--b7mc7-eth0" Sep 13 00:54:41.895752 env[1311]: 2025-09-13 00:54:41.870 [INFO][3679] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:41.895752 env[1311]: 2025-09-13 00:54:41.870 [INFO][3679] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:41.895752 env[1311]: 2025-09-13 00:54:41.879 [WARNING][3679] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" HandleID="k8s-pod-network.dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" Workload="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--b7mc7-eth0" Sep 13 00:54:41.895752 env[1311]: 2025-09-13 00:54:41.879 [INFO][3679] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" HandleID="k8s-pod-network.dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" Workload="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--b7mc7-eth0" Sep 13 00:54:41.895752 env[1311]: 2025-09-13 00:54:41.882 [INFO][3679] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:41.895752 env[1311]: 2025-09-13 00:54:41.894 [INFO][3654] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" Sep 13 00:54:41.899679 systemd[1]: run-netns-cni\x2dd4dc3b2f\x2d9038\x2decbd\x2d43c3\x2d2e6963694d42.mount: Deactivated successfully. 
Sep 13 00:54:41.901217 env[1311]: time="2025-09-13T00:54:41.901147170Z" level=info msg="TearDown network for sandbox \"dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0\" successfully" Sep 13 00:54:41.901453 env[1311]: time="2025-09-13T00:54:41.901418284Z" level=info msg="StopPodSandbox for \"dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0\" returns successfully" Sep 13 00:54:41.902030 kubelet[2107]: E0913 00:54:41.901993 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:41.904178 env[1311]: time="2025-09-13T00:54:41.904121629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-b7mc7,Uid:4a90c3d8-bff3-4795-ac7c-5bfe09cf7345,Namespace:kube-system,Attempt:1,}" Sep 13 00:54:41.953646 env[1311]: 2025-09-13 00:54:41.793 [INFO][3656] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" Sep 13 00:54:41.953646 env[1311]: 2025-09-13 00:54:41.793 [INFO][3656] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" iface="eth0" netns="/var/run/netns/cni-28b2c8f4-0822-4e23-c6c5-2bec6f2e0b8c" Sep 13 00:54:41.953646 env[1311]: 2025-09-13 00:54:41.793 [INFO][3656] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" iface="eth0" netns="/var/run/netns/cni-28b2c8f4-0822-4e23-c6c5-2bec6f2e0b8c" Sep 13 00:54:41.953646 env[1311]: 2025-09-13 00:54:41.793 [INFO][3656] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" iface="eth0" netns="/var/run/netns/cni-28b2c8f4-0822-4e23-c6c5-2bec6f2e0b8c" Sep 13 00:54:41.953646 env[1311]: 2025-09-13 00:54:41.793 [INFO][3656] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" Sep 13 00:54:41.953646 env[1311]: 2025-09-13 00:54:41.793 [INFO][3656] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" Sep 13 00:54:41.953646 env[1311]: 2025-09-13 00:54:41.934 [INFO][3686] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" HandleID="k8s-pod-network.17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--kube--controllers--64dcf69d7d--d9zgr-eth0" Sep 13 00:54:41.953646 env[1311]: 2025-09-13 00:54:41.934 [INFO][3686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:41.953646 env[1311]: 2025-09-13 00:54:41.934 [INFO][3686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:41.953646 env[1311]: 2025-09-13 00:54:41.944 [WARNING][3686] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" HandleID="k8s-pod-network.17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--kube--controllers--64dcf69d7d--d9zgr-eth0" Sep 13 00:54:41.953646 env[1311]: 2025-09-13 00:54:41.944 [INFO][3686] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" HandleID="k8s-pod-network.17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--kube--controllers--64dcf69d7d--d9zgr-eth0" Sep 13 00:54:41.953646 env[1311]: 2025-09-13 00:54:41.949 [INFO][3686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:41.953646 env[1311]: 2025-09-13 00:54:41.952 [INFO][3656] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" Sep 13 00:54:41.954343 env[1311]: time="2025-09-13T00:54:41.954303606Z" level=info msg="TearDown network for sandbox \"17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da\" successfully" Sep 13 00:54:41.955165 env[1311]: time="2025-09-13T00:54:41.954488299Z" level=info msg="StopPodSandbox for \"17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da\" returns successfully" Sep 13 00:54:41.955383 env[1311]: time="2025-09-13T00:54:41.955340645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64dcf69d7d-d9zgr,Uid:e2eac8c3-d7f4-4255-85a6-44ee22635692,Namespace:calico-system,Attempt:1,}" Sep 13 00:54:42.196988 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:54:42.204178 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali95c6e3b965c: link becomes ready Sep 13 00:54:42.184286 systemd-networkd[1061]: cali95c6e3b965c: Link UP Sep 13 00:54:42.205082 systemd-networkd[1061]: cali95c6e3b965c: Gained carrier Sep 13 
00:54:42.236022 env[1311]: 2025-09-13 00:54:41.960 [INFO][3691] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:54:42.236022 env[1311]: 2025-09-13 00:54:42.002 [INFO][3691] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--rmqpc-eth0 coredns-7c65d6cfc9- kube-system c733dfc1-fe7d-49df-84e8-9292d570b93c 962 0 2025-09-13 00:54:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.8-n-b7c626372f coredns-7c65d6cfc9-rmqpc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali95c6e3b965c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5e5fb9471ddc2133ac36e9729a3337c252dd89d4ff899074de729a92a4683cdb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rmqpc" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--rmqpc-" Sep 13 00:54:42.236022 env[1311]: 2025-09-13 00:54:42.002 [INFO][3691] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5e5fb9471ddc2133ac36e9729a3337c252dd89d4ff899074de729a92a4683cdb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rmqpc" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--rmqpc-eth0" Sep 13 00:54:42.236022 env[1311]: 2025-09-13 00:54:42.130 [INFO][3725] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e5fb9471ddc2133ac36e9729a3337c252dd89d4ff899074de729a92a4683cdb" HandleID="k8s-pod-network.5e5fb9471ddc2133ac36e9729a3337c252dd89d4ff899074de729a92a4683cdb" Workload="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--rmqpc-eth0" Sep 13 00:54:42.236022 env[1311]: 2025-09-13 00:54:42.131 [INFO][3725] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5e5fb9471ddc2133ac36e9729a3337c252dd89d4ff899074de729a92a4683cdb" 
HandleID="k8s-pod-network.5e5fb9471ddc2133ac36e9729a3337c252dd89d4ff899074de729a92a4683cdb" Workload="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--rmqpc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000331a60), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.8-n-b7c626372f", "pod":"coredns-7c65d6cfc9-rmqpc", "timestamp":"2025-09-13 00:54:42.130727482 +0000 UTC"}, Hostname:"ci-3510.3.8-n-b7c626372f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:54:42.236022 env[1311]: 2025-09-13 00:54:42.131 [INFO][3725] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:42.236022 env[1311]: 2025-09-13 00:54:42.131 [INFO][3725] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:42.236022 env[1311]: 2025-09-13 00:54:42.131 [INFO][3725] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-b7c626372f' Sep 13 00:54:42.236022 env[1311]: 2025-09-13 00:54:42.141 [INFO][3725] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5e5fb9471ddc2133ac36e9729a3337c252dd89d4ff899074de729a92a4683cdb" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:42.236022 env[1311]: 2025-09-13 00:54:42.146 [INFO][3725] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:42.236022 env[1311]: 2025-09-13 00:54:42.152 [INFO][3725] ipam/ipam.go 511: Trying affinity for 192.168.56.0/26 host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:42.236022 env[1311]: 2025-09-13 00:54:42.155 [INFO][3725] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.0/26 host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:42.236022 env[1311]: 2025-09-13 00:54:42.157 [INFO][3725] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.0/26 
host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:42.236022 env[1311]: 2025-09-13 00:54:42.158 [INFO][3725] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.56.0/26 handle="k8s-pod-network.5e5fb9471ddc2133ac36e9729a3337c252dd89d4ff899074de729a92a4683cdb" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:42.236022 env[1311]: 2025-09-13 00:54:42.160 [INFO][3725] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5e5fb9471ddc2133ac36e9729a3337c252dd89d4ff899074de729a92a4683cdb Sep 13 00:54:42.236022 env[1311]: 2025-09-13 00:54:42.164 [INFO][3725] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.56.0/26 handle="k8s-pod-network.5e5fb9471ddc2133ac36e9729a3337c252dd89d4ff899074de729a92a4683cdb" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:42.236022 env[1311]: 2025-09-13 00:54:42.171 [INFO][3725] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.56.2/26] block=192.168.56.0/26 handle="k8s-pod-network.5e5fb9471ddc2133ac36e9729a3337c252dd89d4ff899074de729a92a4683cdb" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:42.236022 env[1311]: 2025-09-13 00:54:42.171 [INFO][3725] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.2/26] handle="k8s-pod-network.5e5fb9471ddc2133ac36e9729a3337c252dd89d4ff899074de729a92a4683cdb" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:42.236022 env[1311]: 2025-09-13 00:54:42.171 [INFO][3725] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:54:42.236022 env[1311]: 2025-09-13 00:54:42.171 [INFO][3725] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.56.2/26] IPv6=[] ContainerID="5e5fb9471ddc2133ac36e9729a3337c252dd89d4ff899074de729a92a4683cdb" HandleID="k8s-pod-network.5e5fb9471ddc2133ac36e9729a3337c252dd89d4ff899074de729a92a4683cdb" Workload="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--rmqpc-eth0" Sep 13 00:54:42.237873 env[1311]: 2025-09-13 00:54:42.173 [INFO][3691] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5e5fb9471ddc2133ac36e9729a3337c252dd89d4ff899074de729a92a4683cdb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rmqpc" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--rmqpc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--rmqpc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c733dfc1-fe7d-49df-84e8-9292d570b93c", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"", Pod:"coredns-7c65d6cfc9-rmqpc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95c6e3b965c", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:42.237873 env[1311]: 2025-09-13 00:54:42.173 [INFO][3691] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.2/32] ContainerID="5e5fb9471ddc2133ac36e9729a3337c252dd89d4ff899074de729a92a4683cdb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rmqpc" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--rmqpc-eth0" Sep 13 00:54:42.237873 env[1311]: 2025-09-13 00:54:42.173 [INFO][3691] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95c6e3b965c ContainerID="5e5fb9471ddc2133ac36e9729a3337c252dd89d4ff899074de729a92a4683cdb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rmqpc" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--rmqpc-eth0" Sep 13 00:54:42.237873 env[1311]: 2025-09-13 00:54:42.191 [INFO][3691] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e5fb9471ddc2133ac36e9729a3337c252dd89d4ff899074de729a92a4683cdb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rmqpc" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--rmqpc-eth0" Sep 13 00:54:42.237873 env[1311]: 2025-09-13 00:54:42.212 [INFO][3691] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5e5fb9471ddc2133ac36e9729a3337c252dd89d4ff899074de729a92a4683cdb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rmqpc" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--rmqpc-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--rmqpc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c733dfc1-fe7d-49df-84e8-9292d570b93c", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"5e5fb9471ddc2133ac36e9729a3337c252dd89d4ff899074de729a92a4683cdb", Pod:"coredns-7c65d6cfc9-rmqpc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95c6e3b965c", MAC:"be:ee:32:61:fc:36", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:42.237873 env[1311]: 2025-09-13 00:54:42.224 [INFO][3691] cni-plugin/k8s.go 532: Wrote updated endpoint to 
datastore ContainerID="5e5fb9471ddc2133ac36e9729a3337c252dd89d4ff899074de729a92a4683cdb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rmqpc" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--rmqpc-eth0" Sep 13 00:54:42.309964 systemd-networkd[1061]: calie41473a54b3: Link UP Sep 13 00:54:42.312663 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie41473a54b3: link becomes ready Sep 13 00:54:42.312487 systemd-networkd[1061]: calie41473a54b3: Gained carrier Sep 13 00:54:42.331114 env[1311]: 2025-09-13 00:54:42.020 [INFO][3704] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:54:42.331114 env[1311]: 2025-09-13 00:54:42.046 [INFO][3704] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--b7mc7-eth0 coredns-7c65d6cfc9- kube-system 4a90c3d8-bff3-4795-ac7c-5bfe09cf7345 963 0 2025-09-13 00:54:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.8-n-b7c626372f coredns-7c65d6cfc9-b7mc7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie41473a54b3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a31a3db8c09e904ba4239c61e3b5658b80f703093a75484b3dee14e7240215fb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b7mc7" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--b7mc7-" Sep 13 00:54:42.331114 env[1311]: 2025-09-13 00:54:42.046 [INFO][3704] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a31a3db8c09e904ba4239c61e3b5658b80f703093a75484b3dee14e7240215fb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b7mc7" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--b7mc7-eth0" Sep 13 00:54:42.331114 env[1311]: 2025-09-13 00:54:42.187 [INFO][3737] ipam/ipam_plugin.go 225: Calico 
CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a31a3db8c09e904ba4239c61e3b5658b80f703093a75484b3dee14e7240215fb" HandleID="k8s-pod-network.a31a3db8c09e904ba4239c61e3b5658b80f703093a75484b3dee14e7240215fb" Workload="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--b7mc7-eth0" Sep 13 00:54:42.331114 env[1311]: 2025-09-13 00:54:42.187 [INFO][3737] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a31a3db8c09e904ba4239c61e3b5658b80f703093a75484b3dee14e7240215fb" HandleID="k8s-pod-network.a31a3db8c09e904ba4239c61e3b5658b80f703093a75484b3dee14e7240215fb" Workload="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--b7mc7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4080), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.8-n-b7c626372f", "pod":"coredns-7c65d6cfc9-b7mc7", "timestamp":"2025-09-13 00:54:42.18099677 +0000 UTC"}, Hostname:"ci-3510.3.8-n-b7c626372f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:54:42.331114 env[1311]: 2025-09-13 00:54:42.187 [INFO][3737] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:42.331114 env[1311]: 2025-09-13 00:54:42.188 [INFO][3737] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:54:42.331114 env[1311]: 2025-09-13 00:54:42.188 [INFO][3737] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-b7c626372f' Sep 13 00:54:42.331114 env[1311]: 2025-09-13 00:54:42.244 [INFO][3737] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a31a3db8c09e904ba4239c61e3b5658b80f703093a75484b3dee14e7240215fb" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:42.331114 env[1311]: 2025-09-13 00:54:42.252 [INFO][3737] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:42.331114 env[1311]: 2025-09-13 00:54:42.262 [INFO][3737] ipam/ipam.go 511: Trying affinity for 192.168.56.0/26 host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:42.331114 env[1311]: 2025-09-13 00:54:42.265 [INFO][3737] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.0/26 host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:42.331114 env[1311]: 2025-09-13 00:54:42.269 [INFO][3737] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.0/26 host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:42.331114 env[1311]: 2025-09-13 00:54:42.269 [INFO][3737] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.56.0/26 handle="k8s-pod-network.a31a3db8c09e904ba4239c61e3b5658b80f703093a75484b3dee14e7240215fb" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:42.331114 env[1311]: 2025-09-13 00:54:42.272 [INFO][3737] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a31a3db8c09e904ba4239c61e3b5658b80f703093a75484b3dee14e7240215fb Sep 13 00:54:42.331114 env[1311]: 2025-09-13 00:54:42.278 [INFO][3737] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.56.0/26 handle="k8s-pod-network.a31a3db8c09e904ba4239c61e3b5658b80f703093a75484b3dee14e7240215fb" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:42.331114 env[1311]: 2025-09-13 00:54:42.285 [INFO][3737] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.56.3/26] block=192.168.56.0/26 
handle="k8s-pod-network.a31a3db8c09e904ba4239c61e3b5658b80f703093a75484b3dee14e7240215fb" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:42.331114 env[1311]: 2025-09-13 00:54:42.285 [INFO][3737] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.3/26] handle="k8s-pod-network.a31a3db8c09e904ba4239c61e3b5658b80f703093a75484b3dee14e7240215fb" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:42.331114 env[1311]: 2025-09-13 00:54:42.285 [INFO][3737] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:42.331114 env[1311]: 2025-09-13 00:54:42.285 [INFO][3737] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.56.3/26] IPv6=[] ContainerID="a31a3db8c09e904ba4239c61e3b5658b80f703093a75484b3dee14e7240215fb" HandleID="k8s-pod-network.a31a3db8c09e904ba4239c61e3b5658b80f703093a75484b3dee14e7240215fb" Workload="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--b7mc7-eth0" Sep 13 00:54:42.332441 env[1311]: 2025-09-13 00:54:42.289 [INFO][3704] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a31a3db8c09e904ba4239c61e3b5658b80f703093a75484b3dee14e7240215fb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b7mc7" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--b7mc7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--b7mc7-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4a90c3d8-bff3-4795-ac7c-5bfe09cf7345", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"", Pod:"coredns-7c65d6cfc9-b7mc7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie41473a54b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:42.332441 env[1311]: 2025-09-13 00:54:42.289 [INFO][3704] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.3/32] ContainerID="a31a3db8c09e904ba4239c61e3b5658b80f703093a75484b3dee14e7240215fb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b7mc7" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--b7mc7-eth0" Sep 13 00:54:42.332441 env[1311]: 2025-09-13 00:54:42.289 [INFO][3704] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie41473a54b3 ContainerID="a31a3db8c09e904ba4239c61e3b5658b80f703093a75484b3dee14e7240215fb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b7mc7" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--b7mc7-eth0" Sep 13 00:54:42.332441 env[1311]: 2025-09-13 00:54:42.313 [INFO][3704] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="a31a3db8c09e904ba4239c61e3b5658b80f703093a75484b3dee14e7240215fb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b7mc7" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--b7mc7-eth0" Sep 13 00:54:42.332441 env[1311]: 2025-09-13 00:54:42.314 [INFO][3704] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a31a3db8c09e904ba4239c61e3b5658b80f703093a75484b3dee14e7240215fb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b7mc7" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--b7mc7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--b7mc7-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4a90c3d8-bff3-4795-ac7c-5bfe09cf7345", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"a31a3db8c09e904ba4239c61e3b5658b80f703093a75484b3dee14e7240215fb", Pod:"coredns-7c65d6cfc9-b7mc7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie41473a54b3", MAC:"da:9f:52:a3:fc:da", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:42.332441 env[1311]: 2025-09-13 00:54:42.329 [INFO][3704] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a31a3db8c09e904ba4239c61e3b5658b80f703093a75484b3dee14e7240215fb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b7mc7" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--b7mc7-eth0" Sep 13 00:54:42.354793 env[1311]: time="2025-09-13T00:54:42.339285098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:42.354793 env[1311]: time="2025-09-13T00:54:42.339321609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:42.354793 env[1311]: time="2025-09-13T00:54:42.339331827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:42.354793 env[1311]: time="2025-09-13T00:54:42.339482253Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5e5fb9471ddc2133ac36e9729a3337c252dd89d4ff899074de729a92a4683cdb pid=3775 runtime=io.containerd.runc.v2 Sep 13 00:54:42.370660 env[1311]: time="2025-09-13T00:54:42.370577178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:42.370870 env[1311]: time="2025-09-13T00:54:42.370842202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:42.370980 env[1311]: time="2025-09-13T00:54:42.370958117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:42.371345 env[1311]: time="2025-09-13T00:54:42.371304103Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a31a3db8c09e904ba4239c61e3b5658b80f703093a75484b3dee14e7240215fb pid=3801 runtime=io.containerd.runc.v2 Sep 13 00:54:42.398817 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali86973ae3863: link becomes ready Sep 13 00:54:42.398460 systemd-networkd[1061]: cali86973ae3863: Link UP Sep 13 00:54:42.398688 systemd-networkd[1061]: cali86973ae3863: Gained carrier Sep 13 00:54:42.417668 env[1311]: 2025-09-13 00:54:42.092 [INFO][3717] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:54:42.417668 env[1311]: 2025-09-13 00:54:42.120 [INFO][3717] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--b7c626372f-k8s-calico--kube--controllers--64dcf69d7d--d9zgr-eth0 calico-kube-controllers-64dcf69d7d- calico-system e2eac8c3-d7f4-4255-85a6-44ee22635692 964 0 2025-09-13 00:54:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64dcf69d7d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510.3.8-n-b7c626372f calico-kube-controllers-64dcf69d7d-d9zgr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali86973ae3863 [] [] }} 
ContainerID="1cf1d70db7989f2bcf3b2f326255b411bfb3c2859e7412b999fe34836bc77ab7" Namespace="calico-system" Pod="calico-kube-controllers-64dcf69d7d-d9zgr" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-calico--kube--controllers--64dcf69d7d--d9zgr-" Sep 13 00:54:42.417668 env[1311]: 2025-09-13 00:54:42.120 [INFO][3717] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1cf1d70db7989f2bcf3b2f326255b411bfb3c2859e7412b999fe34836bc77ab7" Namespace="calico-system" Pod="calico-kube-controllers-64dcf69d7d-d9zgr" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-calico--kube--controllers--64dcf69d7d--d9zgr-eth0" Sep 13 00:54:42.417668 env[1311]: 2025-09-13 00:54:42.296 [INFO][3749] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1cf1d70db7989f2bcf3b2f326255b411bfb3c2859e7412b999fe34836bc77ab7" HandleID="k8s-pod-network.1cf1d70db7989f2bcf3b2f326255b411bfb3c2859e7412b999fe34836bc77ab7" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--kube--controllers--64dcf69d7d--d9zgr-eth0" Sep 13 00:54:42.417668 env[1311]: 2025-09-13 00:54:42.296 [INFO][3749] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1cf1d70db7989f2bcf3b2f326255b411bfb3c2859e7412b999fe34836bc77ab7" HandleID="k8s-pod-network.1cf1d70db7989f2bcf3b2f326255b411bfb3c2859e7412b999fe34836bc77ab7" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--kube--controllers--64dcf69d7d--d9zgr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd600), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-b7c626372f", "pod":"calico-kube-controllers-64dcf69d7d-d9zgr", "timestamp":"2025-09-13 00:54:42.296363732 +0000 UTC"}, Hostname:"ci-3510.3.8-n-b7c626372f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:54:42.417668 env[1311]: 2025-09-13 00:54:42.296 [INFO][3749] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:42.417668 env[1311]: 2025-09-13 00:54:42.296 [INFO][3749] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:42.417668 env[1311]: 2025-09-13 00:54:42.296 [INFO][3749] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-b7c626372f' Sep 13 00:54:42.417668 env[1311]: 2025-09-13 00:54:42.344 [INFO][3749] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1cf1d70db7989f2bcf3b2f326255b411bfb3c2859e7412b999fe34836bc77ab7" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:42.417668 env[1311]: 2025-09-13 00:54:42.350 [INFO][3749] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:42.417668 env[1311]: 2025-09-13 00:54:42.360 [INFO][3749] ipam/ipam.go 511: Trying affinity for 192.168.56.0/26 host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:42.417668 env[1311]: 2025-09-13 00:54:42.363 [INFO][3749] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.0/26 host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:42.417668 env[1311]: 2025-09-13 00:54:42.366 [INFO][3749] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.0/26 host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:42.417668 env[1311]: 2025-09-13 00:54:42.366 [INFO][3749] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.56.0/26 handle="k8s-pod-network.1cf1d70db7989f2bcf3b2f326255b411bfb3c2859e7412b999fe34836bc77ab7" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:42.417668 env[1311]: 2025-09-13 00:54:42.368 [INFO][3749] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1cf1d70db7989f2bcf3b2f326255b411bfb3c2859e7412b999fe34836bc77ab7 Sep 13 00:54:42.417668 env[1311]: 2025-09-13 00:54:42.372 [INFO][3749] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.56.0/26 handle="k8s-pod-network.1cf1d70db7989f2bcf3b2f326255b411bfb3c2859e7412b999fe34836bc77ab7" 
host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:42.417668 env[1311]: 2025-09-13 00:54:42.380 [INFO][3749] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.56.4/26] block=192.168.56.0/26 handle="k8s-pod-network.1cf1d70db7989f2bcf3b2f326255b411bfb3c2859e7412b999fe34836bc77ab7" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:42.417668 env[1311]: 2025-09-13 00:54:42.380 [INFO][3749] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.4/26] handle="k8s-pod-network.1cf1d70db7989f2bcf3b2f326255b411bfb3c2859e7412b999fe34836bc77ab7" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:42.417668 env[1311]: 2025-09-13 00:54:42.380 [INFO][3749] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:42.417668 env[1311]: 2025-09-13 00:54:42.380 [INFO][3749] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.56.4/26] IPv6=[] ContainerID="1cf1d70db7989f2bcf3b2f326255b411bfb3c2859e7412b999fe34836bc77ab7" HandleID="k8s-pod-network.1cf1d70db7989f2bcf3b2f326255b411bfb3c2859e7412b999fe34836bc77ab7" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--kube--controllers--64dcf69d7d--d9zgr-eth0" Sep 13 00:54:42.418543 env[1311]: 2025-09-13 00:54:42.385 [INFO][3717] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1cf1d70db7989f2bcf3b2f326255b411bfb3c2859e7412b999fe34836bc77ab7" Namespace="calico-system" Pod="calico-kube-controllers-64dcf69d7d-d9zgr" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-calico--kube--controllers--64dcf69d7d--d9zgr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-calico--kube--controllers--64dcf69d7d--d9zgr-eth0", GenerateName:"calico-kube-controllers-64dcf69d7d-", Namespace:"calico-system", SelfLink:"", UID:"e2eac8c3-d7f4-4255-85a6-44ee22635692", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 21, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64dcf69d7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"", Pod:"calico-kube-controllers-64dcf69d7d-d9zgr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.56.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali86973ae3863", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:42.418543 env[1311]: 2025-09-13 00:54:42.385 [INFO][3717] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.4/32] ContainerID="1cf1d70db7989f2bcf3b2f326255b411bfb3c2859e7412b999fe34836bc77ab7" Namespace="calico-system" Pod="calico-kube-controllers-64dcf69d7d-d9zgr" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-calico--kube--controllers--64dcf69d7d--d9zgr-eth0" Sep 13 00:54:42.418543 env[1311]: 2025-09-13 00:54:42.385 [INFO][3717] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86973ae3863 ContainerID="1cf1d70db7989f2bcf3b2f326255b411bfb3c2859e7412b999fe34836bc77ab7" Namespace="calico-system" Pod="calico-kube-controllers-64dcf69d7d-d9zgr" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-calico--kube--controllers--64dcf69d7d--d9zgr-eth0" Sep 13 00:54:42.418543 env[1311]: 2025-09-13 00:54:42.399 [INFO][3717] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="1cf1d70db7989f2bcf3b2f326255b411bfb3c2859e7412b999fe34836bc77ab7" Namespace="calico-system" Pod="calico-kube-controllers-64dcf69d7d-d9zgr" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-calico--kube--controllers--64dcf69d7d--d9zgr-eth0" Sep 13 00:54:42.418543 env[1311]: 2025-09-13 00:54:42.399 [INFO][3717] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1cf1d70db7989f2bcf3b2f326255b411bfb3c2859e7412b999fe34836bc77ab7" Namespace="calico-system" Pod="calico-kube-controllers-64dcf69d7d-d9zgr" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-calico--kube--controllers--64dcf69d7d--d9zgr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-calico--kube--controllers--64dcf69d7d--d9zgr-eth0", GenerateName:"calico-kube-controllers-64dcf69d7d-", Namespace:"calico-system", SelfLink:"", UID:"e2eac8c3-d7f4-4255-85a6-44ee22635692", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64dcf69d7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"1cf1d70db7989f2bcf3b2f326255b411bfb3c2859e7412b999fe34836bc77ab7", Pod:"calico-kube-controllers-64dcf69d7d-d9zgr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.56.4/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali86973ae3863", MAC:"be:60:7c:6c:75:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:42.418543 env[1311]: 2025-09-13 00:54:42.415 [INFO][3717] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1cf1d70db7989f2bcf3b2f326255b411bfb3c2859e7412b999fe34836bc77ab7" Namespace="calico-system" Pod="calico-kube-controllers-64dcf69d7d-d9zgr" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-calico--kube--controllers--64dcf69d7d--d9zgr-eth0" Sep 13 00:54:42.517352 env[1311]: time="2025-09-13T00:54:42.517255170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:42.517598 env[1311]: time="2025-09-13T00:54:42.517571469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:42.517708 env[1311]: time="2025-09-13T00:54:42.517685750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:42.518012 env[1311]: time="2025-09-13T00:54:42.517972185Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1cf1d70db7989f2bcf3b2f326255b411bfb3c2859e7412b999fe34836bc77ab7 pid=3848 runtime=io.containerd.runc.v2 Sep 13 00:54:42.536637 env[1311]: time="2025-09-13T00:54:42.536581627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-b7mc7,Uid:4a90c3d8-bff3-4795-ac7c-5bfe09cf7345,Namespace:kube-system,Attempt:1,} returns sandbox id \"a31a3db8c09e904ba4239c61e3b5658b80f703093a75484b3dee14e7240215fb\"" Sep 13 00:54:42.541268 kubelet[2107]: E0913 00:54:42.541233 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:42.553872 env[1311]: time="2025-09-13T00:54:42.553826770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rmqpc,Uid:c733dfc1-fe7d-49df-84e8-9292d570b93c,Namespace:kube-system,Attempt:1,} returns sandbox id \"5e5fb9471ddc2133ac36e9729a3337c252dd89d4ff899074de729a92a4683cdb\"" Sep 13 00:54:42.554556 env[1311]: time="2025-09-13T00:54:42.554524882Z" level=info msg="CreateContainer within sandbox \"a31a3db8c09e904ba4239c61e3b5658b80f703093a75484b3dee14e7240215fb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:54:42.603197 env[1311]: time="2025-09-13T00:54:42.603141149Z" level=info msg="StopPodSandbox for \"345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd\"" Sep 13 00:54:42.606480 kubelet[2107]: E0913 00:54:42.604630 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:42.621664 env[1311]: time="2025-09-13T00:54:42.603624033Z" level=info 
msg="StopPodSandbox for \"9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161\"" Sep 13 00:54:42.644838 env[1311]: time="2025-09-13T00:54:42.644682585Z" level=info msg="CreateContainer within sandbox \"5e5fb9471ddc2133ac36e9729a3337c252dd89d4ff899074de729a92a4683cdb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:54:42.655197 systemd[1]: run-netns-cni\x2d28b2c8f4\x2d0822\x2d4e23\x2dc6c5\x2d2bec6f2e0b8c.mount: Deactivated successfully. Sep 13 00:54:42.669060 env[1311]: time="2025-09-13T00:54:42.669002431Z" level=info msg="CreateContainer within sandbox \"a31a3db8c09e904ba4239c61e3b5658b80f703093a75484b3dee14e7240215fb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"92f9dc8a59db2da8daa94fe15aedbe25d0f622a36e7418adfd474defd3c42f00\"" Sep 13 00:54:42.670129 env[1311]: time="2025-09-13T00:54:42.670102947Z" level=info msg="StartContainer for \"92f9dc8a59db2da8daa94fe15aedbe25d0f622a36e7418adfd474defd3c42f00\"" Sep 13 00:54:42.750261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3438702240.mount: Deactivated successfully. Sep 13 00:54:42.767522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2174404054.mount: Deactivated successfully. 
Sep 13 00:54:42.795569 env[1311]: time="2025-09-13T00:54:42.795514843Z" level=info msg="CreateContainer within sandbox \"5e5fb9471ddc2133ac36e9729a3337c252dd89d4ff899074de729a92a4683cdb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"217a76c5cb37fd6aa1b026eacc2f514fd854294afdab773e339889279b37c987\"" Sep 13 00:54:42.796377 env[1311]: time="2025-09-13T00:54:42.796347457Z" level=info msg="StartContainer for \"217a76c5cb37fd6aa1b026eacc2f514fd854294afdab773e339889279b37c987\"" Sep 13 00:54:42.876137 env[1311]: time="2025-09-13T00:54:42.876088524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64dcf69d7d-d9zgr,Uid:e2eac8c3-d7f4-4255-85a6-44ee22635692,Namespace:calico-system,Attempt:1,} returns sandbox id \"1cf1d70db7989f2bcf3b2f326255b411bfb3c2859e7412b999fe34836bc77ab7\"" Sep 13 00:54:43.005609 env[1311]: time="2025-09-13T00:54:43.005538161Z" level=info msg="StartContainer for \"92f9dc8a59db2da8daa94fe15aedbe25d0f622a36e7418adfd474defd3c42f00\" returns successfully" Sep 13 00:54:43.035041 env[1311]: time="2025-09-13T00:54:43.034514863Z" level=info msg="StartContainer for \"217a76c5cb37fd6aa1b026eacc2f514fd854294afdab773e339889279b37c987\" returns successfully" Sep 13 00:54:43.208506 env[1311]: 2025-09-13 00:54:43.009 [INFO][3940] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" Sep 13 00:54:43.208506 env[1311]: 2025-09-13 00:54:43.010 [INFO][3940] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" iface="eth0" netns="/var/run/netns/cni-bff4c645-4f6f-6326-ea95-caaf582a921c" Sep 13 00:54:43.208506 env[1311]: 2025-09-13 00:54:43.010 [INFO][3940] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" iface="eth0" netns="/var/run/netns/cni-bff4c645-4f6f-6326-ea95-caaf582a921c" Sep 13 00:54:43.208506 env[1311]: 2025-09-13 00:54:43.010 [INFO][3940] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" iface="eth0" netns="/var/run/netns/cni-bff4c645-4f6f-6326-ea95-caaf582a921c" Sep 13 00:54:43.208506 env[1311]: 2025-09-13 00:54:43.010 [INFO][3940] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" Sep 13 00:54:43.208506 env[1311]: 2025-09-13 00:54:43.010 [INFO][3940] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" Sep 13 00:54:43.208506 env[1311]: 2025-09-13 00:54:43.149 [INFO][4011] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" HandleID="k8s-pod-network.9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--z4fvv-eth0" Sep 13 00:54:43.208506 env[1311]: 2025-09-13 00:54:43.149 [INFO][4011] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:43.208506 env[1311]: 2025-09-13 00:54:43.149 [INFO][4011] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:43.208506 env[1311]: 2025-09-13 00:54:43.165 [WARNING][4011] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" HandleID="k8s-pod-network.9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--z4fvv-eth0" Sep 13 00:54:43.208506 env[1311]: 2025-09-13 00:54:43.174 [INFO][4011] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" HandleID="k8s-pod-network.9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--z4fvv-eth0" Sep 13 00:54:43.208506 env[1311]: 2025-09-13 00:54:43.190 [INFO][4011] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:43.208506 env[1311]: 2025-09-13 00:54:43.200 [INFO][3940] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" Sep 13 00:54:43.210762 env[1311]: time="2025-09-13T00:54:43.209175662Z" level=info msg="TearDown network for sandbox \"9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161\" successfully" Sep 13 00:54:43.210762 env[1311]: time="2025-09-13T00:54:43.209236431Z" level=info msg="StopPodSandbox for \"9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161\" returns successfully" Sep 13 00:54:43.211612 env[1311]: time="2025-09-13T00:54:43.211571813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86766b5d6c-z4fvv,Uid:cec0420a-0ebf-4565-8d09-fd0c2c488b56,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:54:43.224609 env[1311]: 2025-09-13 00:54:43.033 [INFO][3945] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" Sep 13 00:54:43.224609 env[1311]: 2025-09-13 00:54:43.033 [INFO][3945] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" iface="eth0" netns="/var/run/netns/cni-35f1a419-1d6f-9894-eb86-a0e3b5dd72fe" Sep 13 00:54:43.224609 env[1311]: 2025-09-13 00:54:43.034 [INFO][3945] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" iface="eth0" netns="/var/run/netns/cni-35f1a419-1d6f-9894-eb86-a0e3b5dd72fe" Sep 13 00:54:43.224609 env[1311]: 2025-09-13 00:54:43.034 [INFO][3945] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" iface="eth0" netns="/var/run/netns/cni-35f1a419-1d6f-9894-eb86-a0e3b5dd72fe" Sep 13 00:54:43.224609 env[1311]: 2025-09-13 00:54:43.034 [INFO][3945] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" Sep 13 00:54:43.224609 env[1311]: 2025-09-13 00:54:43.034 [INFO][3945] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" Sep 13 00:54:43.224609 env[1311]: 2025-09-13 00:54:43.165 [INFO][4027] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" HandleID="k8s-pod-network.345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" Workload="ci--3510.3.8--n--b7c626372f-k8s-goldmane--7988f88666--zx8vp-eth0" Sep 13 00:54:43.224609 env[1311]: 2025-09-13 00:54:43.166 [INFO][4027] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:43.224609 env[1311]: 2025-09-13 00:54:43.190 [INFO][4027] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:43.224609 env[1311]: 2025-09-13 00:54:43.215 [WARNING][4027] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" HandleID="k8s-pod-network.345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" Workload="ci--3510.3.8--n--b7c626372f-k8s-goldmane--7988f88666--zx8vp-eth0" Sep 13 00:54:43.224609 env[1311]: 2025-09-13 00:54:43.215 [INFO][4027] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" HandleID="k8s-pod-network.345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" Workload="ci--3510.3.8--n--b7c626372f-k8s-goldmane--7988f88666--zx8vp-eth0" Sep 13 00:54:43.224609 env[1311]: 2025-09-13 00:54:43.217 [INFO][4027] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:43.224609 env[1311]: 2025-09-13 00:54:43.221 [INFO][3945] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" Sep 13 00:54:43.226659 env[1311]: time="2025-09-13T00:54:43.225095034Z" level=info msg="TearDown network for sandbox \"345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd\" successfully" Sep 13 00:54:43.226960 env[1311]: time="2025-09-13T00:54:43.226915882Z" level=info msg="StopPodSandbox for \"345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd\" returns successfully" Sep 13 00:54:43.228076 env[1311]: time="2025-09-13T00:54:43.228031120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-zx8vp,Uid:86879210-53e1-4a0a-87e7-2bb62916a082,Namespace:calico-system,Attempt:1,}" Sep 13 00:54:43.403136 systemd-networkd[1061]: cali188b0193cdc: Link UP Sep 13 00:54:43.404838 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:54:43.404926 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali188b0193cdc: link becomes ready Sep 13 00:54:43.405094 systemd-networkd[1061]: cali188b0193cdc: Gained carrier Sep 13 00:54:43.421901 env[1311]: 2025-09-13 00:54:43.258 
[INFO][4040] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:54:43.421901 env[1311]: 2025-09-13 00:54:43.278 [INFO][4040] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--z4fvv-eth0 calico-apiserver-86766b5d6c- calico-apiserver cec0420a-0ebf-4565-8d09-fd0c2c488b56 985 0 2025-09-13 00:54:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86766b5d6c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-n-b7c626372f calico-apiserver-86766b5d6c-z4fvv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali188b0193cdc [] [] }} ContainerID="c5f4492236b966a3b68323ac232cac32db0a5961c2502bc23efd3397f0c9e2ff" Namespace="calico-apiserver" Pod="calico-apiserver-86766b5d6c-z4fvv" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--z4fvv-" Sep 13 00:54:43.421901 env[1311]: 2025-09-13 00:54:43.278 [INFO][4040] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c5f4492236b966a3b68323ac232cac32db0a5961c2502bc23efd3397f0c9e2ff" Namespace="calico-apiserver" Pod="calico-apiserver-86766b5d6c-z4fvv" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--z4fvv-eth0" Sep 13 00:54:43.421901 env[1311]: 2025-09-13 00:54:43.342 [INFO][4064] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c5f4492236b966a3b68323ac232cac32db0a5961c2502bc23efd3397f0c9e2ff" HandleID="k8s-pod-network.c5f4492236b966a3b68323ac232cac32db0a5961c2502bc23efd3397f0c9e2ff" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--z4fvv-eth0" Sep 13 00:54:43.421901 env[1311]: 2025-09-13 00:54:43.343 [INFO][4064] ipam/ipam_plugin.go 265: Auto assigning 
IP ContainerID="c5f4492236b966a3b68323ac232cac32db0a5961c2502bc23efd3397f0c9e2ff" HandleID="k8s-pod-network.c5f4492236b966a3b68323ac232cac32db0a5961c2502bc23efd3397f0c9e2ff" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--z4fvv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003254a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-n-b7c626372f", "pod":"calico-apiserver-86766b5d6c-z4fvv", "timestamp":"2025-09-13 00:54:43.342759466 +0000 UTC"}, Hostname:"ci-3510.3.8-n-b7c626372f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:54:43.421901 env[1311]: 2025-09-13 00:54:43.343 [INFO][4064] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:43.421901 env[1311]: 2025-09-13 00:54:43.343 [INFO][4064] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:54:43.421901 env[1311]: 2025-09-13 00:54:43.343 [INFO][4064] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-b7c626372f' Sep 13 00:54:43.421901 env[1311]: 2025-09-13 00:54:43.352 [INFO][4064] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c5f4492236b966a3b68323ac232cac32db0a5961c2502bc23efd3397f0c9e2ff" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:43.421901 env[1311]: 2025-09-13 00:54:43.356 [INFO][4064] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:43.421901 env[1311]: 2025-09-13 00:54:43.365 [INFO][4064] ipam/ipam.go 511: Trying affinity for 192.168.56.0/26 host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:43.421901 env[1311]: 2025-09-13 00:54:43.367 [INFO][4064] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.0/26 host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:43.421901 env[1311]: 2025-09-13 00:54:43.370 [INFO][4064] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.0/26 host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:43.421901 env[1311]: 2025-09-13 00:54:43.370 [INFO][4064] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.56.0/26 handle="k8s-pod-network.c5f4492236b966a3b68323ac232cac32db0a5961c2502bc23efd3397f0c9e2ff" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:43.421901 env[1311]: 2025-09-13 00:54:43.372 [INFO][4064] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c5f4492236b966a3b68323ac232cac32db0a5961c2502bc23efd3397f0c9e2ff Sep 13 00:54:43.421901 env[1311]: 2025-09-13 00:54:43.376 [INFO][4064] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.56.0/26 handle="k8s-pod-network.c5f4492236b966a3b68323ac232cac32db0a5961c2502bc23efd3397f0c9e2ff" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:43.421901 env[1311]: 2025-09-13 00:54:43.385 [INFO][4064] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.56.5/26] block=192.168.56.0/26 
handle="k8s-pod-network.c5f4492236b966a3b68323ac232cac32db0a5961c2502bc23efd3397f0c9e2ff" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:43.421901 env[1311]: 2025-09-13 00:54:43.385 [INFO][4064] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.5/26] handle="k8s-pod-network.c5f4492236b966a3b68323ac232cac32db0a5961c2502bc23efd3397f0c9e2ff" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:43.421901 env[1311]: 2025-09-13 00:54:43.385 [INFO][4064] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:43.421901 env[1311]: 2025-09-13 00:54:43.385 [INFO][4064] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.56.5/26] IPv6=[] ContainerID="c5f4492236b966a3b68323ac232cac32db0a5961c2502bc23efd3397f0c9e2ff" HandleID="k8s-pod-network.c5f4492236b966a3b68323ac232cac32db0a5961c2502bc23efd3397f0c9e2ff" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--z4fvv-eth0" Sep 13 00:54:43.422716 env[1311]: 2025-09-13 00:54:43.390 [INFO][4040] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c5f4492236b966a3b68323ac232cac32db0a5961c2502bc23efd3397f0c9e2ff" Namespace="calico-apiserver" Pod="calico-apiserver-86766b5d6c-z4fvv" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--z4fvv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--z4fvv-eth0", GenerateName:"calico-apiserver-86766b5d6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"cec0420a-0ebf-4565-8d09-fd0c2c488b56", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86766b5d6c", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"", Pod:"calico-apiserver-86766b5d6c-z4fvv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali188b0193cdc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:43.422716 env[1311]: 2025-09-13 00:54:43.390 [INFO][4040] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.5/32] ContainerID="c5f4492236b966a3b68323ac232cac32db0a5961c2502bc23efd3397f0c9e2ff" Namespace="calico-apiserver" Pod="calico-apiserver-86766b5d6c-z4fvv" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--z4fvv-eth0" Sep 13 00:54:43.422716 env[1311]: 2025-09-13 00:54:43.390 [INFO][4040] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali188b0193cdc ContainerID="c5f4492236b966a3b68323ac232cac32db0a5961c2502bc23efd3397f0c9e2ff" Namespace="calico-apiserver" Pod="calico-apiserver-86766b5d6c-z4fvv" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--z4fvv-eth0" Sep 13 00:54:43.422716 env[1311]: 2025-09-13 00:54:43.405 [INFO][4040] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c5f4492236b966a3b68323ac232cac32db0a5961c2502bc23efd3397f0c9e2ff" Namespace="calico-apiserver" Pod="calico-apiserver-86766b5d6c-z4fvv" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--z4fvv-eth0" Sep 13 
00:54:43.422716 env[1311]: 2025-09-13 00:54:43.407 [INFO][4040] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c5f4492236b966a3b68323ac232cac32db0a5961c2502bc23efd3397f0c9e2ff" Namespace="calico-apiserver" Pod="calico-apiserver-86766b5d6c-z4fvv" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--z4fvv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--z4fvv-eth0", GenerateName:"calico-apiserver-86766b5d6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"cec0420a-0ebf-4565-8d09-fd0c2c488b56", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86766b5d6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"c5f4492236b966a3b68323ac232cac32db0a5961c2502bc23efd3397f0c9e2ff", Pod:"calico-apiserver-86766b5d6c-z4fvv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali188b0193cdc", MAC:"4e:a5:0e:47:e7:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} 
Sep 13 00:54:43.422716 env[1311]: 2025-09-13 00:54:43.419 [INFO][4040] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c5f4492236b966a3b68323ac232cac32db0a5961c2502bc23efd3397f0c9e2ff" Namespace="calico-apiserver" Pod="calico-apiserver-86766b5d6c-z4fvv" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--z4fvv-eth0" Sep 13 00:54:43.447116 env[1311]: time="2025-09-13T00:54:43.447011703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:43.447435 env[1311]: time="2025-09-13T00:54:43.447361414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:43.447632 env[1311]: time="2025-09-13T00:54:43.447596548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:43.448251 env[1311]: time="2025-09-13T00:54:43.448183546Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c5f4492236b966a3b68323ac232cac32db0a5961c2502bc23efd3397f0c9e2ff pid=4092 runtime=io.containerd.runc.v2 Sep 13 00:54:43.529717 systemd-networkd[1061]: calif4817d98258: Link UP Sep 13 00:54:43.531811 systemd-networkd[1061]: calif4817d98258: Gained carrier Sep 13 00:54:43.532452 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif4817d98258: link becomes ready Sep 13 00:54:43.551578 systemd-networkd[1061]: cali95c6e3b965c: Gained IPv6LL Sep 13 00:54:43.560475 env[1311]: 2025-09-13 00:54:43.331 [INFO][4051] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:54:43.560475 env[1311]: 2025-09-13 00:54:43.342 [INFO][4051] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--b7c626372f-k8s-goldmane--7988f88666--zx8vp-eth0 goldmane-7988f88666- 
calico-system 86879210-53e1-4a0a-87e7-2bb62916a082 987 0 2025-09-13 00:54:20 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-3510.3.8-n-b7c626372f goldmane-7988f88666-zx8vp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif4817d98258 [] [] }} ContainerID="9c185c5c3091d41b0958832f84346a5f30bc047264086bcb0173c428c0ed1959" Namespace="calico-system" Pod="goldmane-7988f88666-zx8vp" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-goldmane--7988f88666--zx8vp-" Sep 13 00:54:43.560475 env[1311]: 2025-09-13 00:54:43.343 [INFO][4051] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9c185c5c3091d41b0958832f84346a5f30bc047264086bcb0173c428c0ed1959" Namespace="calico-system" Pod="goldmane-7988f88666-zx8vp" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-goldmane--7988f88666--zx8vp-eth0" Sep 13 00:54:43.560475 env[1311]: 2025-09-13 00:54:43.427 [INFO][4072] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9c185c5c3091d41b0958832f84346a5f30bc047264086bcb0173c428c0ed1959" HandleID="k8s-pod-network.9c185c5c3091d41b0958832f84346a5f30bc047264086bcb0173c428c0ed1959" Workload="ci--3510.3.8--n--b7c626372f-k8s-goldmane--7988f88666--zx8vp-eth0" Sep 13 00:54:43.560475 env[1311]: 2025-09-13 00:54:43.428 [INFO][4072] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9c185c5c3091d41b0958832f84346a5f30bc047264086bcb0173c428c0ed1959" HandleID="k8s-pod-network.9c185c5c3091d41b0958832f84346a5f30bc047264086bcb0173c428c0ed1959" Workload="ci--3510.3.8--n--b7c626372f-k8s-goldmane--7988f88666--zx8vp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cdb80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-b7c626372f", "pod":"goldmane-7988f88666-zx8vp", "timestamp":"2025-09-13 
00:54:43.427575846 +0000 UTC"}, Hostname:"ci-3510.3.8-n-b7c626372f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:54:43.560475 env[1311]: 2025-09-13 00:54:43.428 [INFO][4072] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:43.560475 env[1311]: 2025-09-13 00:54:43.428 [INFO][4072] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:43.560475 env[1311]: 2025-09-13 00:54:43.428 [INFO][4072] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-b7c626372f' Sep 13 00:54:43.560475 env[1311]: 2025-09-13 00:54:43.453 [INFO][4072] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9c185c5c3091d41b0958832f84346a5f30bc047264086bcb0173c428c0ed1959" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:43.560475 env[1311]: 2025-09-13 00:54:43.462 [INFO][4072] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:43.560475 env[1311]: 2025-09-13 00:54:43.472 [INFO][4072] ipam/ipam.go 511: Trying affinity for 192.168.56.0/26 host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:43.560475 env[1311]: 2025-09-13 00:54:43.475 [INFO][4072] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.0/26 host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:43.560475 env[1311]: 2025-09-13 00:54:43.478 [INFO][4072] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.0/26 host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:43.560475 env[1311]: 2025-09-13 00:54:43.478 [INFO][4072] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.56.0/26 handle="k8s-pod-network.9c185c5c3091d41b0958832f84346a5f30bc047264086bcb0173c428c0ed1959" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:43.560475 env[1311]: 2025-09-13 00:54:43.480 [INFO][4072] ipam/ipam.go 1764: Creating new 
handle: k8s-pod-network.9c185c5c3091d41b0958832f84346a5f30bc047264086bcb0173c428c0ed1959 Sep 13 00:54:43.560475 env[1311]: 2025-09-13 00:54:43.485 [INFO][4072] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.56.0/26 handle="k8s-pod-network.9c185c5c3091d41b0958832f84346a5f30bc047264086bcb0173c428c0ed1959" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:43.560475 env[1311]: 2025-09-13 00:54:43.497 [INFO][4072] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.56.6/26] block=192.168.56.0/26 handle="k8s-pod-network.9c185c5c3091d41b0958832f84346a5f30bc047264086bcb0173c428c0ed1959" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:43.560475 env[1311]: 2025-09-13 00:54:43.501 [INFO][4072] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.6/26] handle="k8s-pod-network.9c185c5c3091d41b0958832f84346a5f30bc047264086bcb0173c428c0ed1959" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:43.560475 env[1311]: 2025-09-13 00:54:43.501 [INFO][4072] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:54:43.560475 env[1311]: 2025-09-13 00:54:43.501 [INFO][4072] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.56.6/26] IPv6=[] ContainerID="9c185c5c3091d41b0958832f84346a5f30bc047264086bcb0173c428c0ed1959" HandleID="k8s-pod-network.9c185c5c3091d41b0958832f84346a5f30bc047264086bcb0173c428c0ed1959" Workload="ci--3510.3.8--n--b7c626372f-k8s-goldmane--7988f88666--zx8vp-eth0" Sep 13 00:54:43.561269 env[1311]: 2025-09-13 00:54:43.519 [INFO][4051] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9c185c5c3091d41b0958832f84346a5f30bc047264086bcb0173c428c0ed1959" Namespace="calico-system" Pod="goldmane-7988f88666-zx8vp" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-goldmane--7988f88666--zx8vp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-goldmane--7988f88666--zx8vp-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"86879210-53e1-4a0a-87e7-2bb62916a082", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"", Pod:"goldmane-7988f88666-zx8vp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.56.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"calif4817d98258", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:43.561269 env[1311]: 2025-09-13 00:54:43.522 [INFO][4051] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.6/32] ContainerID="9c185c5c3091d41b0958832f84346a5f30bc047264086bcb0173c428c0ed1959" Namespace="calico-system" Pod="goldmane-7988f88666-zx8vp" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-goldmane--7988f88666--zx8vp-eth0" Sep 13 00:54:43.561269 env[1311]: 2025-09-13 00:54:43.523 [INFO][4051] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif4817d98258 ContainerID="9c185c5c3091d41b0958832f84346a5f30bc047264086bcb0173c428c0ed1959" Namespace="calico-system" Pod="goldmane-7988f88666-zx8vp" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-goldmane--7988f88666--zx8vp-eth0" Sep 13 00:54:43.561269 env[1311]: 2025-09-13 00:54:43.532 [INFO][4051] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9c185c5c3091d41b0958832f84346a5f30bc047264086bcb0173c428c0ed1959" Namespace="calico-system" Pod="goldmane-7988f88666-zx8vp" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-goldmane--7988f88666--zx8vp-eth0" Sep 13 00:54:43.561269 env[1311]: 2025-09-13 00:54:43.533 [INFO][4051] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9c185c5c3091d41b0958832f84346a5f30bc047264086bcb0173c428c0ed1959" Namespace="calico-system" Pod="goldmane-7988f88666-zx8vp" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-goldmane--7988f88666--zx8vp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-goldmane--7988f88666--zx8vp-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"86879210-53e1-4a0a-87e7-2bb62916a082", 
ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"9c185c5c3091d41b0958832f84346a5f30bc047264086bcb0173c428c0ed1959", Pod:"goldmane-7988f88666-zx8vp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.56.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif4817d98258", MAC:"86:dd:39:5a:e2:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:43.561269 env[1311]: 2025-09-13 00:54:43.543 [INFO][4051] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9c185c5c3091d41b0958832f84346a5f30bc047264086bcb0173c428c0ed1959" Namespace="calico-system" Pod="goldmane-7988f88666-zx8vp" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-goldmane--7988f88666--zx8vp-eth0" Sep 13 00:54:43.577829 env[1311]: time="2025-09-13T00:54:43.577787074Z" level=info msg="StopPodSandbox for \"d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9\"" Sep 13 00:54:43.598755 env[1311]: time="2025-09-13T00:54:43.598655228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:43.598755 env[1311]: time="2025-09-13T00:54:43.598698153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:43.599033 env[1311]: time="2025-09-13T00:54:43.598709402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:43.599033 env[1311]: time="2025-09-13T00:54:43.598836691Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c185c5c3091d41b0958832f84346a5f30bc047264086bcb0173c428c0ed1959 pid=4133 runtime=io.containerd.runc.v2 Sep 13 00:54:43.604595 env[1311]: time="2025-09-13T00:54:43.604548582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86766b5d6c-z4fvv,Uid:cec0420a-0ebf-4565-8d09-fd0c2c488b56,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c5f4492236b966a3b68323ac232cac32db0a5961c2502bc23efd3397f0c9e2ff\"" Sep 13 00:54:43.622502 kubelet[2107]: I0913 00:54:43.622448 2107 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:54:43.624538 kubelet[2107]: E0913 00:54:43.624514 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:43.663487 systemd[1]: run-netns-cni\x2dbff4c645\x2d4f6f\x2d6326\x2dea95\x2dcaaf582a921c.mount: Deactivated successfully. Sep 13 00:54:43.663664 systemd[1]: run-netns-cni\x2d35f1a419\x2d1d6f\x2d9894\x2deb86\x2da0e3b5dd72fe.mount: Deactivated successfully. 
Sep 13 00:54:43.769860 kernel: kauditd_printk_skb: 25 callbacks suppressed Sep 13 00:54:43.770339 kernel: audit: type=1325 audit(1757724883.762:316): table=filter:99 family=2 entries=21 op=nft_register_rule pid=4179 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:43.770387 kernel: audit: type=1300 audit(1757724883.762:316): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc0fb29720 a2=0 a3=7ffc0fb2970c items=0 ppid=2209 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:43.770436 kernel: audit: type=1327 audit(1757724883.762:316): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:43.762000 audit[4179]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=4179 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:43.762000 audit[4179]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc0fb29720 a2=0 a3=7ffc0fb2970c items=0 ppid=2209 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:43.762000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:43.778901 kernel: audit: type=1325 audit(1757724883.772:317): table=nat:100 family=2 entries=19 op=nft_register_chain pid=4179 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:43.779027 kernel: audit: type=1300 audit(1757724883.772:317): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc0fb29720 a2=0 a3=7ffc0fb2970c items=0 ppid=2209 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:43.772000 audit[4179]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=4179 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:43.772000 audit[4179]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc0fb29720 a2=0 a3=7ffc0fb2970c items=0 ppid=2209 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:43.772000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:43.784509 kernel: audit: type=1327 audit(1757724883.772:317): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:43.806585 env[1311]: time="2025-09-13T00:54:43.806540450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-zx8vp,Uid:86879210-53e1-4a0a-87e7-2bb62916a082,Namespace:calico-system,Attempt:1,} returns sandbox id \"9c185c5c3091d41b0958832f84346a5f30bc047264086bcb0173c428c0ed1959\"" Sep 13 00:54:43.933175 env[1311]: 2025-09-13 00:54:43.815 [INFO][4158] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" Sep 13 00:54:43.933175 env[1311]: 2025-09-13 00:54:43.816 [INFO][4158] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" iface="eth0" netns="/var/run/netns/cni-5384e4b1-d588-b991-f7b4-03ca8db48fd0" Sep 13 00:54:43.933175 env[1311]: 2025-09-13 00:54:43.816 [INFO][4158] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" iface="eth0" netns="/var/run/netns/cni-5384e4b1-d588-b991-f7b4-03ca8db48fd0" Sep 13 00:54:43.933175 env[1311]: 2025-09-13 00:54:43.816 [INFO][4158] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" iface="eth0" netns="/var/run/netns/cni-5384e4b1-d588-b991-f7b4-03ca8db48fd0" Sep 13 00:54:43.933175 env[1311]: 2025-09-13 00:54:43.816 [INFO][4158] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" Sep 13 00:54:43.933175 env[1311]: 2025-09-13 00:54:43.816 [INFO][4158] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" Sep 13 00:54:43.933175 env[1311]: 2025-09-13 00:54:43.886 [INFO][4193] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" HandleID="k8s-pod-network.d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--6f24s-eth0" Sep 13 00:54:43.933175 env[1311]: 2025-09-13 00:54:43.886 [INFO][4193] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:43.933175 env[1311]: 2025-09-13 00:54:43.887 [INFO][4193] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:43.933175 env[1311]: 2025-09-13 00:54:43.897 [WARNING][4193] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" HandleID="k8s-pod-network.d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--6f24s-eth0" Sep 13 00:54:43.933175 env[1311]: 2025-09-13 00:54:43.897 [INFO][4193] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" HandleID="k8s-pod-network.d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--6f24s-eth0" Sep 13 00:54:43.933175 env[1311]: 2025-09-13 00:54:43.900 [INFO][4193] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:43.933175 env[1311]: 2025-09-13 00:54:43.916 [INFO][4158] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" Sep 13 00:54:43.931575 systemd[1]: run-netns-cni\x2d5384e4b1\x2dd588\x2db991\x2df7b4\x2d03ca8db48fd0.mount: Deactivated successfully. 
Sep 13 00:54:43.940853 env[1311]: time="2025-09-13T00:54:43.932622290Z" level=info msg="TearDown network for sandbox \"d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9\" successfully" Sep 13 00:54:43.941035 env[1311]: time="2025-09-13T00:54:43.940850922Z" level=info msg="StopPodSandbox for \"d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9\" returns successfully" Sep 13 00:54:43.947422 env[1311]: time="2025-09-13T00:54:43.945685434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86766b5d6c-6f24s,Uid:8833f507-515d-400e-9991-59b6f2cca14f,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:54:43.952844 kubelet[2107]: E0913 00:54:43.951970 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:43.957430 kubelet[2107]: E0913 00:54:43.957367 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:43.960664 kubelet[2107]: E0913 00:54:43.960632 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:43.999382 kubelet[2107]: I0913 00:54:43.998667 2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-rmqpc" podStartSLOduration=36.99864196 podStartE2EDuration="36.99864196s" podCreationTimestamp="2025-09-13 00:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:54:43.993022388 +0000 UTC m=+40.677160895" watchObservedRunningTime="2025-09-13 00:54:43.99864196 +0000 UTC m=+40.682780436" Sep 13 00:54:44.024153 kubelet[2107]: I0913 
00:54:44.023043 2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-b7mc7" podStartSLOduration=37.023024099 podStartE2EDuration="37.023024099s" podCreationTimestamp="2025-09-13 00:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:54:44.020507244 +0000 UTC m=+40.704645722" watchObservedRunningTime="2025-09-13 00:54:44.023024099 +0000 UTC m=+40.707162583" Sep 13 00:54:44.106044 kernel: audit: type=1325 audit(1757724884.095:318): table=filter:101 family=2 entries=20 op=nft_register_rule pid=4227 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:44.106186 kernel: audit: type=1300 audit(1757724884.095:318): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc211f05e0 a2=0 a3=7ffc211f05cc items=0 ppid=2209 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.106230 kernel: audit: type=1327 audit(1757724884.095:318): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:44.095000 audit[4227]: NETFILTER_CFG table=filter:101 family=2 entries=20 op=nft_register_rule pid=4227 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:44.095000 audit[4227]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc211f05e0 a2=0 a3=7ffc211f05cc items=0 ppid=2209 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.095000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 
00:54:44.106000 audit[4227]: NETFILTER_CFG table=nat:102 family=2 entries=14 op=nft_register_rule pid=4227 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:44.106000 audit[4227]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc211f05e0 a2=0 a3=0 items=0 ppid=2209 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.106000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:44.109524 kernel: audit: type=1325 audit(1757724884.106:319): table=nat:102 family=2 entries=14 op=nft_register_rule pid=4227 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:44.130879 systemd-networkd[1061]: calie41473a54b3: Gained IPv6LL Sep 13 00:54:44.318989 systemd-networkd[1061]: cali86973ae3863: Gained IPv6LL Sep 13 00:54:44.375167 systemd-networkd[1061]: cali4bb514ec3c3: Link UP Sep 13 00:54:44.379491 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali4bb514ec3c3: link becomes ready Sep 13 00:54:44.378543 systemd-networkd[1061]: cali4bb514ec3c3: Gained carrier Sep 13 00:54:44.403465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1488947193.mount: Deactivated successfully. 
Sep 13 00:54:44.415869 env[1311]: 2025-09-13 00:54:44.132 [INFO][4215] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:54:44.415869 env[1311]: 2025-09-13 00:54:44.154 [INFO][4215] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--6f24s-eth0 calico-apiserver-86766b5d6c- calico-apiserver 8833f507-515d-400e-9991-59b6f2cca14f 1007 0 2025-09-13 00:54:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86766b5d6c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-n-b7c626372f calico-apiserver-86766b5d6c-6f24s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4bb514ec3c3 [] [] }} ContainerID="e67427552c7c7dcf51716ea801a669faef787fe23059729f47e790dcac4c4acc" Namespace="calico-apiserver" Pod="calico-apiserver-86766b5d6c-6f24s" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--6f24s-" Sep 13 00:54:44.415869 env[1311]: 2025-09-13 00:54:44.154 [INFO][4215] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e67427552c7c7dcf51716ea801a669faef787fe23059729f47e790dcac4c4acc" Namespace="calico-apiserver" Pod="calico-apiserver-86766b5d6c-6f24s" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--6f24s-eth0" Sep 13 00:54:44.415869 env[1311]: 2025-09-13 00:54:44.274 [INFO][4230] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e67427552c7c7dcf51716ea801a669faef787fe23059729f47e790dcac4c4acc" HandleID="k8s-pod-network.e67427552c7c7dcf51716ea801a669faef787fe23059729f47e790dcac4c4acc" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--6f24s-eth0" Sep 13 00:54:44.415869 env[1311]: 2025-09-13 
00:54:44.278 [INFO][4230] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e67427552c7c7dcf51716ea801a669faef787fe23059729f47e790dcac4c4acc" HandleID="k8s-pod-network.e67427552c7c7dcf51716ea801a669faef787fe23059729f47e790dcac4c4acc" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--6f24s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000345730), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-n-b7c626372f", "pod":"calico-apiserver-86766b5d6c-6f24s", "timestamp":"2025-09-13 00:54:44.274612763 +0000 UTC"}, Hostname:"ci-3510.3.8-n-b7c626372f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:54:44.415869 env[1311]: 2025-09-13 00:54:44.279 [INFO][4230] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:44.415869 env[1311]: 2025-09-13 00:54:44.279 [INFO][4230] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:54:44.415869 env[1311]: 2025-09-13 00:54:44.279 [INFO][4230] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-b7c626372f' Sep 13 00:54:44.415869 env[1311]: 2025-09-13 00:54:44.302 [INFO][4230] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e67427552c7c7dcf51716ea801a669faef787fe23059729f47e790dcac4c4acc" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:44.415869 env[1311]: 2025-09-13 00:54:44.331 [INFO][4230] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:44.415869 env[1311]: 2025-09-13 00:54:44.340 [INFO][4230] ipam/ipam.go 511: Trying affinity for 192.168.56.0/26 host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:44.415869 env[1311]: 2025-09-13 00:54:44.345 [INFO][4230] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.0/26 host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:44.415869 env[1311]: 2025-09-13 00:54:44.349 [INFO][4230] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.0/26 host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:44.415869 env[1311]: 2025-09-13 00:54:44.349 [INFO][4230] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.56.0/26 handle="k8s-pod-network.e67427552c7c7dcf51716ea801a669faef787fe23059729f47e790dcac4c4acc" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:44.415869 env[1311]: 2025-09-13 00:54:44.351 [INFO][4230] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e67427552c7c7dcf51716ea801a669faef787fe23059729f47e790dcac4c4acc Sep 13 00:54:44.415869 env[1311]: 2025-09-13 00:54:44.357 [INFO][4230] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.56.0/26 handle="k8s-pod-network.e67427552c7c7dcf51716ea801a669faef787fe23059729f47e790dcac4c4acc" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:44.415869 env[1311]: 2025-09-13 00:54:44.364 [INFO][4230] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.56.7/26] block=192.168.56.0/26 
handle="k8s-pod-network.e67427552c7c7dcf51716ea801a669faef787fe23059729f47e790dcac4c4acc" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:44.415869 env[1311]: 2025-09-13 00:54:44.364 [INFO][4230] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.7/26] handle="k8s-pod-network.e67427552c7c7dcf51716ea801a669faef787fe23059729f47e790dcac4c4acc" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:44.415869 env[1311]: 2025-09-13 00:54:44.364 [INFO][4230] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:44.415869 env[1311]: 2025-09-13 00:54:44.364 [INFO][4230] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.56.7/26] IPv6=[] ContainerID="e67427552c7c7dcf51716ea801a669faef787fe23059729f47e790dcac4c4acc" HandleID="k8s-pod-network.e67427552c7c7dcf51716ea801a669faef787fe23059729f47e790dcac4c4acc" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--6f24s-eth0" Sep 13 00:54:44.416728 env[1311]: 2025-09-13 00:54:44.369 [INFO][4215] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e67427552c7c7dcf51716ea801a669faef787fe23059729f47e790dcac4c4acc" Namespace="calico-apiserver" Pod="calico-apiserver-86766b5d6c-6f24s" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--6f24s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--6f24s-eth0", GenerateName:"calico-apiserver-86766b5d6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"8833f507-515d-400e-9991-59b6f2cca14f", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86766b5d6c", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"", Pod:"calico-apiserver-86766b5d6c-6f24s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4bb514ec3c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:44.416728 env[1311]: 2025-09-13 00:54:44.369 [INFO][4215] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.7/32] ContainerID="e67427552c7c7dcf51716ea801a669faef787fe23059729f47e790dcac4c4acc" Namespace="calico-apiserver" Pod="calico-apiserver-86766b5d6c-6f24s" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--6f24s-eth0" Sep 13 00:54:44.416728 env[1311]: 2025-09-13 00:54:44.369 [INFO][4215] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4bb514ec3c3 ContainerID="e67427552c7c7dcf51716ea801a669faef787fe23059729f47e790dcac4c4acc" Namespace="calico-apiserver" Pod="calico-apiserver-86766b5d6c-6f24s" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--6f24s-eth0" Sep 13 00:54:44.416728 env[1311]: 2025-09-13 00:54:44.398 [INFO][4215] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e67427552c7c7dcf51716ea801a669faef787fe23059729f47e790dcac4c4acc" Namespace="calico-apiserver" Pod="calico-apiserver-86766b5d6c-6f24s" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--6f24s-eth0" Sep 13 
00:54:44.416728 env[1311]: 2025-09-13 00:54:44.398 [INFO][4215] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e67427552c7c7dcf51716ea801a669faef787fe23059729f47e790dcac4c4acc" Namespace="calico-apiserver" Pod="calico-apiserver-86766b5d6c-6f24s" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--6f24s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--6f24s-eth0", GenerateName:"calico-apiserver-86766b5d6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"8833f507-515d-400e-9991-59b6f2cca14f", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86766b5d6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"e67427552c7c7dcf51716ea801a669faef787fe23059729f47e790dcac4c4acc", Pod:"calico-apiserver-86766b5d6c-6f24s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4bb514ec3c3", MAC:"f2:81:4b:9f:59:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} 
Sep 13 00:54:44.416728 env[1311]: 2025-09-13 00:54:44.413 [INFO][4215] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e67427552c7c7dcf51716ea801a669faef787fe23059729f47e790dcac4c4acc" Namespace="calico-apiserver" Pod="calico-apiserver-86766b5d6c-6f24s" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--6f24s-eth0" Sep 13 00:54:44.423564 env[1311]: time="2025-09-13T00:54:44.423514087Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:44.424973 env[1311]: time="2025-09-13T00:54:44.424936143Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:44.426871 env[1311]: time="2025-09-13T00:54:44.426831426Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:44.428921 env[1311]: time="2025-09-13T00:54:44.428882427Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:44.429689 env[1311]: time="2025-09-13T00:54:44.429334158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:54:44.433861 env[1311]: time="2025-09-13T00:54:44.433367137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:54:44.435770 env[1311]: time="2025-09-13T00:54:44.435734305Z" level=info msg="CreateContainer within sandbox 
\"386645597e51da8d4834d3bf6470f05ffc673dcf48636f24f52fcae3b1453129\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:54:44.446295 env[1311]: time="2025-09-13T00:54:44.446174729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:44.446498 env[1311]: time="2025-09-13T00:54:44.446322927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:44.446498 env[1311]: time="2025-09-13T00:54:44.446359562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:44.446775 env[1311]: time="2025-09-13T00:54:44.446717062Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e67427552c7c7dcf51716ea801a669faef787fe23059729f47e790dcac4c4acc pid=4278 runtime=io.containerd.runc.v2 Sep 13 00:54:44.449168 env[1311]: time="2025-09-13T00:54:44.449102942Z" level=info msg="CreateContainer within sandbox \"386645597e51da8d4834d3bf6470f05ffc673dcf48636f24f52fcae3b1453129\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"4770a53c5d39169736f4ed83d760ae2219c3c11c251e2a9a7f1050a43aeddc23\"" Sep 13 00:54:44.452903 env[1311]: time="2025-09-13T00:54:44.452825855Z" level=info msg="StartContainer for \"4770a53c5d39169736f4ed83d760ae2219c3c11c251e2a9a7f1050a43aeddc23\"" Sep 13 00:54:44.568356 env[1311]: time="2025-09-13T00:54:44.568309524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86766b5d6c-6f24s,Uid:8833f507-515d-400e-9991-59b6f2cca14f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e67427552c7c7dcf51716ea801a669faef787fe23059729f47e790dcac4c4acc\"" Sep 13 00:54:44.631732 env[1311]: time="2025-09-13T00:54:44.631669428Z" level=info msg="StartContainer for 
\"4770a53c5d39169736f4ed83d760ae2219c3c11c251e2a9a7f1050a43aeddc23\" returns successfully" Sep 13 00:54:44.666000 audit[4353]: AVC avc: denied { bpf } for pid=4353 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.666000 audit[4353]: AVC avc: denied { bpf } for pid=4353 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.666000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.666000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.666000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.666000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.666000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.666000 audit[4353]: AVC avc: denied { bpf } for pid=4353 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.666000 audit[4353]: AVC avc: denied { bpf } for pid=4353 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 13 00:54:44.666000 audit: BPF prog-id=10 op=LOAD Sep 13 00:54:44.666000 audit[4353]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc004cf600 a2=98 a3=1fffffffffffffff items=0 ppid=4250 pid=4353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.666000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:54:44.667000 audit: BPF prog-id=10 op=UNLOAD Sep 13 00:54:44.667000 audit[4353]: AVC avc: denied { bpf } for pid=4353 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.667000 audit[4353]: AVC avc: denied { bpf } for pid=4353 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.667000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.667000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.667000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.667000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.667000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.667000 audit[4353]: AVC avc: denied { bpf } for pid=4353 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.667000 audit[4353]: AVC avc: denied { bpf } for pid=4353 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.667000 audit: BPF prog-id=11 op=LOAD Sep 13 00:54:44.667000 audit[4353]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc004cf4e0 a2=94 a3=3 items=0 ppid=4250 pid=4353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.667000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:54:44.668000 audit: BPF prog-id=11 op=UNLOAD Sep 13 00:54:44.668000 audit[4353]: AVC avc: denied { bpf } for pid=4353 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.668000 audit[4353]: AVC avc: denied { bpf } for pid=4353 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.668000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.668000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.668000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.668000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.668000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.668000 audit[4353]: AVC avc: denied { bpf } for pid=4353 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.668000 audit[4353]: AVC avc: denied { bpf } for pid=4353 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.668000 audit: BPF prog-id=12 op=LOAD Sep 13 00:54:44.668000 audit[4353]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc004cf520 a2=94 a3=7ffc004cf700 items=0 ppid=4250 pid=4353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.668000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:54:44.668000 audit: BPF prog-id=12 op=UNLOAD Sep 13 00:54:44.668000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.668000 audit[4353]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffc004cf5f0 a2=50 a3=a000000085 items=0 ppid=4250 pid=4353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.668000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:54:44.671000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.671000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.671000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.671000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 13 00:54:44.671000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.671000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.671000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.671000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.671000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.671000 audit: BPF prog-id=13 op=LOAD Sep 13 00:54:44.671000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff8187a8b0 a2=98 a3=3 items=0 ppid=4250 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.671000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:44.671000 audit: BPF prog-id=13 op=UNLOAD Sep 13 00:54:44.672000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.672000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.672000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.672000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.672000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.672000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.672000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.672000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.672000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.672000 audit: BPF prog-id=14 op=LOAD Sep 13 00:54:44.672000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff8187a6a0 a2=94 a3=54428f items=0 ppid=4250 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.672000 audit: 
PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:44.673000 audit: BPF prog-id=14 op=UNLOAD Sep 13 00:54:44.673000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.673000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.673000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.673000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.673000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.673000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.673000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.673000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.673000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.673000 audit: BPF prog-id=15 op=LOAD Sep 13 00:54:44.673000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff8187a6d0 a2=94 a3=2 items=0 ppid=4250 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.673000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:44.674000 audit: BPF prog-id=15 op=UNLOAD Sep 13 00:54:44.799000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.799000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.799000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.799000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.799000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.799000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.799000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.799000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.799000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.799000 audit: BPF prog-id=16 op=LOAD Sep 13 00:54:44.799000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff8187a590 a2=94 a3=1 items=0 ppid=4250 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.799000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:44.800000 audit: BPF prog-id=16 op=UNLOAD Sep 13 00:54:44.800000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.800000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fff8187a660 a2=50 a3=7fff8187a740 items=0 ppid=4250 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.800000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:44.811000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.811000 audit[4354]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=4 a0=12 a1=7fff8187a5a0 a2=28 a3=0 items=0 ppid=4250 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.811000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:44.812000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.812000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff8187a5d0 a2=28 a3=0 items=0 ppid=4250 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.812000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:44.812000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.812000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff8187a4e0 a2=28 a3=0 items=0 ppid=4250 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.812000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:44.812000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.812000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff8187a5f0 a2=28 a3=0 items=0 
ppid=4250 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.812000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:44.812000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.812000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff8187a5d0 a2=28 a3=0 items=0 ppid=4250 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.812000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:44.812000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.812000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff8187a5c0 a2=28 a3=0 items=0 ppid=4250 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.812000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:44.812000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.812000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff8187a5f0 a2=28 a3=0 items=0 ppid=4250 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.812000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:44.812000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.812000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff8187a5d0 a2=28 a3=0 items=0 ppid=4250 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.812000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:44.812000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.812000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff8187a5f0 a2=28 a3=0 items=0 ppid=4250 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.812000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:44.812000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.812000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff8187a5c0 a2=28 a3=0 items=0 ppid=4250 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.812000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:44.813000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.813000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff8187a630 a2=28 a3=0 items=0 ppid=4250 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.813000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:44.813000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.813000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff8187a3e0 a2=50 a3=1 items=0 ppid=4250 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.813000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:44.813000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.813000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.813000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.813000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.813000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.813000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.813000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.813000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.813000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.813000 audit: BPF prog-id=17 op=LOAD Sep 13 00:54:44.813000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff8187a3e0 a2=94 a3=5 items=0 ppid=4250 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.813000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:44.813000 audit: BPF prog-id=17 op=UNLOAD Sep 13 00:54:44.813000 audit[4354]: AVC avc: denied { perfmon } 
for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.813000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff8187a490 a2=50 a3=1 items=0 ppid=4250 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.813000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:44.813000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.813000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fff8187a5b0 a2=4 a3=38 items=0 ppid=4250 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.813000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:44.813000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.813000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.813000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.813000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.813000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.813000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.813000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.813000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.813000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.813000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.813000 audit[4354]: AVC avc: denied { confidentiality } for pid=4354 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:54:44.813000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff8187a600 a2=94 a3=6 items=0 ppid=4250 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.813000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:44.814000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.814000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.814000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.814000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.814000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.814000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.814000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.814000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.814000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 
00:54:44.814000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.814000 audit[4354]: AVC avc: denied { confidentiality } for pid=4354 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:54:44.814000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff81879db0 a2=94 a3=88 items=0 ppid=4250 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.814000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:44.814000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.814000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.814000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.814000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.814000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.814000 audit[4354]: AVC 
avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.814000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.814000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.814000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.814000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.814000 audit[4354]: AVC avc: denied { confidentiality } for pid=4354 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:54:44.814000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff81879db0 a2=94 a3=88 items=0 ppid=4250 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.814000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:54:44.835000 audit[4361]: AVC avc: denied { bpf } for pid=4361 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.835000 audit[4361]: AVC avc: denied { bpf } for 
pid=4361 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.835000 audit[4361]: AVC avc: denied { perfmon } for pid=4361 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.835000 audit[4361]: AVC avc: denied { perfmon } for pid=4361 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.835000 audit[4361]: AVC avc: denied { perfmon } for pid=4361 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.835000 audit[4361]: AVC avc: denied { perfmon } for pid=4361 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.835000 audit[4361]: AVC avc: denied { perfmon } for pid=4361 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.835000 audit[4361]: AVC avc: denied { bpf } for pid=4361 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.835000 audit[4361]: AVC avc: denied { bpf } for pid=4361 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.835000 audit: BPF prog-id=18 op=LOAD Sep 13 00:54:44.835000 audit[4361]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc03c508a0 a2=98 a3=1999999999999999 items=0 ppid=4250 pid=4361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.835000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 00:54:44.836000 audit: BPF prog-id=18 op=UNLOAD Sep 13 00:54:44.836000 audit[4361]: AVC avc: denied { bpf } for pid=4361 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.836000 audit[4361]: AVC avc: denied { bpf } for pid=4361 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.836000 audit[4361]: AVC avc: denied { perfmon } for pid=4361 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.836000 audit[4361]: AVC avc: denied { perfmon } for pid=4361 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.836000 audit[4361]: AVC avc: denied { perfmon } for pid=4361 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.836000 audit[4361]: AVC avc: denied { perfmon } for pid=4361 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.836000 audit[4361]: AVC avc: denied { perfmon } for pid=4361 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.836000 audit[4361]: AVC avc: denied { bpf } for 
pid=4361 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.836000 audit[4361]: AVC avc: denied { bpf } for pid=4361 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.836000 audit: BPF prog-id=19 op=LOAD Sep 13 00:54:44.836000 audit[4361]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc03c50780 a2=94 a3=ffff items=0 ppid=4250 pid=4361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.836000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 00:54:44.836000 audit: BPF prog-id=19 op=UNLOAD Sep 13 00:54:44.836000 audit[4361]: AVC avc: denied { bpf } for pid=4361 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.836000 audit[4361]: AVC avc: denied { bpf } for pid=4361 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.836000 audit[4361]: AVC avc: denied { perfmon } for pid=4361 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.836000 audit[4361]: AVC avc: denied { perfmon } for pid=4361 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.836000 audit[4361]: 
AVC avc: denied { perfmon } for pid=4361 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.836000 audit[4361]: AVC avc: denied { perfmon } for pid=4361 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.836000 audit[4361]: AVC avc: denied { perfmon } for pid=4361 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.836000 audit[4361]: AVC avc: denied { bpf } for pid=4361 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.836000 audit[4361]: AVC avc: denied { bpf } for pid=4361 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.836000 audit: BPF prog-id=20 op=LOAD Sep 13 00:54:44.836000 audit[4361]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc03c507c0 a2=94 a3=7ffc03c509a0 items=0 ppid=4250 pid=4361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.836000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 00:54:44.837000 audit: BPF prog-id=20 op=UNLOAD Sep 13 00:54:44.921429 systemd-networkd[1061]: vxlan.calico: Link UP Sep 13 00:54:44.921441 systemd-networkd[1061]: vxlan.calico: Gained carrier Sep 13 00:54:44.967000 audit[4386]: AVC avc: denied { bpf } 
for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.967000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.967000 audit[4386]: AVC avc: denied { perfmon } for pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.967000 audit[4386]: AVC avc: denied { perfmon } for pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.967000 audit[4386]: AVC avc: denied { perfmon } for pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.967000 audit[4386]: AVC avc: denied { perfmon } for pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.967000 audit[4386]: AVC avc: denied { perfmon } for pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.967000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.967000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.967000 audit: BPF prog-id=21 op=LOAD Sep 13 00:54:44.967000 audit[4386]: SYSCALL arch=c000003e syscall=321 
success=yes exit=3 a0=5 a1=7ffc597d2c50 a2=98 a3=0 items=0 ppid=4250 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.967000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:44.977000 audit: BPF prog-id=21 op=UNLOAD Sep 13 00:54:44.982462 kubelet[2107]: E0913 00:54:44.982164 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:44.983436 kubelet[2107]: E0913 00:54:44.983407 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:44.994000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.994000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.994000 audit[4386]: AVC avc: denied { perfmon } for pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.994000 audit[4386]: AVC avc: denied { perfmon } for pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.994000 audit[4386]: AVC avc: denied { perfmon } for 
pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.994000 audit[4386]: AVC avc: denied { perfmon } for pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.994000 audit[4386]: AVC avc: denied { perfmon } for pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.994000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.994000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.994000 audit: BPF prog-id=22 op=LOAD Sep 13 00:54:44.994000 audit[4386]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc597d2a60 a2=94 a3=54428f items=0 ppid=4250 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.994000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:44.996000 audit: BPF prog-id=22 op=UNLOAD Sep 13 00:54:44.996000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.996000 audit[4386]: AVC avc: denied { bpf } for pid=4386 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.996000 audit[4386]: AVC avc: denied { perfmon } for pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.996000 audit[4386]: AVC avc: denied { perfmon } for pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.996000 audit[4386]: AVC avc: denied { perfmon } for pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.996000 audit[4386]: AVC avc: denied { perfmon } for pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.996000 audit[4386]: AVC avc: denied { perfmon } for pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.996000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.996000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.996000 audit: BPF prog-id=23 op=LOAD Sep 13 00:54:44.996000 audit[4386]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc597d2a90 a2=94 a3=2 items=0 ppid=4250 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.996000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:44.997000 audit: BPF prog-id=23 op=UNLOAD Sep 13 00:54:44.997000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.997000 audit[4386]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc597d2960 a2=28 a3=0 items=0 ppid=4250 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.997000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:44.997000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.997000 audit[4386]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc597d2990 a2=28 a3=0 items=0 ppid=4250 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.997000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:44.997000 
audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.997000 audit[4386]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc597d28a0 a2=28 a3=0 items=0 ppid=4250 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.997000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:44.997000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.997000 audit[4386]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc597d29b0 a2=28 a3=0 items=0 ppid=4250 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.997000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:44.997000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.997000 audit[4386]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc597d2990 a2=28 a3=0 items=0 ppid=4250 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.997000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:44.997000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.997000 audit[4386]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc597d2980 a2=28 a3=0 items=0 ppid=4250 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.997000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:44.997000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.997000 audit[4386]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc597d29b0 a2=28 a3=0 items=0 ppid=4250 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.997000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 
00:54:44.997000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.997000 audit[4386]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc597d2990 a2=28 a3=0 items=0 ppid=4250 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.997000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:44.997000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.997000 audit[4386]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc597d29b0 a2=28 a3=0 items=0 ppid=4250 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.997000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:44.997000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.997000 audit[4386]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc597d2980 a2=28 a3=0 items=0 ppid=4250 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.997000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:44.997000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.997000 audit[4386]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc597d29f0 a2=28 a3=0 items=0 ppid=4250 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.997000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:44.998000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.998000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.998000 audit[4386]: AVC avc: denied { perfmon } for pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.998000 audit[4386]: AVC avc: denied { perfmon } for pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.998000 audit[4386]: AVC avc: denied { perfmon } for pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.998000 audit[4386]: AVC avc: denied { perfmon } for pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.998000 audit[4386]: AVC avc: denied { perfmon } for pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.998000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.998000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:44.998000 audit: BPF prog-id=24 op=LOAD Sep 13 00:54:44.998000 audit[4386]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc597d2860 a2=94 a3=0 items=0 ppid=4250 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:44.998000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:44.998000 audit: BPF prog-id=24 op=UNLOAD Sep 13 00:54:45.006000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.006000 audit[4386]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffc597d2850 a2=50 a3=2800 items=0 ppid=4250 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.006000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:45.009000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.009000 audit[4386]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffc597d2850 a2=50 a3=2800 items=0 ppid=4250 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.009000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:45.009000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.009000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.009000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.009000 audit[4386]: AVC avc: denied { perfmon } for pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.009000 audit[4386]: AVC avc: denied { perfmon } for pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.009000 audit[4386]: AVC avc: denied { perfmon } for pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.009000 audit[4386]: AVC avc: denied { perfmon } for pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.009000 audit[4386]: AVC avc: denied { perfmon } for pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.009000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.009000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.009000 audit: BPF prog-id=25 op=LOAD Sep 13 00:54:45.009000 audit[4386]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc597d2070 a2=94 a3=2 items=0 ppid=4250 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Sep 13 00:54:45.009000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:45.009000 audit: BPF prog-id=25 op=UNLOAD Sep 13 00:54:45.009000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.009000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.009000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.009000 audit[4386]: AVC avc: denied { perfmon } for pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.009000 audit[4386]: AVC avc: denied { perfmon } for pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.009000 audit[4386]: AVC avc: denied { perfmon } for pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.009000 audit[4386]: AVC avc: denied { perfmon } for pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.009000 audit[4386]: AVC avc: denied { perfmon } for pid=4386 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 13 00:54:45.009000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.009000 audit[4386]: AVC avc: denied { bpf } for pid=4386 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.009000 audit: BPF prog-id=26 op=LOAD Sep 13 00:54:45.009000 audit[4386]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc597d2170 a2=94 a3=30 items=0 ppid=4250 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.009000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:54:45.015000 audit[4391]: NETFILTER_CFG table=filter:103 family=2 entries=16 op=nft_register_rule pid=4391 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:45.015000 audit[4391]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fffc17cc500 a2=0 a3=7fffc17cc4ec items=0 ppid=2209 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.015000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:45.017000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Sep 13 00:54:45.017000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.017000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.017000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.017000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.017000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.017000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.017000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.017000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.017000 audit: BPF prog-id=27 op=LOAD Sep 13 00:54:45.017000 audit[4393]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd4bac6a0 a2=98 a3=0 items=0 ppid=4250 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.017000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:45.018000 audit: BPF prog-id=27 op=UNLOAD Sep 13 00:54:45.018000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.018000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.018000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.018000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.018000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.018000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.018000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.018000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.018000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.018000 audit: BPF prog-id=28 op=LOAD Sep 13 00:54:45.018000 audit[4393]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcd4bac490 a2=94 a3=54428f items=0 ppid=4250 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.018000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:45.018000 audit: BPF prog-id=28 op=UNLOAD Sep 13 00:54:45.018000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.018000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.018000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.018000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.018000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.018000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.018000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.018000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.018000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.018000 audit: BPF prog-id=29 op=LOAD Sep 13 00:54:45.018000 audit[4393]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcd4bac4c0 a2=94 a3=2 items=0 ppid=4250 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.018000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:45.018000 audit: BPF prog-id=29 op=UNLOAD Sep 13 00:54:45.039000 audit[4391]: NETFILTER_CFG table=nat:104 family=2 entries=54 op=nft_register_chain pid=4391 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:45.039000 audit[4391]: SYSCALL arch=c000003e syscall=46 success=yes exit=23436 a0=3 a1=7fffc17cc500 a2=0 a3=7fffc17cc4ec items=0 ppid=2209 pid=4391 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.039000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:45.180000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.180000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.180000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.180000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.180000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.180000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.180000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.180000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.180000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.180000 audit: BPF prog-id=30 op=LOAD Sep 13 00:54:45.180000 audit[4393]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcd4bac380 a2=94 a3=1 items=0 ppid=4250 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.180000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:45.185000 audit: BPF prog-id=30 op=UNLOAD Sep 13 00:54:45.185000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.185000 audit[4393]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffcd4bac450 a2=50 a3=7ffcd4bac530 items=0 ppid=4250 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.185000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:45.199000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.199000 audit[4393]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcd4bac390 a2=28 a3=0 items=0 ppid=4250 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.199000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:45.200000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.200000 audit[4393]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcd4bac3c0 a2=28 a3=0 items=0 ppid=4250 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.200000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:45.200000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.200000 audit[4393]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcd4bac2d0 a2=28 a3=0 items=0 ppid=4250 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.200000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:45.200000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.200000 audit[4393]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcd4bac3e0 a2=28 a3=0 items=0 ppid=4250 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.200000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:45.201000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.201000 audit[4393]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcd4bac3c0 a2=28 a3=0 items=0 ppid=4250 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.201000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:45.201000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.201000 audit[4393]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcd4bac3b0 a2=28 a3=0 items=0 ppid=4250 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.201000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:45.201000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.201000 audit[4393]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcd4bac3e0 a2=28 a3=0 items=0 ppid=4250 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.201000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:45.201000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.201000 audit[4393]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcd4bac3c0 a2=28 a3=0 items=0 ppid=4250 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.201000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:45.202000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.202000 audit[4393]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcd4bac3e0 a2=28 a3=0 items=0 ppid=4250 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.202000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:45.202000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.202000 audit[4393]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcd4bac3b0 a2=28 a3=0 items=0 ppid=4250 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.202000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:45.202000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.202000 audit[4393]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcd4bac420 a2=28 a3=0 items=0 ppid=4250 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.202000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:45.202000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.202000 audit[4393]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffcd4bac1d0 a2=50 a3=1 items=0 ppid=4250 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.202000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:45.203000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.203000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.203000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.203000 audit[4393]: AVC avc: denied { perfmon } for 
pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.203000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.203000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.203000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.203000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.203000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.203000 audit: BPF prog-id=31 op=LOAD Sep 13 00:54:45.203000 audit[4393]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcd4bac1d0 a2=94 a3=5 items=0 ppid=4250 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.203000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:45.203000 audit: BPF prog-id=31 op=UNLOAD Sep 13 00:54:45.203000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.203000 audit[4393]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffcd4bac280 a2=50 a3=1 items=0 ppid=4250 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.203000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:45.204000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.204000 audit[4393]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffcd4bac3a0 a2=4 a3=38 items=0 ppid=4250 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.204000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:45.204000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.204000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.204000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.204000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.204000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.204000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.204000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.204000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.204000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.204000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.204000 audit[4393]: AVC avc: denied { confidentiality } for pid=4393 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:54:45.204000 audit[4393]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffcd4bac3f0 a2=94 a3=6 items=0 
ppid=4250 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.204000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:45.205000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.205000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.205000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.205000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.205000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.205000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.205000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.205000 audit[4393]: AVC avc: denied { perfmon } for 
pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.205000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.205000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.205000 audit[4393]: AVC avc: denied { confidentiality } for pid=4393 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:54:45.205000 audit[4393]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffcd4babba0 a2=94 a3=88 items=0 ppid=4250 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.205000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:45.206000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.206000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.206000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 13 00:54:45.206000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.206000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.206000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.206000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.206000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.206000 audit[4393]: AVC avc: denied { perfmon } for pid=4393 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.206000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.206000 audit[4393]: AVC avc: denied { confidentiality } for pid=4393 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:54:45.206000 audit[4393]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffcd4babba0 a2=94 a3=88 items=0 ppid=4250 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.206000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:45.207000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.207000 audit[4393]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffcd4bad5d0 a2=10 a3=f8f00800 items=0 ppid=4250 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.207000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:45.207000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.207000 audit[4393]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffcd4bad470 a2=10 a3=3 items=0 ppid=4250 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.207000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:45.207000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.207000 audit[4393]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffcd4bad410 a2=10 a3=3 items=0 ppid=4250 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.207000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:45.208000 audit[4393]: AVC avc: denied { bpf } for pid=4393 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:45.208000 audit[4393]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffcd4bad410 a2=10 a3=7 items=0 ppid=4250 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.208000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:54:45.215000 audit: BPF prog-id=26 op=UNLOAD Sep 13 00:54:45.279538 systemd-networkd[1061]: cali188b0193cdc: Gained IPv6LL Sep 13 00:54:45.341000 audit[4428]: NETFILTER_CFG table=mangle:105 family=2 entries=16 op=nft_register_chain pid=4428 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:54:45.341000 audit[4428]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffc69b60d80 a2=0 a3=7ffc69b60d6c items=0 ppid=4250 pid=4428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.341000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:54:45.345000 audit[4427]: NETFILTER_CFG table=nat:106 family=2 entries=15 op=nft_register_chain pid=4427 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:54:45.345000 audit[4427]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe5fe72700 a2=0 a3=7ffe5fe726ec items=0 ppid=4250 pid=4427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.345000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:54:45.347000 audit[4425]: NETFILTER_CFG table=raw:107 family=2 entries=21 op=nft_register_chain pid=4425 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:54:45.347000 audit[4425]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc0436b770 a2=0 a3=7ffc0436b75c items=0 ppid=4250 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.347000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:54:45.359000 audit[4430]: NETFILTER_CFG table=filter:108 family=2 entries=287 op=nft_register_chain pid=4430 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:54:45.359000 audit[4430]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=171088 a0=3 a1=7ffff7d25fd0 a2=0 a3=7ffff7d25fbc items=0 ppid=4250 pid=4430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:45.359000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:54:45.471561 systemd-networkd[1061]: calif4817d98258: Gained IPv6LL Sep 13 00:54:45.578571 env[1311]: time="2025-09-13T00:54:45.578332284Z" level=info msg="StopPodSandbox for \"49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e\"" Sep 13 00:54:45.670674 kubelet[2107]: I0913 00:54:45.670223 2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-76fdf687db-wbs6w" podStartSLOduration=2.729748438 podStartE2EDuration="8.670196313s" podCreationTimestamp="2025-09-13 00:54:37 +0000 UTC" firstStartedPulling="2025-09-13 00:54:38.492664657 +0000 UTC m=+35.176803120" lastFinishedPulling="2025-09-13 00:54:44.433112519 +0000 UTC m=+41.117250995" observedRunningTime="2025-09-13 00:54:45.016676806 +0000 UTC m=+41.700815290" watchObservedRunningTime="2025-09-13 00:54:45.670196313 +0000 UTC m=+42.354334798" Sep 13 00:54:45.744447 env[1311]: 2025-09-13 00:54:45.666 [INFO][4450] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" Sep 13 00:54:45.744447 env[1311]: 2025-09-13 00:54:45.667 [INFO][4450] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" iface="eth0" netns="/var/run/netns/cni-395feeed-5101-a146-ab7a-020bb7ce6bd1" Sep 13 00:54:45.744447 env[1311]: 2025-09-13 00:54:45.671 [INFO][4450] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" iface="eth0" netns="/var/run/netns/cni-395feeed-5101-a146-ab7a-020bb7ce6bd1" Sep 13 00:54:45.744447 env[1311]: 2025-09-13 00:54:45.672 [INFO][4450] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" iface="eth0" netns="/var/run/netns/cni-395feeed-5101-a146-ab7a-020bb7ce6bd1" Sep 13 00:54:45.744447 env[1311]: 2025-09-13 00:54:45.672 [INFO][4450] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" Sep 13 00:54:45.744447 env[1311]: 2025-09-13 00:54:45.672 [INFO][4450] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" Sep 13 00:54:45.744447 env[1311]: 2025-09-13 00:54:45.722 [INFO][4457] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" HandleID="k8s-pod-network.49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" Workload="ci--3510.3.8--n--b7c626372f-k8s-csi--node--driver--4d8z6-eth0" Sep 13 00:54:45.744447 env[1311]: 2025-09-13 00:54:45.722 [INFO][4457] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:45.744447 env[1311]: 2025-09-13 00:54:45.722 [INFO][4457] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:45.744447 env[1311]: 2025-09-13 00:54:45.735 [WARNING][4457] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" HandleID="k8s-pod-network.49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" Workload="ci--3510.3.8--n--b7c626372f-k8s-csi--node--driver--4d8z6-eth0" Sep 13 00:54:45.744447 env[1311]: 2025-09-13 00:54:45.736 [INFO][4457] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" HandleID="k8s-pod-network.49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" Workload="ci--3510.3.8--n--b7c626372f-k8s-csi--node--driver--4d8z6-eth0" Sep 13 00:54:45.744447 env[1311]: 2025-09-13 00:54:45.737 [INFO][4457] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:45.744447 env[1311]: 2025-09-13 00:54:45.740 [INFO][4450] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" Sep 13 00:54:45.751311 systemd[1]: run-netns-cni\x2d395feeed\x2d5101\x2da146\x2dab7a\x2d020bb7ce6bd1.mount: Deactivated successfully. 
Sep 13 00:54:45.753625 env[1311]: time="2025-09-13T00:54:45.753566782Z" level=info msg="TearDown network for sandbox \"49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e\" successfully" Sep 13 00:54:45.753781 env[1311]: time="2025-09-13T00:54:45.753756725Z" level=info msg="StopPodSandbox for \"49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e\" returns successfully" Sep 13 00:54:45.754871 env[1311]: time="2025-09-13T00:54:45.754825225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4d8z6,Uid:5f057288-90ee-4889-a341-9af038f7cf7a,Namespace:calico-system,Attempt:1,}" Sep 13 00:54:45.953950 systemd-networkd[1061]: cali5287a7b0303: Link UP Sep 13 00:54:45.955542 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5287a7b0303: link becomes ready Sep 13 00:54:45.955077 systemd-networkd[1061]: cali5287a7b0303: Gained carrier Sep 13 00:54:45.989136 kubelet[2107]: E0913 00:54:45.989095 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:45.990276 kubelet[2107]: E0913 00:54:45.990250 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:54:45.999327 env[1311]: 2025-09-13 00:54:45.851 [INFO][4463] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--b7c626372f-k8s-csi--node--driver--4d8z6-eth0 csi-node-driver- calico-system 5f057288-90ee-4889-a341-9af038f7cf7a 1047 0 2025-09-13 00:54:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] 
map[] [] [] []} {k8s ci-3510.3.8-n-b7c626372f csi-node-driver-4d8z6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5287a7b0303 [] [] }} ContainerID="1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a" Namespace="calico-system" Pod="csi-node-driver-4d8z6" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-csi--node--driver--4d8z6-" Sep 13 00:54:45.999327 env[1311]: 2025-09-13 00:54:45.852 [INFO][4463] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a" Namespace="calico-system" Pod="csi-node-driver-4d8z6" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-csi--node--driver--4d8z6-eth0" Sep 13 00:54:45.999327 env[1311]: 2025-09-13 00:54:45.900 [INFO][4478] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a" HandleID="k8s-pod-network.1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a" Workload="ci--3510.3.8--n--b7c626372f-k8s-csi--node--driver--4d8z6-eth0" Sep 13 00:54:45.999327 env[1311]: 2025-09-13 00:54:45.900 [INFO][4478] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a" HandleID="k8s-pod-network.1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a" Workload="ci--3510.3.8--n--b7c626372f-k8s-csi--node--driver--4d8z6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00025aff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-b7c626372f", "pod":"csi-node-driver-4d8z6", "timestamp":"2025-09-13 00:54:45.90012201 +0000 UTC"}, Hostname:"ci-3510.3.8-n-b7c626372f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:54:45.999327 env[1311]: 
2025-09-13 00:54:45.900 [INFO][4478] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:54:45.999327 env[1311]: 2025-09-13 00:54:45.900 [INFO][4478] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:54:45.999327 env[1311]: 2025-09-13 00:54:45.900 [INFO][4478] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-b7c626372f' Sep 13 00:54:45.999327 env[1311]: 2025-09-13 00:54:45.911 [INFO][4478] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:45.999327 env[1311]: 2025-09-13 00:54:45.916 [INFO][4478] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:45.999327 env[1311]: 2025-09-13 00:54:45.921 [INFO][4478] ipam/ipam.go 511: Trying affinity for 192.168.56.0/26 host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:45.999327 env[1311]: 2025-09-13 00:54:45.923 [INFO][4478] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.0/26 host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:45.999327 env[1311]: 2025-09-13 00:54:45.926 [INFO][4478] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.0/26 host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:45.999327 env[1311]: 2025-09-13 00:54:45.926 [INFO][4478] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.56.0/26 handle="k8s-pod-network.1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:45.999327 env[1311]: 2025-09-13 00:54:45.929 [INFO][4478] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a Sep 13 00:54:45.999327 env[1311]: 2025-09-13 00:54:45.935 [INFO][4478] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.56.0/26 
handle="k8s-pod-network.1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:45.999327 env[1311]: 2025-09-13 00:54:45.943 [INFO][4478] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.56.8/26] block=192.168.56.0/26 handle="k8s-pod-network.1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:45.999327 env[1311]: 2025-09-13 00:54:45.943 [INFO][4478] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.8/26] handle="k8s-pod-network.1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a" host="ci-3510.3.8-n-b7c626372f" Sep 13 00:54:45.999327 env[1311]: 2025-09-13 00:54:45.943 [INFO][4478] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:54:45.999327 env[1311]: 2025-09-13 00:54:45.943 [INFO][4478] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.56.8/26] IPv6=[] ContainerID="1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a" HandleID="k8s-pod-network.1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a" Workload="ci--3510.3.8--n--b7c626372f-k8s-csi--node--driver--4d8z6-eth0" Sep 13 00:54:46.001006 env[1311]: 2025-09-13 00:54:45.946 [INFO][4463] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a" Namespace="calico-system" Pod="csi-node-driver-4d8z6" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-csi--node--driver--4d8z6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-csi--node--driver--4d8z6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5f057288-90ee-4889-a341-9af038f7cf7a", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 20, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"", Pod:"csi-node-driver-4d8z6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.56.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5287a7b0303", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:46.001006 env[1311]: 2025-09-13 00:54:45.946 [INFO][4463] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.8/32] ContainerID="1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a" Namespace="calico-system" Pod="csi-node-driver-4d8z6" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-csi--node--driver--4d8z6-eth0" Sep 13 00:54:46.001006 env[1311]: 2025-09-13 00:54:45.946 [INFO][4463] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5287a7b0303 ContainerID="1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a" Namespace="calico-system" Pod="csi-node-driver-4d8z6" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-csi--node--driver--4d8z6-eth0" Sep 13 00:54:46.001006 env[1311]: 2025-09-13 00:54:45.956 [INFO][4463] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a" 
Namespace="calico-system" Pod="csi-node-driver-4d8z6" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-csi--node--driver--4d8z6-eth0" Sep 13 00:54:46.001006 env[1311]: 2025-09-13 00:54:45.957 [INFO][4463] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a" Namespace="calico-system" Pod="csi-node-driver-4d8z6" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-csi--node--driver--4d8z6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-csi--node--driver--4d8z6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5f057288-90ee-4889-a341-9af038f7cf7a", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a", Pod:"csi-node-driver-4d8z6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.56.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5287a7b0303", MAC:"3a:5c:09:5b:37:f8", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:54:46.001006 env[1311]: 2025-09-13 00:54:45.983 [INFO][4463] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a" Namespace="calico-system" Pod="csi-node-driver-4d8z6" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-csi--node--driver--4d8z6-eth0" Sep 13 00:54:46.009000 audit[4490]: NETFILTER_CFG table=filter:109 family=2 entries=60 op=nft_register_chain pid=4490 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:54:46.009000 audit[4490]: SYSCALL arch=c000003e syscall=46 success=yes exit=26704 a0=3 a1=7ffeb0f90c40 a2=0 a3=7ffeb0f90c2c items=0 ppid=4250 pid=4490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:46.009000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:54:46.032034 env[1311]: time="2025-09-13T00:54:46.031955533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:46.032204 env[1311]: time="2025-09-13T00:54:46.032041497Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:46.032204 env[1311]: time="2025-09-13T00:54:46.032065073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:46.032273 env[1311]: time="2025-09-13T00:54:46.032206261Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a pid=4502 runtime=io.containerd.runc.v2 Sep 13 00:54:46.046579 systemd-networkd[1061]: vxlan.calico: Gained IPv6LL Sep 13 00:54:46.142898 env[1311]: time="2025-09-13T00:54:46.142830599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4d8z6,Uid:5f057288-90ee-4889-a341-9af038f7cf7a,Namespace:calico-system,Attempt:1,} returns sandbox id \"1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a\"" Sep 13 00:54:46.174622 systemd-networkd[1061]: cali4bb514ec3c3: Gained IPv6LL Sep 13 00:54:46.749364 systemd[1]: run-containerd-runc-k8s.io-1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a-runc.RfOPUS.mount: Deactivated successfully. Sep 13 00:54:47.071203 systemd-networkd[1061]: cali5287a7b0303: Gained IPv6LL Sep 13 00:54:47.380976 env[1311]: time="2025-09-13T00:54:47.380911145Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:47.382519 env[1311]: time="2025-09-13T00:54:47.382443434Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:47.384781 env[1311]: time="2025-09-13T00:54:47.384703898Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:47.386700 env[1311]: time="2025-09-13T00:54:47.386658170Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:47.387352 env[1311]: time="2025-09-13T00:54:47.387314542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 00:54:47.389705 env[1311]: time="2025-09-13T00:54:47.389664722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:54:47.424200 env[1311]: time="2025-09-13T00:54:47.424135942Z" level=info msg="CreateContainer within sandbox \"1cf1d70db7989f2bcf3b2f326255b411bfb3c2859e7412b999fe34836bc77ab7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:54:47.440881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2768242462.mount: Deactivated successfully. Sep 13 00:54:47.444874 env[1311]: time="2025-09-13T00:54:47.444807435Z" level=info msg="CreateContainer within sandbox \"1cf1d70db7989f2bcf3b2f326255b411bfb3c2859e7412b999fe34836bc77ab7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"df951fd2f48ec9d33b9b0137a1c3e2681a69e26a2cf3866709ed08fa01fcd896\"" Sep 13 00:54:47.445931 env[1311]: time="2025-09-13T00:54:47.445903783Z" level=info msg="StartContainer for \"df951fd2f48ec9d33b9b0137a1c3e2681a69e26a2cf3866709ed08fa01fcd896\"" Sep 13 00:54:47.539253 env[1311]: time="2025-09-13T00:54:47.538663246Z" level=info msg="StartContainer for \"df951fd2f48ec9d33b9b0137a1c3e2681a69e26a2cf3866709ed08fa01fcd896\" returns successfully" Sep 13 00:54:48.084612 systemd[1]: run-containerd-runc-k8s.io-df951fd2f48ec9d33b9b0137a1c3e2681a69e26a2cf3866709ed08fa01fcd896-runc.ZNJO4E.mount: Deactivated successfully. 
Sep 13 00:54:48.112365 kubelet[2107]: I0913 00:54:48.112252 2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-64dcf69d7d-d9zgr" podStartSLOduration=22.607068676 podStartE2EDuration="27.112220925s" podCreationTimestamp="2025-09-13 00:54:21 +0000 UTC" firstStartedPulling="2025-09-13 00:54:42.883747554 +0000 UTC m=+39.567886018" lastFinishedPulling="2025-09-13 00:54:47.388899805 +0000 UTC m=+44.073038267" observedRunningTime="2025-09-13 00:54:48.111953546 +0000 UTC m=+44.796092038" watchObservedRunningTime="2025-09-13 00:54:48.112220925 +0000 UTC m=+44.796359412" Sep 13 00:54:50.034678 env[1311]: time="2025-09-13T00:54:50.034619413Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:50.036159 env[1311]: time="2025-09-13T00:54:50.036118474Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:50.037759 env[1311]: time="2025-09-13T00:54:50.037718838Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:50.039256 env[1311]: time="2025-09-13T00:54:50.039216109Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:50.039846 env[1311]: time="2025-09-13T00:54:50.039809571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:54:50.042211 env[1311]: 
time="2025-09-13T00:54:50.041628145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:54:50.045743 env[1311]: time="2025-09-13T00:54:50.045563542Z" level=info msg="CreateContainer within sandbox \"c5f4492236b966a3b68323ac232cac32db0a5961c2502bc23efd3397f0c9e2ff\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:54:50.061569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3248572565.mount: Deactivated successfully. Sep 13 00:54:50.070433 env[1311]: time="2025-09-13T00:54:50.069285969Z" level=info msg="CreateContainer within sandbox \"c5f4492236b966a3b68323ac232cac32db0a5961c2502bc23efd3397f0c9e2ff\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1a51f4aa5e71b3f3cf9f834390bee34b1d35521e5d5b0e1397a61a4d3de43f63\"" Sep 13 00:54:50.072992 env[1311]: time="2025-09-13T00:54:50.072929777Z" level=info msg="StartContainer for \"1a51f4aa5e71b3f3cf9f834390bee34b1d35521e5d5b0e1397a61a4d3de43f63\"" Sep 13 00:54:50.113432 systemd[1]: run-containerd-runc-k8s.io-1a51f4aa5e71b3f3cf9f834390bee34b1d35521e5d5b0e1397a61a4d3de43f63-runc.xYQqYz.mount: Deactivated successfully. 
Sep 13 00:54:50.183169 env[1311]: time="2025-09-13T00:54:50.183116139Z" level=info msg="StartContainer for \"1a51f4aa5e71b3f3cf9f834390bee34b1d35521e5d5b0e1397a61a4d3de43f63\" returns successfully" Sep 13 00:54:51.089000 audit[4645]: NETFILTER_CFG table=filter:110 family=2 entries=12 op=nft_register_rule pid=4645 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:51.092311 kernel: kauditd_printk_skb: 533 callbacks suppressed Sep 13 00:54:51.093137 kernel: audit: type=1325 audit(1757724891.089:425): table=filter:110 family=2 entries=12 op=nft_register_rule pid=4645 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:51.089000 audit[4645]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffc815b8f30 a2=0 a3=7ffc815b8f1c items=0 ppid=2209 pid=4645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:51.097383 kernel: audit: type=1300 audit(1757724891.089:425): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffc815b8f30 a2=0 a3=7ffc815b8f1c items=0 ppid=2209 pid=4645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:51.089000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:51.101433 kernel: audit: type=1327 audit(1757724891.089:425): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:51.100000 audit[4645]: NETFILTER_CFG table=nat:111 family=2 entries=22 op=nft_register_rule pid=4645 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:51.100000 audit[4645]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=6540 a0=3 a1=7ffc815b8f30 a2=0 a3=7ffc815b8f1c items=0 ppid=2209 pid=4645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:51.107006 kernel: audit: type=1325 audit(1757724891.100:426): table=nat:111 family=2 entries=22 op=nft_register_rule pid=4645 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:51.107135 kernel: audit: type=1300 audit(1757724891.100:426): arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffc815b8f30 a2=0 a3=7ffc815b8f1c items=0 ppid=2209 pid=4645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:51.100000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:51.109442 kernel: audit: type=1327 audit(1757724891.100:426): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:51.518814 kubelet[2107]: I0913 00:54:51.518728 2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86766b5d6c-z4fvv" podStartSLOduration=28.08313689 podStartE2EDuration="34.517775597s" podCreationTimestamp="2025-09-13 00:54:17 +0000 UTC" firstStartedPulling="2025-09-13 00:54:43.606590681 +0000 UTC m=+40.290729144" lastFinishedPulling="2025-09-13 00:54:50.041229388 +0000 UTC m=+46.725367851" observedRunningTime="2025-09-13 00:54:51.040051308 +0000 UTC m=+47.724189792" watchObservedRunningTime="2025-09-13 00:54:51.517775597 +0000 UTC m=+48.201914080" Sep 13 00:54:51.607000 audit[4647]: NETFILTER_CFG table=filter:112 family=2 entries=11 op=nft_register_rule pid=4647 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Sep 13 00:54:51.611713 kernel: audit: type=1325 audit(1757724891.607:427): table=filter:112 family=2 entries=11 op=nft_register_rule pid=4647 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:51.621065 kernel: audit: type=1300 audit(1757724891.607:427): arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7fff11ea2160 a2=0 a3=7fff11ea214c items=0 ppid=2209 pid=4647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:51.621140 kernel: audit: type=1327 audit(1757724891.607:427): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:51.607000 audit[4647]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7fff11ea2160 a2=0 a3=7fff11ea214c items=0 ppid=2209 pid=4647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:51.607000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:51.623000 audit[4647]: NETFILTER_CFG table=nat:113 family=2 entries=29 op=nft_register_chain pid=4647 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:51.626569 kernel: audit: type=1325 audit(1757724891.623:428): table=nat:113 family=2 entries=29 op=nft_register_chain pid=4647 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:51.623000 audit[4647]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7fff11ea2160 a2=0 a3=7fff11ea214c items=0 ppid=2209 pid=4647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:51.623000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:52.738342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount15013038.mount: Deactivated successfully. Sep 13 00:54:53.558455 env[1311]: time="2025-09-13T00:54:53.558372967Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:53.560261 env[1311]: time="2025-09-13T00:54:53.560220724Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:53.562055 env[1311]: time="2025-09-13T00:54:53.562018261Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:53.563688 env[1311]: time="2025-09-13T00:54:53.563653119Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:53.564441 env[1311]: time="2025-09-13T00:54:53.564378889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 00:54:53.632323 env[1311]: time="2025-09-13T00:54:53.631590698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:54:53.702222 env[1311]: time="2025-09-13T00:54:53.702083145Z" level=info msg="CreateContainer within sandbox 
\"9c185c5c3091d41b0958832f84346a5f30bc047264086bcb0173c428c0ed1959\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:54:53.716329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4198211344.mount: Deactivated successfully. Sep 13 00:54:53.722100 env[1311]: time="2025-09-13T00:54:53.722052714Z" level=info msg="CreateContainer within sandbox \"9c185c5c3091d41b0958832f84346a5f30bc047264086bcb0173c428c0ed1959\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b2d7a1d81121a664219eb883213246ee5a3d80d6ac361ab333cef92789faf898\"" Sep 13 00:54:53.724750 env[1311]: time="2025-09-13T00:54:53.724690865Z" level=info msg="StartContainer for \"b2d7a1d81121a664219eb883213246ee5a3d80d6ac361ab333cef92789faf898\"" Sep 13 00:54:53.770380 systemd[1]: run-containerd-runc-k8s.io-b2d7a1d81121a664219eb883213246ee5a3d80d6ac361ab333cef92789faf898-runc.F4iLn9.mount: Deactivated successfully. Sep 13 00:54:53.841560 env[1311]: time="2025-09-13T00:54:53.841034684Z" level=info msg="StartContainer for \"b2d7a1d81121a664219eb883213246ee5a3d80d6ac361ab333cef92789faf898\" returns successfully" Sep 13 00:54:53.984168 env[1311]: time="2025-09-13T00:54:53.984078870Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:53.985438 env[1311]: time="2025-09-13T00:54:53.985374889Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:53.986605 env[1311]: time="2025-09-13T00:54:53.986562808Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:53.987951 env[1311]: time="2025-09-13T00:54:53.987917699Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:53.988961 env[1311]: time="2025-09-13T00:54:53.988910831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:54:53.990860 env[1311]: time="2025-09-13T00:54:53.990831110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:54:53.994122 env[1311]: time="2025-09-13T00:54:53.994079186Z" level=info msg="CreateContainer within sandbox \"e67427552c7c7dcf51716ea801a669faef787fe23059729f47e790dcac4c4acc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:54:54.012542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount733189699.mount: Deactivated successfully. Sep 13 00:54:54.017860 env[1311]: time="2025-09-13T00:54:54.017805985Z" level=info msg="CreateContainer within sandbox \"e67427552c7c7dcf51716ea801a669faef787fe23059729f47e790dcac4c4acc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8f5500dba4a955ffb651b9e0185b36755d6f425ce90e5221a7d005db02f72c13\"" Sep 13 00:54:54.018785 env[1311]: time="2025-09-13T00:54:54.018732000Z" level=info msg="StartContainer for \"8f5500dba4a955ffb651b9e0185b36755d6f425ce90e5221a7d005db02f72c13\"" Sep 13 00:54:54.112000 audit[4711]: NETFILTER_CFG table=filter:114 family=2 entries=10 op=nft_register_rule pid=4711 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:54.112000 audit[4711]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7fffa23a01d0 a2=0 a3=7fffa23a01bc items=0 ppid=2209 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:54.112000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:54.115684 kubelet[2107]: I0913 00:54:54.114549 2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-zx8vp" podStartSLOduration=24.279270722 podStartE2EDuration="34.097051656s" podCreationTimestamp="2025-09-13 00:54:20 +0000 UTC" firstStartedPulling="2025-09-13 00:54:43.808348501 +0000 UTC m=+40.492486967" lastFinishedPulling="2025-09-13 00:54:53.62612942 +0000 UTC m=+50.310267901" observedRunningTime="2025-09-13 00:54:54.070548524 +0000 UTC m=+50.754687008" watchObservedRunningTime="2025-09-13 00:54:54.097051656 +0000 UTC m=+50.781190135" Sep 13 00:54:54.120000 audit[4711]: NETFILTER_CFG table=nat:115 family=2 entries=24 op=nft_register_rule pid=4711 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:54.120000 audit[4711]: SYSCALL arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7fffa23a01d0 a2=0 a3=7fffa23a01bc items=0 ppid=2209 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:54.120000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:54.212492 env[1311]: time="2025-09-13T00:54:54.212444425Z" level=info msg="StartContainer for \"8f5500dba4a955ffb651b9e0185b36755d6f425ce90e5221a7d005db02f72c13\" returns successfully" Sep 13 00:54:55.080232 systemd[1]: run-containerd-runc-k8s.io-b2d7a1d81121a664219eb883213246ee5a3d80d6ac361ab333cef92789faf898-runc.vEhq4M.mount: Deactivated successfully. 
Sep 13 00:54:55.090000 audit[4755]: NETFILTER_CFG table=filter:116 family=2 entries=10 op=nft_register_rule pid=4755 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:55.090000 audit[4755]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffd15c3eb30 a2=0 a3=7ffd15c3eb1c items=0 ppid=2209 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:55.090000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:55.095000 audit[4755]: NETFILTER_CFG table=nat:117 family=2 entries=32 op=nft_register_rule pid=4755 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:54:55.095000 audit[4755]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7ffd15c3eb30 a2=0 a3=7ffd15c3eb1c items=0 ppid=2209 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:55.095000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:54:55.462246 env[1311]: time="2025-09-13T00:54:55.462185066Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:55.464102 env[1311]: time="2025-09-13T00:54:55.464049973Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:55.465987 env[1311]: time="2025-09-13T00:54:55.465931450Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:55.467960 env[1311]: time="2025-09-13T00:54:55.467913085Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:55.468510 env[1311]: time="2025-09-13T00:54:55.468151859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 00:54:55.474716 env[1311]: time="2025-09-13T00:54:55.474657093Z" level=info msg="CreateContainer within sandbox \"1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:54:55.505425 env[1311]: time="2025-09-13T00:54:55.505338991Z" level=info msg="CreateContainer within sandbox \"1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"39e366878fc553ba503f231a8da106219b200b7445a128eeea74d04a438f0d94\"" Sep 13 00:54:55.505959 env[1311]: time="2025-09-13T00:54:55.505921395Z" level=info msg="StartContainer for \"39e366878fc553ba503f231a8da106219b200b7445a128eeea74d04a438f0d94\"" Sep 13 00:54:55.607614 env[1311]: time="2025-09-13T00:54:55.607567686Z" level=info msg="StartContainer for \"39e366878fc553ba503f231a8da106219b200b7445a128eeea74d04a438f0d94\" returns successfully" Sep 13 00:54:55.609767 env[1311]: time="2025-09-13T00:54:55.609719374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:54:56.048244 kubelet[2107]: I0913 00:54:56.048183 2107 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:54:57.231442 env[1311]: 
time="2025-09-13T00:54:57.231359548Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:57.233570 env[1311]: time="2025-09-13T00:54:57.233525038Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:57.234938 env[1311]: time="2025-09-13T00:54:57.234911327Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:57.236124 env[1311]: time="2025-09-13T00:54:57.236096081Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:57.236893 env[1311]: time="2025-09-13T00:54:57.236860732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 13 00:54:57.241726 env[1311]: time="2025-09-13T00:54:57.241677537Z" level=info msg="CreateContainer within sandbox \"1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:54:57.260691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4282416544.mount: Deactivated successfully. 
Sep 13 00:54:57.263789 env[1311]: time="2025-09-13T00:54:57.263738605Z" level=info msg="CreateContainer within sandbox \"1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e3681399720b490cac820033590872f39cf183f1ca9825095e38e7f002ccf43c\"" Sep 13 00:54:57.266134 env[1311]: time="2025-09-13T00:54:57.266101756Z" level=info msg="StartContainer for \"e3681399720b490cac820033590872f39cf183f1ca9825095e38e7f002ccf43c\"" Sep 13 00:54:57.358430 env[1311]: time="2025-09-13T00:54:57.358368667Z" level=info msg="StartContainer for \"e3681399720b490cac820033590872f39cf183f1ca9825095e38e7f002ccf43c\" returns successfully" Sep 13 00:54:57.846886 systemd[1]: Started sshd@7-161.35.238.92:22-147.75.109.163:38308.service. Sep 13 00:54:57.855241 kernel: kauditd_printk_skb: 14 callbacks suppressed Sep 13 00:54:57.855450 kernel: audit: type=1130 audit(1757724897.847:433): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-161.35.238.92:22-147.75.109.163:38308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:57.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-161.35.238.92:22-147.75.109.163:38308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:54:57.869634 kubelet[2107]: I0913 00:54:57.863753 2107 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:54:57.886263 kubelet[2107]: I0913 00:54:57.886165 2107 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:54:57.996000 audit[4847]: USER_ACCT pid=4847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:54:57.998436 sshd[4847]: Accepted publickey for core from 147.75.109.163 port 38308 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:54:58.000503 kernel: audit: type=1101 audit(1757724897.996:434): pid=4847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:54:58.005000 audit[4847]: CRED_ACQ pid=4847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:54:58.011623 kernel: audit: type=1103 audit(1757724898.005:435): pid=4847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:54:58.012009 kernel: audit: type=1006 audit(1757724898.006:436): pid=4847 uid=0 subj=system_u:system_r:kernel_t:s0 
old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Sep 13 00:54:58.012071 kernel: audit: type=1300 audit(1757724898.006:436): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc66e9b000 a2=3 a3=0 items=0 ppid=1 pid=4847 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:58.006000 audit[4847]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc66e9b000 a2=3 a3=0 items=0 ppid=1 pid=4847 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:58.012225 sshd[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:58.006000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:54:58.017837 kernel: audit: type=1327 audit(1757724898.006:436): proctitle=737368643A20636F7265205B707269765D Sep 13 00:54:58.047467 systemd-logind[1291]: New session 8 of user core. Sep 13 00:54:58.048981 systemd[1]: Started session-8.scope. 
Sep 13 00:54:58.061000 audit[4847]: USER_START pid=4847 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:54:58.067760 kernel: audit: type=1105 audit(1757724898.061:437): pid=4847 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:54:58.074943 kernel: audit: type=1103 audit(1757724898.068:438): pid=4850 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:54:58.068000 audit[4850]: CRED_ACQ pid=4850 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:54:58.303662 kubelet[2107]: I0913 00:54:58.303568 2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86766b5d6c-6f24s" podStartSLOduration=31.883391848 podStartE2EDuration="41.303528379s" podCreationTimestamp="2025-09-13 00:54:17 +0000 UTC" firstStartedPulling="2025-09-13 00:54:44.570161499 +0000 UTC m=+41.254299961" lastFinishedPulling="2025-09-13 00:54:53.990298014 +0000 UTC m=+50.674436492" observedRunningTime="2025-09-13 00:54:55.060912529 +0000 UTC m=+51.745051021" watchObservedRunningTime="2025-09-13 00:54:58.303528379 +0000 UTC m=+54.987666865" Sep 13 00:54:58.304501 kubelet[2107]: I0913 00:54:58.304139 2107 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4d8z6" podStartSLOduration=27.216091672 podStartE2EDuration="38.304119388s" podCreationTimestamp="2025-09-13 00:54:20 +0000 UTC" firstStartedPulling="2025-09-13 00:54:46.150567799 +0000 UTC m=+42.834706266" lastFinishedPulling="2025-09-13 00:54:57.238595506 +0000 UTC m=+53.922733982" observedRunningTime="2025-09-13 00:54:58.278412903 +0000 UTC m=+54.962551384" watchObservedRunningTime="2025-09-13 00:54:58.304119388 +0000 UTC m=+54.988257872" Sep 13 00:54:58.889632 sshd[4847]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:58.891000 audit[4847]: USER_END pid=4847 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:54:58.895432 kernel: audit: type=1106 audit(1757724898.891:439): pid=4847 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:54:58.895000 audit[4847]: CRED_DISP pid=4847 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:54:58.899419 kernel: audit: type=1104 audit(1757724898.895:440): pid=4847 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:54:58.899944 systemd[1]: 
sshd@7-161.35.238.92:22-147.75.109.163:38308.service: Deactivated successfully. Sep 13 00:54:58.901768 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:54:58.902353 systemd-logind[1291]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:54:58.903627 systemd-logind[1291]: Removed session 8. Sep 13 00:54:58.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-161.35.238.92:22-147.75.109.163:38308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:03.569278 env[1311]: time="2025-09-13T00:55:03.569153055Z" level=info msg="StopPodSandbox for \"dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0\"" Sep 13 00:55:03.895958 systemd[1]: Started sshd@8-161.35.238.92:22-147.75.109.163:51964.service. Sep 13 00:55:03.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-161.35.238.92:22-147.75.109.163:51964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:03.897308 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:55:03.897453 kernel: audit: type=1130 audit(1757724903.895:442): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-161.35.238.92:22-147.75.109.163:51964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:04.118510 env[1311]: 2025-09-13 00:55:03.808 [WARNING][4895] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--b7mc7-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4a90c3d8-bff3-4795-ac7c-5bfe09cf7345", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"a31a3db8c09e904ba4239c61e3b5658b80f703093a75484b3dee14e7240215fb", Pod:"coredns-7c65d6cfc9-b7mc7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie41473a54b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:04.118510 env[1311]: 2025-09-13 
00:55:03.811 [INFO][4895] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" Sep 13 00:55:04.118510 env[1311]: 2025-09-13 00:55:03.811 [INFO][4895] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" iface="eth0" netns="" Sep 13 00:55:04.118510 env[1311]: 2025-09-13 00:55:03.811 [INFO][4895] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" Sep 13 00:55:04.118510 env[1311]: 2025-09-13 00:55:03.811 [INFO][4895] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" Sep 13 00:55:04.118510 env[1311]: 2025-09-13 00:55:04.069 [INFO][4902] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" HandleID="k8s-pod-network.dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" Workload="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--b7mc7-eth0" Sep 13 00:55:04.118510 env[1311]: 2025-09-13 00:55:04.073 [INFO][4902] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:04.118510 env[1311]: 2025-09-13 00:55:04.074 [INFO][4902] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:04.118510 env[1311]: 2025-09-13 00:55:04.107 [WARNING][4902] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" HandleID="k8s-pod-network.dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" Workload="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--b7mc7-eth0" Sep 13 00:55:04.118510 env[1311]: 2025-09-13 00:55:04.107 [INFO][4902] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" HandleID="k8s-pod-network.dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" Workload="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--b7mc7-eth0" Sep 13 00:55:04.118510 env[1311]: 2025-09-13 00:55:04.111 [INFO][4902] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:04.118510 env[1311]: 2025-09-13 00:55:04.114 [INFO][4895] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" Sep 13 00:55:04.118510 env[1311]: time="2025-09-13T00:55:04.118425942Z" level=info msg="TearDown network for sandbox \"dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0\" successfully" Sep 13 00:55:04.118510 env[1311]: time="2025-09-13T00:55:04.118473790Z" level=info msg="StopPodSandbox for \"dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0\" returns successfully" Sep 13 00:55:04.173000 audit[4906]: USER_ACCT pid=4906 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:04.178314 sshd[4906]: Accepted publickey for core from 147.75.109.163 port 51964 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:55:04.178925 kernel: audit: type=1101 audit(1757724904.173:443): pid=4906 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:04.179000 audit[4906]: CRED_ACQ pid=4906 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:04.182525 sshd[4906]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:04.187502 kernel: audit: type=1103 audit(1757724904.179:444): pid=4906 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:04.187641 kernel: audit: type=1006 audit(1757724904.179:445): pid=4906 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Sep 13 00:55:04.187671 kernel: audit: type=1300 audit(1757724904.179:445): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcc150bbd0 a2=3 a3=0 items=0 ppid=1 pid=4906 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.179000 audit[4906]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcc150bbd0 a2=3 a3=0 items=0 ppid=1 pid=4906 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.193521 kernel: audit: type=1327 audit(1757724904.179:445): proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:04.179000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:04.199526 systemd-logind[1291]: New session 9 of user core. 
Sep 13 00:55:04.201081 systemd[1]: Started session-9.scope. Sep 13 00:55:04.202699 env[1311]: time="2025-09-13T00:55:04.202568476Z" level=info msg="RemovePodSandbox for \"dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0\"" Sep 13 00:55:04.202699 env[1311]: time="2025-09-13T00:55:04.202631326Z" level=info msg="Forcibly stopping sandbox \"dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0\"" Sep 13 00:55:04.215000 audit[4906]: USER_START pid=4906 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:04.221512 kernel: audit: type=1105 audit(1757724904.215:446): pid=4906 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:04.223000 audit[4919]: CRED_ACQ pid=4919 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:04.229427 kernel: audit: type=1103 audit(1757724904.223:447): pid=4919 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:04.423382 env[1311]: 2025-09-13 00:55:04.325 [WARNING][4920] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--b7mc7-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4a90c3d8-bff3-4795-ac7c-5bfe09cf7345", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"a31a3db8c09e904ba4239c61e3b5658b80f703093a75484b3dee14e7240215fb", Pod:"coredns-7c65d6cfc9-b7mc7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie41473a54b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:04.423382 env[1311]: 2025-09-13 
00:55:04.326 [INFO][4920] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" Sep 13 00:55:04.423382 env[1311]: 2025-09-13 00:55:04.326 [INFO][4920] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" iface="eth0" netns="" Sep 13 00:55:04.423382 env[1311]: 2025-09-13 00:55:04.326 [INFO][4920] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" Sep 13 00:55:04.423382 env[1311]: 2025-09-13 00:55:04.326 [INFO][4920] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" Sep 13 00:55:04.423382 env[1311]: 2025-09-13 00:55:04.380 [INFO][4931] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" HandleID="k8s-pod-network.dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" Workload="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--b7mc7-eth0" Sep 13 00:55:04.423382 env[1311]: 2025-09-13 00:55:04.383 [INFO][4931] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:04.423382 env[1311]: 2025-09-13 00:55:04.383 [INFO][4931] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:04.423382 env[1311]: 2025-09-13 00:55:04.397 [WARNING][4931] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" HandleID="k8s-pod-network.dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" Workload="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--b7mc7-eth0" Sep 13 00:55:04.423382 env[1311]: 2025-09-13 00:55:04.397 [INFO][4931] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" HandleID="k8s-pod-network.dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" Workload="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--b7mc7-eth0" Sep 13 00:55:04.423382 env[1311]: 2025-09-13 00:55:04.403 [INFO][4931] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:04.423382 env[1311]: 2025-09-13 00:55:04.416 [INFO][4920] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0" Sep 13 00:55:04.426471 env[1311]: time="2025-09-13T00:55:04.423718302Z" level=info msg="TearDown network for sandbox \"dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0\" successfully" Sep 13 00:55:04.450153 env[1311]: time="2025-09-13T00:55:04.449485733Z" level=info msg="RemovePodSandbox \"dce5823e2a3ad3ef6083d83a09f7475590bac2a5825bcfa5a469a5f522de00b0\" returns successfully" Sep 13 00:55:04.453717 env[1311]: time="2025-09-13T00:55:04.453677441Z" level=info msg="StopPodSandbox for \"17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da\"" Sep 13 00:55:04.641713 env[1311]: 2025-09-13 00:55:04.542 [WARNING][4948] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-calico--kube--controllers--64dcf69d7d--d9zgr-eth0", GenerateName:"calico-kube-controllers-64dcf69d7d-", Namespace:"calico-system", SelfLink:"", UID:"e2eac8c3-d7f4-4255-85a6-44ee22635692", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64dcf69d7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"1cf1d70db7989f2bcf3b2f326255b411bfb3c2859e7412b999fe34836bc77ab7", Pod:"calico-kube-controllers-64dcf69d7d-d9zgr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.56.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali86973ae3863", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:04.641713 env[1311]: 2025-09-13 00:55:04.543 [INFO][4948] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" Sep 13 00:55:04.641713 env[1311]: 2025-09-13 00:55:04.543 [INFO][4948] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" iface="eth0" netns="" Sep 13 00:55:04.641713 env[1311]: 2025-09-13 00:55:04.543 [INFO][4948] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" Sep 13 00:55:04.641713 env[1311]: 2025-09-13 00:55:04.543 [INFO][4948] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" Sep 13 00:55:04.641713 env[1311]: 2025-09-13 00:55:04.600 [INFO][4955] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" HandleID="k8s-pod-network.17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--kube--controllers--64dcf69d7d--d9zgr-eth0" Sep 13 00:55:04.641713 env[1311]: 2025-09-13 00:55:04.601 [INFO][4955] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:04.641713 env[1311]: 2025-09-13 00:55:04.601 [INFO][4955] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:04.641713 env[1311]: 2025-09-13 00:55:04.624 [WARNING][4955] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" HandleID="k8s-pod-network.17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--kube--controllers--64dcf69d7d--d9zgr-eth0" Sep 13 00:55:04.641713 env[1311]: 2025-09-13 00:55:04.624 [INFO][4955] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" HandleID="k8s-pod-network.17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--kube--controllers--64dcf69d7d--d9zgr-eth0" Sep 13 00:55:04.641713 env[1311]: 2025-09-13 00:55:04.629 [INFO][4955] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:04.641713 env[1311]: 2025-09-13 00:55:04.637 [INFO][4948] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" Sep 13 00:55:04.642829 env[1311]: time="2025-09-13T00:55:04.641751890Z" level=info msg="TearDown network for sandbox \"17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da\" successfully" Sep 13 00:55:04.642829 env[1311]: time="2025-09-13T00:55:04.642795557Z" level=info msg="StopPodSandbox for \"17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da\" returns successfully" Sep 13 00:55:04.690292 env[1311]: time="2025-09-13T00:55:04.688879921Z" level=info msg="RemovePodSandbox for \"17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da\"" Sep 13 00:55:04.690292 env[1311]: time="2025-09-13T00:55:04.688954252Z" level=info msg="Forcibly stopping sandbox \"17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da\"" Sep 13 00:55:04.843071 env[1311]: 2025-09-13 00:55:04.756 [WARNING][4970] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-calico--kube--controllers--64dcf69d7d--d9zgr-eth0", GenerateName:"calico-kube-controllers-64dcf69d7d-", Namespace:"calico-system", SelfLink:"", UID:"e2eac8c3-d7f4-4255-85a6-44ee22635692", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64dcf69d7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"1cf1d70db7989f2bcf3b2f326255b411bfb3c2859e7412b999fe34836bc77ab7", Pod:"calico-kube-controllers-64dcf69d7d-d9zgr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.56.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali86973ae3863", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:04.843071 env[1311]: 2025-09-13 00:55:04.759 [INFO][4970] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" Sep 13 00:55:04.843071 env[1311]: 2025-09-13 00:55:04.759 [INFO][4970] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" iface="eth0" netns="" Sep 13 00:55:04.843071 env[1311]: 2025-09-13 00:55:04.759 [INFO][4970] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" Sep 13 00:55:04.843071 env[1311]: 2025-09-13 00:55:04.759 [INFO][4970] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" Sep 13 00:55:04.843071 env[1311]: 2025-09-13 00:55:04.816 [INFO][4977] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" HandleID="k8s-pod-network.17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--kube--controllers--64dcf69d7d--d9zgr-eth0" Sep 13 00:55:04.843071 env[1311]: 2025-09-13 00:55:04.816 [INFO][4977] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:04.843071 env[1311]: 2025-09-13 00:55:04.816 [INFO][4977] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:04.843071 env[1311]: 2025-09-13 00:55:04.827 [WARNING][4977] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" HandleID="k8s-pod-network.17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--kube--controllers--64dcf69d7d--d9zgr-eth0" Sep 13 00:55:04.843071 env[1311]: 2025-09-13 00:55:04.827 [INFO][4977] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" HandleID="k8s-pod-network.17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--kube--controllers--64dcf69d7d--d9zgr-eth0" Sep 13 00:55:04.843071 env[1311]: 2025-09-13 00:55:04.831 [INFO][4977] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:04.843071 env[1311]: 2025-09-13 00:55:04.834 [INFO][4970] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da" Sep 13 00:55:04.845465 env[1311]: time="2025-09-13T00:55:04.843046916Z" level=info msg="TearDown network for sandbox \"17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da\" successfully" Sep 13 00:55:04.852183 env[1311]: time="2025-09-13T00:55:04.851246796Z" level=info msg="RemovePodSandbox \"17e85275e046699fe5ae178f6a9c88201dfa51e0654d37ab2f18dcc8bce773da\" returns successfully" Sep 13 00:55:04.855736 env[1311]: time="2025-09-13T00:55:04.855665542Z" level=info msg="StopPodSandbox for \"d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb\"" Sep 13 00:55:05.043466 sshd[4906]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:05.044000 audit[4906]: USER_END pid=4906 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' 
Sep 13 00:55:05.044000 audit[4906]: CRED_DISP pid=4906 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:05.050798 env[1311]: 2025-09-13 00:55:04.987 [WARNING][4993] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--rmqpc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c733dfc1-fe7d-49df-84e8-9292d570b93c", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"5e5fb9471ddc2133ac36e9729a3337c252dd89d4ff899074de729a92a4683cdb", Pod:"coredns-7c65d6cfc9-rmqpc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95c6e3b965c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, 
Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:05.050798 env[1311]: 2025-09-13 00:55:04.987 [INFO][4993] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" Sep 13 00:55:05.050798 env[1311]: 2025-09-13 00:55:04.987 [INFO][4993] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" iface="eth0" netns="" Sep 13 00:55:05.050798 env[1311]: 2025-09-13 00:55:04.987 [INFO][4993] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" Sep 13 00:55:05.050798 env[1311]: 2025-09-13 00:55:04.987 [INFO][4993] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" Sep 13 00:55:05.050798 env[1311]: 2025-09-13 00:55:05.024 [INFO][5000] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" HandleID="k8s-pod-network.d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" Workload="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--rmqpc-eth0" Sep 13 00:55:05.050798 env[1311]: 2025-09-13 00:55:05.026 [INFO][5000] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:05.050798 env[1311]: 2025-09-13 00:55:05.026 [INFO][5000] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:55:05.050798 env[1311]: 2025-09-13 00:55:05.035 [WARNING][5000] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" HandleID="k8s-pod-network.d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" Workload="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--rmqpc-eth0" Sep 13 00:55:05.050798 env[1311]: 2025-09-13 00:55:05.036 [INFO][5000] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" HandleID="k8s-pod-network.d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" Workload="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--rmqpc-eth0" Sep 13 00:55:05.050798 env[1311]: 2025-09-13 00:55:05.040 [INFO][5000] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:05.050798 env[1311]: 2025-09-13 00:55:05.042 [INFO][4993] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" Sep 13 00:55:05.051427 env[1311]: time="2025-09-13T00:55:05.050583797Z" level=info msg="TearDown network for sandbox \"d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb\" successfully" Sep 13 00:55:05.051427 env[1311]: time="2025-09-13T00:55:05.051122251Z" level=info msg="StopPodSandbox for \"d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb\" returns successfully" Sep 13 00:55:05.052037 kernel: audit: type=1106 audit(1757724905.044:448): pid=4906 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:05.053599 kernel: audit: type=1104 audit(1757724905.044:449): pid=4906 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:05.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-161.35.238.92:22-147.75.109.163:51964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:05.052787 systemd[1]: sshd@8-161.35.238.92:22-147.75.109.163:51964.service: Deactivated successfully. Sep 13 00:55:05.053874 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:55:05.055009 systemd-logind[1291]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:55:05.057084 systemd-logind[1291]: Removed session 9. 
Sep 13 00:55:05.058601 env[1311]: time="2025-09-13T00:55:05.058389422Z" level=info msg="RemovePodSandbox for \"d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb\"" Sep 13 00:55:05.059266 env[1311]: time="2025-09-13T00:55:05.059200193Z" level=info msg="Forcibly stopping sandbox \"d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb\"" Sep 13 00:55:05.173487 env[1311]: 2025-09-13 00:55:05.110 [WARNING][5016] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--rmqpc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c733dfc1-fe7d-49df-84e8-9292d570b93c", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"5e5fb9471ddc2133ac36e9729a3337c252dd89d4ff899074de729a92a4683cdb", Pod:"coredns-7c65d6cfc9-rmqpc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95c6e3b965c", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:05.173487 env[1311]: 2025-09-13 00:55:05.112 [INFO][5016] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" Sep 13 00:55:05.173487 env[1311]: 2025-09-13 00:55:05.112 [INFO][5016] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" iface="eth0" netns="" Sep 13 00:55:05.173487 env[1311]: 2025-09-13 00:55:05.112 [INFO][5016] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" Sep 13 00:55:05.173487 env[1311]: 2025-09-13 00:55:05.112 [INFO][5016] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" Sep 13 00:55:05.173487 env[1311]: 2025-09-13 00:55:05.153 [INFO][5023] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" HandleID="k8s-pod-network.d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" Workload="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--rmqpc-eth0" Sep 13 00:55:05.173487 env[1311]: 2025-09-13 00:55:05.153 [INFO][5023] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:55:05.173487 env[1311]: 2025-09-13 00:55:05.153 [INFO][5023] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:05.173487 env[1311]: 2025-09-13 00:55:05.163 [WARNING][5023] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" HandleID="k8s-pod-network.d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" Workload="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--rmqpc-eth0" Sep 13 00:55:05.173487 env[1311]: 2025-09-13 00:55:05.163 [INFO][5023] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" HandleID="k8s-pod-network.d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" Workload="ci--3510.3.8--n--b7c626372f-k8s-coredns--7c65d6cfc9--rmqpc-eth0" Sep 13 00:55:05.173487 env[1311]: 2025-09-13 00:55:05.166 [INFO][5023] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:05.173487 env[1311]: 2025-09-13 00:55:05.170 [INFO][5016] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb" Sep 13 00:55:05.175263 env[1311]: time="2025-09-13T00:55:05.174047869Z" level=info msg="TearDown network for sandbox \"d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb\" successfully" Sep 13 00:55:05.179870 env[1311]: time="2025-09-13T00:55:05.179563802Z" level=info msg="RemovePodSandbox \"d41664156dba282c2cce7d2b243856dbd4408a95c8ceb17921f6cf64e66f45eb\" returns successfully" Sep 13 00:55:05.181904 env[1311]: time="2025-09-13T00:55:05.181864059Z" level=info msg="StopPodSandbox for \"49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e\"" Sep 13 00:55:05.309008 env[1311]: 2025-09-13 00:55:05.247 [WARNING][5038] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-csi--node--driver--4d8z6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5f057288-90ee-4889-a341-9af038f7cf7a", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a", Pod:"csi-node-driver-4d8z6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.56.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5287a7b0303", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:05.309008 env[1311]: 2025-09-13 00:55:05.255 [INFO][5038] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" Sep 13 00:55:05.309008 env[1311]: 2025-09-13 00:55:05.255 [INFO][5038] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" iface="eth0" netns="" Sep 13 00:55:05.309008 env[1311]: 2025-09-13 00:55:05.255 [INFO][5038] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" Sep 13 00:55:05.309008 env[1311]: 2025-09-13 00:55:05.255 [INFO][5038] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" Sep 13 00:55:05.309008 env[1311]: 2025-09-13 00:55:05.292 [INFO][5045] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" HandleID="k8s-pod-network.49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" Workload="ci--3510.3.8--n--b7c626372f-k8s-csi--node--driver--4d8z6-eth0" Sep 13 00:55:05.309008 env[1311]: 2025-09-13 00:55:05.292 [INFO][5045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:05.309008 env[1311]: 2025-09-13 00:55:05.292 [INFO][5045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:05.309008 env[1311]: 2025-09-13 00:55:05.301 [WARNING][5045] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" HandleID="k8s-pod-network.49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" Workload="ci--3510.3.8--n--b7c626372f-k8s-csi--node--driver--4d8z6-eth0" Sep 13 00:55:05.309008 env[1311]: 2025-09-13 00:55:05.301 [INFO][5045] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" HandleID="k8s-pod-network.49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" Workload="ci--3510.3.8--n--b7c626372f-k8s-csi--node--driver--4d8z6-eth0" Sep 13 00:55:05.309008 env[1311]: 2025-09-13 00:55:05.304 [INFO][5045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:05.309008 env[1311]: 2025-09-13 00:55:05.306 [INFO][5038] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" Sep 13 00:55:05.309853 env[1311]: time="2025-09-13T00:55:05.309801817Z" level=info msg="TearDown network for sandbox \"49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e\" successfully" Sep 13 00:55:05.309959 env[1311]: time="2025-09-13T00:55:05.309938145Z" level=info msg="StopPodSandbox for \"49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e\" returns successfully" Sep 13 00:55:05.310823 env[1311]: time="2025-09-13T00:55:05.310795811Z" level=info msg="RemovePodSandbox for \"49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e\"" Sep 13 00:55:05.311027 env[1311]: time="2025-09-13T00:55:05.310978458Z" level=info msg="Forcibly stopping sandbox \"49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e\"" Sep 13 00:55:05.436427 env[1311]: 2025-09-13 00:55:05.363 [WARNING][5059] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-csi--node--driver--4d8z6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5f057288-90ee-4889-a341-9af038f7cf7a", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"1b5a950984e07eb9aa277a9cef59f061340a2404d39240265cd9e77b3b06574a", Pod:"csi-node-driver-4d8z6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.56.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5287a7b0303", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:05.436427 env[1311]: 2025-09-13 00:55:05.363 [INFO][5059] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" Sep 13 00:55:05.436427 env[1311]: 2025-09-13 00:55:05.363 [INFO][5059] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" iface="eth0" netns="" Sep 13 00:55:05.436427 env[1311]: 2025-09-13 00:55:05.363 [INFO][5059] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" Sep 13 00:55:05.436427 env[1311]: 2025-09-13 00:55:05.363 [INFO][5059] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" Sep 13 00:55:05.436427 env[1311]: 2025-09-13 00:55:05.416 [INFO][5068] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" HandleID="k8s-pod-network.49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" Workload="ci--3510.3.8--n--b7c626372f-k8s-csi--node--driver--4d8z6-eth0" Sep 13 00:55:05.436427 env[1311]: 2025-09-13 00:55:05.416 [INFO][5068] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:05.436427 env[1311]: 2025-09-13 00:55:05.417 [INFO][5068] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:05.436427 env[1311]: 2025-09-13 00:55:05.425 [WARNING][5068] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" HandleID="k8s-pod-network.49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" Workload="ci--3510.3.8--n--b7c626372f-k8s-csi--node--driver--4d8z6-eth0" Sep 13 00:55:05.436427 env[1311]: 2025-09-13 00:55:05.425 [INFO][5068] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" HandleID="k8s-pod-network.49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" Workload="ci--3510.3.8--n--b7c626372f-k8s-csi--node--driver--4d8z6-eth0" Sep 13 00:55:05.436427 env[1311]: 2025-09-13 00:55:05.427 [INFO][5068] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:05.436427 env[1311]: 2025-09-13 00:55:05.432 [INFO][5059] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e" Sep 13 00:55:05.438315 env[1311]: time="2025-09-13T00:55:05.438259657Z" level=info msg="TearDown network for sandbox \"49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e\" successfully" Sep 13 00:55:05.443212 env[1311]: time="2025-09-13T00:55:05.443145995Z" level=info msg="RemovePodSandbox \"49f2f4dddee73ba08bef38a5fc0e28a682942086a25254ebfd1e0ef68487dc0e\" returns successfully" Sep 13 00:55:05.444162 env[1311]: time="2025-09-13T00:55:05.444128565Z" level=info msg="StopPodSandbox for \"859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03\"" Sep 13 00:55:05.555981 env[1311]: 2025-09-13 00:55:05.498 [WARNING][5087] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-whisker--64f68966d7--rz62c-eth0" Sep 13 00:55:05.555981 env[1311]: 2025-09-13 00:55:05.498 [INFO][5087] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" Sep 13 00:55:05.555981 env[1311]: 2025-09-13 00:55:05.498 [INFO][5087] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" iface="eth0" netns="" Sep 13 00:55:05.555981 env[1311]: 2025-09-13 00:55:05.498 [INFO][5087] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" Sep 13 00:55:05.555981 env[1311]: 2025-09-13 00:55:05.498 [INFO][5087] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" Sep 13 00:55:05.555981 env[1311]: 2025-09-13 00:55:05.530 [INFO][5094] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" HandleID="k8s-pod-network.859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" Workload="ci--3510.3.8--n--b7c626372f-k8s-whisker--64f68966d7--rz62c-eth0" Sep 13 00:55:05.555981 env[1311]: 2025-09-13 00:55:05.531 [INFO][5094] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:05.555981 env[1311]: 2025-09-13 00:55:05.531 [INFO][5094] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:05.555981 env[1311]: 2025-09-13 00:55:05.547 [WARNING][5094] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" HandleID="k8s-pod-network.859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" Workload="ci--3510.3.8--n--b7c626372f-k8s-whisker--64f68966d7--rz62c-eth0" Sep 13 00:55:05.555981 env[1311]: 2025-09-13 00:55:05.548 [INFO][5094] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" HandleID="k8s-pod-network.859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" Workload="ci--3510.3.8--n--b7c626372f-k8s-whisker--64f68966d7--rz62c-eth0" Sep 13 00:55:05.555981 env[1311]: 2025-09-13 00:55:05.550 [INFO][5094] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:05.555981 env[1311]: 2025-09-13 00:55:05.553 [INFO][5087] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" Sep 13 00:55:05.557608 env[1311]: time="2025-09-13T00:55:05.556647134Z" level=info msg="TearDown network for sandbox \"859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03\" successfully" Sep 13 00:55:05.557608 env[1311]: time="2025-09-13T00:55:05.556700516Z" level=info msg="StopPodSandbox for \"859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03\" returns successfully" Sep 13 00:55:05.558099 env[1311]: time="2025-09-13T00:55:05.558043376Z" level=info msg="RemovePodSandbox for \"859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03\"" Sep 13 00:55:05.558184 env[1311]: time="2025-09-13T00:55:05.558104142Z" level=info msg="Forcibly stopping sandbox \"859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03\"" Sep 13 00:55:05.662447 env[1311]: 2025-09-13 00:55:05.611 [WARNING][5108] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" 
WorkloadEndpoint="ci--3510.3.8--n--b7c626372f-k8s-whisker--64f68966d7--rz62c-eth0" Sep 13 00:55:05.662447 env[1311]: 2025-09-13 00:55:05.611 [INFO][5108] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" Sep 13 00:55:05.662447 env[1311]: 2025-09-13 00:55:05.611 [INFO][5108] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" iface="eth0" netns="" Sep 13 00:55:05.662447 env[1311]: 2025-09-13 00:55:05.611 [INFO][5108] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" Sep 13 00:55:05.662447 env[1311]: 2025-09-13 00:55:05.611 [INFO][5108] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" Sep 13 00:55:05.662447 env[1311]: 2025-09-13 00:55:05.645 [INFO][5116] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" HandleID="k8s-pod-network.859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" Workload="ci--3510.3.8--n--b7c626372f-k8s-whisker--64f68966d7--rz62c-eth0" Sep 13 00:55:05.662447 env[1311]: 2025-09-13 00:55:05.645 [INFO][5116] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:05.662447 env[1311]: 2025-09-13 00:55:05.645 [INFO][5116] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:05.662447 env[1311]: 2025-09-13 00:55:05.655 [WARNING][5116] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" HandleID="k8s-pod-network.859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" Workload="ci--3510.3.8--n--b7c626372f-k8s-whisker--64f68966d7--rz62c-eth0" Sep 13 00:55:05.662447 env[1311]: 2025-09-13 00:55:05.655 [INFO][5116] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" HandleID="k8s-pod-network.859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" Workload="ci--3510.3.8--n--b7c626372f-k8s-whisker--64f68966d7--rz62c-eth0" Sep 13 00:55:05.662447 env[1311]: 2025-09-13 00:55:05.657 [INFO][5116] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:05.662447 env[1311]: 2025-09-13 00:55:05.659 [INFO][5108] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03" Sep 13 00:55:05.663440 env[1311]: time="2025-09-13T00:55:05.663358691Z" level=info msg="TearDown network for sandbox \"859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03\" successfully" Sep 13 00:55:05.667461 env[1311]: time="2025-09-13T00:55:05.667372248Z" level=info msg="RemovePodSandbox \"859ba1f38633d20287b3cc74269bd5b02c5a17790fe871ae3082a393e51b3c03\" returns successfully" Sep 13 00:55:05.668416 env[1311]: time="2025-09-13T00:55:05.668369380Z" level=info msg="StopPodSandbox for \"9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161\"" Sep 13 00:55:05.791166 env[1311]: 2025-09-13 00:55:05.743 [WARNING][5130] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--z4fvv-eth0", GenerateName:"calico-apiserver-86766b5d6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"cec0420a-0ebf-4565-8d09-fd0c2c488b56", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86766b5d6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"c5f4492236b966a3b68323ac232cac32db0a5961c2502bc23efd3397f0c9e2ff", Pod:"calico-apiserver-86766b5d6c-z4fvv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali188b0193cdc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:05.791166 env[1311]: 2025-09-13 00:55:05.743 [INFO][5130] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" Sep 13 00:55:05.791166 env[1311]: 2025-09-13 00:55:05.743 [INFO][5130] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" iface="eth0" netns="" Sep 13 00:55:05.791166 env[1311]: 2025-09-13 00:55:05.743 [INFO][5130] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" Sep 13 00:55:05.791166 env[1311]: 2025-09-13 00:55:05.743 [INFO][5130] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" Sep 13 00:55:05.791166 env[1311]: 2025-09-13 00:55:05.775 [INFO][5137] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" HandleID="k8s-pod-network.9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--z4fvv-eth0" Sep 13 00:55:05.791166 env[1311]: 2025-09-13 00:55:05.775 [INFO][5137] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:05.791166 env[1311]: 2025-09-13 00:55:05.775 [INFO][5137] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:05.791166 env[1311]: 2025-09-13 00:55:05.783 [WARNING][5137] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" HandleID="k8s-pod-network.9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--z4fvv-eth0" Sep 13 00:55:05.791166 env[1311]: 2025-09-13 00:55:05.784 [INFO][5137] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" HandleID="k8s-pod-network.9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--z4fvv-eth0" Sep 13 00:55:05.791166 env[1311]: 2025-09-13 00:55:05.786 [INFO][5137] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:05.791166 env[1311]: 2025-09-13 00:55:05.788 [INFO][5130] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" Sep 13 00:55:05.793011 env[1311]: time="2025-09-13T00:55:05.791130705Z" level=info msg="TearDown network for sandbox \"9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161\" successfully" Sep 13 00:55:05.793070 env[1311]: time="2025-09-13T00:55:05.793012446Z" level=info msg="StopPodSandbox for \"9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161\" returns successfully" Sep 13 00:55:05.793850 env[1311]: time="2025-09-13T00:55:05.793781943Z" level=info msg="RemovePodSandbox for \"9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161\"" Sep 13 00:55:05.794420 env[1311]: time="2025-09-13T00:55:05.793848312Z" level=info msg="Forcibly stopping sandbox \"9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161\"" Sep 13 00:55:05.888586 env[1311]: 2025-09-13 00:55:05.842 [WARNING][5151] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--z4fvv-eth0", GenerateName:"calico-apiserver-86766b5d6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"cec0420a-0ebf-4565-8d09-fd0c2c488b56", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86766b5d6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"c5f4492236b966a3b68323ac232cac32db0a5961c2502bc23efd3397f0c9e2ff", Pod:"calico-apiserver-86766b5d6c-z4fvv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali188b0193cdc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:05.888586 env[1311]: 2025-09-13 00:55:05.842 [INFO][5151] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" Sep 13 00:55:05.888586 env[1311]: 2025-09-13 00:55:05.842 [INFO][5151] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" iface="eth0" netns="" Sep 13 00:55:05.888586 env[1311]: 2025-09-13 00:55:05.842 [INFO][5151] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" Sep 13 00:55:05.888586 env[1311]: 2025-09-13 00:55:05.842 [INFO][5151] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" Sep 13 00:55:05.888586 env[1311]: 2025-09-13 00:55:05.873 [INFO][5158] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" HandleID="k8s-pod-network.9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--z4fvv-eth0" Sep 13 00:55:05.888586 env[1311]: 2025-09-13 00:55:05.874 [INFO][5158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:05.888586 env[1311]: 2025-09-13 00:55:05.874 [INFO][5158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:05.888586 env[1311]: 2025-09-13 00:55:05.881 [WARNING][5158] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" HandleID="k8s-pod-network.9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--z4fvv-eth0" Sep 13 00:55:05.888586 env[1311]: 2025-09-13 00:55:05.881 [INFO][5158] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" HandleID="k8s-pod-network.9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--z4fvv-eth0" Sep 13 00:55:05.888586 env[1311]: 2025-09-13 00:55:05.883 [INFO][5158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:05.888586 env[1311]: 2025-09-13 00:55:05.886 [INFO][5151] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161" Sep 13 00:55:05.889755 env[1311]: time="2025-09-13T00:55:05.888630054Z" level=info msg="TearDown network for sandbox \"9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161\" successfully" Sep 13 00:55:05.892533 env[1311]: time="2025-09-13T00:55:05.892472518Z" level=info msg="RemovePodSandbox \"9e91e9201d944685d70161f627a74d88a32d45d6cacf1294efbd4cb39125e161\" returns successfully" Sep 13 00:55:05.893487 env[1311]: time="2025-09-13T00:55:05.893439051Z" level=info msg="StopPodSandbox for \"d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9\"" Sep 13 00:55:05.999338 env[1311]: 2025-09-13 00:55:05.942 [WARNING][5172] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--6f24s-eth0", GenerateName:"calico-apiserver-86766b5d6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"8833f507-515d-400e-9991-59b6f2cca14f", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86766b5d6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"e67427552c7c7dcf51716ea801a669faef787fe23059729f47e790dcac4c4acc", Pod:"calico-apiserver-86766b5d6c-6f24s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4bb514ec3c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:05.999338 env[1311]: 2025-09-13 00:55:05.943 [INFO][5172] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" Sep 13 00:55:05.999338 env[1311]: 2025-09-13 00:55:05.943 [INFO][5172] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" iface="eth0" netns="" Sep 13 00:55:05.999338 env[1311]: 2025-09-13 00:55:05.943 [INFO][5172] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" Sep 13 00:55:05.999338 env[1311]: 2025-09-13 00:55:05.943 [INFO][5172] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" Sep 13 00:55:05.999338 env[1311]: 2025-09-13 00:55:05.982 [INFO][5179] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" HandleID="k8s-pod-network.d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--6f24s-eth0" Sep 13 00:55:05.999338 env[1311]: 2025-09-13 00:55:05.983 [INFO][5179] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:05.999338 env[1311]: 2025-09-13 00:55:05.983 [INFO][5179] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:05.999338 env[1311]: 2025-09-13 00:55:05.991 [WARNING][5179] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" HandleID="k8s-pod-network.d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--6f24s-eth0" Sep 13 00:55:05.999338 env[1311]: 2025-09-13 00:55:05.991 [INFO][5179] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" HandleID="k8s-pod-network.d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--6f24s-eth0" Sep 13 00:55:05.999338 env[1311]: 2025-09-13 00:55:05.993 [INFO][5179] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:05.999338 env[1311]: 2025-09-13 00:55:05.996 [INFO][5172] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" Sep 13 00:55:06.000936 env[1311]: time="2025-09-13T00:55:05.999352630Z" level=info msg="TearDown network for sandbox \"d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9\" successfully" Sep 13 00:55:06.000936 env[1311]: time="2025-09-13T00:55:05.999390221Z" level=info msg="StopPodSandbox for \"d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9\" returns successfully" Sep 13 00:55:06.000936 env[1311]: time="2025-09-13T00:55:06.000127998Z" level=info msg="RemovePodSandbox for \"d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9\"" Sep 13 00:55:06.000936 env[1311]: time="2025-09-13T00:55:06.000170353Z" level=info msg="Forcibly stopping sandbox \"d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9\"" Sep 13 00:55:06.113057 env[1311]: 2025-09-13 00:55:06.066 [WARNING][5193] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--6f24s-eth0", GenerateName:"calico-apiserver-86766b5d6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"8833f507-515d-400e-9991-59b6f2cca14f", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86766b5d6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"e67427552c7c7dcf51716ea801a669faef787fe23059729f47e790dcac4c4acc", Pod:"calico-apiserver-86766b5d6c-6f24s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4bb514ec3c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:06.113057 env[1311]: 2025-09-13 00:55:06.066 [INFO][5193] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" Sep 13 00:55:06.113057 env[1311]: 2025-09-13 00:55:06.066 [INFO][5193] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" iface="eth0" netns="" Sep 13 00:55:06.113057 env[1311]: 2025-09-13 00:55:06.066 [INFO][5193] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" Sep 13 00:55:06.113057 env[1311]: 2025-09-13 00:55:06.066 [INFO][5193] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" Sep 13 00:55:06.113057 env[1311]: 2025-09-13 00:55:06.097 [INFO][5202] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" HandleID="k8s-pod-network.d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--6f24s-eth0" Sep 13 00:55:06.113057 env[1311]: 2025-09-13 00:55:06.097 [INFO][5202] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:06.113057 env[1311]: 2025-09-13 00:55:06.097 [INFO][5202] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:06.113057 env[1311]: 2025-09-13 00:55:06.105 [WARNING][5202] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" HandleID="k8s-pod-network.d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--6f24s-eth0" Sep 13 00:55:06.113057 env[1311]: 2025-09-13 00:55:06.105 [INFO][5202] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" HandleID="k8s-pod-network.d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" Workload="ci--3510.3.8--n--b7c626372f-k8s-calico--apiserver--86766b5d6c--6f24s-eth0" Sep 13 00:55:06.113057 env[1311]: 2025-09-13 00:55:06.108 [INFO][5202] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:06.113057 env[1311]: 2025-09-13 00:55:06.110 [INFO][5193] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9" Sep 13 00:55:06.113057 env[1311]: time="2025-09-13T00:55:06.112958164Z" level=info msg="TearDown network for sandbox \"d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9\" successfully" Sep 13 00:55:06.117258 env[1311]: time="2025-09-13T00:55:06.117161599Z" level=info msg="RemovePodSandbox \"d46a17b27e41d51949f970da69ab8562a247f235461187e2717ee6d22d72e0a9\" returns successfully" Sep 13 00:55:06.118028 env[1311]: time="2025-09-13T00:55:06.117996537Z" level=info msg="StopPodSandbox for \"345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd\"" Sep 13 00:55:06.225724 env[1311]: 2025-09-13 00:55:06.177 [WARNING][5217] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-goldmane--7988f88666--zx8vp-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"86879210-53e1-4a0a-87e7-2bb62916a082", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"9c185c5c3091d41b0958832f84346a5f30bc047264086bcb0173c428c0ed1959", Pod:"goldmane-7988f88666-zx8vp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.56.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif4817d98258", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:06.225724 env[1311]: 2025-09-13 00:55:06.177 [INFO][5217] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" Sep 13 00:55:06.225724 env[1311]: 2025-09-13 00:55:06.177 [INFO][5217] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" iface="eth0" netns="" Sep 13 00:55:06.225724 env[1311]: 2025-09-13 00:55:06.177 [INFO][5217] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" Sep 13 00:55:06.225724 env[1311]: 2025-09-13 00:55:06.177 [INFO][5217] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" Sep 13 00:55:06.225724 env[1311]: 2025-09-13 00:55:06.207 [INFO][5224] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" HandleID="k8s-pod-network.345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" Workload="ci--3510.3.8--n--b7c626372f-k8s-goldmane--7988f88666--zx8vp-eth0" Sep 13 00:55:06.225724 env[1311]: 2025-09-13 00:55:06.207 [INFO][5224] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:06.225724 env[1311]: 2025-09-13 00:55:06.207 [INFO][5224] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:06.225724 env[1311]: 2025-09-13 00:55:06.216 [WARNING][5224] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" HandleID="k8s-pod-network.345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" Workload="ci--3510.3.8--n--b7c626372f-k8s-goldmane--7988f88666--zx8vp-eth0" Sep 13 00:55:06.225724 env[1311]: 2025-09-13 00:55:06.217 [INFO][5224] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" HandleID="k8s-pod-network.345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" Workload="ci--3510.3.8--n--b7c626372f-k8s-goldmane--7988f88666--zx8vp-eth0" Sep 13 00:55:06.225724 env[1311]: 2025-09-13 00:55:06.220 [INFO][5224] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:06.225724 env[1311]: 2025-09-13 00:55:06.222 [INFO][5217] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" Sep 13 00:55:06.225724 env[1311]: time="2025-09-13T00:55:06.225673651Z" level=info msg="TearDown network for sandbox \"345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd\" successfully" Sep 13 00:55:06.226505 env[1311]: time="2025-09-13T00:55:06.225743069Z" level=info msg="StopPodSandbox for \"345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd\" returns successfully" Sep 13 00:55:06.227486 env[1311]: time="2025-09-13T00:55:06.227346568Z" level=info msg="RemovePodSandbox for \"345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd\"" Sep 13 00:55:06.227649 env[1311]: time="2025-09-13T00:55:06.227497605Z" level=info msg="Forcibly stopping sandbox \"345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd\"" Sep 13 00:55:06.395430 kubelet[2107]: I0913 00:55:06.394209 2107 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:55:06.415043 env[1311]: 2025-09-13 00:55:06.288 [WARNING][5238] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match 
WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--b7c626372f-k8s-goldmane--7988f88666--zx8vp-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"86879210-53e1-4a0a-87e7-2bb62916a082", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 54, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-b7c626372f", ContainerID:"9c185c5c3091d41b0958832f84346a5f30bc047264086bcb0173c428c0ed1959", Pod:"goldmane-7988f88666-zx8vp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.56.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif4817d98258", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:06.415043 env[1311]: 2025-09-13 00:55:06.289 [INFO][5238] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" Sep 13 00:55:06.415043 env[1311]: 2025-09-13 00:55:06.289 [INFO][5238] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" iface="eth0" netns="" Sep 13 00:55:06.415043 env[1311]: 2025-09-13 00:55:06.289 [INFO][5238] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" Sep 13 00:55:06.415043 env[1311]: 2025-09-13 00:55:06.289 [INFO][5238] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" Sep 13 00:55:06.415043 env[1311]: 2025-09-13 00:55:06.351 [INFO][5245] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" HandleID="k8s-pod-network.345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" Workload="ci--3510.3.8--n--b7c626372f-k8s-goldmane--7988f88666--zx8vp-eth0" Sep 13 00:55:06.415043 env[1311]: 2025-09-13 00:55:06.351 [INFO][5245] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:06.415043 env[1311]: 2025-09-13 00:55:06.352 [INFO][5245] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:06.415043 env[1311]: 2025-09-13 00:55:06.376 [WARNING][5245] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" HandleID="k8s-pod-network.345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" Workload="ci--3510.3.8--n--b7c626372f-k8s-goldmane--7988f88666--zx8vp-eth0" Sep 13 00:55:06.415043 env[1311]: 2025-09-13 00:55:06.381 [INFO][5245] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" HandleID="k8s-pod-network.345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" Workload="ci--3510.3.8--n--b7c626372f-k8s-goldmane--7988f88666--zx8vp-eth0" Sep 13 00:55:06.415043 env[1311]: 2025-09-13 00:55:06.391 [INFO][5245] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:06.415043 env[1311]: 2025-09-13 00:55:06.410 [INFO][5238] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd" Sep 13 00:55:06.415043 env[1311]: time="2025-09-13T00:55:06.414912391Z" level=info msg="TearDown network for sandbox \"345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd\" successfully" Sep 13 00:55:06.422479 env[1311]: time="2025-09-13T00:55:06.422102704Z" level=info msg="RemovePodSandbox \"345cbf741dc61ec6733118f7bcb8c1bac631454785628b06690cd3277c8ac3fd\" returns successfully" Sep 13 00:55:06.559000 audit[5252]: NETFILTER_CFG table=filter:118 family=2 entries=10 op=nft_register_rule pid=5252 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:06.559000 audit[5252]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7fff5b5a0430 a2=0 a3=7fff5b5a041c items=0 ppid=2209 pid=5252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:06.559000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:06.567000 audit[5252]: NETFILTER_CFG table=nat:119 family=2 entries=36 op=nft_register_chain pid=5252 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:06.567000 audit[5252]: SYSCALL arch=c000003e syscall=46 success=yes exit=12004 a0=3 a1=7fff5b5a0430 a2=0 a3=7fff5b5a041c items=0 ppid=2209 pid=5252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:06.567000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:10.053000 systemd[1]: Started sshd@9-161.35.238.92:22-147.75.109.163:43538.service. Sep 13 00:55:10.058126 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 13 00:55:10.059389 kernel: audit: type=1130 audit(1757724910.052:453): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-161.35.238.92:22-147.75.109.163:43538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:10.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-161.35.238.92:22-147.75.109.163:43538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:55:10.213000 audit[5297]: USER_ACCT pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:10.214219 sshd[5297]: Accepted publickey for core from 147.75.109.163 port 43538 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:55:10.217433 kernel: audit: type=1101 audit(1757724910.213:454): pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:10.218000 audit[5297]: CRED_ACQ pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:10.220975 sshd[5297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:10.224492 kernel: audit: type=1103 audit(1757724910.218:455): pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:10.218000 audit[5297]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff816a5d30 a2=3 a3=0 items=0 ppid=1 pid=5297 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:10.232364 kernel: audit: type=1006 audit(1757724910.218:456): pid=5297 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=10 res=1 Sep 13 00:55:10.232938 kernel: audit: type=1300 audit(1757724910.218:456): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff816a5d30 a2=3 a3=0 items=0 ppid=1 pid=5297 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:10.218000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:10.236475 kernel: audit: type=1327 audit(1757724910.218:456): proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:10.238758 systemd[1]: Started session-10.scope. Sep 13 00:55:10.239030 systemd-logind[1291]: New session 10 of user core. Sep 13 00:55:10.256000 audit[5297]: USER_START pid=5297 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:10.262574 kernel: audit: type=1105 audit(1757724910.256:457): pid=5297 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:10.263000 audit[5300]: CRED_ACQ pid=5300 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:10.271485 kernel: audit: type=1103 audit(1757724910.263:458): pid=5300 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' 
Sep 13 00:55:10.913084 sshd[5297]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:10.914000 audit[5297]: USER_END pid=5297 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:10.925171 systemd[1]: sshd@9-161.35.238.92:22-147.75.109.163:43538.service: Deactivated successfully. Sep 13 00:55:10.926301 kernel: audit: type=1106 audit(1757724910.914:459): pid=5297 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:10.926750 systemd-logind[1291]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:55:10.930657 kernel: audit: type=1104 audit(1757724910.915:460): pid=5297 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:10.915000 audit[5297]: CRED_DISP pid=5297 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:10.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-161.35.238.92:22-147.75.109.163:43538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:10.926875 systemd[1]: session-10.scope: Deactivated successfully. 
Sep 13 00:55:10.931372 systemd-logind[1291]: Removed session 10. Sep 13 00:55:15.919574 systemd[1]: Started sshd@10-161.35.238.92:22-147.75.109.163:43540.service. Sep 13 00:55:15.921160 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:55:15.924205 kernel: audit: type=1130 audit(1757724915.919:462): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-161.35.238.92:22-147.75.109.163:43540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:15.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-161.35.238.92:22-147.75.109.163:43540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:15.985000 audit[5311]: USER_ACCT pid=5311 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:15.990140 sshd[5311]: Accepted publickey for core from 147.75.109.163 port 43540 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:55:15.994584 kernel: audit: type=1101 audit(1757724915.985:463): pid=5311 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:15.994685 kernel: audit: type=1103 audit(1757724915.990:464): pid=5311 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:15.990000 audit[5311]: CRED_ACQ pid=5311 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:15.992319 sshd[5311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:15.990000 audit[5311]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe6c0ee840 a2=3 a3=0 items=0 ppid=1 pid=5311 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:16.001992 kernel: audit: type=1006 audit(1757724915.990:465): pid=5311 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Sep 13 00:55:16.002160 kernel: audit: type=1300 audit(1757724915.990:465): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe6c0ee840 a2=3 a3=0 items=0 ppid=1 pid=5311 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:16.002248 kernel: audit: type=1327 audit(1757724915.990:465): proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:15.990000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:16.008750 systemd[1]: Started session-11.scope. Sep 13 00:55:16.008764 systemd-logind[1291]: New session 11 of user core. 
Sep 13 00:55:16.019000 audit[5311]: USER_START pid=5311 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:16.023455 kernel: audit: type=1105 audit(1757724916.019:466): pid=5311 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:16.025000 audit[5314]: CRED_ACQ pid=5314 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:16.029444 kernel: audit: type=1103 audit(1757724916.025:467): pid=5314 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:16.334638 sshd[5311]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:16.340372 systemd[1]: Started sshd@11-161.35.238.92:22-147.75.109.163:43552.service. Sep 13 00:55:16.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-161.35.238.92:22-147.75.109.163:43552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:55:16.345511 kernel: audit: type=1130 audit(1757724916.340:468): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-161.35.238.92:22-147.75.109.163:43552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:16.346000 audit[5311]: USER_END pid=5311 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:16.352438 kernel: audit: type=1106 audit(1757724916.346:469): pid=5311 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:16.352616 systemd[1]: sshd@10-161.35.238.92:22-147.75.109.163:43540.service: Deactivated successfully. Sep 13 00:55:16.346000 audit[5311]: CRED_DISP pid=5311 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:16.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-161.35.238.92:22-147.75.109.163:43540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:16.357785 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:55:16.358918 systemd-logind[1291]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:55:16.366527 systemd-logind[1291]: Removed session 11. 
Sep 13 00:55:16.422000 audit[5322]: USER_ACCT pid=5322 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:16.423423 sshd[5322]: Accepted publickey for core from 147.75.109.163 port 43552 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:55:16.424000 audit[5322]: CRED_ACQ pid=5322 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:16.424000 audit[5322]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb5cabaf0 a2=3 a3=0 items=0 ppid=1 pid=5322 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:16.424000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:16.425895 sshd[5322]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:16.434214 systemd-logind[1291]: New session 12 of user core. Sep 13 00:55:16.434589 systemd[1]: Started session-12.scope. 
Sep 13 00:55:16.442000 audit[5322]: USER_START pid=5322 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:16.444000 audit[5327]: CRED_ACQ pid=5327 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:16.632648 sshd[5322]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:16.633000 audit[5322]: USER_END pid=5322 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:16.633000 audit[5322]: CRED_DISP pid=5322 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:16.638677 systemd[1]: Started sshd@12-161.35.238.92:22-147.75.109.163:43562.service. Sep 13 00:55:16.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-161.35.238.92:22-147.75.109.163:43562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:16.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-161.35.238.92:22-147.75.109.163:43552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:55:16.643640 systemd[1]: sshd@11-161.35.238.92:22-147.75.109.163:43552.service: Deactivated successfully. Sep 13 00:55:16.645448 systemd-logind[1291]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:55:16.645578 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:55:16.649021 systemd-logind[1291]: Removed session 12. Sep 13 00:55:16.721185 sshd[5333]: Accepted publickey for core from 147.75.109.163 port 43562 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:55:16.719000 audit[5333]: USER_ACCT pid=5333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:16.721000 audit[5333]: CRED_ACQ pid=5333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:16.722000 audit[5333]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdbf609650 a2=3 a3=0 items=0 ppid=1 pid=5333 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:16.722000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:16.723211 sshd[5333]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:16.729484 systemd-logind[1291]: New session 13 of user core. Sep 13 00:55:16.729971 systemd[1]: Started session-13.scope. 
Sep 13 00:55:16.740000 audit[5333]: USER_START pid=5333 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:16.746000 audit[5338]: CRED_ACQ pid=5338 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:16.932725 sshd[5333]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:16.935000 audit[5333]: USER_END pid=5333 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:16.936000 audit[5333]: CRED_DISP pid=5333 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:16.938947 systemd-logind[1291]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:55:16.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-161.35.238.92:22-147.75.109.163:43562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:16.939493 systemd[1]: sshd@12-161.35.238.92:22-147.75.109.163:43562.service: Deactivated successfully. Sep 13 00:55:16.940574 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:55:16.942260 systemd-logind[1291]: Removed session 13. 
Sep 13 00:55:21.939313 systemd[1]: Started sshd@13-161.35.238.92:22-147.75.109.163:48452.service. Sep 13 00:55:21.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-161.35.238.92:22-147.75.109.163:48452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:21.945569 kernel: kauditd_printk_skb: 23 callbacks suppressed Sep 13 00:55:21.945679 kernel: audit: type=1130 audit(1757724921.941:489): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-161.35.238.92:22-147.75.109.163:48452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:22.024000 audit[5348]: USER_ACCT pid=5348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:22.024823 sshd[5348]: Accepted publickey for core from 147.75.109.163 port 48452 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:55:22.030568 kernel: audit: type=1101 audit(1757724922.024:490): pid=5348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:22.031000 audit[5348]: CRED_ACQ pid=5348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:22.033338 sshd[5348]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:22.037619 kernel: audit: type=1103 
audit(1757724922.031:491): pid=5348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:22.037706 kernel: audit: type=1006 audit(1757724922.031:492): pid=5348 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Sep 13 00:55:22.037736 kernel: audit: type=1300 audit(1757724922.031:492): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff12bb0500 a2=3 a3=0 items=0 ppid=1 pid=5348 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:22.031000 audit[5348]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff12bb0500 a2=3 a3=0 items=0 ppid=1 pid=5348 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:22.031000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:22.045527 kernel: audit: type=1327 audit(1757724922.031:492): proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:22.046859 systemd-logind[1291]: New session 14 of user core. Sep 13 00:55:22.047078 systemd[1]: Started session-14.scope. 
Sep 13 00:55:22.066000 audit[5348]: USER_START pid=5348 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:22.072434 kernel: audit: type=1105 audit(1757724922.066:493): pid=5348 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:22.074000 audit[5351]: CRED_ACQ pid=5351 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:22.079523 kernel: audit: type=1103 audit(1757724922.074:494): pid=5351 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:22.419668 sshd[5348]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:22.429962 kernel: audit: type=1106 audit(1757724922.423:495): pid=5348 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:22.430106 kernel: audit: type=1104 audit(1757724922.423:496): pid=5348 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" 
hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:22.423000 audit[5348]: USER_END pid=5348 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:22.423000 audit[5348]: CRED_DISP pid=5348 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:22.426063 systemd-logind[1291]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:55:22.430475 systemd[1]: sshd@13-161.35.238.92:22-147.75.109.163:48452.service: Deactivated successfully. Sep 13 00:55:22.431492 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:55:22.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-161.35.238.92:22-147.75.109.163:48452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:22.433086 systemd-logind[1291]: Removed session 14. 
Sep 13 00:55:23.143000 audit[5398]: NETFILTER_CFG table=filter:120 family=2 entries=9 op=nft_register_rule pid=5398 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:23.143000 audit[5398]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7fffcb2bb360 a2=0 a3=7fffcb2bb34c items=0 ppid=2209 pid=5398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:23.143000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:23.148000 audit[5398]: NETFILTER_CFG table=nat:121 family=2 entries=31 op=nft_register_chain pid=5398 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:23.148000 audit[5398]: SYSCALL arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7fffcb2bb360 a2=0 a3=7fffcb2bb34c items=0 ppid=2209 pid=5398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:23.148000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:27.426751 systemd[1]: Started sshd@14-161.35.238.92:22-147.75.109.163:48456.service. Sep 13 00:55:27.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-161.35.238.92:22-147.75.109.163:48456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:55:27.431656 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 13 00:55:27.432773 kernel: audit: type=1130 audit(1757724927.426:500): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-161.35.238.92:22-147.75.109.163:48456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:27.540000 audit[5411]: USER_ACCT pid=5411 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:27.542513 sshd[5411]: Accepted publickey for core from 147.75.109.163 port 48456 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:55:27.544000 audit[5411]: CRED_ACQ pid=5411 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:27.547305 sshd[5411]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:27.548668 kernel: audit: type=1101 audit(1757724927.540:501): pid=5411 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:27.549175 kernel: audit: type=1103 audit(1757724927.544:502): pid=5411 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:27.549258 kernel: audit: type=1006 audit(1757724927.544:503): pid=5411 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Sep 13 00:55:27.544000 audit[5411]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdad83de40 a2=3 a3=0 items=0 ppid=1 pid=5411 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:27.556252 kernel: audit: type=1300 audit(1757724927.544:503): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdad83de40 a2=3 a3=0 items=0 ppid=1 pid=5411 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:27.556342 kernel: audit: type=1327 audit(1757724927.544:503): proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:27.544000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:27.564789 systemd-logind[1291]: New session 15 of user core. Sep 13 00:55:27.565381 systemd[1]: Started session-15.scope. 
Sep 13 00:55:27.571000 audit[5411]: USER_START pid=5411 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:27.573000 audit[5414]: CRED_ACQ pid=5414 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:27.580554 kernel: audit: type=1105 audit(1757724927.571:504): pid=5411 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:27.580637 kernel: audit: type=1103 audit(1757724927.573:505): pid=5414 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:28.043270 sshd[5411]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:28.044000 audit[5411]: USER_END pid=5411 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:28.049917 systemd[1]: sshd@14-161.35.238.92:22-147.75.109.163:48456.service: Deactivated successfully. 
Sep 13 00:55:28.052250 kernel: audit: type=1106 audit(1757724928.044:506): pid=5411 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:28.051019 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:55:28.045000 audit[5411]: CRED_DISP pid=5411 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:28.053482 systemd-logind[1291]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:55:28.057431 kernel: audit: type=1104 audit(1757724928.045:507): pid=5411 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:28.058864 systemd-logind[1291]: Removed session 15. Sep 13 00:55:28.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-161.35.238.92:22-147.75.109.163:48456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:55:28.651278 kubelet[2107]: E0913 00:55:28.651131 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:55:28.652941 kubelet[2107]: E0913 00:55:28.652423 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:55:31.125361 systemd[1]: run-containerd-runc-k8s.io-df951fd2f48ec9d33b9b0137a1c3e2681a69e26a2cf3866709ed08fa01fcd896-runc.lzIzrp.mount: Deactivated successfully. Sep 13 00:55:33.048384 systemd[1]: Started sshd@15-161.35.238.92:22-147.75.109.163:58978.service. Sep 13 00:55:33.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-161.35.238.92:22-147.75.109.163:58978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:33.049965 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:55:33.050089 kernel: audit: type=1130 audit(1757724933.048:509): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-161.35.238.92:22-147.75.109.163:58978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:55:33.105000 audit[5443]: USER_ACCT pid=5443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:33.110242 kernel: audit: type=1101 audit(1757724933.105:510): pid=5443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:33.110379 kernel: audit: type=1103 audit(1757724933.109:511): pid=5443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:33.109000 audit[5443]: CRED_ACQ pid=5443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:33.110767 sshd[5443]: Accepted publickey for core from 147.75.109.163 port 58978 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:55:33.112937 sshd[5443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:33.115198 kernel: audit: type=1006 audit(1757724933.109:512): pid=5443 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Sep 13 00:55:33.115294 kernel: audit: type=1300 audit(1757724933.109:512): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffedabf6bb0 a2=3 a3=0 items=0 ppid=1 pid=5443 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" 
exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:33.109000 audit[5443]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffedabf6bb0 a2=3 a3=0 items=0 ppid=1 pid=5443 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:33.109000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:33.118981 kernel: audit: type=1327 audit(1757724933.109:512): proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:33.122699 systemd[1]: Started session-16.scope. Sep 13 00:55:33.123173 systemd-logind[1291]: New session 16 of user core. Sep 13 00:55:33.127000 audit[5443]: USER_START pid=5443 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:33.135176 kernel: audit: type=1105 audit(1757724933.127:513): pid=5443 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:33.135293 kernel: audit: type=1103 audit(1757724933.130:514): pid=5446 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:33.130000 audit[5446]: CRED_ACQ pid=5446 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh 
res=success' Sep 13 00:55:33.388615 sshd[5443]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:33.389000 audit[5443]: USER_END pid=5443 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:33.394525 kernel: audit: type=1106 audit(1757724933.389:515): pid=5443 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:33.389000 audit[5443]: CRED_DISP pid=5443 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:33.398508 kernel: audit: type=1104 audit(1757724933.389:516): pid=5443 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:33.396198 systemd-logind[1291]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:55:33.397884 systemd[1]: sshd@15-161.35.238.92:22-147.75.109.163:58978.service: Deactivated successfully. Sep 13 00:55:33.398847 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:55:33.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-161.35.238.92:22-147.75.109.163:58978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:55:33.400374 systemd-logind[1291]: Removed session 16. Sep 13 00:55:37.385522 systemd[1]: run-containerd-runc-k8s.io-0d7d231652388582a5043b91f1063acf3a7758f8977bf0d6e4b109a9e5eaa68b-runc.4cmooa.mount: Deactivated successfully. Sep 13 00:55:38.394350 systemd[1]: Started sshd@16-161.35.238.92:22-147.75.109.163:58990.service. Sep 13 00:55:38.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-161.35.238.92:22-147.75.109.163:58990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:38.397261 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:55:38.397329 kernel: audit: type=1130 audit(1757724938.395:518): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-161.35.238.92:22-147.75.109.163:58990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:38.483000 audit[5478]: USER_ACCT pid=5478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:38.484917 sshd[5478]: Accepted publickey for core from 147.75.109.163 port 58990 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:55:38.487492 kernel: audit: type=1101 audit(1757724938.483:519): pid=5478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:38.489000 audit[5478]: CRED_ACQ pid=5478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:38.500980 kernel: audit: type=1103 audit(1757724938.489:520): pid=5478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:38.501155 kernel: audit: type=1006 audit(1757724938.489:521): pid=5478 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Sep 13 00:55:38.501195 kernel: audit: type=1300 audit(1757724938.489:521): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffecc211a40 a2=3 a3=0 items=0 ppid=1 pid=5478 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:38.489000 audit[5478]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffecc211a40 a2=3 a3=0 items=0 ppid=1 pid=5478 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:38.489000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:38.505098 sshd[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:38.505623 kernel: audit: type=1327 audit(1757724938.489:521): proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:38.515036 systemd[1]: Started session-17.scope. Sep 13 00:55:38.516492 systemd-logind[1291]: New session 17 of user core. 
Sep 13 00:55:38.522000 audit[5478]: USER_START pid=5478 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:38.525000 audit[5481]: CRED_ACQ pid=5481 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:38.530455 kernel: audit: type=1105 audit(1757724938.522:522): pid=5478 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:38.530693 kernel: audit: type=1103 audit(1757724938.525:523): pid=5481 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:38.792283 sshd[5478]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:38.794000 audit[5478]: USER_END pid=5478 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:38.796000 audit[5478]: CRED_DISP pid=5478 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 
13 00:55:38.799254 systemd-logind[1291]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:55:38.800489 kernel: audit: type=1106 audit(1757724938.794:524): pid=5478 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:38.800593 kernel: audit: type=1104 audit(1757724938.796:525): pid=5478 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:38.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-161.35.238.92:22-147.75.109.163:58990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:38.801483 systemd[1]: sshd@16-161.35.238.92:22-147.75.109.163:58990.service: Deactivated successfully. Sep 13 00:55:38.803934 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:55:38.808536 systemd-logind[1291]: Removed session 17. Sep 13 00:55:40.576903 kubelet[2107]: E0913 00:55:40.576861 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:55:41.576355 kubelet[2107]: E0913 00:55:41.576313 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:55:43.797879 systemd[1]: Started sshd@17-161.35.238.92:22-147.75.109.163:51796.service. 
Sep 13 00:55:43.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-161.35.238.92:22-147.75.109.163:51796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:43.799274 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:55:43.799358 kernel: audit: type=1130 audit(1757724943.797:527): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-161.35.238.92:22-147.75.109.163:51796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:43.853000 audit[5491]: USER_ACCT pid=5491 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:43.853876 sshd[5491]: Accepted publickey for core from 147.75.109.163 port 51796 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:55:43.857519 kernel: audit: type=1101 audit(1757724943.853:528): pid=5491 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:43.858000 audit[5491]: CRED_ACQ pid=5491 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:43.859088 sshd[5491]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:43.864349 kernel: audit: type=1103 audit(1757724943.858:529): pid=5491 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:43.864802 kernel: audit: type=1006 audit(1757724943.858:530): pid=5491 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Sep 13 00:55:43.858000 audit[5491]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffce1f1bbf0 a2=3 a3=0 items=0 ppid=1 pid=5491 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:43.868703 systemd[1]: Started session-18.scope. Sep 13 00:55:43.858000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:43.869927 systemd-logind[1291]: New session 18 of user core. Sep 13 00:55:43.870911 kernel: audit: type=1300 audit(1757724943.858:530): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffce1f1bbf0 a2=3 a3=0 items=0 ppid=1 pid=5491 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:43.870967 kernel: audit: type=1327 audit(1757724943.858:530): proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:43.880000 audit[5491]: USER_START pid=5491 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:43.883000 audit[5494]: CRED_ACQ pid=5494 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:43.886794 kernel: audit: 
type=1105 audit(1757724943.880:531): pid=5491 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:43.886963 kernel: audit: type=1103 audit(1757724943.883:532): pid=5494 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:44.054356 sshd[5491]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:44.059000 audit[5491]: USER_END pid=5491 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:44.061331 systemd[1]: Started sshd@18-161.35.238.92:22-147.75.109.163:51798.service. Sep 13 00:55:44.065461 kernel: audit: type=1106 audit(1757724944.059:533): pid=5491 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:44.065000 audit[5491]: CRED_DISP pid=5491 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:44.069870 systemd[1]: sshd@17-161.35.238.92:22-147.75.109.163:51796.service: Deactivated successfully. 
Sep 13 00:55:44.071577 kernel: audit: type=1104 audit(1757724944.065:534): pid=5491 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:44.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-161.35.238.92:22-147.75.109.163:51798 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:44.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-161.35.238.92:22-147.75.109.163:51796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:44.072657 systemd-logind[1291]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:55:44.073315 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:55:44.082656 systemd-logind[1291]: Removed session 18. 
Sep 13 00:55:44.125000 audit[5503]: USER_ACCT pid=5503 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:44.126232 sshd[5503]: Accepted publickey for core from 147.75.109.163 port 51798 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:55:44.127000 audit[5503]: CRED_ACQ pid=5503 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:44.127000 audit[5503]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff04d9d210 a2=3 a3=0 items=0 ppid=1 pid=5503 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:44.127000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:44.128301 sshd[5503]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:44.134855 systemd[1]: Started session-19.scope. Sep 13 00:55:44.136293 systemd-logind[1291]: New session 19 of user core. 
Sep 13 00:55:44.143000 audit[5503]: USER_START pid=5503 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:44.146000 audit[5508]: CRED_ACQ pid=5508 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:44.504559 sshd[5503]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:44.508000 audit[5503]: USER_END pid=5503 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:44.508000 audit[5503]: CRED_DISP pid=5503 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:44.510262 systemd[1]: Started sshd@19-161.35.238.92:22-147.75.109.163:51808.service. Sep 13 00:55:44.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-161.35.238.92:22-147.75.109.163:51808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:44.511836 systemd[1]: sshd@18-161.35.238.92:22-147.75.109.163:51798.service: Deactivated successfully. 
Sep 13 00:55:44.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-161.35.238.92:22-147.75.109.163:51798 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:44.513619 systemd-logind[1291]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:55:44.514368 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:55:44.516817 systemd-logind[1291]: Removed session 19. Sep 13 00:55:44.593000 audit[5514]: USER_ACCT pid=5514 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:44.594183 sshd[5514]: Accepted publickey for core from 147.75.109.163 port 51808 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:55:44.595000 audit[5514]: CRED_ACQ pid=5514 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:44.595000 audit[5514]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc62471820 a2=3 a3=0 items=0 ppid=1 pid=5514 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:44.595000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:44.595915 sshd[5514]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:44.601924 systemd[1]: Started session-20.scope. Sep 13 00:55:44.602119 systemd-logind[1291]: New session 20 of user core. 
Sep 13 00:55:44.606000 audit[5514]: USER_START pid=5514 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:44.608000 audit[5519]: CRED_ACQ pid=5519 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:47.077158 sshd[5514]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:47.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-161.35.238.92:22-147.75.109.163:51818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:47.106000 audit[5514]: USER_END pid=5514 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:47.107000 audit[5514]: CRED_DISP pid=5514 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:47.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-161.35.238.92:22-147.75.109.163:51808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:47.089346 systemd[1]: Started sshd@20-161.35.238.92:22-147.75.109.163:51818.service. 
Sep 13 00:55:47.116045 systemd[1]: sshd@19-161.35.238.92:22-147.75.109.163:51808.service: Deactivated successfully. Sep 13 00:55:47.117137 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:55:47.120012 systemd-logind[1291]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:55:47.123795 systemd-logind[1291]: Removed session 20. Sep 13 00:55:47.186000 audit[5531]: NETFILTER_CFG table=filter:122 family=2 entries=20 op=nft_register_rule pid=5531 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:47.186000 audit[5531]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffdc554d5f0 a2=0 a3=7ffdc554d5dc items=0 ppid=2209 pid=5531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:47.186000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:47.192000 audit[5531]: NETFILTER_CFG table=nat:123 family=2 entries=26 op=nft_register_rule pid=5531 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:47.192000 audit[5531]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffdc554d5f0 a2=0 a3=0 items=0 ppid=2209 pid=5531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:47.192000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:47.218000 audit[5534]: NETFILTER_CFG table=filter:124 family=2 entries=32 op=nft_register_rule pid=5534 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:47.218000 audit[5534]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 
a1=7ffcc92eeb20 a2=0 a3=7ffcc92eeb0c items=0 ppid=2209 pid=5534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:47.218000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:47.225000 audit[5534]: NETFILTER_CFG table=nat:125 family=2 entries=26 op=nft_register_rule pid=5534 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:47.225000 audit[5534]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffcc92eeb20 a2=0 a3=0 items=0 ppid=2209 pid=5534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:47.225000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:47.278000 audit[5527]: USER_ACCT pid=5527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:47.279144 sshd[5527]: Accepted publickey for core from 147.75.109.163 port 51818 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:55:47.280000 audit[5527]: CRED_ACQ pid=5527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:47.280000 audit[5527]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd1d4f0560 a2=3 a3=0 items=0 ppid=1 pid=5527 auid=500 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:47.280000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:47.282764 sshd[5527]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:47.296262 systemd[1]: Started session-21.scope. Sep 13 00:55:47.297468 systemd-logind[1291]: New session 21 of user core. Sep 13 00:55:47.304000 audit[5527]: USER_START pid=5527 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:47.307000 audit[5536]: CRED_ACQ pid=5536 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:49.000854 sshd[5527]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:49.078255 kernel: kauditd_printk_skb: 43 callbacks suppressed Sep 13 00:55:49.080977 kernel: audit: type=1130 audit(1757724949.031:564): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-161.35.238.92:22-147.75.109.163:51830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:55:49.082889 kernel: audit: type=1106 audit(1757724949.052:565): pid=5527 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:49.083095 kernel: audit: type=1104 audit(1757724949.053:566): pid=5527 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:49.083480 kernel: audit: type=1131 audit(1757724949.055:567): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-161.35.238.92:22-147.75.109.163:51818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:49.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-161.35.238.92:22-147.75.109.163:51830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:55:49.052000 audit[5527]: USER_END pid=5527 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:49.053000 audit[5527]: CRED_DISP pid=5527 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:49.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-161.35.238.92:22-147.75.109.163:51818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:49.026418 systemd[1]: Started sshd@21-161.35.238.92:22-147.75.109.163:51830.service. Sep 13 00:55:49.055390 systemd[1]: sshd@20-161.35.238.92:22-147.75.109.163:51818.service: Deactivated successfully. Sep 13 00:55:49.057157 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:55:49.063305 systemd-logind[1291]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:55:49.072093 systemd-logind[1291]: Removed session 21. 
Sep 13 00:55:49.187000 audit[5542]: USER_ACCT pid=5542 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:49.190000 audit[5542]: CRED_ACQ pid=5542 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:49.191673 sshd[5542]: Accepted publickey for core from 147.75.109.163 port 51830 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:55:49.192703 sshd[5542]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:49.194513 kernel: audit: type=1101 audit(1757724949.187:568): pid=5542 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:49.194579 kernel: audit: type=1103 audit(1757724949.190:569): pid=5542 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:49.194607 kernel: audit: type=1006 audit(1757724949.190:570): pid=5542 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Sep 13 00:55:49.190000 audit[5542]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffda1bf4080 a2=3 a3=0 items=0 ppid=1 pid=5542 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 
key=(null) Sep 13 00:55:49.204423 kernel: audit: type=1300 audit(1757724949.190:570): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffda1bf4080 a2=3 a3=0 items=0 ppid=1 pid=5542 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:49.214863 kernel: audit: type=1327 audit(1757724949.190:570): proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:49.190000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:49.209349 systemd[1]: Started session-22.scope. Sep 13 00:55:49.211606 systemd-logind[1291]: New session 22 of user core. Sep 13 00:55:49.223000 audit[5542]: USER_START pid=5542 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:49.230895 kernel: audit: type=1105 audit(1757724949.223:571): pid=5542 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:49.229000 audit[5547]: CRED_ACQ pid=5547 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:51.823682 sshd[5542]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:51.850000 audit[5542]: USER_END pid=5542 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail 
acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:51.852000 audit[5542]: CRED_DISP pid=5542 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:51.862728 systemd[1]: sshd@21-161.35.238.92:22-147.75.109.163:51830.service: Deactivated successfully. Sep 13 00:55:51.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-161.35.238.92:22-147.75.109.163:51830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:51.869969 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:55:51.870976 systemd-logind[1291]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:55:51.880714 systemd-logind[1291]: Removed session 22. Sep 13 00:55:52.632896 systemd[1]: run-containerd-runc-k8s.io-b2d7a1d81121a664219eb883213246ee5a3d80d6ac361ab333cef92789faf898-runc.BDzWdV.mount: Deactivated successfully. 
Sep 13 00:55:52.734972 kubelet[2107]: E0913 00:55:52.734904 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:55:55.077000 audit[5578]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=5578 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:55.092532 kernel: kauditd_printk_skb: 4 callbacks suppressed Sep 13 00:55:55.092646 kernel: audit: type=1325 audit(1757724955.077:576): table=filter:126 family=2 entries=20 op=nft_register_rule pid=5578 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:55.093266 kernel: audit: type=1300 audit(1757724955.077:576): arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7fff4bdd9120 a2=0 a3=7fff4bdd910c items=0 ppid=2209 pid=5578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:55.093324 kernel: audit: type=1327 audit(1757724955.077:576): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:55.077000 audit[5578]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7fff4bdd9120 a2=0 a3=7fff4bdd910c items=0 ppid=2209 pid=5578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:55.077000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:55.101000 audit[5578]: NETFILTER_CFG table=nat:127 family=2 entries=110 op=nft_register_chain pid=5578 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:55.104440 
kernel: audit: type=1325 audit(1757724955.101:577): table=nat:127 family=2 entries=110 op=nft_register_chain pid=5578 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:55.101000 audit[5578]: SYSCALL arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7fff4bdd9120 a2=0 a3=7fff4bdd910c items=0 ppid=2209 pid=5578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:55.108432 kernel: audit: type=1300 audit(1757724955.101:577): arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7fff4bdd9120 a2=0 a3=7fff4bdd910c items=0 ppid=2209 pid=5578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:55.101000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:55.112446 kernel: audit: type=1327 audit(1757724955.101:577): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:56.826186 systemd[1]: Started sshd@22-161.35.238.92:22-147.75.109.163:53888.service. Sep 13 00:55:56.831760 kernel: audit: type=1130 audit(1757724956.827:578): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-161.35.238.92:22-147.75.109.163:53888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:56.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-161.35.238.92:22-147.75.109.163:53888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:55:56.937571 sshd[5580]: Accepted publickey for core from 147.75.109.163 port 53888 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:55:56.941700 kernel: audit: type=1101 audit(1757724956.937:579): pid=5580 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:56.937000 audit[5580]: USER_ACCT pid=5580 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:56.941000 audit[5580]: CRED_ACQ pid=5580 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:56.947756 kernel: audit: type=1103 audit(1757724956.941:580): pid=5580 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:56.947872 kernel: audit: type=1006 audit(1757724956.941:581): pid=5580 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Sep 13 00:55:56.941000 audit[5580]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc26b34830 a2=3 a3=0 items=0 ppid=1 pid=5580 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:56.941000 audit: PROCTITLE 
proctitle=737368643A20636F7265205B707269765D Sep 13 00:55:56.949133 sshd[5580]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:56.972869 systemd[1]: Started session-23.scope. Sep 13 00:55:56.974499 systemd-logind[1291]: New session 23 of user core. Sep 13 00:55:56.994000 audit[5580]: USER_START pid=5580 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:56.997000 audit[5583]: CRED_ACQ pid=5583 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:57.786604 kubelet[2107]: E0913 00:55:57.786541 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:55:57.834009 sshd[5580]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:57.835000 audit[5580]: USER_END pid=5580 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:57.835000 audit[5580]: CRED_DISP pid=5580 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:55:57.837756 systemd[1]: sshd@22-161.35.238.92:22-147.75.109.163:53888.service: Deactivated successfully. 
Sep 13 00:55:57.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-161.35.238.92:22-147.75.109.163:53888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:57.838957 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:55:57.839002 systemd-logind[1291]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:55:57.839980 systemd-logind[1291]: Removed session 23. Sep 13 00:56:02.845753 systemd[1]: Started sshd@23-161.35.238.92:22-147.75.109.163:49516.service. Sep 13 00:56:02.853066 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 13 00:56:02.855156 kernel: audit: type=1130 audit(1757724962.846:587): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-161.35.238.92:22-147.75.109.163:49516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:02.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-161.35.238.92:22-147.75.109.163:49516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:56:02.975678 sshd[5613]: Accepted publickey for core from 147.75.109.163 port 49516 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:56:02.975000 audit[5613]: USER_ACCT pid=5613 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:56:02.985772 kernel: audit: type=1101 audit(1757724962.975:588): pid=5613 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:56:02.985878 kernel: audit: type=1103 audit(1757724962.979:589): pid=5613 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:56:02.985907 kernel: audit: type=1006 audit(1757724962.979:590): pid=5613 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Sep 13 00:56:02.979000 audit[5613]: CRED_ACQ pid=5613 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:56:02.986824 sshd[5613]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:02.979000 audit[5613]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff5cc1de50 a2=3 a3=0 items=0 ppid=1 pid=5613 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:02.995490 kernel: audit: type=1300 audit(1757724962.979:590): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff5cc1de50 a2=3 a3=0 items=0 ppid=1 pid=5613 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:02.995610 kernel: audit: type=1327 audit(1757724962.979:590): proctitle=737368643A20636F7265205B707269765D Sep 13 00:56:02.979000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:56:03.004720 systemd-logind[1291]: New session 24 of user core. Sep 13 00:56:03.005298 systemd[1]: Started session-24.scope. Sep 13 00:56:03.017000 audit[5613]: USER_START pid=5613 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:56:03.023469 kernel: audit: type=1105 audit(1757724963.017:591): pid=5613 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:56:03.022000 audit[5616]: CRED_ACQ pid=5616 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:56:03.028428 kernel: audit: type=1103 audit(1757724963.022:592): pid=5616 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh 
res=success' Sep 13 00:56:04.043796 sshd[5613]: pam_unix(sshd:session): session closed for user core Sep 13 00:56:04.044000 audit[5613]: USER_END pid=5613 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:56:04.059513 kernel: audit: type=1106 audit(1757724964.044:593): pid=5613 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:56:04.062511 systemd[1]: sshd@23-161.35.238.92:22-147.75.109.163:49516.service: Deactivated successfully. Sep 13 00:56:04.065919 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:56:04.066105 systemd-logind[1291]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:56:04.072897 kernel: audit: type=1104 audit(1757724964.058:594): pid=5613 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:56:04.058000 audit[5613]: CRED_DISP pid=5613 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:56:04.069596 systemd-logind[1291]: Removed session 24. Sep 13 00:56:04.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-161.35.238.92:22-147.75.109.163:49516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 13 00:56:06.962548 systemd[1]: run-containerd-runc-k8s.io-b2d7a1d81121a664219eb883213246ee5a3d80d6ac361ab333cef92789faf898-runc.DW4Azp.mount: Deactivated successfully. Sep 13 00:56:07.415224 systemd[1]: run-containerd-runc-k8s.io-0d7d231652388582a5043b91f1063acf3a7758f8977bf0d6e4b109a9e5eaa68b-runc.XVQkD1.mount: Deactivated successfully. Sep 13 00:56:09.085490 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:56:09.101024 kernel: audit: type=1130 audit(1757724969.077:596): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-161.35.238.92:22-147.75.109.163:49528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:09.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-161.35.238.92:22-147.75.109.163:49528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:09.077618 systemd[1]: Started sshd@24-161.35.238.92:22-147.75.109.163:49528.service. 
Sep 13 00:56:09.247307 sshd[5680]: Accepted publickey for core from 147.75.109.163 port 49528 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:56:09.252865 kernel: audit: type=1101 audit(1757724969.245:597): pid=5680 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:56:09.245000 audit[5680]: USER_ACCT pid=5680 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:56:09.253000 audit[5680]: CRED_ACQ pid=5680 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:56:09.266074 kernel: audit: type=1103 audit(1757724969.253:598): pid=5680 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:56:09.266283 kernel: audit: type=1006 audit(1757724969.257:599): pid=5680 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Sep 13 00:56:09.266312 kernel: audit: type=1300 audit(1757724969.257:599): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc986b2f0 a2=3 a3=0 items=0 ppid=1 pid=5680 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:09.266337 kernel: audit: type=1327 
audit(1757724969.257:599): proctitle=737368643A20636F7265205B707269765D Sep 13 00:56:09.257000 audit[5680]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc986b2f0 a2=3 a3=0 items=0 ppid=1 pid=5680 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:09.257000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:56:09.267775 sshd[5680]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:09.295493 systemd-logind[1291]: New session 25 of user core. Sep 13 00:56:09.297991 systemd[1]: Started session-25.scope. Sep 13 00:56:09.317000 audit[5680]: USER_START pid=5680 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:56:09.323566 kernel: audit: type=1105 audit(1757724969.317:600): pid=5680 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:56:09.327493 kernel: audit: type=1103 audit(1757724969.323:601): pid=5683 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:56:09.323000 audit[5683]: CRED_ACQ pid=5683 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 
13 00:56:10.163021 sshd[5680]: pam_unix(sshd:session): session closed for user core Sep 13 00:56:10.164000 audit[5680]: USER_END pid=5680 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:56:10.173788 kernel: audit: type=1106 audit(1757724970.164:602): pid=5680 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:56:10.176436 kernel: audit: type=1104 audit(1757724970.164:603): pid=5680 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:56:10.164000 audit[5680]: CRED_DISP pid=5680 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 13 00:56:10.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-161.35.238.92:22-147.75.109.163:49528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:10.174217 systemd[1]: sshd@24-161.35.238.92:22-147.75.109.163:49528.service: Deactivated successfully. Sep 13 00:56:10.175740 systemd-logind[1291]: Session 25 logged out. Waiting for processes to exit. Sep 13 00:56:10.175855 systemd[1]: session-25.scope: Deactivated successfully. 
Sep 13 00:56:10.179298 systemd-logind[1291]: Removed session 25.
Sep 13 00:56:14.698246 kubelet[2107]: E0913 00:56:14.695287 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:56:15.171386 systemd[1]: Started sshd@25-161.35.238.92:22-147.75.109.163:39448.service.
Sep 13 00:56:15.178464 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 13 00:56:15.179078 kernel: audit: type=1130 audit(1757724975.171:605): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-161.35.238.92:22-147.75.109.163:39448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:56:15.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-161.35.238.92:22-147.75.109.163:39448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:56:15.302315 sshd[5693]: Accepted publickey for core from 147.75.109.163 port 39448 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA
Sep 13 00:56:15.311996 kernel: audit: type=1101 audit(1757724975.300:606): pid=5693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:56:15.312093 kernel: audit: type=1103 audit(1757724975.305:607): pid=5693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:56:15.312119 kernel: audit: type=1006 audit(1757724975.305:608): pid=5693 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Sep 13 00:56:15.300000 audit[5693]: USER_ACCT pid=5693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:56:15.305000 audit[5693]: CRED_ACQ pid=5693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:56:15.313516 sshd[5693]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:56:15.305000 audit[5693]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd4ef07e70 a2=3 a3=0 items=0 ppid=1 pid=5693 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:56:15.323190 kernel: audit: type=1300 audit(1757724975.305:608): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd4ef07e70 a2=3 a3=0 items=0 ppid=1 pid=5693 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:56:15.327420 kernel: audit: type=1327 audit(1757724975.305:608): proctitle=737368643A20636F7265205B707269765D
Sep 13 00:56:15.305000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:56:15.328531 systemd-logind[1291]: New session 26 of user core.
Sep 13 00:56:15.329606 systemd[1]: Started session-26.scope.
Sep 13 00:56:15.353000 audit[5693]: USER_START pid=5693 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:56:15.362921 kernel: audit: type=1105 audit(1757724975.353:609): pid=5693 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:56:15.363115 kernel: audit: type=1103 audit(1757724975.354:610): pid=5696 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:56:15.354000 audit[5696]: CRED_ACQ pid=5696 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:56:15.833196 sshd[5693]: pam_unix(sshd:session): session closed for user core
Sep 13 00:56:15.834000 audit[5693]: USER_END pid=5693 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:56:15.834000 audit[5693]: CRED_DISP pid=5693 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:56:15.843039 kernel: audit: type=1106 audit(1757724975.834:611): pid=5693 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:56:15.843438 kernel: audit: type=1104 audit(1757724975.834:612): pid=5693 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:56:15.847001 systemd[1]: sshd@25-161.35.238.92:22-147.75.109.163:39448.service: Deactivated successfully.
Sep 13 00:56:15.849146 systemd-logind[1291]: Session 26 logged out. Waiting for processes to exit.
Sep 13 00:56:15.849697 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 00:56:15.850669 systemd-logind[1291]: Removed session 26.
Sep 13 00:56:15.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-161.35.238.92:22-147.75.109.163:39448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:56:20.840385 systemd[1]: Started sshd@26-161.35.238.92:22-147.75.109.163:51972.service.
Sep 13 00:56:20.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-161.35.238.92:22-147.75.109.163:51972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:56:20.841803 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 13 00:56:20.841876 kernel: audit: type=1130 audit(1757724980.839:614): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-161.35.238.92:22-147.75.109.163:51972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:56:20.986256 sshd[5713]: Accepted publickey for core from 147.75.109.163 port 51972 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA
Sep 13 00:56:20.984000 audit[5713]: USER_ACCT pid=5713 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:56:20.991494 kernel: audit: type=1101 audit(1757724980.984:615): pid=5713 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:56:20.990000 audit[5713]: CRED_ACQ pid=5713 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:56:20.995607 kernel: audit: type=1103 audit(1757724980.990:616): pid=5713 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:56:20.996450 sshd[5713]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:56:20.999492 kernel: audit: type=1006 audit(1757724980.990:617): pid=5713 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Sep 13 00:56:20.990000 audit[5713]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe1fe3e510 a2=3 a3=0 items=0 ppid=1 pid=5713 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:56:21.005468 kernel: audit: type=1300 audit(1757724980.990:617): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe1fe3e510 a2=3 a3=0 items=0 ppid=1 pid=5713 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:56:21.014695 systemd-logind[1291]: New session 27 of user core.
Sep 13 00:56:21.016060 systemd[1]: Started session-27.scope.
Sep 13 00:56:20.990000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:56:21.024414 kernel: audit: type=1327 audit(1757724980.990:617): proctitle=737368643A20636F7265205B707269765D
Sep 13 00:56:21.032000 audit[5713]: USER_START pid=5713 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:56:21.039423 kernel: audit: type=1105 audit(1757724981.032:618): pid=5713 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:56:21.038000 audit[5716]: CRED_ACQ pid=5716 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:56:21.044447 kernel: audit: type=1103 audit(1757724981.038:619): pid=5716 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:56:21.539745 sshd[5713]: pam_unix(sshd:session): session closed for user core
Sep 13 00:56:21.539000 audit[5713]: USER_END pid=5713 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:56:21.546277 systemd-logind[1291]: Session 27 logged out. Waiting for processes to exit.
Sep 13 00:56:21.548714 kernel: audit: type=1106 audit(1757724981.539:620): pid=5713 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:56:21.548808 kernel: audit: type=1104 audit(1757724981.539:621): pid=5713 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:56:21.539000 audit[5713]: CRED_DISP pid=5713 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Sep 13 00:56:21.547371 systemd[1]: sshd@26-161.35.238.92:22-147.75.109.163:51972.service: Deactivated successfully.
Sep 13 00:56:21.548342 systemd[1]: session-27.scope: Deactivated successfully.
Sep 13 00:56:21.550857 systemd-logind[1291]: Removed session 27.
Sep 13 00:56:21.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-161.35.238.92:22-147.75.109.163:51972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:56:22.626624 systemd[1]: run-containerd-runc-k8s.io-b2d7a1d81121a664219eb883213246ee5a3d80d6ac361ab333cef92789faf898-runc.La42xh.mount: Deactivated successfully.
Sep 13 00:56:22.748967 systemd[1]: run-containerd-runc-k8s.io-df951fd2f48ec9d33b9b0137a1c3e2681a69e26a2cf3866709ed08fa01fcd896-runc.Ij30iV.mount: Deactivated successfully.